Artificial Intelligence and the Copyright Question
Generative AIs are blowing up. Ever since OpenAI launched ChatGPT back in November, it feels like every company is rushing to integrate the technology into its products. Microsoft overhauled Bing with a focus on AI and Adobe launched Firefly, all while Google, normally a leader in this kind of thing, is trying to play catch-up with Bard. The AI wars have begun, and there doesn't seem to be a clear winner right now.
But the biggest question, one that could ultimately decide the fate of these AIs, is copyright. All of the big generative AI models are trained on copyrighted material, and whether that counts as fair use is still up for debate. The law is often slow to adapt, especially when it comes to technology, and this is no exception. Companies have insisted that it's perfectly fine, but they're also the ones who stand to profit from it.
Artists, on the other hand, have overwhelmingly opposed generative AI, and for good reason. Not only does it threaten their livelihoods by potentially automating their jobs away, but the models themselves are trained on their work without their consent. They're understandably angry and frustrated when companies push these technologies forward without reflecting on, or caring about, the implications.
I have no idea how this will play out, but my personal opinion is that training on copyrighted work should not be considered fair use. These models are fundamentally different from, say, a human brain, because they lack any real understanding or creativity; at the end of the day, they're just really good at guessing.
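To make the "guessing" point concrete, here's a minimal sketch using the open-source Hugging Face transformers library and GPT-2 (my choice purely for illustration; the commercial models are vastly larger, but they rest on the same next-token-prediction idea). Given a prompt, all the model produces is a probability distribution over which token comes next.

```python
# Minimal sketch of next-token prediction with GPT-2 (illustrative only;
# commercial models are far bigger but "guess" the next token the same way).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The starving artist sold her last"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output: a probability for every possible next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(probs, k=5)

for p, tok in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([tok.item()])!r}: {p.item():.3f}")
```

Everything the model "writes" comes out of repeating that one step, picking a token from the distribution and feeding it back in, which is why I struggle to see it as anything more than very sophisticated guessing.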
OpenAI in 2023 feels like Napster in 2001: even if it gets sued out of existence, it has already set the ball rolling. And it feels like nothing can slow that down.