- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train its LLMs, or large language models.
If you release data into the public domain (aka, if it’s indexable by a search engine), then copying that data isn’t stealing - it can’t be, since the data was already public in the first place.
This is just some lawyer trying to make a name for themselves.
Just because the data is “public” doesn’t mean it was intended to be used in this manner. Some of the data was even explicitly protected by GPL licensing or similar.
But GPL licensing indicates that “If code was put in the public domain by its developer, it is in the public domain no matter where it has been” - so, likewise for data. If anyone has a case against OpenAI, it’d be whatever platforms they scraped - and ultimately those platforms would open their own, individual lawsuits.
That’s not at all how the GPL works…
Can you expand on that? I’m not very familiar with the legal aspects of the GPL.
If you release code under the GPL, and I modify it, I’m required to release those modifications publicly under the GPL as well.
So if content is under the GPL and used as training data, how far into the process of training/fine-tuning is something considered “modification”? For example, if I scrape a bunch of blog posts and just use tools to analyze the language (something like the sketch below), is that considered “modification”? And what is the minimum OpenAI should do (or should have done) here: does it stop at making the code for processing the data public, or the entire code base?
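To make that concrete, here is a minimal, hypothetical sketch of what I mean by “scrape and analyze the language” - the URL is a placeholder and the helper names are made up; it only derives word counts from the text rather than reproducing or redistributing any of it:

```python
# Hypothetical sketch: fetch one blog post and compute aggregate word statistics.
# The URL is a placeholder; nothing from the scraped text is redistributed,
# only derived counts are produced.
import re
from collections import Counter

import requests


def fetch_post(url: str) -> str:
    """Download one blog post and crudely strip the HTML tags."""
    html = requests.get(url, timeout=10).text
    return re.sub(r"<[^>]+>", " ", html)


def word_frequencies(text: str) -> Counter:
    """Count word occurrences - a purely statistical 'analysis' of the language."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)


if __name__ == "__main__":
    post = fetch_post("https://example.com/some-blog-post")  # placeholder URL
    print(word_frequencies(post).most_common(10))
```

Is producing statistics like that a “modification” of the GPL'd text, or something else entirely?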
I’m not sure. And I’m not sure there’s legal precedent for that either.
That’s why I don’t have a problem with any of these lawsuits; they give us clarity on the legal aspects, whichever way it goes.