You can also take a model trained on all kinds of data and tell it “generate ten billion articles of fascist knob-gobbling” and then train your own model on that data.
It’ll be complete AI slop, of course, but it’s not like you cared about truth or accuracy in the first place.
That’s a real-world issue: AIs training on each other’s output and devolving because of it. At some point, vendors infringing on users’ content and training their AIs on it will end up leaving those models worse off.
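Here’s a toy sketch (Python, and emphatically not anyone’s actual training pipeline) of why generation-on-generation training degrades. Each “model” is just a Gaussian fit, and each new generation is trained only on samples from the previous generation’s fit; watch the estimated spread drift downward over generations:

```python
import numpy as np

# Toy illustration of model collapse: repeatedly fit a Gaussian to samples
# drawn from the previous generation's fit. Every generation trains only on
# the last generation's output, so estimation error compounds and the
# distribution tends to narrow over time.
rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=50)     # generation 0: "real" data
for gen in range(20):
    mu, sigma = data.mean(), data.std()             # "train" on current data
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=50)           # next gen only sees model output
```

Real models are obviously more complicated than a two-parameter Gaussian, but the mechanism is the same: every generation only learns what the previous one managed to reproduce, and the tails of the original data are the first thing to go.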