Elon Musk’s social media platform X sued Minnesota on Wednesday over a state law that bans people from using AI-generated “deepfakes” to influence an election, arguing that the law violates free-speech protections.
The law replaces social media platforms’ judgment about the content with the judgment of the state and threatens criminal liability if the platforms get it wrong, according to the lawsuit that was filed in Minnesota federal court.
“This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary,” X said in its complaint. Musk has described himself as a free speech absolutist and he did away with Twitter’s content moderation policy when he bought the company in 2022 and renamed it X.
Your intuition about this is not accurate. 24GB is more than enough for running local image generation and training a LoRA. You also don’t need an insane amount of data; a LoRA is generally trained on fewer than 100 images, usually around 15-30.
To do deepfakes, you’re not training an entirely new image model from scratch (that part is only within reach of big organizations); you’re just adapting an existing model that is publicly available. You can do this for free with open source tools. It is within reach of anyone with high-end gaming hardware, or anyone willing to pay for some cheap cloud compute.
Further, LoRAs for most celebrities and famous people have already been trained and can be found on the internet for free, so the training step is likely not even necessary in most cases.
If this is the case, then images generated with the same expression in the same light will not look out of place.
But you will still be able to generate images with other lighting and facial expressions, even without sample images of them, because the base image model being adapted already “understands” differing facial expressions and lighting and can apply them to the subject of the LoRA. It’s the same way it can combine unrelated concepts into something “new” that wasn’t present in the training images (e.g., a painting of a zombie unicorn in the style of a specific painter).
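To make the “adapting an existing model” part concrete: the core of a LoRA is just adding a trainable low-rank update on top of a frozen weight matrix. Here’s a minimal plain-Python sketch of that math with toy sizes and made-up numbers (no real model or training loop involved):

```python
# Toy illustration of the LoRA idea: instead of retraining a full weight
# matrix W, you train two small low-rank factors A and B and add their
# product on top of the frozen W. All shapes/values here are toy examples.

def matmul(M, N):
    # naive matrix multiply, fine for these tiny toy matrices
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matadd(M, N, scale=1.0):
    return [[M[i][j] + scale * N[i][j] for j in range(len(M[0]))]
            for i in range(len(M))]

d, rank, alpha = 4, 1, 2.0   # rank << d is what keeps the adapter tiny

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weights
B = [[0.0] for _ in range(d)]        # "up" factor (d x rank), zero-initialized
A = [[0.5, 0.0, 0.0, 0.0]]          # "down" factor (rank x d)

# effective weights = W + (alpha / rank) * (B @ A)
delta = matmul(B, A)
W_eff = matadd(W, delta, scale=alpha / rank)

# with B zero-initialized, the adapter starts out as a no-op:
assert W_eff == W

# the adapter trains d*rank*2 numbers instead of d*d, which is why
# a LoRA file is tiny compared to the base model
print(d * rank * 2, "adapter params vs", d * d, "full params")
```

This is also why so little data and VRAM suffice: only the small A and B factors get gradients, while the big base weights stay frozen.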
Yeah I’ll be honest - I understand neural networks, but I don’t really understand any actual implementation of them.
It’s good to know that 24GB is big enough for something though, so maybe I can find AMD support to learn LLM shit anyway.
But thanks for that information; it makes this a bit more… uncomfortable in context.
AMD GPUs are well supported by many LLM frameworks. I’d recommend ollama.
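For what getting started actually looks like, here’s a rough sketch of the ollama workflow (the model tag below is just an example; check ollama’s docs for current models and AMD/ROCm support details):

```shell
# Install ollama (the project's documented Linux install script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a model; an 8B model fits easily in 24GB of VRAM
ollama pull llama3.1:8b
ollama run llama3.1:8b "Explain what a LoRA is in one sentence."
```

ollama ships with ROCm builds for supported AMD cards, so there’s no separate framework setup to fight through before you can start experimenting.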