LLMs are a net-negative for society as a whole. The underlying technology is fine, but it’s far too easy for corporations to manipulate the populace with them, and people are just generally very vulnerable to them. Beyond the extremely common tendency to misunderstand and anthropomorphize them and think they have some real insight, they also delude (even otherwise reasonable) people into thinking that they are benefitting from them when they really… aren’t. Instead, people get hooked on the feeling they give them and keep coming back for their next hit (of tokens).
They are brain rot and that’s all there is to it.
can we agree that 90% of the problem with LLMs is capitalism and not the actual technology?
after all, the genie is out of the bottle. you can’t destroy them, there are open source models. even if you ban them, you’ll still have people running them locally.
can we agree that 90% of the problem with cigarettes is capitalism and not the actual smoking?
after all, the genie is out of the bottle. you can’t destroy them, there are tobacco plants grown at home. even if you ban them, you’ll still have people hand-rolling cigarettes.
it’s fucking weird how I only hear about open source LLMs when someone tries to make this exact point. I’d say it’s because the open source LLMs fucking suck, but that’d imply that the commercial ones don’t. none of this horseshit has a use case.
Frankly, yes. In a better world art would not be commodified, the economic barriers that hinder commissioning art from skilled human artists under capitalism would not exist, and generative AI recombining existing art would likely be far less problematic and harmful to artists and audiences alike.
But that is not the world we live in, so fuck GenAI and its users and promoters lmao stay mad.
no thx
can we agree that Yudkowsky is a bit of a twat?
but also that there’s a danger in letting vulnerable people access LLMs?
not saying that they should be banned, but some regulation and safety is necessary.
i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and that something needs to be done about it
my point was more about him using it to make his worst-of-both-worlds argument, where he’s simultaneously declaring that ‘alignment is FALSIFIED!’ while heavily anthropomorphizing the model to confirm his priors (a claim that would be harder to make about something like claude, which leans closer to ‘maybe’ on the question of whether it should be anthropomorphized, since it has a much more robust system), and doing it all off the back of someone’s death
yeah, we should be talking about this
just not talking with him
@Anomalocaris @visaVisa The attention spent on people who think LLMs are going to evolve into The Machine God will only make good regulation & norms harder to achieve
yeah, we need reasonable regulation now, about the real problems it has.
like making them liable for training on stolen data,
making them liable for giving misleading information, and for the damages caused by it…
things that would be reasonable for any company.
do we need regulations about it becoming skynet? too late for that, mate
I literally don’t care, AT ALL, about someone who’s too dumb not to kill themselves because of an LLM and we sure as shit shouldn’t regulate something just because they (unfortunately) exist.
It should be noted that the only person in the article to lose his life died because the police, who were explicitly told to be ready to use non-lethal means to subdue him because he was in the middle of a mental episode, gunned him down the moment they saw him coming at them with a kitchen knife.
But here’s the thrice cursed part:
“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”
Jesus Christ on a stick, that’s some thrice cursed shit.
Maybe susceptibility runs in families, culturally. Religion does, for one thing.
Yeah, if you had any awareness of how stupid and unlikeable you come across to everybody who crosses your path, I think you’d recognise that this is probably not a good maxim to live your life by.
it didn’t take me long at all to find the most recent post with a slur in your post history. you’re just a bundle of red flags, ain’t ya?
don’t let that edge cut you on your way the fuck out
" I don’t care if innocent people die if it inconvenience me in some way."
yhea, opinion dismissed