Of course LLMs aren’t a simulation of consciousness with the same abilities as a human. The idea is that if a model were trained first on Marxist theory and history before taking in more information through that perspective, there could be a point where it could be used to simulate economic models useful for economic planning. It could be used to simulate contradictions and formulate strategies to navigate them in organizing spaces. It could be used for propaganda purposes: if an average person asks it questions, it would default to discussing the topic from a revolutionary angle. If some American goes on Deepseek to ask how to convince their boss to give them a raise, Deepseek should default to teaching the person how to unionize their workplace instead of just helping them form a good argument to convince the boss. There are a lot of use cases for an LLM trained this way, and this type of work would pave the way for greater advancements as the technology matures and we inevitably get closer to a science-fiction understanding of AI, which is obviously not what an LLM is.
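To make that concrete, here is a minimal sketch of what the “trained first on Marxist theory and history” step could look like in practice, as continued pretraining of a small open checkpoint on a domain corpus. The checkpoint name, the corpus file theory_corpus.txt, and the hyperparameters are all placeholder assumptions for illustration, not any real project:

```python
# Hypothetical sketch: continued pretraining of a small causal LM on a
# domain corpus, so that later instruction tuning inherits that "perspective."
# "distilgpt2" and "theory_corpus.txt" are placeholders, not a real setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# One plain-text file standing in for the theory/history corpus.
corpus = load_dataset("text", data_files={"train": "theory_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives standard next-token (causal) language-modeling labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="theory-first-lm",
                         per_device_train_batch_size=4,
                         num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=train_set,
        data_collator=collator).train()
```

Whether training order actually bakes in a durable “perspective” is an open empirical question; the point is just that the mechanics are mundane.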
There are a lot of leftists here who have a reactionary stance on this technology because of the way it is being used by capitalists, in the same way that anarchists have a reactionary stance on the state because of the way it is being used by capitalists. My wishcasting fan fiction about a “good” AI existing one day in the future, as opposed to the 99% more common idea that an AI of this type, if ever developed, would kill us all, is obviously bloomer cope. We’ll be long dead before the technology gets there, because the great minds of the left take a post about a Marxist-Leninist dialectical materialist bot as a serious analysis of humanity’s current technological progress and feel the need to critique it.
Edit: this post isn’t directed at the person I am responding to in particular; there has just been an ongoing undercurrent around this issue, which is what I am speaking to more broadly.
I think computers themselves are already an under-utilized tool for economic planning and coordination. Perhaps LLMs (or some other sort of trained neural net) have a role in that, though the way they spit out answers without any way to go back and follow the steps they took to get there makes them a bit unreliable for economic modeling. Honestly, I’ve seen a pretty compelling case put forward that all we really need is some open-source algebra, and a dedicated network for incorporating worker feedback and real-time data; a minimal sketch of the kind of calculation I mean is below.
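As a toy illustration of that “open-source algebra” approach (assuming a standard Leontief input-output model; the coefficients and demand figures below are made up, not from any real plan):

```python
# Leontief input-output planning in one linear solve: A[i][j] is the
# amount of good i consumed to produce one unit of good j, d is final
# demand, and the required gross output x satisfies (I - A) x = d.
import numpy as np

A = np.array([[0.1, 0.3],     # made-up technical coefficients
              [0.2, 0.1]])    # (e.g., two goods: steel and grain)
d = np.array([100.0, 200.0])  # made-up final demand for each good

x = np.linalg.solve(np.eye(2) - A, d)
print(x)  # gross output of each good needed to meet final demand
```

Worker feedback and real-time data would then amount to updating A and d between planning rounds; the algebra stays this simple, and it is the data network around it that is the hard part.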
And, I’m… skeptical about the effectiveness of online messaging in general. It’s good for getting ideas out there, but real organizing happens offline, between people. Ideally, we want people to be able to recognize, analyze, and work through contradictions themselves, rather than relying on the computer to hand them answers.
I generally agree with all of this as things stand now, especially the last part with regard to organizing.
I’m imagining two concurrent timelines: one where machine learning and related technology continues to develop at an increasingly rapid pace, and another where westerners are under the heel of capitalism, increasingly desperate for change, but more or less alienated from revolutionary theory and practice, with their settler/colonial/fascist base ideologies preventing them from accepting the solutions to their problems.
Playing out that scenario, I could foresee a time when the technology (all machine learning, neural networks, “AI,” not LLMs in particular) has advanced to the point that it is more useful than the average American leftist at finding solutions to American problems, because American leftists are inhibited by the aforementioned ideologies and show no signs of letting them go. Many are doubling down these days.
Even now, the most talented organizers I know are mostly bogged down by the reproductive labor of keeping organizations afloat. If even a third of that could be offloaded to machines and just touched up by humans, it would save hundreds of hours a year that could go back into human-to-human interaction.
but they don’t simulate anything, they’re word calculators
How are you defining simulation? With currently available platforms we can already generate images, videos, 3D models, and text, interpret data, and train models on particular data to base any of that generation on.
dumping out Scrabble tiles doesn’t simulate systems.
AI is already being used to assist simulation. One team trained robots by taking photos of a room and letting AI-built simulations train the robot’s movements virtually, instead of having it physically repeat the tasks in a real space. A quick search will yield many examples of work being done now that will enable, in 5-10 years, the types of simulations you don’t see today.
that sounds like not an LLM
I didn’t say it was an LLM; other people brought up LLMs in response to my comment.
You are confusing the wider field of machine learning, which has been developing in strides throughout the 2010s (and earlier, really) without the media overhyping it to the extent that people think machines can think now, with LLMs, which birthed the media hype cycle that is the subject of criticism in this thread.
My original comment was about AI; other people brought up LLMs in response to that. I’m not confusing anything.
if you made a comment about AGI on a post about an LLM and only said “AI”, there is zero context clue for us to think you meant a different topic.
My response to the OP was about a fictional communist AI to save humanity, clearly riffing on the OP’s title, which prompted all the debate perverts to come out and make sure everyone understands that LLMs aren’t actually HAL 9000.
Then you couldn’t have made it any more confusing than by talking about LLMs and economic model simulation in a single sentence.
Is it confusing, or are you just so locked in on your special interest that you are ignoring the context? I made a comment about a future CPC AI that I have imagined for fun; someone responded to inform me about how LLMs work, which isn’t what I was talking about; I responded saying that of course that is true, then elaborated on the idea they had misunderstood in my original comment.
You even left out the part where I clarified that LLMs as they stand are part of paving the way toward the idea I brought up. If something like what I have imagined for kicks is ever made, LLMs will certainly be part of its development.
I’m sure I could have been more concise, but considering you used the word “gaslighting” to describe what you feel my comment was, it seems like you’re just reaching heavily for the outcomes you seek.