ByteDance has officially launched its latest Doubao large model, 1.5 Pro (Doubao-1.5-pro), which demonstrates outstanding comprehensive capabilities across various fields, surpassing the well-known GPT-4o and Claude 3.5 Sonnet. The release of this model marks an important step forward for ByteDance in the field of artificial intelligence. Doubao 1.5 Pro adopts a novel sparse MoE (Mixture of Experts) architecture, activating only a small subset of its parameters during pre-training. This design's innovation...
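For context on what "sparse MoE" means in the article: each token is routed to only a few of the model's expert sub-networks, so most parameters sit idle on any given forward pass. Below is a deliberately tiny from-scratch sketch of top-k gating; the expert and gate weights are random stand-ins, not anything from Doubao, and the sizes are made up for illustration.

```python
import math
import random

random.seed(0)

D, N_EXPERTS, TOP_K = 4, 8, 2  # hidden size, expert count, experts active per token

# Each "expert" is a random D x D weight matrix (a stand-in for a real FFN).
experts = [[[random.gauss(0, 0.1) for _ in range(D)] for _ in range(D)]
           for _ in range(N_EXPERTS)]
# Gating network: one weight vector per expert.
gate_w = [[random.gauss(0, 0.1) for _ in range(D)] for _ in range(N_EXPERTS)]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(x):
    """Route token x through only its TOP_K highest-scoring experts."""
    scores = softmax([sum(w * xi for w, xi in zip(row, x)) for row in gate_w])
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    norm = sum(scores[i] for i in top)  # renormalize over the chosen experts
    out = [0.0] * D
    for i in top:
        y = [sum(w * xi for w, xi in zip(row, x)) for row in experts[i]]
        out = [o + (scores[i] / norm) * yi for o, yi in zip(out, y)]
    return out, top

y, active = moe_forward([1.0, -0.5, 0.3, 0.8])
print(f"active experts: {sorted(active)} of {N_EXPERTS}")
```

Only 2 of the 8 experts run per token here; at real scale that gap between total and activated parameters is where the cost savings come from.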
This is a stupid take. I like the autocorrect analogy generally, but this veers into Luddite-ism.
Let me add, the way we’re pushed to use LLMs is pretty dumb and a waste of time and resources, but the technology has pretty fascinating use-cases in material and drug discovery.
This is mainly hype. The process of creating AI has been useful for drug discovery, but LLMs as people practically know them (e.g. ChatGPT) have not, beyond the same kind of sloppy, corner-cutting, labor-cost bullshit.
If you read a lot of the practical applications in the papers, it’s mostly publish-or-perish crap where they gush about how drug trials should be like going to cvs.com: you get a robot, you can ask it to explain something to you, and it spits out the same thing reworded 4-5 times.
They’re simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.
I should have been more precise, but this is all in the context of news about a cutting-edge LLM using a fraction of the cost of ChatGPT, and comments calling it all “reactionary autocorrect” and “literally reactionary by design”. My issue is really with the overuse of the term “AI”, but I didn’t feel like explaining the difference between a GPT and deep kernel learning or graph neural networks, which have been used for drug and material discovery. Peppersky’s comment came off as very anti-intellectual to me, which I hate to see amongst “leftists”.
I disagree that it’s “reactionary by design”. I agree that its usage is 90% reactionary. Many companies are effectively trying to use it in a way that reinforces their deteriorating status quo. I work in software, so I constantly see people calling this shit a magic wand for problems of the falling rate of profit and the falling rate of production. I’ll give you an extremely common example that I’ve seen across multiple companies and industries.
Problem: Modern companies do not want to be responsible for the development and education of their employees. They do not want to pay for well-functioning, specialized tools for the problems their company faces. They see it as a money and time sink. This often presents itself as:

- missing, incomplete, or incorrect documentation
- horrible, time-wasting meeting practices
I’ve seen the following pitched as AI Band-Aids:
Proposal: Push all your documentation into a RAG (retrieval-augmented generation) LLM so that users simply ask the robot and get what they want.
Reality: The robot hallucinates things that aren’t there in technical processes. Attempts to get the robot to correct this end with it sticking to marketing-style vagaries that aren’t grounded in how the company actually works (things as simple as the robot assuming how a process/team/division is organized rather than the reality). Attempts to simply use it as a semantic search index end up linking to the real documentation, which is garbage to begin with and doesn’t actually solve anyone’s real problems.
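The pattern being pitched is roughly the pipeline below. This is a toy sketch: retrieval here is naive word overlap instead of a real embedding index, the LLM call is stubbed out, and the document strings are invented for illustration. It shows the structural problem: the model only ever sees the top-k retrieved chunks, so whatever is missing or wrong in the docs stays missing or wrong in the answer.

```python
# Toy sketch of the "push the docs into a RAG LLM" pattern. Retrieval is
# naive word-overlap rather than a real embedding model, and the "LLM"
# completion step is stubbed out -- the point is the shape of the pipeline.

DOCS = [
    "Deploys are triggered from the release branch by the platform team.",
    "Expense reports are filed in the finance portal before month end.",
    "On-call rotation is managed in the paging tool by each team lead.",
]

def retrieve(query, docs, k=2):
    """Rank chunks by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using ONLY this context:\n{context}\n\nQ: {query}\nA:"

query = "who triggers deploys"
prompt = build_prompt(query, retrieve(query, DOCS))
print(prompt)
# An LLM would complete `prompt` here. If the docs are silent or wrong on
# the question, the model fills the gap anyway -- the hallucination step.
```

Garbage docs in, garbage (or confidently invented) answers out; the retrieval layer can’t fix documentation that was never correct.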
Proposal: We have too many meetings and spend ~4 hours a day on Zoom. Nobody remembers what happens in the meetings, nobody takes notes; it’s almost like we didn’t have them at all. We are simply not good at working meetings, and they’re just chat sessions where the topic is the project. We should use AI features to generate summaries of our meetings.
Reality: The AI summaries cannot capture action items correctly, if at all. They are vague and mainly produce metadata rather than notes of important decisions and plans. We are still in meetings for 4 hours a day, but now we also copypasta useless AI summaries all over the place.
Don’t even get me started on Copilot and code-generation garbage. Or on making “developers productive”. It all boils down to a million-monkeys problem.
These are very common scenarios I’ve seen that ground the use of this technology in inherently reactionary patterns of social reproduction. By the way, I do think DeepSeek and Doubao are an extremely important and necessary step, because they destroy the status quo of Western AI development. AI in the West is made inefficient on purpose because it limits competition. The fact that you cannot run models locally, due to their incredible size and compute demands, is a vendor lock-in feature that ensures monetization channels for Western companies. The pay-as-you-go model bootstraps itself.
I think we agree that LLMs like ChatGPT and Copilot largely will be (and are being) used to discipline labor, and that is reactionary. But this feels more like a list of gripes with LLMs than an actual response to my comment. DKL, GNNs, and other machine learning architectures ARE being used in drug and material discovery research; I just didn’t feel like explaining the difference between that and the popular conception of “AI” to peppersky, given how flippant and troll-y their comments were. We should push back against anti-intellectualism in our spaces, and that’s all I was trying to do.
I agree that anti-intellectualism is bad, but I wouldn’t necessarily consider being AI-negative by default a form of anti-intellectualism. It’s the same as with people who are negative on space exploration: it’s a symptom of a situation where there seems to be infinite money for fads/scams/bets, things that have limited practical use in people’s lives, and ultimately not enough to support people.
That’s really where I see those arguments coming from. AI is quite honestly a frivolity in a society where housing is a luxury.
Luddites were actually cool and right. They didn’t organize and destroy looms because they just loved the more tedious work of non-powered looms, they destroyed them because they were the beginning of industrial capitalism and wage labor.
Just like every technological advancement. The problem isn’t the technology but how capitalism puts it to use.