• tripartitegraph [comrade/them]@hexbear.net
    8 hours ago

    I should have been more precise, but this is all in the context of news about a cutting-edge LLM using a fraction of the cost of ChatGPT, and comments calling it all “reactionary autocorrect” and “literally reactionary by design”. My issue is really with the overuse of the term “AI”, but I didn’t feel like explaining the difference between a GPT and deep kernel learning or graph neural networks, which have been used for drug and material discovery. Peppersky’s comment came off as very anti-intellectual to me, which I hate to see amongst “leftists”.

    • piggy [they/them]@hexbear.net
      7 hours ago

      I should have been more precise, but this is all in the context of news about a cutting-edge LLM using a fraction of the cost of ChatGPT, and comments calling it all “reactionary autocorrect” and “literally reactionary by design”.

      I disagree that it’s “reactionary by design”. I agree that its usage is 90% reactionary. Many companies are effectively trying to use it in a way that reinforces their deteriorating status quo. I work in software, so I constantly see people calling this shit a magic wand for problems of the falling rate of profit and the falling rate of production. I’ll give you an extremely common example that I’ve seen across multiple companies and industries.

      Problem: Modern companies do not want to be responsible for the development and education of their employees. They do not want to pay for the development of well-functioning, specialized tools for the problems their company faces. They see it as a money and time sink. This often presents itself as:

      • missing, incomplete, incorrect documentation
      • horrible time wasting meeting practices

      I’ve seen the following be pitched as AI Bandaids:

      Proposal: push all your documentation into a RAG LLM so that users simply ask the robot and get what they want

      Reality: The robot hallucinates steps that aren’t in the technical processes. Attempts to get the robot to correct this end with it sticking to marketing-style vagueness that isn’t even grounded in how the company actually works (things as simple as the robot assuming how a process/team/division is organized rather than the reality). Attempts to simply use it as a semantic search index end up linking back to the real documentation, which is garbage to begin with and doesn’t actually solve anyone’s real problems.
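      For anyone unfamiliar with what “push all your documentation into a RAG LLM” means mechanically, here’s a toy sketch of the retrieval half. A bag-of-words counter stands in for a real embedding model, there is no actual LLM call, and the doc chunks are made up — the point is just that the model only ever sees whatever chunk the retriever hands it, so garbage docs in means garbage answers out:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over the term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    # The "R" in RAG: pick the doc chunk most similar to the query.
    return max(chunks, key=lambda c: cosine(embed(query), embed(c)))

docs = [
    "Deploys go through the release-queue channel; ping the on-call first.",
    "Expense reports are filed in Workday by the 5th of each month.",
]
context = retrieve("how do deploys work for my service?", docs)
prompt = f"Answer using ONLY this context:\n{context}\n\nQ: how do deploys work?"
# `prompt` (retrieved context + question) is what gets sent to the LLM.
# If the docs are wrong, stale, or missing, the model fills the gap itself --
# i.e., it hallucinates.
print(context)
```

      A production setup swaps in a real embedding model and vector store, but the failure mode is the same: retrieval can only surface documentation that actually exists and is actually correct.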

      Proposal: We have too many meetings and spend ~4 hours a day on Zoom. Nobody remembers what happens in the meetings, nobody takes notes; it’s almost like we didn’t have them at all. We are simply not good at running working meetings, and they’re just chat sessions where the topic is the project. We should use AI features to generate summaries of our meetings.

      Reality: The AI summaries cannot capture action items correctly, if at all. The summaries are vague and mainly produce metadata rather than notes on important decisions and plans. We are still in meetings for 4 hours a day, but now we also copypasta useless AI summaries all over the place.

      Don’t even get me started on Copilot and code-generation garbage, or on making “developers productive”. It all boils down to a million-monkeys problem.

      These are very common scenarios I’ve seen that ground the use of this technology in inherently reactionary patterns of social reproduction. By the way, I do think DeepSeek and Doubao are an extremely important and necessary step, because they destroy the status quo of Western AI development. AI in the West is made to be inefficient on purpose because it limits competition. The fact that you cannot run models locally, due to their incredible size and compute demands, is a vendor lock-in feature that ensures monetization channels for Western companies. The PayGo model bootstraps itself.

      • tripartitegraph [comrade/them]@hexbear.net
        7 hours ago

        I think we agree that LLMs like ChatGPT and Copilot largely will be (and are being) used to discipline labor, and that this is reactionary. But your comment feels more like a list of gripes with LLMs than an actual response to mine. DKL, GNNs, and other machine learning architectures ARE being used in drug and material discovery research; I just didn’t feel like explaining the difference between that and the popular conception of “AI” to peppersky, given how flippant and troll-y their comments were. We should push back against anti-intellectualism in our spaces, and that’s all I was trying to do.

        • piggy [they/them]@hexbear.net
          7 hours ago

          I agree that anti-intellectualism is bad, but I wouldn’t necessarily consider being AI-negative by default a form of anti-intellectualism. It’s the same as people who are negative on space exploration: a symptom of a situation where there seems to be infinite money for fads/scams/bets — things with limited practical use in people’s lives — and ultimately not enough to support people.

          That’s really where I see those arguments coming from. AI is quite honestly a frivolity in a society where housing is a luxury.