• المنطقة عكف عفريت@lemmy.world
    5 months ago

    As someone who works with LLMs, they shouldn’t…

    You still need your chatbot to stick to business rules and act like a real customer service rep, and that’s incredibly hard to accomplish with generative models: you can’t be there to evaluate the generated answers, and the chatbot can go off on a tangent and suddenly start giving you free therapy when you originally went in to order pizza.
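
    To make that concrete: because you can’t review every generated answer, the deterministic checks have to live outside the model. A naive sketch of such a pre-filter (all names and topic lists here are made up for illustration, not any real product’s API):

    ```python
    # Hypothetical guardrail: screen a user message with plain deterministic
    # rules BEFORE it ever reaches the generative model, so off-topic requests
    # (e.g. free therapy at the pizza shop) get routed away from the LLM.
    ALLOWED_TOPICS = {"order", "pizza", "delivery", "refund"}
    BLOCKED_TOPICS = {"therapy", "medical", "legal"}

    def screen_message(message: str) -> str:
        """Return a routing decision for a raw user message."""
        words = set(message.lower().split())
        if words & BLOCKED_TOPICS:
            return "escalate_to_human"   # never let the model improvise here
        if words & ALLOWED_TOPICS:
            return "send_to_model"       # in scope, let the LLM draft a reply
        return "ask_clarification"       # unknown intent, stay conservative

    print(screen_message("I think I need therapy"))   # escalate_to_human
    print(screen_message("change my pizza order"))    # send_to_model
    ```

    A real deployment would use classifiers rather than keyword sets, but the shape is the same: the business rules are enforced by code around the model, not by the model itself.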

    Don’t get me wrong, they’re great for many applications with a human in the loop. They can help customer service reps (as one example) work more effectively, provide more help to users, and dedicate more time to those who still need a human to solve their issues.

    Companies are already replacing some workforce with LLMs.

    My opinion right now is that companies want you to believe they are 100% capable of replacing humans, but people in upper management never listen to the damn developers down in the basement (aka me), so they have unrealistic expectations of AI coupled with an unending desire for money and success.

    They are replacing them because they are greedy cunts, not because they are replaceable.

    • Tar_Alcaran@sh.itjust.works
      5 months ago

      LLMs are excellent at producing high-volume, low-quality material. And it’s a sad fact of life that a lot of companies are perfectly willing to use low quality material in their work.

    • yarr@feddit.nl
      5 months ago

      While I don’t agree with anti-AI people, the fact that some AI-generated content is flawed doesn’t imply that all AI content is of bad quality.

      Companies are already replacing some workforce with LLMs.

      While I understand that not everyone shares the same views about AI, it’s important to recognize that just because some AI-generated content might have flaws, it shouldn’t lead us to believe that every piece created by AI is subpar. In fact, numerous companies are actively embracing the use of LLMs to replace their workforces. From astronauts to circus clowns, LLMs are taking over roles once reserved for humans. Nowadays, you can even find LLMs crafting the perfect soufflé at Michelin star restaurants, performing heart surgery, and even serving as head coaches for professional sports teams. The sky is no longer the limit, as LLMs have found a way to transcend it - and it’s only a matter of time before they take on the role of Santa Claus. Merry Christmas from your new AI overlords!

    • XEAL@lemm.ee
      5 months ago

      You still need your chatbot to stick to business rules and act like a real customer service rep, and that’s incredibly hard to accomplish with generative models

      Isn’t that what, for instance, OpenAI’s embeddings are for?

      My opinion right now is that companies want you to believe they are 100% capable of replacing humans

      Probably, but at the moment they can only do it partially.

      They are replacing them because they are greedy cunts, not because they are replaceable.

      I partially agree. I mean, they are greedy cunts, but some tasks, like translating to and from certain languages, can easily be done even with the free ChatGPT demo with better results than Google Translate, so human translators are unfortunately becoming quite replaceable.

      • المنطقة عكف عفريت@lemmy.world
        5 months ago

        Do you mean the embeddings? https://platform.openai.com/docs/guides/embeddings/what-are-embeddings

        If so:

        Word embeddings and embedding layers are there to represent data in a way the model can use to generate text. That’s not the same as the model acting like a human. It may sound human in text or even speech, but its reasoning skills are questionable at best. You can try to make it stick to your company policy, but at this level it will never be able to operate under logic unless you hardcode that logic into it, and that isn’t really possible with these models in that sense of the word; after all, they just predict the most likely next word. You’d have to wrap them in a shit ton of code and safety nets.
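
        For what embeddings *are* good for in this setup: retrieving the approved policy text most relevant to a question, so the model (or a human) answers from it instead of improvising. A minimal self-contained sketch, using a toy bag-of-words vector as a stand-in for a real embedding model (the snippets and the `embed` function are illustrative assumptions, not OpenAI’s API):

        ```python
        # Toy retrieval over embedded policy snippets: embed the query, embed each
        # snippet, and return the snippet with the highest cosine similarity.
        import math
        from collections import Counter

        POLICY_SNIPPETS = [
            "Refunds are available within 30 days of purchase.",
            "Pizza orders can be modified until the kitchen confirms them.",
            "We do not provide medical or therapeutic advice.",
        ]

        def embed(text: str) -> Counter:
            """Toy embedding: word counts. A real system would call an embedding model."""
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def most_relevant(query: str) -> str:
            """Retrieve the policy snippet closest to the user's question."""
            q = embed(query)
            return max(POLICY_SNIPPETS, key=lambda s: cosine(q, embed(s)))

        print(most_relevant("Can I change my pizza order"))
        ```

        Note that the embeddings only find relevant text; nothing here constrains what the generator then says about it. That enforcement is still the surrounding code’s job.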

        GPT models require massive amounts of data, so they are only that good for languages with massive corpora or large Wikipedias. If your language doesn’t have good content on the internet, or freely available digitized content to train on, a machine still can’t replace translators (yet; no idea how long it will take until transfer learning is good enough to translate low-resource languages at the quality of English–French, for example).

    • unalivejoy@lemm.ee
      5 months ago

      It’s just like a real person. The only difference (afaik) is that you can’t as easily tell an AI to get back to work.