• Max-P@lemmy.max-p.me (+90/-1) · 3 months ago

    OpenAI: Here’s a new model that can think in steps and reason about things!

    User: How did you conclude this is the correct answer?

    OpenAI: No! Not like that! banhammer

      • Eril@feddit.org (+39/-1) · 3 months ago

        When I first heard about it, I thought it was some open-source project because of the name. :(

        • Womble@lemmy.world (+13/-1) · 3 months ago

          It was, originally. GPT-2 was eventually released after some pushback from OpenAI, and the models prior to that were fully released immediately. It's been apparent for quite a while that OpenAI has been transitioning from a non-profit org interested in pushing technology forward to a VC-backed, monopoly-seeking company. The big Altman putsch/counter-putsch was just the solidifying of that.

  • spacecadet@lemm.ee (+77/-2) · 3 months ago

    Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts,

    I want to get rid of this shit so bad. If another junior dev submits a shit MR they can't explain because they had ChatGPT write it, I'm going to explode. Also, the number of AI executives we have in charge of our manufacturing company is somehow higher than the number we have in charge of manufacturing, and guess what?! They're all MBAs who haven't written a goddamn line of code in their lives but have become professional "prompt engineers".

    • yemmly@lemmy.world (+43/-1) · 3 months ago

      Every time I hear someone talking up prompt engineering, I feel like I should say something. But I don’t.

      • elrik@lemmy.world (+28) · 3 months ago

        “Prompt engineering” must be the easiest job to replace with AI. You can simply ask an LLM to generate and refine prompts.
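The quip above can be sketched in a few lines of Python. This is a hypothetical illustration, not any real API: `refine_prompt` loops a draft prompt back through a model asking for a rewrite, and `toy_llm` is an invented stand-in for an actual completion endpoint.

```python
# Hypothetical sketch of "prompt engineering via LLM": feed a draft prompt
# back through a model and ask it to refine the prompt. The `llm` callable
# is a stand-in for any real completion API (invented for illustration).
def refine_prompt(llm, draft, rounds=3):
    prompt = draft
    for _ in range(rounds):
        prompt = llm(f"Rewrite this prompt to be clearer and more specific:\n{prompt}")
    return prompt

# Toy stand-in model for demonstration; a real one would call an API.
def toy_llm(text):
    return text.splitlines()[-1].strip() + " (be concise; cite sources)"
```

With a real model plugged in as `llm`, the loop does the "prompt engineer's" job automatically.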

      • vrighter@discuss.tchncs.de (+18) · 3 months ago

        I’ve met someone employed as a dev who not only didn’t know that the compiler generates an executable file, but actually spent a month trying to change the code without noticing that zero of their code changes had any effect whatsoever (because they kept running an old build of mine).

        • DominusOfMegadeus@sh.itjust.works (+10) · 3 months ago

          I would be really interested in learning a language. The AI-assistance method actually meshes very well with my learning style. I would never submit anything to anyone that I was not certain was good, working code, though. My brain wouldn’t let me do it. Now I just need to choose a language.

          • Failx@sh.itjust.works (+16/-1) · 3 months ago

            I applaud your ethics. But you don’t know how close you are to falling from grace.

            Just yesterday I had to remove perfectly tested, sensible, non-AI code from our production system, not because it did not do what the author intended, but because what the author intended was flawed. And this is exactly what AI also cannot teach you right now: taking a step back to realize that your code might be right, but your intentions are not.

            Definitely keep at it. But be aware you will do the wrong things even with perfectly working code.

            • SlopppyEngineer@lemmy.world (+4) · 3 months ago

              Yeah, the code can work flawlessly in test, but after a few months in production there are a lot more records or files and the code starts to have issues.
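That failure mode is easy to sketch. Below is a hypothetical Python example (function names and data invented): a duplicate check that passes a small test fixture but degrades quadratically once production data grows, next to a version that scales.

```python
# Hypothetical illustration: code that sails through a 100-row test fixture
# but falls over after months of production growth.
def find_duplicates_slow(records):
    # O(n^2): a linear scan inside a loop. Invisible at test size,
    # painful at a million rows.
    dupes = []
    for i, r in enumerate(records):
        if r in records[:i] and r not in dupes:
            dupes.append(r)
    return dupes

def find_duplicates_fast(records):
    # O(n): same answer, but set lookups keep it scaling with the data.
    seen, dupes = set(), set()
    for r in records:
        if r in seen:
            dupes.add(r)
        seen.add(r)
    return sorted(dupes)
```

Both versions pass the same unit tests; only the growth curve separates them, and that only shows up in production.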

  • Chozo@fedia.io (+12/-1) · 3 months ago

    I don’t understand why it’s so hard to sandbox an LLM’s configuration data from its training data.

    • MoondropLight (+10) · 3 months ago

      Because it’s all one thing. The promise of AI is that you can basically throw anything at it, and you don’t need to understand exactly how or why it makes the connections it does; you just adjust the weights until the output kinda looks alright.

      There are many structural hacks used to get better results (and, in this case, some form of reasoning), but ultimately they mostly rely on connecting multiple nets together and retrying queries and such. There are no human-understandable settings. A neural network is basically one input and one output (unless you’re training it).
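The "adjust the weights until it kinda looks alright" loop can be shown at toy scale. This is purely illustrative (one weight, invented data, gradient descent on mean squared error); real LLMs have billions of weights, but the point stands: the loop produces a number that works, not a human-readable setting anyone could sandbox or edit.

```python
# Toy training loop: one input, one output, a single weight w,
# nudged by gradient descent until predictions look alright.
def train(xs, ys, lr=0.01, steps=1000):
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error for the model y_pred = w * x
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Data secretly generated by y = 2x; training recovers w close to 2.0,
# but nothing along the way is a configurable, human-understandable knob.
learned_w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Even at this scale the only "setting" that emerges is the learned weight itself, which is exactly why configuration and training data can't be cleanly separated.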