This is just a draft, best refrain from linking. (I hope we’ll get this up tomorrow or Monday. edit: probably this week? edit 2: it’s up!!) The [bracketed] stuff is links to cites.

Please critique!


A vision came to us in a dream — and certainly not from any nameable person — on the current state of the venture capital fueled AI and machine learning industry. We asked around and several in the field concurred.

AIs are famous for “hallucinating” made-up answers with wrong facts. The hallucinations are not decreasing. In fact, the hallucinations are getting worse.

If you know how large language models work, you will understand that all output from a LLM is a “hallucination” — it’s generated from the latent space and the training data. But if your input contains mostly facts, then the output has a better chance of not being nonsense.

Unfortunately, the VC-funded AI industry runs on the promise of replacing humans with a very large shell script. If the output is just generated nonsense, that’s a problem. There is a slight panic among AI company leadership about this.

Even more unfortunately, the AI industry has run out of untainted training data. So they’re seriously considering doing the stupidest thing possible: training AIs on the output of other AIs. This is already known to make the models collapse into gibberish. [WSJ, archive]

There is enough money floating around in tech VC to fuel this nonsense for another couple of years — there are hundreds of billions of dollars (family offices, sovereign wealth funds) desperate to find an investment. If ever there was an argument for swingeing taxation followed by massive government spending programs, this would be it.

Ed Zitron gives it three more quarters (nine months). The gossip concurs with Ed on this being likely to last for another three quarters. There should be at least one more wave of massive overhiring. [Ed Zitron]

The current workaround is to hire fresh Ph.Ds to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

AI is holding up the S&P 500. This means that when the AI VC bubble pops, tech will drop. Whenever the NASDAQ catches a cold, bitcoin catches COVID — so expect crypto to go through the floor in turn.

  • Sailor Sega Saturn@awful.systems
    link
    fedilink
    arrow-up
    5
    ·
    edit-2
    7 months ago

    What is the result of subtracting the floating point number with hexadecimal representation 488299c6 from the one with hexadecimal representation 4cbe9a58?

    Sooo… it turns out Bing Chat and Gemini can’t do floating point math right now :D

    And don’t tell me the question is vague or misleading until the chatbots can actually recognize that or return an error code, this isn’t exactly the sort of thing people are feeding into integration tests since they’re indeterministic as heck and generally computed by some company losing tons of money somewhere off-site.

    (Not sharing the result, because I still stubbornly refuse to spread AI output whatsoever)

    • froztbyte@awful.systems
      link
      fedilink
      arrow-up
      7
      ·
      7 months ago

      The really fantastic part about this is that it’s long been possible to get reliable performance out of irc chatbots for this sort of thing, with people pulling all kinds of nasty extractor shit

      “It’s just people badly providing input” was the most big-brain take I’ve seen in a while. Honestly reads like someone in the industry and hates their users, of the “my code is great, it’s these damn idiots that don’t know how to use the system!” variety

      • mlen@awful.systems
        link
        fedilink
        arrow-up
        6
        ·
        7 months ago

        “you’re holding it wrong” worked for iphones, so maybe it’ll work for llms too…

        • froztbyte@awful.systems
          link
          fedilink
          arrow-up
          6
          ·
          7 months ago

          the iphone did already have other utility alongside it though, so people were making use of it regardless. not excusing how apple handled that, mind, that was bullshit, but I meant that people were still motivated users

          openai’s particular flavours of this shit are still failing to find viable footholds and there’s nothing that is a so-called “killer app”, which is the other thing that really weakens its case

          but that dynamic, of programmer disregard for how people use their products… oof. less pls. return to sender. unsubscribe.