• mindbleach@sh.itjust.works · 17 hours ago

      I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

      None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

      • SaraTonin@lemm.ee · 12 hours ago

        If you follow AI news you should know that it’s basically out of training data, that the returns on extra training diminish exponentially (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

        You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was shown answering, and isn’t better than before, or than other LLMs, at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.
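
        (To give a sketch of that checker loop: complete() below is a placeholder for whatever model call is involved, not any vendor’s real API, and the prompts are made up. The point is that every verification pass is a whole extra model call, which is where that 4-5x energy multiplier comes from.)

        ```python
        # Rough sketch of the draft-then-verify pattern described above.
        # complete() is a hypothetical stand-in for a chat-completion call.

        def complete(prompt: str) -> str:
            """Placeholder for a call to whatever model you're running."""
            raise NotImplementedError

        def answer_with_verifier(question: str, max_retries: int = 3) -> str:
            draft = complete(f"Answer this question:\n{question}")
            for _ in range(max_retries):
                # A whole extra model call just to grade the draft.
                verdict = complete(
                    f"Question: {question}\nProposed answer: {draft}\n"
                    "Reply CORRECT if the answer is right, otherwise explain the error."
                )
                if verdict.strip().startswith("CORRECT"):
                    return draft
                # Regenerate with the critique folded in: yet another full call.
                draft = complete(
                    f"Question: {question}\nPrevious answer: {draft}\n"
                    f"Critique: {verdict}\nGive a corrected answer."
                )
            return draft
        ```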

        The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

          • mindbleach@sh.itjust.works · 2 hours ago

          We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

          And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.
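
          (Concretely, “can only say yes, no, or mu” could just mean masking the output distribution down to three tokens. A sketch; the token ids here are invented:)

          ```python
          import numpy as np

          # Hypothetical ids for the only three tokens the model may emit.
          ALLOWED = {"yes": 4874, "no": 1219, "mu": 15357}

          def constrained_answer(logits: np.ndarray) -> str:
              """Pick the most likely of the three permitted answers.

              logits holds the model's raw next-token scores over its whole
              vocabulary. Everything outside {yes, no, mu} is ignored, so no
              effort is spent on the quality of any accompanying explanation.
              """
              return max(ALLOWED, key=lambda word: logits[ALLOWED[word]])
          ```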

          Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
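
          (Both tricks are literally just string edits to the prompt. A sketch, with complete() again standing in for any model call:)

          ```python
          def complete(prompt: str) -> str:
              """Placeholder for any chat-completion call."""
              raise NotImplementedError

          def ask(question: str) -> str:
              # Dumb trick 1: tell the model to think out loud.
              reasoning = complete(f"{question}\nLet's think step by step.")
              # Dumb trick 2: have it check its own work before answering.
              return complete(
                  f"{question}\nDraft reasoning:\n{reasoning}\n"
                  "Check the reasoning above for mistakes, then give the final answer."
              )
          ```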

      • Korhaka@sopuli.xyz · 15 hours ago

        Seen a few YouTube channels now that just churn out AI-generated content, usually audio only with a generated picture on screen. Vast amounts can be made that cheaply. Google is going to have fun storing all of it when each video only gets like 25 views. I think at some point they’re going to have to delete stuff.