• RabbitBBQ@lemmy.world · +26/-1 · 11 hours ago

    If the standard is replicating human-level intelligence and behavior, making up shit just to get you to go away about 40% of the time kind of checks out. In fact, I bet it hallucinates less and is wrong less often than most people you work with.

    • bier@feddit.nl · +4 · 6 hours ago

      My kid sometimes makes up shit and presents it completely as fact. It made me realize how many made-up facts I learned from other kids.

    • Devanismyname@lemmy.ca · +10 · 10 hours ago

      And it just keeps improving over time. People shit all over AI to make themselves feel better because scary shit is happening.

  • aceshigh@lemmy.world · +2/-1 · 6 hours ago

    I use ChatGPT for suggestions, as an aid to whatever it is that I’m doing. It either helps me or it doesn’t, but I always have my critical-thinking hat on.

  • Hikermick@lemmy.world · +12/-1 · 10 hours ago

    I did a Google search to find out how much I pay for water; the water department where I live bills by the MCF (1,000 cubic feet). The AI Overview told me an MCF was one million cubic feet. It’s a unit of measurement. It’s not subjective, not an opinion, and the AI still got it wrong.
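
    (For anyone who hasn’t seen the unit: the “M” there is the Roman numeral thousand, not the metric mega, so 1 MCF = 1,000 cubic feet, which works out to roughly 7,480 US gallons; a million cubic feet would be written MMCF. That conversion is my own back-of-the-envelope math, not anything from the bill.)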

  • SirSamuel@lemmy.world · +74/-4 · 16 hours ago

    First off, the beauty of these two posts being beside each other is palpable.

    Second, as you can see in the picture, it’s more like 60%.

    • morrowind@lemmy.ml · +16 · 12 hours ago

      No, it’s not. If you actually read the study, it’s about AI search engines correctly finding and citing the source of a given quote, not general correctness, and not just the plain model.

        • SirSamuel@lemmy.world · +22 · 12 hours ago

          Read the study? Why would I do that when there’s an infographic right there?

          (Thank you for the clarification, I actually appreciate it.)

  • surph_ninja@lemmy.world · +4/-1 · edited · 7 hours ago

    If you want an AI to be an expert, you should only feed it data from experts. But these are trained on so much more. So much garbage.

  • Most of my searches have to do with video games, and I have yet to see any of those AI-generated answers be accurate. But I mean, when the source of the AI’s info is a Fandom wiki, it was already wading in shit before it ever generated a response.

    • henfredemars@infosec.pub · +5 · 11 hours ago

      I’ve tried it a few times with Dwarf Fortress, and it always gave horribly wrong, hallucinated instructions on how to do something.

  • snooggums@lemmy.world · +150/-1 · 20 hours ago

    I love that this mirrors the experience of experts on social media like Reddit, which was used for training ChatGPT…

    • PM_Your_Nudes_Please@lemmy.world · +40 · edited · 18 hours ago

      Also common in news. There’s an old saying along the lines of “everyone trusts the news until they talk about your job.” Basically, the news is focused on getting info out quickly. Every station is rushing to be the first to break a story. So the people writing the teleprompter usually only have a few minutes (at best) to research anything before it goes live in front of the anchor. This means that you’re only ever going to get the most surface-level info, even when the talking heads claim to be doing deep dives on a topic. It also means they’re going to be misleading or blatantly wrong a lot of the time, because they’re basically just parroting the top Google result regardless of accuracy.

      • ChickenLadyLovesLife@lemmy.world · +10 · 15 hours ago

        One of my academic areas of expertise way back in the day (late '80s and early '90s) was the so-called “Mitochondrial Eve” and “Out of Africa” hypotheses. The absolute mangling of this shit by journalists even at the time was migraine-inducing, and it’s gotten much worse in the decades since then. It hasn’t helped that subsequent generations of scholars have mangled the whole deal even worse. The only advice I can offer people is that if the article (scholastic or popular) contains the word “Neanderthal” anywhere, just toss it.

          • ChickenLadyLovesLife@lemmy.world · +4 · 11 hours ago

            Are you saying neanderthal didn’t exist, or was just homo sapiens? Or did you mean in the context of mitochondrial Eve?

            All of these things, actually. The measured, physiological differences between “homo sapiens” and “neanderthal” (the air quotes here meaning “so-called”) fossils are much smaller than the differences found among contemporary humans, so the premise that “neanderthals” represent(ed) a separate species - in the sense of a reproductively isolated gene pool since gone extinct - is unsupported by fossil evidence. Of course nobody actually makes that claim anymore, since it’s now commonly reported that contemporary humans possess x% of neanderthal DNA (and thus cannot be said to be “extinct”). Of course nobody originally (when Mitochondrial Eve was first mooted) made any claims whatsoever about neanderthals: the term “neanderthal” was imported into the debate over the age and location of the last common mtDNA ancestor years later, after it was noticed that the age estimates of neanderthal remains happened to roughly match the age estimates of the genetic last common ancestor. And this was also after the term “neanderthal” had previously gone into the same general category in Anthropology as “Piltdown Man”.

            Most ironically, articles on the subject now claim a correspondence between the fossil and genetic evidence, despite the fact that the very first articles (out of Allan Wilson’s lab and published in Nature and Science in the mid-1980s) drew their entire impact and notoriety from the fact that the genetic evidence (which supposedly gave 100,000 years ago and then 200,000 years ago as the age of the last common ancestor) completely contradicted the fossil evidence (which shows upright bipedal hominids spreading out of Africa more than a million and a half years ago). To me, the weirdest thing is that academic articles on the subject now almost never cite these two seminal articles at all, and most authors seem genuinely unaware of them.

      • UnderpantsWeevil@lemmy.world · +2 · edited · 14 hours ago

        There’s an old saying along the lines of “everyone trusts the news until they talk about your job.”

        This is something of a selection bias. Generally speaking, if you don’t trust a news broadcast then you won’t watch it. So of course you’re going to be predisposed to trust the news sources you do listen to. Until the news source bumps up against some of your prior info/intuition, at which point you start experiencing skepticism.

        This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic.

        Investigative journalism has historically been a big part of the industry. You do get a few punchy “If it bleeds, it leads” hit pieces up front, but the Main Story tends to be the result of some more extensive investigation and coverage. I remember my home town of Houston had Marvin Zindler, a legendary beat reporter who would regularly put out interconnected 10-15 minute segments that offered continuous coverage on local events. This was after a stint at a municipal Consumer Fraud Prevention division that turned up numerous health code violations and sales frauds (he was allegedly let go by an incoming sheriff with ties to the local used car lobby, after Zindler exposed one too many odometer scams).

        But investigative journalism costs money. And it’s not “business friendly” from a conservative corporate perspective, which can cut into advertising revenues. So it is often the first line of business to be cut when a local print or broadcast outlet gets bought up and turned over for syndication.

        That doesn’t detract from a general popular appetite for investigative journalism. But it does set up an adversarial economic relationship between journals that do carry investigative reports and those more focused on juicing revenues.

      • jjjalljs@ttrpg.network · +5 · 11 hours ago

        I was going to post this, too.

        The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

  • RedSnt 👓♂️🖥️@feddit.dk · +14/-1 · edited · 15 hours ago

    I’ve been using o3-mini mostly for ffmpeg command lines. And a bit of sed. It hasn’t been terrible; it’s a good way to learn stuff I can’t decipher from the man pages. Not sure what else it’s good for tbh, but at least I can test and understand what it’s doing before running the code.
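
    To give a flavor of the kind of thing I ask for (a made-up example of mine, not actual o3-mini output; the file names are placeholders), it’s usually on the level of “re-encode this to H.264 with AAC audio”, and I check each flag against the man page before running it:

        # Minimal sketch: build and run a typical ffmpeg re-encode command.
        # Assumes ffmpeg is installed and on PATH; "input.mp4"/"output.mp4" are placeholders.
        import subprocess

        cmd = [
            "ffmpeg",
            "-i", "input.mp4",     # input file
            "-c:v", "libx264",     # H.264 video encoder
            "-crf", "23",          # quality: lower = better quality, bigger file
            "-preset", "medium",   # encoding speed vs. compression trade-off
            "-c:a", "aac",         # AAC audio encoder
            "output.mp4",
        ]
        subprocess.run(cmd, check=True)  # raises CalledProcessError if ffmpeg fails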

  • Korhaka@sopuli.xyz · +9 · 14 hours ago

    I just use it to write emails: I give the LLM the facts and tell it to write an email based on those and the context. Works pretty well, but it doesn’t really sound like something I wrote; it adds too much emotion.

  • balderdash@lemmy.zip · +16/-7 · 20 hours ago

    DeepSeek is pretty good tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow-up questions can clarify.

          • snooggums@lemmy.world · +7/-1 · edited · 16 hours ago

            Is it though? I really can’t tell.

            Poe’s law has been working overtime recently.

            Edit: saw a comment further down that it’s a default DeepSeek response for censored content, so yeah, a joke. People who don’t have that context aren’t going to get it.

        • Geometrinen_Gepardi@sopuli.xyz · +9/-2 · 19 hours ago

          In my opinion it should have been the politburo that was pureed under tank tracks and hosed down into the sewers instead of those students.

          • InvertedParallax@lemm.ee · +4 · 19 hours ago

            It really is so convenient: there are so many CPC members, but they all happen to be near a conveniently placed wall, which is more than enough.

          • alcoholicorn@lemmy.ml · +3/-8 · edited · 16 hours ago

            The western narrative about Tiananmen Square is basically orthogonal to the truth?

            Like, it’s not just filled with fabricated events like tanks pureeing students; it completely misses the context and response to tell a weird “china bad and does evil stuff cuz they hate freedom” story.

            The other weird part is that the big setpieces of the western narrative, like tank man getting run over by tanks headed to the square, are so trivial to debunk (just look at the uncropped video), yet I have yet to see one lemmitor actually look at the evidence and develop a more nuanced understanding. I’ve even had them show me compilations of photos from the events and never stop to think “Huh, these pictures of gorily lynched cops, protesters shot in streets outside the square, and burned vehicles aren’t consistent with what I’ve been told, maybe I’ve been misled?”

            • Max@lemmy.world · +1 · edited · 6 hours ago

              I just read the entire article you linked and it seems pretty in line with what I was taught about what happened in school. And it definitely doesn’t make me sympathetic to the PLA or the government.

              • alcoholicorn@lemmy.ml · +1 · edited · 3 hours ago

                Then your school did a better job of educating you than anyone talking about thousands of protesters getting ground into paste. Mine told me that tens of thousands of protesters were all blocked into the square, then tanks machine-gunned them all down and ran them over, and the only picture to make it out of the event was Tank Man blocking the tanks from entering the square.

                The point isn’t to make you sympathetic to the PLA, if you have a more nuanced understanding than “china killed 1000s of protestors because they fear and hate freedom”, you’re already ahead of 9/10 lemmitors, including the one I was responding to.

                You can’t have a constructive discussion with someone whose analysis begins and ends with “china bad”, because they are incapable of actually engaging with the material beyond twisting any data into hostile evidence, and making up some if none is available.

        • Remember_the_tooth@lemmy.world · +13/-6 · 19 hours ago

          Is this a reference I’m not getting? Otherwise, I feel like censorship of a massacre is not morally acceptable regardless of culture. I’ll leave this here so this doesn’t get mistaken for nationalism:

          https://en.m.wikipedia.org/wiki/List_of_massacres_in_the_United_States

          It’s by no means a comprehensive list, but more of a primer. We do not forget these kinds of things in the hope that we may prevent future occurrences.

        • InvertedParallax@lemm.ee · +2/-8 · 19 hours ago

          Are we calling the Communist Party of China, with its history of genocide and general evil, some kind of culture now?

          Can’t believe how hostile people are toward Nazis; we should have respected their cultural use of gas chambers.

          • 4am@lemm.ee · +10 · 19 hours ago

            Communism was never the problem; authoritarianism is the problem.

            • InvertedParallax@lemm.ee · +2/-1 · 15 hours ago

              The CPC is and always has been the definition of authoritarianism, and now it’s hypercapitalist authoritarianism.

  • lalala@lemmy.world · +2/-1 · 14 hours ago

    I think AI has now reached the point where it can deceive people, not the point where it equals humanity.

  • OsrsNeedsF2P@lemmy.ml · +3/-4 · edited · 15 hours ago

    Oof let’s see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas and find holes in my design planning before each session.

    Does it make mistakes? Not really. It has a hard time getting creative with nuanced examples (e.g., if you ask it to “give practical examples where the time/accuracy tradeoff in Flink is important”, it can’t come up with more than one or two truly distinct examples), but it’s never wrong.

    The only time it’s blatantly wrong is when it hallucinates due to lack of context (or oversaturated context). But you can usually tell when something doesn’t make sense and prod it with follow-ups.

    Tl;dr: funny meme, would be funnier if true.

    • RagingRobot@lemmy.world · +5 · 14 hours ago

      That’s not been my experience with it. I’m a software engineer, and when I ask it stuff it usually gives plausible answers, but there is always something wrong. For example, it will recommend old, outdated libraries or patterns that look like they would work, but when you try them out you find they’re set up differently now or never existed at all.

      I have been using Windsurf to code recently and I’m liking it, but it makes some weird choices sometimes and is way too eager to code, so it spits out a ton of code you need to review. Out of the box, it would be easy to get it to generate a bunch of spaghetti code that mostly works but isn’t maintainable by a person.

    • spooky2092@lemmy.blahaj.zone · +2 · 14 hours ago

      I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask it if what it gave me was actually real.

      Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don’t exist based on the prompt. I give it plenty of context on what I’m using and trying to do, and it makes up shit based on what it thinks I want to hear.

  • Meltdown@lemmy.world · +4/-34 · edited · 17 hours ago

    This, but for Wikipedia.

    Edit: Ironically, the downvotes are really driving home the point in the OP. When you aren’t an expert in a subject, you’re incapable of recognizing the flaws in someone’s discussion, whether it’s an LLM or Wikipedia. Just like the GPT bros defending the LLM’s inaccuracies because they lack the knowledge to recognize them, we’ve got Wiki bros defending Wikipedia’s inaccuracies because they lack the knowledge to recognize them. At the end of the day, neither one is a reliable source for information.

      • A_norny_mousse@feddit.org · +8 · 17 hours ago

        TBF, as soon as you move out of the English language, the oversight of a million pairs of eyes gets patchy fast. I have seen credible reports about Wikipedia pages in languages spoken by, say, less than 10 million people, where certain elements can easily control the narrative.

        But hey, some people always criticize Wikipedia as if there were some actually 100% objective alternative out there, and that I disagree with.

        • Fair point.

          I don’t browse Wikipedia much in languages other than English (mainly because those pages are the most up-to-date) but I can imagine there are some pages that straight up need to be in other languages. And given the smaller number of people reviewing edits in those languages, it can be manipulated to say what they want it to say.

          I do agree on the last point as well. The fact that literally anyone can edit Wikipedia takes a small portion of the bias element out of the equation, but it is very difficult not to have some form of bias in any reporting. I mostly use Wikipedia as a knowledge source on scientific topics, which are less likely to have bias in their reporting.

      • PeterisBacon@lemm.ee · +3/-2 · 18 hours ago

        Idk, it says Elon Musk is a co-founder of OpenAI on Wikipedia. I haven’t found any evidence to suggest he had anything to do with it. Not very accurate reporting.

      • Meltdown@lemmy.world · +1/-3 · 17 hours ago

        With all due respect, Wikipedia’s accuracy is incredibly variable. Some articles might be better than others, but a huge number of them (large enough to shatter confidence in the platform as a whole) contain factual errors and undisguised editorial biases.

        • It is likely that articles on past social events or individuals will have some bias, as is the case with most articles on those matters.

          But almost all articles on aspects of science are thoroughly peer reviewed and cited with sources. This alone makes Wikipedia invaluable as a source of knowledge.

    • glimse@lemmy.world · +14 · 18 hours ago

      What topics are you an expert on and can you provide some links to Wikipedia pages about them that are wrong?

      • Meltdown@lemmy.world · +3/-5 · 17 hours ago

        I’m a doctor of classical philology, and most of the articles on ancient languages, texts, and history contain errors. I haven’t made a list of those articles because the lesson I took from the experience was simply never to use Wikipedia.

    • OsrsNeedsF2P@lemmy.ml · +6/-2 · 15 hours ago

      There’s an easy way to settle this debate. Link me a Wikipedia article that’s objectively wrong.

      I will wait.

    • Ms. ArmoredThirteen@lemmy.zip · +15/-2 · 19 hours ago

      If this were true, which I doubt, at least Wikipedia tries and has a specific goal of doing better. AI companies largely don’t give a hot fuck as long as it works well enough to vacuum up investments or profits.

      • Meltdown@lemmy.world · +1/-6 · 17 hours ago

        Your doubts are irrelevant. Just spend some time fact-checking random articles and you will quickly verify for yourself how many inaccuracies are allowed to remain uncorrected for years.

        • Korhaka@sopuli.xyz · +3/-2 · 14 hours ago

          Small inaccuracies are different from just being completely wrong, though.

      • PeterisBacon@lemm.ee · +2 · 18 hours ago

        Because some don’t let you. I can’t find anything to edit on the Elon Musk article, or even a way to suggest an edit. It says he is a co-founder of OpenAI. I can’t find any evidence to suggest he has any involvement. Wikipedia says co-founder, tho.

      • Meltdown@lemmy.world · +3/-11 · 17 hours ago

        There are plenty of high-quality sources, but I don’t work for free. If you want me to produce an encyclopedia using my professional expertise, I’m happy to do it, but it’s a massive undertaking that I expect to be compensated for.