• TheReturnOfPEB@reddthat.com · +70/−1 · edited · 1 day ago

    That guy got speedrun out of his life in four months, using A.I. like a walkthrough guide in a CRPG.

    I’m grateful for many things. A youth without regular school shootings. Knowing how to occupy my life without the internet, thanks to a childhood with room to roam. Paved streets and vaccines.

    But I’m also grateful for the fact that A.I. became prevalent after I had learned to wrestle my fears of missing out and my addictions.

  • stoy@lemmy.zip · +58/−1 · 1 day ago

    I am glad I realized early on just how bad AI is. I have sometimes had it help me write some simple HTML/CSS code, but it is mostly annoying to use.

    It makes me lose track of what does what in my code, and it also takes away my initiative to try changing the code myself.

    When it comes to general information, it mostly generates decent responses, but it keeps getting enough things wrong that you just can’t trust it.

    Combine that with the fact that AIs are trained to always accommodate the user and almost never tell them a straight-up “No”: they keep engaging the user, are never angry, and focus on reinforcement and validation of whatever arguments are given to them.

    I feel dumber when I have used an AI.

    • Jankatarch@lemmy.world · +20 · 23 hours ago

      I am starting to appreciate all the times Stack Overflow people told me my question itself was wrong and I was stupid.

      Well, the first part mainly.

    • Jesus_666@lemmy.world · +12/−5 · 21 hours ago

      There are things LLMs are genuinely useful for.

      Transforming text is one. To give examples, a friend of mine works in advertising and they routinely ask an LLM to turn a spec sheet into a draft for ad copy; another person I know works as a translator and uses DeepL as a first pass to take care of routine work. Yeah, you can get mentally lazy doing that, but it can be useful for taking care of boilerplate stuff.

      Another one is fuzzy data lookup. I occasionally use LLMs to search for things where I don’t know how to turn them into concise search terms. A vague description can be enough to get an LLM onto the right track and I can continue from there using traditional means.

      Mind you, all of that should be done sparingly and with the awareness that the LLM can convincingly lie to you at any time. Nothing it returns is useful as anything but a draft that needs revision and any information must be verified. If you simply rely on its answer you will get something reasonably useful much of the time, you will get mentally lazy, and sometimes you will act on complete bullshit without knowing it.

      • OneWomanCreamTeam@sh.itjust.works · +9 · 17 hours ago

        This is a little beside the point, but even in those use cases LLMs have the fatal flaw of being obscenely resource-intensive. They require huge amounts of electricity and cooling to keep operating. Not to mention most of them are trained on stolen data.

        Even when they’re an effective tool for a given task, they’re still not an ethical one.

        • Jesus_666@lemmy.world · +3 · 17 hours ago

          That’s true; I didn’t touch on those points but I very much agree. (Yes, while I occasionally use it. It’s easy to ignore the implications of what you’re doing for a moment.)

    • WalnutLum@lemmy.ml · +22 · 1 day ago

      There have been studies that report the same thing: using an AI for too long actively makes you dumber.

  • trashcan@sh.itjust.works · +35/−1 · 1 day ago

    That wasn’t the worst of it. At that point he had blown nearly $12,000 trying to create world-changing code. He became manic, and his concerned therapist called the cops to check in on him. He was institutionalized for nearly two weeks, and even got tangled with an investor who threatened to kill him if he didn’t come up with the goods.

    What a microcosm of our current situation.

  • protist@mander.xyz · +14/−11 · 1 day ago

    First guy: “I’ve never been manic in my life. I’m not bipolar.”

    I’m just highly skeptical of this.

    Second guy: That wasn’t the worst of it. At that point he had blown nearly $12,000 trying to create world-changing code. He became manic, and his concerned therapist called the cops to check in on him. He was institutionalized for nearly two weeks

    Yeah, that sounds right. Regarding “AI psychosis,” everything I’ve read indicates it exacerbates existing psychoses; it doesn’t create them. That’s not to say it can’t mess with people’s psychology, especially the stuff around suicide, but I think the “AI psychosis” the media portrays is not real.

    • Mohamed@lemmy.ca · +22 · 1 day ago

      If it can exacerbate psychotic tendencies, then it can cause psychosis. Whether increasing or exacerbating tendencies counts as causing them is an interesting area for debate, but it’s just semantics. Of course, I am also arguing semantics here.

      I think what is more interesting to ask psychologically is just how much AI exacerbates psychotic tendencies, and whether AI-induced psychosis is temporary (like drug-induced psychosis often is) or permanent. I don’t know anything about this topic, but I hope to hear from someone who does.

    • cecilkorik@piefed.ca · +18/−1 · 1 day ago

      I mean, I kind of agree that there’s a lot of undiagnosed and underreported mental health issues in our society, and it’s not surprising that highly functional people can turn out to have serious mental issues lurking just below the surface.

      But there’s also a sort of gatekeeping going on here, suggesting that “well as long as you’re not already sort of psychotic you don’t have anything to fear from AI psychosis” is sort of like throwing the low-key psychotic people to the wolves and basically saying they don’t really matter to us because most of us aren’t them. At least, we assume we aren’t them. And we don’t even know that for sure. We could be them.

      Lots of smug people with 20/20 hindsight always love to believe there are always signs, but signs aren’t proof, and you don’t have proof there are always signs.

    • nimble@lemmy.blahaj.zone · +12/−1 · edited · 1 day ago

      “AI-induced psychosis” is new and relatively unstudied, but it has been compared to monomania, a diagnosis that predates the current “kaleidoscope” of modern mania. Under monomania there is one central focus, which in this case is AI. This is to say it’s not a completely new phenomenon.

      But as far as whether AI causes psychosis or exacerbates underlying conditions, I’m not sure this distinction matters. There are more risk factors than simply being part of a “vulnerable population”; other factors could include a lack of reality testing, missed crisis escalation, intensive use, and limited context windows compounding escalations over time.

      Whatever we want to call it, there is harm being done. That’s real to me.

  • mfed1122@discuss.tchncs.de · +12/−24 · 1 day ago

    Imma keep it real: AI has problems, but “AI psychosis” has to be some of the most hilarious bullshit ever. This guy is either already mentally unwell or just really stupid.

    • Twongo [she/her]@lemmy.ml · +6 · 18 hours ago

      Agree, though not as harshly as you word it. But a person who falls into “AI psychosis” would be just as susceptible to, idk… Twitter COVID conspiracy psychosis.

    • Strider@lemmy.world · +12/−2 · 21 hours ago

      Despite the downvotes, I actually agree. While I am not saying there is no AI psychosis, there has to be groundwork for it to thrive. A healthy human would, or at least could, see it for the bullshit it is.

      So yes, I don’t think he was that well in the first place. Currently, since the worldwide situation isn’t all that good, I think that also adds to a lot of people not being well.

      • nieminen@lemmy.world · +4 · 18 hours ago

        This is a fairly big assumption. I’m not sure I agree or disagree with the person you’re replying to, but most kids these days are growing up with atrophied executive function and problem-solving skills due to a lack of actual parenting, social media, and the idea that the answer to a question is worth more than how it was arrived at. I have seen teachers talk about how only 2 of their 115 students are reading at grade level, while the rest are far behind.

        I say all this because I was born into a cult and believed anything anyone I “should listen to” told me. I have yet to find a non-technical person who doesn’t like “AI” and doesn’t think it’s the coolest thing since the Internet. It’s not a far leap for someone to go where this dude did.

        • Strider@lemmy.world · +2 · 15 hours ago

          Indeed, it’s all a combination of assumption and observation. And of course, technical knowledge.

          I could go into deeper detail on the whole context, but it is very difficult in this format.

          But for this dialog I think the most important aspect, which we also agree on, is that we can assume a huge number of people to be ‘broken’ or in need of help, which is the attack vector I was referring to.

          And in that context, I am convinced the tech bros have absolutely no idea what they’re doing (to humanity).

    • spartanatreyu@programming.dev · +18/−1 · 24 hours ago

      Nah, it’s triggering the same mental pathways as in gambling disorders (which is the same as other disorders, which is the same… etc), but we’re not going to lump it in with gambling because that wouldn’t help gamblers or people who’ve deluded themselves with AI.

      Best to give it its own name.