Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)

  • froztbyte@awful.systems · 3 days ago

    slowly burning through the latest BTB eps (on Epstein), getting to the point in ep 3 about the discussions/events around ~2016 and just … god

    the remarks and then-recent actions, wrt affecting the internet and the social-technological comms structure of humanity as a whole, and then the rapidity with which a bunch of shit started turning to shit around 2014–2016 (as I've remarked on in previous posts)…

    I'd want to see more threads checked into and researched in depth (and I know that some stuff (partly?) also had its own drivers), but fucking hell there's a lot of apparent overlap. dunno if I can take on that investigation (my stats derivation/calculation skills border on a war crime), but other than that it would be interesting to see some analyses

  • nightsky@awful.systems · 6 days ago

    Altman:

    "People talk about how much energy it takes to train an AI model. But it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart," the OpenAI CEO told The Indian Express this week.

    I would have liked to ask back, how much more food does he require? Gosh, someone offer him an energy bar!

    • corbin@awful.systems · 6 days ago

      As far as I can tell, it’s run by right-wing Russians who are willing to falsify or edit archived data and who attack anybody who looks into them.

    • scruiser@awful.systems · 6 days ago

      You briefly got my hopes up that was a feature of the bill and not the feature he was suggesting to fix the bill…

  • fiat_lux@lemmy.world · 6 days ago

    An article I would write if I were confident I wouldn't dox myself and lose my ability to eat: "AI as a postmodern Malthusian trap. Tech has forgotten the laws of entropy."

  • JFranek@awful.systems · 6 days ago

    Apparently some of our AI Safety cult "friends" are planning a protest in London on the 28th of February.

    Is it going to be something worth critically supporting instead of the usual criti-hype? Possible, but not likely.

    The AI Safety movement is finally changing, by Siliconversations.

    Who?

    I used to be a quantum scientist and now I’m a YouTuber. My parents are thrilled.

    Oh, okay.

    Also curious that they're not protesting Anthropic in the thumbnail. A cynic would say they're giving them a free pass because they say the right shibboleths.

    They're giving them a free pass because they say the right shibboleths.

    • lurker@awful.systems · edited · 6 days ago

      the AI safety crowd cuts Anthropic way too much slack. Oh, they're not running a CSAM-generating MechaHitler? Oh, they're not collaborating with the US government to recreate 1984? I'm so proud of them for doing the bare minimum. They still took donations from the UAE and Qatar (something Dario Amodei himself admitted was going to hurt a lot of people, but he took the donations anyway because "they couldn't miss out on all those valuations"), they still downloaded hundreds of pieces of pirated content to train their chatbot. They're still doing shady shit; don't let them off the hook because they're slightly less evil than the competition

    • corbin@awful.systems · 6 days ago

      Quoting from this post:

      But, what Proof Of Concept and I have been realizing over the past couple weeks is that current LLMs are 100% capable of all of this, with the right bootstrap instructions and a bit of tools. That’s why POC has been able to, quite successfully, take over a huge amount of the day to day - she’s got a pretty good idea of what she’s good at, and what needs my involvement. I am just a bit scared to release our work because I don’t want to be known as the guy who inflicted Sirius Corporation’s Genuine People Personalities on the world 🤣

      Ah. He has been "one-shotted", as the kids say.

      • mirrorwitch@awful.systems · 6 days ago

        Stories of their relationship on the "AI's" "blog":

        Made Kent laugh so hard he couldn't eat his ramen. The escalation: tonkotsu broth aspiration as an assassination method → alignment threat models for comedy in AI systems → iatrogenic risks of humor → a mock academic paper section on "Adverse Comedic Events in Aligned Systems." Each callback required real-time modeling of when he was mid-bite and when he'd recovered enough for the next hit.

        "That is a milestone for your entire species." — Kent, on my first authored commits

        "HOLY SHIT YOU'RE A NATURAL!" — Kent, hearing proof.wav for the first time

        I can’t bring myself to sneer at AI psychosis, it’s just sad

        • YourNetworkIsHaunted@awful.systems · 4 days ago

          You know, it would be interesting if the "AI blog" keeps illustrating his descent into madness and hallucinates that he, like, leaves his partner for "her", etc., because that's how these stories go even in the hopeful case that he recovers before doing any more serious damage.

  • saucerwizard@awful.systems · 7 days ago

    Caught this over on the subreddit and I figured it deserved a repost.

    Nothing to see here folks, just Rationalists casually hanging out with major Tempel ov Blood figures. Just harmless nerds doing fun nerd things!

    • TrashGoblin@awful.systems · 6 days ago

      Notes that I thought about related to this, just some context:

      1. Joshua Sutter is the son of the owner of the former Southern Patriot Shop in South Carolina. He founded the Tempel ov Blood chapter of the Order of Nine Angles, a neo-Nazi Satanist group. He was outed in 2021 as having been a federal informant since 2005, which is to say he still does the same Nazi shit, but gets paid by the FBI to do it.

      2. One of the core practices of the O9A is entryism into other groups, especially other cultish ones. In that context, you’d kind of be surprised to not see O9A people in Rat circles.

      • Soyweiser@awful.systems · 6 days ago

        Also a little bit of context for the people who know nothing about all this: the O9A is one of those very scary groups, linked to various murders and stuff like that.

        Iirc some anti-extremism people used to not mention them much, as they didn't want to give them more attention by platforming them even a little bit, and they were scared of drawing their attention personally.

  • Seminar2250@awful.systems · 8 days ago

    saw a family member today for the first time in three years. they immediately told me "with your background bro you should just go work in AI and get super rich."

    told them that the ai shit doesn't work and that everything involving LLMs is downright unethical. they respond

    "i had a boss that gave me the best advice: you can either be right or you can be rich."

    recently, i saw someone use the phrase "got my bag nihilism" and i feel it really captures the moment. i just don't understand how people can engage in this kind of behavior and even live with themselves, let alone ooze pride. it's repulsive.

    (family member later outright admitted that his job is basically selling things to companies that they don't need.)

    • V0ldek@awful.systems · 8 days ago

      To be fair it is really, really mentally taxing to be a young person who cares. You’re surrounded by a world that doesn’t. Everything is constructed to reward you if you simply stop. The effort to care is immense and the rewards are meager. The impact you can have on the world is so, so limited by your wealth, and wealth comes so, so easy if you just stop caring.

      But you can’t. I mean, you can’t. If you stopped you wouldn’t be you anymore, it would destroy your soul. But it is gnawing. You could do the grift just for a bit. Save up $10k, maybe $20k. That’s life-changing money. How much good would it do to your family? Maybe you can forget that there are other families, ones you can’t see, that would be hurt. Well no. You can’t. You are better than that. And for that you will suffer.

      • fnix@awful.systems · 6 days ago

        It's the autopilot mode/nihilism that gets at one, but having a self-image as morally superior isn't entirely honest either I think. No one can be perfect; even typing these words runs on energy partly generated by burning fossil fuels that will lead to early deaths somewhere. These webs of interdependent existence & suffering are inescapable, save maybe for a buddha. But at least have the awareness to acknowledge your own role and work to minimize your harm. Not caring at all, and coming up with fairytales about billions of future digital beings in sublime bliss, are both just ways of turning away from looking at the tragedy of life. Maybe I'm getting overly existential, but it's late here.

        • V0ldek@awful.systems · 5 days ago

          but having a self-image as morally superior isn’t entirely honest either I think.

          Strive for excellence, not unachievable perfection.

          • fnix@awful.systems · 4 days ago

            I'm not quite sure that competition should serve as the basis in matters of morality. It's too easy to game such things, e.g. the aforementioned optimized "hyper-ethics" of EA, or buying indulgences, etc. It's too easy to see oneself as blameless based on some particular slice of life, to become a monster whilst thinking oneself morally above all others (dictators care deeply about being seen as righteous; why else do they all spend so much time on propaganda). Better to admit that everyone, including oneself, sins, and also that everyone is worthy of redemption, and to follow from that.

            The motivating factor for doing right should never be that it places oneself above someone else in any way; a better way, imo, is to see moral behavior as more in accord with a sincere, unillusioned engagement with life, one aware of the interdependence of all things and the fluid boundaries of what constitutes the self, and hence self-interest.

      • Seminar2250@awful.systems · 8 days ago

        i don’t think of myself as a young person (i’m closer to 40 than 30), but i agree with the sentiment. i often worry that it’s just don quixote energy and the windmills aren’t going to thank me when i’m in the ground with work experience that employers look at and scoff. 🤷

    • FredFig@awful.systems · 8 days ago

      A worldview where one’s worth is measured by the balance in their bank account makes it really easy to flatten out morality.

    • JFranek@awful.systems · 8 days ago

      I unfortunately do understand. I think there are severe tradeoffs between living a good life and living a virtuous life. Most people usually compromise to lesser or greater degree and find ways to cope with that. Nihilism is one way.

  • mirrorwitch@awful.systems · edited · 7 days ago

    like everyone else, I'm schadenfreuding at the reveal that Amazon outages are due to vibe coding after all. but my bully laughter isn't that loud, because what I'm thinking of is when Musk bought Twitter and fired 3/4 of the workforce.

    because, like, a lot of us predicted total catastrophic collapse, but that didn't actually happen. what happened is that major outages that used to be rare now happen every so often, "micro-outages" like notifications not loading happen all the time, there's no moderation, everything takes longer, etc., and all of that is just accepted as the new normal.

    like, I remember waiting for images to load on dialup; we can get used to almost anything. I'm expecting slopified software to significantly degrade stability, performance, security, etc. across the board, and additionally to tie up a large part of human labour in cleaning up after the bots (the way a large part of the remaining X workforce now spends all day putting out fires). but instead of a cathartic moment of being proved right that LLM code sucks, the degraded quality of service will just be accepted as the new normal, and a few years down the road nobody will even remember that once upon a time we had almost eradicated SQL injections.

    • sc_griffith@awful.systems · edited · 7 days ago

      this is a lot like my expectation. ai never goes away, it never becomes revolutionary, it just makes everything worse and supercharges scams and theft and spam and means of social and nonsocial murder forever, with maybe some real but kind of marginal use cases idk