• 1 Post
  • 904 Comments
Joined 1 year ago
Cake day: December 12th, 2024

  • I think it’s pretty obvious at this point that a lot of the backdrop for the AI arms race is a competition to see who can succeed in paywalling the internet as a whole.

    Machine learning is more in the nature of a side business. The primary goal of scraping the entire internet is to then strangle the original sites by denying them traffic, so that the data the corporations have stored away in their centers is all that’s left, and anyone who wants to access it will have to pay them.

    It’s essentially rent-seeking on the largest, and thus most brazenly evil, scale the world has seen yet.

    And made just that much more evil by the fact that it’s all based on content that they’ve stolen from us in the first place.









  • Israel has long intended to do to southern Lebanon exactly what it’s doing to Gaza, and to the same end - to kill enough people and destroy enough infrastructure to terrorize the survivors into fleeing, so they can steal their land.

    This is nothing new - look back through history and you’ll find the same dynamic repeated over and over: every time the world’s focus shifts to Gaza or Iran or Syria or Libya or wherever, Israel takes the opportunity to attack southern Lebanon some more and kill and terrorize some more civilians, regurgitating some stock propaganda about Hezbollah if necessary. And if too much of the world’s attention shifts to Lebanon, they stop and just wait for the next distraction to provide the next opportunity.




  • How is this a surprise to anyone?

    To me, the single most notable thing about Gabbard, from the first moment she sleazed her way onto the national stage, was that she had zero principles. Not just the relative lack and flexibility common to politicians, but a complete absence - nothing but a featureless void where whatever passes for principles might be in another politician.

    Am I the only one who noticed that?



  • I’m never quite sure what to make of stories like this.

    On the one hand, it’s admirable to take a stand, but on the other hand, the stand implies that Trump’s decisions before this one were acceptably sound, and that this one crossed some threshold making it unacceptably unsound.

    The obvious reality is that Trump’s decisions barely even count as decisions, and can’t really be judged as either sound or unsound.

    Instead, they’re just the whims and obsessions of a delusional narcissist with the emotional maturity and attention span of a toddler, and the degree to which they might provide some benefit or serve some national interest is effectively random, since those are considerations of which Trump isn’t even capable. There’s barely room for anyone or anything else at all in Trump’s pathologically egocentric universe, and the degree to which other people and institutions exist at all to him is almost entirely just the degree to which they manage to threaten or soothe his ego.

    So it’s sort of like watching a blindfolded drunk firing a gun into a crowd and just nodding along complacently as long as he doesn’t hit anybody, then only when he does hit somebody saying, “Hey now - wait a minute! We can’t be having any of that!”


  • fancy autocomplete

    I hadn’t thought of it that way specifically, but not only is it fairly accurate - I’m willing to bet that the similarities aren’t coincidental. LLMs almost certainly evolved in part (and potentially almost entirely) from autocomplete software, and likely began as just an attempt to make autocomplete more accurate by expanding its databases and making it recognize, and assess the likely connections between, more key words.

    tokens

    That’s an important clarification, not only because they process more than words, but because they don’t really process “words” per se.
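The distinction is easy to see in miniature. Here’s a purely illustrative sketch - the vocabulary is made up, and real tokenizers (BPE and the like) learn theirs from data - showing how a single word gets split into sub-word tokens rather than being processed whole:

```python
# Toy greedy longest-match tokenizer. The vocabulary below is a
# hypothetical example, not any real model's; it just demonstrates
# that "tokens" are sub-word fragments, not words.
VOCAB = {"un", "believ", "able", "token", "iz", "ation"}

def toy_tokenize(word: str) -> list[str]:
    """Repeatedly take the longest vocabulary entry that prefixes the rest."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest prefix first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])          # unknown character: emit as-is
            i += 1
    return tokens

print(toy_tokenize("unbelievable"))   # ['un', 'believ', 'able']
print(toy_tokenize("tokenization"))   # ['token', 'iz', 'ation']
```

The model never sees "unbelievable" as one unit - it sees three fragments, which is part of why "fancy autocomplete over words" isn’t quite the right mental picture.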

    And personally, I’ve been more impressed by other things they’ve accomplished. For instance: processing retinal scans alongside diagnoses of diabetes to isolate indicators, such that they can accurately diagnose the latter from the former; or processing the sounds elephants make and noting that each elephant has a unique set of sounds associated with it, which the other elephants use to get its attention or to refer to it - which is to say, they have names. (That last is a particularly illustrative example of how LLMs work, since even we don’t know what those sounds actually mean - it’s just that the models have processed enough data to find the patterns.)