• 12 Posts
  • 3.6K Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • If it’s good, it’s good

    “Sharing your point of view is what makes something interesting and I have yet to see AI-generated content that has a point of view”.

    The problem is that readers and writers are necessarily playing an adversarial game. If you’re reading, you want to determine as quickly as possible whether what you’re reading is a waste of your time. If you’re writing, you want to signal to the reader that it’s worth continuing. Even if what you have to say is really deep and genuine and true, and anyone who read the whole thing could tell, a reader can see an em dash and bail within the first two seconds. That doesn’t make them shallow or not worth writing for; it’s just basic self defense.

    Like it or not, if you are writing something you want people to read, considering how to get past their filters is not optional; they aren’t going to give you a second look if you fail one. IMO that doesn’t have to be a bad thing. Honestly, a lot of formal writing style was already just signaling of a different type: that you are educated and likely know what you are talking about. Much of that was always bullshit, unrelated to whether you have something worthwhile to say. At least now that AI has dragged the old styles of writing through the mud, there is less incentive to waste time prettying up your words to match a standard.


  • although obviously they’re not going to throw ChatGPT-sized compute at it.

    I’m not entirely sure what more fundamental distinctions may exist between embeddings and LLMs, but smaller LLMs really struggle with comprehension when things are phrased in an unexpected way, and embeddings use comparatively very few resources. Maybe a circumvention-training tool could work like this: a writing game where the goal is to produce text about a topic such that the embedding fails to associate it with that topic, but a more powerful LLM succeeds (the idea being that a human might then be able to tell as well). The biggest advantage these systems have is probably just that people don’t get direct feedback about how their work is being interpreted.
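    The game’s win condition could be sketched roughly like this. To be clear, this is only a toy illustration of the idea, not any real system: `embedding_flags` here is a crude keyword check standing in for an embedding-based filter, and `llm_judge` is a stub standing in for a stronger model or human reader.

```python
def wins(text, topic, embedding_flags, llm_judge):
    """Win condition for the hypothetical writing game: the cheap
    embedding filter misses the topic, but a stronger LLM judge
    (standing in for an attentive human reader) still gets it."""
    return not embedding_flags(text, topic) and llm_judge(text, topic)

# Stand-in filters for illustration only; real versions would query models.
def embedding_flags(text, topic):
    return topic in text.lower()  # crude lexical proxy for an embedding filter

def llm_judge(text, topic):
    return True  # assume the stronger judge always understands

print(wins("pineapple on pizza is great", "pizza", embedding_flags, llm_judge))        # False
print(wins("tropical fruit atop flatbread rules", "pizza", embedding_flags, llm_judge))  # True
```

    The scoring is the whole game: you only score when the weak filter misses what the strong judge catches, which is exactly the gap a writer would want to learn to exploit.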




  • the words start grouping together automatically based on relevance just by the way the math works

    Sure, but isn’t it still the words that are grouping together? The guy in the OP video seems to be claiming that which words he used does not matter, which doesn’t make sense to me, since the depth of understanding these algorithms have of what is being said is still somewhat shallow.

    I would guess that it should be possible to engineer a sentence that communicates a particular message, but is phrased in such a way that it targets a location in vector space that is not associated with that message (until the other parts of their system make that association).
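    A toy demonstration of the evasion idea, using bag-of-words counts with cosine similarity as a stand-in for a learned embedding (a real embedding is far more robust to paraphrase, but the geometric intuition is the same): rephrasing a message with unexpected words moves its vector away from the topic’s region.

```python
from collections import Counter
import math

def bow_vector(text):
    # Toy stand-in for an embedding: raw bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

topic = bow_vector("guns firearms shooting weapon")
direct = bow_vector("people keep posting about guns and shooting")
evasive = bow_vector("people keep posting about pew pew sticks")

# The paraphrase shares no vocabulary with the topic, so its
# similarity drops even though a human reads the same message.
print(cosine(topic, direct) > cosine(topic, evasive))  # True
```

    With word counts the evasion is trivial (just avoid the keywords); a learned embedding closes most of that gap, but the claim above is that enough of it remains for a sufficiently unexpected phrasing to slip through.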



  • I think violence is way more effective in people’s imaginations than it is in reality. Even where it is effective, the ends resemble the means in unintended ways. It inherently promotes hierarchy and control, because it’s a way of solving problems that more than any other does not require listening to or understanding the people you are dealing with.

    All of that doesn’t mean violence can never be a good decision, but it’s very strongly biased towards being a bad decision, and people are much too fixated on it and sabotage their own efforts by appraising its utility too highly, especially if their goals are opposed to authoritarian dominance.



  • “Buy experiences, not things”

    The rationale isn’t exactly wrong as a comparison, but it smuggles in the assumption that it’s reasonable and normal to spend all your available money in an effort to be happy. Money is far more useful for reinforcing your continued survival and freedom than for anything else, and the idea that it’s good for regulating your emotions beyond that is a deception geared toward keeping consumer spending up.