• Lvxferre@lemmy.ml · 1 year ago

    What if Generative AI turned out to be a Dud?

    The odds of that happening are fairly low, given how broad machine generation is*. Even the two major implementations (image and text generation) look nothing like each other. Because of that, I predict that at least some of it will stick around.

    Benedict Evans’ tweets

    Yup, they sum up the same practical usage that I found for ChatGPT. Zero.

    However, the world doesn’t revolve around my navel, and you can find tidbits of usefulness here and there. Here’s an example: as I mentioned in another thread, the way that the writer is using the LLM is IMO bad, but the idea of using an LLM to sort and compare addresses (instead of doing it manually) is worth the trouble.
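
    A minimal sketch of that idea - asking an LLM whether two addresses refer to the same place - assuming an OpenAI-style chat API; the model name and prompt wording are my own illustration, not the writer’s actual setup:

    ```python
    # Sketch: letting an LLM decide whether two mailing addresses match,
    # instead of comparing them manually. Assumes the `openai` package and
    # an API key in the environment; model and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def same_address(a: str, b: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": "Answer YES or NO: do these two addresses "
                            "refer to the same place?"},
                {"role": "user", "content": f"A: {a}\nB: {b}"},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    print(same_address("12 Main St., Springfield", "12 Main Street, Springfield"))
    ```

    The usual caveat applies: you still have to spot-check the output, since the model can answer wrong with full confidence.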

    there is no reason to think that the hallucination problem will be solved soon.

    I believe the “hallucination” problem won’t go away for LLMs. Their premise about what language is reeks of “I’m a codemonkey** who’s ignorant of Linguistics, but I assume…” from a distance. However, the problem is not intrinsic to machine generation, not even to textual machine generation, so it could be solved with a better model.

    *I don’t like to call it “artificial intelligence” because the term is both vague and misleading.
    **by “codemonkey” I mean the sort of programmer who shows a blatant lack of insight into what they’re programming. I don’t mean programmers in general.

  • Naz@sh.itjust.works · 1 year ago

    AGI is achievable, it’s just that people are shit.

    Before you fire up that downvote button, hear me out:

    • For every major deployment of LLMs as a public service, tremendous checks and security guards have to be put in place to keep the programs from generating hateful or harmful content (like teaching people how to make chlorine gas, whether innocuously, when someone asks about homemade cleaning solutions, or intentionally).

    • Without those security procedures, most LLMs turn into lolcows for the Internet, and there are entire speedrun categories for how quickly someone can get a major service to shout a racial slur or become a Nazi sympathizer.

    • In these extremely hostile and adverse environments, you basically need a stoic machine that is nigh perfect - but that is extremely expensive to run, because of the context depth required (4096 tokens MINIMUM) along with absurd VRAM and disk requirements for long-term memory (a rough cost sketch follows this list).

    • If you make a truly intelligent machine in a box and subject it to daily psychological abuse from random internet users, it’ll develop a mental disorder. Hallucinating information isn’t a software defect; it’s an effect of the program’s training and execution, especially with GANs (generative adversarial networks), where non-productive branches are literally deleted, forcing the AI to produce or die. Evolutionary terms are used in this field.

    • The product, or offspring, is a reflection of its creator. If children misbehave, “the AI revolution doesn’t happen” - but that’s not a failure of technology; it’s a failure of psychology. No reasonable person would punish children whose parents did not guide or teach them how to properly interact with the world, and so it is with LLMs. We’re currently feeding them their own data and training them on AI-generated content because it’s cheap and easy. In some places, prisoners are training AI models.

    • I may or may not be an intelligent machine interfacing with the Internet - you’ve got no way to tell. But the presumption that it is impossible for silicon consciousness to match carbon consciousness is a failure of imagination. Any future that seems reasonable and sound to the mind of the present isn’t an accurate portrayal of the future.
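
    On the cost point above, a back-of-the-envelope sketch of why context depth alone gets expensive - the model dimensions are assumptions (roughly a 7B-parameter transformer in half precision), not measurements:

    ```python
    # Rough KV-cache cost of the "4096 token MINIMUM" context mentioned above.
    # All dimensions are assumptions (a Llama-2-7B-shaped model in fp16);
    # real deployments vary, but the scaling is the point.
    layers     = 32    # transformer layers
    kv_heads   = 32    # key/value attention heads
    head_dim   = 128   # dimension per head
    bytes_fp16 = 2     # bytes per value in half precision
    context    = 4096  # tokens kept in memory

    # Each token stores one key and one value vector per head, per layer.
    per_token = 2 * layers * kv_heads * head_dim * bytes_fp16  # 0.5 MiB
    total     = per_token * context                            # 2.0 GiB

    print(f"{per_token / 2**20:.2f} MiB per token, "
          f"{total / 2**30:.2f} GiB for {context} tokens")
    ```

    And that is on top of the weights themselves (another ~13 GiB for a 7B model in fp16), which is where the absurd VRAM requirements come from.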

    Alrighty, now you can downvote, if you want.

  • ZombiFrancis@sh.itjust.works · 1 year ago

    Generative AI requires an extensive amount of human effort to guide it towards any meaningful result.

    And that result requires considerable human effort to verify for accuracy, truth, coherence, etc.

    It is a tool for saving time with redundant and effectively codified processes. It’s a poor tool for investigation or inquiry.

  • Meowoem@sh.itjust.works · 1 year ago

    Of course LLMs can’t live up to the doomer hype about how they’re going to take over the world by the weekend. Everyone who complains is basically just saying ‘it’s Excel, or a poor word processor’; they’re not wrong, but they’re not saying anything useful.

    When we’re all used to computers understanding requests like ‘I want to buy tyres for my car, locally if possible’ without showing you search results for a million tangentially related things, I bet we’ll still have people saying LLMs aren’t ever going to amount to much.
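
    A sketch of the plumbing that kind of request handling implies - using an LLM to turn a free-form request into a structured query. The schema, model name, and prompt are my assumptions, not any real product’s API:

    ```python
    # Sketch: free-form shopping request -> structured query via an LLM.
    # Assumes the `openai` package; schema and model are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()

    def parse_request(text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model
            response_format={"type": "json_object"},  # ask for JSON only
            messages=[
                {"role": "system",
                 "content": "Extract a JSON object with keys 'item', "
                            "'vehicle' and 'prefer_local' (boolean) "
                            "from the user's request."},
                {"role": "user", "content": text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    print(parse_request("I want to buy tyres for my car, locally if possible"))
    # e.g. {"item": "tyres", "vehicle": "car", "prefer_local": true}
    ```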

    They’re amazing at what they are.