• brucethemoose@lemmy.world · 1 month ago

You can use larger “open” models through free or dirt-cheap APIs, though.
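
    For example, a minimal sketch of what that looks like against any OpenAI-compatible endpoint (OpenRouter and the model slug here are just illustrations, and the API key is a placeholder):

    ```python
    # Sketch: calling a larger open model through a cheap hosted API
    # instead of running it locally. The provider endpoint and model
    # slug are illustrative; any OpenAI-compatible endpoint works the
    # same way.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # example provider endpoint
        api_key="YOUR_API_KEY",                   # placeholder
    )

    response = client.chat.completions.create(
        model="qwen/qwen2.5-14b-instruct",  # illustrative model slug
        messages=[{"role": "user", "content": "Summarize RAII in one sentence."}],
    )
    print(response.choices[0].message.content)
    ```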

TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
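
    For context, a rough sketch of what running a 14B model locally involves with Hugging Face transformers (the model ID is illustrative, and the VRAM figure is a ballpark: a 14B model wants roughly 10 GB+ even with 4-bit quantization, which is why the GPU matters):

    ```python
    # Sketch: local inference with transformers. Needs a high-VRAM GPU;
    # the model ID below is illustrative.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-14B-Instruct"  # illustrative model ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # spread weights across available GPU(s)
        torch_dtype="auto",  # use the checkpoint's native dtype
    )

    # Build a chat-formatted prompt and generate a reply.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Summarize RAII in one sentence."}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
    ```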