GGUF quants are already up and llama.cpp was updated today to support it.
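For anyone who wants to try the quants right away, here's a minimal sketch using llama-cpp-python (the Python binding around llama.cpp). The GGUF filename and the generation settings are placeholders, not the actual release artifacts:

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The model path below is a
# placeholder -- point it at whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window; larger values cost more VRAM for the KV cache
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```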

  • brucethemoose@lemmy.world · 16 hours ago

    I tested these out and found they are really bad at longer context… at least in settings that can sanely fit on most GPUs.

    It seems the Gemma family is still mostly for short-context work.

  • Picasso@sh.itjust.works · 3 days ago

    I'm especially interested in its advanced OCR capabilities. I'll be testing this one out in LM Studio.

  • SmokeyDope@lemmy.world · 5 days ago

    I'm happy for the Gemma enjoyers who get something out of it. I hear the real-world domain knowledge is good, though I've never tried the Gemma models myself. Apparently it's heavily over-censored, and anything Google puts out gives me an icky feeling by association. Anyone remember when they still had "Don't Be Evil" as a motto? Good times.