• poo_22@lemmygrad.ml · 1 day ago

    According to this page, running the full model takes about 1.4TB of memory, or about 16 A100 GPUs. That's still prohibitively expensive for an individual enthusiast, but yes, you can run a simplified (distilled) model locally with ollama. It will still probably need a GPU with a lot of memory.
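
    For a sense of where the ~1.4TB figure comes from, here's a rough back-of-envelope sketch (my assumption: the full DeepSeek-R1 model's ~671B parameters stored at fp16, 2 bytes each; a real deployment also needs memory for KV cache and activations on top of the weights):

    ```python
    # Rough memory math for the full model; weights only, so treat it as a lower bound.
    params = 671e9           # approximate parameter count of the full DeepSeek-R1 model
    bytes_per_param = 2      # fp16 weights
    weights_tb = params * bytes_per_param / 1e12
    print(f"weights alone: ~{weights_tb:.2f} TB")             # ~1.34 TB

    a100_gb = 80             # memory on a single A100 80GB card
    print(f"16 x A100 80GB = {16 * a100_gb / 1000:.2f} TB")   # 1.28 TB, roughly in line
    ```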

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 1 day ago

      I got deepseek-r1:14b-qwen-distill-fp16 running locally with 32GB of RAM and a GPU, but yeah, you do need a fairly beefy machine to run even medium-sized models.
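
      If anyone wants to drive the same setup from Python, here's a minimal sketch using the ollama Python client (assumes `pip install ollama`, an Ollama server running locally, and that the model tag below has already been pulled):

      ```python
      # Minimal chat call against a locally served model via the ollama Python client.
      import ollama

      # The 14B Qwen-distilled DeepSeek-R1 variant mentioned above; at fp16 the weights
      # alone are roughly 14e9 * 2 bytes ≈ 28GB, hence the ~32GB RAM + GPU requirement.
      MODEL = "deepseek-r1:14b-qwen-distill-fp16"

      response = ollama.chat(
          model=MODEL,
          messages=[{"role": "user", "content": "Summarize mixture-of-experts in one paragraph."}],
      )
      print(response["message"]["content"])
      ```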