• Beaver [he/him]@hexbear.net · 38 points · 3 days ago

    An incredible outcome would be if the US stock market bubble pops because Chinese-developed open-source AI that can run locally on your phone ends up being about as good as Silicon Valley’s stuff.

    • gandalf_der_12te@discuss.tchncs.de · 19 points · 3 days ago

      I think the bubble might not pop so easily. Even if Microsoft is set back dramatically by this, investors have nowhere else to go. The whole industry is in turmoil, and since there’s nothing else to invest in, stocks stay high.

      At least that’s how I explain the ludicrously high stock prices we’ve been seeing in recent years.

      • redtea@lemmygrad.ml · 6 points · 3 days ago

        What does it mean for an LLM to run locally? Where’s all the data with the ‘answers’ stored?

        • KnilAdlez [none/use name]@hexbear.net · 5 points · edited · 2 days ago

          Imagine that an idea is a point on a graph: ideas that are similar have points close to each other, and ideas that are very different are far apart. An LLM is a predictive model for this graph, just like a line of best fit is a predictive model for a simple linear graph. So in a way the model is predicting the information; it isn’t stored directly or searched for.

          A locally running LLM is just one of these models shrunk down and executing on your computer.

          Edit: removed a point about embeddings that wasn’t fully accurate
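
          The “points on a graph” picture can be sketched with toy numbers. This is purely illustrative — the 2-D coordinates below are made up, and real models use learned vectors with hundreds or thousands of dimensions:

```python
import math

# Toy 2-D "idea space": each idea is a point, and similar ideas
# sit closer together. These coordinates are invented for the
# illustration -- they are not real model vectors.
ideas = {
    "cat": (0.9, 0.8),
    "dog": (0.8, 0.9),
    "car": (0.1, 0.2),
}

def distance(a, b):
    """Euclidean distance between two idea-points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "cat" lands much closer to "dog" than to "car" in this space.
print(distance(ideas["cat"], ideas["dog"]))  # small
print(distance(ideas["cat"], ideas["car"]))  # large
```

          The model never stores a lookup table of answers; it learns a function that places things in this space and predicts what point should come next.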

            • redtea@lemmygrad.ml · 3 points · 2 days ago

            Thanks. That helps me understand things better. I’m guessing you need all the data initially to set up the graph (model). Then you only need that?

                • KnilAdlez [none/use name]@hexbear.net · 2 points · edited · 2 days ago

                  That’s a great question! The models come in different sizes: one ‘foundational’ model is trained, and that is used to train smaller models. US companies generally do not release their foundational models (I think), but Meta, Microsoft, DeepSeek, and a few others release smaller ones, available on ollama.com. A rule of thumb is that 1 billion parameters take about 1 gigabyte. The foundational models are hundreds of billions if not trillions of parameters, but you can get a good model at 7–8 billion parameters, small enough to run on a gaming GPU.
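
                  That rule of thumb is easy to sanity-check with arithmetic. The sketch below assumes roughly 1 byte per parameter (about what 8-bit quantized weights take); full-precision weights would need several times more:

```python
# Back-of-envelope model memory estimate, assuming ~1 byte per
# parameter (roughly 8-bit quantized weights). Higher-precision
# formats (e.g. 16-bit floats) would double this or more.
def model_size_gb(billions_of_params, bytes_per_param=1):
    """Approximate weight storage in gigabytes."""
    return billions_of_params * bytes_per_param

print(model_size_gb(8))    # an 8B model: ~8 GB, within reach of a gaming GPU
print(model_size_gb(700))  # a ~700B foundational model: far beyond consumer hardware
```

                  So the “1 billion parameters ≈ 1 gigabyte” heuristic is just that one-byte-per-parameter multiplication.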