An AI lab out of China has ignited panic throughout Silicon Valley after releasing AI models that can outperform America's best despite being built more cheaply and with less-powerful chips. DeepSeek unveiled a free, open-source large language model in late December that it says took only two months and less than $6 million to build. CNBC's Deirdre Bosa interviews Perplexity CEO Aravind Srinivas and explains why DeepSeek has raised alarms about whether America's global lead in AI is shrinking.
An incredible outcome would be if the US stock market bubble popped because Chinese-developed, open-source AI that runs locally on your phone turned out to be about as good as Silicon Valley’s stuff.
I think the bubble might not pop so easily. Even if Microsoft is set back dramatically by this, investors have nowhere else to go. The whole industry is in turmoil, but since there’s nothing else to invest in, stocks stay high.
At least that’s how I explain the ludicrously high stock prices we’ve seen in recent years.
LLMs that run locally are already a thing, and I’d wager that one of those smaller models can do 99% of what anyone would want.
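To make that concrete, here’s a minimal sketch of talking to a model running entirely on your own machine. It assumes the official `ollama` Python client (`pip install ollama`) with the Ollama server running, and a small model already downloaded via `ollama pull llama3.2` (the model name is just an example; any locally pulled model works):

```python
# Chat with a locally running model through the ollama client.
# Assumes `ollama pull llama3.2` has already been run; once the weights
# are on disk, nothing here needs an internet connection or an API key.
import ollama

response = ollama.chat(
    model="llama3.2",  # example small model; swap in any model you've pulled
    messages=[{"role": "user", "content": "Summarize what an LLM is in one sentence."}],
)
print(response["message"]["content"])
```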
What does it mean for an LLM to run locally? Where’s all the data with the ‘answers’ stored?
Imagine an idea as a point on a graph: ideas that are similar sit close to each other, and ideas that are very different sit far apart. An LLM is a predictive model for this graph, just like a line of best fit is a predictive model for a simple linear graph. So in a way, the model is predicting the information; it’s not stored directly or searched for.
A locally running LLM is just one of these models shrunk down and executing on your computer.
Edit: removed a point about embeddings that wasn’t fully accurate
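As a toy illustration of that “points on a graph” picture, here are some hand-picked 2-D coordinates (real models learn hundreds or thousands of dimensions from data, not three made-up points), scored with cosine similarity so that similar ideas come out near 1 and unrelated ones near 0 or below:

```python
import math

# Hand-made, purely illustrative coordinates: "cat" and "dog" are placed
# close together (similar ideas), "car" far away (a very different idea).
points = {
    "cat": (0.9, 0.8),
    "dog": (0.8, 0.9),
    "car": (0.1, -0.7),
}

def cosine_similarity(a, b):
    # 1.0 = pointing the same way (similar); near 0 or negative = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(points["cat"], points["dog"]))  # ~0.99: similar ideas
print(cosine_similarity(points["cat"], points["car"]))  # ~-0.55: very different
```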
Thanks, that helps me understand things better. I’m guessing you need all the data initially to set up the graph (the model), and after that you only need the model?
Yep, exactly. Every LLM has a ‘cut-off date’, which is the last date covered by the data used to train the model.
How big are the files for the finished model, do you know?
That’s a great question! The models come in different sizes: one ‘foundational’ model is trained, and that is used to train smaller models. US companies generally do not release the foundational models (I think), but Meta, Microsoft, DeepSeek, and a few others release smaller ones, available on ollama.com. A rule of thumb is that 1 billion parameters takes about 1 gigabyte (at 8-bit precision). The foundational models have hundreds of billions, if not trillions, of parameters, but you can get a good model at 7-8 billion parameters, small enough to run on a gaming GPU.
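As a sanity check on that rule of thumb: 1 GB per billion parameters corresponds to storing each weight in 1 byte (8-bit quantization); at 16-bit precision the same model is roughly twice the size. A quick back-of-the-envelope in Python (671B is DeepSeek-V3’s reported total parameter count; this counts only the weights file, not runtime memory overhead):

```python
# Rough model file size: parameter count times bytes per weight.
# 1e9 params * N bytes each = N * 1e9 bytes, i.e. ~N GB per billion params.
def model_size_gb(params_billions: float, bytes_per_weight: float) -> float:
    return params_billions * bytes_per_weight

for params in (7, 70, 671):  # a 7B model, a 70B model, DeepSeek-V3 (reported 671B)
    print(f"{params}B params: ~{model_size_gb(params, 1):.0f} GB at 8-bit, "
          f"~{model_size_gb(params, 2):.0f} GB at 16-bit")
```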
Thanks!
In the weights