An AI lab out of China has ignited panic throughout Silicon Valley after releasing AI models that can outperform America's best despite being built more cheaply and with less powerful chips. DeepSeek unveiled a free, open-source large language model in late December that it says took only two months and less than $6 million to build. CNBC's Deirdre Bosa interviews Perplexity CEO Aravind Srinivas and explains why DeepSeek has raised alarms about whether America's global lead in AI is shrinking.
Yep, exactly. Every LLM has a ‘cutoff date’, which is the last day the data used to train the model was updated.
How big are the files for the finished model, do you know?
That’s a great question! The models come in different sizes: one big ‘foundational’ model is trained, and that is then used to train smaller models. US companies generally don’t release their foundational models (I think), but Meta, Microsoft, DeepSeek, and a few others publish smaller ones you can download from ollama.com. A rule of thumb is that 1 billion parameters takes about 1 gigabyte. The foundational models are hundreds of billions, if not trillions, of parameters, but you can get a good model at 7-8 billion parameters, small enough to run on a gaming GPU.
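If it helps, here’s a rough back-of-envelope sketch of where that “1 billion parameters ≈ 1 gigabyte” rule of thumb comes from. It assumes roughly one byte (8 bits) per weight and ignores tokenizer/metadata overhead, so treat the numbers as estimates, not exact download sizes:

    # Rough estimate of a model's on-disk size from its parameter count.
    # Assumes the weights dominate the file; quantizing below 8 bits shrinks it further.

    def model_size_gb(params_billion: float, bits_per_weight: int = 8) -> float:
        """Approximate file size in gigabytes for a given quantization level."""
        bytes_per_weight = bits_per_weight / 8
        return params_billion * 1e9 * bytes_per_weight / 1e9

    print(model_size_gb(7, bits_per_weight=8))   # ~7 GB: the "1B params ~ 1 GB" rule
    print(model_size_gb(7, bits_per_weight=4))   # ~3.5 GB: 4-bit quantization halves it
    print(model_size_gb(500, bits_per_weight=8)) # hundreds of GB for a foundational-scale model

That’s why a quantized 7-8B model fits comfortably on a single gaming GPU, while the foundational models need a whole rack of them.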
Thanks!