Neural networks are a misnomer. They have very little if anything to do with actual neurons.
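To give a sense of how thin the analogy is: an artificial “neuron” is just a weighted sum pushed through a simple nonlinearity. A minimal sketch (the function name and numbers here are purely illustrative):

```python
def artificial_neuron(inputs, weights, bias):
    # A weighted sum pushed through a fixed nonlinearity (ReLU here).
    # No spikes, no neurotransmitters, no dendritic computation.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, activation)

# Example with arbitrary made-up numbers:
print(artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.2], bias=0.3))  # 0.35
```

That’s the entire extent of the biological inspiration.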
You can try the “deepthink” option or add the search option next time. Those options will eat up a lot of the context window, but you’ll get much better results for technical questions. It’s still all just LLMs though, so caution is warranted.
That might change now that companies are creating “reasoning” models like DeepSeek R1. They aren’t really all that different architecturally; they just produce longer outputs, which requires more compute.
R1 has an identical architecture to v3 though, right? They just used reinforcement learning to fine-tune the base model. There are no extra layers, just a few additional steps in its production.
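For intuition, the whole recipe fits in a REINFORCE-style loop: sample outputs, score them with a verifiable reward, and nudge the existing parameters toward whatever scored well. Here’s a toy sketch of that idea, not DeepSeek’s actual method; the 4-way categorical stands in for the model and the reward function is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy": a categorical distribution over 4 possible completions,
# standing in for the base model's output head.
logits = np.zeros(4)

def reward(completion):
    # Stand-in for a verifiable reward, e.g. "did the math check out?"
    return 1.0 if completion == 2 else 0.0

learning_rate = 0.5
for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    sample = rng.choice(4, p=probs)
    # REINFORCE update: raise the log-probability of the sampled completion
    # in proportion to its reward. Only existing parameters move.
    grad = -probs
    grad[sample] += 1.0
    logits += learning_rate * reward(sample) * grad

print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))  # mass shifts to completion 2
```

No new layers appear anywhere; the parameters that already existed just get pushed around.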
The potential here is that these kinds of systems will be able to do tasks that fundamentally could not be automated previously.
Sure, but the technology has honestly been more evolutionary than revolutionary as far as I’m concerned. The biggest change was the amount of compute and data used to train these models. That only really happened because capital seemingly had nowhere else to go, not because LLMs are uniquely promising.
Making this work would effectively be a new industrial revolution.
Sure, but how exactly are the companies investing in “AI” going to make it work? To me it just seems like they’re dumping resources into a dead end because they have no other path forward. Tech companies have been promising a new Industrial Revolution since their inception. However, even their “AI” products have yet to make a meaningful impact on worker productivity. It’s worth interrogating why that is.
As I stated before, I think they all fundamentally misunderstand how human cognition works, perhaps willfully. That’s why I’m confident tech companies as they exist will not deliver on the promise of “AGI”, a lovely marketing term created to make up for the fact that their “AIs” are not very intelligent.
I appreciate the link but I stand by my point. As far as I’m aware, “reasoning” models like R1 and o3 are not architecturally very different from DeepSeek v3 or GPT-4, which have already integrated some of the features mentioned in that paper.
Also, as an aside, I really despise how compsci researchers and the tech sector borrow language from neuroscience. They take concepts they don’t fully understand and then use them in obscenely reductive ways. It ends up heavily obscuring how LLMs function and what their limitations are. Of course, they can’t speak plainly about these things, otherwise the financial house of cards built up around LLMs would collapse. As such, I guess we’re just condemned to live in the fever dreams of tech entrepreneurs who, at their core, are used car salesmen with god complexes.
Don’t get me wrong, LLMs and other kinds of deep generative models are useful in some contexts. It’s just that their utility is not at all commensurate with the absurd amount of resources expended to create them.
I suspect “reasoning” models are just taking advantage of the law of averages. You could get much better results from prior LLMs if you provided plenty of context in your prompt. In doing so, you would constrain the range of possible outputs, which helps to reduce “hallucinations”. You could even use LLMs to produce that context for you. To me it seems like reasoning models are just trained to do all of that in one go.
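A rough sketch of what I mean, with a hypothetical `generate` function standing in for any LLM call (not a real client library); “reasoning” models arguably just fold these two stages into a single sampling pass:

```python
def generate(prompt: str) -> str:
    # Placeholder for a call to any LLM API; echoes for demonstration.
    return f"<model output for: {prompt[:40]}...>"

def answer_with_self_context(question: str) -> str:
    # Stage 1: have the model spell out the relevant facts and steps.
    context = generate(
        f"List the facts and intermediate steps needed to answer:\n{question}"
    )
    # Stage 2: condition the final answer on that generated context,
    # which narrows the range of plausible completions.
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(answer_with_self_context("Why does the moon show phases?"))
```

Train the model to emit stage 1 on its own before answering and you’ve got something that looks a lot like a “reasoning” model.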
It’s largely cultural. China is a place where filial piety is important, so anything that can be construed as disrespect for your forebears is looked down upon.
In English, Mao’s book is called the “Little Red Book”, but that’s not the case in China. The direct translation of its Chinese name is “red treasure book”. As such, the name of the app only seems like a Mao reference to people who translated Xiaohongshu into English.
I don’t think it’s fair to call DeepSeek open source. They’ve released the weights of their model, but that’s all. The code they used to train it and the training data itself are decidedly not open source. They aren’t the only company to release their weights either; Meta’s Llama was probably the best open-weight model you could use prior to DS v3. As I see it, this is just a consequence of competition in a market where capital has nowhere else to go. Meta and DeepSeek likely want to prevent OpenAI from becoming profitable.
As an aside, although I personally believe in some aspects of China’s reform and opening up, it’s not without its faults. Tech companies in China often make the same absurd claims and engage in behavior that’s as deluded as that of companies in Silicon Valley.
I think this is our core disagreement. I agree that we have not pushed LLMs to their absolute limit. Mixture-of-Experts models, optimized training, and “reasoning” models are all incremental improvements over the previous generation of LLMs. That said, I strongly believe that the architecture of LLMs is fundamentally incapable of intelligent behavior. They’re more like a photograph of intelligence than the real thing.
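To be concrete about “incremental”: the MoE idea itself is just a few lines of routing on top of ordinary matrix multiplies. A toy sketch, with arbitrary dimensions and initialization, not any production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal Mixture-of-Experts layer: route each token to its top-k experts
# and combine their outputs, so only a fraction of the parameters runs per token.
num_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(num_experts)]
router = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_layer(x):
    scores = x @ router                               # (tokens, num_experts)
    chosen = np.argsort(scores, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gate = np.exp(scores[t, chosen[t]])
        gate /= gate.sum()                            # softmax over the chosen experts
        for g, e in zip(gate, chosen[t]):
            out[t] += g * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 16)
```

It saves compute per token, which is valuable engineering, but it doesn’t change what the underlying model fundamentally is.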
I agree wholeheartedly. However, you don’t need to dump an absurd amount of resources into training an LLM to test the viability of any of the incremental improvements that DeepSeek has made. You only do that if your goal is to compete with OpenAI and others for access to capital.
Yes, but that work largely goes unnoticed because it’s not at all close to providing us with a way to build intelligent machines. It’s work that can only really happen at academic or public research institutions because it’s not profitable at this stage. I would be much happier if the capital currently directed towards LLMs was redirected towards this type of work. Unfortunately, we’re forced to abide by the dictates of capitalism and so that won’t happen anytime soon.