ByteDance has officially launched its latest Doubao large model, 1.5 Pro (Doubao-1.5-pro), which demonstrates strong overall capability across a range of fields, surpassing the well-known GPT-4o and Claude 3.5 Sonnet. The release of this model marks an important step forward for ByteDance in the field of artificial intelligence. Doubao 1.5 Pro adopts a novel sparse MoE (Mixture of Experts) architecture, activating only a small subset of its parameters during pre-training. This design's innovation...
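In a sparse MoE layer, a gating network routes each token to only a few experts, so the number of activated parameters is far smaller than the total parameter count. A minimal NumPy sketch of top-k routing follows; the expert count, k, and dimensions are illustrative assumptions, not Doubao's actual configuration:

```python
# Toy sketch of sparse MoE routing with top-k gating (NumPy).
# All sizes are illustrative, not Doubao's actual config.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))  # gating network

def moe_forward(x):
    """Route a single token vector to its top-k experts.

    Only k of the n_experts matrices are applied, which is why the
    number of *activated* parameters is much smaller than the total.
    """
    logits = x @ router
    top = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                     # softmax over selected experts
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (8,)
```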
Not in any meaningful way. A statistical model cannot address the frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren't being honest about causality here. We are simply fooling ourselves, willfully misinterpreting statistical correlation as causality.
The frame problem is addressed by creating a model of the environment the system interacts with. That model provides the context for reasoning and for deciding what information is relevant and what isn't. Embodiment is one obvious way to build such a model: the robot, or even a virtual agent, interacts with the environment and encodes its rules within its own topology.
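To make that concrete, here is a toy sketch of a virtual agent building an internal model of its environment purely by interacting with it. The gridworld, the action set, and the "relevance" test are all made-up assumptions, not any real system:

```python
# Toy sketch: an agent learns environment rules by interacting with it.
import random

GRID = 5
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def step(state, action):
    """Ground-truth environment dynamics (unknown to the agent)."""
    dx, dy = ACTIONS[action]
    x, y = state
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

model = {}          # learned transition model: (state, action) -> next state
state = (2, 2)
for _ in range(500):                   # interact and record what happens
    action = random.choice(list(ACTIONS))
    nxt = step(state, action)
    model[(state, action)] = nxt       # the "rule" gets encoded here
    state = nxt

# The learned model now supplies context: the agent can ask which actions
# are relevant (i.e. actually change its state) without enumerating
# everything about the world.
s = (0, 0)
relevant = [a for a in ACTIONS if model.get((s, a), s) != s]
print(relevant)   # e.g. ['N', 'E'] -- moves into a wall are irrelevant
```

The transition table is a crude stand-in for encoding the environment's rules in the agent's structure, but it shows how an internal model gives the agent a basis for deciding what is relevant.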
Let me repeat myself for clarity. We do not have a valid general theory of mind.
This is not necessary for making an AI that can reason about the environment, make decisions, and explain itself. Furthermore, not having a theory of mind does not even prevent us from creating minds. One example of this could be using evolutionary algorithms to evolve agents that have similar reasoning capabilities to our own. Another would be to copy the structure of animal brains to a high degree of fidelity.
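The evolutionary route is easy to sketch. In the toy example below, an "agent" is just a parameter vector and the fitness function is a made-up stand-in for scoring behaviour in an environment; selection and mutation do all the design work, with no theory of mind anywhere:

```python
# Minimal evolutionary algorithm: evolve agents toward high fitness.
import random

TARGET = [0.5, -1.0, 2.0]          # hypothetical "good behaviour" params

def fitness(agent):
    # Higher is better; here: closeness to TARGET. A real task would
    # score the agent's behaviour in an environment instead.
    return -sum((a - t) ** 2 for a, t in zip(agent, TARGET))

def mutate(agent, sigma=0.1):
    return [a + random.gauss(0, sigma) for a in agent]

population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                     # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]  # offspring via mutation

print(max(population, key=fitness))  # should approach TARGET
```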
Because you are a human doing it, you are not a machine that has been programmed.
You are programmed in the sense that the structure of your brain is a product of the information encoded in your DNA, the same way a neural network is a product of the algorithms used to build it. However, the learning that both your brain and the network do is encoded in weights and connections through reinforcement. That part is not programmed in either case.
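A minimal sketch of that distinction, using a two-armed bandit with made-up reward probabilities: the update rule is programmed, but the resulting weights are learned from experience rather than hand-coded:

```python
# The update rule below is "programmed"; the weights are learned.
import random

REWARD_P = [0.3, 0.8]       # hidden reward probability of each action
weights = [0.0, 0.0]        # learned action values -- not hand-coded
alpha = 0.1                 # learning rate

for _ in range(1000):
    # epsilon-greedy choice between the two actions
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = weights.index(max(weights))
    reward = 1.0 if random.random() < REWARD_P[a] else 0.0
    weights[a] += alpha * (reward - weights[a])   # reinforcement update

print(weights)   # converges near [0.3, 0.8]: the knowledge lives in the weights
```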
This is a really Western-brained idea of how our biology works, because as complex systems we operate over inscrutable ranges.
🙄
Strength. We cannot build a robot that gets stronger over time. Humans can do this, but we would never build a robot to do it; we see it as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.
You're showing an utter lack of imagination here. Of course we could build a robot that could get stronger. There's nothing uniquely biological about this example.
Pain. We would not build a robot that experiences pain in the same way humans do. You can classify pain inputs, but why would you build a machine that can "understand" pain, where pain interrupts its processes? This is another unique aspect of human biology that allows us to reason about the physical world.
Maybe try thinking about why organisms evolved pain in the first place and what advantage it provides.