• multitotal@lemmygrad.ml · 23 hours ago

    A human-style intelligence on an artificial substrate unlocks the potential for virtual worlds unconstrained by physical laws, operating at speeds beyond human comprehension

    “Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else; we only know other humans have consciousness because we ourselves have it.

    If that’s the likely progression of technological civilizations

    Technological progress, much like evolution, is not goal-oriented. Everyone assumes it necessarily involves better gadgets, but progress can also be in the way we use and consume technology and the role it plays in our lives.

    “AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information. We’re nowhere near “general intelligence” or “human-like intelligence”. AI is useful for data analysis, fetching/storing information, comparison, etc. but it is not at the level of a baby or whatever they are saying. We simply cannot make human brains out of computers.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 16 hours ago

      “Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else; we only know other humans have consciousness because we ourselves have it.

      It’s true that intelligence and consciousness aren’t the same thing. However, I disagree that we can’t create it in something else without understanding it. Ultimately, consciousness arises from patterns being expressed within the firings of neurons in the brain. It’s a byproduct of the physical events occurring within our neural architecture. Therefore, if we create a neural network that mimics our brain and exhibits the same types of patterns, then it stands to reason that it would also exhibit consciousness.

      I think there are several paths available here. One is to simulate the brain in a virtual environment, which would be an extension of the work being done by the OpenWorm project. You just build a really detailed physical simulation, which is basically a question of having sufficient computing power.
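
      To make that concrete, here’s a toy sketch of the kind of numerics involved (illustrative only, with made-up neuron counts and constants; nothing like OpenWorm’s actual code): a handful of leaky integrate-and-fire neurons stepped forward in time. Simulating the substrate ultimately reduces to this kind of arithmetic, just at vastly larger scale and biophysical detail.

      ```python
      # Toy leaky integrate-and-fire network, illustrative only: constants and
      # connectivity are made up. Real projects like OpenWorm model far more
      # biophysical detail, but the principle is the same: step the physics forward.
      import numpy as np

      rng = np.random.default_rng(0)

      n = 100
      dt, tau = 1.0, 20.0                        # ms per step, membrane time constant
      v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0

      # sparse random connectivity with small positive weights
      weights = (rng.random((n, n)) < 0.1) * rng.normal(0.5, 0.1, (n, n))
      v = np.full(n, v_rest)

      for step in range(1000):
          spiked = v >= v_thresh                 # neurons that fired on the last step
          external = rng.normal(1.5, 0.5, n)     # noisy external drive
          recurrent = weights @ spiked           # input arriving over the connections
          v += dt / tau * (v_rest - v) + external + recurrent
          v[spiked] = v_reset                    # reset the neurons that fired
      ```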

      Another approach is to try and understand the algorithms within the brain, to learn how these patterns form and how the brain is structured, then to implement these algorithms. This is the approach that Jeff Hawkins has been pursuing and he wrote a good book on the subject. I’m personally a fan of this approach because it posits a theory of how and why different brain regions work, then compares the functioning of the artificial implementation with its biological analogue. If both exhibit similar behaviors then we can say they both implement the same algorithm.

      “AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information.

      The current large language model approach is indeed a fad, but that’s not the totality of AI research that’s currently happening. It’s just getting a lot of attention because it looks superficially impressive.

      We simply cannot make human brains out of computers.

      There is zero basis for this assertion. The whole point here is that computing power is not developing in a linear fashion. We don’t know what will be possible in a decade, much less in a century. However, given the rate of progress over the past half century, it’s pretty clear that huge leaps could be possible.

      It’s also worth noting that we don’t need an equivalent of the entire human brain. Much of the brain deals with things like regulating the body and maintaining homeostasis. Furthermore, it turns out that even a small portion of the brain can still exhibit the properties we care about: https://www.rifters.com/crawl/?p=6116

      At the end of the day, there is absolutely nothing magical about the human brain. It’s a biological computer that evolved through natural selection. There’s no reason to think that what it’s doing cannot be reverse engineered and implemented on a different substrate.

      The key point I’m making is that while timelines of centuries or even millennia might seem long from a human standpoint, they are blinks of an eye from a cosmic point of view.

      • freagle@lemmygrad.ml · 9 hours ago

        The idea that consciousness emerges as a functional overlay of the physical neurons is not settled science, let alone settled philosophy. It is just as likely, or perhaps more likely, that there are physical phenomena that we have yet to discover that explain consciousness in terms of a field such that emergence is unnecessary.

        Further, the artificial substrates that we are designing are deeply inferior to biologics and it is far more likely that we will create biological substrates to replace our contemporary silicon substrates. It is generally understood (outside of European psychology) that it is preferable to participate in circular systems rather than to attempt to transcend them. Biological technology will take advantage of abundant resources and be infinitely recyclable, as opposed to the current mineral-based technologies that require mass destruction, are significantly non-recyclable, and have no world-scale ecosystems available to integrate with.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 9 hours ago

          I strongly disagree with that. Our brains construct models of the world that they are themselves a part of. The recursive nature of the mind creating a model of itself in order to reason about itself is very likely what we perceive as consciousness. These constructs form the basis for the patterns of thought that underpin our conscious experience. The neurons, with their inherent complexity, serve merely as a substrate upon which these patterns are expressed.

          The same concept is mirrored in the realm of computing. The physical complexity of transistors within a silicon chip plays no direct role in the functioning of programs that it executes. Consider virtual machines: these software constructs faithfully emulate the operation of a computer system, down to the instruction set and operating system, without replicating the internal details of the underlying silicon substrate. The heart of computation resides not in the physical properties of transistors but in the algorithms they compute.
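
          As a toy illustration of that substrate independence (a sketch of my own, not any particular real VM): the little stack-machine program below yields the same answer on any hardware that implements the same instruction semantics, because its meaning lives entirely in those abstract rules.

          ```python
          # Minimal stack-machine interpreter: the program's meaning is fixed by these
          # abstract instruction semantics, not by whatever physically executes them.
          def run(program):
              stack = []
              for op, *args in program:
                  if op == "push":
                      stack.append(args[0])
                  elif op == "add":
                      b, a = stack.pop(), stack.pop()
                      stack.append(a + b)
                  elif op == "mul":
                      b, a = stack.pop(), stack.pop()
                      stack.append(a * b)
              return stack.pop()

          # (2 + 3) * 4 == 20 on vacuum tubes, silicon, or a simulation of either
          program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
          print(run(program))  # 20
          ```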

          This notion is further underscored by the fact that the same computational architecture can be realized on vastly different physical foundations. From vacuum tubes and silicon transistors to optical gates and memristors, the underlying technology can vary dramatically while still supporting identical computing environments. Consequently, we are able to infer that the abstract nature of digital computation — the manipulation of discrete symbols according to formal rules — is not inherently tied to any particular physical medium.

          Likewise, our consciousness isn’t merely a static property of our brains’ physical components; it’s a process arising from the dynamic patterns formed by the flow of electrochemical impulses across synapses. These patterns, emergent properties of the system as a whole, are what gives rise to our thoughts, feelings, and experiences.

          The physical matter of the brain serves as a medium that facilitates the transmission of information. While essential for the process, the brain’s components, such as neurons and synapses, do not themselves contain the essence of cognition. Like transistors in a computer, neurons are merely conduits for information, creating the patterns and rhythms that constitute our mental lives.

          These processes, much like the laws of physics or mathematics, can be described using a formal set of rules. Therefore, the essence of our minds lies in the algorithms that govern their operation as opposed to the biological machinery of the brain. Several lines of evidence support this proposition.

          The brain’s remarkable plasticity, its ability to reorganize in response to experience, indicates that various regions can adapt to perform new types of computation. Numerous studies have shown how individuals who have lost specific brain regions are able to regain absent functions through neural rewiring, demonstrating that cognitive processes can be reassigned to different parts of the brain.

          Artificial neural networks, inspired by biological neurons, further bolster this argument. Despite being based on algorithms distinct from those in our brains, ANNs have demonstrated remarkable capabilities in mimicking cognitive functions such as image recognition, language processing, and even creative endeavors. Their success implies that these abilities emerge from computational processes independent of their base substrate.
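
          As a minimal sketch of that point (a toy network with made-up sizes and learning rate, not any production system): a few lines of arithmetic can learn XOR purely through weight adjustment, with no dependence on what the “neurons” are physically made of.

          ```python
          # Toy multilayer perceptron learning XOR: a simple cognitive-style function
          # acquired purely through weight adjustment, independent of substrate.
          import numpy as np

          rng = np.random.default_rng(1)
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
          y = np.array([[0], [1], [1], [0]], dtype=float)

          W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
          W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
          sigmoid = lambda z: 1 / (1 + np.exp(-z))

          for _ in range(10000):
              h = sigmoid(X @ W1 + b1)             # hidden layer activations
              out = sigmoid(h @ W2 + b2)           # network output
              # backpropagate the squared error and nudge the weights
              d_out = (out - y) * out * (1 - out)
              d_h = (d_out @ W2.T) * h * (1 - h)
              W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
              W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

          print(out.round(2).ravel())  # should land close to [0, 1, 1, 0]
          ```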

          Approaching cognition from a computational perspective brings us to the concept of computational universality, closely related to the Curry-Howard Correspondence, which establishes a deep isomorphism between mathematical proofs and computer programs. It suggests that any system capable of performing a certain set of basic logical operations can simulate any other computational process. Therefore, the specific biology of the brain isn’t essential for cognition; what truly matters is the system’s ability to express computational patterns, regardless of its underlying mechanics.
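
          To make the universality point concrete, here is a small sketch (a toy example of my own): given nothing but a NAND operation, every other boolean function can be composed, so any medium that can realize NAND, whether transistors, neurons, or relays, can in principle host the same computations.

          ```python
          # Functional completeness of NAND: every boolean function can be composed
          # from it alone, no matter what physically implements the NAND itself.
          def nand(a, b):
              return not (a and b)

          def NOT(a):    return nand(a, a)
          def AND(a, b): return NOT(nand(a, b))
          def OR(a, b):  return nand(NOT(a), NOT(b))
          def XOR(a, b): return AND(OR(a, b), nand(a, b))

          # exhaustive truth-table check
          for a in (False, True):
              for b in (False, True):
                  assert AND(a, b) == (a and b)
                  assert OR(a, b) == (a or b)
                  assert XOR(a, b) == (a != b)
          print("all gates recovered from NAND alone")
          ```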

          Further, the artificial substrates that we are designing are deeply inferior to biologics and it is far more likely that we will create biological substrates to replace our contemporary silicon substrates.

          Biological computers are better at certain things and worse at others. I wouldn’t call the substrates we’re designing inferior, they just optimize for different kinds of computation. Biological systems are well adapted to our environment. However, they’re a dead end for expanding our civilization into space.

          • freagle@lemmygrad.ml · edited · 5 hours ago

            The recursive nature of the mind creating a model of itself in order to reason about itself is very likely what we perceive as consciousness.

            This is such a massive leap, though. Don’t you see that? Why is it very likely? What affects the probability? What aspects of recursion lend themselves to consciousness? Where have we seen analogs elsewhere that provide evidence for your probabilistic claim? What aspects of the nature of models lend themselves to consciousness? Same questions.

            These constructs form the basis for the patterns of thought that underpin our conscious experience

            Again, a significant ontological leap. As Hume would say, at best you have constant conjunction. There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.

            The same concept is mirrored in the realm of computing. The physical complexity of transistors within a silicon chip plays no direct role in the functioning of programs that it executes.

            This is an entirely inappropriate analogy. The physical complexity of transistors is physically connected, contiguously, with voltage differentials. The functioning of a program is entirely expressed in the physical world through voltage differentials. The very idea of a program or the execution thereof is a metaphor we use to reason about our tools but does not bear on the reality of the physics. Voltage differentials define everything about contemporary silicon-based binary microcomputers.

            the underlying technology can vary dramatically while still supporting identical computing environments

            Only if we limit ourselves severely. Underlying technology varying greatly has a severe impact on what sorts of I/O operations are possible. If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist.

            it’s a process arising from the dynamic patterns formed by the flow of electrochemical impulses across synapses. These patterns, emergent properties of the system as a whole, are what gives rise to our thoughts, feelings, and experiences.

            Again, incredibly strong claim that lacks sufficient evidence. We’ve been working on this problem for a very long time. The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.

            These processes, much like the laws of physics or mathematics, can be described using a formal set of rules. Therefore, the essence of our minds lies in the algorithms that govern their operation as opposed to the biological machinery of the brain. Several lines of evidence support this proposition.

            Again, I think this is entirely reductionist. Human experience offers plenty of evidence that runs counter to this theory, from mystical experiences to psychedelics to NDEs.

            In physics, when we have such evidence, we work to figure out what’s wrong with the model or with our instruments. But in pop psychology, AI, and Western philosophy of mind, we instead throw out all the evidence in favor of the dominant narrative of the academy.

            Scientific history shows us we’re wrong. Scientific consensus today shows us we’re wrong.

            Before we understood the EMF, we relied on all the data our senses could gather and as a Western scientific community, that was considered 100% of what was real. We discarded all the experiences of other people that we could not experience ourselves. Then, we discovered the EMF and realized that literally everything in our entire Western philosophy of science accounted for less than 0.000001% of reality.

            Today, we have a model of the universe based on everything Western science has achieved in the last 600 years or so. That model accounts for about 3% of reality in so far as we can tell. That is to say, if we take everything we know, and everything we know we don’t know, what we know we know makes up 3% of what we know, and what we know we don’t know makes up about 97% of what we know. And then we have to contend with the unknown unknown, which is immeasurable.

            To assume that this particularly pernicious area of inquiry has any solution that is more or less likely than any other solution is to ignore the history and present state of science.

            However, even more to the point, the bioware plays a massively important part that digital substrates simply cannot mimic, and that’s the fact that we’re not talking about voltage differentials in binary states representing boolean logic, but rather continuums mediated by a massively complex distributed chemical system comprising myriad biologics, some that aren’t even our own genetics. Our gut microbiota have a massive effect on our cognition. Each organ has major roles to play in our cognition. From a neurological perspective, we are only just scratching the surface of how things work at all, let alone the problem of consciousness.

            Therefore, the specific biology of the brain isn’t essential for cognition; what truly matters is the system’s ability to express computational patterns, regardless of its underlying mechanics.

            This is the clearest expression of circular reasoning in your writing. I encourage you to examine your position and your basis for it meticulously. In essence you have said:

            1. patterns of thought underpin our conscious experience
            2. neurons are merely conduits for information, creating the patterns and rhythms that constitute our mental lives
            3. any system capable of performing a certain set of basic logical operations can simulate any other computational process
            4. Therefore, patterns of thought underpin our conscious experience
            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 2 hours ago

              This is such a massive leap, though. Don’t you see that? Why is it very likely? What affects the probability? What aspects of recursion lend themselves to consciousness? Where have we seen analogs elsewhere that provide evidence for your probabilistic claim? What aspects of the nature of models lend themselves to consciousness? Same questions.

              I think there is a clear evolutionary reason why the mind would simulate itself, since its whole job is to simulate the environment and make predictions. The core purpose of the brain is to maintain homeostasis of the body. It aggregates inputs from the environment and models the state of the world based on them. There is no fundamental difference between inputs from the outside world and the ones it generates itself, hence the recursive step. Furthermore, being able to model minds is handy for interacting with other volitional agents, so there is a selection pressure for developing this capability.

              I think Hofstadter makes a pretty good case for the whole recursive loop being the source of consciousness in I Am a Strange Loop. At least, I found his arguments convincing and in line with my understanding of how this process might work.

              Again, a significant ontological leap. As Hume would say, at best you have constant conjunction. There is no argument that patterns of thought underpin our conscious experience that isn’t inherently circular.

              I disagree here. As I’ve stated above, I think patterns of thought arise in response to inputs into the neural network that originate both from within and without. The whole point of thinking is to create a simulation space where the mind can extrapolate future states and come up with actions that can bring the organism back into homeostasis. The brain receives chemical signals from the body indicating an imbalance, these are interpreted as hunger, anger, and so on, and then the brain formulates a plan of action to address these signals. Natural selection honed this process over millions of years.
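
              To sketch the kind of loop I have in mind (purely illustrative, with made-up numbers; not a model of any real neural circuit): a regulator that keeps an internal variable near a set point by simulating its own future states and feeding those simulated futures back into its next decision.

              ```python
              # Illustrative toy only: homeostatic control where the system's own
              # simulated futures become inputs to its next action.
              set_point = 37.0          # target internal state (think body temperature)
              state = 39.0              # current state, starting out of balance
              gain, leak = 0.3, 0.05    # how strongly actions and natural decay move the state

              def simulate(state, action, steps=5):
                  """The internal model: extrapolate future states under a candidate action."""
                  for _ in range(steps):
                      state += gain * action - leak * (state - set_point)
                  return state

              for t in range(20):
                  # choose the action whose simulated future lands closest to the set point
                  action = min([-1.0, -0.5, 0.0, 0.5, 1.0],
                               key=lambda a: abs(simulate(state, a) - set_point))
                  state += gain * action - leak * (state - set_point)   # the body responds
                  print(f"t={t:2d} state={state:5.2f} action={action:+.1f}")
              ```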

              This is an entirely inappropriate analogy. The physical complexity of transistors is physically connected, contiguously, with voltage differentials. The functioning of a program is entirely expressed in the physical world through voltage differentials. The very idea of a program or the execution thereof is a metaphor we use to reason about our tools but does not bear on the reality of the physics. Voltage differentials define everything about contemporary silicon-based binary microcomputers.

              And how is this fundamentally different from electrochemical signals being passed within the neural network of the brain? Voltage differentials are a direct counterpart to our own neural signalling.

              Only if we limit ourselves severely. Underlying technology varying greatly has a severe impact on what sorts of I/O operations are possible. If we reduce everything to the pure math of computation, then you are correct, but you are correct inside an artificial self-referential symbolic system (the mathematics of boolean logic), which is to say extremely and deleteriously reductionist.

              I don’t see what you mean here to be honest. The patterns occurring within the brain can be expressed in mathematical terms. There’s nothing reductionist here. The physical substrate these patterns are expressed in is not the important part.

              Again, incredibly strong claim that lacks sufficient evidence. We’ve been working on this problem for a very long time. The only way we get to your conclusion is through the circular reasoning of materialist reductionism - the assertion that only physical matter exists and therefore that consciousness is merely an emergent property of the physical matter that we have knowledge of. It begs the question.

              I don’t believe in magic or the supernatural, and once those are off the table one has to reject mind-body dualism. Physical reality is all there is; therefore the mental realm can only stem from physical interactions of matter and energy.

              Again, I think this is entirely reductionist. Human experience offers plenty of evidence that runs counter to this theory, from mystical experiences to psychedelics to NDEs.

              Again, I fundamentally reject mysticism. All these human experiences are perfectly well explained in terms of the brain simulating events that create an internal experience, and there’s zero basis for asserting that they are not rooted in physical reality. It would be just as absurd to say that some mystical force is needed to create the virtual world within a video game.

              Today, we have a model of the universe based on everything Western science has achieved in the last 600 years or so. That model accounts for about 3% of reality in so far as we can tell. That is to say, if we take everything we know, and everything we know we don’t know, what we know we know makes up 3% of what we know, and what we know we don’t know makes up about 97% of what we know. And then we have to contend with the unknown unknown, which is immeasurable.

              This statement is an incredible leap of logic. We know that our physics models are incomplete, but we very much do know what’s directly observable around us, and how our immediate environment behaves. We’re able to model that with an incredible degree of accuracy.

              However, even more to the point, the bioware plays a massively important part that digital substrates simply cannot mimic, and that’s the fact that we’re not talking about voltage differentials in binary states representing boolean logic, but rather continuums mediated by a massively complex distributed chemical system comprising myriad biologics, some that aren’t even our own genetics.

              There’s absolutely no evidence to support this statement. It’s also worth noting that discrete computation isn’t the only way computers can work. Analog chips exist and they work on energy gradients much like biological neural networks do. It’s just optimizing for a different type of computation.

              This is the clearest expression of circular reasoning in your writing. I encourage you to examine your position and your basis for it meticulously. In essence you have said:

              There is absolutely nothing circular in my reasoning. I never said patterns of thought underpin our conscious experience as a result of any system capable of performing a certain set of basic logical operations being able to simulate any other computational process.

              What I said is that patterns of thought underpin our conscious experience because the brain uses its own outputs as inputs along with the inputs from the rest of the environment, and this creates a recursive loop of the observer modelling itself within the environment and creating a resonance of patterns. The argument I made about universality of computation is entirely separate from this statement.