the-podcast guy recently linked this essay. it’s old, but i don’t think it’s significantly wrong (despite the GPT evangelists). also read Weizenbaum, libs, for the other side of the coin

  • dat_math [they/them]@hexbear.net · 7 months ago

    As a REDACTED who has published in a few neuroscience journals over the years, this was one of the most annoying articles I’ve ever read. It abuses language and deliberately misrepresents (or misunderstands?) certain terms of art.

    As an example,

    That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

    The neuronal circuitry that accomplishes the solution to this task (i.e., controlling the muscles to catch the ball), if it’s actually doing some physical work to coordinate movement in a way that satisfies the condition given, is definitionally doing computation and information processing. Sure, there aren’t algorithms in the usual way people think about them, but the brain in question almost surely has a noisy/fuzzy representation of its vision and its own position in space, if not also that of the ball it’s trying to catch.
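    To make that concrete, here is a toy Python sketch (all physics and gains invented) of the “computation-free” strategy itself: a fielder who accelerates so as to cancel the optical acceleration of the ball, i.e. to keep tan(elevation angle) rising at a constant rate, which is one standard formalization of the linear-optical-trajectory idea. Written down, it is plainly a feedback controller operating on a measured, represented quantity:

```python
import numpy as np  # only dependency

# Toy 2D simulation (invented physics and gains): a fly ball and a
# fielder who accelerates so as to cancel the "optical acceleration"
# of the ball, keeping tan(elevation angle) rising at a constant rate.

dt = 0.01   # simulation step (s)
g = 9.8     # gravity (m/s^2)

bx, by = 0.0, 1.0     # ball position (m); lands near x ~ 103 m
vx, vy = 25.0, 20.0   # ball velocity (m/s)

fx, fv = 110.0, 0.0   # fielder starts a few metres behind the landing point
gain = 5.0            # feedback gain (arbitrary, untuned)

prev_tan = by / (fx - bx)
prev_rate = None

while by > 0.0:
    # ball physics
    bx += vx * dt
    by += vy * dt
    vy -= g * dt
    if by <= 0.0:
        break

    # the fielder's "measurement": tangent of the ball's elevation angle
    tan_theta = by / max(fx - bx, 1e-3)  # crude guard against division by zero
    rate = (tan_theta - prev_tan) / dt
    if prev_rate is not None:
        optical_accel = (rate - prev_rate) / dt
        fv += gain * optical_accel * dt  # cancel the optical acceleration
    prev_tan, prev_rate = tan_theta, rate

    fx += fv * dt  # fielder runs

print(f"ball lands at x = {bx:.1f} m; fielder ends at x = {fx:.1f} m")
```

    The controller never simulates the ball’s flight, but it still measures an optical variable, differentiates it twice, and feeds the result back into motor output. That is information processing by any definition I know.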

    For another example,

    no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain

    in any sense?? really? what about the physical sense in which aspects of a visual memory can be decoded from visual cortical activity after the stimulus has been removed?
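    For concreteness about what “decoded” means operationally, here’s a toy sketch in which simulated data stand in for real recordings (every number is invented): a linear decoder recovers which image was shown from population activity generated after the stimulus is gone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the decoding argument. 50 simulated "neurons" whose
# post-stimulus firing rates carry a weak trace of which of two images
# was shown; a linear decoder recovers the image identity. All numbers
# are invented stand-ins for real recordings.

n_neurons, n_trials = 50, 200
preference = rng.normal(size=n_neurons)       # each neuron's stimulus preference
labels = rng.integers(0, 2, size=n_trials)    # which image was shown (0 or 1)
signs = 2 * labels - 1                        # map labels to -1/+1

# firing rates after the stimulus is gone: weak stimulus trace + noise
rates = np.outer(signs, preference) + rng.normal(scale=3.0, size=(n_trials, n_neurons))

# least-squares decoder: fit on half the trials, test on the other half
w, *_ = np.linalg.lstsq(rates[:100], signs[:100], rcond=None)
accuracy = np.mean((rates[100:] @ w > 0) == (labels[100:] == 1))
print(f"decoding accuracy from post-stimulus activity: {accuracy:.0%}")
```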

    Maybe there’s some neat philosophy behind the seemingly strategic ignorance of precisely what certain terms of art mean, but I can’t see past the obvious failure to articulate what the scientific theories in question nominally purport to be able to access.

    help?

    • Frank [he/him, he/him]@hexbear.net · 7 months ago

      The deeper we get into it, the more it just reads as “old man yells at cloud” and people who want consciousness to be special and interesting being mad that everyone is ignoring them.

    • Philosoraptor [he/him, comrade/them]@hexbear.net · 7 months ago

      Yeah, this is just as insane as the people who think GPT is conscious. I’ve been trying to give a nuanced take downthread (also an academic, with a background in philosophy of science rather than the science itself). I think this resonates with people here because they’re so sick of the California Ideology narrative that we are nothing but digital computers, and that if we throw enough money and processing power at something like GPT, we’ll have built a person.

    • Sidereal223 [he/him]@hexbear.net · 7 months ago

      As someone who also works in the neuroscience field and is somewhat sympathetic to the Gibsonian perspective that Chemero (mentioned in the essay) subscribes to, being able to decode cortical activity doesn’t necessarily mean that the activity serves as a representation in the brain. Firstly, the decoder must be trained and secondly, there is a thing called representational drift. If you haven’t, I highly recommend reading Romain Brette’s paper “Is coding a relevant metaphor for the brain?”
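      As a toy illustration of both caveats (invented data, not real recordings): a decoder fit on one session keeps working on that session but degrades on a later one once each neuron’s tuning has drifted. The “representation” the day-1 decoder reads out is the experimenter’s construct, not something fixed in the brain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration (invented data): a linear decoder fit on session 1
# degrades on session 2 after each neuron's tuning has drifted.

n_neurons, n_trials = 50, 400
labels = rng.integers(0, 2, size=n_trials)
signs = 2 * labels - 1

tuning_day1 = rng.normal(size=n_neurons)
# representational drift: tuning partially re-randomized between sessions
tuning_day2 = 0.3 * tuning_day1 + 0.7 * rng.normal(size=n_neurons)

def session(tuning):
    """Simulate one session's population responses to the same stimuli."""
    return np.outer(signs, tuning) + rng.normal(scale=2.0, size=(n_trials, n_neurons))

day1, day2 = session(tuning_day1), session(tuning_day2)

w, *_ = np.linalg.lstsq(day1, signs, rcond=None)  # decoder trained on day 1 only
for name, data in [("day 1", day1), ("day 2", day2)]:
    accuracy = np.mean((data @ w > 0) == (labels == 1))
    print(f"day-1 decoder tested on {name}: {accuracy:.0%}")
```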

      He asks a crucial question: who/what is this representation for? It certainly is a representation for the neuroscientist, since they are the one who presented the stimuli and are then recording the spiking activity immediately after, but that doesn’t imply that it is a representation for the brain. Does it make sense for the brain to encode the outside world into its own activity (spikes), then to decode it into its own activity again? Are we to assume that another part of this brain then reads this activity to translate it into the outside world? This is a form of dualism.

      • dat_math [they/them]@hexbear.net · 7 months ago · edited

        being able to decode cortical activity doesn’t necessarily mean that the activity serves as a representation in the brain

        I’m sorry: I don’t mean to be an ass, but this seems nonsensical to me. Definitionally, being able to decode some neuronal signals means that those signals carry information about the variable they encode. Thus, if those vectors of simultaneous spike trains are received by any other part of the body in question, then the representation has been communicated.
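        Put numerically: above-chance decoding implies nonzero mutual information between stimulus and response. A toy sketch with invented Poisson spike counts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch (invented rates): if spike counts are decodable above
# chance, the plug-in estimate of I(stimulus; count) comes out > 0.

n = 5000
stim = rng.integers(0, 2, size=n)                    # binary stimulus
counts = rng.poisson(np.where(stim == 1, 8.0, 4.0))  # spike counts

# empirical joint distribution over (stimulus, count)
joint = np.zeros((2, counts.max() + 1))
for s, c in zip(stim, counts):
    joint[s, c] += 1
joint /= n

p_stim = joint.sum(axis=1, keepdims=True)    # marginal over stimulus
p_count = joint.sum(axis=0, keepdims=True)   # marginal over count
nz = joint > 0                               # avoid log(0)
mi = np.sum(joint[nz] * np.log2(joint[nz] / (p_stim @ p_count)[nz]))
print(f"estimated I(stimulus; spike count) ~ {mi:.2f} bits")
```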

        Firstly, the decoder must be trained and secondly, there is a thing called representational drift.

        Why does a decoder needing to be trained for experimental work that reverse-engineers neural codes imply that the neural correlates of some real-world stimulus are not representing that stimulus?

        I have a similar issue seeing how representational drift invalidates that idea as well, especially since the circuits receiving the signals in question are plastic and dynamically adapt their responses to changes in their inputs as well.

        I started reading Brette’s paper that you recommended, and I’m finding the same problems with Romain’s idea salad. He says things like, “Climate scientists, for example, rarely ask how rain encodes atmospheric pressure.”

        And while I think that’s not exactly the terminology they use, in the sense that they might model rain = couplingFunction(atmospheric pressure) + noise, they’re in fact mathematically asking that very question!
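        Here’s a toy sketch of exactly that (the coupling function and all numbers are made up): fit rain = f(pressure) + noise, and you have simultaneously asked how well rain “encodes” pressure, because the fitted coupling lets you decode pressure back from rain:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sketch (made-up coupling): model rain as a function of pressure
# plus noise, then use the fitted coupling to decode pressure from rain.

pressure = rng.uniform(980.0, 1030.0, size=500)                    # hPa
rain = 100.0 - 0.095 * pressure + rng.normal(scale=0.5, size=500)  # mm, invented

# fit the coupling function (here just linear): rain ~ slope * pressure + intercept
slope, intercept = np.polyfit(pressure, rain, deg=1)

# invert the coupling: "decode" pressure back from rain
pressure_hat = (rain - intercept) / slope
error = np.mean(np.abs(pressure_hat - pressure))
print(f"fitted coupling: rain ~ {slope:.3f} * pressure + {intercept:.1f}")
print(f"mean error decoding pressure from rain: {error:.1f} hPa")
```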

        Am I nit-picking or is this not an example of Brette doing the same deliberate misunderstanding of the communications metaphor as the article in the original post?

        Does it make sense for the brain to encode the outside world, into its own activity (spikes), then to decode it into its own activity again?

        It might, but the question seems computationally silly. I would expect that efferent circuitry receiving signals encoded as vectors of simultaneous spikes would not do extra work to try to re-map the lossy signal it’s receiving into the original stimulus space. Perhaps it would do some other transformations to integrate the signal with other information, but why would circuitry that was grown by STDP (spike-timing-dependent plasticity) undo the effort of the earlier populations of neurons involved in the initial compression?

        sorry again if my STEM education is preventing me from seeing meaning through a forest of mixed and imprecisely applied metaphors

        I’m going to go read Brette’s responses to commentary on the paper you linked and see if I’m just being a thickheaded stemlord

        • Sidereal223 [he/him]@hexbear.net · 7 months ago

          That’s fine, I don’t think you’re being an ass at all. Brette is saying that just because there is a correspondence between the measured spike signals and the presented stimuli, that does not qualify the measured signals as a representation. In order for them to be a representation, they also need a feature of abstraction. The relation between an image and neural firing depends on auditory context, visual context, and behavioural context; it changes over time; and imperceptible pixel changes to the image also substantially alter neural firing. According to Brette, there is little left of the concept of neural representation once you take all of this into account, and you’re better off calling it a neural correlate.