- cross-posted to:
- [email protected]
Guy recently linked this essay; it's old, but I don't think it's significantly wrong (despite the GPT evangelists). Also read Weizenbaum, libs, for the other side of the coin.
The fMRI ones are probably bunk. That said, if you could manage the heinous act of (cw: body gore) implanting several thousand very small wires throughout someone's visual cortex and recording the responses evoked by specific stimuli, or by instructions to visualize a given stimulus, you could probably produce low-fidelity reconstructions of their visual perception.
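A toy sketch of what that "record responses, reconstruct the percept" idea might look like, assuming everything is simulated: hypothetical random linear receptive fields stand in for real neurons, and a ridge-regression decoder maps responses back to pixels. None of this is an actual neuroscience pipeline; it just shows why the reconstructions come out low fidelity.

```python
# Simulated decoding sketch: fit a linear map from multi-unit responses
# back to pixel space, then reconstruct held-out stimuli. All data and
# "receptive fields" below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16 * 16      # tiny toy "images"
n_units = 2000          # pretend microwire channels
n_train, n_test = 500, 20

# Hypothetical forward model: each unit responds to a random linear
# combination of pixels, plus noise. Real cortex is nothing this simple.
receptive_fields = rng.normal(size=(n_units, n_pixels))

def record(images):
    """Simulate evoked responses for a batch of images (rows = trials)."""
    noise = rng.normal(scale=5.0, size=(len(images), n_units))
    return images @ receptive_fields.T + noise

train_images = rng.uniform(size=(n_train, n_pixels))
test_images = rng.uniform(size=(n_test, n_pixels))
train_resp = record(train_images)
test_resp = record(test_images)

# Ridge-regression decoder, closed form: W = (R'R + lam*I)^-1 R'X
lam = 1e3
R, X = train_resp, train_images
W = np.linalg.solve(R.T @ R + lam * np.eye(n_units), R.T @ X)

# Reconstruct held-out stimuli and score them crudely.
recon = test_resp @ W
corr = np.corrcoef(recon.ravel(), test_images.ravel())[0, 1]
print(f"pixelwise correlation of reconstructions: {corr:.2f}")
```

Even in this idealized linear setting the correlation is well below 1, and nothing in the fitted decoder tells you *where* or *why* it fails, which is the limitation discussed below.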
Are you familiar with the crimes of Hubel and Wiesel?
I am not, and I will look it up in a minute.
But my point is that such a low-fidelity reconstruction, when interpreted through the models of modern computing methods, lacks the accuracy for any real application AND, crucially, has absolutely no way to account for or understand its limitations relative to the intended applications. That last part is more a philosophy-of-science argument than one about some percentage accuracy: the model has no way to understand its limitations because we don't have any idea what those are, and to my knowledge discussion of this is scarce, leaving no ceiling on the interpretations and implications.
I think a big difference in positions in this thread, though, is between those talking about how the best neuroscientists in the world think about this and those who are more technologists who never reached that level and want to Frankenstein their way to tech-bro godhood. I'm sure the top neuros get this and are constantly trying to find new and better models. But their publications aren't the ones on the covers of science journals.