The way people talk about it makes it sound indistinguishable from “random will”. If you believe in the existence of a “self” in any form, be it the chemical signals and electrical impulses in your material brain, or a ghost existing outside of space and time controlling your body like a puppeteer, you must then decide whether you believe that self has free will.

Say you were to run a scenario many times on the same person, perfectly resetting every single measurable thing, including that person’s memory. If you observe them doing the same thing each time, do they lack this quality of free will? But if you do different things each time, are you really “yourself”? How can your choices vary in a way that preserves an idea of a “self” and isn’t just a dice roll? Doesn’t that put the idea of free will in contradiction with itself?

Edit: I found this article that says what I was trying to say in much gooder words

  • 420stalin69@hexbear.net · 9 months ago

    Yeah basically.

    Being a bit more rigorous about it, I think philosophically speaking there is some kind of error in the logic and that error gets smuggled in via the ambiguity of the language.

    Like, “I” assumes some singular decision-making entity exists, but if you really want to zoom in to the level of chemical and electrical impulses in the brain and say “aha, well there goes your ‘free will’”, well, you’ve broken the model, because at this zoom level there is no singular decision-making entity. There’s some network of synapses etc. that fire quasi-independently.

    But “I” still exist at a higher level of zoom, as some emergent phenomenon of those synapses. But “I” am more than just those synapses; I have an experience that cannot really be described by those synapses even if it is emergent from them.

    So the error is to confuse levels of abstraction. To deny properties of “I” by zooming down to a level where “I” doesn’t exist anymore and then trying to draw conclusions about what “I” am.

    Does this make sense?

    Basically the entire concept of me and a self exists at a certain zoom level, and my will or the qualia of my existence only make any sense at all at a certain zoom level.

    So zooming in further down to the point of electrical actions is an error. It’s like trying to figure out someone’s birthday by studying the atoms they’re made of. It’s nonsense.

    Or, to skin this cat another way and remain at this level of abstraction: you can still describe the inputs into my decisions at the same level of abstraction at which “I” exist. Like, you can say I’m aggressive because my dad was toxic when I was a child, or whatever other quasi-deterministic social factor influences my behavior, right?

    But what am “I” if not the sum of my experiences? Saying that I am controlled by my past experiences and therefore can’t make independent decisions is just the same as saying that “I” exist somehow independently of my past. But that’s also nonsense, and here the nonsense comes from secretly introducing, via sleight of hand, another level of abstraction: the metaphysical “I” which transcends my physical existence.

    Basically, the question “what does it mean to have free will” always involves some shifting frame of abstraction and it’s the way this frame of abstraction shifts that, I think, is the philosophical error in asking the question.