I’m not actually sure there’s much daylight between our views here, except that your concern over AI’s impact seems mostly oriented toward its use as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don’t mean to speak for you, so please correct me if I’m wrong.
While I think the question of AI sentience is ridiculous, I still think it wouldn’t take much further development before some of these models start meaningfully replicating human competence (i.e. completing some tasks at least as competently as a human). Considering that the previous generation of models couldn’t string more than 50 words together before devolving into nonsense, and the following generation could start stringing together working code with little fundamental change in their structure, it is not far-fetched that one or two more breakthroughs could bring them within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.
I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.
> irrespective of what qualities of competence AI might actually have
That competence mostly applies as a net negative in its present state because of who owns and who commands it. The “competence” isn’t thrilling or inspiring people who are denied care because a computer program “accidentally” rejected their healthcare claims, or who experience increasingly sophisticated profiling and surveillance technology, or who previously paid the bills with their artistic talents and now get outbid by cheap-to-free treat-printing technology.
At a ground level among common people, outside of the science-fiction scenarios in their movies, shows, and games, asking them to be particularly “curious” about such things when they’re only feeling downward pressure from them is condescending, and I don’t blame some for being knee-jerk against it, or against those scolding them for not being enthusiastic enough.
> I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.
That was not my position, though on the side I do mock the singularity cultists and their false claims about how close the robot god’s construction is, and I also condemn the reductionist derision of living human beings with edgy techbro terminology like “meat computers” in service of boosting their favorite LLM products.
No disagreement with anything you just said, apologies for misinterpreting your position.
I don’t know how to reconcile the manic singularity cultists with what I feel is a very real acceleration toward a hellscape of underemployment and hyper-capitalism driven by AI. The urgency AI represents deserves anxious attention, and I at least appreciate the weight those cultists place on a technology I think represents a threat. It feels like people either eagerly await a sentient AGI or mock AI on those same terms of sentience, leaving precious few who are materially concerned with the threats AI actually represents. That is not at all a way of dismissing the very real ways machine learning is deployed against real people today, but I think there’s a lot of room for it to get worse, and I wish people took that possibility seriously.
It’s especially frustrating because there are very real threats from the technology as it is being applied and commanded, but because the ruling class has so many tech billionaires among them, their version of perceived threats gets the attention and publicity, usually some pop culture shit about robot uprisings (against them specifically).
> but because the ruling class has so many tech billionaires among them, their version of perceived threats gets the attention and publicity, usually some pop culture shit about robot uprisings (against them specifically)
Yes, I’ve been struggling to articulate how I feel about this saga, and I think this captures it. While I felt a little encouraged seeing people advocate for legislative action, the actions and concerns they were articulating were just... off. There were very brief mentions of concerns about unemployment, but then they passed over them like the problem was too big to talk about. My hackles especially rise when I hear the conversation veer toward copyright infringement.
Thanks for discussing this with me; I feel a bit better.
I appreciate the clarification of your position, too.
It fucking sucks that actual valid concerns about LLMs and related technology are likely to keep being ignored in favor of WHAT IF ROBOT UPRISING LIKE IN THE TREATS sensationalism, and whatever regulations do come will likely be regulatory-capture tactics pulled off by the ruling class and their lobbying power.
No arguments from me here.