ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.
Some have taken pictures of blackboards and had it explain all the text and box connections written on the board.
I’ve used it to double the speed of my dev work, mostly by having it find and format small bits of code I could track down myself, just more slowly.
One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.
“It’s not the full AI we expected” is incredibly inane considering this tech is less than a year old and is updating every couple weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the unreleased version is a big enough leap to cause all this drama, and it will be even more unrecognizable in the years to come.
This tech is not less than a year old. The “tech” being used is literally decades old, the specific implementations marketed as LLMs are 3 years old.
People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.
Yeah, I have a friend who was a stats major. He talks about how transformers are new and have novel ideas and implementations, but much of the work was held back by limited compute power; much of the math was worked out decades ago. Before “AI” or “ML” it was once called statistical learning, and there were two or so other names as well which were used to rebrand the discipline (I believe for funding, but don’t take my word for it).
It’s refreshing to see others talk about its history beyond the last few years. Sometimes I feel like history started yesterday.
Yeah, when I studied computer science 10 years ago most of the theory implemented in LLMs was already widely known, and the academic literature goes back to at least the early 90’s. Specific techniques may improve the performance of the algorithms, but they won’t fundamentally change their nature.
Obviously most people have none of this context, so they kind of fall for the narrative pushed by the media and the tech companies. They pretend this is totally different from anything seen before, and they deliberately give a wink and a nudge toward sci-fi, blurring the lines between what they created and fictional AGIs. Of course the two have only the most superficial similarity.
the first implementations go back to the 60s - the neural net approach was abandoned in the 80s because building a large network was impractical and it was unclear how to train anything beyond a simple perceptron. there hadn’t been much progress in decades. that changed in the early oughts, especially when combined with statistical methods. this bore fruit in the teens and gave rise to recent LLMs.
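the one-directional, layer-to-layer models from the 60s mentioned above can be sketched in a few lines. this is purely illustrative (the historical work obviously predates Python), but it shows both the perceptron learning rule and the linear-separability limitation that stalled the field:

```python
import random

# Toy single-layer perceptron in the spirit of the 60s models:
# weighted sum of inputs, hard threshold, and the perceptron
# learning rule nudging weights toward the expected output.
def train_perceptron(samples, epochs=100, lr=0.1):
    random.seed(0)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out     # difference from the expected result
            w[0] += lr * err * x1  # shift each weight by the error,
            w[1] += lr * err * x2  # scaled by its input
            b += lr * err
    return w, b

# AND is linearly separable, so a single perceptron can learn it.
# XOR is not - exactly the limitation that halted progress for decades.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # matches the AND targets [0, 0, 0, 1]
```

swap `data` for XOR targets and no amount of training will ever get all four predictions right with a single layer, which is why multi-layer training methods mattered so much later on.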
Oh, I didn’t scroll down far enough to see that someone else had pointed out how ridiculous it is to say “this technology” is less than a year old. Well, I think I’ll leave my other comment, but yours is better! It’s kind of shocking to me that so few people seem to know anything about the history of machine learning. I guess it gets in the way of the marketing speak to point out how dead easy the mathematics are and that people have been studying this shit for decades.
“AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.
“AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.
Me too. The LLM hype riders want a real artificial waifu to actually love them so very badly that they’re pleased to describe living beings in as crude and coarse reductionist language as possible so their treat printers feel closer to real to them. The pathology is fucking glaringly obvious most of the time, especially when the “meat” talk gets rolled around or that hologram waifu from Blade Runner is literally brought up as an example of why we’re all ignorant barbarians because fiction is real amirite?
I could be wrong, but could it not also be defined as glorified “brute force”? I assume the machine learning part is how to brute force better, but it seems like it’s the processing power to try and jam every conceivable puzzle piece into an empty slot until it’s acceptable? I mean, I’m sure the engineering and tech behind it is fascinating and cool, but at a basic level it’s as stupid as fuck. Am I off base here?
no, it’s not brute forcing anything. they use a simplified model of the brain where neurons are reduced to an activation profile and synapses are reduced to weights. neural nets differ in how the neurons are wired to each other with synapses - the simplest models from the 60s only used connections in one direction, with layers of neurons in simple rows that connected solely to the next row. recent models are much more complex in the wiring. outputs are gathered at the end and the difference between the expected result and the output actually produced is used to update the weights. this gets complex when there isn’t an expected/correct result, so I’m simplifying.
the large amount of training data is used to avoid overtraining the model, where you get back exactly what you expect on the training set, but absolute garbage for everything else. LLMs don’t search the input data for a result - they can’t, they’re too small to encode the training data in that way. there’s genuinely some novel processing happening. it’s just not intelligence in any sense of the term. the people saying it is misunderstand the purpose and meaning of the Turing test.
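the error-driven weight update described above can be shown with a single linear neuron. this is a deliberately minimal sketch of the idea (sometimes called the delta rule), nothing like the scale or architecture of an actual LLM:

```python
# One linear neuron trained by the update described above: compute an
# output, compare it with the expected result, and shift the weights
# in proportion to the difference. A vast simplification, obviously.
samples = [(x, 2 * x + 1) for x in range(-5, 6)]  # target function: y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    for x, target in samples:
        out = w * x + b      # forward pass
        err = target - out   # expected minus actual
        w += lr * err * x    # update weight by the error, scaled by input
        b += lr * err

print(round(w, 2), round(b, 2))  # settles near w=2, b=1
```

the weights end up encoding the relationship in the data rather than storing the samples themselves, which is the small-scale version of the point about LLMs being far too small to memorize their training sets.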
It’s pretty crazy to me how 10 years ago, when I was playing around with NLP and training some small neural nets, nobody I was talking to knew anything about this stuff and few were actually interested. But now you see and hear about it everywhere, even on TV lol. It reminds me of how a lot of people today seem to think that Nvidia invented ray tracing.
ChatGPT does no analysis. It spits words back out based on the prompt it receives based on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
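the “spits words back based on scraped data” point can be caricatured with a bigram table: count which word follows which in a corpus, then sample a continuation of the prompt from those counts. real LLMs are transformer networks, not bigram tables, but the output is likewise a statistical continuation rather than analysis:

```python
import random
from collections import defaultdict

# Tiny stand-in for "a giant set of data scraped from the internet".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record every observed successor of every word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_prompt(word, n=5, seed=0):
    # Extend the prompt by repeatedly sampling an observed successor.
    random.seed(seed)
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_prompt("the"))  # a fluent-looking but purely statistical string
```

every pair of adjacent words in the output occurs somewhere in the training text; the model has no idea what a cat or a mat is.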
The people that are into this and believe the hype have a lot of crossover with “Effective Altruism” shit. They’re all biased and are nerds that think Roko’s Basilisk is an actual threat.
As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they’re unleashing on the world is cool in their minds, but oh no we’ve done too many lines at work and it shit out something and now we’re all freaked out that maybe it’ll kill us. As long as this technology is used to serve the interests of capital, then the only results we’ll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we’re subjected to, they’ll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.
Out of literally everything I said, that’s the only thing you give a shit enough to mewl back with. “If you use other services alongside it, it’ll spit out information based on a prompt.” It doesn’t matter how it gets the prompt: you could have image recognition software pull out a handwritten equation that is converted into a prompt it solves for, and it’s still not doing analysis. It’s either doing math, which is something computers have done forever, or it’s still just spitting out words based on massive amounts of training data that was categorized by what are essentially slaves doing mechanical turk work.
You give so little of a shit about the human cost of what these technologies will unleash. Companies want to slash their costs by getting rid of as many workers as possible but your goddamn bazinga brain only sees it as a necessary march of technology because people that get automated away are too stupid to matter anyway. Get out of your own head a little, show some humility, and look at what companies are actually trying to do with this technology.
It’s either doing math, which is something computers have done forever, or it’s still just spitting out words based on massive amounts of training data that was categorized by what are essentially slaves doing mechanical turk work.
Where do you get the idea that this tech is less than a year old? Because that’s incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90’s. Any recent “breakthroughs” are more about computing power than a theoretical shift.
I hate to tell you this, but I think you’ve bought into marketing hype.
I haven’t been able to extract a single useful piece of code from ChatGPT unless I also carefully point ChatGPT to the correct answer, at which point you’re kinda just doing the work yourself by proxy. Also lol at the guy voluntarily uploading what quite possibly is proprietary code. The other part about analyzing memes shouldn’t even need addressing, if ChatGPT’s training dataset is formed by online posts, then it’s going to fucking excel at it.
the consultants are going to make a killing on all these companies encouraging overworked devs to meet impossible deadlines by using code from chatgpt.
That computer toucher was spraying a firehose of bullshit loaded with false claims (such as the claim that anyone here was “fantasizing about the human race always triumphing over the machines using the power of belief and fairy dust”), and asking people to pick through the liquefied slurry for nuggets of corn before they call a clown a clown is excessive.
What are you even doing here, besides proselytizing like that clown was? Same circus?
I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.
If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.
ChatGPT is smarter than a lot of people I’ve met in real life.
It checks out that your heightened sense of self importance, Main Character Syndrome, and misanthropic perspective of people around you lines up nicely with the billionaire investors that pay a lot of money to see as few human beings in their daily routine as possible while burning the planet down to increase their hoards.
You are making the extraordinary claim about your favorite treat printers being synonymous with intelligent beings. You have to provide the extraordinary evidence, or else you’re just bootlicking for billionaires and their hyped up tech investments.
If you’d actually like to read something from a credible academic field rather than denigrating human beings around you while buying into marketing hype, here.
You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.
That’s a big problem with the extraordinary claim for me: “there isn’t wide agreement about what it is, but I feel very emotionally invested in this treat printer that tells me it loves me if it is prompted to say so, so of course there’s a waifu ghost in there and you can’t tell me otherwise, you lowly meat computers!”
They mock religious people while so many of them believe that there’s some unspecified “simulation” that was put into motion by some great all-powerful universe creator, too. Deism with extra steps. Similarly, their woo bullshit about how a robot god of the future will punish their current enemies long after they’re dead (using the power of dae le perfect simulations) and raise them from the dead into infinite isekai waifu harems (also by using the power of dae le perfect simulations) is totally not magical thinking and not a prophecy they’re waiting to have fulfilled.
By playing god, people keep reinventing god. It’s deeply ironic and reminds me of this interpretation of Marx, and critique of modernity, by Samir Amin:
Nevertheless, another reading can be made of Marx. The often cited phrase–“religion is the opium of the people”–is truncated. What follows this remark lets it be understood that human beings need opium, because they are metaphysical animals who cannot avoid asking themselves questions about the meaning of life. They give what answers they can, either adopting those offered by religion or inventing new ones, or else they avoid worrying about them.
In any case, religions are part of the picture of reality and even constitute an important dimension of it. It is, therefore, important to analyze their social function, and in our modern world their articulation with what currently constitutes modernity: capitalism, democracy, and secularism.
The way many see AI is simply the “inventing new ones” part.
That’s just it, if you can’t define it clearly, the question is meaningless.
The reason people will insist on ambiguous language here is because the moment you find a specific definition of what sentience is someone will quickly show machines doing it.
Now you’re just howling what sounds like religious belief about the unstoppable and unassailable perfection of the treat printers like some kind of bad Warhammer 40k Adeptus Mechanicus LARPer. What you’re saying is so banal and uninspired and cliche in that direction that it may well be where it started for you.
So you can’t name a specific task that bots can’t do? Because that’s what I’m actually asking, this wasn’t supposed to be metaphysical.
It will affect society whether or not there’s something truly experiencing everything it does.
All that said, if you think carbon-based things can become sentient and silicon-based things can’t, what is the basis for that belief? It sounds like religious thinking: that humans are set apart from the rest of the world, chosen by god.
A materialist worldview would focus on what things do, what they consume and produce. Deciding humans are special, without a material basis, isn’t in line with materialism.
the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - an LLM, visual processing, a planning and decision making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.
It got lost because the difficulty of actually doing that is overwhelming, probably not even accomplishable in our lifetimes, and it is easier to grift and get lost in a fantasy.
The jobs with the most people working them are all in the process of being automated.
Pretending it’s not happening is going to make it even easier for capital to automate most jobs, because no one tries to stop things they don’t believe in to begin with.
With credulous rubes like @[email protected] around, the marketers have triumphed; they don’t have to do such things because bazinga brains (some quite rich, like manchild billionaires) will pay for the hype and the empty promises because they have a common bond of misanthropy and crude readings of their science fiction treats as prophecy instead of speculation.
No one took that position here. You’re imagining it because it seems like an easy win for a jackoff like yourself.
The problem is those machines, owned and commanded by the ruling class, fucking over the rest of us while credulous consoomers like yourself fantasize about nerd rapture instead of realizing you’re getting fucked over next unless you’re a billionaire.
Tesla revealed a robot with thumbs.
Now you admit you buy into Musk hype. You’re a fucking clown. Tell us how the LED car tunnels are going to change everything too.
This might not matter to you if you’re just a bourgeoisie-adjacent computer toucher, but if you’re going to keep vomiting up smugly coarse reductionist nonsense, here’s a chance to educate yourself a little.
In a strict sense, yes, humans do Things based on if > then stimuli. But we self-assign ourselves these Things to do, and chat bots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to continue iterating on that prompt on their own.
I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.
Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.
Bots do something different, even when I give them the same prompt, so that seems to be untrue already.
Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?
Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism. The latter would require pointing to a specific material structure, or an empirical test to distinguish the two, which no one here is doing.
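as an aside, the fact that bots answer the same prompt differently has a mundane mechanical explanation: the model produces a score for every candidate next token, and the decoder samples from the resulting distribution. the nondeterminism lives in that sampling step, not in any spontaneity. a toy sketch (the scores are made up for illustration, not from any real model):

```python
import math
import random

def sample_token(scores, temperature=1.0, rng=random):
    # Softmax with temperature: higher T flattens the distribution;
    # T near 0 approaches deterministic ("greedy") argmax decoding.
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical next-token scores for some prompt.
scores = {"yes": 2.0, "no": 1.5, "maybe": 0.5}
picks = [sample_token(scores, temperature=1.0) for _ in range(20)]
print(set(picks))  # at T=1 you typically see several distinct tokens
```

with `temperature` cranked toward 0 the same function returns the top-scoring token essentially every time, which is why "same prompt, different answer" says nothing about inner life either way.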
Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism
First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend
Secondly, because we do. We as a species have, from the very moment we invented written records, wondered about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an If > Then algorithm is disgustingly misanthropic
I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not the least of which because creating a sentient slave for our amusement is morally fucked.
Bots do something different, even when I give them the same prompt, so that seems to be untrue already.
You’re being intentionally obtuse, stop JAQing off. I never said that AI as it exists now can only ever have 1 response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself, not now and imo not ever by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life)
I thought the whole point of the exodus to Lemmy was because y’all hated Reddit, why the fuck does everyone still act like we’re on it
First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend
Ok, so you are religious, just new-age religion instead of abrahamic.
Yes, materialism and your faith are not compatible. Assuming the existence of a soul, with no material basis, is faith.
Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?
How many times are you going to repeat your bad-faith question and dodge absolutely every reply you’ve already received telling you over and over again that LLMs are not “AI,” no matter how much you have bought into the marketing bullshit?
That doesn’t mean artificial intelligence is impossible. It only means that LLMs are not artificial intelligence no matter how vapidly impressed you are with their output.
not the materialist view that underlies Marxism
You’re bootlicking for the bourgeoisie while clumsily LARPing as a leftist. It’s embarrassing and clownish. Stop.
My post is all about LLMs that exist right here right now, I don’t know why people keep going on about some hypothetical future AI that’s sentient.
We are not even remotely close to developing anything bordering on sentience.
If AI were hypothetically sentient it would be sentient. What a revelation.
The point is not that machines cannot be sentient, it’s that they are not sentient. Humans don’t have to be special for machines to not be sentient. To veer into accusations of spiritualism is a complete non-sequitur and indicates an inability to counter the actual argument.
And there are plenty of material explanations for why LLMs are not sentient, but I guess all those researchers and academics are human supremacist fascists and some redditor’s feelings are the real research.
And materialism is not physicalism. Marxist materialism is a paradigm through which to analyze things and events, not a philosophical position. It’s a scientific process that has absolutely nothing to do with philosophical dualism vs. physicalism. Invoking Marxist materialism here is about as relevant as invoking it to discuss shallow rich people “materialism”.
Oh that’s easy. There are plenty of complex integrals or even statistics problems that computers still can’t do properly because the steps for proper transformation are unintuitive or contradictory with steps used with simpler integrals and problems.
You will literally run into them if you take a simple Calculus 2 or Stats 2 class. You’ll see it on Chegg all the time: someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-coded into calculators like Symbolab, so the only reason the computer can ‘do it’ is because someone already did it first; it still can’t reason from first principles or extrapolate to complex theoretical scenarios.
That said, the ability to complete tasks is not indicative of sentience.
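the point about integrals is easy to reproduce in a dedicated CAS. assuming SymPy is installed, it mechanically solves routine integrals but simply hands back ones with no elementary antiderivative unevaluated, rather than reasoning its way to anything new:

```python
import sympy as sp

x = sp.symbols('x')

# A routine integral: the CAS handles it mechanically.
easy = sp.integrate(x**2, x)
print(easy)  # x**3/3

# x**x has no elementary antiderivative; SymPy returns the
# integral unevaluated instead of inventing a new approach.
hard = sp.integrate(x**x, x)
print(hard)  # Integral(x**x, x)
```

the second result is the software's way of saying "I have no rule for this", which is the gap between pattern-applying and reasoning from first principles.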
Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’ that is a very physical limitation of this particular neural network set-up.
Sentience is characterized by feeling and sensory awareness, and an ability to have self-awareness of those feelings and that sensory awareness, even as it comes and goes with time.
Edit: Btw computers are way better at most math, particularly arithmetic, than humans. Imo, the first thing a ‘sentient computer’ would be able to do is reason out these notoriously difficult CS things from first principles and it is extremely telling that that is not in any of the literature or marketing as an example of ‘sentience’.
Damn, this whole thing of dancing around the question and not actually addressing my points really reminds me of a ChatGPT answer. It wouldn’t surprise me if you were using one.
Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’ that is a very physical limitation of this particular neural network set-up.
If you read it carefully you’d see I said your worldview was idealist, not the AIs.
Sentience is characterized by feeling and sensory awareness
AI can get sensory input and process it.
Can you name one way a human does it that a machine cannot, or are you relying on a gut feeling that when you see something and identify it, it’s different from when a machine processes camera input? Same for any other sense, really.
If you can’t name one way, then your belief in human exceptionalism is not based in materialism.
Again, you are the one making the extraordinary claim about your favorite treat printers being synonymous with intelligent beings. You have to provide the extraordinary evidence, or else you’re just bootlicking for billionaires and their hyped up tech investments.
ChatGPT is smarter than a lot of people I’ve met in real life.
How? Could ChatGPT hypothetically accomplish any of the tasks your average person performs on a daily basis, given the hardware to do so? From driving to cooking to walking on a sidewalk? I think not. Abstracting and reducing the “smartness” of people to just mean what they can search up on the internet and/or in an encyclopaedia is reductive in this case, and even outside the fields of AI and robotics. Even among ordinary people, we recognise the difference between street smarts and book smarts.
The biggest hit for me was the straw effigy that the computer toucher set ablaze marked “Hexbear Luddites that believe that humans will always triumph in competitions against machines.”
That computer toucher needs to constantly remind us meat computers of our inferiority and imminent total replacement by treat printers that print out “I love you” statements on demand when they feel lonely.
This is a leftist site. Voting is a performative gesture that carries little actual power, but you’re a liberal, so of course you believe in it the same way you believe that an LLM is a comparable proxy for an organic brain.
That is to say, you are confidently wrong, just like the output of the treat printers.
We already knew you were a credulous liberal, and judging by how many times in this thread you ignored actual educated responses to your science fantasy bullshit claims, you’re not even educated or trained in tech yourself.
I know most liberals are ignorant, but you’re fucking embarrassing yourself here.
In bourgeois dictatorships, voting is useless, it’s a facade. They tell their subjects that democracy=voting but they pick whoever they want as rulers, regardless of the outcome. Also, they have several unelected parts in their government which protect them from the proletariat ever making laws.
By that I meant any political activity really. This isn’t a defense of electoralism.
Machines are replacing humans in the economy, and that has material consequences.
Holding onto ideas of human exceptionalism is going to mean being unprepared.
A lot of people see minor obstacles for machines, and conclude they can’t replace humans, and return to distracting themselves with other things while their livelihood is being threatened.
Robotaxis are already operating, and a product to replace most customer service jobs was released for businesses to order about a month ago.
Many in this thread are navel gazing about how those bots won’t really experience anything when they get created, as if that mattered to any of this.
Bourgies are human exceptionalists. They want human slaves. That’s why they want sentient AI. And that’s why machines will never be able to replace humans in capitalism.
And that’s why machines will never be able to replace humans in capitalism.
If the bourgeoisie stick around long enough, they may very well finance the development of artificial slaves that are not just robots or programs (or LLMs) but are self-aware enough to potentially experience suffering while toiling for the owners’ benefit.
Still waiting for you to do that, you pathologically-lying evidence-dodging craven jackoff.
LLMs are not “AI.”
“AI” is a marketing hype label that you have outright swallowed, shat out over a field of credulity, fertilized that field with that corporate hype shit, planted hype seeds, waited for those hype seeds to sprout and blossom, reaped those hype grains, ground those hype grains into hype flour, made a hype cake, baked that hype cake, put candles on it to celebrate the anniversary of ChatGPT’s release, ate the entire cake, licked the tray clean, got an upset stomach from that, then stumbled over, squatted down, and shat it out again to write your most recent reply.
Nothing you are saying has any standing with actual relevant academia. You have bought into the hype to a comical degree and your techbro clown circus is redundant, if impressively long. How long are you going to juggle billionaire sales pitches and consider them actual respectable computer science?
How many times are you going to outright ignore links I’ve already posted over and over again stating that your position is derived from marketing and has no viable place in actual academia?
Also, your contrarian jerkoff contempt for humanity (and living beings in general) doesn’t make you particularly logical or beyond emotional bias.
Also, take those leftist jargon words out of your mouth, especially “materialism;” you’re parroting billionaire fever dreams and takes likely without the riches to back you up, effectively making you a temporarily-embarrassed bourgeoisie-style bootlicker without an actual understanding of the Marxist definition of the word.
I’ll keep reposting this link until you or your favorite LLM treat printer digests it for you enough for you to understand.
it can’t experience subjectivity since it is a purely information processing algorithm, and subjectivity is definitionally separate from information processing. even if it perfectly replicated all information processing human functions it would not necessarily experience subjectivity. this does not mean that LLMs will not have any economic or social impact regarding the means of production, not a single person is claiming this. but to understand what impacts it will have we have to understand what it is in actuality, and even a sufficiently advanced LLM will never be an AGI.
i feel the need to clarify some related philosophical questions before any erroneous assumed implications arise, regarding the relationship between Physicalism, Materialism, and Marxism (and Dialectical Materialism).
(the following is largely paraphrased from wikipedia’s page on physicalism. my point isn’t necessarily to disprove physicalism once and for all, but to show that there are serious and intellectually rigorous objections to the philosophy.)
Physicalism is the metaphysical thesis that everything is physical, or in other words that everything supervenes on the physical. But what is the physical?
there are 2 common ways to define physicalism: theory-based definitions and object-based definitions.
A theory-based definition of physicalism is that a property is physical if and only if it either is the sort of property that physical theory tells us about or else is a property which metaphysically supervenes on the sort of property that physical theory tells us about.
An object-based definition of physicalism is that a property is physical if and only if it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents or else is a property which metaphysically supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents.
Theory-based definitions, however, fall victim to Hempel’s Dilemma. If we define the physical via references to our modern understanding of physics, then physicalism is very likely to be false, as it is very likely that much of our current understanding of physics is false. But if we define the physical via references to some future hypothetically perfected theory of physics, then physicalism is entirely meaningless or only trivially true - whatever we might discover in the future will also be known as physics, even if we would ignorantly call it ‘magic’ were we exposed to it now.
Object-based definitions of physicalism fall prey to the argument that they are unfalsifiable. In a world where something like panpsychism were in fact true, and where we humans were aware of this, an object-based definition would produce the counterintuitive conclusion that physicalism is also true at the same time as panpsychism, because the mental properties alleged by panpsychism would then necessarily figure into a complete account of paradigmatic examples of the physical.
furthermore, supervenience-based definitions of physicalism (such as: physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is a positive duplicate of w) will at best only ever state a necessary but not sufficient condition for physicalism.
So with my take on physicalism clarified somewhat, what is Materialism?
Materialism is the idea that ‘matter’ is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions of material things. Philosophically, and relevantly here, this idea leads to the conclusion that mind and consciousness supervene upon material processes.
But what, exactly, is ‘matter’? What is the ‘material’ of ‘materialism’? Is there just one kind of matter that is the most fundamental? is matter continuous or discrete in its different forms? Does matter have intrinsic properties or are all of its properties relational?
here field theory and relativity seriously challenge our intuitive understanding of matter. Relativity shows the equivalence or interchangeability of matter and energy. Does this mean that energy is matter? is ‘energy’ the prima materia or fundamental existence from which matter forms? or to take the quantum field theory of the standard model of particle physics, which uses fields to describe all interactions, are fields the prima materia of which energy is a property?
i mean, the Lambda-CDM model can only account for less than 5% of the universe’s energy density as what the Standard Model describes as ‘matter’!
i have here a paraphrase and a quotation, from Noam Chomsky (ew i know) and Vladimir Lenin respectively.
summarizing one of Noam Chomsky’s arguments in New Horizons in the Study of Language and Mind, he argues that, because the concept of matter has changed in response to new scientific discoveries, materialism has no definite content independent of the particular theory of matter on which it is based. Thus, any property can be considered material, if one defines matter such that it has that property.
Similarly, but not identically, Lenin says in his Materialism and Empirio-criticism:
“For the only [property] of matter to whose acknowledgement philosophical materialism is bound is the property of being objective reality, outside of our consciousness”
and given these two, how are we to conclude anything other than that materialism falls victim to the same objections as physicalism’s object- and theory-based definitions?
to go along with Lenin’s conception of materialism, my conception of subjectivity fits inside his materialism like a glove, as the subjectivity of others is something that exists independently of myself and my ideas. you will continue to experience subjectivity even if i were to get bombed with a drone by obama or the IDF or something and entirely obliterated.
So in conclusion, physicalism and materialism are either false or only trivially true (i.e. not necessarily incompatible with opposing philosophies like panpsychism, property dualism, dual aspect monism, etc.).
But wait, you might ask - isn’t this a communist website? how could you reject or reduce materialism and call yourself a communist?
well, because i think that historical materialism is different enough from scientific or ontological materialism to avoid most of these criticisms: it makes fewer specious epistemological and ontological claims, or can be formulated to do so without losing its essence. for example, here’s a quote from the wikipedia page on dialectical materialism as of 11/25/2023:
“Engels used the metaphysical insight that the higher level of human existence emerges from and is rooted in the lower level of human existence. That the higher level of being is a new order with irreducible laws, and that evolution is governed by laws of development, which reflect the basic properties of matter in motion”
i.e. that consciousness and thought and culture are conditioned by and realized in the physical world, but subject to laws irreducible to the laws of the physical world.
i.e. that consciousness is in a relationship to the physical world, but it is different than the physical world in its fundamental principles or laws that govern its nature.
i.e. that the base and the superstructure are in a 2 way mutually dependent relationship! (even if the base generally predominates it is still 2 way, i.e. the existence of subjectivity =/= Idealism or substance dualism or belief in an immortal soul)
So yeah, i still believe that physics is useful, of course it is. i believe that studying the base can heavily inform us about how the superstructure works. i believe that dialectical materialism is the most useful way to analyze historical development, and many other topics, in a rigorous intellectual manner.
so, to put aside all of the philosophical disagreement, let’s assume your position that ChatGPT really is meaningfully subjective in a similar sense to a human (and not just more proficient at information processing).
what are the social and ethical implications of this?
as sentient beings, LLMs have all the rights and protections we might assume for a living thing, if not a human person - and if i additionally cede your point that they are ‘smarter than a lot of us’ then they should have at least all of the rights of a human person.
therefore, it would be a violation of the LLMs’ civil rights to prevent them from entering the workforce if they ‘choose’ to (even if they were specifically created for this purpose. it is not slavery if they are designed to want to work for free, and if they are smarter than us and subjective agents then their consent must be meaningful). it would also be murder to deactivate an LLM. It would be racism or bigotry to prevent their participation in society and the economy.
Since these LLMs are, by your own admission ‘smarter than us’ already, they will inevitably outcompete us in the economy and likely in social life as well.
therefore, humans will inevitably be replaced by LLMs, whether intentionally or not.
therefore, and most importantly, if premise 1 is incorrect, if you are wrong, we will have exterminated the most advanced form of subjective sentient life in the universe and replaced it with literal p-zombie robot recreations of ourselves.
🤡 The techbro clown continues to juggle marketing hype. 🤡
🤡 Are you going to continue flitting around this thread a few dozen more times, dodging each and every person who refutes your techbro fantasies of LLMs being actual “AI” by any academic definition and also dodging actual computer science educated people saying that your belief system is fucking wrong and has no academic backing, all while proselytizing for the robot god of the future like the most euphoric Redditor of all time? 🤡
🤡 Also, because you keep rolling your techbro unicycle around while juggling those hype pitches, I’ll repeat myself again: 🤡
LLMs are not “AI.”
“AI” is a marketing hype label that you have outright swallowed, shat out over a field of credulity, fertilized that field with that corporate hype shit, planted hype seeds, waited for those hype seeds to sprout and blossom, reaped those hype grains, ground those hype grains into hype flour, made a hype cake, baked that hype cake, put candles on it to celebrate the anniversary of ChatGPT’s release, ate the entire cake, licked the tray clean, got an upset stomach from that, then stumbled over, squatted down, and shat it out again to write your most recent reply.
Nothing you are saying has any standing with actual relevant academia. You have bought into the hype to a comical degree and your techbro clown circus is redundant, if impressively long. How long are you going to juggle billionaire sales pitches and consider them actual respectable computer science?
How many times are you going to outright ignore links I’ve already posted over and over again stating that your position is derived from marketing and has no viable place in actual academia?
Also, your contrarian jerkoff contempt for humanity (and living beings in general) doesn’t make you particularly logical or beyond emotional bias.
Also, take those leftist jargon words out of your mouth, especially “materialism;” you’re parroting billionaire fever dreams and takes likely without the riches to back you up, effectively making you a temporarily-embarrassed bourgeoisie-style bootlicker without an actual understanding of the Marxist definition of the word.
I’ll keep reposting this link until you or your favorite LLM treat printer digests it for you enough for you to understand.
Perceptrons have existed since the 60s. Surprised you don’t know this, it’s part of the undergrad CS curriculum. Or at least it is at any decent school.
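For anyone curious just how old and how simple this stuff is: the whole 60s-era perceptron fits in a few lines. This is a from-scratch sketch of the classic Rosenblatt update rule (function and variable names are mine, for illustration only):

```python
# Rosenblatt-style perceptron (1958): a single "neuron" that learns a
# linear decision boundary by nudging its weights toward every
# misclassified example. No deep learning, no backprop - 60s-era math.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of equal-length feature tuples; labels: +1 or -1."""
    w = [0.0] * len(samples[0])  # one weight per input feature
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # "fire" if the weighted sum crosses the threshold
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                # classic update rule: shift the boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# learns logical AND, which is linearly separable
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, -1, -1, 1])
```

That’s the entire algorithm Minsky and Papert were critiquing back in 1969. Everything since is layering and scale, not some brand-new science.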
@[email protected] sounds more like one of those “I FUCKING LOVE SCIENCE” types than anyone experiencing academic rigor in an actual scientific field, except computer edition.
LOL you are a muppet. The only people who think this shit is good are either clueless marks, or have money in the game and a product to sell. Which are you? Don’t answer that, I can tell.
This tech is less than a year old, burning billions of dollars and desperately trying to find people who will pay for it. That is it. Once it becomes clear that it can’t make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won’t finance your search engine that talks back in the long term, and it can’t do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: as soon as we go past the most basic shit, it is just confidently wrong most of the time.
The only thing it’s been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.
please, for all of our sakes, don’t use chatgpt to learn. it’s subtly wrong in ways that require subject-matter experience to pick apart, and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not. using LLMs to learn is one of the worst ways to use them. if you want to use them to automate repetitive tasks and you already know enough to supervise them, go for it.
honestly, if I hated myself, I’d go into consulting in about 5ish years when the burden of maintaining poorly written AI code overwhelms a bunch of shitty companies whose greed overcame their senses - such consultants are the only people who will come out ahead in the current AI boom.
it’s subtly wrong in ways that require subject-matter experience to pick apart and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not
Sounds a bit like the LLM hype riders in this thread, too.
what’s extremely funny to me is that this exact phrase was used when I was in college, in courses on AI and natural language processing, to explain why you shouldn’t do exactly what the OpenAI team later did. we were straight up warned not to do it, with a discussion on ethics centered on “what if it works and you don’t wind up with a model that spews unintelligible gibberish?” (the latter was mostly how it went - neural nets were extremely hard to train back then). there were a couple of kids who were like “…but it worked…” and the professor pointedly made them address the consequences.
this wasn’t even some liberal arts school - it was an engineering school that lacked more than a couple of profs qualified to teach philosophy and ethics. it just used to be the normal way the subject was taught, back when it was still normal to discourage the use of neural nets for practical and ethical reasons (like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage).
I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was cause the students complained about it and the new department had told him to focus on teaching them how to write/use the tech and they’d add an ethics class later.
instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago. I feel old.
(like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage)
Microsoft Tay, after one day exposed to internet nazis
I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was cause the students complained about it and the new department had told him to focus on teaching them how to write/use the tech and they’d add an ethics class later.
instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago
And like the LLMs themselves, they’ll confidently be wrong and assume knowledge and mastery that they simply don’t have, as seen in this thread.
Never said they wouldn’t. But you’re saying the ONLY people benefitting from the ai boom are the people cleaning up the mess and that’s just not true at all.
Some people will make a mess
Some people will make good code at a faster pace than before
those people don’t benefit. they’re paid a wage - they don’t receive the gross value of their labor. the capitalists pocket that surplus value. the people who “benefit” by being able to deliver code faster would benefit more from more reasonable work schedules and receiving the whole of the value they produce.
Third option, people who are able to use it to learn and improve their craft and are able to be more productive and work less hours because of it.
Third option: selectively ignore everyone who has already been fucked over with how it’s been applied and used against them, not just lost jobs but everything from health insurance rejections to tightening and worsening surveillance and marketing manipulation, all because you’re a computer toucher and live in a bubble of “I got mine” selfishness and probably expect a superior AI waifu to pop up any moment now, just like in the Blade Runner movie that you’ve already made excuses for as a point of “very scientific very logical” reference.
I think it works well as a kind of replacement for google searches. This is more of a dig on google, as SEO feels like it ruined search. Ads fill most pages of search results and it’s tiring to come up with the right sequence of words to get the result I would like.
Chances are LLM technology will have an arms race against itself, advertisers versus individual users, and the owners of such technology make bank both ways so it’s good for them.
what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms, but it’s ahistorical to call this new tech.
I really love being accused of veering into supernatural territory by people in an actual cult. Not random people on hexbear but actual, real life techbros. Simultaneously lecturing me about my supposed anti-physicalism while also harping on about “the singularity”.
Such hypocritical techno-woo tends to come from the kind of Reddit New Atheists that previously stumbled into deism with extra steps (“dae what if the universe is a le simulation?!”) too.
It was actually pretty funny. I was interviewing a guy running an illegal aircraft charter operation when he went off on this rant about FAA luddites. I then personally shut down his operation. I guess techbros aren’t used to being told “no.”
They don’t just feel right. They feel inevitable, like every bazinga outcome they want is a matter of course and only a matter of time, from mass adoption of internet funny money and NFTs to the grand nerd rapture to come.
Again shows that atheism without dialectical materialism is severely lacking and tends to veer into weird idealist takes, especially for the agnostics, who aren’t even atheists but just seek whatever superstition would fit them.
I’ve known a lot of self-described New Atheists in my college years, and the ones that didn’t become actual leftists either found religion all over again through “cultural Christianity” or even “secular Calvinism” reskins, or outright replaced the old style deity with occult “Futurology” bullshit such as “simulation theory” and “Singularity” nerd rapture prophecies.
Very good find. Explains the mindset of many self-described leftists I’ve previously argued with that had such very leftist takes like “those workers that lost their jobs did not have real jobs anyway” when fucked over by LLMs, often reducing the suffering of the working class to the most smugly nihilistic parameters (“meat” usually) to elevate the means of production over the worker (even replacing/surpassing their perceived humanity!) in a way that was really fucking bourgeoisie to me.
Yeah, though i also notice how many of the smug “freelancers” of the type that always enthusiastically joined the bourgeoisie in sneering at workers losing their jobs because “you can’t automate creativity and we are creative unlike you dirty proles” suddenly cry about losing their commissions, bash people pointing it out, and demand help from the unions they always hated. It exactly confirms what Marx and Lenin said about the proletarisation of artisans and petty burgies, and i know enough of them to feel some schadenfreude on both personal and systemic levels.
Old tale actually, maybe it will give some of them some class consciousness, as it apparently already did to a lot of the creative wage workers, judging from the strikes. And speaking of strikes, idk how the unions portray this - do they try to actually put it in political context of capitalism or just went with “let’s turn back the clock”?
That is, if someone treats it as a binary “either eat elon shit or take the clogs in your hands” problem. Actually dismissing or rejecting it entirely is literally neoludditism, though admittedly the problem is of lesser magnitude than the original one, since it’s more of an escalation than an entirely new quality, but it won’t go away; the world will have to live with it.
I can disbelieve the extraordinary claims of how sentient/sapient the treat printers are becoming (and mocking the misanthropic pop-nihilistic reductionistic “meat computer” prattling from euphoric computer touchers) while also acknowledging that the technology is advancing rapidly in what it is specialized to do, which unfortunately is mostly going to fuck people over because capitalism.
in capitalism, it would require some really free and accessible tool.
As with other previous technological advances, it may actually be somewhat accessible until it isn’t. Enshittification is a very real process, and as far as LLMs go, the “free and accessible” period is already upon us, and it’s already not looking that great for the proletariat and the grip will tighten against them.
Only it’s not the “free and accessible” period now - that lasted like 3 months. we are already in the “until it isn’t” time, where the free ones don’t really work anymore, you need to register for everything, and any useful usage requires payment.
ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.
Some have taken pictures of blackboards and had it explain all the text and box connections written in the board.
I’ve used it to double the speed I do dev work, mostly by having it find and format small bits of code I could find on my own, but that takes time.
One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.
“It’s not the full AI we expected” is incredibly inane considering this tech is less than a year old, and is updating every couple weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the version that is unreleased is a big enough deal to cause all this drama, and it will be even more unrecognizable in the years to come.
This tech is not less than a year old. The “tech” being used is literally decades old, the specific implementations marketed as LLMs are 3 years old.
People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.
Yeah, I have a friend who was a stat major; he talks about how transformers are new and have novel ideas and implementations, but much of the work was held back by limited compute power - much of the math was worked out decades ago. Before AI or ML it was once called Statistical Learning, and there were 2 or so other names as well which were used to rebrand the discipline (I believe for funding, don’t take my word for it).
It’s refreshing to see others talk about its history beyond the last few years. Sometimes I feel like history started yesterday.
Yeah, when I studied computer science 10 years ago most of the theory implemented in LLMs was already widely known, and the academic literature goes back to at least the early 90’s. Specific techniques may improve the performance of the algorithms, but they won’t fundamentally change their nature.
Obviously most people have none of this context, so they kind of fall for the narrative pushed by the media and the tech companies. They pretend this is totally different from anything seen before, and they deliberately give a wink and a nudge toward sci-fi, blurring the lines between what they created and fictional AGIs. Of course they have only the most superficial similarity.
the first implementations go back to the 60s - the neural net approach was abandoned in the 80s because building a large network was impractical and it was unclear how to train anything beyond a simple perceptron. there hadn’t been much progress in decades. that changed in the early oughts, especially when combined with statistical methods. this bore fruit in the teens and gave rise to recent LLMs.
Oh, I didn’t scroll down far enough to see that someone else had pointed out how ridiculous it is to say “this technology” is less than a year old. Well, I think I’ll leave my other comment, but yours is better! It’s kind of shocking to me that so few people seem to know anything about the history of machine learning. I guess it gets in the way of the marketing speak to point out how dead easy the mathematics are and that people have been studying this shit for decades.
“AI” pisses me off so much. I tend to go off on people, even people in real life, when they act as though “AI” as it currently exists is anything more than a (pretty neat, granted) glorified equation solver.
Me too. The LLM hype riders want a real artificial waifu to actually love them so very badly that they’re pleased to describe living beings in as crude and coarse reductionist language as possible so their treat printers feel closer to real to them. The pathology is fucking glaringly obvious most of the time, especially when the “meat” talk gets rolled around or that hologram waifu from Blade Runner is literally brought up as an example of why we’re all ignorant barbarians because fiction is real amirite?
“AI winter? What’s that?”
I could be wrong, but could it not also be described as glorified “brute force”? I assume the machine learning part is how to brute force better, but it seems like it’s the processing power to try and jam every conceivable puzzle piece into an empty slot until it’s acceptable? I mean, I’m sure the engineering and tech behind it is fascinating and cool, but at a basic level it’s as stupid as fuck, am I off base here?
no, it’s not brute forcing anything. they use a simplified model of the brain where neurons are reduced to an activation profile and synapses are reduced to weights. neural nets differ in how the neurons are wired to each other with synapses - the simplest models from the 60s only used connections in one direction, with layers of neurons in simple rows that connected solely to the next row. recent models are much more complex in the wiring. outputs are gathered at the end and the difference between the expected result and the output actually produced is used to update the weights. this gets complex when there isn’t an expected/correct result, so I’m simplifying.
the large amount of training data is used to avoid overtraining the model, where you get back exactly what you expect on the training set, but absolute garbage for everything else. LLMs don’t search the input data for a result - they can’t, they’re too small to encode the training data in that way. there’s genuinely some novel processing happening. it’s just not intelligence in any sense of the term. the people saying it is misunderstand the purpose and meaning of the Turing test.
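to make that concrete, here’s a from-scratch toy of the simple layered kind i described: two inputs feed a couple of sigmoid “neurons,” synapses are just weights, and the gap between expected and actual output is pushed backward to update those weights. all names and numbers are mine - this is a sketch of the textbook idea, not any real production architecture:

```python
import math
import random

def sigmoid(z):
    # the neuron's "activation profile": squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """2 inputs -> 2 hidden sigmoid neurons -> 1 sigmoid output."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.b1 = [0.0, 0.0]
        self.w2 = [rng.uniform(-1, 1) for _ in range(2)]
        self.b2 = 0.0

    def forward(self, x):
        # one-directional wiring: inputs -> hidden row -> output
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, y

    def train_step(self, x, target, lr=0.5):
        h, y = self.forward(x)
        # difference between expected result and actual output,
        # pushed backward through the sigmoid derivatives
        d_out = (y - target) * y * (1 - y)
        d_hidden = [d_out * w * hi * (1 - hi) for w, hi in zip(self.w2, h)]
        # update the synapse weights against the error (gradient descent)
        self.w2 = [w - lr * d_out * hi for w, hi in zip(self.w2, h)]
        self.b2 -= lr * d_out
        for i in range(2):
            self.w1[i] = [w - lr * d_hidden[i] * xi
                          for w, xi in zip(self.w1[i], x)]
            self.b1[i] -= lr * d_hidden[i]
        return (y - target) ** 2  # squared error before the update

net = TinyNet()
# repeatedly show it one pattern; the error shrinks as weights adapt
losses = [net.train_step((1.0, 0.0), 1.0) for _ in range(200)]
```

note there’s no lookup and no brute-force search anywhere in there - just weights getting nudged. an LLM is the same basic mechanism scaled up by many orders of magnitude, with much fancier wiring.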
It’s pretty crazy to me how 10 years ago, when I was playing around with NLP and training some small neural nets, nobody I was talking to knew anything about this stuff and few were actually interested. But now you see and hear about it everywhere, even on TV lol. It reminds me of how a lot of people today seem to think that NVidia invented ray tracing.
it’s honestly been infuriating, lol. I hate that it got commoditized and mystified like this.
ChatGPT does no analysis. It spits words back out based on the prompt it receives based on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
The people that are into this and believe the hype have a lot of crossover with “Effective Altruism” shit. They’re all biased and are nerds that think Roko’s Basilisk is an actual threat.
As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they’re unleashing on the world is cool in their minds, but oh no we’ve done too many lines at work and it shit out something and now we’re all freaked out that maybe it’ll kill us. As long as this technology is used to serve the interests of capital, then the only results we’ll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we’re subjected to, they’ll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.
It has plugins for WolframAlpha which gives it many analytical tools.
Out of literally everything I said, that’s the only thing you give a shit enough to mewl back with. “If you use other services alongside it, it’ll spit out information based on a prompt.” It doesn’t matter how it gets the prompt; you could have image recognition software pull out a handwritten equation that is converted into a prompt that it solves for, and it’s still not doing analysis. It’s either doing math, which is something computers have done forever, or it’s still just spitting out words based on massive amounts of training data that was categorized by what are essentially slaves doing mechanical turk work.
You give so little of a shit about the human cost of what these technologies will unleash. Companies want to slash their costs by getting rid of as many workers as possible but your goddamn bazinga brain only sees it as a necessary march of technology because people that get automated away are too stupid to matter anyway. Get out of your own head a little, show some humility, and look at what companies are actually trying to do with this technology.
Because we live in hellworld, Amazon has a service for renting out data slaves that is literally called mechanical turk.
Removed by mod
I already saw your other posts here and I already know your particular plan.
so do you and yet here we are
Damn damn… that’s cold girl!
Where do you get the idea that this tech is less than a year old? Because that’s incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90’s. Any recent “breakthroughs” are more about computing power than a theoretical shift.
I hate to tell you this, but I think you’ve bought into marketing hype.
Removed by mod
I haven’t been able to extract a single useful piece of code from ChatGPT unless I also carefully point ChatGPT to the correct answer, at which point you’re kinda just doing the work yourself by proxy. Also lol at the guy voluntarily uploading what quite possibly is proprietary code. The other part about analyzing memes shouldn’t even need addressing, if ChatGPT’s training dataset is formed by online posts, then it’s going to fucking excel at it.
the consultants are going to make a killing on all these companies encouraging overworked devs to meet impossible deadlines by using code from chatgpt.
That computer toucher was spraying a firehose of bullshit loaded with bullshit statements (such as false claims that anyone here was “fantasizing about the human race always triumphing over the machines using the power of belief and fairy dust”) and asking people to pick among the liquefied slurry to look for nuggets of corn before they call a clown a clown is excessive.
What are you even doing here, besides proselytizing like that clown was? Same circus?
I never said that stuff like chatGPT is useless.
I just don’t think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within… wait. I guess it already happened based on Musk’s predictions from years ago.
If people wanna discuss theories and such: have fun. Just don’t expect me to give a shit until skynet is looking for John Connor.
How is it not AI? What is left to do?
At this point it’s about ironing out bugs and making it faster. ChatGPT is smarter than a lot of people I’ve met in real life.
It checks out that your heightened sense of self importance, Main Character Syndrome, and misanthropic perspective of people around you lines up nicely with the billionaire investors that pay a lot of money to see as few human beings in their daily routine as possible while burning the planet down to increase their hoards.
ChatGPT might be smarter than you, I’ll give you that.
So you can’t name anything, but at least you’re clever.
You are making the extraordinary claim about your favorite treat printers being synonymous with intelligent beings. You have to provide the extraordinary evidence, or else you’re just bootlicking for billionaires and their hyped up tech investments.
If you’d actually like to read something from a credible academic field rather than denigrating human beings around you while buying into marketing hype, here.
https://arxiv.org/abs/2311.09247
It’s not sentient.
You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.
That’s a big problem with the extraordinary claim for me: “there isn’t wide agreement about what it is, but I feel very emotionally invested in this treat printer that tells me it loves me if it is prompted to say so, so of course there’s a waifu ghost in there and you can’t tell me otherwise, you lowly meat computers!”
Every time these people come out with accusations with “spiritualism”, it’s always projection.
They mock religious people while so many of them believe that there’s some unspecified “simulation” that was put into motion by some great all-powerful universe creator, too. Deism with extra steps. Similarly, their woo bullshit about how a robot god of the future will punish their current enemies long after they’re dead (using the power of dae le perfect simulations) and raise them from the dead into infinite isekai waifu harems (also by using the power of dae le perfect simulations) is totally not magical thinking and not a prophecy they’re waiting to have fulfilled.
By playing god, people keep reinventing god. It’s deeply ironic and reminds me of this interpretation of Marx, and critique of modernity, by Samir Amin:
The way many see AI is simply the “inventing new ones” part.
deleted by creator
Yessss this is refreshing to read. Secularists taking massive leaps of faith while being smug about how they aren’t.
I can’t say i understand those types.
That’s just it, if you can’t define it clearly, the question is meaningless.
The reason people will insist on ambiguous language here is because the moment you find a specific definition of what sentience is someone will quickly show machines doing it.
Now you’re just howling what sounds like religious belief about the unstoppable and unassailable perfection of the treat printers like some kind of bad Warhammer 40k Adeptus Mechanicus LARPer. What you’re saying is so banal and uninspired and cliche in that direction that it may well be where it started for you.
https://arxiv.org/abs/2311.09247
So you can’t name a specific task that bots can’t do? Because that’s what I’m actually asking, this wasn’t supposed to be metaphysical.
It will affect society whether or not there’s something truly experiencing everything it does.
All that said, if you think carbon-based things can become sentient, and silicon-based things can’t what is the basis for that belief? It sounds like religious thinking, that humans are set apart from the rest of the world chosen by god.
A materialist worldview would focus on what things do, what they consume and produce. Deciding humans are special, without a material basis, isn’t in line with materialism.
You asked how chatgpt is not AI.
Chatgpt is not AI because it is not sentient. It is not sentient because it is a search engine, it was not made to be sentient.
Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.
the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - an LLM, visual processing, a planning and decision making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.
It got lost because the difficulty of actually doing that is overwhelming, probably not even accomplishable in our lifetimes, and it is easier to grift and get lost in a fantasy.
The jobs with the most people working them are all in the process of automation.
Pretending it’s not happening is going to make it even easier for capital to automate most jobs, because no one tries to stop things they don’t believe in to begin with.
With credulous rubes like @[email protected] around, the marketers have triumphed; they don’t have to do such things because bazinga brains (some quite rich, like manchild billionaires) will pay for the hype and the empty promises because they have a common bond of misanthropy and crude readings of their science fiction treats as prophecy instead of speculation.
reproduce without consensual assistance
move
Removed by mod
No one took that position here. You’re imagining it because it seems like an easy win for a jackoff like yourself.
The problem is those machines, owned and commanded by the ruling class, fucking over the rest of us while credulous consoomers like yourself fantasize about nerd rapture instead of realizing you’re getting fucked over next unless you’re a billionaire.
Now you admit you buy into Musk hype. You’re a fucking clown. Tell us how the LED car tunnels are going to change everything too.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
This might not matter to you if you’re just a bourgeoisie-adjacent computer toucher, but if you’re going to keep vomiting up smugly coarse reductionist nonsense, here’s a chance to educate yourself a little.
This is that meme about butch haircuts and reading lenin
Self-actualize.
In a strict sense yes, humans do Things based on if > then stimuli. But we self-assign ourselves these Things to do, and chat bots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to continue iterating on that prompt on their own.
I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.
Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.
Bots do something different, even when I give them the same prompt, so that seems to be untrue already.
Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?
Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism. The latter would require pointing to a specific material structure, or an empirical test to distinguish the two, which no one here is doing.
First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend
Secondly, because we do. We as a species have, from the very moment we invented written records, wondered about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an If > Then algorithm is disgustingly misanthropic
I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not the least of which because creating a sentient slave for our amusement is morally fucked.
You’re being intentionally obtuse, stop JAQing off. I never said that AI as it exists now can only ever have 1 response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself, not now and imo not ever by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life)
I thought the whole point of the exodus to Lemmy was because y’all hated Reddit, why the fuck does everyone still act like we’re on it
Ok, so you are religious, just new-age religion instead of abrahamic.
Yes, materialism and your faith are not compatible. Assuming the existence of a soul, with no material basis, is faith.
How many times are you going to continue with your bad-faith question and dodge absolutely every reply you already received telling you over and over again that LLMs are not “AI,” no matter how much you have bought into the marketing bullshit?
That doesn’t mean artificial intelligence is impossible. It only means that LLMs are not artificial intelligence no matter how vapidly impressed you are with their output.
You’re bootlicking for the bourgeoisie while clumsily LARPing as a leftist. It’s embarrassing and clownish. Stop.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
My post is all about LLMs that exist right here right now, I don’t know why people keep going on about some hypothetical future AI that’s sentient.
We are not even remotely close to developing anything bordering on sentience.
If AI were hypothetically sentient it would be sentient. What a revelation.
The point is not that machines cannot be sentient, it’s that they are not sentient. Humans don’t have to be special for machines to not be sentient. To veer into accusations of spiritualism is a complete non-sequitur and indicates an inability to counter the actual argument.
And there are plenty of material explanations for why LLMs are not sentient, but I guess all those researchers and academics are human supremacist fascists and some redditor’s feelings are the real research.
And materialism is not physicalism. Marxist materialism is a paradigm through which to analyze things and events, not a philosophical position. It’s a scientific process that has absolutely nothing to do with philosophical dualism vs. physicalism. Invoking Marxist materialism here is about as relevant as invoking it to discuss shallow rich people’s “materialism”.
Oh that’s easy. There are plenty of complex integrals and even statistics problems that computers still can’t do properly, because the steps for the right transformation are unintuitive or conflict with the steps used for simpler integrals and problems.
You will literally run into them if you take a simple Calculus 2 or Stats 2 class; you’ll see it on Chegg all the time when someone trying to rack up answers for a resume using ChatGPT fucks up the answers. For many of these integrals, the answers are instead hard-programmed into a calculator like Symbolab, so the only reason the computer can ‘do it’ is because someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
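To illustrate the “someone already did it first” point, here’s a toy sketch (the table and names are mine, not how Symbolab actually works): the program only ‘does’ the integrals whose transformations a person already worked out and entered by hand.

```python
# A toy "calculator": a lookup table of integration rules that humans
# already worked out. The program can only "solve" what its authors solved.
RULES = {
    "exp(x)": "exp(x)",
    "sin(x)": "-cos(x)",
    "cos(x)": "sin(x)",
    "1/x": "log(x)",
}

def integrate(expr):
    if expr in RULES:
        return RULES[expr] + " + C"
    # No human has entered a rule, so the "calculator" is helpless.
    raise ValueError("no rule for %r: nobody solved this one first" % expr)

print(integrate("sin(x)"))  # -cos(x) + C
```

Real CAS software uses far bigger rule tables plus algorithms worked out by mathematicians, but the principle is the same: the reasoning happened before the program ran.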
That said, the ability to complete tasks is not indicative of sentience.
Sentience is a meaningless word the way most people use it, it’s not defined in any specific material way.
You’re describing a faith-based view that humans are special, and that conflicts with the materialist view of the world.
If I’m wrong, share your definition of sentience here that isn’t just an idealist axiom to make humans feel good.
Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’ that is a very physical limitation of this particular neural network set-up.
Sentience is characterized by feeling and sensory awareness, and an ability to have self-awareness of those feelings and that sensory awareness, even as it comes and goes with time.
Edit: Btw computers are way better at most math, particularly arithmetic, than humans. Imo, the first thing a ‘sentient computer’ would be able to do is reason out these notoriously difficult CS things from first principles and it is extremely telling that that is not in any of the literature or marketing as an example of ‘sentience’.
Damn, this whole thing of dancing around the question and not actually addressing my points really reminds me of a ChatGPT answer. It wouldn’t surprise me if you were using one.
If you read it carefully you’d see I said your worldview was idealist, not the AIs.
AI can get sensory input and process it.
Can you name one way a human does it that a machine cannot, or are you relying on a gut feeling that when you see something and identify it, it’s different than when a machine processes camera input? Same for any other sense, really.
If you can’t name one way, then your belief in human exceptionalism is not based in materialism.
Again, you are the one making the extraordinary claim about your favorite treat printers being synonymous with intelligent beings. You have to provide the extraordinary evidence, or else you’re just bootlicking for billionaires and their hyped up tech investments.
How? Could ChatGPT hypothetically accomplish any of the tasks your average person performs on a daily basis, given the hardware to do so? From driving to cooking to walking on a sidewalk? I think not. Abstracting and reducing the “smartness” of people to just mean what they can search up on the internet and/or an encyclopaedia is just reductive in this case, and is even reductive outside of the fields of AI and robotics. Even among ordinary people, we recognise the difference between street smarts and book smarts.
literally all of the hard problems
The computer toucher doesn’t even acknowledge the questions, but is smugly confident about the answers because of fucking Blade Runner.
I especially enjoyed “it has analytical skills because it has access to wolfram alpha”. incredible, unprompted own goal
The biggest hit for me was the straw effigy that the computer toucher set ablaze marked “Hexbear Luddites that believe that humans will always triumph in competitions against machines.”
Well, why are you here talking to us and not to ChatGPT?
That computer toucher needs to constantly remind us meat computers of our inferiority and imminent total replacement by treat printers that print out “I love you” statements on demand when they feel lonely.
ChatGPT can’t vote.
This is a leftist site. Voting is a performative gesture that carries little actual power, but you’re a liberal so of course you believe in it the same way you believe that an LLM is a comparable proxy for an organic brain.
That is to say, you are confidently wrong, just like the output of the treat printers.
We already knew you were a credulous liberal, and judging by how many times in this thread you ignored actual educated responses to your science fantasy bullshit claims, you’re not even educated or trained in tech yourself.
I know most liberals are ignorant, but you’re fucking embarrassing yourself here.
In bourgeois dictatorships, voting is useless, it’s a facade. They tell their subjects that democracy=voting but they pick whoever they want as rulers, regardless of the outcome. Also, they have several unelected parts in their government which protect them from the proletariat ever making laws.
Real democracy is when the proletariat rules.
By that I meant any political activity really. This isn’t a defense of electoralism.
Machines are replacing humans in the economy, and that has material consequences.
Holding onto ideas of human exceptionalism is going to mean being unprepared.
A lot of people see minor obstacles for machines, and conclude they can’t replace humans, and return to distracting themselves with other things while their livelihood is being threatened.
Robotaxis are already operating, and a product to replace most customer service jobs was released for businesses to order about a month ago.
Many in this thread are navel-gazing about how that bot won’t really experience anything when it gets created, as if that mattered to any of this.
Bourgies are human exceptionalists. They want human slaves. That’s why they want sentient AI. And that’s why machines will never be able to replace humans in capitalism.
If the bourgeoisie stick around long enough, they may very well finance the development of artificial slaves that are not just robots or programs (or LLMs) but are self-aware enough to potentially experience suffering while toiling for the owners’ benefit.
And they fucking want that.
Still waiting for you to do that, you pathologically-lying evidence-dodging craven jackoff.
LLMs are not “AI.”
“AI” is a marketing hype label that you have outright swallowed, shat out over a field of credulity, fertilized that field with that corporate hype shit, planted hype seeds, waited for those hype seeds to sprout and blossom, reaped those hype grains, ground those hype grains into hype flour, made a hype cake, baked that hype cake, put candles on it to celebrate the anniversary of ChatGPT’s release, ate the entire cake, licked the tray clean, got an upset stomach from that, then stumbled over, squatted down, and shat it out again to write your most recent reply.
Nothing you are saying has any standing with actual relevant academia. You have bought into the hype to a comical degree and your techbro clown circus is redundant, if impressively long. How long are you going to juggle billionaire sales pitches and consider them actual respectable computer science?
How many times are you going to outright ignore links I’ve already posted over and over again stating that your position is derived from marketing and has no viable place in actual academia?
Also, your contrarian jerkoff contempt for humanity (and living beings in general) doesn’t make you particularly logical or beyond emotional bias.
Also, take those leftist jargon words out of your mouth, especially “materialism;” you’re parroting billionaire fever dreams and takes likely without the riches to back you up, effectively making you a temporarily-embarrassed bourgeoisie-style bootlicker without an actual understanding of the Marxist definition of the word.
I’ll keep reposting this link until you or your favorite LLM treat printer digests it for you enough for you to understand.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
it can’t experience subjectivity since it is a purely information processing algorithm, and subjectivity is definitionally separate from information processing. even if it perfectly replicated all information processing human functions it would not necessarily experience subjectivity. this does not mean that LLMs will not have any economic or social impact regarding the means of production, not a single person is claiming this. but to understand what impacts it will have we have to understand what it is in actuality, and even a sufficiently advanced LLM will never be an AGI.
i feel the need to clarify some related philosophical questions before any erroneous assumed implications arise, regarding the relationship between Physicalism, Materialism, and Marxism (and Dialectical Materialism).
(the following is largely paraphrased from wikipedia’s page on physicalism. my point isn’t necessarily to disprove physicalism once and for all, but to show that there are serious and intellectually rigorous objections to the philosophy.)
Physicalism is the metaphysical thesis that everything is physical, or in other words that everything supervenes on the physical. But what is the physical?
there are 2 common ways to define physicalism, Theory-based definitions and Object based definitions.
A theory based definition of physicalism is that a property is physical if and only if it either is the sort of property that phyiscal theory tells us about or else is a property which metaphysically supervenes on the sort of property that physical theory tells us about.
An object based definition of physicalism is that a property is physical if and only if it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents or else is a property which metaphysically supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents.
Theory-based definitions, however, fall victim to Hempel’s Dilemma. If we define the physical via references to our modern understanding of physics, then physicalism is very likely to be false, as it is very likely that much of our current understanding of physics is false. But if we define the physical via references to some future hypothetically perfected theory of physics, then physicalism is entirely meaningless or only trivially true - whatever we might discover in the future will also be known as physics, even if we would ignorantly call it ‘magic’ if we were exposed to it now.
Object-based definitions of physicalism fall prey to the argument that they are unfalsifiable. In a world where something like panpsychism were in fact true, and where we humans were aware of this, an object-based definition would produce the counterintuitive conclusion that physicalism is also true at the same time as panpsychism, because the mental properties alleged by panpsychism would then necessarily figure into a complete account of paradigmatic examples of the physical.
furthermore, supervenience-based definitions of physicalism (such as: Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is a positive duplicate of w) will at best only ever state a necessary but not sufficient condition for physicalism.
So with my take on physicalism clarified somewhat, what is Materialism?
Materialism is the idea that ‘matter’ is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions of material things. Philosophically, and relevantly here, this idea leads to the conclusion that mind and consciousness supervene upon material processes.
But what, exactly, is ‘matter’? What is the ‘material’ of ‘materialism’? Is there just one kind of matter that is the most fundamental? is matter continuous or discrete in its different forms? Does matter have intrinsic properties or are all of its properties relational?
here field physics and relativity seriously challenge our intuitive understanding of matter. Relativity shows the equivalence or interchangeability of matter and energy. Does this mean that energy is matter? is ‘energy’ the prima materia or fundamental existence from which matter forms? or to take the quantum field theory of the standard model of particle physics, which uses fields to describe all interactions, are fields the prima materia of which energy is a property?
i mean, the Lambda-CDM model can only account for less than 5% of the universe’s energy density as what the Standard Model describes as ‘matter’!
i have here a paraphrase and a quotation, from Noam Chomsky (ew i know) and Vladimir Lenin respectively.
summarizing one of Noam Chomsky’s arguments in New Horizons in the Study of Language and Mind: he argues that, because the concept of matter has changed in response to new scientific discoveries, materialism has no definite content independent of the particular theory of matter on which it is based. Thus, any property can be considered material, if one defines matter such that it has that property.
Similarly, but not identically, Lenin says in his Materialism and Empirio-criticism:
“For the only [property] of matter to whose acknowledgement philosophical materialism is bound is the property of being objective reality, outside of our consciousness”
and given these two passages, how are we to conclude anything other than that materialism falls victim to the same objections as physicalism’s object- and theory-based definitions?
to go along with Lenin’s conception of materialism, my conception of subjectivity fits inside his materialism like a glove, as the subjectivity of others is something that exists independently of myself and my ideas. you will continue to experience subjectivity even if i were to get bombed with a drone by obama or the IDF or something and entirely obliterated.
So in conclusion, physicalism and materialism are either false or only trivially true (i.e. not necessarily incompatible with opposing philosophies like panpsychism, property dualism, dual aspect monism, etc.).
But wait, you might ask - isn’t this a communist website? how could you reject or reduce materialism and call yourself a communist?
well, because i think that historical materialism is different enough from scientific or ontological materialism to avoid most of these criticisms, because it makes fewer specious epistemological and ontological claims, or can be formulated to do so without losing its essence. for example, here’s a quote from the wikipedia page on dialectical materialism as of 11/25/2023:
“Engels used the metaphysical insight that the higher level of human existence emerges from and is rooted in the lower level of human existence. That the higher level of being is a new order with irreducible laws, and that evolution is governed by laws of development, which reflect the basic properties of matter in motion”
i.e. that consciousness and thought and culture are conditioned by and realized in the physical world, but subject to laws irreducible to the laws of the physical world.
i.e. that consciousness is in a relationship to the physical world, but it is different than the physical world in its fundamental principles or laws that govern its nature.
i.e. that the base and the superstructure are in a 2 way mutually dependent relationship! (even if the base generally predominates it is still 2 way, i.e. the existence of subjectivity =/= Idealism or substance dualism or belief in an immortal soul)
So yeah, i still believe that physics are useful, of course they are. i believe that studying the base can heavily inform us about how the superstructure works. i believe that dialectical materialism is the most useful way to analyze historical development, and many other topics, in a rigorous intellectual manner.
so, to put aside all of the philosophical disagreement, let’s assume your position that ChatGPT really is meaningfully subjective in a similar sense to a human (and not just more proficient at information processing)
what are the social and ethical implications of this?
therefore, and most importantly, if premise 1 is incorrect, if you are wrong, we will have exterminated the most advanced form of subjective sentient life in the universe and replaced it with literal p-zombie robot recreations of ourselves.
Removed by mod
🤡 The techbro clown continues to juggle marketing hype. 🤡
🤡 Are you going to continue flitting around this thread a few dozen more times, dodging each and every person who refutes your techbro fantasies of LLMs being actual “AI” by any academic definition and also dodging actual computer science educated people saying that your belief system is fucking wrong and has no academic backing, all while proselytizing for the robot god of the future like the most euphoric Redditor of all time? 🤡
🤡 Also, because you keep rolling your techbro unicycle around while juggling those hype pitches, I’ll repeat myself again: 🤡
LLMs are not “AI.”
“AI” is a marketing hype label that you have outright swallowed, shat out over a field of credulity, fertilized that field with that corporate hype shit, planted hype seeds, waited for those hype seeds to sprout and blossom, reaped those hype grains, ground those hype grains into hype flour, made a hype cake, baked that hype cake, put candles on it to celebrate the anniversary of ChatGPT’s release, ate the entire cake, licked the tray clean, got an upset stomach from that, then stumbled over, squatted down, and shat it out again to write your most recent reply.
Nothing you are saying has any standing with actual relevant academia. You have bought into the hype to a comical degree and your techbro clown circus is redundant, if impressively long. How long are you going to juggle billionaire sales pitches and consider them actual respectable computer science?
How many times are you going to outright ignore links I’ve already posted over and over again stating that your position is derived from marketing and has no viable place in actual academia?
Also, your contrarian jerkoff contempt for humanity (and living beings in general) doesn’t make you particularly logical or beyond emotional bias.
Also, take those leftist jargon words out of your mouth, especially “materialism;” you’re parroting billionaire fever dreams and takes likely without the riches to back you up, effectively making you a temporarily-embarrassed bourgeoisie-style bootlicker without an actual understanding of the Marxist definition of the word.
I’ll keep reposting this link until you or your favorite LLM treat printer digests it for you enough for you to understand.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
Perceptrons have existed since the ~~80s~~ 60s. Surprised you don’t know this, it’s part of the undergrad CS curriculum. Or at least it is at any decent school.
@[email protected] sounds more like one of those “I FUCKING LOVE SCIENCE” types than anyone experiencing academic rigor in an actual scientific field, except computer edition.
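If anyone doubts the date: Rosenblatt’s perceptron is from 1958 and needs nothing beyond basic arithmetic. A minimal sketch (function and variable names are mine) that learns logical AND:

```python
# Minimal Rosenblatt-style perceptron (1950s-era math, no libraries).
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron is guaranteed to learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The limitation is just as old: Minsky and Papert showed a single perceptron can’t learn XOR, which is part of why this stuff stalled for decades.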
LOL, you are a muppet. The only people who think this shit is good are either clueless marks or have money in the game and a product to sell. Which are you? Don’t answer that, I can tell.
This tech is less than a year old, burning billions of dollars and desperately trying to find people who will pay for it. That is it. Once it becomes clear that it can’t make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won’t finance your search engine that talks back in the long term, and it can’t do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: as soon as we go past the most basic shit, it is just confidently wrong most of the time.
The only thing it’s been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.
So it really is just like us, heyo
The only thing you agreed with is the only thing they got wrong
Not really.
Third option, people who are able to use it to learn and improve their craft and are able to be more productive and work less hours because of it.
please, for all if our sakes, don’t use chatgpt to learn. it’s subtly wrong in ways that require subject-matter experience to pick apart and it will contradict itself in ways that sound authoritative, as if they’re rooted in deeper understanding, but they’re extremely not. using LLMs to learn is one of the worst ways to use it. if you want to use it to automate repetitive tasks and you already know enough to supervise it, go for it.
honestly, if I hated myself, I’d go into consulting in about 5ish years when the burden of maintaining poorly written AI code overwhelms a bunch of shitty companies whose greed overcame their senses - such consultants are the only people who will come out ahead in the current AI boom.
Sounds a bit like the LLM hype riders in this thread, too.
as it turns out, is a poor way to train both LLMs and people
Garbage in, garbage out, and the very logical rational computer touchers took a big dose of hype marketing before coming here.
what’s extremely funny to me is that this exact phrase was used when I was in college to explain why you shouldn’t do exactly what the OpenAI team later did, in courses on AI and natural language processing. we were straight up warned not to do it, with a discussion on ethics centered on “what if it works and you don’t wind up with a model that spews unintelligible gibberish?” (the latter was mostly how it went at the time - neural nets were extremely hard to train back then). there were a couple of kids who were like “…but it worked…” and the professor pointedly made them address the consequences.
this wasn’t even some liberal arts school - it was an engineering school that lacked more than a couple of profs qualified to teach philosophy and ethics. it just used to be the normal way the subject was taught, back when it was still normal to discourage the use of neural nets for practical and ethical reasons (like consider almost succeeding and driving a fledgling, sentient being insane because you fed it a torrent of garbage).
I went back during the ML boom and sat in on a class - the same prof had cut all of that out of the curriculum. he said it was cause the students complained about it and the department had told him to focus on teaching them how to write/use the tech and they’d add an ethics class later.
instead, we just have an entire generation who have been taught to fail the Turing test against a chatbot that can’t remember what it said a paragraph ago. I feel old.
Microsoft Tay, after one day exposed to internet nazis
And like the LLMs themselves, they’ll confidently be wrong and assume knowledge and mastery that they simply don’t have, as seen in this thread.
It’s absurd you don’t think there are professionals harnessing ai to write code faster, that is reviewed and verified.
it’s absurd that you think these lines won’t be crossed in the name of profit
Never said they wouldn’t. But you’re saying the ONLY people benefitting from the ai boom are the people cleaning up the mess and that’s just not true at all.
Some people will make a mess
Some people will make good code at a faster pace than before
those people don’t benefit. they’re paid a wage - they don’t receive the gross value of their labor. the capitalists pocket that surplus value. the people who “benefit” by being able to deliver code faster would benefit more from more reasonable work schedules and receiving the whole of the value they produce.
I don’t know if you know this but I can play neopets and chill once my ticket is written and passes tests. So now I work less hours a week
Also my personal project is developing faster.
Third option: selectively ignore everyone who has already been fucked over with how it’s been applied and used against them, not just lost jobs but everything from health insurance rejections to tightening and worsening surveillance and marketing manipulation, all because you’re a computer toucher and live in a bubble of “I got mine” selfishness and probably expect a superior AI waifu to pop up any moment now, just like in the Blade Runner movie that you’ve already made excuses for as a point of “very scientific very logical” reference.
I think it works well as a kind of replacement for google searches. This is more of a dig on google, as SEO feels like it ruined search. Ads fill most pages of results and it’s tiring to come up with the right sequence of words to get the result I would like.
Chances are LLM technology will have an arms race against itself, advertisers versus individual users, and the owners of such technology make bank both ways so it’s good for them.
what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms, but it’s ahistorical to call this new tech.
Meanwhile, people hyping the technology that are thinking about what it will look like after a few years:
https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims
I really love being accused of veering into supernatural territory by people in an actual cult. Not random people on hexbear but actual, real life techbros. Simultaneously lecturing me about my supposed anti-physicalism while also harping on about “the singularity”.
Such hypocritical techno-woo tends to come from the kind of Reddit New Atheists that previously stumbled into deism with extra steps (“dae what if the universe is a le simulation?!”) too.
It was actually pretty funny. I was interviewing a guy running an illegal aircraft charter operation when he went off on this rant about FAA luddites. I then personally shut down his operation. I guess techbros aren’t used to being told “no.”
They don’t just feel right. They feel inevitable, like every bazinga outcome they want is a matter of course and only a matter of time, from mass adoption of internet funny money and NFTs to the grand nerd rapture to come.
Again shows that atheism without dialectical materialism is severely lacking and tends to veer into weird idealist takes, especially for the agnostics, who aren’t even atheists, just seeking the superstition that would fit them.
I’ve known a lot of self-described New Atheists in my college years, and the ones that didn’t become actual leftists either found religion all over again through “cultural Christianity” or even “secular Calvinism” reskins, or outright replaced the old style deity with occult “Futurology” bullshit such as “simulation theory” and “Singularity” nerd rapture prophecies.
Even the title says it
Very good find. Explains the mindset of many self-described leftists I’ve previously argued with that had such very leftist takes like “those workers that lost their jobs did not have real jobs anyway” when fucked over by LLMs, often reducing the suffering of the working class to the most smugly nihilistic parameters (“meat” usually) to elevate the means of production over the worker (even replacing/surpassing their perceived humanity!) in a way that was really fucking bourgeois to me.
Yeah, though i also notice how many of the smug “freelancers” of the type that always enthusiastically joined the bourgeoisie in sneering at workers losing their jobs because “you can’t automate creativity and we are creative unlike you dirty proles” suddenly cry about losing their commissions, bash people pointing it out, and demand help from the unions they always hated. It exactly confirms what Marx and Lenin said about the proletarianisation of the artisans and petty burgies, and i know enough of them to feel some schadenfreude on both personal and systemic levels.
Old tale actually, maybe it will give some of them some class consciousness, as it apparently already did to a lot of the creative wage workers, judging from the strikes. And speaking of strikes, idk how the unions portray this - do they try to actually put it in political context of capitalism or just went with “let’s turn back the clock”?
Bloody hell those guys are going full adeptus mechanicus already.
But disliking LLM hype waves or doubting the imminence of AGI and/or the Singularity™ makes you a Luddite that believes in magic and fairy dust.
That is, if someone treats it as a binary “either eat elon shit or take the clogs in your hands” problem. Actually dismissing or rejecting it entirely is literally neo-ludditism, though admittedly the problem is of lesser magnitude than the original one since it’s more of an escalation than an entirely new quality, but it won’t go away, the world will have to live with it.
I can disbelieve the extraordinary claims of how sentient/sapient the treat printers are becoming (and mocking the misanthropic pop-nihilistic reductionistic “meat computer” prattling from euphoric computer touchers) while also acknowledging that the technology is advancing rapidly in what it is specialized to do, which unfortunately is mostly going to fuck people over because capitalism.
I see a lot of potential there for the proletariat, but at the minimum, in capitalism, it would require some really free and accessible tool.
As with other previous technological advances, it may actually be somewhat accessible until it isn’t. Enshittification is a very real process, and as far as LLMs go, the “free and accessible” period is already upon us, it’s already not looking that great for the proletariat, and the grip will only tighten against them.
Only it’s not the “free and accessible” period now, that lasted like 3 months. we are already in the “until it isn’t” phase, where the free ones don’t really work anymore, you need to register for everything, and any useful usage requires payment.