Cool, my work at my company on AI for medical scans has detected thousands upon thousands of tumors and respiratory diseases, long before even the most well-trained doctor could have spotted them, and as a result has saved many of those people’s lives. But it’s good to know we’re all just lazy pieces of shit because we use AI.
When people talk about “AI” nowadays, they’re usually talking about LLMs and other generative AI, especially if it’s used to replace workers or human effort. Analytical AI is perfectly valid and is a wonderful tool!
Assuming what you’re describing works (and I have no particular reason to doubt it, beyond the generally poor reputation of AI), that’s a different beast from “lol I fired all the copywriters, artists, and support staff so I, the owner, could keep more profits for myself!” Or, “I didn’t pay attention in English 101 and don’t know how to write, so I’ll have expensive autosuggest do it for me.”
Yeah, that’s my point. AI has a lot of problems that need to be addressed, but people are getting so mad about AI that the conversation around it is getting more and more extreme, to the point that people are talking about all AI being bad.
I think what was being implied, though, was that the original poster was saying that any use or mention of AI by a company immediately invalidates it, regardless of whether any specific red flags, like firing workers, are present. (E.g., “using AI” was the only prerequisite they mentioned.)
So it seems like, based on the original wording, if they saw a hospital saying “we use top-of-the-line AI to identify tumors and respiratory diseases early,” they would just disregard that hospital entirely, without actually caring how the AI works, how it’s implemented, or how it affects the employment of the other people working there, even though it’s wholly beneficial.
At least, that’s just my reading of it.
Or the original poster was going by what is commonly referred to as AI today and not the multi-generational tree of technologies that fall under the academic definition of AI.
Register is a thing.
Machine learning is not artificial intelligence.
No, they are the same thing.
The core algorithm we built upon is practically the same one used by AI image generators; the main differences are that our convolutional layers are deeper and more numerous, and that we don’t do any of the GAN stuff the newer image generators use.
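For anyone curious what that looks like in practice, here’s a minimal sketch of a plain convolutional classifier, as opposed to a generator. To be clear, this assumes PyTorch, and the layer counts, channel widths, and input size are all made up for illustration, not our actual model:

```python
# Hypothetical sketch of a deep CNN classifier for medical scans.
# Illustrative only: layer counts, channels, and input size are assumptions.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # A stack of convolutional blocks ("deeper and more of them"):
        # each block doubles the channel count and halves spatial resolution.
        channels = [1, 32, 64, 128, 256]  # 1 input channel: grayscale scan
        blocks = []
        for c_in, c_out in zip(channels, channels[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 regardless of input size
            nn.Flatten(),
            nn.Linear(channels[-1], num_classes),  # e.g. tumor / no tumor
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Plain supervised classification -- no GAN generator/discriminator pair.
model = ScanClassifier()
logits = model(torch.randn(1, 1, 512, 512))  # one 512x512 grayscale scan
```

Same convolutional building blocks an image generator uses, just arranged to output a diagnosis instead of an image.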
The problem is that anything even remotely related to AI is just being called “AI,” whether it’s by the average person or marketing people.
So when you go to a company’s website and you see “powered by AI,” they could be talking about LLMs, or an ML model to detect cancer, and the average person won’t know the difference between the technologies.
So if someone universally rejects anything that says it “uses AI” just because what’s usually called “AI” is badly implemented LLMs that make the experience worse, they’re inevitably going to catch nearly every ML model in the crossfire too, since most companies call their ML use cases “AI powered.” That means rejecting companies that develop models that detect tumors, predict protein folding, identify anomalies in other health metrics, optimize traffic routes in cities, etc., even when those use cases aren’t even related to LLMs and all the flaws they often bring.