Previously it would say 2. GPT thinks Wailord is the heaviest Pokémon, Google thinks you can buy a rune pickaxe on OSRS at any trader store. Was it Google that suggested a healthy dose of glue for pizza to keep the toppings on?
AI is right when you use it for the correct things. The user is wrong when they think it is supposed to be all-powerful. And AI companies are ultimately to blame for marketing it as something more than it is: a buggy word calculator that requires a lot of user effort.
I use it every day, and I know what it can and cannot do. I don’t complain, because I understand it’s basically “alpha” software at this point, but I can see a huge difference between it now and last year.
Try asking ChatGPT to make an image of a cat without a tail. It’s hilariously impossible. Does that mean it can’t summarize a document, or help me calculate compound interest, or help me understand a coding concept? Nope.
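For what it’s worth, compound interest is one of those things worth sanity-checking by hand rather than trusting any answer blindly. A minimal sketch of the standard formula A = P(1 + r/n)^(nt), with made-up example figures (not from this thread):

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
# All values below are illustrative assumptions.
principal = 1000.0   # P: starting amount
rate = 0.05          # r: annual interest rate (5%)
periods = 12         # n: compounding periods per year (monthly)
years = 10           # t: number of years

amount = principal * (1 + rate / periods) ** (periods * years)
print(f"{amount:.2f}")  # final balance after 10 years
```

Handy as a cross-check against whatever number the chatbot spits back.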
The majority of people using AI just ask it a question, and it will confidently spit out a wrong answer. This ‘alpha’ product is pushed absolutely fucking everywhere, is currently terrible to use properly for 99% of people, and online in less techy spaces people treat AI like it’s always correct.
If you can spend the time to filter out its constant lies, good for you, but most people don’t even know it’s constantly lying.
Ask it how many Rs there are in the word strawberry.
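For the record, the actual count is trivial to verify in code:

```python
# Count the letter "r" in "strawberry" the boring, deterministic way.
word = "strawberry"
count = word.count("r")
print(count)  # 3 (s-t-R-a-w-b-e-R-R-y)
```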
Or have it write some code and see if it invents libraries that don’t exist.
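One quick way to catch that particular failure: before installing anything an AI suggests, check whether the module is even importable. A small sketch using the standard library (the package names below are just examples):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if a top-level module with this name can be found."""
    return importlib.util.find_spec(name) is not None

# "json" ships with Python; the second name is a stand-in for a
# hallucinated package an AI might invent.
print(module_exists("json"))                    # True
print(module_exists("definitely_not_a_module")) # False, unless installed
```

It won’t catch a real package being used with a made-up API, but it filters out the libraries that simply don’t exist.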
Or ask it a legal question and see if it invents a court case that doesn’t exist.
It’s important to know how to use it, not just blindly accept its responses.
AI is wrong more often than right.
Right. It’s not ready for the average user. I agree completely. It has, however, made me significantly more productive in every part of my life.