Oh noo, you called me a robot racist. Lol fuck off dude, you know that’s not what I’m saying.
The problem with supporters of AI is that they learned everything they know from the companies trying to sell it to them. Like a ’50s mom excited about her magic Tupperware.
AI implies intelligence
To me that means an autonomous being that understands what it is.
First of all, these programs aren’t autonomous; they need to be seeded by us. We send a prompt or question, and even when left to its own devices, a model doesn’t do anything until we give it an objective or reward.
Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm, just regurgitation of the dataset.
These models do not reason, though some do a very good job of trying to convince us that they do.
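Taken literally, “looking up the most common answer” would amount to something like the toy lookup table below. This is purely an illustrative sketch with invented data and names, not a claim about how any real model works; note how it fails the moment a prompt isn’t an exact match, which is where the reply below pushes back.

```python
# A purely illustrative sketch (not how any real LLM works): what the
# "looking up the most common answer" claim would mean if taken literally.
# All data in this example is made up.
from collections import Counter, defaultdict

# Toy "dataset" of prompt -> answer pairs the lookup would regurgitate from.
dataset = [
    ("capital of france", "paris"),
    ("capital of france", "paris"),
    ("capital of france", "lyon"),   # a noisy entry
    ("2 + 2", "4"),
]

# Build a table mapping each prompt to the counts of answers seen for it.
table = defaultdict(Counter)
for prompt, answer in dataset:
    table[prompt][answer] += 1

def most_common_answer(prompt: str) -> str:
    """Return the single most frequent answer for an exact prompt match."""
    if prompt not in table:
        return "<no entry in dataset>"   # a pure lookup cannot generalize
    return table[prompt].most_common(1)[0][0]

print(most_common_answer("capital of france"))   # paris
print(most_common_answer("capital of France?"))  # <no entry in dataset>
```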
Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm
…what? In order for that to be true, the entire dataset would need to be contained within the LLM. Which it is not. If it were, a model wouldn’t have to undergo training.
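The reply’s point can be made concrete with a toy model: training distills many examples into a fixed, much smaller set of parameters, and the dataset itself can be thrown away afterwards. A minimal sketch with invented numbers (learning y = 3x + 1); real LLMs differ in every detail except this one.

```python
# A minimal sketch of the reply's point: training distills a dataset into a
# small, fixed set of parameters; the data itself is not stored in the model.
# Toy example with invented numbers: learn y = 3x + 1 from 1,000 samples.
import random

random.seed(0)
data = [(x, 3 * x + 1) for x in (random.uniform(-1, 1) for _ in range(1000))]

w, b = 0.0, 0.0            # the entire "model" is these two numbers
lr = 0.1                   # learning rate

for _ in range(200):       # gradient-descent training loop
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x  # gradient of squared error w.r.t. w
        b -= lr * err      # gradient of squared error w.r.t. b

del data                   # the dataset is gone; only the parameters remain
print(w, b)                # ~3.0, ~1.0
print(w * 10 + b)          # ~31.0 -- an input never seen during training
```

The trained model is two floats; the data was a thousand pairs. It still answers for x = 10 even though no training input was anywhere near it, which a pure lookup table could not do.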
AI implies intelligence
You seem to be mistaking ‘intelligence’ for ‘human-like intelligence’. That is not how AI is defined. AI can be dumber than a gnat, but if it’s capable of making decisions based on stimuli without each stimulus/decision pair being directly coded into it, then it’s AI. It’s the difference between what is ACTUALLY called AI and what a sci-fi show or novel means when it talks about AI.
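To make that definition concrete, here is a minimal sketch (illustrative only) contrasting the two: a responder with every stimulus/decision pair coded in directly, and a perceptron that learns the same decision rule from examples. By the definition above, the second counts as AI, however dumb it is; the first does not.

```python
# Sketch of the distinction drawn above (illustrative only): a hard-coded
# responder enumerates every stimulus/decision pair; the learned version
# acquires a decision rule from examples without any pair being coded in.

def hard_coded(stimulus):
    # Every stimulus -> decision pair written out by the programmer.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}[stimulus]

# A perceptron: two weights and a bias, adjusted from labeled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0

for _ in range(20):                      # perceptron learning rule
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += (target - out) * x1
        w[1] += (target - out) * x2
        b += (target - out)

def decide(x1, x2):
    # Decision from the learned weights; no case was coded in explicitly.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([decide(x1, x2) for (x1, x2), _ in examples])  # [0, 1, 1, 1]
print(decide(1, 1) == hard_coded((1, 1)))            # True: same behavior
```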
A little thought experiment: How would you determine whether another human being understands what it is? What would that look like in a machine?