

I actually use AI a lot, and I've seen that the safeguards aren't very well managed. I still run into situations where it states completely fabricated information even after deep search or reasoning. That said, it is improving; even last year it was way worse.
Then again, it's also the poisoned dude's fault for not looking up what those chemicals were, so both sides bear some responsibility here.
It's difficult to be sure, since GPT-5, the newest model, uses a new structure: a model the user interfaces with first takes the prompt and routes it to smaller, more specialised models whose outputs get combined, an approach in the same family as a mixture of experts.
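To be clear about what I mean by that term: in the research sense, a mixture of experts has a gating network score each input and blend the outputs of the highest-scoring expert sub-networks. Here's a toy sketch of that idea; every name, size, and weight in it is made up for illustration, and it has nothing to do with OpenAI's actual (unpublished) GPT-5 internals:

```python
# Toy mixture-of-experts routing sketch. Purely illustrative; the dimensions,
# expert count, and weights are all invented and do not reflect any real model.
import numpy as np

rng = np.random.default_rng(0)

DIM = 8          # hypothetical hidden dimension
NUM_EXPERTS = 4  # hypothetical number of expert sub-networks
TOP_K = 2        # route each input to its 2 highest-scoring experts

# Each "expert" here is just a random linear map standing in for a
# specialised sub-network.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate = rng.normal(size=(DIM, NUM_EXPERTS))  # router / gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route input x to the top-k experts and blend their outputs."""
    scores = x @ gate                         # router score for each expert
    top = np.argsort(scores)[-TOP_K:]         # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Weighted combination of the selected experts' outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.normal(size=DIM)))
```

The point of the sketch is just that several specialised components contribute to one answer, which is exactly where my worry about consistency comes from.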
How do you know OpenAI has made sure the outputs from multiple expert models won't contradict each other, won't cause accidental safeguard bypasses, and so on?
Personally, I trust GPT-4o more, and even then I usually replace the output with actual research when needed.