Same happens every time I’ve tried to use it for search. It will be radioactive for this type of thing until someone figures that out. Quite frustrating: if they spent as much time determining when a user wants objective information with citations as they do determining whether a response breaks content guidelines, we might actually have something useful. Instead, we get AI slop.
Last night, we tried to use chatGPT to identify a book that my wife remembers from her childhood.
It didn’t find the book; instead, it gave us a title for a hypothetical book that, if written, would match her description.
At least it admitted the book might not exist, instead of hallucinating a publication date for it.
Maybe it’s trying to motivate me to become a writer.