They’re two different tools with different purposes, so why treat one like it can replace the other?
On the user side, a lot of users want an answer to their question, not a list of pages that may contain the answer, further obscured by SEO. AI is just a continuation of this.
Allow me to add an extract from the film “Cube” which addresses this point
WORTH
It’s maybe hard for you to understand, but there’s no conspiracy. Nobody
is in charge. It’s a headless blunder operating under the illusion of a masterplan.
Can you grasp that? Big brother is not watching you.
QUENTIN
What kind of fucking explanation is that?
WORTH
It’s the best you’re gonna get. I looked and the only explanation I can
come to is that there is nobody up there.
QUENTIN
Somebody had to say yes to this thing.
WORTH
What thing? Only we know what it is.
QUENTIN
We have no idea what it is.
WORTH
We know more than anybody else. I mean, somebody might have known
sometime, before they got fired or voted out or sold it. But if this place ever had a
purpose, then it got miscommunicated or lost in the shuffle. This is an accident, a
forgotten, perpetual public works project. Do you think anybody wants to ask
questions? All they want is a clear conscience and a fat paycheck. I mean, I leaned on
my desk for months. This was a great job!
QUENTIN
Why put people in it?
WORTH
Because it’s here. You have to use it or admit it’s pointless.
QUENTIN
But it is pointless!
WORTH
Quentin… That’s my point.
Every single answer in this thread just boils down to money.
It keeps you on their site. Same reason Twitter banned links and has Grok now: the longer you stay on the site, the more likely you are to look at or even click on an ad there. If you google something, then quickly scroll past the first couple of ad links and click on the first non-ad link, you’re maybe only staying on Google for 1 or 2 seconds. If you get an “AI overview” at the top and start reading through that, you’re maybe spending 10-30 seconds on it. That’s another 10+ seconds the ad was displayed, which Google can point to when telling its ad customers that people were looking at it longer.
Another reason, more motivated by user experience, is that the AI has a better “understanding” of meaning compared to typical search algorithms. Say you search “Starbucks price at closing” when you meant “Starbucks stock price at time of market closing”: an AI would be better able to discern that meaning, whereas a traditional algorithm may show you the closing time of the nearest Starbucks, the price of one of their drinks, etc.
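To make the difference concrete, here’s a minimal sketch using embedding similarity, assuming the sentence-transformers library; the model name and candidate interpretations are illustrative, not anything a real search engine necessarily uses:

```python
# Sketch: why an embedding model can disambiguate "Starbucks price at closing"
# better than keyword overlap can. Assumes the sentence-transformers package;
# "all-MiniLM-L6-v2" is just a common off-the-shelf embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "Starbucks price at closing"
candidates = [
    "Starbucks stock price at market close today",   # intended meaning
    "What time does the nearest Starbucks close?",   # keyword-plausible miss
    "Price of a Starbucks latte",                     # another keyword-plausible miss
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

q_vec = model.encode(query)
for text, vec in zip(candidates, model.encode(candidates)):
    print(f"{cosine(q_vec, vec):.3f}  {text}")
# The stock-price interpretation will typically score highest, even though all
# three candidates overlap with the query's surface keywords.
```

That kind of semantic scoring is the rough idea; a pure keyword matcher has no good way to tell those three readings apart.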
Google’s delusional fantasy has always been you want to buy a guitar, you go on Google, their ad algorithm shows you a perfectly targeted ad for the guitar you want, and you love and trust Google so you click the ad and buy it. They think LLMs will make that actually work, they want to give you a Grima Wormtongue that can simper and manipulate until you do love and trust it, and once you’re a rotting husk it whispers the ad algorithm into your ear so you think it was your idea.
If that was the goal then maybe they should’ve worked harder so that their algo didn’t spit out cheap poorly made garbage with a million fake reviews every time I try to search anything remotely purchasable.
I said to take the Wizard’s staff!!!
I think it’s mainly just companies trying to get their foot into the market. If Microsoft can establish LLMs as an alternative to search, then it’s Google that loses market share. And once they control a share of the market, then they figure out how to capitalize on it.
At the very least, they can use it to control what information is available to the public and how it’s framed. But they can also integrate things like the LLM generating an affiliate link when asked about a product, or just generally weaving ad placements into the generated answers.
Because they’ve sunk billions into the hype train, and it’s clear most people don’t really want it. So it’s being force-fed to everyone in every product to try to get some kind of ROI.
That, and the more interactions it can get, the more data it can suck up to train on and/or sell.
My 2c:
Control:
By adding rules to AI output, the ruling elite seek to regain what the internet took from them: Information control. Some scandal happens? AI monitoring erases all indication of it, or pushes the narrative in the desired direction. We have easy evidence of that on the Canada subreddit. Trudeau, for his faults, was unequivocally the single friendliest prime minister to Alberta that the country ever had, considerably more so than the hack that was Harper. But thanks to astroturfing and media control, the conservatives of Alberta see him as one of the worst.
Monetization of big data:
The other reason I can see is that AI can solve the processing end of the big data issue. Sure, big data has reams and reams of data, but they’ve had trouble processing it and turning it into useful, monetizable information/products. Even Google admits that for all of their data on everyone, their clickthrough rates are atrocious. The hope is that AI can sort through those massive data sets and give them the easy data they want.
They had just as much control with the old search algorithm, though. They could still pick and choose what you see in the search results with their opaque algorithm. The only difference would be that instead of only showing some regime-captured media outlet, they could generate their own narrative on the fly, but it’s not like there’s a shortage of sycophantic media written by actual people they could pull from.
There isn’t much of a reason for that besides money.
Psychopathy shouldn’t be disregarded though.
New and trendy bandwagon. Aside from the reasons you excluded, fear of being left behind is my best answer to your questions
Your product might not benefit from AI but you definitely can get more VC investment dollars if you bolt an LLM onto the side of it and feature AI as central to what you’re offering. This is because VCs treat tech like fashion and don’t actually understand how it works or how it would integrate into our lives.
This was true for the nascent internet, and for blockchain even more, but truly nobody really understands how LLMs work, so it’s way worse.
That may be true for small or mid-size startups that are reliant on VC money, but we’re talking about Google and Microsoft here; they already have their money printers going and don’t need VC money.
There’s serious overlap; they are not mutually exclusive. The “text generator” is utilizing the search prompt to identify the most likely “next word”, which would, in theory, translate to the best result for the search.
Theoretically, it’s just a better search engine, able to handle an obscene number of variables. Theoretically.
It’s more like: traditional search pipes the first page of results to the bot. The bot reads the pages from the results and tries to identify an answer or the best result from the set. Both the bot summary and the adjusted ranking for the results are returned. This gives a chance at a better experience for the user, because they don’t have to read all the pages themselves to try to find the answer they were looking for. However, there is a huge margin for error, since the bot is underpowered due to Google balancing the amount they pay for each search against the amount they earn from each search. So there end up being misinterpretations, hallucinations, biased content, etc.
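Roughly, that flow looks like the sketch below; search_web() and call_llm() are hypothetical stand-ins for whatever backend and model the search engine actually runs, not real APIs:

```python
# Sketch of the "pipe the first page of results to a bot" flow described above.
# search_web() and call_llm() are hypothetical placeholders, not real APIs.

def search_web(query: str) -> list[dict]:
    """Stand-in for the traditional ranking backend."""
    return [  # dummy data so the sketch runs end to end
        {"title": "Example page", "url": "https://example.com", "snippet": "..."},
    ]

def call_llm(prompt: str) -> str:
    """Stand-in for whatever cost-constrained model runs on each query."""
    return "Bot summary would go here."

def ai_overview(query: str, top_k: int = 8) -> dict:
    results = search_web(query)[:top_k]  # only the first page of results
    context = "\n\n".join(
        f"[{i}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Using ONLY the sources below, answer the query and cite source numbers. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Query: {query}\n\nSources:\n{context}"
    )
    # A weak, cheap model at this step is where the misinterpretations,
    # hallucinations and bias mentioned above creep in.
    return {"summary": call_llm(prompt), "results": results}

print(ai_overview("why are search engines bolting on LLMs?")["summary"])
```

How well that works hinges almost entirely on the model quality and how much context it gets, which is exactly the cost trade-off described above.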
If they used a top-end model like Claude 3.7 Sonnet and piped it enough contextual information, the AI summaries would be quite accurate and useful. They just can’t afford to do that, and they want to use their own Gemini bs.
Perhaps true, but given the nature of the errors involved (it generates something rather than an error message when it lacks the info) and the reviewing that requires, which itself demands research (the very thing it was being used to shortcut in this context), isn’t it still something of an ill fit for this?
They know it’s bad. They want you locked in to their ecosystem. The goal is to be the first to get consumers locked in. So they’re rushing to market with incomplete products because if they don’t release NOW someone else might beat them to it.
AI hype is also collusion among the ultra-wealthy to artificially prop up their investments.
Hype