Yeah well maybe the web shouldn’t be a business
god, what I wouldn't give to go back to the days of the mid 90s, when the internet was nothing more than a collection of tech weirdos, and websites were passion projects with no advertising, no SEO, no search engines, etc etc.
Cloudflare already ruined the web way before AI was even a thing.
For a glorious second, the entire world was able to communicate as one.
Then we catalogued every accessible reservoir of culture and knowledge, mined them bare, and refilled them with slop.
A global collective consciousness, hollowed out, replaced with static. No signal. Only noise.
I really, non-ironically, miss the friction of the old internet.
I prefer how it took time to find some bare HTML university website, slowly browse through an index as if it was a book, and then find one non-SEO optimized page with all the information you needed on a topic for your research.
The time to browse, being exposed to other terms, having to select the pages yourself, being skeptical by nature, and then having to copy it by hand… That's a much more positive scenario than having a gigantic company learn everything about you and everybody else and then make these decisions for you, using some hidden algorithm, with the ultimate goal of pushing their newest product. And of course, the content has been rendered virtually useless to appeal to that algorithm.
when the internet was a wild and unexplored frontier, and we were adventurers charting the unknown.
I’ll drink to that memory, my brother
Sorry for the beginner question, but can I use this on a website for an open source XHTML extension I'm developing? Do I need to credit you somehow, or is a Lemmy link enough? What's the best practice here?
I don't know what the general policy is on Lemmy, or what the default license is, but absolutely, feel free to use it. A Lemmy link is enough.
Don’t forget to share your extension with us once you’re comfortable.
The Web was much better and more useful back before it had a business model. Good riddance.
So you’re saying the ad driven internet will die? And we will be left with what? Wikipedia and Lemmy? I for one welcome our AI overlords!
This is part of the larger problem that AI tools are trained on (and profit off of) content that is produced and hosted by others, who are now seeing their traffic change from humans to bots. For content sources that pay for hosting with ads, this means a loss in the revenue that pays for hosting. Content sources like Wikipedia are seeing their hosting costs increase significantly due to the increase in bot traffic.

Even if you want every website that depends on ad revenue to fail (which I don't entirely agree with), AI is still damaging the open web in other ways. Websites like Wikipedia may soon be forced to lock content behind logins or leverage aggressive captchas just to fight the bot traffic, which makes things worse for those of us who still prefer to use actual websites over AI summaries.
Nobody is scraping Wikipedia over and over to create datasets for AIs; there are already open datasets and API deals. And Wikipedia in particular has always published a data dump of the entire DB bimonthly.
You clearly haven’t run a website recently. Until I set up anubis last week I was getting constant requests from dozens of various bot scrapers 24/7. That included the big ones.
Kay, and that has nothing to do with what I said. Scrapers and bots =/= AI. It's not even the same companies that make the unfree datasets. The scrapers and bots that hit your website are not some random "AI" feeding on data lol. This is what some models are actually trained on; it's already free, so it doesn't need to be individually rescraped, and it's mostly garbage-quality data: https://commoncrawl.org/ Nobody wastes resources rescraping all this SEO-infested dump.
Your issue has more to do with SEO than anything else. Btw, before you diss Common Crawl: it's used in research quite a lot, so it's not some evil thing that threatens people's websites. Add a robots.txt maybe.
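For what it's worth, a minimal robots.txt along these lines would cover it. The user-agent strings below are the ones these crawlers actually publish (GPTBot for OpenAI, ClaudeBot for Anthropic, CCBot for Common Crawl), though honoring robots.txt is voluntary on the crawler's part:

```
# Block the major AI/dataset crawlers by their published user-agents
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else is still welcome
User-agent: *
Allow: /
```

Well-behaved crawlers check this file before fetching anything else; it won't stop the scrapers that ignore it, which is where tools like Anubis come in.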
It would be very naïve to think they won’t go against Wikipedia and the fediverse at some point unfortunately…
Nah, it's saying that the ad- and AI-driven internet will prevail. People only use Google to find an answer and don't dig deeper, and if they do, it's often because the links are sponsored. People using GPTs are even less likely to click a link. Currently no ads, but just wait.
Apologies if you were joking.
“what should I do if I’m going through severe emotional distress? How to choose a good psychiatrist?”
ChatGPT: "I'm sorry to hear that you've been going through a stressful situation; it's always worth talking about your feelings. I've come up with a plan to help you:
1. Purchase an ice cold Pepsi Black™ from an official Pepsi supplier"
Normies get AI slop, prosumers use local LLMs…
Not sure about social media… Normie is allergic to reading anything beyond daddy’s propaganda slop. If it ain’t rage bait, he ain’t got time for it
Home grown slop is still slop. The lying machine can’t make anything else.
What LLM you using?
The web doesn't have a business model, Cloudflare, you do. And nobody cares, because you suck.
Eh, Cloudflare provides a pretty good service for a very reasonable price.
But yeah, the web doesn’t have a business model in the same way a town square doesn’t, yet you can make a business work in both areas. Make a compelling product and people will pay you for it.
You mean product that literally makes web unusable for many and tracks your every single step with extremely invasive fingerprinting techniques? That product?
Cloudflare provides a pretty good service for a very reasonable price.
You mean selling fingerprinted user data to advertisers?
It needs to get even nastier so that it affects all the big players in a huge way so they get to do something about it. While it only affects the indie web we are all just gonna keep suffering.
When Google itself is the one stopping you from clicking on a website you’ve got a problem.
Yeah I think we’re going to be grappling with this issue for at least the next decade. The traditional web model falls apart under AI
To be fair, the traditional web models were falling apart prior to AI as well. We've gone so far past "ad driven" that everything has to be full of ads and clickbait to drive revenue just to run the infrastructure, let alone pay for the page's creation and upkeep. Journalists and developers, services and goods, are all using adword soup to try to get anything close to a useful revenue stream, and it'll just keep getting worse until we figure out a better business model.

We're going to increasingly see paywalls to try to make up for that, but a large part of people on the internet won't want to spend money on quality sources when they used to be able to get them for free. It's been a race to the bottom for a while, and it's at a point that isn't sustainable long term. AI just accelerates that to the next level.
What's challenging about paywalls isn't necessarily unwillingness to spend, but convenience and cost. If it costs me 10 cents for each blog, tutorial, or GitHub page I look at while working on a project, or 1 cent for every funny video, that adds up. And do I have to put my credit card in for every site? Hope that every site has good enough security to prevent payment information leaks?
And I don’t think anyone is interested in a Netflix-style internet that fractures into 6 different subscriptions to get every site you need on the web.
Some sort of universal microtransaction layer is the dream. I believe there’s also a proposed web standard for it.
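If you're curious, the proposal I've seen is Web Monetization (an Interledger Foundation draft spec): a page declares a payment pointer, and a supporting browser or extension streams micropayments to it while you read, no per-site credit card required. A minimal sketch, where the wallet address is a placeholder:

```
<!-- Web Monetization draft: the page declares where micropayments go.
     "wallet.example/alice" is a hypothetical payment pointer, not a real wallet. -->
<link rel="monetization" href="https://wallet.example/alice">
```

The idea is that the payment rail (Interledger) is separate from the page, so any site can opt in with one tag, but it only works if browsers and wallets actually adopt it.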
Scroll was also making it work before they got bought by Twitter.
Hey Siri, insert the Donald Glover “GOOD.” meme.
Uhh, dude. Haven’t you heard that Siri is basically useless?
Maybe that’s why she just typed this post instead of inserting the meme.