Want to wade into the snowy sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many 'esoteric' right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged 'culture critics' who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Delve removed from YCombinator
https://news.ycombinator.com/item?id=47634690
IIUC, it looks like Delve lied to YC about stealing another company's Apache 2.0-licensed slopware. This is apparently a bigger sin than selling a product that does fuck-all. I guess they weren't tall enough for this ride.
Delve claims to offer 'Compliance as a Service'
https://delve.co/ (absolutely unhinged)
A link to the exposé that precipitated the divorce:
https://deepdelver.substack.com/p/delve-fake-compliance-as-a-service
@o7___o7 @BlueMonday1984 TF covered these clowns the other week
https://trashfuturepodcast.podbean.com/e/the-tetsuo-economy-feat-wendy-liu/
While I tend to think Yudkowsky is sincere, some things like his prediction market for P(doom) are hard to square with that https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r (launched June 2023, will resolve N/A on 1 January 2027 if the world has not ended yet. It has not moved much since 1 January 2024)
I will never understand why people seriously bet 'yes' on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.
Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?
Technically yes, but Yud probably wouldn't count that, since the AI didn't have the express purpose of destroying everyone
An early hint of Yud's rejection of chaos theory in the sequences from 2008 (the 'build God to conquer Death' essay):
And the adults wouldn't be in so much danger. A superintelligence, a mind that could think a trillion thoughts without a misstep, would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh, would be only another problem to be solved.
Someone who got to high-school math or coded a working system would probably have encountered the combinatorial explosion, the impossibility of representing 0.1 as a floating-point binary, Chaos Theory, and so on. Even game theory has situations like "in some games, optimal play guarantees a tie but not a win." But Yud was much too special for any of those and refused offers to learn.
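The 0.1 claim is trivially checkable in any language with IEEE 754 doubles; a quick look in Python:

```python
from decimal import Decimal

# Decimal(0.1) reveals the exact binary double that "0.1" actually stores:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The accumulated rounding error then shows up in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```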
This is what happens when your worldview is based on anime.
(A lot of anime has heavy themes, but most people understand that it's not real life, just like all such art. Unlike Yud, most people's worldviews on coding and math are based on actual coding and math.)
Not just anime but also science fiction. See also all the people who love 'hard' science fiction (science fiction more based on real-world physics), which often isn't that hard at all but just has a few real physics elements. See The Expanse for a good example of non-hard SF that feels hard (I'm finally reading the book series, so be warned I might Expanse-post a bit).
Content warning: discussion of a sexual abuse trope
A similar thing happens with people who confuse edgy/grimdark/vile fiction with realism. A while back I played a video game which had a reference to women being captured for breeding and men for other sexual abuse, which made no sense in the setting: the slaver faction was already resource-starved, and poisoned so they died quickly, so there was no way they could raise kids to maturity in that environment (also, iirc, the faction was less than 20 years old). Some players described it as very realistic (people do the same about 40k - almost like it says something about their ideas of how the world works, not the setting). I was just rolling my eyes and didn't comment. Apart from that it seemed ok. Crying Suns is the name of the game, for the people who want to avoid it for this reason (it wasn't a big plot point).
Sorry for being a bit offtopic and talking about entertainment again.
I will never forget the time I calculated the energy output on one of the torpedo engines of The Expanse and realized it was higher than the total wattage of all human civilization in 2020
Ah, the Epstein drive. (oof, that aged...)
Small note however: iirc James S. A. Corey has mentioned The Expanse is not hard SF. I don't have a quote for that, however.
deleted by creator
Not sure if I should post it here or under the Pivot article, but somebody went through the Claude Code source https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)
13 butts pooping, back and forth, forever.
This is somehow even more of a shitshow than I would have predicted. Also it continues the pattern that these systems don't fuck up the way people do. One thing he hasn't annotated as much is the sheer number of different aesthetic variants on doing the same thing that this code contains. Like, you do the same kind of compression in four different places, and one is compressImage, one is DoCompression, one is imgModify.compress, and one is COMPRESS_IMG. Even the most dysfunctional team would have spent time developing some kind of standard here from my (admittedly limited) experience.
Even the most dysfunctional team would have spent time developing some kind of standard here from my (admittedly limited) experience.
My experience has been vastly different. Prior to LLMs I have seen all sorts of horrors of this sort and others writ large across many codebases. It's so awesome that LLMs offer the ability to make the same sorts of code but at a much faster speed. In times past it used to take devs years to build up the kind of tech debt that LLMs can give you in days.
Yeah, I realized a while ago that vibe coding is a massive technical debt creation machine.
I mean I guess 'developing' in that sentence is doing a lot of work replacing 'arguing fruitlessly about'.
It is great: that means the system is vulnerable to hacks if you find an exploit in any of those methods, but only 1/4th of the time.
Somebody described AI agents as very enthusiastic 14-year-olds, and it looks like they certainly code like one.
GitHub have finally achieved zero 9s stability for the last 90 days. Congratulations to all involved

Hold on now, the uptime number contains two digits that are nines! The image itself has four nines in total!
Can't believe I'm nerd-sniped this easily. Very technically, the point at which a service should be considered unreliable or down is at γ nines, where γ = 0.9030899869919434... is a transcendental constant. γ nines is exactly 87.5% availability, or 7/8 availability, and it's the point at which a service's availability might as well be random. (Another one of the local complexity theorists can explain why it's 7/8 and not 1/2.)
... why 7/8?
We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for positive integers n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, i.e. (1/10)^γ = 1/8, or γ = log_10(8) ≈ 0.903089987. γ is transcendental by Gelfond-Schneider (see this for a reference proof).
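Sanity-checking that derivation in Python (nothing here beyond the formula already given):

```python
import math

# n "nines" of availability = 1 - (1/10)**n, extrapolated to non-integer n.
# Solving 1 - (1/10)**g = 7/8 gives (1/10)**g = 1/8, so g = log10(8).
gamma = math.log10(8)
availability = 1 - 10 ** (-gamma)

print(f"{gamma:.9f}")  # 0.903089987
print(availability)    # 0.875, up to float rounding
```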
Right now, Sora is at zero 9s of availability.
Alas, foiled again! Nobody said they had to be leading 9s!
For my own services I'm aiming for .999999% of uptime
89.90999999...% uptime
If you had told this to the me of 20 years ago I wouldn't have believed you.
Here's a headline I never expected to read:
Tl;dr: a whole load of media outlets believed an X account asking for crypto donations which claimed to be Jonathan the 194-year-old tortoise's vet. Jonathan was found safely asleep under a tree in the governor's paddock.
https://www.todayintabs.com/p/who-goes-ai
taking shots at the gray lady:
You might think Mr. R not so different, superficially, from Ms. L. He's also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:
Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a 'Master Editor' agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.
And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a 'writing style', and not one that can't easily be duplicated by a large language model. Checking facts? Assessing his work's strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who 'expects AI models to get better than him at everything eventually'? Why does he go AI when Ms. L never would?
Mr. R's secret is that his work is not primarily artistic or informative: it is functional. He serves a purpose for the industry he covers. Mr. R's job is to absorb the tech industry's self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.
spoiler
It's Kevin Roose
Heh. Who goes AI?
I never love the need for these parlor games, but it's a good one.
Putting 'Novelty Purposes Only' on my psychosis suicide bot after I laid off 80% of my legal team (replaced them with the psychosis suicide bot)

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.
Don't they have a version of Breakout buried somewhere in Excel? Sounds like an entertainment purpose to me.
Cloudflare casually license-laundering WordPress
While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license.
Oh really. So you're sure Claude wasn't trained on WordPress? It's all irrelevant anyway, because AI-generated code can't be copyrighted or licensed.
Silver lining, it might piss off Matt Mullenweg!
So you're sure Claude wasn't trained on WordPress?
Unfortunately FOSS is basically dead because nobody is enforcing licenses against training.
That, and plenty of FOSS software's been infected with AI-extruded 'code'. And plenty of software engineers got one-tapped by the slop bots.
i feel in my gut that on some level license disputes are ultimately slapfights for which titanic corporation gets the money. however i will absolutely point and laugh at every misfortune that comes the way of that particular transmisogynist asshole
On this most terrible of online days, 'enjoy' this LW attempt at humor
https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=ik6ywoQYsGrrQv8Dm
Edit: there are more submissions on the theme of 'humor' on site now. Let's just say the cringe factor outweighs the humor factor by a large amount.
omg I don't have anything better to do
- Lesswrong Liberated - they implemented a chat interface to redesign the LW site according to different themes. Mostly boring
- LIMBO: Who We Are, What We Do, and an Exciting High-Impact Funding Opportunity - probably not an AFJ? Can't really tell. Bad day to launch a call for funding if not
- Announcing Doublehaven with Reflections on Humour - protip: do not try to reflect on "humour" (native Brit speaker or pretentious LWer? flip a coin) with boring examples of "ratty humour"
- ACME Alignment Co Announces: Aligning Humans - this one is easy to call as an AFJ
- Giving up on EA after 13 years - lol, it's funny because EA means "Electronic Arts" here
- "You Have Not Been a Good User" (LessWrong's second album) - also seems to be a "straight" post, not a joke, but released on AFJ because then the community can "cut loose"? I dunno, and I am never gonna listen to any songs by "Fooming Shoggoths" in my life
- Announcing EA Omelas - "[Extensively co-written with Claude Opus 4.6]", you have been warned. Considering how every rat coverage of Omelas has been utter shit I'm just posting the link, not reading it
Don't think it is that bad (E: at least it is short; the other 'jokes', not so much). The 'not sneering enough' icon is missing, however. (Guess the joke is that the not-sneering is itself sneering.)
Wonder how much of it they will really implement.
However looking at the titles of other recent submissions, I have no idea which ones are meant to be jokes and which are meant to be real posts.
Great troll opportunity however: just spend the whole week before 1 April replying to new posts with a variant of 'not sure this April Fools joke lands'.
E: and the site died with a nice 504.
new odium symposium episode: https://www.patreon.com/posts/13-joker-is-both-154123315. links to various platforms at www.odiumsymposium.com
we read umberto eco's essay ur-fascism (we have mixed feelings about it) and then apply it to frank miller's 1986 batman comic the dark knight returns
Someone may (unverified for now) have left the frontend source maps in the Claude Code prod release (probably Claude). If this is accurate, it does not bode well for Anthropic's theoretical IPO. But I think it might be real, because I am not the least bit surprised it happened, nor am I the least bit surprised at the quality. https://github.com/chatgptprojects/claude-code
For example, I can only hope their Safeguards team has done more on the Go backend than this for safeguards. From the constants file cyberRiskInstruction.ts:
export const CYBER_RISK_INSTRUCTION = "IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases"

That's it. That's all the constants the file contains. The only other thing in it is a block comment explaining what it did and who to talk to if you want to modify it, etc.
There is this amazing bit at the end of that block comment though.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Brilliant. I feel much safer already.
This thread by Jonny, reading (skimming on a phone, hah) through it, is really good.
If only literally any human with context and a small screen to look at the bigger picture had been involved with decisions around taking this to production, it would... still be bad, but only on a societal level.
That was great, thank you! Full respect to this absolute maniac for tracing some of the spaghetti, I was definitely not going to try that on my phone.
They've validated most gut feelings I had about how Claude works (and doesn't), based on my experience having to use it. I'm feeling pretty smug that my hunches now have definitive code attributions.
But the one unfortunate part about all of this is that this leak, and the ensuing justified sneers about specific bits, are going to be fed back into their codebase to fix some of the gaping holes. It's an embarrassing indictment of the product, but it's also free pre-IPO pentesting. Sort of like how their open-source pull-request slop spam 'undercover mode' was probably used as a way to extract free labor in the form of reviews from actually competent developers. This doesn't seem as planned, though.
In practical terms, what can they do? Add instructions to say 'You will not generate spaghetti code that will humiliate us when real programmers see it'? Perhaps in all caps?
This is what their organization is capable, after tremendous expense, of producing. I don't think that bodes well for their prospects of improvement.
Sorry, this was more of a rant than I thought it would be; I hit one of my own nerves while writing it. This is what happens when you're not in a good position to escape enforced AI usage hell. Tl;dr in bold at end.
--- wall divider ---
I can think of several practical measures, because I've tried them myself in an effort to make my coerced work with LLMs less painful, and because in the process I've previously fallen into the gambling trap Jonny outlined.
The less novel things I tried are things they've half-assed themselves as 'features' already. For example, Jonny found one of the things I had spotted in the wild a while back: the 'system_reminder' injection. This periodically injects a small line into the logs in an effort to keep it within the context window. In my case, I tried the same thing with a line that summed up to 'reread the original fucking context and assess whether the changes make a shred of sense against the task, because what the fuck'. I had tried this unsuccessfully because I had no way to realistically enforce it within their system, and they recently included the 'team lead' skill which (I rightly assumed) tries to do exactly the same thing. The implementation suggests they will only have been marginally more successful than my attempt; it didn't look like they tried very hard. This could be better implemented and extended to even a little more than 'read original context'.
For this leak, some of the very easy things they could have done were to verify their own code against best practices, implement the most basic of tests, or attempt to measure the consistency of their implementation. Source maps in production is a ridiculously easily preventable rookie error. This should already be executed automatically in multiple stages of their coding, merging and deployment pipelines, with varying degrees of redundancy and thoroughness, the same way it is for any tech company with more than maybe 10 developers. There is just no reason they shouldn't have prevented huge chunks of the now-visible code issues if they were triggering their own trash bots against their codebase with even the simplest prompt of 'evaluate against good system design and architecture principles'. This implies that they either weren't doing it at all, or, maybe worse, ignored all the red flags it is capable of identifying after ingesting all of the system architecture guides and textbooks ever published online.
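For the source-map blunder specifically, the pipeline gate is about ten lines. A minimal sketch (the function name and the flat bundle-directory layout are my assumptions, not anything from the leak):

```python
from pathlib import Path

def find_shipped_source_maps(dist_dir: str) -> list[str]:
    """Return files in a production bundle that would leak source maps:
    either .map files themselves, or bundles carrying a sourceMappingURL."""
    offenders = []
    for path in sorted(Path(dist_dir).rglob("*")):
        if path.suffix == ".map":
            offenders.append(str(path))
        elif path.suffix in {".js", ".mjs", ".css"} and \
                "sourceMappingURL=" in path.read_text(errors="ignore"):
            offenders.append(str(path))
    return offenders
```

Wire that into the deploy stage and exit nonzero on any hit, and this particular rookie error becomes impossible to ship.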
Anthropic is constrained in that some of the fixes which should be pushed to users are things which would have a significant trade-off in the form of cost or context window, neither of which is palatable to them, for reasons this community has discussed at length. But that constraint doesn't prevent them from running checks or applying fixes to their own code, which reveals the root cause: the problems Anthropic is facing are clearly cultural. They're pushing as much new shit as they can as quickly as possible and almost never going back to fix any of it. That's a choice.
I saw a couple of signs that there are at least a few people there who are capable, and who are trying to steer an out-of-control Titanic away from the iceberg, but the codebase stinks of missing architectural plans which are being retrofitted piecemeal, long after they were needed. That aligns with Anthropic's origin story, where OpenAI researchers accurately gauged how gullible venture capitalists are, but overestimated how much smarter they are than the rest of the world and underestimated the value of practical experience building and running complex systems.
With the resources they have, even for a codebase of this unreasonable size, they could and should vibe code a much better version within a couple of months. That is not resounding praise for Claude, only a commentary on the quality of the existing code. Perhaps as a first step they could use their own 'plan mode', which just appends a string that says not to make any edits, only to investigate and assess requirements...
Were I happy to watch the world burn, I'd start my own damn AI company that would do a much better job at this, because holy shit, people actually financed this trash.
Tl;dr: you're right that it doesn't bode well for their prospects of improvement, but it's not because there aren't many things they could be doing practically. It's because they refuse to point the gun somewhere other than their own feet.
Anthropic is constrained in that some of the fixes which should be pushed to users are things which would have significant trade-off in the form of cost or context window, neither of which are palatable to them for reasons this community has discussed at length.
I think I'm missing something somewhere. One of the most alarming patterns that Jonny found, imo, was the level of waste involved: unnecessary calls to the source model, unnecessary token churn through the context window from bad architecture, and generally a sense that when creating this neither they nor their pattern extruder had made any effort to optimize it in terms of token use. In other words, changing the design to push some of those calls onto the user would save tokens and thus reduce the user's cost per prompt, presumably by a fair margin in some of the worst cases.
You're right, but Jonny also rightly identified the issue where Claude creates complex trash code to work around user-provided constraints while not actually changing its approach at all (see the part about tool-denial workarounds).
I think Anthropic optimized for appended system-prompt character count, and measured it in isolation, at least in the project's beginning stages, if it's not still in the code. I assume the inefficiencies have come from the agent working with and around that requirement, backfiring horribly into the spaghetti you see now. Not only is the resulting trash control flow less likely to be caught as a problem by agents, especially compared to checking a character count occasionally, but it's more likely the agent will treat the trash code as an accepted pattern it should replicate.
Claude will also not trace a control flow to any kind of depth unless asked, and if you ask, and it encounters more than one or two levels of recursion or abstraction, it will choke. Probably because it's so inefficient; but then they're getting the inefficient tool to add more to itself, and... there's no way to recover from that loop without human refactoring. I assume that's a taboo at Anthropic too.
A type of fix I was imagining would be something like an extra call: 'after editing, evaluate changes against this large collection of terrible choices that should not occur, for example, the agent's current internal code'. That would obviously increase the short-term token consumption and context-window overhead, and make an Anthropic project manager break out in a cold sweat. But it would reduce the gradient of the project death spiral by providing more robust code for future agents to copy-paste, code that can be more cheaply evaluated and that requires fewer user prompts overall to rectify obvious bad code.
They would never go for that type of long game, because they'd have to do some combination of:
- listening to all the users complain that they ran out of tokens too soon while creating the millionth token dashboard project, or
- increasing the limits for users at company cost, or
- increasing prices, or
- sacrificing feature-development velocity by getting humans to fix the mess / implement no-or-low-agent client-side tooling for common checks.
They should just set it all on fire, the abomination can't salvage the abomination.
I am still patiently waiting for someone from the engineering staff at one of these companies to explain to me how these simple imperative sentences in English map consistently and reproducibly to model output. Yes, I understand that's a complex topic. I'll continue to wait.
According to the claude code leak the state of the art is to be, like, really stern and authoritative when you are begging it to do its job:

I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!
No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it 'finetuning' then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)
I don't work at one of those companies, just somewhere mainlining AI, so this answer might not satisfy your requirements. But the answer is very simple. The first thing anyone working in AI will tell you (maybe only internally?) is that the output is probabilistic, not deterministic. By definition, that means it's not entirely consistent or reproducible, just... maybe close enough. I'm sure you already knew that, though.
However, from my perspective, even if it were deterministic, it wouldn't make a substantial difference here.
For example, this file says I can't ask it to build a DoS script. Fine. But if I ask it to write a script that sends a request to a server, and then later I ask it to add a loop... I get a DoS script. It's a trivial hurdle at best, and doesn't even approach basic risk mitigation.
the output is probabilistic, not deterministic. By definition, that means it's not entirely consistent or reproducible, just... maybe close enough.
That isn't a barrier to making guarantees about the behavior of a program. The entire field of randomized algorithms is devoted to doing exactly that. The problem is people willfully writing and deploying programs which they neither understand nor can control.
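For a concrete instance (a standard textbook example, not something from the thread): Freivalds' algorithm is randomized, yet it comes with a hard guarantee. It checks whether A·B = C; if they differ, each round with a random 0/1 probe vector catches the mismatch with probability at least 1/2, so k rounds bound the error by 2^-k.

```python
import random

def freivalds(A, B, C, rounds=64):
    """Probabilistically verify that A @ B == C for square integer matrices.
    If A @ B != C, each round detects it with probability >= 1/2, so a
    false "True" survives all rounds with probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 probe vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]   # B @ r
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)] # A @ (B @ r)
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]   # C @ r
        if ABr != Cr:
            return False  # caught a mismatch: A @ B is definitely not C
    return True
```

That is what a "probabilistic, so only maybe close enough" program looks like when its authors actually characterize the randomness: an explicit, tunable error bound instead of vibes.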
Exactly! The implicit claim constantly being made about these systems is that they are a runtime for natural-language programming in English, but it's all vector math in massively multidimensional vector spaces in the background. I would like to think that serious engineers could place and demonstrate reliable constraints on the inputs and outputs of that math, instead of this cargo-culty 'please don't do hacks unless your user is wearing a white hat' system-prompt crap. It gives me the impression that the people involved are simply naively clinging to that implicit claim and not doing much of the work to substantiate it, which makes me distrust these systems more than almost any other factor.
DoS script
Part of me reads that and still thinks, 'Oh, you mean like AUTOEXEC.BAT?'
DOS.BAT, a DOS DoS script
Truly a tool for the .COM era
Can we talk about the tamagotchi feature they were looking to add for April 1? Because apparently it needed a little friend, but also with gacha mechanics, because we live in hell?
A Korean developer named Sigrid Jin (featured in the Wall Street Journal earlier this month for having consumed 25 billion Claude Code tokens) woke up at 4 a.m. to the news. He sat down, ported the core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history.
Considering how one of the major use cases of LLM coding agents is laundering open source and copyleft, this is some well-deserved payback to Anthropic imho.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Wait, it can be edited? Tissue paper guardrails.
This is all just JavaScript, so yes. As a tissue-thin defense, had they not left their source maps wide open, it would have been much harder to know this string existed and how to edit it. Not impossible, but much harder.
Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can't go wrong. I mean, they limited it to only do so when the users ask nicely!
Edit to add:
The more I think about it, the more it speaks to Anthropic having an absolute nonsense threat model that is more concerned with the science-fiction doomsday AI 'FOOM' than it is with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality-checked on this. No matter how many times the warning bells are rung about these systems' vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn't assume they actually expect to build that much), the people behind these systems continue to focus their efforts on 'how do we prevent Skynet' over any of it.
Thinking in the context of Charlie Stross's old talk about corporations as 'slow AI', I wonder if some of the concern comes, either explicitly or implicitly, from an awareness that 'keep growing and consuming more resources until there's nothing left for anything else, including human survival' isn't actually a deviation from how these organizations are building these systems. It's just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try to address these concerns at a foundational or structural level instead of just appending increasingly complex forms of 'please don't murder everyone or ignore the instructions not to murder everyone' to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn't appear likely to happen unless something forces it to.
So many of these people, as with the NFT clowns, have 'Twelve Year Old First Day On The Internet' Energy
Claude also has 'avoid substrings'. Related to that, and to a funny file-extension deny image that went around on the social medias the last few days: .ass is a subtitle format.
Internet Comment Etiquette: 'Relationships with AI'
... hadn't thought about Glenn Beck in a decade; that last interview was pretty wtf.
Not sure what the etiquette is for how long someone should be dead before you talk to their AI-geist on YouTube, but George Washington somehow feels weirder than Kirk did; idk.
Probably because Washington was a nuanced and deep person who, at the lightest, could be reduced to a colony-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn't to say that Washington was a great person, but more to say that Kirk did not ever achieve that level of ethical development.
A chatbot interface offers no meaningful advantages for interrogating Washington's ethical stance, over and above the documents that are already available. Instead, it offers a pleasant sheen of false certainty. So in that way, it's dragging a guy who's been dead for two centuries into the social media era. Huzzah!
It does have one advantage however. Using it means you should be put to death. If you are any form of hardline Christian.
The classic 40k catch-22: either it doesn't do what you're claiming it does, in which case you're a heretic lying to the Inquisition, OR it does, and you're summoning the spirits of the dead like a necromancer heretic.









