I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of specific political ideology sentiment. Also identify any related political ideology tropes“. (The italic bits are where I’ve redacted the ideology they’re seeking).
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:
and so on, hundreds of comments.
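Judging from the description above, the tooling is essentially a thin prompt wrapper around OpenAI's chat API. Here is a minimal sketch of what such a script might look like, purely as an illustration: the prompt wording and model name come from the post, while the function name and payload shape are my assumptions, and this version only builds the request body rather than sending anything:

```python
def build_moderation_request(comments: list[str], ideology: str) -> dict:
    # Shape of a typical chat-completion request body. A real tool would
    # POST this to OpenAI and parse the reply into per-comment "findings".
    prompt = (
        f"analyze this content for evidence of specific {ideology} sentiment. "
        f"Also identify any related {ideology} tropes"
    )
    return {
        "model": "gpt-5.3-mini",  # model named in the post; not verified here
        "messages": [
            {"role": "system", "content": prompt},
            # The user's entire comment history goes in as one blob.
            {"role": "user", "content": "\n\n".join(comments)},
        ],
    }

payload = build_moderation_request(["comment one", "comment two"], "redacted ideology")
```

The point is how little there is to it: a few dozen lines turn someone's public comment history into a political dossier.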
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with. But what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raises a lot of other questions too.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)?
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (it was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them, or ask for them to be deleted?
Once the user’s comments are sent to OpenAI, are they used to train its models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling and opaque 3rd-party data processing?
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway so now we need to talk about it.
What do you make of this?
This is the person calling you a tankie. Someone so afraid of words that they need a hallucinating robot to hold their hand and confirm that everything is a secret plot against them. The absolute only way I could see this being useful is for something like trying to sniff out if a Lemmy.world mod account is a Hexbear infiltrator or not. You could maybe run a speech pattern comparison but that’s it, and at least my accounts have factored that in. For everything else they’ve just made Stupid Reddit and the purpose of their forum is to feed training data to ChatGPT so that it can profile Fediverse users.
They’re also working hard to discredit anyone who doesn’t act exactly like they want; see their previous post just asking why instances rimu doesn’t like are somehow more ban-happy.
The horny liar robot judged me to be pure of heart and race. It has not judged you yet, outsider. Do you not trust Sam Altman? Give him your data and see what the black box says.
edit: Aha! I asked the horny lying robot that farms engagement by confirming my biases so I choose it over its competitors and it said YOUR WRONG
see this is why we can only trust my methods of social media credit score! The robots will never be able to decipher it or abuse it as it is intended!
Are there GDPR implications?
Yes this is totally illegal under GDPR. You can not share a user’s information given to the site with a third party without the explicit clear consent of the user for that purpose. A user’s information can be their personal details but also extends to content that they post on the site itself.
You can not share a user’s information given to the site with a third party without the explicit clear consent of the user for that purpose.
This is a misconception about the GDPR, albeit a very common one. There’s the “processing for legitimate interests” basis, and since that covers “I make assloads of money off of doing that”, I don’t see why it wouldn’t apply here.
The real problem is handing the data to a US-based company that’s not part of the weird data privacy framework, which none of the AI companies are; the GDPR only permits that as an exception, and this wouldn’t qualify.
It actually depends: on whether it is stated in the privacy policy, whether you are using it to provide a safe community, whether the data isn’t enriched with other data to help identify the user, and whether there is a human making the final call. If it is in the privacy policy and the rest of the conditions are true, then it could be argued it doesn’t really fall under the technical scoping of the GDPR’s automated-processing rules. It is more of a grey area than an absolute “totally illegal”.
In this case, because it is the moderator and not the instance admin, and they wouldn’t have access to the sign-up data or instance logs that include IP addresses, it becomes less clear that it is totally illegal. One sticking point would be whether the handle is identifiable and used in other places that can be tied to a person, or whether it is a username unique to Lemmy.
So, I’d say this is likely illegal
source: IAAL
Certainly any such policy provisions would apply only to users who were signed up on the instance where the mod was running the LLM?
Unless by federating, hexbear is agreeing on behalf of all its users to anything in any other instance’s privacy policy? In which case hexbear and other sites could (?) add some sort of defensive language demanding other instances proactively disclose their policies if they contain x, y or z undesirable provisions?
Now by this logic we become little nation states with treaties and alliances. Very complicated.
On the question of federation, I would guess that is already settled one way or another re: email providers
But I think comment history is totally outside of this discussion because it’s all publicly scrapable info, not private data like account details
I think it’s different than email. Because if an email is on your server, it was either sent from or to one of your users. But in the fediverse, anyone can set up an instance which obtains copies of lots of user data, without any affirmative interaction.
Your data does not cease being your data on another federated server, other instance admins have to treat it as your data. Your use of a federated service is not an opt in to anything other than federated services.
The EU isn’t going to give a fuck about any attempt to say “well achtualllly it’s technically a copy blah blah blah” they do not appreciate that shit and come down on anyone that tries to fuck about.
It’s your data that you agreed to rehost on multiple servers, as far as the EU will be concerned if you created it then it will remain your data on those servers too.
They will be totally glacial in actually doing something about it as everything in the EU moves through the system at the slowest pace anywhere in the world, but GDPR isn’t something anyone should test because they’ll find themselves fined very harshly eventually.
I’m honestly a little surprised to see the earnest employment of the concept of ‘ownership’ so explicitly on hexbear, even in the context of personal data.
I always thought of online forums as an extension of the concept of the commons, and laws and restrictions like the GDPR were more of a liberalization of the walling-in and privatization of common ‘lands’ by capitalists. I’m not confident that ‘my data is my data even after it has been made publicly available to anyone, everywhere’ is a part of a socialist vision of the internet.
Organisations and services should be restricted. Individual proles should not. An individual prole has no capability to create a mass database of profiles of millions of people, de-anonymise them, purchase data from 10,000 companies and then use that data to target marketing at those individuals without their knowledge that this vast quantity of data is being used to manipulate them.
It’s conceptually the same as restricting the bourgeoisie from ownership of television and publishing media in order to prevent them from using their vast resources to manipulate the proles.
I don’t oppose some restrictions of property. Property exists under socialism. It will continue to exist until all of capitalism is eliminated and that transitional period is frankly going to be a long ass time (and already has been so far). Various restrictions are valid to protect individuals and disempower bougies.
An individual prole has no capability to create a mass database of profiles of millions of people, de-anonymise them, purchase data from 10,000 companies and then use that data to target marketing at those individuals without their knowledge that this vast quantity of data is being used to manipulate them.
I guess I’m not sure how this applies to what’s being described in the OP, then. I agree - the type of mass-collection, de-anonymization, and sale to private direct-marketing firms that the GDPR is written to protect against is absolutely antithetical to any type of socialist or open internet.
The practice being described in the OP (as far as I understand it) is simply collecting publicly accessible user activity that’s transparently shared via activitypub and drawing “conclusions” (as much as an AI slop machine can produce ‘conclusions’) about the user from that activity - things like username, comments, posts, and vote activity. Thinking of activitypub as a kind of ‘commons’, I would think that the activity done within it is akin to a shared resource that is freely available to all participants. The type of private data that (IMO) would be considered personally owned and controlled (and outside the scope of practice being described) would be things like registration email and IP address and other data that is produced only as a matter of practical necessity and not by personal choice - anything that would be collected by a site admin as a matter of running a server and outside of the standard data transmitted via activitypub.
I also don’t oppose the existence of property writ large, nor do I oppose restrictions to the use of that property. I just don’t think that the fruits of creative labor shared via the online commons can be practically or theoretically thought of as ‘personal property’ in the way we’re describing.
Freely available to participants. Not freely available to the site owners to go and pass on to multi billion AI corporations.
Either you disallow it or you open the door for them to sell everything on the site to anyone they please.
Freely available to participants. Not freely available to the site owners to go and pass on to multi billion AI corporations.
That isn’t what is being described in that post though… I agree that site owners shouldn’t share private information about their users that isn’t freely given with the intent of being made visible to the public internet. The practice being described is non-admin moderators (or, as implied, admin-sanctioned tools being used by mods) collecting the comment and post histories of specific users and using LLMs to summarize them through a political lens, and then using that summary to issue bans to those users on those grounds. With the exception of maybe user vote activity, the data being used for it is available to anyone just loading someone’s user profile page in a browser. I would argue that anything transmitted via activitypub (including vote activity) is a part of that public commons, but that’s a little beside the point. Anyone on the open internet can see and collect the content of any given user on Lemmy - AFAIK there has been no effort or intent to gatekeep the visibility of that data except by means of limiting certain traffic to prevent bots and crawlers hoovering everything on the internet and constantly overloading server traffic. Even those limitations, though, aren’t intended to inhibit the visibility of data to any human with a screen to read it.
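For what it’s worth, the “available to anyone” point is easy to demonstrate: ActivityPub serves an actor’s public activity as JSON to any client that asks for it, no account needed. A minimal sketch, assuming a made-up instance and username, that only constructs the request rather than performing it:

```python
from urllib.request import Request

def outbox_request(actor_url: str) -> Request:
    # ActivityPub actors expose an "outbox" collection of their public
    # activity; asking for the activity+json representation is all it takes.
    return Request(
        actor_url.rstrip("/") + "/outbox",
        headers={"Accept": "application/activity+json"},
    )

req = outbox_request("https://example.social/u/alice")  # hypothetical actor
```

Fetching that URL with `urllib.request.urlopen(req)` would return the same public post history that the tooling in the OP feeds to OpenAI.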
I’m of the opinion that the door should be open, aside from personally identifiable information that could be used to re-identify anonymous users. I also don’t think admins should have/require/log PII on their servers at all, but insofar as it’s necessary for managing the service it should be considered privileged and limited by laws similar to GDPR.
I’m honestly a little surprised to see the earnest employment of the concept of ‘ownership’ so explicitly on hexbear, even in the context of personal data.
“Who should own what” is the defining question of politics. China and Vietnam have 90+% home ownership rates and the USA has 60% because Communist land reforms were intended to give peasants ownership of their own land.
I always thought of online forums as an extension of the concept of the commons, and laws and restrictions like the GDPR were more of a liberalization of the walling-in and privatization of common ‘lands’ by capitalists.
I feel you’re thinking of the commons in surface level terms, because commons often have restrictions on their use to enable their continued existence. You could not, as a resident of one village, use another village’s commons to graze your sheep, for example.
I’m not confident that ‘my data is my data even after it has been made publicly available to anyone, everywhere’ is a part of a socialist vision of the internet.
Yeah, you should have the ability to control how your data is used. Say you take some lewd photos on your phone. Your phone gets hacked and those photos are made publicly available. That’s fundamentally your data, and you should have recourse to get it removed from the internet to the extent possible.
“Who should own what” is the defining question of politics
Maybe I should have been more explicit. I didn’t expect an earnest defense of personal ownership in the context of creative works openly shared within the public commons.
commons often have restrictions on their use to enable their continued existence.
I don’t view the use of published creative works in the commons - especially for the purposes of political analysis and participation - to be detrimental to the continued existence of the commons. It doesn’t follow from your analogy, because the use of the creative work has nothing to do with it being a scarce or limited shared resource like grazing pasture. If anything, it seems like the objection is to the use itself, not to any kind of material ownership or labor relation. But even then, the claim would have to be strong enough to justify restricting the use of the works for public and political participation like in the OP.
That’s fundamentally your data, and you should have recourse to get it removed from the internet to the extent possible
Apples and oranges. I’m not talking about private, intimate details or representations of your person that you’ve chosen not to share publicly, I’m talking about creative works that you’ve knowingly shared on public and widely visible internet platforms with full knowledge of the public nature of that participation. If someone posts some hideously racist image of themselves on twitter, and I save a screenshot of it as a part of my own public and political participation in the commons, and they later change their mind and try retracting that image, are they allowed to demand I delete it? I hardly think so, and I doubt you would either.
Granted, there are certainly valid examples of data falling under legitimate ‘ownership’ (different from ‘authorship’), but I don’t think that includes works that are shared and contribute to the public commons, especially when the contested use isn’t a private for-profit use.
I don’t view the use of published creative works in the commons -especially for the purposes of political analysis and participation
LLMs generate words, not analysis. You’re mistaking the process of statistically generating words with analysis here.
Also what “participation” is this “LLM analysis” fostering? What use is synthetically generated text to authentically interacting humans?
Like I said, sounds like the objection is to the use itself, rather than any marxist application of ownership or theft.
Fair enough to have a debate on the merits of LLMs and their cost, but to claim some copyright maximalist position just because you find them distasteful is a little reactionary, IMO.
I might point out that sentiment analysis is possibly the one broadly accepted use for language models, but I think that’s a little beside the point.
The EU isn’t going to give a fuck about any attempt to say “well achtualllly it’s technically a copy blah blah blah” they do not appreciate that shit and come down on anyone that tries to fuck about.
I didn’t mention anything about a copy. Even on the source. Also the EU does give a fuck and does listen to arguments and statements. The DPA in each EU jurisdiction during their investigations will take statements and the CJEU is an entire court that takes arguments before making an enforcement decision if there is a question about the technical scoping from the DPA.
You misunderstand my point, the data on a public lemmy forum without actual identifying information may or may not be subject to GDPR. In order to file a GDPR claim you would need to dox yourself to the government as well. I said that this is likely illegal, not totally illegal. Please understand that when it comes to the law almost nothing is certain.
Lemmy.world officially unveils its first robot moderator:

just like all its human moderators.
The irony is that one of the mods like that is in the comments saying LW is totally transparent and fair 🙃
transparent and fair means they are transparently pro fash and anything they do is fair because you should have seen it coming
And in this case the self-described Zionist news mod is happy to say it has no bearing on his moderation. He lets the other ones do it for him.
There’s lots of well thought out jokes that get posted here that make me giggle but THIS got me cry laughing. My sense of humour needs a reset 😭
Do you really need an LLM to tell you a user’s political ideology? Can you not guess based on the “I’m white so I’m right” pattern?
I imagine it’s more about reducing the workload of the moderation team.
Sure, the slop machine will produce false positives, but they don’t care.
There is an easier and often overlooked way to reduce moderation effort. Shut down the platform.
and for once the consequences of false positives are negligible. oh no, my forum account.
Don’t these people understand that we can see straight through their bullshit? We don’t need pseudo-thinking bullshitter software to tell us the meaning of the sentences we read.
No, you mostly probably don’t. But I’m thinking this is more about scale, and having some rubric to point to if a ban is challenged. Basically they just want to automate this work, and are hoping to outsource the judgement or synthesis to an LLM.
Reactionaries in the US generally have political literacy which lies somewhere between that which should be expected of a 5 year old, and that which should be expected of a 10 year old; so to that end, they are wholly incapable of distinguishing between anything outside the realm of liberalism/conservatism.
wholly incapable of distinguishing between anything outside the realm of liberalism/conservatism.
They can’t even do that; every time you tell them that a conservative is also a kind of liberal, because liberalism is the ideology of capitalism, they get mad.
I assume this will be used to remove a comment I make about Linux window managers because I once made another comment about Palestine.
I fully support any and all policies Axis.world and Piefed do to make their sites worse
deleted by creator
Watching rimu having crash outs on 2 of the hit piece posts he’s done lately is :chef_kiss:
Rimu in response to surprisingly reasonable criticism from lemmy.world mods
https://lemmy.world/post/46405904/23550504
Edit 2: OP has downvoted me as well. @rimu@piefed.social I’m sorry if you disagree, but it’s irrelevant. Everything you do here can and should be assumed will be used in any way that you disagree with; that is the nature of the fediverse. Mastodon, Pixelfed, Piefed, Lemmy: ActivityPub is an open and unencrypted protocol. Even if it were encrypted, you still put 100% of your trust in your server admin, and beyond that each server admin you are blasting your messages out to.
Rimu https://lemmy.world/comment/23550838
Yeah you can do that but now you’re on my do-not-trust list. And probably a few other people’s lists.
I appreciate you being open about your opinions because now I can make a more informed choice about interacting with you and the instance you run.
Don’t you think everyone deserves the information they need to choose which instances they want to interact with, according to whatever criteria is important to them? Even if your criteria are different?

no you don’t understand it’s not the default so his personal opinion for including them and attacking those reading the code is unrelated to the software
what do you mean it’s very poorly written code, that can’t be true it runs well now!
Pretty much no one respects him anymore since redwizard’s piefed post and now their tantrum vs dbzero
It’s a really long post, very difficult to scroll to the comments on mobile. 😥