[email protected] is live! If you missed the previous discussion, it’s a community with a robot moderator that bans you if the community doesn’t like your comments, even if you’re not “breaking the rules.” The hope is to have a politics community without the arguing. [email protected] has an in-depth explanation of how it works.
I was trying to keep the algorithm a secret, to make it more difficult to game the system, but the admins convinced me that basically nobody would participate if they could be banned by a secret system they couldn’t know anything about. I posted the code as open source. It works like PageRank: it aggregates votes, assigns each user a trust score based on who the already-trusted users vote for, and bans users whose trust falls below a threshold.
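To make the idea concrete, here is a rough sketch of a PageRank-style trust ranking over a vote graph. This is a hypothetical illustration, not the bot’s actual code; the function names, damping factor, iteration count, threshold, and sample users are all made up:

```python
# Sketch of PageRank-style trust ranking from votes (illustrative only).
DAMPING = 0.85         # standard PageRank damping factor
ITERATIONS = 50        # power-iteration steps
BAN_THRESHOLD = 0.002  # trust below this gets banned (invented number)

def trust_ranks(votes, users):
    """votes: (voter, target, weight) triples, +1 upvote / -1 downvote.
    Returns {user: trust score}. Trust flows from voters to targets in
    proportion to the voter's own trust, so votes from trusted users
    count more; downvotes subtract trust instead of adding it."""
    rank = {u: 1.0 / len(users) for u in users}
    # total absolute vote weight per voter, for normalization
    out = {u: 0.0 for u in users}
    for voter, _, w in votes:
        out[voter] += abs(w)
    for _ in range(ITERATIONS):
        new = {u: (1 - DAMPING) / len(users) for u in users}
        for voter, target, w in votes:
            if out[voter] > 0:
                new[target] += DAMPING * rank[voter] * w / out[voter]
        rank = new
    return rank

def banned(rank):
    """Users whose trust fell below the threshold."""
    return sorted(u for u, r in rank.items() if r < BAN_THRESHOLD)

# Toy example: three users who upvote each other, one account
# that only collects downvotes.
users = ["alice", "bob", "carol", "troll"]
votes = [
    ("alice", "bob", 1), ("bob", "alice", 1),
    ("carol", "alice", 1), ("alice", "carol", 1),
    ("alice", "troll", -1), ("bob", "troll", -1),
]
rank = trust_ranks(votes, users)
```

In this toy graph the troll account ends up with negative trust and is the only one banned, while the mutually-trusting users stay well above the threshold; the real bot’s tuning is of course more involved.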
I’ve also rebalanced the tuning of the algorithm and worked on it more. It now bans only a tiny number of users (108 in total right now), but those it does ban still include a lot of obnoxious accounts. There are no longer any slrpnk users banned. Most are lemmy.world users, with a few from lemmy.ml or lemm.ee and a scattering from other places.
Check it out! Let me know what you think.
Very interesting idea. Glad you decided to make it transparent since I don’t think it will work otherwise.
I am not sure I think it will work as intended—in my opinion the state of political discourse on Lemmy is pretty bad right now, but that may reflect the broader state of politics in society more than our particular platform. If we want to create a truly positive space for political discussion, it might require more intervention than banning a small fraction of users. To use myself as an example, I try to be pleasant and constructive but I know I don’t always succeed. An analysis based on the content of comments could also be interesting to try. Or a kind of intermediate status of user, where comments require mod approval. That could be overly dependent on subjective mod opinions though.
Still, I think even if it doesn’t work, this is the type of experiment we need to elevate online discourse beyond the muck we see today.
I agree. As soon as I started talking to people about it, it was blatantly obvious that no one would trust it if I was trying to keep how it worked a secret. I would have loved to inhabit the future where everyone assumed it was an LLM and spent time on trying to trick the nonexistent AI, but it’s not to be.
I agree with you about the bad state of the political discourse here. That’s why I want to do this. It looks really painful to take part in, and I thought for a long time about what could be a way to make it better. This may or may not work, but it’s what I could come up with.
I do think there is a significant advantage to the bot being totally outside of human judgement: because it’s not personal, it can be a lot more aggressive with moderation than a human could be. The solution I want to try for the muck you are talking about is setting a high bar, but it’s absurd to have a human sort comments into “high enough quality” and “not positive enough, engage better,” because that will always be based on personal emotion and judgement. If it’s a bot, the bot can be a demanding jerk, and that’s okay.
I think a lot of the intervention element you’re talking about can come from good transparency and giving people guidance and insight into how the bot works. The bans aren’t permanent. If someone wants to engage in a bot-protected community, they can, if they’re amenable to changing the way they are posting so that the bot likes them again. Which also means being real with people about what the bot is getting wrong when it inevitably does that, of course.
I agree with your points and general philosophy, but the flaw I was trying to address is that good users can post bad content and vice versa. So moderation strategies that can make decisions about individual comments might be better than just banning individuals the community, on average, dislikes.
This would require a totally different approach, and I don’t think your tool necessarily needs to solve every problem, but it’s worth pondering.
I think I see it the opposite. There’s a population that posts normal stuff and sometimes crosses a line and posts inflammatory stuff. And there’s a population that has no interest in being normal or civil with their conversation, which can sometimes be kept in line to some degree by the moderators, or sometimes gets removed if they can’t.
The theory behind this moderation is that it’s better to leave the first population alone but remove the second population outright, while still giving them the option of coming back if they’re willing to change the way they interact over a longer timescale. My guess is that this beats the alternative of removing the occasional comment and otherwise not intervening unless someone crosses a clear line, because under that approach they can keep making postings the community doesn’t want while skirting the moderators’ limits of acceptable offensiveness.
Whether that theory is accurate remains to be seen, of course.
So an echo chamber, now with a bot to enforce the echo chamber?
Yeah, the 99.5% of users who are allowed to post are really going to produce a weirdly artificial monoculture without the vital counterweight of the other 0.5%.
In seriousness, I did worry about this. Your user is, as a matter of fact, a great test case for deciding whether it’s banning people based on unpopular opinions alone. The bot doesn’t have a problem with you, despite you posting radically unpopular opinions that it judges negatively (one, two), because you also participate in discussion beyond that and have enough “positive rank” to outweigh saying some things that aren’t popular.
You’re not wrong to worry about this; I worried about it too. Part of what I want to watch, and why I want people to speak up if they think its decisions are unfair, is that I made a concerted effort to distinguish between real trolls and people who are just speaking their mind, and to ban the first without banning the second.
Really? I honestly expected I’d be banned. Sounds like you did it properly. Congrats, I hope it does well.
I really did worry about this a lot. I wasn’t kidding that your user is a perfect test case. A human moderator could look at one of those comments and say you’re just stirring up trouble, but in my opinion it’s a valid viewpoint that someone should still be allowed to express, even if I don’t agree with it.
By the same token, if that’s all someone wants to post, then it’s a way different story and they should probably be banned. I very much like the property that you can effectively earn the right to say what you want by posting productive things. So, if you want to make a troll account, you have to post a bunch of productive content to shield your disruptive content from moderation… at which point the result is a bunch of productive content and a handful of disruptive comments, which is probably a net victory for the community anyway.
I am sorry to tell you that I’ve redone the tuning and your user is banned now.
I looked into it some, because it bothered me. I’m still not sure if it got it right. If you want a detailed breakdown of what its evaluations are based on, I can send you one.
I’d love to see it.
Sure. Here are some offending comments that it picked out:
- https://reddthat.com/comment/11469495
- https://reddthat.com/comment/11528378
- https://reddthat.com/comment/11305756
They’re unpopular in a way that motivates a ban decision, with the last two being severely unpopular.
I think I agree with your first two comments, so it irritates me a lot that they’re motivating a ban. That’s exactly the kind of silencing of an unpopular viewpoint that I don’t want it to do. Your last comment is different. I’ll just say that a lot of what I want to do is look at the type of discussion that particular comments cause, and in this case that last comment definitely caused a lot of yelling and not a lot of evidence-based reasoned discussion from either side.
The reason the determination changed is that I retuned the bot so that it’s a lot easier to get banned if comments like those above are all, or most, of what you post. And that does look like it applies to you: no single comment is a deal-breaker, but the majority of what you post is like the above, so you start to look primarily like a rabble-rouser, and only occasionally like a political participant calmly speaking your mind.
I’m not sure how to feel about it. When I looked over the whole history I did see quite a lot of controversy which usually isn’t good. But it’s hard for me to say that I agree with the bot’s determination in this case, especially because a good bit of what you say, I agree with.
Thank you! I think right now, there’s some specific misinformation going around. And I have a tendency to really want to correct misinformation.
And I totally get that my views aren’t really in line with most of Lemmy, so some of what I say is going to be unpopular.
So who knows, maybe for the health of the community, I need to go. It’s definitely borderline, and I don’t envy you.
I would suggest that what is to you “correcting misinformation” can easily be received as just being cantankerous or offensive.
If you accept that the other person has a choice whether or not to agree with what you are saying, and show respect for both their ability to make up their own mind about it and the possibility that you might be the wrong one, I think you will be more successful at correcting the misinformation. As it is, I think you’re gathering a lot of downvotes because you’re airing deliberately combative opinions in places they aren’t welcome, and often not much more than that.
I think a better solution would be to find a way to present your opinion that still preserves the health of the community, as you say, and stay, rather than either holding on to your current way or leaving. I didn’t read your entire profile, just the parts of it that the bot took issue with, but even in those I agreed with your unpopular opinion a lot of the time. I do think the bot has a point, though, that you’re creating your own unwelcome reception by the way you present them.
Let’s see. This post has some thoughts on that matter.
The only thing I could think of reading OP’s post
Oh no! My meow meow beans…
pleasant politics
nothing to see here
as expected
I kid - I hope it does well
I know, I know. I can’t figure out whether starting out with a political community represents a good real-world test, or a hopeless impossibility. Let’s see.
Have fun watching conservatives/fascists/tankies vote opposing commenters into the ban box like they do in the real world anywhere they have power.
I think this is more difficult than you are thinking it is. PageRank based trust rankings are not infallible but they are a lot more resistant to abuse than what Lemmy communities do now, and it’s all happening in public, so if something screwy is happening with the bans, it’s always possible to audit where the wrong judgements came from.
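Since the votes and trust scores are public, one can sketch what such an audit might look like: for a given user, list which voters contributed most to their score. This is a hypothetical illustration under the same made-up data model as a PageRank-style ranking, not the bot’s real tooling:

```python
# Sketch of auditing a ban decision (illustrative only): break a user's
# trust score down into per-voter contributions, most negative first.

def blame(votes, rank, out, target):
    """votes: (voter, target, weight) triples; rank: trust per user;
    out: total absolute vote weight per voter. Returns (voter,
    contribution) pairs sorted so the biggest detractors come first."""
    contributions = {}
    for voter, tgt, w in votes:
        if tgt == target and out.get(voter, 0) > 0:
            contributions[voter] = (
                contributions.get(voter, 0.0) + rank[voter] * w / out[voter]
            )
    return sorted(contributions.items(), key=lambda kv: kv[1])

# Toy data: "a" downvoted the target, "b" upvoted them (and voted elsewhere).
votes = [("a", "t", -1), ("b", "t", 1), ("b", "x", 1)]
rank = {"a": 0.5, "b": 0.25}
out = {"a": 1.0, "b": 2.0}
report = blame(votes, rank, out, "t")
```

With transparent inputs like this, a contested ban can be traced back to the specific voters whose downvotes drove it, rather than taken on faith.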
Thanks for being transparent about this. I look forward to seeing how this develops. Your work on this is very appreciated.
Thank you! I agree, and I like how it’s working so far. I have some fear about how it’ll fare against the wider community, but I just posted to [email protected] to invite a new level of challenge.