Every political thread is chock full of people being angry and unreasonable. I did some data mining, and it turns out most of the hate comes from a very small percentage of the community, while the rest of the community is remarkably consistent in downvoting them.
The problem is that even with human moderators enforcing a series of rules, most of those people are still in the comments making things miserable. So I made a bot to do it instead.
[email protected] is a bot that uses an algorithm similar to PageRank to analyze the Lemmy community, and it preemptively bans the roughly 1-2% of posters who consistently get a negative reaction. Take a look at an example of the early results. See how nice that is? It’s just people talking, and when they disagree, they say things like “clearly that part is wrong” and “your additions are good information though.”
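For anyone curious what a vote-graph ranking like this might look like, here is a minimal sketch. To be clear, everything in it is my own illustrative assumption, not the bot’s actual code: the function names, the damping factor, and the 2% cutoff are all made up. The idea is to treat votes as weighted edges between users and run a PageRank-style power iteration, so that downvotes from well-ranked users drag a target’s score down.

```python
# Hypothetical sketch of a PageRank-style reputation score over a vote
# graph. Illustrative only; not the actual implementation of the bot.
from collections import defaultdict

def rank_users(votes, damping=0.85, iterations=50):
    """votes: list of (voter, author, value) tuples, value +1 or -1.

    Returns a dict mapping each user to a score. Users whose incoming
    votes are mostly negative, from voters who themselves rank well,
    sink toward the bottom."""
    users = {u for voter, author, _ in votes for u in (voter, author)}
    edge = defaultdict(float)        # net sentiment voter -> author
    out_weight = defaultdict(float)  # total voting activity per voter
    for voter, author, value in votes:
        edge[(voter, author)] += value
        out_weight[voter] += abs(value)

    rank = {u: 1.0 / len(users) for u in users}
    base = (1 - damping) / len(users)
    for _ in range(iterations):
        new = {u: base for u in users}
        for (voter, author), w in edge.items():
            if out_weight[voter]:
                # A voter's influence is split across everyone they vote
                # on; negative net votes push the target's score down.
                new[author] += damping * rank[voter] * (w / out_weight[voter])
        rank = new
    return rank

def flag_bottom(rank, fraction=0.02):
    """Return the lowest-ranked ~2% of users (at least one)."""
    cutoff = max(1, int(len(rank) * fraction))
    return sorted(rank, key=rank.get)[:cutoff]
```

On a toy graph where three users consistently downvote a fourth, the fourth user ends up at the bottom of the ranking and gets flagged. In a real deployment you would presumably weight by community, recency, and comment volume, which this sketch ignores.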
It’s too early to tell how well it will work at a larger scale, but I’m hopeful. So, welcome to my experiment. Let’s talk politics without all the abusive people in the picture. Please come in and help test whether this thing can work in the long run.
You don’t need a social credit tracking system to auto-ban users if a big majority of the community already recognizes a user as problematic: you could ban them manually, use a ban-voting system, have the bot flag potentially problematic users to assist in manual-ban decisions, or hand out automated warnings… And if you’re only looking at 1-2% of users, is that really so many that you can’t review them individually?
Users behave differently in different communities… Preemptively banning someone for activity in another community is already problematic, because it assumes they’d behave the same way here, but now it’s for activity that is ill-defined and aggregated over many hundreds or thousands of comments. There’s a reason each community has its rules clearly spelled out in the sidebar: they each have different expectations, and users need those expectations spelled out if they’re to have any chance of following them.
I’m sure your ranking system is genius and perfectly tuned to the type of user you find the most problematic - your data analysis genius is noted. The problem with automated ranking systems isn’t that they’re bad at what they claim to be doing, it’s that they’re undemocratic and dehumanizing and provide little recourse for error, and when applied at large scales those problems become amplified and systemic.
That isn’t my concern with your implementation; it’s that it limits the ability to defend opposing views when they occur. Consensus views don’t need to be defended against aggressive opposition, because they’re already presumed to be true; a dissenting view will nearly always be met with hostile opposition (especially on a charged political topic), and by penalizing defenses of those positions you allow consensus views to remain unopposed. I don’t particularly care to defend my own record, but since you provided them, it’s worth pointing out that all of the penalized examples you listed from my account were responses to hostile opposition and character accusations. The positively ranked comments were within the consensus view (like you said), so of course they rank positively. I’m also tickled that one of them was a comment critiquing exactly the kind of arbitrary moderation policy you’re defending now.
Even if I weren’t on the ban list and could see it, I wouldn’t have any interest in critiquing its ban choices, because that isn’t the problem I have with it.