> Hate speech (most of it, at least) isn't seeking to invite new opinions or dialogue. It's seeking to shut down an argument or turn it into a battle of name-callings. Both of those are silencing opinions/ideas by force.
No, it isn't. Hate speech is just speech you find offensive. It can't force you to stop speaking, and it doesn't prevent an audience from hearing you. You still have the ability to speak no matter how vile your opposing interlocutor gets, provided they are not speaking over you. Stated opinions are not force. You may feel moved to quit; however, you can always stand your ground and continue to make your points if you want to. You may find that unproductive, but that's not the point.
> If we're in a debate, and instead of listening to your point, I scream at the top of my lungs some vile insults, it's a similar (though not completely identical) idea.
That's not hate speech, that's called a heckler's veto, where the heckler drowns out the speaker by being loud. They could just as easily be singing about raindrops on roses; as long as it's loud and prevents the audience from hearing you, it's a heckler's veto. "Hate speech" becomes ill-defined whenever anyone attempts to encode it into law, because it's subjective, and laws are not supposed to be interpreted differently based upon the victim.
See HATE: Why We Should Resist it With Free Speech, Not Censorship by Nadine Strossen, president of the ACLU from 1991 to 2008. She has read virtually every hate speech law that has been attempted across the world, and none of them work. Chapter Five is titled "Is It Possible to Draft a 'Hate Speech' Law That Is Not Unduly Vague or Overbroad?" The answer is no, and the chapter begins with the epigraph,
"It is technically impossible to write an anti-speech code that cannot be twisted against speech nobody means to bar. It has been tried and tried and tried."
> Eleanor Roosevelt said "No one can make you feel inferior without your consent."
>> That is a bit debatable. The psychological harms of online hate + bullying are pretty well documented. Having mental resilience is great, but applying it as a blanket statement -- "if you're being hurt by words, it's your problem for letting them hurt you" -- I'd argue is wrong.
Psychological harm can be real, as Nadine Strossen says at 29:31 in Intelligence Squared U.S. Debates: Hate Speech in America,
> "Nobody denies the harm. The question is, what are other ways to prevent the harm, because the neuroscience and the mental health and psychological experts say that shielding people from upsetting words may actually not be beneficial to their mental health, that the best thing to do is to develop habits and skills of resilience, because they are going to be exposed to all kinds of things that are deeply upsetting in the real world, and we're making them less able to withstand that."
It is up to you how you feel when someone speaks. That is different from when someone strikes you; that harm is universal. But when someone says "you're dumb" and one person shrugs it off whereas another feels hurt, we don't punish the speaker, because there are people it does not bother, and they are the ones who can prepare counterarguments to "hate speech".
> At this scale, it pretty much comes down to deterrence. It's a matter of preventing as many as possible for as low a cost as possible. Yes, you can code + check to see if content is shadow removed, but the more complexity you add to a system the more points of failure there are.
Shadow removals hurt individuals, not bots. Bots will be coded to check for removals. I don't know what you mean about complexity; bots are simple, and checking the status of your content as another user is also simple.
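To show how little effort that check takes, here is a minimal sketch of the self-check a bot could run after posting. It assumes Reddit's public `.json` pages, the usual field names ("id", "body"), and the convention that removed comments render as "[removed]" for logged-out viewers; treat it as an illustration under those assumptions, not a definitive implementation.

```python
# Hedged sketch: does one of my comments still appear to a logged-out viewer?
# Endpoint format, field names, and the "[removed]" signal are assumptions
# based on Reddit's public .json pages.
import requests

HEADERS = {"User-Agent": "removal-self-check-sketch/0.1"}  # descriptive UA

def is_publicly_visible(permalink: str, comment_id: str) -> bool:
    """Fetch the comment's permalink without authenticating and look for it."""
    resp = requests.get(f"https://www.reddit.com{permalink}.json",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    listings = resp.json()
    # listings[1] holds the comment tree under the linked post (assumption).
    for child in listings[1]["data"]["children"]:
        data = child["data"]
        if data.get("id") == comment_id:
            # Removed comments typically render as "[removed]" to outsiders.
            return data.get("body") not in ("[removed]", "[deleted]")
    return False  # not present at all: treat as removed
```

A bot author only needs to call something like this after posting and repost or switch accounts when it returns False, which is why shadow removal mostly fools genuine users rather than bots.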
> And yes, there are successful bot authors who know this -- but there are also many script kiddies pulling things off Github to run themselves.
There are not nearly as many of these as there are real individuals caught in the dragnet. Either way, it doesn't justify building a system that compromises everyone's values in order to rid the system of a few troublesome bots.
> Right now those users are not in the room, and you're talking about secretly removing them from other conversations on the basis that they are hateful.
>> Okay, how would you suggest we, as a project/organization, best address that? We have zero influence over Reddit's policies or systems. If we do nothing, we solve neither the problem of hate nor the problem of secrecy. There's an argument in there that doing nothing at least doesn't make things worse, which has merit, but it also assumes that these people are looking for conversations or to provide input, which is often not the case.
It's possible to make something worse if you don't know what you are doing. In that case, doing nothing is better than doing something.
You should reverse course and not build systems atop secret removals. Shadow moderation deceives millions of users. We should be working to eliminate that, not expanding it. You can instead advocate for transparency: build systems that show users how they are being shadow moderated. Then, once secretive moderation is no longer happening, you can build whatever user lists you want.
So, my understanding of the "backfire" argument is not so much that it will create martyrs, though that may sometimes be the case; it's more that censorship-type laws may be enforced unequally or used to further marginalize already marginalized groups, depending on who is in power. Think of things like community or individual movements to reclaim certain demeaning words.
For a real-world example, check out Simon Tam (an Asian American) and his fight to trademark his own band name, "The Slants." He ended up in the Supreme Court because the USPTO refused the registration on the grounds that the name was demeaning to Asian Americans, despite the fact that the band chose it for themselves. (He wrote an interesting book about his experiences too.)
I think Nadine Strossen (former president of the ACLU) has a great take on the nuances of advocating for free speech and why censorship is dangerous for civil rights in her semi-recent book, if you're interested in checking it out.
You're asking, how do anonymous sources end up influencing platform features, right?
Well, lots of ways. Moderators are anonymous. One way that comes to mind is the feature discussions in r/ModSupport. Only moderators can participate there, but that isn't apparent to users: if you aren't a moderator and you comment there, your comment will be removed without notification. I mention this in my last comment in that thread, which did not receive a reply.
More generally, anonymous moderators have a lot of sway over this platform. There are 100,000 communities run by volunteers. They went on strike once and that was a big deal. A CEO left over it and the founder returned.
The oversight for all of this is you and me, the userbase, which is even larger. When users become aware of something, moderators do adjust. But we can also be part of the status quo: moderators mostly take action on user reports, and when you or I report a comment and it is silently removed, that removal traces back to us.
Reddit isn't the first forum to have a large system of volunteer moderators. AOL had one too, and I'm sure Facebook's is larger than Reddit's. Both Reddit and Facebook appear to use outsourcing and volunteers to do the work, and I would guess that others work the same way (Parler, Gab, TikTok, etc.). I'm not sure about Twitter. Part of me suspects that Twitter's new owner will arrive and bring in even more secret removals and community moderation. That might further solidify the capturability of social media.
The problem, in my view, comes when you combine community moderation and shadow removals. Either one on its own does not seem problematic to me, because the abuse is hard to scale: if only admins can shadow remove content, then they have to pay to do it. That said, it might not be possible to force platforms to behave this way with legislation. In that case, to really get the word out to users, you either need an effective awareness campaign or tools that bring Reveddit-like transparency to other platforms.
What I would do, if I had the time, is make tools for each platform that show users which of their content was removed, or make clear how much it was viewed, to the extent that's possible. I think it's possible for all public platforms. I'm 99% sure Facebook would try to sue over this, like they did to Power.com, so I'd want to be prepared for that before working on such a tool. A rough sketch of the core check for Reddit follows.
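For Reddit, the heart of such a tool fits in a few lines: list a user's recent comments from their public profile, then see whether each one still appears in its thread when viewed logged out. The endpoints, field names, and "[removed]" heuristic below are assumptions drawn from Reddit's public `.json` pages, and the username is hypothetical.

```python
# Hedged sketch of a Reveddit-style transparency report for one user.
# Assumes the public profile listing still shows a user's comments after a
# moderator removal, while the thread view does not; field names and the
# "[removed]" signal are assumptions, not a definitive spec.
import requests

HEADERS = {"User-Agent": "transparency-report-sketch/0.1"}

def recent_comments(username: str, limit: int = 25) -> list[dict]:
    """Pull the user's latest comments from their public profile listing."""
    url = f"https://www.reddit.com/user/{username}/comments.json?limit={limit}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]

def visible_in_thread(comment: dict) -> bool:
    """Check whether the comment still appears in its thread for a logged-out viewer."""
    resp = requests.get(f"https://www.reddit.com{comment['permalink']}.json",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    for child in resp.json()[1]["data"]["children"]:
        data = child["data"]
        if data.get("id") == comment["id"]:
            return data.get("body") not in ("[removed]", "[deleted]")
    return False

def report(username: str) -> None:
    """Print each recent comment alongside its best-guess visibility."""
    for comment in recent_comments(username):
        status = "visible" if visible_in_thread(comment) else "possibly removed"
        print(f"{comment['permalink']} -> {status}")

# report("example_user")  # hypothetical username
```

Other platforms would each need their own version of `visible_in_thread`; that lookup is the part that varies, while the idea of comparing the author's view to the public's stays the same.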
That's my 2c. I reserve the right to change my mind about all of this. There is so much to know, and every day I learn something new. This does not even scratch the surface, for example, of the debates lawyers have over free speech. I've recently been getting into <em>HATE: Why We Should Resist it With Free Speech, Not Censorship</em> by Nadine Strossen, who was president of the ACLU from 1991 to 2008. I think people in her network would be shocked to discover what's happening to everyday people on social media.
I’ll just leave this here since you seem like the sort who’d be interested.
https://www.amazon.com/HATE-Should-Resist-Censorship-Inalienable/dp/0190859121