
Have you considered that maybe de-platforming people on the fringe and forcing them into their own private silos actually ends up boosting their message?

When you try to be the goody two-shoes and block anything and everything that could potentially be offensive to someone, you just make controversial opinions more intriguing. At least I want to see what the big fuss was about and determine for myself if such banishment was actually justified. More often than not it feels like too harsh a punishment, which in turn makes it more likely that I go look for the next banished person's stuff, and so on.

One can then easily get directed to one of these silos where there are no opposing arguments at all. At least when someone acts out on a public forum, the majority of users can rein them back in line and prevent further indoctrination. However, this last part is a hard pill to swallow for most, since they don't want their own bad opinions to be called out.



I mean, it's worth _considering_, but it does not appear to be true. See reddit; reddit has gone through a number of waves of banning abominable subreddits. In general the result is that the most extreme members create a nightmarish reddit clone which no one else cares about, and the rest disperse.


How would you know when the very existence of these new silos is hidden from you?

This is like saying "our model recognizes 99% of the AI-generated images", which leaves out that you don't know the actual amount: when your model fails to recognize that an image in the wild was generated by AI, you don't know that it was generated by an AI.


... I mean, it's not hidden at all. If you want (you probably do not; they are astonishingly obsessive and horrible) to find the alternative reddit clones that people fled to when fatpeoplehate or the Nazi subreddits or the worst of the TERF subreddits or whatever were banned, well, they're right there; they are not a secret.


> When you try to be the goody two-shoes and block anything and everything that could potentially be offensive to someone

This is intentionally trivializing the actual approach and making it seem as arbitrary and low-impact as possible. I certainly wouldn't support a ban on "everything that could potentially be offensive", and I haven't seen a proposal for one. I do support a ban on violent far-right extremist movements on social media platforms, though, because they coherently use these platforms as a venue for harassment, recruitment, and messaging.

The "marketplace of ideas" ideology and the "don't feed the trolls" tactic don't actually work in practice. It's the Sartre quote. Having a public policy debate with, for example, an ethnonationalist is a victory for the ethnonationalist in itself. They don't have to "win" the debate; they've won by getting you to have it in the first place.

Bans do work. Reddit used to have a serious problem with extremist antifeminists and literal, self-identified neo-Nazis brigading semi-related posts in other subreddits. Banning the extremist subs had a huge impact in reducing it! You don't have to give people a forum to self-organize against your other users.

Or like, what is Milo Yiannopoulos up to these days? His influence and reach shriveled into insignificance after he got banned from everything a few years ago. The idea that the best way to combat extremism is by debating extremists is itself a particular ideology. It is not a pragmatic, goal- or result-based approach to moderation, nor an abstention from making ideological decisions about moderation.


Deplatforming people doesn't boost their message, though. You don't get the Streisand effect when it's 1,000 trolls instead of one famous person. Also, the marketplace of ideas just hasn't proven effective at stopping harassment and worse. There's nothing illogical about what you've said, but the real-world data just doesn't support your conclusions.



