Americans overestimate how many social media users post harmful content

Posted by bikenaga 10 hours ago

Comments

Comment by Apreche 9 hours ago

This is one of those studies that presents evidence confirming what many people already know. The majority of the bad content comes from a small number of very toxic and very active users (and bots). This creates the illusion that a large number of people overall are toxic, and only those who are in deep already recognize the truth.

It is also why moderation is so effective. You only have to ban a small number of bad actors to create a rather nice online space.

And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.

Comment by biophysboy 8 hours ago

It also explains why large platforms can be so toxic. If there were a sport with 1,000 players, you would need 100 referees, not one. At scale, all you can really do is implement algorithmic solutions, which are much coarser and can be seriously frustrating for good-faith creators (e.g. YouTube demonetization).

Arbitrators are good! They can be unfair or get things wrong, but they are absolutely essential. It boggles my mind how we decided we needed to re-learn human governance from scratch when it comes to the internet. Obviously the rules will be different, but arbitrators are practically universal in human institutions.

Comment by nradov 4 hours ago

The stakes are much lower on social media. If a referee makes a bad call then I might lose the game so it's worth paying for sufficient and competent officials. But when I see offensive content on social media I just block it and move on with no harm done. As a user the value of increased governance is virtually zero.

Comment by RiverCrochet 9 hours ago

Any site with UGC should show posting frequency next to a poster's name, everywhere it appears. If a post is someone's 500th for the day, that provides a lot of valuable context.
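
A rough sketch of what that could look like (Python; the post log and the names are hypothetical, not any real site's schema):

    from datetime import datetime, timezone

    # Hypothetical post log: (author, posted_at) pairs.
    posts = [
        ("alice", datetime(2025, 1, 3, 9, 15, tzinfo=timezone.utc)),
        ("bob",   datetime(2025, 1, 3, 9, 16, tzinfo=timezone.utc)),
        ("bob",   datetime(2025, 1, 3, 9, 17, tzinfo=timezone.utc)),
    ]

    def posts_today(author, now):
        # Count the author's posts on the current UTC day.
        return sum(1 for a, t in posts if a == author and t.date() == now.date())

    def tag_username(author, now):
        # Render e.g. "bob (2 posts today)" next to the name.
        return f"{author} ({posts_today(author, now)} posts today)"

    print(tag_username("bob", datetime(2025, 1, 3, 12, 0, tzinfo=timezone.utc)))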

Comment by cosmic_cheese 8 hours ago

Post-to-reply ratio, average message length, and average message energy (negative, combative, inflammatory, etc.) provide decent signal and would be nice to see too. Most trolls fall into distinct patterns across those.
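
Something like this per account (a sketch; the toxicity score is a stand-in for whatever classifier a platform actually runs, and all names here are made up):

    from dataclasses import dataclass

    @dataclass
    class Message:
        text: str
        is_reply: bool
        toxicity: float  # 0..1, from some upstream classifier (assumed)

    def troll_signals(messages):
        # Per-account signals: post:reply ratio, average length, average "energy".
        posts = sum(1 for m in messages if not m.is_reply)
        replies = len(messages) - posts
        return {
            # Broadcasts constantly but rarely converses -> high ratio.
            "post_reply_ratio": posts / max(replies, 1),
            "avg_length": sum(len(m.text) for m in messages) / len(messages),
            # How hostile the classifier scores their messages on average.
            "avg_toxicity": sum(m.toxicity for m in messages) / len(messages),
        }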

Comment by chiefalchemist 8 hours ago

I’m of the belief that HN would benefit from showing a user’s up-votes and down-votes, and perhaps even the posts they occurred in. Also, limit down-votes per day, or at least make them cost karma. There is definitely an “uneven and subjective” distribution of down-votes, and it would be healthy to add some transparency.

Comment by SchemaLoad 9 hours ago

One of the best things platforms have started doing is showing an account's country of origin. Telegram started doing this this year, using the user's phone-number country code when they cold-DM you. When I see a random DM from my own country, I respond. When I see it's from Nigeria, Russia, the USA, etc., I ignore it.

It's almost 100% effective at highlighting scammers and bots. IMO all social media should show a little flag next to usernames showing where the comment is coming from.

Comment by BoiledCabbage 8 hours ago

Yes, but as soon as scammers find their current methods ineffective, they will swap to VPNs and find a way to get "in-country" phone numbers.

There is a fundamental problem with large-scale anonymous (non-verified) online interaction, particularly in a system where engagement is valued. Even verified isn't much better if it's large-scale and you push for engagement.

There are always outliers in the world. In their community they are well known as outliers, and most communities don't have anyone that extreme.

Online, every outlier is now your neighbor. And to others, that "normalizes" outlier behaviors. It pushes everyone to the poles: either encouraged by more extreme versions of people like them, or repelled by more extreme versions of people they oppose.

And that's before you get to the intentional propaganda.

Comment by SchemaLoad 8 hours ago

In-country phone numbers are quite hard to get, since they have to be activated with ID. Sure, scammers could start using stolen IDs, but that's already a barrier to entry. And you're limited in how many phone numbers you can register this way.

Presumably, with further tie-ins to government services, one would be able to view all the phone numbers registered in their name to spot fraud and deactivate the numbers they don't own.

Comment by didgetmaster 9 hours ago

It is very much like crime in general. The vast majority of crimes each year are committed by a tiny minority of people. Criminals often have a rap sheet as long as your arm, while a huge percentage of the population has never had a run-in with the law except for a few traffic or parking tickets.

While crime is definitely a major problem, especially in big cities, it only takes a few news stories to convince some people that almost everyone is out to get them.

Comment by themafia 8 hours ago

> this is why for-profit platforms are loath to properly moderate

They measure the wrong things. Instead of measuring intangibles like project outcomes or user sentiment, they measure engagement by time spent on site. It's the Howard Stern show problem at "hyper scale."

> A social network

Given your points we should probably just properly call them "anti-social networks."

Comment by kelseyfrog 8 hours ago

I've always wondered who these people are, like demographically.

We hold (or I do at least) certain stereotypes of what type of person they must be, but I'm sure I'm wrong and it'd be lovely to know how wrong I am.

Comment by basilgohar 8 hours ago

Paid posters from lower-income countries, or Israel's Unit 8200.

Comment by kelseyfrog 8 hours ago

Interesting idea. Any numbers to back it up?

Comment by quantified 8 hours ago

It doesn't take a lot of pee to spoil the soup.

Comment by lanfeust6 8 hours ago

Algorithms and perverse incentives are currently boosting that signal, however. Take this story from The Atlantic, for instance: https://www.theatlantic.com/ideas/2025/12/american-anti-semi...

"Last week, the Yale Youth Poll released its fall survey, which found that “younger voters are more likely to hold antisemitic views than older voters.” When asked to choose whether Jews have had a positive, neutral, or negative impact on the United States, just 8 percent of respondents said “negative.” But among 18-to-22-year-olds, that number was 18 percent. Twenty-seven percent of 18-to-22-year-olds strongly or somewhat agreed that “Jews in the United States have too much power,” compared with 16 percent overall and just 11 percent of those over 65."

It's easy to get exposed to extreme content on Instagram, X, YouTube, and elsewhere. Incendiary content leads to more engagement. The algorithms ain't alright.

Comment by SilverElfin 6 hours ago

One danger is that the sheer visibility of toxic people can actually create large numbers of genuinely toxic people. For example, when mainstream influencers or politicians endorse racist views, even indirectly, it can give others permission to start saying the same things. Then that pushes the other side further toward its own extreme. And so on.

Comment by thaumasiotes 8 hours ago

> And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.

How so? It's not like Facebook charges you to post there.

Comment by nemomarx 8 hours ago

The idea, I think, is that other users engaging with them drives views. Without trolls to dunk on or general posts to get mad about, why go on Twitter?

Comment by cosmic_cheese 9 hours ago

Furthermore, this illustrates well how that handful of trolls is eroding the mutual trust that makes modern civilization function. People start to get the impression that everybody is awful and act accordingly. If this is allowed to keep spiraling, the consequences will be dire.

Comment by energy123 8 hours ago

Bad content being shoved in our face is a symptom of the real problem, which is bad mechanics. Solutions that reform the mechanics (e.g. require a chronological feed instead of boosting by likes) are going to be more effective, less divisive, neutral by design, and politically/legally easier to implement.
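
The mechanical difference is tiny in code but huge in effect. A toy sketch (Python, not any platform's actual ranking):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created: datetime
        likes: int

    def engagement_feed(posts):
        # Boost-by-likes: whatever provokes the most reaction floats to the
        # top, which over-serves the loudest few percent of users.
        return sorted(posts, key=lambda p: p.likes, reverse=True)

    def chronological_feed(posts):
        # Neutral by design: newest first, nothing gets amplified.
        return sorted(posts, key=lambda p: p.created, reverse=True)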

Comment by ok123456 9 hours ago

This is intentional: make people think there's nothing online except harmful content, then propose a regulatory solution, which creates a barrier to entry. It's Meta trying to stop any insurgent network.

Comment by goalieca 8 hours ago

It’s also Meta overstating the power of influence. Why would they do that? Because it’s good marketing for them to sell a story about how their ad services can be used for highly effective mass influence.

Comment by energy123 8 hours ago

These academics are being bribed by Meta? What are you implying here?

Comment by ok123456 3 hours ago

There are plenty of sycophantic academics who will do anything to advance their career. Take a look at Alex "Lex" Fridman.

Comment by barfoure 8 hours ago

Turns out the kids are alright after all!

Comment by bikenaga 10 hours ago

Abstract: "Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that they consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data shows that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people's perceptions and interactions with social media may undermine social cohesion."

Comment by daveguy 9 hours ago

Ahhhh. So maybe it's the platforms and their algorithms promoting harmful content for attention that are to blame? And how many of the platforms even want to admit the content they are pushing is "harmful"? Seems like two elephant-sized sources of error.

Comment by Me1000 9 hours ago

The premise of this study is a bit misguided, imho. I have absolutely no idea how many people _post_ harmful content. But we have a lot of data that suggests a _lot_ of people consume harmful content.

Most users don't post much of anything at all on most social media platforms.

Comment by skybrian 8 hours ago

Saying it's "algorithms" trivializes the problem. Even on reasonable platforms, trolls often get more upvotes, reshares, and replies. The users are actively trying to promote the bad stuff as well as the good stuff.

Comment by darth_avocado 9 hours ago

Isn’t that how news works? Sensational stuff sells, so you only see the extremes. Pretty much the same with social media.

Rage = engage

Comment by exceptione 9 hours ago

Open YouTube in a fresh browser profile behind a VPN. More than 90% of the recommended videos in the sidebar are right-wing trash: COVID conspiracies, nut-jobs spouting Kremlin nonsense, alt-right shows.

The baseline is, in the end, anti-democracy and anti-truth. And Google is heavily pushing for that. The same goes for Twitter. They are not stupid: if they know you and think they should push you in a more subtle way, they aren't going to bombard you with Tucker Carlson. Don't ever think the tech oligarchy is "neutral". Just a platform, yeah right.

Comment by bdangubic 9 hours ago

> Baseline is in the end anti-democracy and anti-truth. And Google is heavily pushing for that.

Google et al. do not give a hoot about being “left” or “right”; they only care about profit. Zuck tattooed on a rainbow flag while Biden was president and is currently a macho-man crusader. If YouTube could make money from videos about peace and prosperity, that’s what you’d see behind the VPN. Since no one watches that shit, you get Tucker.

Comment by freejazz 8 hours ago

> Zuck tattooed on a rainbow flag while Biden was president and is currently a macho-man crusader.

Funny how you say this but insist you're not the one being fooled right now!

Comment by bdangubic 7 hours ago

Fooled in what way? I haven’t used YouTube or any social media since 2019-ish. The last time I saw anything on YouTube was probably 2018-ish (other than my kid showing me volleyball highlights :) )

Comment by expedition32 8 hours ago

Normal people are perhaps less inclined to make videos about chemtrails and lizards.

I was always intrigued by Twitter. After the novelty wears off, who the hell wants to spend hours every day tweeting?

Comment by AuthAuth 9 hours ago

Isn't it still an accurate perception of moral decline? Even if it's only 3% creating the misinfo and toxic posts, it's still 47% who are sharing them, commenting on them, and interacting positively with them. That gives the, in my opinion correct, perception that there is moral decline.

Comment by RiverCrochet 9 hours ago

You have to eliminate this counterpoint for the evidence to fully support your perception: people share stuff on social media because it's easier and actively encouraged.

Here's the counterpoint to that, though: people share stuff on social media not just because it's easy, but because of the egocentric idea that "if I like this, I matter to the world." The egocentrism (and your so-called moral decline) started way earlier than that, though. It goes back to the 1990s, when talk shows became the dominant morning and afternoon TV programming. Modern social media is simply Jerry Springer on steroids.

Comment by cosmic_cheese 8 hours ago

It doesn't indicate moral decay at all. It just confirms what we already know about the human psyche being vulnerable to certain types of content and interactions. The species has always had a natural inclination towards gossip, drama, intrigue, outrage, etc.

It only looks like "decline" because we didn't used to give random people looking to exploit those weaknesses a stage, spotlight, and megaphone.

Comment by ggm 9 hours ago

If more people were obligated to undergo KYC to get posting rights, fewer people would be able to credibly claim to be other than they are.

If more channels were subject to moderation, and moderators incurred penalty for their failure, channels would be significantly more circumspect in what they permitted to be said.

Free speech reductionists: Not interested.

Comment by JuniperMesos 9 hours ago

Man, I'm against existing American KYC laws in the context of transferring money. I certainly don't want to see them expanded to posting online.

Comment by RiverCrochet 8 hours ago

KYC should work both ways. If a social media network needs to know my real name and address, I should know the real name and address of everyone running the social media network.

Comment by ggm 8 hours ago

Seems fair.

Comment by juggerlt 8 hours ago

Hi. Are we posting from Eglin or Tel Aviv today?

Comment by ggm 8 hours ago

Australia. The land of the VPN nowadays.

Comment by makeitdouble 8 hours ago

This study seems to be playing with what toxicity means.

Does the 43% cited at the top of the piece match the criteria they use when digging deeper in the study?

Their specific definition of toxicity is in the supplementary material, and honestly I don't think it matches the spectrum of what people perceive as toxic in general:

> The study looked at how many of these Reddit accounts posted toxic comments. These were mostly comments containing insults, identity-based attacks, profanity, threats, or sexual harassment.

That's basically very direct, ad hominem comments. An example cited:

> DONT CUT AWAY FROM THE GAME YOU FUCKING FUCK FUCKS!

Also, why judge Reddit on toxicity but not on fake news or any other social trait people care about? I'm not sure what the valuable takeaway from this study is: only 3% of Reddit users will straight-up insult you?

Comment by glitchc 8 hours ago

Profanity is toxic by definition? Since when?

Comment by tsunamifury 9 hours ago

Any basic nodal theory will help you understand that it's not about how many people post; it's about their reach and their correlation with viewership across the overall graph.

"A few bad apples spoil the whole bunch" is illustrated to an extreme in any nodal graph or community.

So it's more about how much toxic content is pushed, not how much is produced. At the extreme, a node can be connected to 100% of other nodes and be the only toxic node, yet make the entire system toxic.
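
To make that concrete, a toy star graph (Python, made-up numbers, no real platform data):

    # One toxic hub that everyone follows, plus 99 quiet normal users.
    followers = {"troll": [f"user{i}" for i in range(99)]}
    posts_by = {"troll": 1, **{f"user{i}": 1 for i in range(99)}}

    toxic_authors = {"troll"}
    toxic_posts = sum(posts_by[a] for a in toxic_authors)
    toxic_share = toxic_posts / sum(posts_by.values())
    exposed = {u for a in toxic_authors for u in followers.get(a, [])}

    print(f"{toxic_share:.0%} of posts are toxic")          # 1%
    print(f"{len(exposed) / 99:.0%} of users are exposed")  # 100%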

Comment by bongodongobob 9 hours ago

Isn't this just saying they are bad at estimating? It's not like any of these people did any rigorous studies to come to their conclusion.

Comment by JuniperMesos 9 hours ago

> When US-Americans go on social media, how many of their fellow citizens do they expect to post harmful content?

Just because an American citizen sees something posted on social media in English, it doesn't mean that a fellow American citizen posted it. There are many other major and minor Anglophone countries, and English is probably the most widely spoken second language in the history of humanity. Not to mention that even someone who does live in America, speak English, and post online is not necessarily a US citizen.