Social media companies, and their users, must look in the legal and moral mirror


The community holds the media to a standard it so often fails to uphold itself, and it’s becoming a problem that needs attention.

It is right that the media and journalists are held to account in their reporting; that the public apply pressure to ensure accuracy, respect and facts are upheld.

And despite what the reputation of the media may be at times, having worked in numerous newsrooms over the years, I can tell you that the checks and balances that go into stories are far more detailed and thorough than you probably imagine.

Journos will not always get it right. There will almost always be a reason for that which the public never sees.

And sometimes, editors and media organisations as a whole make bad calls. In a split-second decision, the words ‘run it’ can come back to haunt anyone with such power.

But the explosion of social media over the last decade has created a much bigger problem than the odd rogue journalist or bad call: unverified or unauthorised information, often extremely unhelpful to police investigations or ongoing operations, floods into the public sphere well before the media reports it.

And it’s becoming a serious issue.

Have you ever wondered why a particular element of a story is not reported by a journalist? Ever wondered why nothing came of that huge police and ambulance presence you spotted at that house down the road the other day? What about that court case, why did things suddenly go quiet?

No, it’s not because journalists are asleep at the wheel. It’s because the media adheres, on a daily basis, to a whole range of requests, embargoes and operational procedures that relate to stories.

The legal, social and moral ramifications of breaking those commitments are serious.

Should the media get it wrong, the community piles on: we’ve all seen the commentary.

Yet when random people, often hiding behind fake names or anonymous posting options, go off early on social media, it seems to just get passed by as ‘one of those things’.

Case in point: An incident at Nepean Village shopping centre early last week.

Police were called to a disturbance at the shopping centre, just 72 hours after the Bondi Junction attack and on the same night as the Fairfield terror incident.

It was, on the whole, a fizzer of an incident: A fight between two people, with a pair of pliers thrown in for good measure.

Yet hundreds of people flooded community groups on Facebook to declare there had been another stabbing at a shopping centre. Said with conviction and certainty, but with zero facts to back it up.

Can you imagine the outrage if a journalist had done the same thing?

The Weekender checked the story, verified there was nothing in it, and moved on. We’re criticised for that too, because people claim we should also report when potential incidents turn out to be nothing, not just when they are legitimate issues.

Like last Sunday, when someone fired in a message to us declaring there had been a robbery at Starbucks. Huge police presence, apparently. Turns out a bunch of officers were literally getting a coffee. Funny, but we checked it regardless, and there was nothing to report.

These checks and balances do not exist in the social media world.

And while the likes of Meta, X and TikTok claim to have decent procedures in place to check facts and remove offensive content, those policies and procedures are weak at best.

Facebook, X and TikTok let people run rampant on their platforms with misinformation, wild theories and opinions, and often disturbing footage that no traditional media outlet would ever dare to run.

It’s been interesting this week to see the debate unfold over whether Elon Musk and X should remove violent videos of the recent terrorist stabbing from the platform.

The traditional media would never show such content in such graphic detail, and would be torn to shreds if it did. And yet many believe X has every right to keep the videos up, all under the freedom of speech argument.

Social media may be one of the newer forms of media but it’s not all that different from one of the oldest, radio.

Think about all the checks and balances that go into the world of live radio.

Apart from all the usual policies that apply before that ‘on air’ button is hit, any caller who phones in goes through a call screener or producer, who decides whether what the person has to say is legitimate and worthy of going to air.

If they are put to air, there’s a 10-second delay to ensure the caller can be dumped should they say anything legally or morally problematic, or break any of the countless codes that govern the industry.

If something does go to air that’s wrong, there are enormous ramifications and processes to go through.

And yet, in the equivalent circumstances on social media, those behind Facebook, X and TikTok let the caller go straight to air with no pre-check and no delay. They might delete it later; they might (and probably will) not.

No matter what your argument is about freedom of speech, there is no excuse for some of the content that makes its way onto social media, and stays there.

Which brings me back to the point at the top of the column: people like to hold the traditional media to account, but they don’t seem all that concerned about the misinformation they are reading on socials every day.

Where is the daily outrage about incorrect facts or morally wrong posts? Why was there so much outrage about Channel Seven naming the wrong person as the offender at Bondi Junction, and not equal outrage at social media, where his name first appeared? Where is the 15 minutes of TV every week to pick apart the rights and wrongs?

Is it really because while people like to claim they take the high road, they mainly choose the low one; and social media offers them such a road toll-free?

Perhaps, which means the general public is part of the problem. That’s you and me.

Social media is a cesspit. Yet most of us go back to it every day. If it were a newspaper, we would have unsubscribed long ago.
