WHEN the Collins Dictionary anointed “fake news” its 2017 word of the year, it was presumably because the publishers deemed the term so important it didn’t really matter that it wasn’t actually a single word.

Misinformation is hardly new, but the platforms it spreads on these days certainly are. Google, Facebook and friends provide an unprecedented capacity for the misleading, or downright dishonest, to be disseminated and amplified.

In April 2018, Facebook CEO Mark Zuckerberg acknowledged to the US Congress that his organisation had, among other things, been “too slow to spot and respond to Russian interference” in the 2016 US presidential election using the social media site.

The company, which has also faced allegations of anti-right-wing bias, has made various attempts to crack down on fake news since 2016.

Earlier this month, it advertised for two “news credibility specialists”, people with “a passion for journalism, who believe in Facebook’s mission of making the world more connected”.

Within a day, following reports in other media, the ad had been replaced with one for two “news publisher specialists” and the passion for journalism had gone.

Whatever the job title, it’s hard to see what two people could do to curb the unruly behemoth that is the internet.

And would it be a good thing if they could? Do we want private corporations such as Google, Facebook and Co to become the effective “censors-in-chief” of our networked age?

Facebook, it’s worth remembering, is the platform that for many years refused to allow women to post pictures of their breastfeeding babies, on the grounds that the images breached its nudity provisions.

That said, there’s little doubt about the potential and actual harms of the proliferation of fake news, for health as much as for politics.

Fake health news online runs the gamut from AIDS denialism to breathless announcements of miracle cures for cancer.

A recent article in Undark magazine – which I, ironically enough, first saw on Facebook – outlines some of the risks posed by this kind of misinformation.

Indian engineer Biswaroop Roy Chowdhury, for example, received 380 000 views within weeks for a video he posted on YouTube arguing that HIV did not exist and that antiretrovirals were the real cause of AIDS.

When questioned by Undark, Chowdhury claimed that 700 people had been in touch to tell him they had stopped their HIV medication as a result.

In a similar vein, the Independent last year examined the proliferation of quackery in health news shared on Facebook.

The most popular cancer story on the social networking site over the previous year was one claiming dandelion root cured cancer “better than chemotherapy”.

The article had received more than 1.4 million Facebook likes, shares or comments, despite there being no actual evidence to support the root’s efficacy as a cancer treatment.

Perhaps the solution lies not in censorship per se, but in recognising the ways online platforms differ fundamentally from the media outlets they are replacing.

Traditional media outlets provided a curated view of what was happening in the world to a broad sector of the population. The system was by no means perfect but it did at least mean people across society were able to form their views based on pretty much the same information.

In today’s more fragmented world, each of us can now inhabit our own personal “filter bubble”, exposed only to information that confirms our pre-existing beliefs.

If you frequent antivaccination sites online, a Google search for “vaccine safety” will prioritise news from those sites rather than, say, the MJA or the Cochrane Collaboration.

If your Facebook friends believe doctors are the tools of Big Pharma, the news items you see on the site will be more likely to come from Natural News than the New York Times.

The online platforms have an interest in skewing what you see in this way. It’s how they make their money.

The more content is tailored to your particular interests and beliefs, the more valuable the adjacent space is to the advertisers who provide the sites’ revenue.
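The self-reinforcing loop is simple enough to sketch. The toy Python below is an entirely hypothetical illustration of engagement-weighted ranking (the platforms’ real systems are proprietary and vastly more complex): stories on topics the user has engaged with before float to the top, and every further click nudges those topics higher still.

```python
# A minimal sketch of engagement-weighted feed ranking, illustrating how
# personalisation can create a "filter bubble". All names, topics and
# weights here are hypothetical; real platform ranking systems are not
# public and involve far more signals than this.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    source: str
    topic: str

def rank_feed(stories, user_topic_affinity):
    """Order stories by the user's past engagement with each topic.

    user_topic_affinity maps a topic to a score derived from prior
    clicks, likes and shares; higher-scoring topics rank first, so the
    feed increasingly mirrors what the user already engages with.
    """
    return sorted(
        stories,
        key=lambda s: user_topic_affinity.get(s.topic, 0.0),
        reverse=True,
    )

if __name__ == "__main__":
    stories = [
        Story("Dandelion root 'cures' cancer", "Natural News", "alt-health"),
        Story("Trial results for new chemotherapy", "NYT", "mainstream-medicine"),
    ]
    # A user who mostly clicks alt-health content sees it ranked first,
    # and each further click raises that affinity score again.
    affinity = {"alt-health": 0.9, "mainstream-medicine": 0.2}
    for story in rank_feed(stories, affinity):
        print(f"{story.source}: {story.title}")
```

Even in this crude form, the feedback is visible: whatever the user clicks most is shown first, which makes it likelier to be clicked again.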

If we really want to undermine the spread of fake news, we’re going to need to attack the algorithms that so effectively promote it.

It’s hard to see Google and Facebook getting on board with that.

Jane McCredie is a health and science writer and editor based in Sydney.



3 thoughts on “Fixing fake health news: will Google and Facebook help?”

  1. Leviathan says:

    To Randall Williams: FB can regulate the content of “suggested posts” very easily. Suggested posts are paid advertising. Anybody with an advertiser account on FB can do it. All Facebook has to do is employ a legal department and whatever specific expertise is required, and not allow a paid post unless and until it is deemed not misleading. That takes money, which erodes profitability. The profit motive is the only obstacle: they own their own systems and are responsible for the consequences of everything they cause to be published.

  2. Randal Williams says:

    In terms of medical fake news, there is an abundance on Facebook, often under the heading “suggested post”. Under the guise of a medical post or article, there follows an advertising promotion for some sort of dodgy health product, with unsubstantiated claims of benefit. When I see them I make a comment to the effect that there is no evidence for the claims made, but such adverse comments are often quickly removed. Anyone can put anything they like on FB provided it is not offensive or defamatory. I don’t see FB being able to regulate this in any way.

  3. Anonymous says:

    Facebook and similar social media companies are private for-profit publishers, making profit from advertising, similar to traditional newspapers. If someone writes a letter to a newspaper defaming another party, the newspaper can be sued, so they are generally as careful to filter private contributions to their platforms as they are with their own articles, to avoid legal consequences. The trouble with publishing quackery (and newspapers are at times guilty of publishing quackery, either uncritically or with poor research, citing “balance”) is that “harm” is something that is only judged after people start dying. Multinational publishers are also hard to pin down into one legal jurisdiction. The main reason social media have any sort of filtering (and it is often arbitrary, robotic and badly applied) is to reduce offence, thereby maximising the client base. There’s little point in hoping for a “socially responsible” approach from such firms until it affects their bottom line, and that will generally require legislation across multiple jurisdictions.
