Easy access to information and the internet has had a significant impact on society. From increasing overall cultural and news literacy to empowering people to stay connected, the information age has certainly transformed how society communicates. But it has also brought challenges, and the spread of misinformation is central among them.
The fear of misinformation was especially prevalent during the Covid-19 pandemic, when there was widespread global confusion about what the virus was, who could be affected, and how to prevent infection and death. Many non-medical professionals shared opinions about the virus that contradicted the guidance of trained medical professionals, causing not only chaos but a sense of distrust in the general community.
This is a perennial challenge for social media and public information platforms; companies and moderators struggle with the difficult balance between over-moderation that curtails free speech and under-moderation that allows misinformation to propagate.
Late last week, YouTube, one of the world’s largest online video sharing and media platforms, announced a new initiative on this front. In a blog post titled “New ways for licensed healthcare professionals to reach people on YouTube,” Dr. Garth Graham, Global Head of YouTube Health, explained: “When it comes to our health, people trust healthcare professionals to give us the best advice. But the opportunity that healthcare professionals have to inform and educate their patients largely stops at the clinic door. The reality is that the majority of healthcare decisions are made outside the doctor’s office, in the everyday lives of our patients […] Today, we’re announcing that for the first time, certain categories of healthcare professionals and health information providers can apply to make their channels eligible for our health product features that were launched in the US last year. This includes health source information panels that help viewers identify videos from authoritative sources and health content shelves that highlight videos from these sources when you search for health topics, so people can more easily navigate and evaluate health information online.”
Essentially, YouTube is attempting to elevate higher-quality health information on its platform, with the hope that the wider YouTube community will find more legitimate content and sources they can trust.
YouTube is certainly not the only platform going through significant growing pains on this front. Facebook (now known as “Meta”) has for many years faced significant scrutiny over how content is moderated across its platforms, from the Facebook application itself to Instagram. The company’s CEO and founder, Mark Zuckerberg, has repeatedly been pressed on this subject, especially given that the platforms reach nearly 2 billion people monthly.
Most recently, the dramatic purchase of Twitter Inc. by Tesla CEO Elon Musk has drawn significant media attention. One of Musk’s self-proclaimed motivations for purchasing the company was his dissatisfaction with how Twitter was moderating content and controlling the flow of information. In a post last week, Musk shared an open note addressed to “Twitter Advertisers”: “The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.”
Musk has concurrently announced the formation of a “content moderation council” that will review content and account reinstatement decisions, to ensure better alignment with Twitter’s mission and guidelines.
Indeed, this is also important with regard to health misinformation, as Twitter is a conduit for massive amounts of healthcare data and news. During the height of the pandemic, physicians and providers from all backgrounds used Twitter to share scenes from the frontlines, often offering their own experiences and advice on how to deal with the virus and other illnesses; the most common hashtags included #MedTwitter, #MedEd (“medical education”), and #FOAMed (“Free Open Access Medical Education”). Of course, this soon led back to the perennial problem: who should be trusted? Notably, Twitter boasts nearly 450 million monthly active users.
None of this content management work will be easy. The problem exists because of the constant tension among the right to freedom of speech, the propagation of truthful information, and public safety concerns. Finding the right balance among these factors is proving notoriously challenging for these companies. Indeed, the best thing users can do for themselves is ultimately to consult their own trusted, licensed, and trained medical professionals for any and all health problems. Nevertheless, the question of how best to transmit information is one of the most important problems leaders should be contemplating, as the future of our world truly depends on it.