On media and public policy concerns

In 2009, journalist Adam Penenberg published his book “Viral Loop: From Facebook to Twitter, How Today’s Smartest Businesses Grow Themselves.” In it, he discusses how companies like Facebook and Twitter scaled their businesses exponentially through technology: by sheer virtue of a viral product. A decade later, Professor Penenberg described the downside of that growth to my journalism class at NYU: fake news, or misinformation. The exponential growth of social media has brought misinformation via bots, spam accounts, purchased tweets and retweets, comment flooding and misleading context. And while this trend is global, it also profoundly shapes social media’s online safety and regulatory landscape.


An important case study: India’s recent concerns over content regulation. A popular social media platform was accused of failing to shut down more than 1,300 accounts that allegedly spread misinformation and hate speech during recent protests. The company maintained that it would not censor news outlets, journalists and activists, upholding principles of protected speech and freedom of expression. The state, meanwhile, held that the inflammatory content and misinformation violated national rules, and that the company had applied preferential treatment by taking stronger measures against similar instances in the West.


Media companies tread a fine line between preserving business interests and protecting international standards of freedom of opinion and expression. Given the growing popularity of social media, platforms need a robust public policy strategy that applies consistently across the region (and beyond).


In today’s globalised world, large tech platforms cannot realistically adhere to a different set of norms in every country. The key to a universal policy could be separating genuine misinformation (false news) from opinionated speech, and this needs to happen at the level of individual posts rather than whole accounts. The posting process can incorporate fact-checking algorithms built on tools like machine learning, or third-party fact-checking services (like Facebook’s program), while stronger tools like sentiment analysis could strengthen the identification of hateful conduct on the platform.

Certain companies are already testing ways to ensure that people have read an article before retweeting it, a useful feature for combating the spread of false information. This also links to Jack Dorsey’s recent speech about decentralising social media: allowing users to flag and categorise forms of misinformation would let them customise the kind of content they see while “cleaning up” the online space.

Of course, stricter technical checks are also necessary against the purchased posts, spam accounts and bots that pollute the online community. These combined efforts would allow platforms to better screen out false and hateful content in individual posts, while upholding freedom of speech for the activists and journalists who rely on social media.
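To make the post-level idea concrete, here is a minimal sketch of what screening an individual post at posting time might look like. It is a toy illustration, not any platform’s actual system: the training posts, labels, screen_post helper and 0.8 threshold are all hypothetical, and a real deployment would rely on far larger labelled datasets, more sophisticated models and human fact checkers making the final call.

# A minimal sketch of post-level screening (hypothetical data throughout).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy posts labelled by human fact checkers:
# 1 = flagged as misinformation, 0 = opinion or ordinary speech.
posts = [
    "Scientists confirm the election results were fabricated by machines",
    "Miracle cure eliminates the virus in 24 hours, doctors stunned",
    "I think the new farm policy will hurt small businesses",
    "In my opinion the protest coverage was one-sided",
    "Breaking: city water supply secretly poisoned, share before deleted",
    "Great turnout at the rally today, proud of my city",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple
# baseline standing in for the ML models platforms actually use.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def screen_post(text, threshold=0.8):
    # Score the individual post, not the whole account. Posts above
    # the threshold go to human fact checkers instead of being
    # removed automatically, leaving room for opinionated speech.
    prob = model.predict_proba([text])[0][1]
    return ("route to fact checkers" if prob >= threshold else "publish"), prob

print(screen_post("Leaked memo proves vaccines contain tracking chips"))
print(screen_post("I disagree with the government's handling of the protests"))

The design point sits in screen_post: the model scores each post on its own and routes high-probability cases to human reviewers rather than deleting accounts wholesale, which is what separates filtering false news from suppressing opinionated speech.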


 
 
 
