Future responsibilities of social media platforms

What is the role of social media platforms, and what is their responsibility for protecting the next generation?

It’s easy to forget that, before Facebook evolved into the global behemoth it is today, its precursor was built by a college student as a platform to rate women. These days – ethically and socially – a start-up with that focus wouldn’t fly.

Social media and modern tech infiltrate and influence every corner of our society – including education, politics, culture and business – in innovative and positive ways. It is social media’s dark side – disinformation, hate speech, cyberbullying, revenge porn, terrorism campaigns and sex trafficking – that prompts the questions of ethics and responsibility moving ahead. When statistics show that more than 13 million Australians say the internet is their main news source, the urgency only grows.

A recent study released by Safer Kids Online, undertaken by global digital security company ESET, outlined the increase in online activity among children aged 6 to 14 that has been attributed to the pandemic. Outside of home-schooling, the study identified YouTube as the most popular social networking space for this age group, followed by online gaming networks such as Minecraft and Roblox, which beat out other web-based social platforms.


If this indicates anything, it’s that social media and online entertainment aren’t going anywhere; they will remain an intrinsic part of our lives, and the dark side of social media will mature with them, amplifying the need for a holistic approach to online security.

The business models of social media platforms are perhaps the biggest challenge for content moderation and meaningful corporate social responsibility (CSR). These platforms succeed by maximising content creation, posting and sharing, and the constant activity of an immense number of users across the world means moderation must be automated through a combination of algorithms and human supervision.

Currently, social media platforms are self-regulating, and critics say they aren’t doing enough to eliminate the damaging and dangerous activity that occurs in their spaces. Exacerbating the conundrum are dynamic feedback loops called network effects, which enable the platforms to monetise activity that boosts their own bottom lines.

Further challenges identified by experts – in particular for stopping the spread of misinformation, propaganda and fake news – speak to the increasing skill of the creators themselves, referencing research from the lead-up to the 2020 US Presidential election.

Perpetrators’ adoption of robust algorithms boosts the velocity and reach of misinformation, making it harder to combat. It’s a cat-and-mouse game of technical expertise, often with bad actors who are anonymous and unidentifiable. Additional expert findings note that language complications – such as context and nuance – can hinder any algorithm’s ability to separate the ‘permissible’ from the ‘problematic’.

Just this year, the newly formed Select Committee on Social Media and Online Safety released a report criticising digital platforms for ‘touting’ their strong community standards policies in Australia. It echoes what legal experts in other countries are saying: the policy decisions currently made by social platforms offer only ‘lip service’ when it comes to CSR.

A major issue globally is the publishers-versus-platforms debate. Under Section 230 of the US Communications Decency Act, and indeed here in Australia, accountability is problematic: online social platforms aren’t deemed publishers and are therefore not responsible for their users’ posts.

Experts agree this offers social platforms the best of both worlds: Facebook can claim it is not responsible for what its users say, yet on the flip side it can delete posts that violate its own community standards. Freedom of speech shouldn’t shield platforms from the consequences; as things stand, in many cases it leaves the individual user and/or the victim to deal with any legal fallout.

If social platforms are reluctant to draw clear lines in their own policies, where is the transparency and consistency to enforce them? Prior to the global domination of the online social world, some industries had limited success with self-regulation, and it’s widely agreed that, in some instances, self-regulation worked better for everyone when the shadow of government enforcement loomed.

There is some indication that social platforms’ tolerance for misinformation may be waning post-pandemic and in the aftermath of the Black Lives Matter protests and commentary. Twitter has added ‘Get The Facts’ labels to questionable tweets, and other socials, including Facebook, are facing pressure to employ similar tactics.

Experts and academics are also floating other measures, such as reconfiguring algorithms to favour posts and information that are accurate and come from reputable, identifiable sources. Those who are positive about reform say the answer lies in a combination of technical solutions and human or societal intervention.

If history tells us anything, it’s that governments will inevitably become involved to some degree. Right now, there is room for social platforms to be proactive and to be more aggressive and diligent in their self-regulatory activity.

Some say there is also room for policy that serves accountability, free speech and consumer protection equally. However, there is a broadly assumed agreement that social media platforms should not be responsible for third-party content, even if they are powerful enough to shape perceptions and spread information globally in the blink of an eye.

It’s argued that this power should be harnessed to create policy and guidelines under which human rights and laws are respected and promoted. Fundamentally, any legal regulation should offer incentives for being proactive and assist in finding solutions, rather than dictating what social platforms can and can’t do.

Legal controls imposed by governments could, at the very least, be intrusive and defensive; at the extreme, they could destroy the fundamental operating principles that made the social media giants so successful in the first place. More effective policies, transparently enforced, could be expected to minimise any inevitable government involvement.

Self-regulation, ahead of any government action, should aim to reinforce the trusted landscape that has enabled digital platforms to thrive as they have. User confidence depends on it, especially when it comes to educating our future digital natives.

The good news is that, while it doesn’t negate the need for social media platforms to take more responsibility, the ESET study revealed that 84% of kids said they felt confident in their ability to avoid or respond to online risks, while remaining realistic about the fact that hacking and cyberbullying can occur. It’s agreed, though, that education and instilling user confidence in the social media landscape can only go so far.

Moving ahead, regardless of regulation, governments and big tech will have to work together for sustainable success and long-term survival. 


Author: Kelly Johnson

Kelly Johnson is the Country Manager for leading cybersecurity firm ESET.
