
Freedom of Speech vs. the Fate of Liberal Democracy

Discussions about the need to regulate social media platforms and their content escalated after the horrific shooting in Christchurch. Many countries, spearheaded by the European Union, had already covered considerable ground on the matter, but the Christchurch shooting and its live broadcast on Facebook carried the subject one step further.

Fifty people were killed on 15 March 2019 in shootings at two mosques in Christchurch. The gunman livestreamed the attack on Facebook. The video, showing him walking into a mosque and opening fire, ran for about 17 minutes and was viewed some 4,000 times before it was taken down.

According to Mia Garlick, Facebook’s director of policy for Australia and New Zealand, New Zealand Police alerted Facebook to the video shortly after the livestream began, and Facebook removed it right away, together with the shooter’s accounts. Yet hours after the shooting, despite this prompt intervention, copies of the video continued to appear on Facebook, YouTube and Twitter, raising new concerns about Facebook’s ability to effectively manage and remove harmful content on its platform.

Facebook says it removes any harmful or violent content as soon as it becomes aware of it, and YouTube says the same. Yet copies of the shooter’s video kept circulating even after its removal, triggering questions about how social media platforms handle offensive content. Are these companies doing enough to catch harmful content? “While Google, YouTube, Facebook and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” said Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization.[1]
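
Part of the problem is technical. A common way to catch re-uploads is to match new files against fingerprints (hashes) of known banned videos, but an exact hash changes completely if even one byte of the file differs, as it does after re-encoding, cropping or watermarking. The minimal Python sketch below is purely illustrative and assumes a hypothetical blocklist; it is not Facebook’s actual system. It shows why exact-hash matching alone lets slightly altered copies through, which is why platforms must invest in fuzzier, perceptual matching.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact SHA-256 fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist holding fingerprints of known banned videos.
banned_fingerprints = {fingerprint(b"...bytes of the original upload...")}

def is_banned(upload: bytes) -> bool:
    """Exact matching: catches byte-identical copies only."""
    return fingerprint(upload) in banned_fingerprints

exact_copy = b"...bytes of the original upload..."
reencoded = b"...bytes of the original upload..." + b"\x00"  # one byte differs

print(is_banned(exact_copy))  # True  -- an identical re-upload is caught
print(is_banned(reencoded))   # False -- a trivially altered copy slips through
```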

Following the New Zealand shooting, Facebook is now considering restrictions on who is allowed to go live on the platform, since the shooter used Facebook Live to broadcast the entire incident, footage of which later appeared on various other platforms.

Live content creates particular disadvantages for the platforms that carry it. By its nature, it cannot be reviewed beforehand, and a platform cannot control or act on what is being broadcast until it becomes aware of it, either on its own or through notification. Christchurch was not the first such incident: in 2016, a woman used Periscope to live-stream her own suicide, and in 2018 a woman was murdered by her boyfriend while broadcasting on Facebook Live.[2]

Following the shooting, Facebook met with New Zealand government officials to discuss what can be done to prevent such incidents from recurring. One solution under discussion is barring certain people from live-streaming entirely: users who have previously been reported, or who are known for concerning behavior, would not be given access to the live option. Facebook COO Sheryl Sandberg commented on this, stating: “We are exploring restrictions on who can go live depending on factors such as prior Community Standard violations.” Categorizing people based on their prior violations and restricting their social media use accordingly will almost certainly raise further ethical concerns and heated debate, as the sketch below suggests.
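
Facebook has not published what such a restriction would look like in practice. Purely as a sketch, an eligibility rule of the kind Sandberg describes might resemble the hypothetical check below; the Violation record, the 90-day window and the strike limit are all invented for illustration and are not Facebook’s criteria.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Violation:
    """Hypothetical record of a prior Community Standards violation."""
    when: datetime
    severity: int  # assumed scale: 1 = minor ... 3 = severe

def may_go_live(history: list[Violation],
                window_days: int = 90,
                max_recent: int = 1) -> bool:
    """Deny live access after any severe violation, or after more than
    `max_recent` violations inside the recent window. All thresholds
    here are invented for illustration."""
    if any(v.severity >= 3 for v in history):
        return False
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [v for v in history if v.when >= cutoff]
    return len(recent) <= max_recent

# One recent minor violation: still allowed; a second strike would revoke access.
history = [Violation(datetime.now() - timedelta(days=10), severity=1)]
print(may_go_live(history))  # True
```

Even this toy version shows where the friction lies: the thresholds are arbitrary, and a user on the wrong side of them loses a broadcast channel without any court being involved.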

While everyone rages against Facebook for its failure to manage harmful content, another aspect to consider is whether the company is actually liable in this regard. Online platforms are generally exempt from liability for unlawful content and information posted on them: they have no legal obligation to act unless and until they are notified of such content or ordered to take it down. Thus, under the current regulations in most countries (such as the Communications Decency Act in the USA and Law No. 5651 in Turkey), platforms like Facebook are not held liable for unlawful content uploaded by their users; their liability begins only if they fail to act once notified. Retaining this exemption is considered vital to the existence of many platform business models, and, given the measures that liability would require, its absence also lowers barriers to market entry for new platforms. At the same time, it is important to block the dissemination of illegal or harmful content online, which until now has mainly been done through platforms’ self-regulatory measures.[3] That is now changing, as countries spearheaded by Australia, New Zealand and the UK discuss government intervention in regulating social media.

“What if live-streaming required a government permit? What if Facebook, YouTube and Twitter were treated like traditional publishers, expected to vet every post, comment and image before they reached the public? Or like Boeing or Toyota, held responsible for the safety of their products and the harm they cause? Imagine what the internet would look like if tech executives could be jailed for failing to censor hate and violence. These are the kinds of proposals under discussion in Australia and New Zealand as politicians in both nations move to address popular outrage over the massacre of 50 people at two mosques in Christchurch, New Zealand,” the New York Times writes.[4]

“Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists,” said Australia’s Prime Minister Scott Morrison. “It should not just be a matter of doing the right thing. It should be the law,” he added. The planned government intervention in social media platforms and the content running on them is to be solidified in a bill to be introduced in Australia.[5]

In recent weeks, Mark Zuckerberg has said new regulations are needed, particularly to define more clearly what counts as acceptable content, so that companies are not the main judges. On the other hand, a passage from Zuckerberg’s essay also reflects Facebook’s stance and its unwillingness to be proactive on the matter: “Now, I’m not going to sit here and tell you we’re going to catch all bad content in our system. We don’t check what people say before they say it, and frankly, I don’t think our society should want us to. Freedom means you don’t have to ask permission first, and that by default you can say what you want. If you break our community standards or the law, then you’re going to face consequences afterwards.”

Concurrently, Britain has proposed giving the government new powers to regulate the internet in order to fight harmful and violent content as well as fake and misleading information. The proposal, explicitly supported by Prime Minister Theresa May, targets Facebook, Google and other large internet platforms that have the capacity to stop the dissemination of harmful material. The government has called for appointing an internet regulator with the power to issue fines, block access to websites if necessary and make individual executives legally liable for harmful content spread on their platforms. If it remains unchanged, the proposal would be one of the world’s most aggressive actions against online content. “The internet can be brilliant at connecting people across the world, but for too long these companies have not done enough to protect users, especially children and young people, from harmful content,” May said of the proposed bill. “That is not good enough, and it is time to do things differently.”

Around the world, governments continue to debate how to control social media and its content, alongside a parallel debate on whether such regulations will restrict freedom of expression. Governments are now struggling to strike a balance: regulating online communication without implementing policies that would lead to censorship. Australia has passed a law imposing fines on social media companies, and imprisonment for their executives, if they fail to rapidly remove harmful material from their platforms. New Zealand is also considering new restrictions. In Singapore, draft legislation has been introduced to restrict the spread of false and misleading information, and India has proposed broad new powers to regulate internet content.

The European Union has been debating a proposed terrorist content regulation, which some have warned is overly broad and will harm free expression, since September 2018. The issue took on new urgency after the New Zealand shootings, and last week European Union lawmakers approved controversial legislation that would require platforms to take down terrorist content within one hour of receiving notification from the authorities. The European Parliament passed the measure by a vote of 308 to 204; the text will be negotiated further among lawmakers, and could change significantly, before becoming law. As noted above, this proposal too must strike a balance between protecting freedom of expression and stopping the spread of harmful and illegal content online.

Under the legislation, called the Terrorist Content Regulation, companies could be fined up to 4 percent of revenue if they consistently fail to remove terrorist content. The plan would apply to major companies like Facebook and YouTube, but much of the debate has focused on smaller platforms, as critics have charged that the plan places an undue burden on those companies.[6]

British officials, for their part, are proposing a mandatory “duty of care” standard intended to make companies liable for the content on their platforms and for maintaining the safety of their users. To this end, the British government has listed a number of areas, including support for terrorism, incitement to violence, encouragement of suicide, disinformation, cyberbullying and inappropriate material accessible to children, that companies could be required to address and manage or face fines and other penalties. The rules would apply to social media platforms, discussion forums, messaging services and search engines.[7]

The actions taken, and those being contemplated, around the world signal a new era for the internet. As governments grow more willing (and more pressured) to intervene and regulate, concerns mount about over-regulation that would restrict freedom of expression and slide into censorship. In the days ahead, we will all witness whether governments can strike a fine balance, meeting the need for a more regulated internet without sacrificing freedom of speech.

References:

[1] https://edition.cnn.com/2019/03/15/tech/new-zealand-shooting-video-facebook-youtube/index.html

[2] https://www.socialmediatoday.com/news/facebooks-considering-implementing-restrictions-on-who-can-go-live/551666/

[3] https://www.bertelsmann-stiftung.de/fileadmin/files/user_upload/EZ_JDI_OnlinePlatforms_Dittrich_2018_ENG.pdf

[4] https://www.nytimes.com/2019/03/31/world/australia/countries-controlling-social-media.html

[5] https://www.business-humanrights.org/en/australia-new-zealand-push-for-govt-intervention-in-regulating-social-media-cos-intensifies-after-mass-shooting

[6] http://www.europarl.europa.eu/news/en/press-room/20190410IPR37571/terrorist-content-online-should-be-removed-within-one-hour-says-ep

[7] https://www.nytimes.com/2019/04/07/business/britain-internet-regulations.html?smid=nytcore-ios-share
