Out of an abundance of caution, Facebook announced that it is tightening control over who gets to live-stream videos on its platform. In the wake of the Christchurch attack, Facebook is launching a one strike policy that will ban rule breakers from using Facebook's Live service.
About the One Strike Policy
Facebook announced that it would be implementing what it calls a "one strike" policy for Facebook Live. The policy bans users who violate the platform's community standards from using the live-streaming service for set periods of time. The same applies to content users post in other parts of the site: should a user post a harmful link, such as one to a terrorist website, on their profile, Facebook would ban them from live-streaming as well. Facebook has not specified how long these bans will last or what it would take to trigger a permanent ban from the live-streaming feature.
Why Impose Restrictions?
Facebook has increasingly been using AI to detect and combat violent and dangerous content on its platform. However, this approach has not been working well, which is one of the reasons Facebook has decided to change its rules.
The decision behind the ban has to do with terrorism, specifically the white nationalist terrorist attack that took place in New Zealand just a few months ago. The terrorist live-streamed the attack on the platform. After the massacre, people vehemently criticized Facebook for allowing the individual to go live, and the company's slowness in removing copies of the video angered many as well. Guy Rosen, Facebook's vice president of integrity, said: "Following the horrific terrorist attacks in New Zealand, we've been reviewing what more we can do to limit our services from being used to cause harm or spread hate."
Shockingly, according to people working for the French Economy Ministry, the Christchurch Call does not require signatories to update their regulations. This reflects cultural differences between countries: each country can decide for itself what counts as violent and extremist content, and what is violent to one country might not be to another.
Commentary on the One Strike Policy
Facebook’s vice president of integrity, Guy Rosen said in a post: “Our goal is to minimize the risk of abuse on Live while enabling people to use Live in a positive way every day. From now on, anyone who violates our most serious policies won’t be able to use the Live feature for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”
Whom Do These Restrictions Apply To?
Facebook's latest restrictions apply to its Dangerous Individuals and Organizations policy. Facebook introduced the much-needed policy earlier this month, resulting in the banning of right-wing personalities like Paul Nehlen, Alex Jones, and Milo Yiannopoulos from both Facebook and Instagram. Rosen expressed Facebook's hope of extending these restrictions to other parts of the platform in the near future, such as barring users who violate Facebook's Community Standards from creating ads.
Despite what CEO Mark Zuckerberg had claimed, Facebook's AI detection system has failed before, notably in Myanmar. Another clear indication was how poorly the detection system handled the aftermath of Christchurch. No one reported the live stream until 12 minutes after it had ended. The social network giant also failed to block about 20 percent of the copies of the video that third parties uploaded to the site afterward. People could still find several videos of the attack more than 12 hours after it happened.
Facebook announced that it will join forces with more researchers and universities to improve the platform, including its "image and video analysis technology." The company will be investing $7.5 million in "new research partnerships with leading academics from three universities to improve image and video analysis technology." The partnering institutions are The University of Maryland, Cornell University, and The University of California, Berkeley; the last will assist with techniques to detect manipulated images, video, and audio.
One goal of this research is to use technology to distinguish between people who deliberately manipulate media and those who do so unintentionally. The company also seeks more partners to help with its initiative to combat deepfakes. Rosen admitted as much in the blog post: "Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research."
Facebook One Strike Policy – Final Words
Although some did not welcome the decision to carry out the one strike policy, it is likely to do more good than harm. Some consider the ban a reaction to one particular event that caused an issue for multiple countries, arguing that the Christchurch incident alone should not be the driving factor behind new regulations. With all that said, users who post offensive content are better off deleting their accounts than striking out.