Ankita Tripathy
Fake News and the Dangers of Social Media: Post the New Zealand Terror Attack
Following the terror attack in New Zealand, a number of questions remain:
How was the Facebook Live streaming session operational for 17 minutes?
Why did Facebook not take it down?
Did any of the 400 people who were viewing the live stream flag it as inappropriate?
Did Facebook’s internal bots fail to identify that this content was live?
On 15 March 2019, the idyllic country of New Zealand was subject to the worst terrorist attack in its history. The attack, which took place just before Friday prayers at two Christchurch mosques, resulted in over 50 people losing their lives. Many others were wounded.
Facebook and other social media platforms were criticized by the world community for failing to act swiftly. The attacker had streamed the entire attack via Facebook Live, and the video was seen by over 4000 people before Facebook acted on it.
This led many skeptics and critics to question the influence and power of Facebook. Yes, it has become a potent platform for spreading information. However, the important question that arises is: what is Facebook doing to check disinformation and hate?
New Zealand Prime Minister Jacinda Ardern went on record to state that it took 4 days before all variants of the live stream video on Facebook were removed from the internet. We have no account of the number of people who downloaded, shared, or stored the video.
Officially, Facebook states that the perpetrator’s video was viewed 400 times during the live stream and that it was taken down within minutes. In its defense, Facebook mentioned that none of the 400 viewers complained about the video or flagged it. Facebook’s AI process then took the necessary steps to remove the variants of the video circulating on social media.
It took a report from New Zealand to bring the terror live streaming session to Facebook’s notice. This was a disaster, and one that Facebook had not foreseen.
Facebook was taken to task for failing to provide proper guidelines to individuals as well as law enforcement agencies regarding wrongful or violent content. Yes, there is a page entitled ‘Information’; however, it contains mainly procedures for submitting legal requests.
As a result of the live streaming, over 1.5 million variants of the video appeared on the internet. Let us pause now and examine the power of networking on social media and the digital domain.
400 people viewed the live stream, yet 1.5 million copies of the video were circulating on the internet. It took Facebook 17 minutes to end the live streaming session; it took 4 days to remove 1.5 million copies of the video from the internet.
The recent terror attack and its live streaming on Facebook are not the first instance of objectionable content spreading via the medium.
Since Facebook Live first began, several people have committed suicide while live-streaming the act. In many instances, this was not brought to Facebook’s notice for a long time. People who viewed these videos failed to flag them or report them to the relevant authorities.
Psychologists have referred to this as the ‘Bystander Effect’ on social media: when many people witness an event, each individual assumes someone else will intervene, and no one does. This is an alarming trend.
Facebook’s use of AI has not yet progressed to a stage where it can take proactive initiative to curb such content. Yes, once content is brought to its notice, the speed at which Facebook reacts has definitely improved.
In response, Facebook CEO Mark Zuckerberg has publicly (even on his own profiles) stated that Facebook is taking serious initiatives to check this.
To break it down, Facebook has hired 3000 additional staff members to check the generation and flow of such content on its platforms. However, 3000 people policing more than 2 billion profiles seems inadequate.
This brings us to the question of how corporations like Facebook are preventing misinformation and vile content on social media.
For the past couple of years, Facebook has been under investigation for a number of reasons, including a data breach (the Cambridge Analytica scandal), the influencing of US elections through fake news, and other problems.
Social media and its more than 2 billion users all over the world are a powerful mechanism of change. This change is both positive and negative.
On the positive side, Facebook has changed millions of lives for the better. The growth of small businesses and the development of diverse and indigenous communities are some of the best results.
At the same time, in spite of its positive impacts on society, negative uses of social media are increasing alarmingly. The 15 March terror incident in New Zealand is a testament to that.
Therefore, the million-dollar question is whether Facebook is doing enough to follow an ethical model of publishing news and facts.
We need to understand a few things regarding information and news on social media. First of all, as a tool that disseminates information, social media is a double-edged sword.
Compared to formal models of journalism and reporting, social media platforms do not have enough checks and balances.
Fact-checking is a major issue on social media, owing to which we see many forms of misinformation going viral every day.
Everyone can be a social media journalist, even if that person has zero credibility or takes no responsibility for his or her actions. This leads to dangerous levels of hate and misinformation on social media platforms.
Governments and international bodies have been slow to react to the dangers of fake news and misinformation. The good news, however, is that they are finally waking up.
This monumental change of attitude towards Facebook and other social media platforms came after the Cambridge Analytica scandal.
With Facebook CEO Mark Zuckerberg appearing at Senate hearings in the USA, and Twitter CEO Jack Dorsey appearing before a parliamentary committee in India, we see that things are getting stricter.
These regulations should have come much earlier, but we can at least be hopeful that change is starting to show.
Most governments all over the world now subject Facebook and other social media platforms to country-specific information technology laws. We have also seen Facebook work proactively with several countries during elections to curb the spread of fake news on social media.
For the 2019 Indian general elections, Facebook has put in place a dedicated team, or as Facebook calls it, a ‘War Room’, in New Delhi.
This is a positive change if done in an ethical fashion. Yes, there is a danger that Facebook and other social media platforms might end up helping particular political parties stake their claims to form governments, but independent statutory bodies can help.
Facebook has several critics who lambast it for becoming a monster that cannot control its own size. This is not the case. Facebook is by far one of the most sophisticated and technologically advanced companies on the planet.
Facebook currently owns more than 6000 patents for its products and innovations, and this number will keep increasing in the coming years. Some of the best minds in the world rank Facebook among the top three companies to work for, right up there with Apple and Google.
Facebook’s Chief Operating Officer, Sheryl Sandberg, issued a statement on Instagram and Facebook after the attacks. She condemned the terror attacks and laid out the steps currently being undertaken by Facebook to stop the spread of hate.
She stated that after the attacks, Facebook is trying to do three main things:
By bringing Facebook Live under the ambit of Facebook’s ‘Community Standards’, the team is trying to tie the ability to go live to the individual’s prior record.
In other words, if an individual’s profile has been reported in the past for violating Facebook’s Community Standards, that individual would be restricted from going live.
On the technology side, she wrote that Facebook is continuously updating its algorithms and monitoring mechanisms. The active use of artificial intelligence and machine learning will help Facebook stop the spread of such content swiftly and effectively.
It is easier to stop the original video from circulating, but edited versions of it pass through Facebook’s security and proliferate across the internet.
The statement, which came out on 27 March 2019 in the Instagram Press Room, was replete with hope, promise, and commitment. Facebook as an organization is definitely trying to address newer challenges concerning the kind of information spreading on its platforms.
On 29 March, an even more strongly worded statement condemning ‘White Nationalism’ was published in Facebook’s own Newsroom. It categorically stated Facebook’s aim of creating a world free of hatred and discrimination.
The statement also outlined Facebook’s steps to curb white supremacy and its followers, including plans to redirect people who search for topics related to white nationalism to a page called ‘Life After Hate’.
Like many big corporations in the world, Facebook has grown to a level that its founders never imagined. If you asked Jeff Bezos or Bill Gates whether they could have predicted their companies’ growth, I am sure they would be honest and say no.
Likewise, Facebook and Mark Zuckerberg have grown so big that bringing all of the processes under unified control is taking a wee bit longer. This is not to say that big companies should not forecast how things will turn out; it is to say that the technology to control outcomes often arrives after the outcome has already taken place.
With technology changing rapidly, it is just a matter of time before Facebook Live and its associated problems are fixed. Facebook is investing heavily in developing its AI capabilities and using them to act against such anti-community behavior.
However, it is essential not to blame Facebook for the entire process. Of all the people who watched the live streaming of the terror attack, no one brought it to the notice of the relevant authorities.
This speaks to a bigger problem with humanity and where we are heading as a community. The world has a lot of issues and problems; racial, religious, and ethnic conflicts erupt every day.
Governments and international institutions are failing to control them. Social media platforms have been instrumental in helping citizens rise above hate. Facebook has helped communities rise up against repression, dictatorships, and human rights abuses.
A platform that has done so much good for people all over the world has some problems; no one is denying that. Yes, Facebook needs to improve its technology and innovate on its products so that instances like the live streaming of the New Zealand terror attack do not take place.
If Facebook is supported in its endeavor by responsible citizens, governments, and international institutions, bad processes can be weeded out. Think of all the good that the power of networking has unleashed and is capable of unleashing in the future.