UMATI – Monitoring Online Hate Speech
Online Speech: What’s Dangerous, What’s Not?
What Is Hate Speech?
Kenya experienced what is regarded as one of the country’s darkest periods when post-election violence rocked the country in 2007. Violence against targeted ethnic, religious, political and social groups erupted as a result of incitement in the period immediately following the announcement of the election results. Incitement through hate speech is seen as the biggest reason why the violence erupted. Hate speech is speech that has the potential to stir or promote violence against targeted groups of people.
The current definition of “hate speech”, according to the National Cohesion and Integration Commission Act of 2008, is speech “that which advocates or encourages violent acts against a specific group, and creates a climate of hate or prejudice, which may, in turn, foster the commission of hate crimes.”
Hate speech has garnered growing interest in Kenya since the 2007 Post Election Violence due to its recognized potential to stir or promote violence against targeted groups of people. Despite the definitions stated in the NCIC Act, the Media Act and the Penal Code, there has been demand from peace-building organizations, politicians, government officials and the general public for guidance on how to define, identify, mitigate, report and deal with hate speech.
Image credit: Afromusing
Need to Identify Hate Speech: The Umati Project
Demand for a clear definition of hate speech led to the formation of the Umati project. The main aim of Umati (Monitoring Online Dangerous Speech) is to facilitate easier identification of hate speech, especially speech with the potential to cause violence, so that the violence it is likely to cause can be avoided or mitigated. At Umati we define dangerous speech as hate speech with the potential to cause violence. Our project partner Professor Susan Benesch coined the term “dangerous speech” after extensive research in several countries around the world (read more at www.voicesthatpoison.org). Events on the ground, such as the violence seen in 2007/2008, may be directly related to dangerous online speech. There is a need, then, for responsible content creation from the online community: bloggers, social media influencers, the media and the general public.
“Our research suggests that certain types of reporting by media houses directly increase dangerous speech online.” Angela Crandall, Project Manager
“While most projects related to hate speech have been looking at mainstream media, we are aware of the influence (positive and negative) that New Media such as the blogosphere and online forums had on the 2007 Post Election Violence in Kenya. Therefore, our flagship Umati project seeks to monitor and report, for the first time, the role of new media on an election.”
Kagonya Awori, Umati Research Lead.
What Constitutes Dangerous Speech?
Dangerous speech is hate speech with the potential to cause violence. (Professor Susan Benesch, American University)
How to Identify Dangerous Speech:
• A powerful speaker with influence over an audience;
• An audience with vulnerabilities/fears the speaker can cultivate;
• Content of the speech that may be taken as inflammatory;
• A conducive social and historical context for the speech; and
• An influential means of spreading the speech.
The Three Buckets of Dangerous Speech are:
• Offensive Speech
• Moderately Dangerous Speech
• Extremely Dangerous Speech
The three identifiers of dangerous speech are:
• Comparing a group of people with animals, insects or a derogatory term in mother tongue;
• Suggesting that the audience faces a serious threat or violence from another group; and
• Suggesting that some people from another group are spoiling the purity or integrity of the speaker’s group.
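The three identifiers above could, in principle, seed a simple first-pass filter for flagging statements for human review. The sketch below is purely illustrative and is not the Umati methodology: the term lists, phrases and scoring thresholds are invented for demonstration.

```python
# Hypothetical first-pass flagger built around the three identifiers of
# dangerous speech. The term lists and bucket thresholds below are
# invented examples, NOT the lexicon or scoring used by the Umati monitors.

DEHUMANIZING_TERMS = {"cockroaches", "vultures", "hyenas", "madoadoa"}
THREAT_PHRASES = {"be ready for", "will attack", "killing all"}
PURITY_PHRASES = {"remove them", "spoiling", "impure"}


def identifiers_matched(text: str) -> int:
    """Count how many of the three identifiers appear in the text."""
    lowered = text.lower()
    hits = 0
    if any(term in lowered for term in DEHUMANIZING_TERMS):
        hits += 1  # 1. compares a group to animals/insects
    if any(phrase in lowered for phrase in THREAT_PHRASES):
        hits += 1  # 2. suggests the audience faces a threat
    if any(phrase in lowered for phrase in PURITY_PHRASES):
        hits += 1  # 3. suggests another group spoils purity/integrity
    return hits


def bucket(text: str) -> str:
    """Map a statement to one of the three buckets (toy scoring)."""
    hits = identifiers_matched(text)
    if hits == 0:
        return "offensive speech"
    if hits == 1:
        return "moderately dangerous speech"
    return "extremely dangerous speech"
```

Even in a real deployment, such a keyword match could only flag candidates: a statement would still need human review against the contextual factors listed above (speaker, audience, content, context, medium) before being categorized.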
Examples and Case Studies (Results from October - December 2012)
Comparing a group of people with animals, insects or a derogatory term in mother tongue
Before the 1994 genocide in Rwanda, Hutu extremists used the term “inyenzi” (cockroaches) to demean the Tutsis as less than human. Research has shown that it was easier for the perpetrators to harm the Tutsis since they thought of them as mere insects. In Kenya, on top of using animal and insect names, our communities also have particular insults in vernacular languages that are intended to demean certain groups. The names we have come across in the blogs and sites we are monitoring include: “…kigeugeu, nugu, pigs, jigger infested, vultures, hyenas, dogs/maumbwa, chinkororo, madoadoa, kihii, black monkeys, nyang’au, snakes, weevils, cockroaches, cannibals, warthogs, headless chicken, siafu, rumbwa, blind donkeys, dinosaurs, nzi, baboon, wakwitu, maggots, nyani, kombamwiko…”
Suggesting that some people are spoiling the purity or integrity of another group
Each of the four major communities we are monitoring is known to possess certain characteristics and/or perform certain socio-cultural activities that stereotype it, e.g. Luos fish, Kikuyus do business, Kalenjins are pastoralists and Luhyas are farmers. Comments in this category, however, rely on historically negative stereotypes to insult a particular group. Other comments were outright calls to remove the “impure” group from society.
“Tunaondoa takataka za KFF ya zamani chafu kutoka kwa ODM. KWENDA” [Swahili: “We are removing the old dirty KFF garbage from ODM. GO AWAY”]
“If u want to be killed then try and marry a [tribe] gal, am talking of what I know and can prove. If [pres candidate] becomes the president, then more dead bodies shall be found in masinga dam.”
“..wats wrong wid ths community? God wat r u waitin for wid ths evil,heartles,assasins in kenya? please clear 4 us this whole Gomorra and sodom of kenya we are tired kindly!”
“POLICE ALERT! If you spot any [tribe] report them to the Police immediately. Keep Kenya safe!”
Suggesting that the audience faces a serious threat or violence from another group
Another indicator that a statement has the potential to cause violence is when it suggests that the audience should “equip” themselves because another group will attack them. Often, these comments are not based on truth but are instead intended to instill fear in the audience so that they start the violence.
“Killing all [political party] leaders is the only way to prevent further loss of innocent lives!!!”
“If [presidential candidate] takes over the next government, let the other 41 tribes be ready for the greatest oppression yet..”
“[religion1] dont tolerate [religion2] , any1 who have been 2 eastleigh can testify, they rubbish [religion2]. am ready also to bomb their mosque, wakwende uko”
“and you ask how the PEV came about! [political party] is a threat to peace and national development!”
What Can You Do? What are we responsible for as the online community?
There are four ways of dealing with dangerous speech:
1. Stop the speaker.
2. Discredit the speaker (“help the audience spot the lie”): educate people to spot dangerous speech so that the speaker loses credibility with the audience, and with it their power.
3. Punish the speaker (the job of the NCIC).
4. Limit the means of dissemination (e.g. shutting down Facebook/Twitter).
YOU can do something about stopping the speaker and discrediting the speaker. How? When you come across speech online that you suspect might be dangerous, you can:
1. “Help the audience spot the lie”: help the speaker lose credibility with the audience by educating people to spot dangerous speech.
2. Report hate speech: https://docs.google.com/a/ihub.co.ke/spreadsheet/viewform?formkey=dEVUZk5fUDlJQkpBUUdRbmJBWlQyLXc6MQ
3. Put out good content: think about, “What am I doing with this content? Am I writing this just to rant?” Don’t use your Twitter/Facebook as an anger diary: don’t write angrily, and don’t post about just anything online, because you could be instigating someone to act based on your thoughtless statements. “You want to mobilize people who have the same feeling, you want to find other people who want to rant with you – we get it. But what this does is mobilize people with the same feeling, in a negative way.”
Umati Frequently Asked Questions
1. Why was the project started?
Empirical evidence from the 2007/2008 election cycle suggested the important role that online media played in the post-election violence period. Nonetheless, there was no systematic monitoring of the online space, and we therefore had no data with which to track online trends. This time around, we wanted to ensure that we were capturing the trending topics, phrases and sentiments online well in advance of the elections, and so decided to launch the Umati project. Following the need to define, identify and deal with hate speech, the goals of the Umati project are:
• To set a definition of hate/dangerous speech that can be incorporated into the constitution;
• To forward incidences of dangerous speech to Uchaguzi to limit further harm;
• To define a process for election monitoring that can be replicated elsewhere; and
• To further civic education on hate speech.
2. Who are the collaborators on the project?
Our partners on this project are Internews, who monitor traditional media; Article 19, who are encouraging journalists to be more responsible; and Professor Benesch, who is our scholarly advisor.
3. How long is the project running?
From September 2012 to April 2013.
4. How can I get involved?
You can help by reporting incidences of dangerous speech that you come across here: https://docs.google.com/a/ihub.co.ke/spreadsheet/viewform?formkey=dEVUZk5fUDlJQkpBUUdRbmJBWlQyLXc6MQ
Resources
a. Definitions of hate speech: http://www.cohesion.or.ke/index.php/media-centre/news/144-unpacking-hate-speech-by-commissioner-milly-odongo
b. Research on dangerous speech: www.voicesthatpoison.org
c. Umati Reports: October November d. Contact: Kagonya Awori [email protected]
About Umati: Our flagship Umati project seeks to monitor and report the role of new media on an election. Our Kenya-based project has citizens at its core and uses relevant technologies to collect, organize, analyze and disseminate the information collected.
About Uchaguzi: Uchaguzi is a technology platform that allows citizens and civil society to monitor and report incidences around the electoral process. Uchaguzi provides web- and mobile-based channels for citizens and civil society to report electoral offences such as intimidation, hate speech, vote buying, polling clerk bias, voting misinformation, etc. The reports are then sent to the electoral authorities or security personnel for action.
About iHub Research: iHub Research works from within the nerve center of Kenya’s technology community. The organization has expertise in technology research and facilitates local ICT research capacity in the region. iHub Research shares stories about the vibrant East African technology community by conducting ICT research on technology innovation within the community.
About Ushahidi: We are a non-profit tech company that specializes in developing free and open source software for information collection, visualization and interactive mapping. We build tools for democratizing information, increasing transparency and lowering the barriers for individuals to share their stories. We’re a disruptive organization that is willing to take risks in the pursuit of changing the traditional way that information flows.