
ANALYSIS - On algorithmic violence and Palestine

The discourse of 'algorithmic glitches', which is asserted as a justification by social media platforms in the case of Palestine, cannot be separated from the corrupt worldview shared by these platforms

Sabri Ege | 28.06.2021 - Update: 28.06.2021

The writer is a deputy researcher at the Turkish Radio and Television Corporation (TRT). He received his MSc in Media and Communications from the London School of Economics (LSE).

ISTANBUL 

In his seminal essay “Do Artifacts Have Politics?” (1980), Langdon Winner discusses the bridges over the parkways of Long Island, New York. Winner notes that these overpasses were deliberately built with very low clearances to keep out the 12-foot-tall public buses that poor and Black people relied on at the time. The white upper and middle classes could thus enjoy the roads and parks without having to mix with low-income groups and racial minorities.

Several similar examples exist in the history of technology, and together they reveal an uncomfortable pattern: technology can be politicized and put to ideological use.

During the Israeli attacks on Gaza, and in parallel with the oppression of Palestinians on the ground, social media platforms continuously blocked hashtags and censored, removed, [1] and restricted content and accounts. Almost all major platforms engaged in this censorship, stifling the voices of Palestinians and activists. 7amleh, The Arab Centre for the Advancement of Social Media, published a detailed report [2] documenting this censorship of Palestinian content.

Deconstructing myth of algorithmic ‘glitch’

Three main justifications have usually been put forward to defend the platforms’ actions. The first rests on user-reporting mechanisms: pro-Israeli accounts mass-report pro-Palestinian content. But if that is the case, the platforms should explain why this mechanism works so much better for Israelis, and why pro-Israeli reports are processed without valid evidence.

The second argument is that the platforms block activists and remove their accounts at the recommendation of the Israeli government. As early as 2018, [3] Facebook and YouTube complied with Israel’s removal requests in 95% and 80% of cases, respectively. But in this scenario, the Israeli government is the perpetrator of the violence, so why do the platforms still comply with its demands unquestioningly?

The third rationale is the one the platforms themselves put forward: technical “glitches” in their algorithms, a softer name for algorithmic bias. The claim is a familiar deflection tactic: when you can blame the machine, why look for other culprits?

Absolving the perpetrators and shifting the blame to mathematical calculations is a dangerous game. It allows the platforms to create the impression that the failure stems from supposedly “neutral” automated systems, normalizing the oppression by casting a non-human, natural, apolitical agent as the responsible party. It is therefore vital to spell out some hard facts about artificial intelligence (AI).

Two key arguments debunk the myth of algorithmic glitches, demonstrating that algorithms are political interventions and social and ideological constructs as much as they are mathematical calculations.

To begin with, most problems with algorithms are, at root, problems with data. Although users’ data comes from every continent, [4] automated systems “instinctively” favor content from Western Europe and the US because their training datasets are Western-centric. The practice echoes colonialism, in which imperial powers established, as Edward Said put it, “a political vision of reality whose structure promoted the difference between the familiar (Europe, the West, ‘us’) and the strange (the Orient, the East, ‘them’).” That is why the work of Nick Couldry and Ulises A. Mejias, [5] who describe this emerging social order as data colonialism, deserves attention.
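To make the mechanism concrete, consider a minimal sketch in Python. It assumes nothing about any real platform’s systems; the group names and numbers are invented for illustration. The point is that a model trained mostly on well-represented content tends to be poorly calibrated on under-represented content, so a single “neutral” removal threshold quietly produces unequal error rates.

```python
# Hypothetical illustration: one "neutral" threshold applied to a model that
# is poorly calibrated on under-represented content yields unequal harm.
# All numbers are invented for this sketch; no real platform data is used.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "violation" scores for BENIGN posts from two groups. The model
# saw little training data for the second group, so its benign posts
# receive systematically higher (miscalibrated) scores.
benign_scores = {
    "well_represented": rng.normal(0.20, 0.10, 10_000),
    "under_represented": rng.normal(0.45, 0.15, 10_000),
}

THRESHOLD = 0.60  # one removal threshold, applied identically to everyone

for group, scores in benign_scores.items():
    false_positive_rate = np.mean(scores > THRESHOLD)
    print(f"{group}: {false_positive_rate:.1%} of benign posts wrongly removed")

# Typical output (seed-dependent):
#   well_represented: ~0% of benign posts wrongly removed
#   under_represented: ~16% of benign posts wrongly removed
```

Nothing in this toy model malfunctions. The disparity follows directly from design decisions about what data to train on and where to set the threshold.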

Second, social media platforms know perfectly well how to handle genuine glitches. Sara Hooker, a researcher at Google AI, explains in an article [6] that “algorithmic bias” is a model design problem in addition to a biased-data problem. The questions that need to be asked, then, are: How are such models designed, classified, and ordered? Who created and supervised them? What is the end game?

The models behind algorithms are value-laden and cannot be dissociated from the embedded opinions of the people who build them (designers, user researchers, coders, data scientists, content strategists, etc.). Machine learning systems inevitably carry the organizational culture of their companies along with the values, assumptions, and ethical judgments of their designers. If the people behind the model design are racist, sexist, or Islamophobic, they will label content according to that worldview. Safiya Noble, in her brilliant work Algorithms of Oppression, [7] shows how Google’s top results led to porn sites when she searched for “black girls”. The example brings to mind how YouTube’s algorithm changed the word “Palestinians” to “terrorists”. [8]
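Here is a second minimal sketch, again hypothetical and not any platform’s actual pipeline, showing how annotators’ judgments become the model’s judgments: when the training labels disproportionately mark posts mentioning one community as violations, a standard text classifier learns that association and flags even obviously benign posts.

```python
# Hypothetical sketch of labels-in, bias-out: the training labels below are
# deliberately skewed, and the classifier faithfully reproduces that skew.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training set: annotators marked every post mentioning "gaza"
# as a violation, regardless of what the post actually says.
posts = [
    "rally downtown tonight",        # allowed
    "solidarity rally in gaza",      # violation (skewed label)
    "fundraiser for local school",   # allowed
    "gaza relief fundraiser",        # violation (skewed label)
    "concert in the park",           # allowed
    "poetry night in gaza",          # violation (skewed label)
]
labels = ["allowed", "violation", "allowed", "violation", "allowed", "violation"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(posts), labels)

# A benign new post is flagged purely because of the word association
# the skewed labels taught the model.
test = vectorizer.transform(["children's art class in gaza"])
print(model.predict(test))  # -> ['violation']
```

The classifier never misbehaves in any technical sense; it faithfully optimizes the worldview encoded in its labels, which is precisely why “glitch” is the wrong word.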

In this context, it is worth recalling Cathy O’Neil’s book Weapons of Math Destruction, in which she argues that morally neutral, natural, and apolitical algorithms are impossible. Accordingly, the discourse of “algorithmic glitches”, asserted by social media platforms as a justification in the case of Palestine, cannot be separated from the corrupt worldview these platforms share.

Towards new form of violence: algorithmic violence

Consequently, instead of “algorithmic glitches”, it is more fitting to speak of “algorithmic violence”, especially in the case of Palestine. Calling a spade a spade means the platforms must face the consequences of their policies rather than hide behind unequally designed mathematical models. And certain questions remain unanswered: Why do these so-called “algorithmic glitches” happen only to Palestinians? Why is it always vulnerable groups that suffer from these discriminatory “glitches”?

Algorithms function much like the bridges built in New York to keep the poor and racial minorities out. The inequalities in technology persist without amelioration because apartheid culture, racism, and colonialism are the premises on which these technologies rest.

Algorithmic violence has been widely reported over the past decade and documented in many cases related to gender, race, and ethnicity, most recently during the Black Lives Matter (BLM) protests. Yet the issue has never been as evident or as striking as at this scale. For the first time, social media platforms “collectively” suppressed and silenced the voices of an oppressed group, the Palestinians, during an earth-shattering event. Such incidents are not exceptions but the rule; to put it another way, the exception has become the rule.

The censorship of Palestinians by these platforms is as terrifying as the Israeli bombings. While Israel forcibly displaces people, Twitter is complicit, labeling Palestinian content as terrorist material and distorting their ordeal. While Israeli bombs kill children, Instagram is complicit, deploying censorship techniques that prevent people from condemning the attacks on social media. While residents and activists are detained and attacked by Israeli forces, Facebook is complicit, silencing Palestinians’ voices and rendering them invisible. Physical and symbolic violence go hand in hand, and from their mixture a terrifying new kind of violence emerges.


* Opinions expressed in this article are the author’s own and do not necessarily reflect the editorial policy of Anadolu Agency.


[1] https://www.trtworld.com/magazine/workplace-and-algorithm-bias-kill-palestine-content-on-facebook-and-twitter-46842

[2] https://7amleh.org/2021/05/21/7amleh-issues-report-documenting-the-attacks-on-palestinian-digital-rights

[3] https://www.haaretz.com/israel-news/business/facebook-removes-inciting-content-at-israel-s-request-minister-says-1.5432959

[4] https://datareportal.com/reports/digital-2021-global-overview-report

[5] https://www.sup.org/books/title/?id=28816

[6] https://www.sciencedirect.com/science/article/pii/S2666389921000611

[7] https://nyupress.org/9781479837243/algorithms-of-oppression/

[8] https://www.youtube.com/watch?v=tiTAIvI3zMU
