By Arthur Gwagwa & Lisa Garbe

Exporting Repression? China's AI Push into Africa

China's increasing investments in artificial intelligence across the African continent, especially in countries with poor human rights records, should be treated with caution.

It's no secret that China is spending huge sums on research related to artificial intelligence (AI) technologies, and that it uses some of these technologies for surveillance and social control. In Xinjiang, for example, authorities conduct compulsory mass collection of biometric data, such as voice samples and DNA, and use AI to identify, profile, and track citizens in the province.

What is less known, however, is that China has started to export its AI technology to developing countries around the world. For China, the expansion to new markets takes the development of AI to a whole new level. In its ambitious plan to become a world leader in AI, Beijing has begun to use developing countries as laboratories to improve its surveillance technologies. Recently, China signed an agreement with Zimbabwe to deploy facial recognition software from the Chinese company CloudWalk in its capital, Harare. Beijing touted the deal as a case of "win-win" diplomacy: Chinese AI companies get to train their algorithms on Africans to diversify their datasets, and Zimbabwe gets to use cutting-edge tech to monitor its population. Similar deals have been signed in Angola and Ethiopia. However, the current use of data analytics and digital identifiers across Africa should draw attention to the potentially severe political consequences of China's tech exports to the continent.

Certainly, there are examples of innovative uses of digital identifiers and data analytics in Africa. Several countries use internet-connected and data-driven technology to improve the efficiency of government and business services. In sub-Saharan Africa, for example, Rwanda's Indangamuntu, implemented by the UK-based firm De La Rue, is one of the most advanced integrated multipurpose electronic ID cards. These ID cards allow Rwandans to efficiently access government services, such as the issuance of birth and marriage certificates, through an online platform called Irembo.

However, the application of electronic identifiers and centralized monitoring systems can have its downsides, particularly in countries with non-democratic governments. Given Rwanda's history of genocide, the government should take care not to use the centralized biometric database to reinforce bias and discrimination against specific parts of the population. Another concern is that governments can easily make design decisions that decrease the security of end users for state security purposes. Tunisia, for example, abandoned an effort to equip all ID cards with an electronic chip that would have encrypted some identifying data: the encryption would have protected the data from fraudsters, but it would also have put it beyond the reach of the authorities. In Kenya, hackers allegedly broke into centrally stored election data and tried to rig the results of the 2017 presidential election. In light of the rapid diffusion of eID cards as part of the ID4Africa initiative, there should be a careful review of whether and how the collection, analysis, and storage of such data can harm citizens.

Such trends bear out the finding of a study by Nic Cheeseman and others that "digitization is being pursued in many countries that lack the political will and institutional framework necessary for it to function effectively." Given these alarming trends in the use of digital identifiers and data analytics in Africa, China's export of AI technology should be cause for concern, especially when it is sold to non-democratic regimes.

For many Western-based companies selling these technologies to African governments, there might at least be some way of holding them accountable. In the case of Kenya's elections, the results management system was run by the French firm OT-Morpho. Once the alleged compromise became public, Kenyan opposition leader Raila Odinga complained to the French government, and OT-Morpho agreed to subject its system to an external audit. Western firms can also be held accountable through their employees; the employee opposition at Google over its efforts to re-enter the Chinese search market is a case in point. It will be far more difficult to demand that CloudWalk be transparent about how it uses and stores Zimbabweans' faces.

China's increasing investments in AI across the African continent, especially in countries with poor human rights records, should be treated with caution. Fewer than twenty percent of African countries have signed onto progressive legal frameworks like the African Union Convention on Cybersecurity and Personal Data Protection. Civil society and lawmakers should therefore press for appropriate safeguards to deal with the ethical and practical challenges arising from major AI-related investments. Large telecom operators in Africa, such as Etisalat or Vodafone, are already subject to scrutiny by international human rights initiatives assessing their corporate social responsibility efforts. Similar initiatives should monitor the activities of large Chinese software companies like CloudWalk.

Arthur Gwagwa is a senior fellow at Strathmore University Law School. Lisa Garbe is a research associate at the University of St. Gallen. You can follow them at @arthurgwagwa and @lasergabi respectively.

Photo: Facial recognition technology is shown at DeepGlint booth during the China Public Security Expo in Shenzhen, China. Bobby Yip/Reuters

Article: Courtesy Council on Foreign Relations.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.
