When you think of nations using artificial intelligence (AI)-enhanced surveillance technologies, China probably comes to mind: the place where facial recognition is used to ration toilet paper, to name and shame jaywalkers, and to outfit police with glasses to help them find suspects.
It’s not just China, of course. According to a report from the Carnegie Endowment for International Peace, the use of AI surveillance technologies is spreading faster, to a wider range of countries, than experts have commonly understood.
The report found that at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms, now in use by 56 countries; facial recognition systems, being used by 64 countries; and smart policing, now used by law enforcement in 52 countries.
The report’s author, Steven Feldstein, told AP that he was surprised by how many democratic governments in Europe and elsewhere – just over half – are installing AI surveillance such as facial recognition, automated border controls and algorithmic tools to predict when crimes might occur:
I thought it would be most centered in the Gulf States or countries in China’s orbit.
The report didn’t differentiate between lawful uses of AI surveillance, those that violate human rights, or those that fall into what Feldstein called the “murky middle ground.”
Smart city technologies are an example of how murky things can get. In Quayside, the smart city being developed on Toronto’s eastern waterfront, good intentions come in the form of sensors meant to serve the public by “disrupting everything” from traffic congestion and healthcare to housing, zoning regulations, greenhouse-gas emissions and more. But Quayside has also been called a privacy dystopia in the making.
The purpose of the research is to show how new surveillance technologies such as these are transforming the way that governments are monitoring and tracking us. It tackles these questions…
- Which countries are adopting AI surveillance technology?
- What specific types of AI surveillance are governments deploying?
- Which countries and companies are supplying this technology?
The Carnegie Endowment for International Peace presents the answers in the first-ever compilation of such data, which it’s calling the AI Global Surveillance (AIGS) index: a “country-by-country snapshot of AI tech surveillance”, mostly concerned with data pulled in from 2017 to 2019. Here’s the full index.
China doesn’t just use a lot of AI surveillance. It’s also a big exporter of the technologies. The research found that Chinese companies – particularly Huawei, Hikvision, Dahua, and ZTE – supply AI surveillance technology to 63 countries. Huawei alone is responsible for providing AI surveillance technology to at least 50 countries worldwide. “No other company comes close,” the report says. The next largest non-Chinese supplier is Japan’s NEC, which supplies AI surveillance tech to 14 countries.
Chinese vendors often sweeten their product pitches with offers of soft loans to encourage governments to buy. That works particularly well in countries with underdeveloped technology infrastructures, including Kenya, Laos, Mongolia, Uganda, and Uzbekistan, which likely wouldn’t be able to get the technology otherwise. From the report:
This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.
US companies are also active in worldwide exports: 32 countries are getting their AI surveillance technologies from the US. The most significant exporters are IBM, selling to 11 countries; Palantir, selling to nine; and Cisco, selling to six.
Companies based in other liberal democracies are also spreading the technologies. France, Germany, Israel, and Japan aren’t “taking adequate steps to monitor and control the spread of sophisticated technologies linked to a range of violations,” the report found.
Liberal democracies are major users of AI surveillance. The AIGS index shows that 51% of advanced democracies deploy AI surveillance systems. In contrast, 37% of closed autocratic states, 41% of electoral autocratic/competitive autocratic states, and 41% of electoral democracies/illiberal democracies deploy AI surveillance technology. Governments in “full” democracies are deploying a range of surveillance technology, from safe city platforms to facial recognition cameras, the research found. That doesn’t mean that they’re abusing these systems; whether or not governments use them for “repressive purposes” depends on “the quality of their governance.”
Is there an existing pattern of human rights violations? Are there strong rule of law traditions and independent institutions of accountability? That should provide a measure of reassurance for citizens residing in democratic states.
That doesn’t mean that “advanced” democracies aren’t struggling to balance security interests with civil liberties protections, though. The research cites a few examples of where civil liberties are losing out in that equation in such democracies as the US and France:
- A 2016 investigation revealed that Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents. Photos were snapped every second over the course of 10-hour flights. Baltimore police also deployed facial recognition cameras to monitor and arrest protesters, particularly during the 2015 riots in the city.
- A slew of companies are providing advanced surveillance equipment for use at the US-Mexico border, including dozens of towers in Arizona to spot people as far as 7.5 miles away, as the Guardian reported in June 2018. Other towers in use feature laser-enhanced cameras, radar and a communications system that scans a 2-mile radius to detect motion. The captured images are analyzed with AI to pick out humans from wildlife and other moving objects. It’s unclear whether these surveillance uses are legal or necessary.
- In France, the port city of Marseille is running the Big Data of Public Tranquility project: a program aimed at reducing crime via a vast public surveillance network featuring an intelligence operations center and nearly 1,000 intelligent closed-circuit television (CCTV) cameras – a number that’s going to double by 2020.
- In 2017, Huawei “gifted” a showcase surveillance system to the northern French town of Valenciennes to demonstrate what’s being called a “safe city” model. It included upgraded high-definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.
Autocratic and semi-autocratic governments are more prone to abuse these technologies, including those in China, Russia, and Saudi Arabia. Other governments with “dismal human rights records” are also exploiting AI surveillance to carry out repression in more limited ways, but all governments risk exploiting the technology unlawfully “to obtain certain political objectives.”
Military spending strongly correlates with AI surveillance spending: 40 of the world’s top 50 military spending countries also use AI surveillance technology.
Such countries include full democracies, dictatorial regimes, and everything in between: richer countries such as France, Germany, Japan, and South Korea, as well as poorer states such as Pakistan and Oman. This isn’t too surprising, Feldstein writes:
If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that the country will seek the latest AI tools.
The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar.
State surveillance isn’t inherently unlawful
Surveillance isn’t necessarily rooted in governments’ desire to repress their citizens, the report points out. It can play a vital role in preventing terrorism, for example, and can enable authorities to monitor other threats.
But technology has also ushered in new ways to carry out surveillance, and that’s caused the amount of transactional data – i.e., metadata – to burgeon, whether it be emails, location identification, web tracking, or other online activities.
The report quotes former UN special rapporteur Frank La Rue from a 2013 surveillance report:
Communications data are storable, accessible and searchable, and their disclosure to and use by State authorities are largely unregulated. Analysis of this data can be both highly revelatory and invasive, particularly when data is combined and aggregated.
As such, States are increasingly drawing on communications data to support law enforcement or national security investigations. States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.
Feldstein says that it goes without saying that such intrusions “profoundly affect an individual’s right to privacy – to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called ‘arbitrary or unlawful interference with his or her privacy, family, home or correspondence.’ [and]… likewise may infringe upon an individual’s right to freedom of association and expression.”