
Huawei tested AI software that could recognize Uighur minorities and alert police, report says

An internal report claims the face-scanning system could trigger a ‘Uighur alarm,’ sparking concerns that the software could help fuel China’s crackdown on the mostly Muslim minority group

December 8, 2020 at 10:30 a.m. EST
Workers walk by the perimeter fence of what is officially known as a vocational skills education center in the Xinjiang region of China in 2018. (Thomas Peter/Reuters)

The Chinese tech giant Huawei has tested facial recognition software that could send automated “Uighur alarms” to government authorities when its camera systems identify members of the oppressed minority group, according to an internal document that provides further details about China’s artificial-intelligence surveillance regime.

A document signed by Huawei representatives — discovered by the research organization IPVM and shared exclusively with The Washington Post — shows that the telecommunications firm worked in 2018 with the facial recognition start-up Megvii to test an artificial-intelligence camera system that could scan faces in a crowd and estimate each person’s age, sex and ethnicity.

If the system detected the face of a member of the mostly Muslim minority group, the test report said, it could trigger a “Uighur alarm” — potentially flagging them for police in China, where members of the group have been detained en masse as part of a brutal government crackdown. The document, which was found on Huawei’s website, was removed shortly after The Post and IPVM asked the companies for comment.

Such technology has in recent years taken on an expanding role among police departments in China, human rights activists say. But the document sheds new light on how Huawei, the world’s biggest maker of telecommunications equipment, has also contributed to its development, providing the servers, cameras, cloud-computing infrastructure and other tools undergirding the systems’ technological might.

John Honovich, the founder of IPVM, a Pennsylvania-based company that reviews and investigates video-surveillance equipment, said the document showed how “terrifying” and “totally normalized” such discriminatory technology has become.

“This is not one isolated company. This is systematic,” Honovich said. “A lot of thought went into making sure this ‘Uighur alarm’ works.”

In the past couple of years, Huawei and Megvii have announced three surveillance systems that use both companies’ technology. The Post could not immediately confirm whether the system with the “Uighur alarm” tested in 2018 was one of the three currently for sale.

Both companies have acknowledged the document is real. Shortly after this story was published Tuesday morning, Huawei spokesman Glenn Schloss said the report “is simply a test and it has not seen real-world application. Huawei only supplies general-purpose products for this kind of testing. We do not provide custom algorithms or applications.”

Also after publication, a Megvii spokesman said the company’s systems are not designed to target or label ethnic groups.

Chinese officials have said such systems reflect the country’s technological advancement, and that their expanded use can help government responders and keep people safe. But to international rights advocates, they are a sign of China’s dream of social control — a way to identify unfavorable members of society and squash public dissent. China’s foreign ministry did not immediately respond to requests for comment.

Zumrat Dawut was placed in an internment camp for Uighur minorities in Xinjiang and was forcibly sterilized, but she made it out of China. (Video: Jason Aldag/The Washington Post; Photo: Marlena Sloss/The Washington Post)


Artificial-intelligence researchers and human rights advocates said they worry the technology’s development and normalization could lead to its spread around the world, as government authorities elsewhere push for a fast and automated way to detect members of ethnic groups they’ve deemed undesirable or a danger to their political control.

Maya Wang, a China senior researcher at the advocacy group Human Rights Watch, said the country has increasingly used AI-assisted surveillance to monitor the general public and oppress minorities, protesters and others deemed threats to the state.

“China’s surveillance ambition goes way, way, way beyond minority persecution,” Wang said, but “the persecution of minorities is obviously not exclusive to China. … And these systems would lend themselves quite well to countries that want to criminalize minorities.”

Trained on immense numbers of facial photos, the systems can begin to detect certain patterns that might differentiate, for instance, the faces of Uighur minorities from those of the Han majority in China. In one 2018 paper, “Facial feature discovery for ethnicity recognition,” AI researchers in China designed algorithms that could distinguish between the “facial landmarks” of Uighur, Korean and Tibetan faces.

But the software has sparked major ethical debates among AI researchers, who say it could assist in discrimination, profiling or punishment. They also argue that such a system is bound to return inaccurate results, because its performance varies widely with lighting, image quality and other factors, and because people’s ethnicities and backgrounds do not break down cleanly into simple groupings.


Such ethnicity-detection software is not available in the United States. But algorithms that can analyze a person’s facial features or eye movements are increasingly popular in job-interview software and anti-cheating monitoring systems.

Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy and Technology who has studied facial recognition software, said the “Uighur alarm” software represents a dangerous step toward automating ethnic discrimination at a devastating scale.

“There are certain tools that quite simply have no positive application and plenty of negative applications, and an ethnic-classification tool is one of those,” Garvie said. “Name a human rights norm, and this is probably violative of that.”

Huawei and Megvii are two of China’s most prominent tech trailblazers, and officials have cast them as leaders of a national drive to reach the cutting edge of AI development. But the multibillion-dollar companies have also faced blowback from U.S. authorities, who argue they represent a security threat to the United States or have contributed to China’s brutal regime of ethnic oppression.


Eight Chinese companies, including Megvii, were hit with sanctions by the U.S. Commerce Department last year for their involvement in “human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance” against Uighurs and other Muslim minority groups.

The U.S. government has also issued sanctions against Huawei, banning the export of U.S. technology to the company and lobbying other countries to exclude its systems from their telecommunications networks.

Huawei, a hardware behemoth with equipment and services used in more than 170 countries, has surpassed Apple to become the world’s second-biggest maker of smartphones and is pushing to lead an international rollout of new 5G mobile networks that could reshape the Internet.

And Megvii, the Beijing-based developer of the Face++ system and one of the world’s most highly valued facial recognition start-ups, said in a public-offering prospectus last year that its “city [Internet of Things] solutions,” which include camera systems, sensors and software that government agencies can use to monitor the public, covered 112 cities across China as of last June.


The “Uighur alarm” document obtained by the researchers, called an “interoperability test report,” offers technical information on how authorities can align the Huawei-Megvii systems with other software tools for seamless public surveillance.

The test assessed how a mix of Megvii’s facial recognition software and Huawei’s cameras, servers, networking equipment, cloud-computing platform and other hardware and software performed on dozens of “basic functions,” including support for “recognition based on age, sex, ethnicity and angle of facial images,” the report states. The system passed those tests, as well as another assessing its ability to support offline “Uighur alarms.”

The test report also said the system was able to take real-time snapshots of pedestrians, analyze video files and replay the 10 seconds of footage before and after any Uighur face was detected.

The document did not provide information on where or how often the system is used. But similar systems are used by police departments across China, according to official documents reviewed last year by the New York Times, which found one city system that had scanned for Uighur faces half a million times in a single month.

Jonathan Frankle, a deep-learning researcher at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Lab, said such systems are clearly becoming a priority among developers willing to capitalize on the technical ability to classify people by ethnicity or race. The flood of facial-image data from public crowds, he added, could be used to further develop the systems’ precision and processing power.

“People don’t go to the trouble of building expensive systems like this for nothing,” Frankle said. “These aren’t people burning money for fun. If they did this, they did it with a very specific reason in mind. And that reason is very clear.”


It’s less certain whether ethnicity-detecting software could ever take off outside the borders of a surveillance state. In the United States and other Western-style democracies, the systems could run up against long-established laws limiting government searches and mandating equal protection under the law.

Police and federal authorities in the United States have shown increasing interest in facial recognition software as an investigative tool, but the systems have sparked a fierce public backlash over their potential bias and inaccuracies, and some cities and police forces have opted to ban the technology outright.

Such technologies could, however, find a market among regimes that sit somewhere between Chinese and American spheres of influence. In Uganda, police and government officials have already used Huawei facial recognition cameras to surveil protesters and political opponents.

“If you’re willing to model your government and run your country in that way,” Frankle said, “why wouldn’t you use the best technology available to exert control over your citizens?”


Discrimination against Uighurs has long been prevalent in the majority-Han Chinese population. In the Xinjiang region of northwestern China, authorities have cited sporadic acts of terrorism as justification for a harsh crackdown starting in 2015 that has drawn condemnation from the United States and other Western nations. Scholars estimate more than 1 million Uighurs have been detained in reeducation camps, with some claims of torture.

U.S. national security adviser Robert O’Brien called the repressive treatment of minority groups in Xinjiang “something close to” genocide during an online event hosted by the Aspen Institute in October.

Speaking at the State Department on Jan. 7, Secretary of State Mike Pompeo slammed Beijing over its treatment of Uighur Muslims in the Xinjiang region of China. (Video: The Washington Post)

Under international pressure, Xinjiang authorities announced last December that all reeducation “students” had graduated, though some Uighurs have since reported that they were forced to agree to work in factories or risk a return to detention. Xinjiang authorities say all residents work of their own free will.

The U.S. government has banned the import of certain products from China on the basis that they could have been made by forced labor in Xinjiang.

One of the Huawei-Megvii systems offered for sale after the “Uighur alarm” test, in June 2019, is advertised as saving local governments digital storage space by consolidating images in a single place.

Two other systems, said to use Megvii’s surveillance software and Huawei’s Atlas AI computing platform, were announced for sale in September. Both were described as “localization” of the products using Huawei chips and listed for sale “only by invitation.” Marketing materials for one of those systems say it was used by authorities in China’s southern Guizhou province to catch a criminal.