Privacy — from governments, from companies, and from each other — is rapidly disappearing online, a trend that deeply concerns most of us. Unfortunately, there is a lot of grey area in a world where technological and data innovation vastly outpaces legal discourse — or even our own individual understanding of the issues at stake. Often, we think of scandals like Cambridge Analytica as the main data privacy hazard, with the government playing the role of ill-equipped would-be protector.
But governments have collected, and are still collecting, more and more information — on citizens and non-citizens alike — and the Biden administration’s use of CBP One is a prime example.
CBP One is an app that the Department of Homeland Security (DHS) uses to “collect, process, and store sensitive information” on asylum seekers before they enter the United States. The app uses facial recognition software and location data to process migrants who were denied asylum during the Trump administration and are now reapplying. Asylum seekers submit biographical information, photographs, and other biometric indicators. The app then verifies that information and decides whether asylum seekers can pursue their claims and enter the U.S., or whether they will be denied entry at the border.
Most new programs — app or not — from agencies like DHS or Customs and Border Protection (CBP) have to go through a public notice-and-comment process before they can be implemented, but CBP One received “emergency approval” to be used on undocumented migrants without the standard review.
Some of the most vocal critics of Facebook, Google, and other big-name data collectors align with the Democratic Party, although disapproval of the media giants is bipartisan, for differing reasons. Yet legislators have rarely criticized CBP One, despite the fact that the app sets a dangerous precedent for the future of privacy within American borders and beyond.
The ACLU and other advocacy organizations have pointed out that “emergency” procedures can become — and have become — the status quo once the emergency passes. Using this app now creates a precedent for processing undocumented migrants en masse in the future, regardless of whether they are seeking asylum. That tremendously expands the data the app could collect, and other law enforcement agencies already have access to its database. Migrants could not meaningfully consent to the app, because refusing it would have ended their asylum claims; the information they are forced to share can be held in the database for up to 75 years, posing a direct threat to their lives and well-being from law enforcement even after they cross the border.
Last, but not least, this is part of a broader trend of surveillance in the United States. In a post-9/11 world, Customs and Border Protection has used increasingly invasive methods of surveillance to expel individuals with expired visas or who crossed the border without documentation, including facial recognition technology long before CBP One was created. Facial recognition algorithms suffer from many of the same biases their creators do — their accuracy drops dramatically when identifying people of color — and beyond that, technology is increasingly used to find, analyze, and store staggering amounts of data on people on the government’s behalf.