Blog post by Bushra Ali Khan, a Guest Member of The Women in Refugee Law (WiRL) think tank at the University of Sussex and an incoming PhD student at King’s College London
The rapid advancement of artificial intelligence (AI) has dramatically impacted various aspects of human life and societal structures, including border management. While AI offers promising solutions for economic and governmental challenges, it also raises significant ethical concerns. AI technologies, particularly in the context of border management, present a double-edged sword: enhancing security on one hand while potentially infringing on privacy and fundamental rights on the other.
The concept of algorithmic identity illustrates how AI shapes individuals’ identities based on data analytics. These algorithmic agents operate independently, influencing our sense of self and future. The extensive use of personal data by governments and private entities, often likened to Big Brother, gives rise to surveillance capitalism: individuals trade their privacy for the convenience of digital services, ultimately fostering a society dependent on smart products controlled by tech giants.
The implications of this control are particularly evident for marginalised groups, such as migrants and refugees. AI’s role in migration management has sparked debates over the EU’s commitment to human rights amidst the construction of a Fortress Europe that, as Yosefa Loshitzky describes, “increasingly erects racial, ethnic and religious boundaries”. Daniel Schiff outlines the possible concerns: the proliferation of discriminatory algorithms, the spectre of automated mass surveillance and the existential threat posed by a future superintelligence. Against this backdrop, this article critically examines AI technologies used in EU border management – biometric identification (facial recognition) and algorithmic profiling – highlighting their impact on migrants’ fundamental rights and privacy.
Biometric Identification – Facial Recognition
Facial recognition technology, a form of biometric identification, has seen significant advancements. Deep-learning-based systems identify individuals by cross-referencing facial features with existing data. Automated Border Control (ABC) systems, employed at EU border checkpoints, authenticate travel documents and verify passenger identities against border control records. These systems are now operational at several EU airports, including those in Italy and Portugal.
Biometric identification relies on unique, permanent characteristics like facial features. These biometrics anchor identity in the human body, facilitating information linkage (Surveillance Studies Network, 2006) and giving rise to what Ploeg (2005) terms “informatisation”. The Schengen Information System (SIS) and Visa Information System (VIS) are key EU border management tools using facial recognition. As of 2020, SIS contained 965,000 alerts on persons, with 30% involving facial images. VIS stored 68 million digital photos of visa applicants.
The forthcoming Entry/Exit System (EES) will utilise machine learning for biometric matching. This system aims to enhance security by tracking non-EU nationals’ entry and exit across the Schengen Area. However, the accuracy and reliability of facial recognition remain contentious: it is more error-prone than other biometrics, and errors can lead to false acceptances or rejections, with legal and discriminatory consequences for individuals.
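To make these error modes concrete, the sketch below shows, in highly simplified form, how a verification system of this general kind decides whether a live face capture matches the photo held in a document or database record: each image is reduced to a numerical embedding, and a match is declared when the similarity between embeddings crosses a fixed threshold. The embeddings, similarity measure and threshold here are invented for illustration and are not taken from the EES or any specific ABC system; the point is simply that wherever the threshold is set, some false acceptances or false rejections remain possible.

```python
import numpy as np

# Illustrative only: in a real system these vectors would come from a
# deep-learning model that maps a face image to an embedding.
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray,
           document_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Declare a match if similarity exceeds the decision threshold.

    A higher threshold reduces false acceptances (matching two different
    people) but increases false rejections (turning away the genuine
    document holder), and vice versa.
    """
    return cosine_similarity(live_embedding, document_embedding) >= threshold

# Hypothetical embeddings standing in for a live camera capture and the
# photo stored in a travel document chip or border database record.
rng = np.random.default_rng(seed=0)
document = rng.normal(size=128)
same_person = document + rng.normal(scale=0.3, size=128)   # small variation
different_person = rng.normal(size=128)                     # unrelated face

print(verify(same_person, document))       # likely True  (genuine match)
print(verify(different_person, document))  # likely False (correct rejection)
```

The trade-off embedded in that single threshold is why low-quality images, demographic variation and poor enrolment photos translate directly into the false acceptances and rejections discussed above.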
Algorithmic Profiling
Algorithmic profiling, used globally for risk assessment in border management, analyses existing data to identify individuals requiring extra scrutiny. The EU’s VIS and European Travel Information and Authorisation System (ETIAS) employ these methods to flag persons of interest based on pre-set risk profiles. These profiles consider factors like age, sex, nationality, education, occupation and residence.
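As a rough illustration of how rule-based profiling of this sort can operate, the sketch below flags an application whenever it matches a pre-set combination of attributes. The fields, profiles and rules are entirely hypothetical and are not drawn from VIS or ETIAS; they simply show how a handful of broad categories can single a person out for extra scrutiny before any individual assessment takes place.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """A stripped-down travel authorisation application (hypothetical fields)."""
    age: int
    sex: str
    nationality: str
    education: str
    occupation: str
    residence: str

# Hypothetical pre-set risk profiles: each rule is a set of attribute
# conditions, and matching every condition in any one rule flags the applicant.
RISK_PROFILES = [
    {"nationality": "Country A", "occupation": "student"},
    {"residence": "Region X", "age_under": 30},
]

def is_flagged(app: Application) -> bool:
    """Return True if the application matches any pre-set risk profile."""
    for profile in RISK_PROFILES:
        matches = True
        for key, value in profile.items():
            if key == "age_under":
                matches = matches and app.age < value
            else:
                matches = matches and getattr(app, key) == value
        if matches:
            return True
    return False

applicant = Application(age=24, sex="F", nationality="Country A",
                        education="secondary", occupation="student",
                        residence="Region Y")
print(is_flagged(applicant))  # True: flagged on nationality and occupation alone
```

Because such rules operate on group-level attributes rather than individual behaviour, everyone sharing those attributes is treated identically, which is precisely the concern critics raise about pre-set risk profiles.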
Critics argue that mandatory data collection, such as advance passenger information (API) and passenger name records (PNR), infringes on individual privacy. API data, collected by airlines, helps enforce border control and prevent irregular migration by identifying potential threats before they reach the border. PNR data includes comprehensive passenger information used for security screening.
The EU’s reliance on data from third countries poses additional challenges. Inaccurate or politically motivated data can lead to wrongful persecution of individuals. The European Data Protection Supervisor (EDPS) has raised concerns about the justification for such extensive data collection, emphasising that the availability of technology should not drive its usage.
Challenges in Biometric Identification and Algorithmic Profiling
Despite advancements, facial recognition technology remains prone to errors, particularly in the context of diverse populations. The EU Commission’s AI Proposal (2021) classifies facial recognition as a high-risk AI system due to its potential for false matches. Errors in recognition can lead to significant consequences for individuals, including legal issues and discriminatory practices.
The accuracy of facial recognition depends on high-quality images, but practical limitations like low-resolution CCTV footage can compromise this. Moreover, the technology’s performance degrades when applied to children compared to adults.
Algorithmic profiling faces similar challenges. Errors in data entry or biased data can lead to incorrect correlations and distortions, resulting in unfair treatment of individuals. Dependence on third-country information also risks misuse by governments for political purposes – a risk the EU itself has acknowledged as serious. These inaccuracies threaten the fundamental rights of affected individuals, necessitating careful oversight and regulation.
Conclusion
AI technologies in border management present significant ethical and practical challenges. The use of biometric identification and algorithmic profiling must be balanced with considerations of privacy and fundamental rights. Ensuring accuracy and reliability in data collection and analysis is crucial to prevent discriminatory practices. Enhanced transparency, better data quality and robust regulatory frameworks are essential to protect vulnerable populations and uphold human rights in the age of AI-driven border management.
The balance between security and privacy is delicate and must be carefully managed. The potential of AI to streamline and enhance border management is undeniable, but this should not come at the expense of fundamental human rights. Misidentifications and algorithmic biases not only undermine the efficacy of border security but also lead to serious human rights violations. The EU’s border management strategies must therefore evolve to incorporate rigorous safeguards that protect individual rights and prevent discrimination.
In conclusion, while AI offers promising advancements in managing borders, it is imperative to prioritise the protection of privacy and fundamental rights. By implementing robust safeguards and maintaining a commitment to ethical principles, the EU can harness the benefits of AI while safeguarding the dignity and rights of all individuals.
The views expressed in this article belong to the author/s and do not necessarily reflect those of the Refugee Law Initiative.