
Refugees Are At Risk From Dystopian ‘Smart Border’ Technology

New technologies deployed at borders for migration management and border security, under the umbrella of “smart border” solutions, ignore the fundamental human rights of migrants.

Unmanned aerial vehicles (drones) are often deployed to surveil refugees in the US and the EU, and big data analytics are used to monitor migrants approaching the border. Though methods of border security and management vary, many are increasingly used to prevent migratory movements.

Artificial intelligence (AI) is an important component of migration management. For instance, the EU, the US and Canada invest in AI algorithms to automate decisions on asylum and visa applications and refugee resettlement. Meanwhile, the real-time data collected from migrants by various smart border and virtual wall solutions, such as satellites, drones and sensors, is assessed by AI algorithms at the border.

At the US-Mexico border, for example, the US Customs and Border Protection (CBP) agency is using artificial intelligence, military drones with facial recognition technology, thermal imaging and fake cellphone towers to monitor migrants before they even reach the border. These tools can be used to listen to conversations between migrants, identify people from their faces, check their social media accounts and locate those trying to cross the border.

A new UN report has warned about the risks that so-called “smart” border technology poses to refugees in particular. These technologies help border agencies to stop and control the movement of migrants, securitise migration governance by treating migrants as criminals, and ignore people’s fundamental right to seek asylum. Furthermore, they collect data without obtaining the consent of migrants, practices that in other circumstances would likely be criminal if deployed against citizens.

As researcher Roxana Akhmetova has written: “the automated decision-making processes can exacerbate pre-existing vulnerabilities by adding on risks such as bias, error, system failure and theft of data. All of which can result in greater harm to migrants and their families. A rejected claim formed on an erroneous basis can lead to persecution.”

This is a good example of how algorithmic technology more generally can be shaped by the biases of its creators to discriminate against the less privileged in society and serve the privileged. In the case of refugees, people who have had to flee their homes because of war are now being subjected to experiments with advanced technology that increase the risks faced by an already vulnerable population.

Data and consent

Another issue at stake here is the informed consent of refugees: the idea that refugees should understand the systems they are subjected to and should have the chance to opt out of them. While voluntary informed consent is a legal requirement, many academics and humanitarian NGOs focus on “meaningful informed consent”, which goes beyond signing a paper and requires helping refugees to fully understand what they are subject to. Secret surveillance gives them no such chance. And the technologies involved are so complex that even the staff operating them have been said to lack the expertise to assess their ethical and practical implications.

Despite the recent UN report’s warning about smart border solutions, many governments and various UN agencies dealing with refugees increasingly prefer to employ tech-based solutions, for example to assess people’s claims for aid, cash transfers and identification. But what happens to people who are not willing to share their data, for any reason, be it political, religious or personal?

Using these technologies requires public-private partnerships and lengthy technical preparations before refugees encounter them on the ground. By the end of all the processes to decide on, fund and develop the algorithms, recognising a right of “beneficiaries” to reject these technologies is neither realistic nor practical. Therefore, most of these tech-based investments categorically undermine refugees’ informed consent, because the very nature of the work behind these decisions is to deny those rights.

Refugees can benefit from the increasing use of digital technology, as smartphones and social media can help them connect with humanitarian organisations and stay in touch with families back home. But ignoring the power imbalance created by the loss of rights that comes with using such technology romanticises the relationship between refugees and their smartphones.

It’s not too late to change this course of technological development. But refugees do not have the same political agency as domestic citizens to organise and oppose government actions. If you want to see what a dystopian tech-dominated future in which people lose their political autonomy looks like, the daily experiences of refugees will provide sufficient clues.

