In April 2018, the EU Member States signed a declaration of cooperation on Artificial Intelligence, which included the formation of a High-Level Expert Group on AI (AI HLEG): 52 experts selected by the Commission who will draft AI ethics guidelines.[1]
Cecilia Bonefeld-Dahl, Director-General of DIGITALEUROPE and a member of the European Commission’s AI HLEG, visited Turkey last week with a delegation to support its trade association member, the Digital Turkey Platform led by TBV. She recalled that the development of new technologies will mean a continued focus on removing bias from data.
Quoting Ms. Bonefeld-Dahl:
“Ethical development of AI means producing results that can be verified to not discriminate or infringe on the human rights of EU citizens. There is also an ethical obligation for governments to ensure these technologies are being utilised to its fullest as there is a potential AI will lead to improving the lives of humans and finding solutions on the biggest issues such as climate change, food shortages and energy scarcity. To ensure such benefits can be fully exploited, all European countries including Turkey need to work together to ensure the key building blocks of innovation are in place.”[2]
Google CEO Sundar Pichai made a bold claim about artificial intelligence (AI) last year, calling it “one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.”
It is hard to disagree with Mr. Pichai that AI is another general-purpose technology, one with applications in many different fields and a transformative impact on societies. To extend the electricity analogy: AI is now at the post-Edison stage, where electricity has been invented and we are building the grid connections to cities; that is, the fundamental technology is largely developed already.
The role of government in the development of AI, however, differs from its role in many of the last century’s technological developments, including the Internet. Those technologies were developed under government leadership, in particular for military applications. AI, by contrast, is led mostly by private-sector tech giants.[3]
From 2017 to 2018, the global value derived from AI business is projected to increase 70 percent, to US$1.2 trillion. And according to the AI Index 2018 Report, the number of papers published, patents filed, conferences attended, and job openings related to AI all continue to reach new global highs.[4]
These statistics are just a few of the many from 2018 that reflect the perceived promise of the technology. A number of 2018 milestones suggest that “artificial intelligence experienced a landmark year,” as the Center for International Governance Innovation (CIGI) suggests. According to a report from the Canadian Institute for Advanced Research on national and regional AI strategies, the number of countries and regions with AI-specific strategies grew from six in 2017 to 18 in 2018.
Artificial intelligence is rapidly being embedded into the world’s digital fabric, and it threatens us the way slowly heating water threatens the proverbial frog. Increasingly, these systems are black boxes whose behaviors and decision-making processes flummox even their designers, a problem made worse by the absence of ethical codes to follow. Many people worry about the negative consequences that these advanced systems could have on business, democracy, and society.[5]
“The complexity of the challenges does not mean that solutions can’t be developed. It does mean that the solutions are unlikely to be simple and straightforward,” states the World Economic Forum (WEF) in Data Policy in the Fourth Industrial Revolution: Insights on Personal Data, a paper prepared in collaboration with the Ministry of Cabinet Affairs and the Future, United Arab Emirates.
As I suggested in a note, while the challenges confronting us today are unlike anything seen before, overcoming them demands the same age-old solution: good-faith dialogue.
Below I list three staggeringly important initiatives, all of which came to life in the past couple of weeks.
- Nesta, a UK-based foundation and one of the world’s largest innovation foundations, has taken on mapping AI governance activities worldwide, aiming to support research and policy-making with a searchable database and visualizations.
Here, Nesta suggests:
“Artificial intelligence, often referred to as “the new electricity”, is poised to drive economic growth and development over the next decades, contributing to the solution of some of the world’s most pressing problems. However, the risks and downsides of unchecked AI deployment, highlighted by examples of biased algorithms, increased profiling, or the Cambridge Analytica scandal, demonstrate an urgent need for better governance frameworks.”
- Harvard’s Berkman Klein Center for Internet and Society embeds ethics in the computer science curriculum.
This, I believe, is a huge accomplishment, as it can serve as an international role model for schools engaged in intelligent systems design.
Here, Alison Simmons, Samuel H. Wolcott Professor of Philosophy at Harvard states: “Standalone courses can be great, but they can send the message that ethics is something that you think about after you’ve done your ‘real’ computer science work.”
- On the occasion of Data Protection Day on 28 January, the Consultative Committee of the Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108) has published Guidelines on Artificial Intelligence and Data Protection.
The guidelines aim to assist policy makers, artificial intelligence (AI) developers, manufacturers and service providers in ensuring that AI applications do not undermine the right to data protection.[6] Quoting the Directorate General of Human Rights and Rule of Law at the Council of Europe:
“Artificial Intelligence (“AI”) based systems, software and devices (hereinafter referred to as AI applications) are providing new and valuable solutions to tackle needs and address challenges in a variety of fields, such as smart homes, smart cities, the industrial sector, healthcare and crime prevention. AI applications may represent a useful tool for decision making in particular for supporting evidence-based and inclusive policies. As may be the case with other technological innovations, these applications may have adverse consequences for individuals and society. In order to prevent this, the Parties to Convention 108 will ensure and enable that AI development and use respect the rights to privacy and data protection (article 8 of the European Convention on Human Rights), thereby enhancing human rights and fundamental freedoms.”
Here I share the list of general guiding principles:
- The protection of human dignity and safeguarding of human rights and fundamental freedoms, in particular the right to the protection of personal data, are essential when developing and adopting AI applications that may have consequences on individuals and society. This is especially important when AI applications are used in decision-making processes.
- AI development relying on the processing of personal data should be based on the principles of Convention 108+. The key elements of this approach are: lawfulness, fairness, purpose specification, proportionality of data processing, privacy-by-design and by default, responsibility and demonstration of compliance (accountability), transparency, data security and risk management.
- An approach focused on avoiding and mitigating the potential risks of processing personal data is a necessary element of responsible innovation in the field of AI.
- In line with the guidance on risk assessment provided in the Guidelines on Big Data adopted by the Committee of Convention 108 in 2017, a wider view of the possible outcomes of data processing should be adopted. This view should consider not only human rights and fundamental freedoms but also the functioning of democracies and social and ethical values.
- AI applications must at all times fully respect the rights of data subjects, in particular in light of article 9 of Convention 108+.
- AI applications should allow meaningful control by data subjects over the data processing and related effects on individuals and on society.
Take a “Glocal” Initiative
In an increasingly interdependent world, we need collaboration without borders more than ever. However, countless empirical studies reveal that moral choices are not universal; there are stark variations in ethics across the globe.[7] Those who think about machine ethics often make it sound as though we can come up with a perfect “one size fits all” solution; such an approach will fail.
A great friend, Markus Lehto of Join+Idea, passed me an article from Politico: “Finland’s grand AI experiment.” The idea has a simple, Nordic ring to it: start by teaching 1 percent of the country’s population, or about 55,000 people, the basic concepts at the root of artificial intelligence, and gradually build on that number over the next few years. “We’ll never have so much money that we will be the leader of artificial intelligence,” Mika Lintilä, Finland’s Minister of Economic Affairs, beautifully states. “But how we use it — that’s something different.”
“Turkey has a high level of qualified engineers and researchers that can contribute to the Horizon funding programmes,” said Ms. Bonefeld-Dahl during her recent visit to Turkey.
Take a “glocal” initiative that enhances human agency, protects human rights and democratic values, and increases societal capabilities without hindering technological development. And take it now.
References:
[1] DIGIBYTE. “EU Member States Sign up to Cooperate on Artificial Intelligence.” European Commission, 19 Apr. 2018, ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence.
[2] DIGITALEUROPE. “DIGITALEUROPE Advances Digital Transformation in Turkey.” 1 Feb. 2019, https://www.digitaleurope.org/news/delegation-turkey/.
[3] Five American companies, Google, Amazon, Facebook, Microsoft and Apple (GAFAM for short), and three Chinese companies, Baidu, Tencent and Alibaba (BAT for short), account for a significant portion of AI research.
[4] Center for International Governance Innovation. “2018: A Landmark Year for Artificial Intelligence.” 27 Dec. 2018, https://www.cigionline.org/articles/2018-landmark-year-artificial-intelligence.
[5] See, for instance, “IEEE-SA – The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.” IEEE-SA – The IEEE Standards Association – Home, Dec. 2016, standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
[6] Council of Europe. “New Guidelines on Artificial Intelligence and Data Protection.” Newsroom, 30 Jan. 2019, https://www.coe.int/en/web/data-protection/newsroom/-/asset_publisher/7oll6Oj8pbV8/content/new-guidelines-on-artificial-intelligence-and-personal-data-protection.
[7] See, for instance, the finding that, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally. Amy Maxmen, “Self-Driving Car Dilemmas Reveal That Moral Choices Are Not Universal,” Nature, https://www.nature.com/articles/d41586-018-07135-0.