
Ethics of AI: Benefits and Risks of Artificial Intelligence Systems

The convergence of the availability of vast amounts of big data, the speed and reach of cloud computing platforms, and the advancement of sophisticated machine learning algorithms has given birth to an array of innovations in Artificial Intelligence (AI).

In theory, the beneficial impact of AI systems on government translates into improving healthcare services, education, and transportation in smart cities. Other applications that benefit from the implementation of AI systems in the public sector include the food supply chain, energy, and environmental management.

Indeed, the benefits that AI systems bring to society are grand, and so are the challenges and worries. The learning curve of these evolving technologies implies miscalculations and mistakes, which can result in unanticipated harmful impacts.

We live in times when it is paramount that the possibility of harm from AI systems be recognized and addressed quickly. Identifying the potential risks caused by AI systems, then, means that a plan of measures to counteract them has to be adopted as soon as possible.

Public sector organizations can, therefore, anticipate and prevent future potential harms by creating a culture of responsible innovation that develops and implements ethical, fair, and safe AI systems.

Accordingly, everyone involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers, and departmental leads, should consider AI ethics and safety a priority.

Artificial Intelligence ethics and roboethics

Artificial Intelligence ethics, or AI ethics, comprise a set of values, principles, and techniques which employ widely accepted standards of right and wrong to guide moral conduct in the development and deployment of Artificial Intelligence technologies.

Robot ethics, also known as roboethics or machine ethics, is concerned with what rules should be applied to ensure the ethical behavior of robots as well as how to design ethical robots. Roboethics deals with concerns and moral dilemmas such as whether robots will pose a threat to humans in the long run, or whether using some robots, such as killer robots in wars, can become problematic for humanity.

Roboticists must guarantee that autonomous systems can exhibit ethically acceptable behavior in situations where robots, AI systems, and other autonomous systems such as self-driving vehicles interact with humans.

Artificial Intelligence, automation, and AI ethics

The development of AI systems must always be responsible and oriented toward optimal sustainability for public benefit. Source: putilich/iStock

Artificial Intelligence (AI) and automation are dramatically changing and influencing our society. Applying the principles of AI ethics to the design and implementation of algorithmic or intelligent systems and AI projects in the public sector is paramount. AI ethics will help ensure that the development and deployment of Artificial Intelligence are ethical, safe, and utterly responsible.

The new interconnected digital world powered by 5G technology is delivering great potential and rapid gains in the power of Artificial Intelligence to better society. Innovation and implementation of AI are already making an impact on improving services from healthcare, education, and transportation to the food supply chain, energy, and environmental management plans, to mention just a few.

With rapid advancements in computing power and access to vast amounts of big data, Artificial Intelligence and Machine Learning systems will continue to improve and evolve. Just a few years into the future, AI systems will be able to process and use data not only with even greater speed but also with greater accuracy.

As always, with power comes great responsibility. Despite the advantages and benefits that technologies such as Artificial Intelligence bring to the world, they may potentially cause irreparable harm to humans and society if they are misused or poorly designed. The development of AI systems must always be responsible and oriented toward optimal sustainability for public benefit.

Artificial Intelligence ethics and potential harms caused by AI systems

AI projects rest on the structuring and processing of big data. Source: solidcolours/iStock

AI ethics and safety must be a priority in the design and implementation of AI systems. AI ethics emerged to prevent individual and societal harms caused by the misuse, abuse, poor design, or unintended negative consequences of AI systems.

According to Dr. David Leslie, Ethics Theme Lead within the Public Policy Programme and Ethics Fellow at The Alan Turing Institute in London, England, the potential harms caused by AI systems include the following:

  • AI systems: Bias and discrimination 

The designers of AI systems choose the features, metrics, and analytic structures of the models that enable data mining. Thus, data-driven technologies such as Artificial Intelligence can potentially replicate the preconceptions and biases of their designers.

Algorithmic systems are trained and tested on data samples. Yet these samples can often be insufficiently representative of the populations from which the systems draw inferences, creating the possibility of biased and discriminatory outcomes from a flaw introduced at the very start, when the designer feeds the data into the systems, as the sketch below illustrates.
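
To make this concrete, here is a minimal sketch using scikit-learn and entirely synthetic data (the two groups, their features, and their labels are invented for illustration). A classifier trained on a sample dominated by one group performs well for that group and close to chance for the under-represented one:

```python
# A minimal sketch of sampling bias, on entirely synthetic data: group B is
# under-represented in training, so the model learns group A's pattern and
# performs close to chance on group B. All names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's label depends on its features through a different threshold.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training sample: group A dominates (95%); group B is barely represented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Held-out evaluation with equal-sized samples from both groups.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))  # high
print("accuracy on group B:", model.score(Xb_test, yb_test))  # near chance
```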

  • AI systems: Denial of individual autonomy, recourse, and rights

In the past, the cognitive functions that AI systems now automate were attributable exclusively to accountable human agents. Today, AI systems make decisions, predictions, and classifications that affect citizens.

Certain situations may arise in which the affected individuals are unable to hold accountable the parties responsible for the outcomes. One of the most common human responses to justify a negative result is to blame the AI system, adding that there is nothing anyone can do to change the outcome. This is simply not true.

Such a response is indefensible, since AI systems are designed and programmed by human designers; therefore, it is a human who can correct and change an unsatisfactory outcome. Consider a case of injury: an accountability gap of this kind may harm the autonomy and violate the rights of the affected individuals.

  • AI systems: Non-transparent, unexplainable, or unjustifiable outcomes

In some cases, machine learning models may generate their results by operating on high-dimensional correlations that are beyond the interpretive capabilities of human reasoning.

These are cases in which the rationale of algorithmically produced outcomes that directly affect decision subjects may remain opaque to those subjects. In some use cases, this lack of explainability may not cause too much trouble.

However, in applications where the processed data could harbor traces of discrimination, bias, inequity, or unfairness, the lack of clarity of the model may be deeply problematic; one common mitigation from the explainability literature is sketched below.
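
One standard technique, offered here as an illustration rather than as part of the source guide, is the global surrogate model: a small, interpretable model is trained to imitate the black box's predictions, so that an approximate, human-readable rationale can be inspected. A minimal sketch on synthetic data using scikit-learn:

```python
# A minimal sketch of a global surrogate model, on synthetic data: a shallow
# decision tree is trained to imitate a black-box model's predictions so its
# behavior can be read as explicit rules. All data here is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)  # non-linear ground truth

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate is fit to the black box's *predictions*, not the true labels,
# so it approximates what the model does rather than the task itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```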

  • AI systems: Invasions of privacy

AI systems pose threats to privacy in two ways:

– As a result of their design and development processes

– As a result of their deployment

AI projects rest on the structuring and processing of big data. Massive amounts of personal data are collected, processed, and utilized to develop AI technologies. More often than not, big data is captured and extracted without the proper consent of the data subject. Quite often, uses of big data reveal, or place at risk, personal information, compromising the privacy of the individual.

The deployment of AI systems can target, profile, or nudge data subjects without their knowledge or consent. Such AI systems thereby infringe on individuals' ability to lead a private life. This invasion of privacy can consequently harm the right to pursue goals or life plans free from unchosen influence.

  • AI systems: Isolation and disintegration of social connection

The capacity of AI systems to curate individual experiences and to personalize digital services has the potential to improve consumer life and service delivery. This is a benefit if done right, yet it comes with potential risks.

Such risks may not be visible, or may not register as risks, at the start. However, excessive automation may reduce human-to-human interaction, and with it the ability to resolve problematic situations at an individual level.

Algorithmically enabled hyper-personalization might improve customer satisfaction, but it limits our exposure to worldviews different from our own, and this might polarize social relationships, as the toy sketch below suggests.
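
As an illustration only, the toy simulation below uses invented item "topic" embeddings: a recommender that always greedily picks the unseen item most similar to the user's history ends up consuming a narrower spread of topics than the catalogue offers:

```python
# A toy simulation of exposure narrowing: items live in an invented 8-topic
# embedding space, and the recommender always greedily picks the unseen item
# most similar to the user's history. The consumed items end up spanning a
# narrower spread of topics than the catalogue as a whole.
import numpy as np

rng = np.random.default_rng(2)
items = rng.normal(size=(500, 8))            # catalogue in "topic" space
consumed = [int(rng.integers(500))]          # start from one random item

for _ in range(30):
    profile = items[consumed].mean(axis=0)   # user profile = mean of history
    scores = items @ profile                 # similarity of every item
    scores[consumed] = -np.inf               # never recommend a repeat
    consumed.append(int(np.argmax(scores)))  # greedy: most similar unseen item

seen = items[consumed]
print("topic spread of consumed items:", round(float(seen.std(axis=0).mean()), 3))
print("topic spread of full catalogue:", round(float(items.std(axis=0).mean()), 3))
```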

Since the times of the Greek philosopher Plato, well-ordered and cohesive societies have been built on relations of human trust, empathy, and mutual understanding. As Artificial Intelligence technologies become more prevalent, it is paramount that these relations of trust, empathy, and mutual understanding remain intact.

  • AI systems: Unreliable, unsafe, or poor-quality outcomes

The implementation and distribution of AI systems that produce unreliable, unsafe, or poor-quality outcomes may be the result of irresponsible data management, negligent design and production processes, or questionable deployment practices. Consequently, such systems can directly damage the wellbeing of individuals as well as the public welfare.

Such outcomes can also undermine public trust in the responsible use of societally beneficial AI technologies. Furthermore, they can create harmful inefficiencies by dedicating limited resources to ineffective or even detrimental AI technologies.

Applied ethics of Artificial Intelligence

The Thinker by Auguste Rodin. Perhaps in the future, General AI might become a moral agent with attributed moral responsibility. Source: davidf/iStock

In his guide, Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector, supported exclusively by The Alan Turing Institute Public Policy Programme, Dr. David Leslie writes:

“When humans do things that require intelligence, we hold them responsible for the accuracy, reliability, and soundness of their judgments. Moreover, we demand of them that their actions and decisions be supported by good reasons, and we hold them accountable for their fairness, equity, and reasonableness of how they treat others.”

According to Marvin Minsky, the American cognitive scientist, co-founder of the Massachusetts Institute of Technology's AI laboratory, and AI pioneer, Artificial Intelligence is the science of making computers do things that require intelligence when done by humans.

It is this standard definition that gives us a clue to the motivation that led to the development of the field of the applied ethics of Artificial Intelligence.

According to Dr. David Leslie, the need to develop principles tailored to the design and use of AI systems arises because their emergence and expanding power to do things that require intelligence have heralded a shift of a wide array of cognitive functions into algorithmic processes, which themselves can be held neither directly responsible nor immediately accountable for the consequences of their behavior.

Program-based machinery, such as AI systems, cannot be considered morally accountable agents. This reality created room for a discipline that could address the ethical breach in the applied science of Artificial Intelligence.

This is precisely the gap that frameworks for AI ethics are now trying to fill. Fairness, accountability, sustainability, and transparency are principles meant to bridge the distance between the new smart agency of machines and their fundamental lack of moral responsibility.

On the other hand, when humans do things that require intelligence, they are held responsible. In other words, at the current level at which Artificial Intelligence operates, humans alone are responsible for their program-based creations.

Those who design and implement Artificial Intelligence systems must be held accountable. Perhaps in the future, General AI might become a moral agent with attributed moral responsibility.

However, for now, engineers and designers of AI systems must assume responsibility and be held accountable for what they create, design, and program.

Retrieved from Interesting Engineering.
