Grand Challenge – Values-Centered AI

AI is transforming healthcare, education, transportation,
and communication. In 30 years, we see AI:

Grand Challenge – Problem

Unfortunately, AI today falls short of that vision.

There have been a number of high-profile AI failures, which have largely been failures of the values emphasized above. These range from sexism in hiring algorithms and racism in computer vision to antisemitism and misogyny in chatbots and racially biased risk assessment in the criminal justice system.

Some failures are more subtle: for instance, police arresting someone over a mistranslated social media post, or Instagram’s feed ranker implicitly encouraging users – especially women – to show more skin in order to have their posts ranked highly.

A large part of why these failures have happened is that AI as practiced today is not much different from the technology-centered approach of the 1960s, when folks like Allen Newell and Herbert Simon were trying to build a chess-playing program. Yet chess playing is a far cry from high-stakes settings like education and health, where it is not enough to study technology in isolation.

We must instead build the technology within the social context in which it will be deployed, and understand the human norms, human values, and societal expectations that govern that context.

Grand Challenge – Implementing AI Values

What AI development currently lacks is not only an understanding of human values, but also ways to translate principles and goals into models and outcomes. Stakeholders ranging from private companies to governments – including the White House Office of Science and Technology Policy – have proposed many sets of principles to guide AI development. The good news is that these guidelines tend to converge around values like transparency, justice and fairness, non-maleficence, responsibility, and privacy.

However, implementing these principles is context- and application-dependent, analogous to how courts of law interpret broad legislation in specific cases.
Moreover, tensions between principles must be adjudicated: for instance, one cannot have complete privacy while still allowing a system to be audited for discrimination.

The team to address this includes:

As philosopher Jeff Horty would point out, there is an analogy here to law and governance: legislators or federal agencies make “open-textured” rules to represent community values, and it is up to courts to interpret those rules. That interpretation relies upon ethical reasoning. Right now, interpreting “open-textured” values for AI is left to developers and companies disconnected from the communities impacted by AI tools.

AI development needs mechanisms that increase the participation of stakeholders in guiding how values are interpreted and implemented in AI systems. We want to work within situated communities to invent those mechanisms.

Grand Challenge – Solution

The proposed solution is simple to state but hard to do: put AI in a human context by developing theories, practices, and tools that ensure AI respects human values.

Achieving this requires broad expertise: AI, human-computer interaction, philosophy & ethics, data science, and domain knowledge.

Domain experts are needed in particular because it is impossible to put AI in a human context without experts in that context. Six initial target domains – chosen for existing connections between researchers and stakeholder populations, as well as coverage across a range of values considerations – are: accessibility, communication, education, health care, sense-making, and transportation.