
How This AI Ethics Researcher Combines Anthropology And Technology To Build Human-First Solutions

  • Author: Vijetha IAS

  • Date: 07 April 2022


AI is penetrating every aspect of our lives, directly shaping how we function as a society. Incidents such as the Google-Timnit Gebru fiasco, the Cambridge Analytica scandal, the SolarWinds hack, and surveillance overreach are the result of years of neglecting the ethical side of AI.

Unfortunately, as a 2018 MIT study puts it, “there’s a big gap between how AI can be used and how it should be used”. The study found that while 30 percent of large companies in the US are actively deploying AI applications, few of them have a concrete plan to ensure those applications are ethically fair.

The role of an AI ethicist has never been more important. We caught up with Aparna Ashok, an AI ethics researcher and tech anthropologist, to understand her work on adopting a ‘humanity-first approach when designing emerging technology services’. Aparna currently runs Ethics Sprint, a platform that helps technology companies embed ethics into their product development process.

Aparna was named in Lighthouse3’s list of ‘100 Brilliant Women in AI Ethics’ for 2020 and is also on the Advisory Group of Wellcome Trust’s Data Labs, a strategic initiative of the UK’s largest charitable foundation.

How It All Started

Aparna started her career consulting for companies on building responsible businesses. Back in 2015, she worked with an impact organisation in India delivering healthcare to low-income communities. Part of her work involved designing an electronic health record and a training platform. The experience led her to combine her anthropology background with technology.

“Technology anthropology consists of two things: understanding human needs and converting them into a technological product, and studying, at the macro level, how these technological interventions change our everyday life. My work as a tech anthropologist revolves around the study of the interaction between people and digital solutions, the changing nature of technology and its impacts on society.”

Aparna did her Master’s thesis on ‘Anticipatory Ethics for AI’. Interviewing tech practitioners as part of her research gave Aparna perspective on the risks and opportunities at the intersection of AI and society.

“AI is a powerful, influential tool with far-reaching and real implications for society and individuals. Understanding its ubiquity (we are all already subjected to data-driven recommendations and profiling) and the fact that it is only going to become more prevalent is what drew me to AI ethics. How AI systems change our future depends on the people and policies that guide their implementations. My master’s thesis showed that owners and technologists working on these systems, even when they have good intentions, are not able to see the implications of the technical decisions they make. While this is true for a lot of technology, the opaque nature, decision-making capability and self-learning capacity of automated decision-making systems make this an urgent and critical matter to be considered,” she said.

Ethical Principles

In 2018, Aparna developed a framework of ‘Ethical Principles for Humane Technology’. “I developed and refined this framework to create a common language to reflect on humanity-related implications within the product design process.” In her report, Aparna noted that while companies race to use vast amounts of data, analytics, and computational power to build accurate systems, the crucial aspect of the ‘real-life consequences of these decisions on living, breathing human beings’ is often overlooked.

The framework lists six lenses for understanding the impact of AI on human life (a sketch of how a team might encode them as a checklist follows the list).

Well-being: It refers to aligning system goals to serve the best interests of humanity. This can be achieved by keeping the user informed of system goals, designing systems that enable competency and connection, and building an overall business model that supports human outcomes.

Inclusion: As the name suggests, it is about embracing diversity and creating a sense of belonging. Inclusion in system design can be achieved by mapping and accounting for the diverse capabilities of users, representing different groups of users in algorithm training, and incorporating representatives from the target group in the team.

Privacy: Making sure the information collected, analysed, processed, and shared honours the user’s ownership.

Security: Protecting users’ psychological, emotional, intellectual, digital and physical safety.

Accountability: It refers to creating transparency in decision making, addressing biases, and giving users an opportunity to challenge decisions.

Trust: Creating a reliable environment that promotes ‘authentic engagement’.
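
To make these lenses actionable inside a product cycle, a team could encode them as a review checklist that gates a release. The Python sketch below is purely illustrative: the ReviewItem structure and the example questions are our own shorthand for the lenses above, not part of Aparna’s framework or Ethics Sprint’s tooling.

# Illustrative sketch only: the six lenses as a pre-release review
# checklist. Lens names come from the framework above; the ReviewItem
# structure and the example questions are hypothetical.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    lens: str              # which of the six lenses the question belongs to
    question: str          # what the team must answer before release
    answered: bool = False

ETHICS_CHECKLIST = [
    ReviewItem("Well-being", "Is the user kept informed of the system's goals?"),
    ReviewItem("Inclusion", "Are diverse user groups represented in algorithm training?"),
    ReviewItem("Privacy", "Does data handling honour the user's ownership?"),
    ReviewItem("Security", "Is users' digital and physical safety protected?"),
    ReviewItem("Accountability", "Can users challenge an automated decision?"),
    ReviewItem("Trust", "Does the design promote authentic engagement?"),
]

def unresolved(checklist):
    """Return the items a release review still has to sign off on."""
    return [item for item in checklist if not item.answered]

for item in unresolved(ETHICS_CHECKLIST):
    print(f"[{item.lens}] {item.question}")

Run at each design review, a list like this keeps the six lenses visible without adding heavy process.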

“What is harder is putting these into practice within quick product design cycles. The important thing to remember is that whatever role you work in within technology, you have the responsibility to educate yourself on the harms as well as the benefits of what you are working on. And you have a voice that can be used to question objectives and advocate for those affected by your solution who don’t have a voice,” said Aparna.

Human-First Approach

“The adoption of AI ethics and a humanity-first approach leads to Responsible AI. This refers to automated self-learning systems that are built with context in mind, that at a minimum fulfil human rights requirements, and that, where possible, are explicit about improving social development targets (which can be measured through the UN’s Sustainable Development Goals, amongst others). India is aspiring to these standards, as shown by NITI Aayog’s latest AI strategy, ‘Responsible AI for All’,” said Aparna.

Bridging The Gap Between AI Policymakers and AI Developers

When Sundar Pichai appeared before the US Congress in 2018, he was asked why a Google search for the word ‘idiot’ returned images of Donald Trump. Pichai tried to explain how page indexing works and that there is no manual intervention in ranking the results, but the Congresswoman didn’t seem convinced.

The whole hearing was a study in the technological knowledge gap between policymakers and practitioners. Governments across the world, not just in the US, face the same challenge. The gap is a matter of grave concern because of the direct impact AI algorithms have on individuals and societies at large.

On the other hand, AI firms have been accused of playing fast and loose with decision-making algorithms and their social and cultural implications. The ethical record of these firms leaves something to be desired.

The current gap in policymakers’ tech knowledge and technologists’ ethics knowledge needs to be bridged to ensure AI’s sustainable development.

Informed Policymakers

Knowledge building is critical to setting up an ethical framework in the AI domain. Only a well-informed group of policymakers can develop an appropriate policy framework and regulatory oversight. As things stand, politicians and policymakers are not yet there. This has to change: it is a politician’s responsibility to safeguard their constituents’ interests from the threats posed by algorithmic bias.

This does not mean politicians need to become experts in AI. But policymakers should take a proactive interest in better understanding the impact of AI by bringing in ‘public-interest technologists’.

The term ‘public-interest technologist’ is relatively new, but the concept is old. These are professionals who act as the interface between policymakers and technology providers, with an educational background in both the social sciences and computer science.

Professionals working at the intersection of AI and the social sciences are rare. To address the supply side of the issue, governments need to make changes to their education systems.

Though AI and data science courses are plentiful, most of them lack ethical AI modules. To encourage more AI professionals to participate in public policy, governments should encourage or invest in universities to introduce ethics, policy, and social science subjects into AI and data science courses.

Only by familiarising themselves with AI through qualified technologists can policymakers draft sensible regulation that strikes the right balance between developing ethical AI and maximising its potential.

Responsible AI Providers

While policymakers lack technical knowledge, AI developers and technologists lag in their awareness of AI’s ethical implications. Through the indiscriminate use of AI, tech corporations risk entrenching existing biases in society past the point of no return. Hence, producing ethical and trustworthy AI should be a top priority among corporate social responsibilities.
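
One concrete way technologists can surface such biases is a simple fairness audit of a system’s outputs. The sketch below illustrates one standard metric, the demographic parity difference, i.e. the gap in favourable-outcome rates between two groups; the decision data and the 10 percent threshold are hypothetical, chosen only for the example.

# Illustrative fairness audit: demographic parity difference is the gap
# in favourable-outcome rates between two groups. All data is made up.

def positive_rate(decisions):
    """Share of decisions that were favourable (coded as 1) for a group."""
    return sum(decisions) / len(decisions)

# Hypothetical loan-approval decisions (1 = approved, 0 = denied).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75.0% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {gap:.3f}")

if gap > 0.10:  # the threshold is a policy choice, not a technical one
    print("Flag for human review: outcomes differ materially across groups.")

Demographic parity is only one of several fairness definitions, and choosing which definition applies is itself an ethical judgement, which is exactly where an ethicist’s input matters.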

More companies need to start educating their employees on ethics and the implications of AI products for society. The process should start at induction and continue through the employee lifecycle, and the training syllabus should keep pace with the latest advancements in AI.

Such training warrants considerable investment from companies, since understanding the social and cultural context in which AI technologies are deployed takes patience and time. There are several ways to undertake such initiatives. Appointing a Chief AI Ethics Manager can help with overseeing the ethical side of AI and designing curricula for educating and upskilling staff. A top-down approach, from leadership to employees, can speed up the process of instilling ethical values in the firm. Further, many toolkits are available to help companies set up a training process.

Moreover, AI companies should stay abreast of government regulations and work with regulators to ensure compliance. This will help companies understand the government’s point of view, and compliance will keep firms out of the government’s crosshairs.

Collaborations

After the recent firing of AI ethicists at Google and the controversy that followed, Big Tech companies are racing to hire AI ethicists. Even the Biden administration has hired a public-interest technologist, Alondra Nelson, as deputy director of the White House Office of Science and Technology Policy. A network of 36 top higher-education institutions, called the Public Interest Technology University Network, has been formed to train engineers and social scientists in the social impact aspects of their work.

Policymakers, universities, and private firms must work in tandem and keep communication lines open to get all stakeholders on the same page in terms of compliance and accountability. Without constant dialogue, AI governance initiatives will wither on the vine. The US has taken the right step by creating a new position to work with technology developers. Other governments and companies should follow suit.
