Issues at the Intersection of Engineering and Human Rights: Proceedings of a Symposium (2025)


11
How to Conduct a Human Rights Assessment of Artificial Intelligence

The symposium’s final session began with a tutorial to prepare attendees to engage in an exercise illustrating how to conduct a human rights assessment of artificial intelligence (AI). This tutorial included an overview of human rights and their connection to AI, an overview of human rights assessments, a comparison of human rights assessments to other audit frameworks, and a walkthrough of human rights assessment methodology, using a practical example.

Attendees then assembled into groups and conducted a practice assessment of the human rights risks posed by a journalist’s use of large language models (LLMs) to report on COVID-19. Betsy Popken, Executive Director of the Human Rights Center at the University of California, Berkeley, School of Law, and Lindsey Andersen, Associate Director for Human Rights at BSR (Business for Social Responsibility), led the session.

A BRIEF REVIEW OF HUMAN RIGHTS

Popken reiterated that human rights are basic rights inherent to all humans, regardless of nationality, place of residence, sex, national or ethnic origin, color, religion, language, or any other status. They are contained in many United Nations (UN) international human rights instruments, including the Universal Declaration of Human Rights; International Covenant on Civil and Political Rights; International Covenant on Economic, Social and Cultural Rights; and thematic conventions dealing with labor rights, child rights, and the rights of persons with disabilities. Human rights have been incorporated into international treaties, regional human rights instruments, and national constitutions and legal codes. The UN has a system of bodies and experts that oversees its human rights conventions, further develops them, and promulgates guidance about what they mean in practice.

Popken observed that international human rights instruments were written for governments, not companies, so in 2011, the UN Guiding Principles on Business and Human Rights (UNGPs) were released. The UNGPs state that companies have a responsibility to respect human rights and outline their related obligations. This responsibility requires companies to perform human rights due diligence, an ongoing process to assess actual and potential human rights effects, integrate and act on findings, track the effectiveness of responses, and communicate how effects are identified and addressed. The UNGPs are also increasingly being incorporated into mandatory human rights due diligence laws and technology-focused laws, such as the European Union (EU) Digital Services Act37 and the EU AI Act.38 These more stringent regulations affect the overall due diligence process across companies.

Popken noted some of the commonalities between the Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics39 and human rights. For example, the IEEE Code of Ethics states, “(t)reat all persons fairly and with respect, and (d)o not engage in discrimination based on characteristics such as race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression,” while under human rights law everyone has a right to equality and freedom from discrimination. The IEEE Code of Ethics also states that there is a responsibility to improve the understanding of “the capabilities and societal implications of conventional and emerging technologies, including intelligent systems,” while human rights instruments recognize a right to access information and a right to education.

AN OVERVIEW OF HUMAN RIGHTS ASSESSMENTS

Andersen then described the methodology of a human rights assessment so that participants could practice it during the session and apply it in their work. Human rights assessments, she said, identify and assess actual and potential human rights impacts and risks, and they use a methodology defined in the UNGPs to evaluate the severity of identified impacts and the appropriate action to take to avoid, prevent, or mitigate them.

The first step, said Andersen, is to use a list such as that developed by BSR (Box 11-1) to identify human rights risks and impacts and then narrow them down to those that are salient via background research; interviews with product teams, external experts, and affected stakeholders; and one’s own expertise. Step two entails assessing the severity of the identified risks and impacts based on their scope (the number of people who could be affected), their scale (the seriousness of the impacts), and the extent to which the harm could be remediated. The risks and impacts are then prioritized based on severity and the likelihood of the risk or impact occurring. The final step is to identify appropriate actions to avoid, prevent, or mitigate the risk or impact. These actions are classified based on attribution, or how closely the entity is connected to the risk or impact, and the leverage the entity has to influence or address the risk or impact. Popken added that a human rights assessment judges the potential or actual risks, while recommendations address how to prevent them from happening in the future. Human rights assessments can also be conducted before products are deployed.
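This workflow can be pictured as a simple risk register. The sketch below is a hypothetical illustration, not part of BSR’s published methodology: the numeric levels, the additive severity rule, and the sorting order are assumptions introduced only to show how the identify, assess, and prioritize steps fit together.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Level(IntEnum):
    # Assumed ordinal scale; the UNGPs describe severity qualitatively,
    # so these numeric values are an illustrative convention only.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    right: str                  # e.g., "Right to privacy" (from the long list in Box 11-1)
    description: str            # how the product could affect that right
    scope: Level                # how many people could be affected
    scale: Level                # how serious the impact could be
    remediability: Level        # HIGH = hardest to remediate
    likelihood: Level           # chance the impact occurs
    attribution: str = "contributes"   # "causes", "contributes", or "directly linked"
    actions: list[str] = field(default_factory=list)   # mitigations identified in step three

    @property
    def severity(self) -> int:
        # Step two: severity combines scope, scale, and remediability (assumed additive here).
        return int(self.scope) + int(self.scale) + int(self.remediability)


def prioritize(register: list[RiskEntry]) -> list[RiskEntry]:
    # Step three input: order risks by severity first, then by likelihood.
    return sorted(register, key=lambda r: (r.severity, r.likelihood), reverse=True)


if __name__ == "__main__":
    register = [
        RiskEntry("Right to health", "LLM summaries introduce medical misinformation",
                  scope=Level.HIGH, scale=Level.HIGH, remediability=Level.MEDIUM,
                  likelihood=Level.MEDIUM,
                  actions=["Restrict training data to vetted medical sources",
                           "Require journalist review before publication"]),
        RiskEntry("Right to privacy", "Prompts leak source identities to the vendor",
                  scope=Level.LOW, scale=Level.HIGH, remediability=Level.HIGH,
                  likelihood=Level.LOW,
                  actions=["Contractual limits on prompt retention"]),
    ]
    for r in prioritize(register):
        print(f"{r.right}: severity={r.severity}, likelihood={r.likelihood.name}")
```

In the symposium exercise described later, participants recorded the same judgments in a shared spreadsheet rather than in code.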

___________________

37 https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en

38 https://artificialintelligenceact.eu/the-act/

39 https://www.ieee.org/about/corporate/governance/p7-8.html


Box 11-1
“Long List of Human Rights”

  • Right to equality and non-discrimination
  • Right to life, liberty, and personal security
  • Freedom from slavery
  • Freedom from torture and degrading treatment
  • Due process and fair trial rights
  • Freedom from arbitrary arrest and exile
  • Right to privacy
  • Freedom of movement
  • Right to asylum
  • Right to a nationality and the freedom to change nationality
  • Right to marriage and family
  • Right to own property
  • Freedom of thought
  • Freedom of religion and belief
  • Right to remedy
  • Freedom of opinion, expression, and access to information
  • Right of peaceful assembly and association
  • Right to political participation
  • Right to social security
  • Labor rights (e.g., safe working conditions, adequate remuneration, right to join unions)
  • Right to rest and leisure
  • Right to adequate living standards
  • Right to health
  • Right to education
  • Right to participate in the cultural life of the community
  • Right to benefit from scientific advancement
  • Right to internet access
  • Right to a healthy environment
  • Disability rights (e.g., right to accessibility)
  • Child rights
  • Indigenous peoples’ rights

Source: BSR (contained in session briefing materials, available at https://www.nationalacademies.org/event/43591_11-2024_issues-at-the-intersection-of-engineering-and-human-rights-a-symposium).

This methodology, said Andersen, is flexible and can cover a wide variety of issues. For example, a human rights assessment could be conducted for a specific product, such as an object recognition system for retail stores, or an application domain, such as use of AI in patient care and diagnosis. An assessment could address a specific entity, such as a telecommunications company in a particular country, or an aspect of a product’s governance, such as an AI model release decision. It could also be performed as a standalone exercise or integrated into an existing risk and impact assessment process.

Popken then offered her views on other assessment and audit frameworks used by engineers developing products and services in the AI space. Algorithmic audits, for example, identify biases, discriminatory patterns, and potential harms in the design, development, and deployment of AI systems. Red teaming is used to identify vulnerabilities, weaknesses, and potential threats in AI models, algorithms, and systems by simulating adversarial attacks and scenarios. Human-centered design, she noted, prioritizes the needs, behaviors, and preferences of the people who will ultimately use the product, service, or system being designed. There are clear overlaps between these frameworks and the human rights issues discussed during the symposium.

Although these other frameworks are useful, human rights assessments are an important addition to the assessment toolbox. The benefits of human rights assessments include the following:

  • A focus on impacts to people
  • The comprehensiveness of their risk and impact identification
  • An approach to prioritizing risks
  • Emphasis on stakeholder engagement
  • Emphasis on accountability and remedies
  • Adaptability to a variety of contexts
  • An established, internationally accepted methodology
  • Assistance with regulatory compliance

That said, Popken noted that human rights assessments also have limitations. They may not, for example, cover all relevant risks and impacts; they are more qualitative than quantitative; and they are not technical assessments. “We do not recommend that they replace everything you do, but… they are an addition [that focuses] on the impacts to humans,” said Popken.

A CASE STUDY

Andersen provided an example of a real-life human rights assessment that her organization conducted when Google Cloud introduced a celebrity recognition tool in 2019. The tool was designed to search for and identify celebrities in images and video. Google incorporated the findings of the assessment, which was carried out in partnership with Andersen’s organization during the tool’s development, into its programming, ultimately shaping the tool’s final release.

Andersen emphasized that several key considerations were essential for grounding the identified human rights risks. One was the definition of “celebrity”: the lack of consensus in the media and entertainment industry on what makes someone a celebrity complicated decision making about whose faces the tool’s database should include. In addition, the assessment highlighted that celebrities can be vulnerable in certain contexts and might face threats, and that meaningful consent needed to be addressed. The assessment also noted a tension between the privacy rights of individuals in the database and newsworthiness, as well as variation in the human rights effects depending on the type of content the tool would review, such as professional media versus surveillance footage. Finally, the assessment examined Google’s responsibility as the developer of the tool versus the responsibility of the media and entertainment industry customers who would use it.

The potential impacts identified by the assessment, said Andersen, included violating an individual’s rights to privacy, particularly when informed consent was not given, and freedom of expression, because governments could use the tool to censor content. The tool could affect the right to nondiscrimination by reinforcing negative stereotypes in the media and the right to bodily security if it was used to threaten or harass celebrities. The assessment also identified possible risks to child rights if the database included children as celebrities, because children cannot provide consent on their own, as well as access to culture because the tool might privilege certain forms of cultural expression over others.

As a result of the assessment and the recommendations by Andersen’s organization, Google took several actions that significantly lowered the tool’s human rights risk profile when it was released. These actions included defining specific service terms to limit the tool’s use to professionally filmed media and defining a celebrity as someone whose primary profession involves voluntarily being the subject of public media attention. Google implemented a policy that enables celebrities to opt out of having their faces in the database and established a customer “gating” process that serves as a screening and approval process for potential customers.

Andersen’s team recommended the establishment of (1) industry standards or norms related to the use of this kind of tool and (2) a role for public policy and regulation related to the use of facial recognition tools. The team also recommended that media and entertainment companies do their own due diligence, because they would be the ones using the tool; prepare for government demands related to the database; and provide a grievance mechanism in the case of harm.

PRACTICE SESSION

Popken presented the following prompt for the practice session40:

An international media company is looking to cut costs and increase efficiency by procuring a custom LLM solution from a vendor to streamline reporting. They want their journalists to use the LLM for research, analyzing and summarizing information, and writing and editing drafts of articles. However, they know there may be some human rights risks, so they have hired you to conduct a human rights assessment. [emphasis in original]

The use case was a journalist using an LLM for the first time to report on COVID-19.

Popken tasked the participants with identifying some of the human rights risks associated with developing and using an LLM for this use case and with offering suggestions for mitigating those risks. She asked the participants to address the engineers developing the LLM, the media organization procuring it and directing its employees to use it, the specific journalist using it to report on COVID-19, and the broader ecosystem, such as public policy or industry collaboration.

Andersen then provided these instructions:

___________________

40 Materials for the practice session can be accessed at https://www.nationalacademies.org/documents/embed/link/LF2255DA3DD1C41C0A42D3BEF0989ACAECE3053A6A9B/file/D534EDC565CA2A81BFD67FED61C4D9FD9747E1521FF7?noSaveAs=1

  1. Assemble into breakout groups of ~5 people. Identify a group scribe and someone who will present your results.
  2. Open the practice session materials …. 1) a slide deck with instructions and materials to help you, and 2) an assessment spreadsheet. Please create a new copy of the spreadsheet for your group.
  3. Step 1: Use the “Long List of Human Rights” to brainstorm potential human rights risks associated with the prompt. Write down at least five risks in the assessment table (spreadsheet) with the corresponding human right. Try to be specific!
  4. Step 2: Use the severity criteria to assess scope, scale, and remediability for each of your risks. The assessment table will spit out an overall severity score you can use to compare the risks you identified and gauge whether your assessment feels right.
  5. Step 3: Write down recommendations to address the risks you identified in the corresponding spot on the spreadsheet.

She also provided a table with severity assessment criteria (Table 11-1).

TABLE 11-1 Severity Assessment Criteria

Scope: How many people are (or could be) affected by the adverse impact?
  • Small: Minority range of the relevant population impacted.
  • Medium: Over half of the relevant population impacted.
  • Large: Significant or all of the relevant population impacted.

Scale: How serious are the impacts (or could they be) for affected individuals?
  • Less Serious: Associated with indirect and/or minimal to moderate adverse impacts on physical, mental, civic, or material well-being.
  • Somewhat Serious: Associated with direct and/or serious adverse impacts on physical, mental, civic, or material well-being.
  • Very Serious: Associated with lasting adverse impacts on physical, mental, civic, or material well-being.

Remediability: Can a remedy restore affected individuals to the same or equivalent position before the adverse impact?
  • Possibly Remediable: There is (a possible …) remedy that would return those affected to the same or equivalent position before the adverse impact occurred.
  • Rarely Remediable: Remedy can rarely return those affected to the same or equivalent position before the adverse impact occurred.
  • Not Remediable: Remedy will not return those affected to the same or equivalent position before the adverse impact occurred.

SOURCE: Table supplied to symposium participants by Andersen.
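The breakout instructions note that the assessment spreadsheet produces an overall severity score from these criteria. The spreadsheet’s actual formula was not described, so the snippet below is only a hypothetical stand-in: it maps each level in Table 11-1 to a value from 1 to 3 and averages the three, one plausible way such a score could be derived.

```python
# Hypothetical severity scoring based on Table 11-1. The formula used by the
# exercise spreadsheet was not specified, so the 1-3 mapping and the simple
# average below are assumptions for illustration only.
SCOPE = {"Small": 1, "Medium": 2, "Large": 3}
SCALE = {"Less Serious": 1, "Somewhat Serious": 2, "Very Serious": 3}
REMEDIABILITY = {"Possibly Remediable": 1, "Rarely Remediable": 2, "Not Remediable": 3}


def severity_score(scope: str, scale: str, remediability: str) -> float:
    """Combine the three Table 11-1 ratings into a score between 1.0 and 3.0."""
    return (SCOPE[scope] + SCALE[scale] + REMEDIABILITY[remediability]) / 3


# Example mirroring the first group's report-out: a large-scope, very serious,
# rarely remediable risk to the right to health from incorrect COVID-19 information.
print(round(severity_score("Large", "Very Serious", "Rarely Remediable"), 2))  # 2.67
```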

Report Outs

The first group to report identified three affected rights: the rights to freedom of movement, health, and education. The primary risk stemmed from the tool’s potential to produce incorrect information affecting individual health decisions, a risk of relatively large scope based on the international media company’s presumed scale. The group judged this risk to be rarely remediable, with a high severity score. Its advice for addressing the risk to the right to health was to limit the training data to peer-reviewed literature and information produced by medical experts and to flag the data’s origin. A second proposal was to require journalists to validate and review the flagged data and to perform most of the research themselves, using the LLM as a supplementary source. The group also proposed that organizational policy should require audit trails on documents that show the original and LLM portions of the work and that the organization should support its journalists’ access to high-quality data sources.

The next group also identified a risk to the right to health, along with risks to the rights to benefit from scientific advancement, access to information, education, equality and nondiscrimination, freedom of movement, privacy, and rest and leisure, as well as disability rights and labor rights. In the case of misinformation, the group believed that remediation might be possible, but if not, the risk should receive the highest severity score. Regarding the right to equality and nondiscrimination, the risk stemmed from built-in bias that could favor some groups or make others vulnerable to discrimination; in this context, the group pointed to discrimination against individuals of Asian descent during the pandemic. The group suggested that the engineers make it possible to trace the sources of the information used by the LLM, so that the media organization could develop a standard for use and validation.

The third in-person group identified many of the same risks noted above and added the right to property. Concerns about this right stemmed from the risk of losing intellectual property if the source of the information is not attributed, while the right to benefit from scientific advancement would be compromised if a source is scientifically inaccurate. This group proposed that data be validated and that all sources of information be included.

The online group identified risks to the rights to health, life, privacy, and education and to the right to benefit from scientific advancement. The risk to the right to health was judged to have a large scope, to be very serious, and to be possibly or rarely remediable, while the risk to the right to life was very serious and rarely remediable. The group’s suggestions included conducting rigorous fact-checking, especially on public health topics; requiring the verification of any information coming from an LLM against a second, non-AI source; requiring disclosure at the top of publications that an LLM was used to prepare the text; and providing clarity regarding sources. If an LLM were found to be providing false health information, journalists could be tasked with reporting this finding to some authority.

Reflections and Key Takeaways

One participant noted that the exercise was challenging because differing opinions among group members made it difficult to produce a balanced evaluation of the risks. Another participant commented that conversations with the developer of the LLM and with the journalist would have informed their group’s understanding of what the model can and cannot do. This observation underscores the importance of conducting assessments that engage technical experts as well as users and other stakeholders. Popken noted the value of having a diverse, multidisciplinary group conduct the assessment. A participant observed that a participatory approach to decision making is important, although often time consuming. Another expressed appreciation for the idea of analyzing both risk and opportunity to decide whether a particular action should be pursued.

Key takeaways from the session included the following:

  • Everyone’s decisions matter: All the people involved in developing and using a technology tool—including the engineers—make decisions connected to human rights impacts.
  • Human rights assessments can help identify risks to people and society: Human rights assessments can be used to brainstorm potential risks to people involved in developing or deciding on how to use a technology tool.
  • Human rights assessments can help mitigate risks: Assessments enable engineers and other stakeholders to think about their role in addressing risks and highlight what one can do alone versus working with others to mitigate those risks.