Issues at the Intersection of Engineering and Human Rights: Proceedings of a Symposium (2025)

Suggested Citation: "9 Seeking Justice and Remediating Human Rights Harms." National Academies of Sciences, Engineering, and Medicine. 2025. Issues at the Intersection of Engineering and Human Rights: Proceedings of a Symposium. Washington, DC: The National Academies Press. doi: 10.17226/29141.

9
Seeking Justice and Remediating Human Rights Harms

This symposium session addressed accountability for human rights harms caused by engineering decisions, the mitigation and remedy of past harms, the assessment of engineering’s effects on society, and ways that engineers can help seek justice. Betsy Popken, Executive Director of the Human Rights Center at the University of California, Berkeley, School of Law, discussed the role of human rights assessments in identifying and remediating harms. Jay Aronson, Professor of Science, Technology, and Society and Founder and Director of the Center for Human Rights Science at Carnegie Mellon University, commented on the role of engineering innovation in justice seeking, the application of human rights law to emerging technologies, and options for accountability within the international human rights law system when technology causes human rights harms. Julie Owono, Executive Director of Internet Sans Frontières and an Inaugural Member of the Meta Oversight Board, discussed how the board functioned as a check to ensure accountability and justice for harms. Jose Torero, Professor and Head of the Civil, Environmental, and Geomatic Engineering Department at University College London, discussed how he has used his engineering expertise to help those who have experienced human rights violations resulting from engineering decisions. Popken moderated a discussion following the panelists’ presentations.

THE ROLE OF HUMAN RIGHTS ASSESSMENTS IN IDENTIFYING AND REMEDIATING HUMAN RIGHTS HARMS

Betsy Popken began her remarks by explaining that, according to the United Nations (UN) Guiding Principles on Business and Human Rights, adopted in 2011, businesses should have in place human rights policies, a due diligence process, and a process to enable the remediation of any adverse human rights effects that they cause or to which they contribute. She and her colleagues are conducting a human rights assessment of the use of generative artificial intelligence (AI) large language models (LLMs) in the legal profession, education, and journalism. Her team expects to release the report in 2025.

As the first step in this assessment, Popken and colleagues reviewed the literature to determine the effects of LLM use on human rights globally and to identify the rights put at risk when legal professionals, educators, and journalists use LLMs. The team interviewed global stakeholders, including experts in LLMs, natural language processing, and machine learning, as well as representatives from the companies behind four widely used LLMs: ChatGPT, Gemini, Llama, and Claude.

The team also interviewed professionals in law, education, and journalism about their use of LLMs in their work. They found that use varies significantly depending on where practitioners live and work. For example, in North America and Europe, practitioners tend to be conservative and risk averse in their approach to LLM use, said Popken. In other parts of the world, practitioners are more willing to employ LLMs, which increases the potential for benefit but also for risk.

Popken explained that the final step of the assessment was the development of recommendations for policymakers. Because the team recognized that multiple stakeholders need to be at the table to remediate some of the identified human rights risks, they engaged a law firm to review regulations and guidance on AI worldwide, including those that specifically referenced generative AI, and drew from that review to identify the most effective regulations. They also produced recommendations for industry groups representing the legal profession, journalism, and education and for the technology companies developing LLMs to help them better avoid risks when developing and releasing new products.

The team identified one cross-cutting risk related to the right to privacy, as well as risks specific to each of the three professions. For journalism, they found that reporters in certain countries were using ChatGPT to write stories about the weather, market conditions, and sporting events. Because LLMs do not always provide correct information, reliance on them could violate the right to access information and create significant downstream risks. For example, an LLM-generated hurricane forecast that misidentified where the storm would make landfall could lead to loss of life and injury.

Popken said that for the legal profession, the team found that judges in certain regions were using ChatGPT to inform their judicial rulings, which can violate the rights to due process and a fair trial. In education, providing equal access to ChatGPT can help level the playing field for students, but the use of ChatGPT raises some rights-related concerns. The team learned of cases where teachers were using ChatGPT to assess students’ work. Many educators worried that ChatGPT would affect students’ ability to analyze information and make decisions on their own—posing a risk to freedom of thought, she said.

Popken concluded her remarks with a list of recommendations provided to policymakers, industry groups, and companies developing LLMs:

  • Provide filters, such as displaying LLM-generated text with a warning or not at all (see the sketch after this list).
  • Consult experts and users in different parts of the world.
  • Assess potential risks of use in high-risk professional contexts.
  • Provide opportunities for users to give feedback.
  • Educate users to not take LLM output at face value.
  • Focus on harm to humans when assessing risks and applying AI principles, placing emphasis on the most at-risk communities.
  • Conduct risk assessments across the AI life cycle, from datasets to model outputs.
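
To make the filtering recommendation concrete, the following minimal Python sketch labels LLM-generated text and withholds it entirely in high-risk contexts. It is illustrative only and not drawn from the assessment itself; the context categories, warning text, and function name are hypothetical stand-ins for whatever LLM integration and policy a deployer actually uses.

    # Illustrative sketch only: labels LLM output and withholds it in
    # high-risk contexts, per the "provide filters" recommendation.
    # The context categories and warning text are hypothetical assumptions.

    HIGH_RISK_CONTEXTS = {"judicial_ruling", "medical_advice", "weather_forecast"}

    WARNING = ("[AI-GENERATED DRAFT] This text was produced by a language "
               "model and may contain errors. Verify before relying on it.")

    def filter_llm_output(text: str, context: str) -> str:
        """Return labeled LLM text, or withhold it entirely in contexts
        where an unverified model error could cause serious harm."""
        if context in HIGH_RISK_CONTEXTS:
            # "...or not at all": block display pending human review.
            return "[Withheld: this context requires human review before release.]"
        return f"{WARNING}\n{text}"

    # Example: a newsroom tool would label routine copy but block a
    # model-drafted hurricane forecast from publishing unreviewed.
    print(filter_llm_output("Sunny with light winds tomorrow.", "weather_forecast"))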

She noted that assessments of LLM use and risk are also warranted for the medicine and engineering professions.

TECHNOLOGY, HUMAN RIGHTS, AND HARM REDUCTION

Jay Aronson explained that the Center for Human Rights Science creates interdisciplinary collaborations to promote the development and application of scientific methods to collect, analyze, and communicate human rights information. It also provides technical assistance to individuals and organizations devoted to advancing human rights through consultation, educational programs, and original research. Some of his work has focused on DNA identification in post-conviction contexts, giving people who may have been wrongfully convicted of a crime access to DNA-based information. That project evolved into using DNA for identification purposes after conflicts and disasters.

While involved in that project, Aronson fielded a question from a colleague about how organizations determine the number of people killed in a conflict. That question led him to work in the early 2010s on a project counting civilian casualties in chaotic contexts where normal vital records systems break down. As part of that work, he interviewed people who had videos from conflicts that might help provide those counts but who had no way to process all the information in those videos. Colleagues at Carnegie Mellon had been funded by the intelligence community to extract relevant information from large collections of video, and Aronson was able to share much of their work with the human rights community. Today, many of the tools developed by the intelligence and defense communities for this purpose are available to the public.

However, the results of video analysis need to be translated into information that will be useful in efforts to achieve justice. Working with a group of Ukrainian human rights lawyers and architects, Aronson developed a set of publicly available systems and packages for presenting data captured from video and other sources to reconstruct events in three dimensions. Aronson noted that he has analyzed video evidence of human rights violations in Syria, South Africa, Ukraine, and Mali, as well as of police killings in the United States, which led to his work during the past seven years on deaths of people in U.S. law enforcement custody. What unites these projects, he said, is the use of science and technology to advance the cause of human rights and to seek justice and redress in various forms.

Aronson listed some of the core values that animate his work:

  • Collaboration with people who are directly affected by human rights violations, or who have direct connections to communities experiencing them, to identify what these communities need.
  • Solidarity with the practitioners he works with to make their work more effective and efficient, rather than bypassing them.
  • Humility, recognizing the expertise arising from lived experience.
  • Skepticism that technology can solve complex social and political problems on its own, for if it could, such problems would not exist.
  • Developing trust by taking time to understand the needs of his partners.
  • Taking deliberate and cautious action, given that real, lasting solutions take time to develop, and recognizing that the “fail fast and break things” approach can cause great harm in the human rights and humanitarian domains.

Aronson shared that technology is a double-edged sword in the human rights context. “The exact same technologies and systems that can promote and protect human rights can also be used to violate human rights,” he said. “It’s impossible in this context to have benefits without harms, positives without negatives.” In fact, Aronson asserted, the very features that make technologies useful in human rights work are the features that make them dangerous, which highlights the need to be active and intentional in making decisions that enable technologies to promote human rights.


Aronson indicated that human rights frameworks and law provide important sources of norms and values, as previous speakers had noted. The most important norms to practice, he added, are to focus on the most vulnerable and least powerful, engage in conversation with them, learn from them, and avoid foisting solutions on them, seeking instead to involve them meaningfully in developing and designing solutions. It is important, Aronson said, to promote access to mechanisms for accountability and justice, acknowledge that power and privilege are built into technological systems, and recognize that technology affects all aspects of life and rights. “If we want technology to have a positive impact, it does not just happen magically,” he said. “We actually have to engineer that into the systems.”

AN EXPERIMENT AT THE INTERSECTION OF ENGINEERING AND HUMAN RIGHTS

Julie Owono noted that the Meta Oversight Board is a body of experts from around the world launched in May 2020 to serve as the external and independent “Supreme Court of Facebook.” Its creation represents an effort to protect human rights in the context of technological development, and it helps to decide what expression should be allowed on the corporation’s platforms. It makes binding, principled decisions on content that has either been taken down by Meta or left on Meta’s platforms, said Owono.

Owono explained that the Oversight Board decided to focus on eight strategic priorities based on their potential harm to human rights: elections and civic space, crisis and conflict situations, gender, protecting children’s rights, hate speech against marginalized groups, government use of Meta’s platforms, treating users fairly, and automated enforcement of policies and content curation.

The Oversight Board rules on appeals it receives from users to either remove or restore content, whether their own or someone else’s. It also receives cases from Meta when the company faces difficulty enforcing one of its rules in a manner that respects human rights. Owono noted that the decisions made by the Oversight Board are binding, even when contentious. For example, one appeal accused the prime minister of Cambodia of using the platform to incite violence against Cambodians supporting the opposition party during an election. The Oversight Board mandated that the corporation suspend his account, which has resulted in Owono and other Oversight Board members being banned from entering or transiting through Cambodia.

The Oversight Board can also make nonbinding recommendations to Meta on policy, transparency, and enforcement. Although these recommendations are nonbinding, Meta must respond to them publicly and provide its reasoning if it decides not to implement one. This requirement makes the recommendations a powerful tool for transparency, said Owono.

As an example, Owono discussed the Oversight Board’s recommendation to improve automated detection of images with text overlay so that posts showing a female breast to raise awareness of breast cancer are not wrongly flagged as violating the policy prohibiting adult nudity on Meta’s platforms. The Oversight Board recommended that Meta enhance human review of such content to avoid its rejection by an automated moderation agent. Resulting improvements to Meta’s text overlay detection technology led to 2,500 pieces of content being sent for human review over a 30-day period in February and March 2023. The recommendation also led Meta to develop, test, and deploy a new health content classifier for identifying image-based content about breast cancer.
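
The routing logic implied by this recommendation can be sketched in a few lines of Python. This is not Meta’s implementation; the classifier scores, thresholds, and function name below are invented for illustration of the general approach of escalating likely awareness content to human review rather than removing it automatically.

    # Illustrative sketch of the routing logic the recommendation implies:
    # likely health-awareness content that trips the adult-nudity classifier
    # is sent to human review instead of being removed automatically.
    # Classifier names, scores, and thresholds are hypothetical, not Meta's.

    def route_image_post(nudity_score: float, health_context_score: float,
                         has_text_overlay: bool) -> str:
        """Decide the moderation action for an image post."""
        if nudity_score < 0.5:
            return "leave_up"                  # no policy concern detected
        if health_context_score > 0.7 or has_text_overlay:
            # Plausible awareness content (e.g., breast cancer education):
            # escalate to a human moderator rather than auto-remove.
            return "human_review"
        return "auto_remove"                   # clear policy violation

    # A breast-cancer awareness graphic with overlaid text is routed to
    # human review rather than removed by the automated system.
    print(route_image_post(nudity_score=0.8, health_context_score=0.9,
                           has_text_overlay=True))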


So far, said Owono, the Oversight Board has received more than 2 million cases, a volume that the 22 members could never adjudicate. Instead, it has established a set of criteria it uses to select appeals to investigate based on difficulty and significance. Factors for selection include, for example, whether there is strong disagreement on whether a piece of content should remain posted, whether Meta’s Community Standards regarding the type of content at issue are consistent with international human rights principles, and whether a decision to leave up or take down the content would have severe consequences for users. The normal time to issue a decision is 90 days, but the Oversight Board can work urgently, as it did for a case regarding posts about the hostage situations in the early days of the war in Gaza.
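
The Oversight Board’s case selection is a deliberative, human process, but its stated criteria can be restated as a simple triage predicate. The Python sketch below is purely illustrative; the field names and logic are assumptions, not the board’s actual procedure.

    # Illustrative restatement of the stated selection criteria as a triage
    # predicate. The board's actual selection is deliberative and human;
    # the fields and logic below are invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Appeal:
        strong_disagreement: bool       # contested whether content should stay up
        standards_rights_tension: bool  # Community Standards vs. human rights law
        severe_user_consequences: bool  # outcome would seriously affect users

    def shortlist(appeal: Appeal) -> bool:
        """Flag an appeal as a candidate for full board review."""
        return (appeal.strong_disagreement
                or appeal.standards_rights_tension
                or appeal.severe_user_consequences)

    # With millions of appeals and 22 members, only cases meeting at least
    # one significance criterion move forward for investigation.
    print(shortlist(Appeal(True, False, True)))   # True -> candidate case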

Owono said that part of the decision-making process includes the ability of individuals and organizations to submit public comments to the Oversight Board. This input, she noted, is critical for achieving the goal of improving how Meta treats people and communities around the world. Owono said that on numerous occasions, public comments have shaped the Oversight Board’s decisions and recommendations to Meta.

SCIENCE SUPPORTING SAFETY AS A HUMAN RIGHT

Jose Torero noted that, according to Article 3 of the 1948 Universal Declaration of Human Rights, everyone has the right to life, liberty, and security of person. This and other provisions of international human rights law implicitly cover the right of individuals to safer products, safer working and living conditions, and a safer environment in which to live. The question, said Torero, concerns the role of science in supporting safety as a human right.

Torero said that safety is often seen as a commonsense aspect of everyday life; most often, however, it is a complex technical problem that requires high levels of competency from everyone involved, including individuals with the specific technical knowledge needed to deliver safety. Torero, for example, is a fire safety engineer, so his work focuses on how to deliver a fire-safe environment and fire-safe products and how to address issues such as wildfires.

Often, said Torero, experts must decide what their responsibility is when they witness the misuse of science and technical knowledge to the detriment of people’s safety, whether that misuse is intentional or not. In many cases, the affected populations do not have the means to defend themselves or to demonstrate that technical knowledge is being used improperly.

As an example, Torero discussed a prison fire in Chile that led to the deaths of many people and raised questions of responsibility. The government accused the prison custodians of not intervening quickly enough to save individuals locked in their cells. “The reality,” Torero said, “was the fact that it was mismanagement of the prison that created a condition that completely disabled the custodians from being able … to rescue the prisoners.” Indeed, the prison had been allowed to fill with materials that made managing the fire impossible. Similarly, obsolete regulatory practices in Paraguay enabled the use of unsafe insulation in a supermarket, leading to a fire that killed a number of people. Here, the store manager was accused of not enabling people to exit, but because of the way the store was designed, there was nothing he could have done to get people out quickly enough to change the outcome.

In another case, said Torero, an aid organization provided blankets to hospitals without recognizing that the blankets did not meet flammability standards. When patients used these blankets along with heating elements to keep themselves warm, the blankets caught fire, causing severe injuries and even deaths. Torero noted that, in many cases, aid organizations fail to perform sufficient due diligence on the aid they provide to individuals who are not in a position to know whether what they receive is safe.

Other examples provided by Torero included fires caused by irresponsible corporate practices and by government regulations that led to social inequity, as well as a high-profile case in which a government used inappropriate technical knowledge to absolve itself of blame in a fire involving the death of dozens of children. All of these examples, Torero observed, illustrate the relevance of scientific and technical knowledge in supporting human rights.

DISCUSSION

Davis Chacón-Hurtado commented that the beauty of applying a human rights framework to engineering is that it requires a focus on the most vulnerable groups. However, this focus comes with a great responsibility to ensure that the process of engagement does not pose additional risks to already vulnerable groups.

A symposium participant wondered whether AI should simply not be used because it is impossible to ensure that it does not cause harm. Popken expressed her view that the cat is already out of the bag, and there is no way to put it back in. She added that some people are focused on assessing the big risks of AI overtaking humans in the future, but she believes that the focus should be on the risks in the here and now while also recognizing the positive aspects of AI. She noted, in this context, that there is a human right to scientific advancement.

Another symposium participant asked how, if international law is binding on countries rather than companies, it is possible to hold private organizations accountable when their actions are not aligned with international human rights norms and standards. Popken said that the UN Guiding Principles on Business and Human Rights are voluntary and not enforceable. Most large companies voluntarily accept them, with the pressure to do so coming from public opinion and shareholders rather than legal action.
