During the final workshop session, members of the workshop planning committee identified their key takeaways from the 2 days of the workshop. These appear below. Also included in this chapter are suggestions from workshop participants of potential metrics and data that could be used to assess research security efforts.
Murdick identified four key takeaways. First, he said, “people are the most important part of anything we’re doing here—they’re the knowledge transfer mechanism.” Second, “we need a balanced set of measures that look for not only the success of any process that was going forward, but also the cost of it.” Third, we “need a robust national security emerging technology analyst capability that really does become a reservoir of knowledge that allows prioritization information” and “core infrastructure for some of the monitoring.” Fourth, it is necessary to adopt a cost-sustainable model to pay for research security.
Caputo said there is an ongoing need for active data collection; passive data that measure whether research security initiatives are successful do not exist. Broader data-sharing is needed to facilitate holistic measurement of the effects of research security efforts. NSF has useful data on proposals, but DOD does not have access to that material. DOD has other metrics that can be useful, however.
The NSF SECURE Center will have data to support evaluation, but decisions will need to be made about data-sharing, Caputo continued. Research security is complicated to measure because it is related to so many other things. “The goal is to measure effectiveness of the programs, the know-how, the training, the awareness, the trust frameworks,” she said. Challenges in measuring the effectiveness of research security efforts mirror those faced in improving cybersecurity, she added.
Humphrey said that, in considering what data to collect and what to measure, there is also a need to consider the issue of trust—specifically, how to build trust into these metrics. She suggested that public trust in science is currently at an all-time low and that the public needs to trust the research in order to trust their investment in that research. “In thinking about research security and what we are measuring, we also have to tie it back to how can we also convey to the public that we are taking this issue seriously, that it matters to us, and that the data that we’re putting out is quality data.” Personal relationships are critical. “When we go out to talk to faculty about research security, we have to make it personal,” she said, by, for example, demonstrating how research security is important to their field or discipline.
Jones said, “We’re in a race. So, when you’re in a race, that is directive to what we should measure.” But it is also “a series of races,” he said. “We’re racing on AI. We’re racing on quantum computing. . . . So, the question is, are we ahead of our competitors and how fast are we moving? How fast are we moving and how fast are they moving?”
Jones said measures on the output side (e.g., numbers of papers and patents) are needed. Another measure would relate to technological capability: what, for example, is the state of the art for a particular technology? Emerging critical technologies are another area to consider; they could be tracked using metrics such as numbers of patents and papers, along with indicators of technological capability and industrial leadership.
On the input side, Jones suggested measures related to effort.
“People are a key input to going faster in our race, but they’re also the place where we have leakage,” Jones said. He suggested that measures of people flow offer an opportunity to assess how much talent is being lost.
Jones suggested that “the key thing is to do research security and avoid unintended consequences because remember, the race is going fast. . . . What we’re trying to win is the race. How fast are we going?” There is a need for the best talent in the world to come to the United States, and then the country needs to hold on to that talent. Any policy that dissuades people from coming to the U.S. is a bad policy, he said, unless it has some other strong features.
Regarding compliance efforts, Jones advocated for choosing “what areas we’re going to do this on and then really go for openness otherwise.” Closed systems “will slow our adversaries, but [also] . . . slow ourselves.”
Jones suggested that, as regulators move forward, agencies need to think about design partnerships. Policies need to be implemented in a nimble and cooperative way with an awareness of unintended consequences. “If I look behind in the race, you might see some drafting. I see a lot of potential. We’re going to win that race because we pull in the best and the brightest in the world.” Greater efficiency is needed “not because we’re going to have more savings dollars, . . . not because we have more people,” but because the U.S. research community has better people and does it better.
Kohler agreed that people are the most vital component. The research community needs, he said, “to find ways to get the smartest people to come to the United States and stay in the United States and contribute to society, and our innovation here. That’s just the bottom line.” Universities and the research environment must, however, adapt to the way the world really is today as opposed to where the world was 20 years ago. Policies by China and other countries have created friction, competition with the United States, and an uneven playing field. The U.S. government has reacted to that. Those who are “key to innovation are between a rock and a hard place and they’re getting ground up by these two superpowers that can’t figure out how to get along and how to make this work.”
It is up to us to educate others on how to better manage the situation so that universities remain innovative powerhouses, Kohler said.
McQuade said that the United States ran the world’s best foreign-talent acquisition program for 75 years. The illicit extraction of technology is a real issue, he said. But the bigger issue is the creation of a talent base in China. Thus, he continued, it should come as no surprise that China is following the U.S. model of trying to get the best people to come and work in China. “We will pay a significant price for no longer being the world’s best foreign-talent acquisition program.”
McQuade said that, while there is a need for what controlled unclassified information (CUI) designations do, CUI is unregulated and lacking in specifications and, as a result, is ultimately arbitrary and ineffective.
From a measurement point of view, McQuade called for a focus on measuring the United States’ competitive position in the technologies that matter. Is the United States competitive in the places it needs to be competitive? While the country can also measure people flow, collaborations, near misses, and events, “if we do not have an assessment of where we are competitively and have a set of goals to change [our position], none of those other measurements really matter to us.”
According to McQuade, the world is “vastly different than 1945. We have a peer competitor. We’re not in front of everybody else. . . . We need to start to say where we want to be competitive. We, as a society, will determine how much money we want to spend on research to be competitive.”
Fox said that, while effective ways to collaborate need to be developed so that the United States wins the race, the current national goal first needs to be understood. Is the goal to enable good collaboration, or to start to pull away from collaboration? Measures of effectiveness are difficult to establish, she said, without really understanding that alignment question.
Throughout the course of the workshop, event participants suggested metrics and data that might be used to assess research security efforts in higher education. Box 7-1 provides a summary of their suggestions of potential metrics and Box 7-2 provides a summary of their suggestions of potential data.
Core Principles for Metrics and Evaluation
A coherent, effective research security evaluation strategy should
Categories of Metrics and Indicators
Tools, Frameworks, and Evaluation Approaches
Challenges and Gaps
People and Talent Flows
Tracking human capital is essential, as people are vectors of both innovation and potential risk.
Research Activity and Behavior
Understanding how research behavior shifts under security policy pressure.
Institutional Practices and Culture
Evaluating infrastructure, awareness, and adaptability within institutions.
Outcomes and Impact on Innovation
Measuring security effectiveness and unintended consequences.
Strategic Alignment and Policy Evaluation
Assessing policy coherence, effectiveness, and alignment with national goals.