Our Work
Completed Work
Forecasting and Foresight
- Bayesian Network Enhancement of Forecasting | We have been developing a new, domain-agnostic Bayesian Network (BN) tool for enhancing forecasting. Our recent RCT study using the United States 2023 NCAA FBS college football season demonstrated that BNs significantly improved participants' sports forecasting accuracy.
Structural Risks
- The Reasoning Under Uncertainty Trap: A Structural AI Risk | This report examines a novel risk associated with current and projected AI tools: Reasoning Under Uncertainty (RUU), a risk found to be multiplicative and particularly difficult to mitigate.
- Perceptions of Societal-scale Risks from Advanced AI and Policy Preferences: Experts and Voters | This study explores the perceptions of AI experts and US registered voters on the likelihood and impact of 18 specific AI risks, alongside their policy preferences for managing these risks.
AI Governance and Evaluations
- An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI | This research is a systematic evaluation of the AI governance and risk evaluation ecosystem and a call for unified action, proposing an international consortium comprising both AI developers and third-party AI risk evaluators.
- Envisioning a Healthy and Thriving Ecosystem for Assessment of Foundation Models | This research is part of a large collaborative effort (academic and government partners) to highlight the path forward toward a comprehensive and multidisciplinary approach to the evaluation of frontier AI systems.
- Public Comments and Request for Input on Behalf of Government Standards-Setting Organizations and Policy | TFI has submitted three comments: in support of NIST's AISC initiatives and the Consortium, and in response to the National Telecommunications and Information Administration's (NTIA) call for public comment on open-weight foundation model policy, to be released to the White House. The NTIA public comment can be found here: Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights. The Evaluation Ecosystem response to NIST can be found here: Envisioning a Thriving Ecosystem for Testing and Evaluating Advanced AI.
Ongoing Work
Forecasting and Foresight
- Cutting through the Complexity of Multi-agent AI Scenarios: A Computational Tool | This work seeks to apply a novel complex systems approach to the analysis of AI structural risks using agent-based modeling (ABM), which is especially well suited to modeling highly complex, multidimensional, uncertain issues and has yet to be used in the study of AI risk and governance.
- Scenario Discovery with LLM-Assisted Bayesian Network Generation | This work combines previously unexplored methodologies, using algorithmic scenario discovery and LLMs to generate alternative futures.
- Iterative Scenario Network Mapping for Forecast Question Generation | Generating the right forecasting questions is as important as the forecasts themselves. This work utilizes a multi-stage process to elicit relevant forecasting questions for further use.
- Bayesian Network Enhancement of Forecasting | Building on the success of our BN tool and training package, we plan to expand and further develop the tool.
Structural Risks
- Decoding the Structural Risks and Sociotechnical Dynamics of Artificial Intelligence | This report compiles key drivers and indicators of AI structural risks for use in predictive modeling and monitoring while describing their impact on international stability and human security.
AI Governance and Evaluations
- Envisioning a Thriving Ecosystem for Testing and Evaluating Advanced AI | This research is the continuation of a collaborative effort on a comprehensive and multidisciplinary approach to the evaluation of frontier AI systems.
- Crisis Management Planning Protocol for AI Risk Preparedness | TFI's crisis management and emergency preparedness initiative seeks to put our foresight, risk analysis, and forecasting work into practice to help leaders prepare for an advanced AI crisis.
Blog & Announcements
- Toby Pilditch | AI Safety Grant Awarded to TFI Research Project