One of TFI’s core focus areas is structural risks from advanced AI. Structural risks emerge when the solution to a wicked problem is constructed without consideration of the broader sociotechnical system in which the problem is embedded. Google DeepMind has proposed a sociotechnical approach to AI safety, and while we agree with taking such an approach, we disagree with their recommendation that labs developing advanced AI systems be left to evaluate capability risks from those systems without oversight from regulators or third-party evaluators. In our view, that recommendation would only contribute to the very problem their proposed approach claims to solve.
In coming years, a variety of other systemic issues are also likely to pose challenges to the rigorous and robust evaluation of AI systems operating at the frontier of capabilities. These issues include:
- A limited diversity of evaluators
- Perverse incentives
- Suboptimal allocation of resources
- Ineffective knowledge sharing
- Bureaucratic challenges to standards creation
- Difficulty acting nimbly and quickly enough to keep pace with tech companies
Collectively, we believe these issues constitute a coordination problem. As a solution, we propose An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI.
Details of the proposal are described in our paper on the topic, released today. To our knowledge, TFI is the only organization working on the neglected topic of structural risk evaluations, and we will continue to advocate for additional efforts in this space. Moreover, third-party risk evaluations, along with AI alignment research and compute governance, are among the most critical pieces of the puzzle of safely training and deploying advanced AI. We aim to continue to be a voice in the effort to create a consortium or intergovernmental organization to help:
- Coordinate stakeholders in risk evaluations of advanced AI
- Set standards efficiently and minimize bureaucratic issues in the process
- Accredit evaluators for frontier AI systems as well as less capable systems
Sign up for our newsletter, follow us on social media, or stay tuned to our website for new developments on this front!