Will the project "Identifying good AI governance behaviours" receive any funding from the Clearer Thinking Regranting program run by ClearerThinking.org?
Below, you can find some selected quotes from the public copy of the application. The text beneath each heading was written by the applicant. Alternatively, the entire public portion of their application is linked at the bottom of this description.
Why the applicant thinks we should fund this project
This project will identify the behaviours needed to safely navigate the transition to a world with advanced / transformative AI.
The longtermist AI safety and governance community is starting to recommend behaviours that key actors should adopt. Some of these behaviours are explicit (e.g., Avin 2021) and others are implied by research agendas, programs, or theories of "failure" and "victory" (e.g., Christiano 2019; Dafoe 2020).
But there is no unified list of behaviours. There is also no information about which behaviours would be endorsed, critiqued, or rejected by those key actors. And most mainstream AI governance policy and practice is focused on narrow AI, which will be insufficient to address challenges unique to advanced AI.
This project will identify behaviours relevant to increasing the safety of advanced AI. It will convene researchers, users, and policymakers to evaluate and discuss those behaviours. The behaviours will also be compared with a current Australian voluntary framework for ethical use of narrow AI.
Identifying and discussing these behaviours is crucial for raising awareness, coordinating action, and measuring progress towards (or away from) safe futures with advanced AI systems. Gaps or weaknesses in the framework revealed by the comparison will be used to generate recommendations for improving governance of advanced AI in Australia. The behaviour list and evaluation process will be published to improve work on AI governance internationally.
Expected outcomes
Outside the Australian context, this research process will support the articulation and measurement of actions focused on the safe design, development and use of advanced AI. This would help researchers in AI governance / safety to critique these actions, compare them with national policies, and assess them across different jurisdictions.
In the Australian context (e.g., practitioners, engineers, policymakers), the key outcomes will be (1) an increase in knowledge about the issue of safe advanced AI, and (2) identification of, and reflection on, the actions needed to design, develop and use safe advanced AI, grounded in existing policies such as the AI Ethics Framework and the Australian AI Action Plan.
These outcomes will be measured through surveys and interviews immediately following the workshops. A more meaningful indicator of impact will be whether the next iteration of AI safety / ethics policy in Australia includes explicit reference to advanced AI systems and to the specific actions that can be taken to improve the safe design, development and use of advanced AI.
How much funding are they requesting?
$49,797 USD
Here you can review the entire public portion of the application (which contains a lot more information about the applicant and their project):
https://docs.google.com/document/d/1-ZZlj7AM_ErngBXhcqgw_Es9TjHabaFO4l2kG8V04-w/edit