Overview
We worked with 12 companies across the Asia-Pacific (APAC) region to co-develop and test a policy prototype on AI transparency and explainability (T&E), based on Singapore’s Model AI Governance Framework (MF) and its Implementation and Self-Assessment Guide for Organizations (ISAGO).
Through our methodological approach, we captured participants’ experience of receiving, handling, and following the policy prototype, thereby testing its clarity, effectiveness, and actionability. In particular, we asked participants to apply the policy prototype to build and deploy AI explainability solutions in practice, in the context of their specific products and services.
As a result, we learned about the tensions and challenges they encountered in this technical endeavor, capturing four important trade-offs: T&E vs. security; T&E vs. effectiveness/accuracy; T&E vs. disclosure of potential IP issues; and T&E vs. meaningfulness and actual understanding. When tasked with building an interface design for their AI explainability solution, participants also shared important technical, policy, and usability considerations, which we documented in this report.
Recommendations
Publication
This report presents the findings and recommendations of Open Loop’s policy prototyping program on AI Transparency and Explainability, which was rolled out in the Asia-Pacific region from April 2020 to March 2021, in partnership with Singapore’s Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC). We designed and deployed the program to achieve the following goals:
Open Loop is a global program supported by Meta that bridges the gap between tech and policy innovation, fostering a closer collaboration between those building emerging technologies and those regulating them. We partner with governments, tech companies, academia and civil society to co-create and test new governance frameworks through policy prototyping programs, and to support the evaluation of existing legal frameworks through regulatory sandbox exercises.