Data for Artificial Intelligence
Well-governed data availability and access for artificial intelligence: Demonstrating the practical use of privacy-enhancing (and adjacent) technologies
Objectives
Data that could fuel beneficial AI often cannot be made available or accessed because of privacy, intellectual-property, and data-sovereignty concerns. Privacy-enhancing technologies (PETs) and, more generally, the emergent concept of “structured transparency” can help. Although there have been recent efforts to identify how PETs could support real-life use cases, deployment and adoption of these technologies remain relatively limited. This project aims to demonstrate the viability of these technologies in addressing these data challenges while helping to achieve the UN Sustainable Development Goals (SDGs).
Utilising Privacy-Enhancing Technologies to overcome data barriers for social good
In one of the world’s first cross-border collaborations on PETs, the project will run practical demonstrations of these technologies in AI systems and draw on the results to develop practical guidance for AI developers and owners of AI systems. The experience will inform future research and development, support business adoption of PETs, and contribute to the development of international standards.
To test how these technologies can further enable the development of AI systems, the project will deliver:
- (1) a practical demonstration of how PETs can help improve data availability for AI use cases that are beneficial to humanity,
- (2) practical guidance and framework(s) for data scientists and AI developers on how to work with such technologies,
- (3) guidelines on further development and adoption of AI technologies (which could also translate to international standards),
- (4) an outreach plan that builds greater awareness of, and confidence in, technology solutions to address privacy, IP, and sovereignty concerns.
This project explores how to enable greater data availability in order to support innovation and strengthen competition in data- and analytics-enabled products and services, all for public benefit. Reducing the friction of data sharing between organisations and across countries will enable learning and innovation. It will also help smaller organisations compete more effectively with large (and sometimes monopolistic) data-rich organisations that hold massive datasets within their own organisational boundaries.
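To make this concrete, the sketch below illustrates one family of PETs often discussed in this context: federated learning, in which each organisation trains on its own data and only model parameters, never raw records, leave organisational boundaries. It is a minimal, assumption-laden illustration (synthetic data, a simple linear model, and invented function names), not a description of the project's actual demonstrations.

```python
# Minimal federated-averaging sketch (illustrative only): two organisations
# jointly fit a linear model without ever exchanging their raw records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Run a few gradient steps on one organisation's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(weight_list):
    """A coordinator only ever sees model weights, never the underlying data."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each organisation holds its own private dataset behind its own boundary.
org_data = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    org_data.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # federated training rounds
    local_weights = [local_update(global_w, X, y) for X, y in org_data]
    global_w = federated_average(local_weights)

print(global_w)  # approaches true_w although the datasets were never pooled
```

In practice, such approaches are typically combined with complementary techniques (for example, secure aggregation or differential privacy) to further limit what can be inferred from the shared model updates.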
Team
- Stephanie King, Director of AI Initiatives
- Kim McGrail, GPAI expert and Project co-lead
- Shameek Kundu, GPAI expert and Project co-lead
Project Advisory Group
- Marc Rotenberg
- Bertrand Monthubert
- Ching-Yi Liu
- Andrea A. Jacobs
- Christian Reimsbach
- Michael O'Sullivan
Collaborative opportunities with CEIMIA
Collaborating with CEIMIA means contributing to the development of responsible AI. For this project we welcome three types of collaborators:
- Organisations with a possible AI-for-social-good use case who would be interested in exploring what a demonstration of PETs could look like for their application(s);
- Relating to our first demonstration use case: individuals and/or teams working on incorporating location-based data into contact networks (or, even more specifically, pandemic modelling contact networks; see the sketch at the end of this section); and
- Expert representatives from external organisations in this domain, who are welcome to join the team as members of the Project Advisory Group.
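As a concrete (and heavily simplified) illustration of that first demonstration use case, the sketch below turns hypothetical location "check-in" records into a contact network: two people are linked if they appear at the same place within the same time window. The field names, identifiers, and time granularity are invented for illustration only; in a real deployment, PETs would be applied so that raw location data of this kind never needs to be pooled.

```python
# Illustrative sketch: deriving a contact network from location "check-in"
# records. Two people are linked if they appear at the same place in the
# same time window. All identifiers and records here are made up.
from collections import defaultdict
from itertools import combinations

# (person_id, location_id, time_window) -- hypothetical co-location records
checkins = [
    ("alice", "cafe_1", "2021-03-01T09"),
    ("bob",   "cafe_1", "2021-03-01T09"),
    ("carol", "gym_2",  "2021-03-01T09"),
    ("bob",   "gym_2",  "2021-03-01T18"),
    ("carol", "gym_2",  "2021-03-01T18"),
]

# Group people by (location, time window) ...
groups = defaultdict(set)
for person, location, window in checkins:
    groups[(location, window)].add(person)

# ... then connect every pair that shared a location and a time window.
contact_edges = set()
for people in groups.values():
    for a, b in combinations(sorted(people), 2):
        contact_edges.add((a, b))

print(contact_edges)  # two contact edges: ('alice', 'bob') and ('bob', 'carol')
```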