2023 GPAI Summit: Project Retrospective
Privacy-enhancing Technologies (PETs)
The Data Governance Working Group (WG) partnered with the Infocomm Media Development Authority (IMDA) of Singapore and Nanyang Technological University (NTU) Singapore to conduct a use case focused on data sharing to improve society's resilience to pandemics. Through this use case, they tested whether PET-protected data could reproduce the outcomes of the original pandemic model, and found that, at an aggregate level, the PET-enabled solutions were near-identical to the one built from the original (sensitive) data. Read the key learnings of the project!
If you have identified additional pilot use cases for PETs that apply to AI-for-social-good contexts, the IMDA, CEIMIA and NTU are prepared to support. Get in touch!
Responsible AI Strategy for the Environment (RAISE)
Carried out by the Responsible AI Working Group (RAI WG), Project RAISE aims to help our societies get the AI transition right (harnessing AI to fight climate change without undermining the ecological transition) by developing an action-oriented roadmap for the responsible use of AI for climate action and biodiversity preservation. In 2023, the RAI WG held a first public workshop to define essential next steps and pave the way for more collaboration between organisations on practical projects in the near future. Recommendations for governments include prioritising climate and biodiversity preservation in their policies and joining forces with other stakeholders within the Partnership to address this common crisis. Read them!
Next phase:
This year, the RAI WG will build on the momentum gained in 2023. It intends to hold another workshop, which will focus on developing and implementing a joint responsible AI R&D roadmap, and to deliver ambitious implementation projects that build on the recommendations from the climate and biodiversity reports. The WG will also accelerate the development of tools for access to climate and energy data under the Net Zero Data Space project, with the aim of helping researchers identify paths to Net Zero.
Social Media Governance
Social media platforms are one of the main channels through which AI systems influence people's lives, and therefore have the potential to shape the dynamics of a whole country's population. This raises governance questions, especially in the age of Generative AI. The Responsible AI for Social Media Governance project responds to growing concerns about the misuse of social media platforms, which can be harmful and serve to propagate disinformation, extremism, violence, harassment and abuse. In 2023, the Responsible AI Working Group pursued three main work streams:
- recommender algorithms;
- content moderation processes;
- the role of these platforms as disseminators of AI-generated content.
Have a look at its analysis and recommendations.
Next phase:
For 2024, the project will focus on three key technologies: recommender systems, harmful content classifiers, and foundation models. It will seek practical, concrete solutions for the governance of social media, deepening the research through a hands-on exercise on content classifiers, and will develop and deploy mechanisms such as watermarking and other techniques to enable users to identify AI-generated content.
Data Trusts and Institutions
The research conducted by the Open Data Institute and the Aapti Institute in 2020, followed by GPAI's Advancing Data Justice project, highlighted the need for local organisations and communities to play an active role in the data value chain. Building on this work, the Enabling Data Sharing for Social Benefit Through Data Trusts project explores bottom-up data institutions and trustworthy practices in which communities are empowered. CEIMIA researchers carried it out in two phases with GPAI Expert Teki Akuetteh:
- First, they focused on climate-induced migration in the Lake Chad Basin with a pilot study to map the local data ecosystem and explore how data institutions and AI could make a difference for impacted communities, with the ultimate aim of better integrating Global South perspectives;
- Second, based on these learnings, they designed the Trustworthy Data Institutional Framework (TDIF), a tool to help organisations assess their data governance maturity and provide them with a path to improve trustworthiness.
Scaling Responsible AI Solutions 2023 Edition
In order to have a positive impact, AI-for-good solutions need to scale. However, scaling in a way that is respectful of society, the environment and human rights can be tricky. Based on this observation, the Responsible AI Working Group last year launched the first edition of the Scaling Responsible AI Solutions programme. Led at the time by soon-to-be Co-Chairs Francesca Rossi and Amir Banifatemi, the WG published a call for participation aimed at responsible AI-focused teams facing obstacles that hinder both the scalability and the responsibility of their AI solution. Eventually, 5 teams from a variety of contexts and regions, working on diverse topics, completed a mentorship programme with a group of Experts. The participating teams identified their most significant obstacle to responsible scaling, explored how to implement responsible AI principles within their solution at a larger scale, and produced an implementation plan. Congratulations are in order to the ergoCub team, who were recognised as Responsible AI change makers and received an award during the AI Game Changers Ceremony at the GPAI Summit in New Delhi!
Building on this, the WG has released a set of recommendations for governments and AI initiatives to foster the development of RAI practices across jurisdictions and to encourage enterprises to ensure that their solutions are grounded in responsible principles.
Next phase:
This year, the Experts will focus on expanding the geographical coverage of activities and on supporting more participating teams for a longer period of time. The 2024 edition will be marked by the introduction of a dedicated track in Africa, run with local partners. This track will operate in parallel with, and be coordinated with, the SRAIS global track, and will focus on mobilising African mentors and AI teams.
Interested in responsibly scaling your AI solution? Look out for the 2024 edition of the GPAI Scaling Responsible AI mentorship programme! Express your interest here to receive all information relative to this year’s edition.
Diversity and Gender Equality
AI offers a wide range of possibilities to enhance the well-being of different groups and contribute to the UN Sustainable Development Goals (SDGs). However, it can deepen economic, knowledge, gender, and cultural divides. Indeed, AI is usually still designed, developed, monitored, and evaluated without systematic Diversity and Gender Equality (DGE) approaches. This prevents it from achieving its potential for social good and increases harm to already marginalised groups. Last year, in collaboration with MILA, the project "Towards Real Diversity and Gender Equality in AI" was set up to provide AI ecosystems with tools, frameworks, and resources to incorporate effective DGE strategies throughout the AI life cycle.
Read about the advancement of the project.
Next phase:
For 2024, the RAI WG aims to publish a final report containing a practical set of recommendations to ensure that the technology is adopted with Diversity and Gender Equality principles. Additionally, it will develop a toolkit comprising a practical, curated, and annotated repository of the most effective tools identified during Phase 1 to foster systemic gender inclusion across AI ecosystems.
If you’d like to join our project advisory group, let us know!
From co-generated data to generative AI
In the face of rapidly advancing AI technologies, traditional laws are becoming obsolete. There is an urgent need to rethink governance models, collective data rights, and ownership in digital ecosystems, with a focus on ethical deployment for public good. The project “From Co-generated Data to Generative AI” is the latest in GPAI’s efforts to bridge the gap between theory and practice, building on previous work from the Responsible AI, Data Governance and Innovation and Commercialization Working Groups. It aims to understand the variety of rights and legal protections available to co-generators, and how Generative AI can complicate these situations; and to help our countries adapt to the new reality of cogeneration in AI-powered societies.
Next phase:
This year, Experts will deepen the research during the project’s second phase. Stay tuned for the release of the policy brief and guiding principles for co-generated data this spring!
If you’d like to join our project advisory group, reach out to us!
Pandemic Resilience
AI can be used to inform efficient and timely responses to global threats in times of crisis. This is precisely what the Pandemic Resilience project has explored: the capacity of AI to adequately drive policy making in contexts of uncertainty. Here, in the context of a global pandemic, the GPAI Responsible AI Working Group focused on forecasting the spread and impact of COVID-19 in a range of locations through ensemble modelling. The Experts found that this type of modelling produces significantly lower errors than individual models, can reduce bias, and allows insights to be shared across models.
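The intuition behind the ensemble result can be sketched with synthetic data (a minimal illustration; the forecasters, numbers, and error figures below are hypothetical and not taken from the project): averaging several biased, noisy forecasters tends to cancel out individual biases and reduce overall error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" daily case counts over 60 days.
truth = 1000 + 50 * np.sin(np.linspace(0, 6, 60))

# Three hypothetical individual forecasters, each with its own bias and noise.
models = [
    truth + rng.normal(80, 120, truth.size),   # tends to over-predict
    truth + rng.normal(-60, 150, truth.size),  # tends to under-predict
    truth + rng.normal(20, 100, truth.size),   # mildly biased
]

# Simple unweighted ensemble: average the individual forecasts day by day.
ensemble = np.mean(models, axis=0)

def mae(pred):
    """Mean absolute error against the synthetic truth."""
    return float(np.mean(np.abs(pred - truth)))

individual_errors = [mae(m) for m in models]
print("individual MAE:", [round(e, 1) for e in individual_errors])
print("ensemble MAE:  ", round(mae(ensemble), 1))
```

On this toy data the ensemble's error comes out below the average of the individual models' errors, since the opposing biases partially offset each other; real forecast ensembles (and the weighting schemes they use) are of course far more sophisticated.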
Based on these findings, here’s what they recommend:
- Strengthening ties between modellers and decision makers;
- Establishing a feedback mechanism to facilitate the adjustment of policies according to model outcomes;
- Developing a public data pipeline.
Read more about this project and how AI can help us navigate future pandemics.
A new adjacent project for 2024
In 2024, the WG will continue exploring ensemble modelling through the project "Digital Ecosystems that Empower Communities". Building on the processes, tools and learnings developed during the Pandemic Resilience project, and on the principles published as a result of the Advancing Data Justice project, this new project will use the same type of modelling to pursue better data justice for marginalised communities, who often give up their data in large-scale data collection contexts yet receive very little in return.
The Role of Government as a Provider of Data for AI
As key players in the AI landscape, governments will play an increasing role in providing AI developers with access to public data. Indeed, they are uniquely placed to be providers of data because of their reach, power and scale: their data sets can be comprehensive (or close to it), accurate, timely, and sustainable, thanks to their public funding. This will be vital in ensuring transparency, accountability, and equitable access to information. However, it can also conflict with existing legal frameworks and data processing principles. Initiated in 2023, the first phase of the project "The Role of Government as a Provider of Data for Artificial Intelligence" evaluates the aims of, and mechanisms for, sharing public information with the private sector through four case studies.
Read about the first phase of the project.
Next phase:
This year, the Experts will carry out the second phase of the project, examining how governments can make this data available to AI developers in well-governed ways, taking into account issues around technology, culture, public attitudes, and fair financial models. This phase will also provide actionable recommendations, as well as insight into the resources required to successfully govern the hosting of public data resources.
Repositories of Public Algorithms
This new GPAI project also promotes a new way of working: it aims to build cross-Working Group collaboration. In this case, the RAI and DG Experts will work closely together to create new national and subnational repositories of public algorithms, increasing the availability of publicly held information about automated decision-making systems adopted by the public sector.
Turning now to our CEIMIA portfolio, three major projects are in the spotlight this year:
Regulatory Diplomacy Framework
The second phase of the Regulatory Diplomacy Framework studies another set of AI regulatory approaches from 5 additional regions. It should be released this spring.
Discoverability
Mandated by the Ministry of Culture and Communications of Quebec, this ambitious project leverages the latest developments in AI to dissolve barriers that hinder the discoverability of French cultural content in digital environments.
AI for Africa – A CEIMIA Think Tank
As mentioned in our latest newsletter editorial, it is our duty to leverage AI to find solutions to current issues such as climate change, education, and global health, and to advocate for a more just AI adoption that closes the social gap. It is in this perspective that CEIMIA is developing its own think tank to help African countries appropriate this technology. Over the next six months, CEIMIA will refine its strategy with Cameroon and Senegal as pathfinder countries.
Stay tuned, more will be unveiled in the coming months!
Meet the CEIMIA team
Through a unique collaborative structure, CEIMIA is a key player in the development, funding and implementation of applied AI projects for the benefit of humanity.