As momentum builds for new ‘AI for Good’ projects following last week’s AI for Good Global Summit, experts and leaders are recognizing the need to share resources to help such projects achieve significant scale.
Discussions towards ‘AI and Data Commons’ are gathering pace.
These commons would offer assemblies of AI tools and datasets – and supporting knowledge and expertise – enabling new AI projects to launch, scale up fast, and contribute new and improved resources to the AI for Good community.
All four ‘breakthrough’ tracks at last week’s summit – which looked at healthcare, satellite imagery, smart cities, and trust in AI – highlighted the value of common platforms in testing, launching and maximizing the impact of new AI projects.
The ‘breakthrough teams’ leading these four tracks have each proposed a ‘Project Zero’ geared towards the development of such common platforms.
The healthcare team, for instance, proposed a shared ‘laboratory environment’ to strengthen and improve the coordination of AI-related healthcare resources. A proposed ‘global service platform’ aims to support new satellite data projects in achieving immediate scale. An ‘Internet of Cities’ could assist the replication of successful smart city projects. Trustfactory.ai aims to be an incubator for new projects to build trust in AI, a community able to host multidisciplinary collaboration.
“Across the track presentations and the various projects, data has come up several times in different contexts,” says Urs Gasser, Executive Director of the Berkman Klein Center for Internet & Society at Harvard University.
Gasser led a team of ‘Data Rapporteurs’ tasked with monitoring the data dimensions of the summit.
“There is a notion, almost a kind of vision, that you need data commons as we think about AI for the social good,” says Gasser.
Reporting on the discussions of the summit’s four tracks, the team of Data Rapporteurs offered a ‘Roadmap Zero’ towards AI and data commons.
The figure below presents “a snapshot and some sort of bottom‑up version 1.0 of such a taxonomy or roadmap as it is emerging,” says Gasser.
The layered model builds on a ‘narrow’ version of the data commons – three core technical layers – topped by three layers ‘broader’ in functionality. The model calls for interoperability across its six interdependent layers.
“For this narrow version of data commons, the role of standards and standardization is really important,” says Gasser.
Data Rapporteur for satellite imagery, Sean McGregor, Syntiant Corporation, illustrates the importance of standardized data formats and the interaction of different layers.
Mobile phones are a valuable source of geo-location data, says McGregor. Labelling agricultural resources with geo-reference data crowdsourced from mobile phones – using standardized formats – could provide the ‘data from the ground’ required to improve AI’s ability to monitor agriculture and biodiversity using satellite imagery.
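As a rough illustration of the kind of standardized record such crowdsourcing might produce, the sketch below encodes a single ground-truth label as a GeoJSON Feature (RFC 7946), one widely used format for geo-referenced data. The property names and values are hypothetical, not drawn from any summit project.

```python
import json

# A minimal, hypothetical crowdsourced label for an agricultural resource,
# encoded in the standard GeoJSON Feature structure.
label = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        # GeoJSON specifies [longitude, latitude] coordinate order
        "coordinates": [36.8219, -1.2921],
    },
    "properties": {
        "label": "maize_field",            # hypothetical crop label
        "source": "mobile_crowdsourcing",  # hypothetical provenance tag
        "collected_at": "2018-05-20T10:15:00Z",
    },
}

# Serializing to JSON makes the record portable between the
# 'data from the ground' layer and a satellite-imagery pipeline.
encoded = json.dumps(label)
decoded = json.loads(encoded)
print(decoded["geometry"]["coordinates"])
```

Because both halves of the exchange agree on the format, the same record can be consumed by any tool that reads GeoJSON, which is the interoperability point McGregor's example makes.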
Metadata could also bring greater transparency to datasets, says Data Rapporteur for trust in AI, Ryan Budish, Harvard University.
Labelling datasets with information on their provenance and limitations, says Budish: “[could be] a guardrail of sorts to help prevent using the data in ways that may not be appropriate, that may introduce unintentional bias into the outcomes.”
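A minimal sketch of how such a metadata guardrail might work in practice: a dataset declares its provenance, the populations it covers, and its known limitations, and a check flags any intended use that falls outside that declared scope. All field names, values, and the check itself are hypothetical illustrations, not a description of any existing system.

```python
# Hypothetical dataset metadata: provenance plus declared scope and limitations.
metadata = {
    "provenance": "public hospital records, single region",
    "covered_populations": {"adults"},
    "known_limitations": ["single-region sample"],
}

def scope_warnings(metadata, intended_population):
    """Return warnings when an intended use exceeds the dataset's declared scope."""
    warnings = list(metadata["known_limitations"])
    if intended_population not in metadata["covered_populations"]:
        warnings.append(f"population '{intended_population}' not covered")
    return warnings

# Applying a single-region, adults-only dataset to children is flagged:
print(scope_warnings(metadata, "children"))
```

The guardrail does not forbid anything; it surfaces the mismatch so that a developer can judge whether reusing the data would introduce the kind of unintentional bias Budish describes.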
The three upper layers of the model bring distinctly human elements to Roadmap Zero.
Urs Gasser asked how the AI for Good community might encourage the emergence of organizational practices emphasizing collaboration, challenging the community to explore incentives to share data, move towards greater interoperability and establish related best practices.
“Thanks to all of the previous smart city initiatives, we have a lot of data already,” says Data Rapporteur for smart cities, Marie-Ange Boyomo, AI Project Manager at ANIMA. “If we pile up all the layers of data, then we have an overview and we can create a place where people can try, people can fail, people can have success … what we have called the ‘Internet of Cities’.”
Institutional arrangements including law and policy can both enable and form barriers to the use of data for good. Debates around IP regimes, data protection and privacy – and data governance more generally – will all factor into the level of institutional support for AI and data commons, says Gasser.
The model’s top-most layer focuses on knowledge-sharing and education. This ‘human layer’ calls for collaboration to build trust and common understanding among AI developers and stakeholders.
“The health conversation was very much at the intersection of institutions, policy, law and then what we call the human layer,” says Data Rapporteur for healthcare, Elena Goldstein, Harvard University.
The healthcare sector hosts a diverse set of stakeholders and diverse forms of data, complicating discussions around data commons.
Can we trust AI like a doctor?
We cannot fully explain the decisions of AI algorithms, but the same could be said of doctors’ decisions.
“In the health context it’s often about life-saving treatment. It’s essential that we do have a view of how these decisions are being made,” says Goldstein.
“This presents a unique opportunity perhaps for AI and some of these diagnostic tools to actually offer increased transparency, which, as we look to a data commons, is extremely promising.”
See highlights and interviews from the summit.