The current debate over whether open or closed advanced AI models are safer or better is a distraction. Rather than focusing on one business model, we need to embrace a more comprehensive definition of what it means for AI to be open. This means changing the conversation to focus on the need for open science, transparency, and fairness when building AI that works for the public good.
Open science is the foundation of technological progress. We need more ideas, and a greater diversity of ideas, to be more widely available, not less. The organization I lead, Partnership on AI, is itself a mission-driven experiment in open innovation, bringing together academic, civil society, industry, and policymaker partners to tackle one of technology's most difficult problems: ensuring that its benefits reach the many, not the few.
When it comes to open models, we cannot forget the influential upstream role played by public funding of science and the open publication of academic research.
National science and innovation policies are critical to open ecosystems. As economist Mariana Mazzucato points out in her book The Entrepreneurial State, public funding for research planted some of the seeds of the intellectual property that grew into today's U.S.-based technology companies. Many of today's AI technologies, from the Internet to the iPhone to Google's AdWords algorithm, were fueled by early government funding for basic and applied research.
Similarly, publishing research and vetting it through peer review is critical to scientific progress. ChatGPT, for example, would not have been possible without access to openly published research on transformer models. The trend reported in the Stanford AI Index is alarming: the number of AI PhD graduates entering academia has declined over the past decade, while the number entering industry has grown, with more than twice as many graduates going to industry as to academia in 2021.
It's also important to remember that open does not mean transparent. And while transparency may not be an end in itself, it is essential for accountability.
Transparency requires timely disclosure, clear communication to relevant audiences, and explicit documentation standards. As PAI's Guidance for Safe Foundation Model Deployment shows, there are steps that can be taken throughout the model lifecycle to increase external oversight and auditability while protecting competitiveness. These include transparency about the types of training data, testing and evaluation, incident reporting, labor sourcing, human rights due diligence, and environmental impact assessments. Developing documentation and disclosure standards is essential to ensuring the safety and accountability of advanced AI.
Finally, as our research shows, it is easy to recognize the need to be open and to create space for diverse perspectives in envisioning the future of AI; achieving it is much more difficult. It is true that an open ecosystem has fewer barriers to entry and is more inclusive of people from backgrounds not traditionally seen in Silicon Valley. It is also true that an open ecosystem sets the stage for more players to share in AI's economic benefits, rather than further centralizing power and wealth.
But we have to do more than just set the stage.
We must invest so that communities disproportionately affected by algorithmic harm and historically marginalized groups can fully participate in developing and deploying AI that works for them, while protecting their data and privacy. This means focusing on skills and education, but also on redesigning who develops AI systems and how they are evaluated. Citizen-driven AI innovations of this kind are currently being piloted around the world through private and public sandboxes and labs.
Safety doesn't mean taking sides between open and closed models. Rather, it means putting in place national research and open innovation systems that advance a resilient field of scientific innovation and integrity. It means creating space for a competitive marketplace of ideas to foster prosperity. It means giving policymakers and the public visibility into the development of these new technologies so they can better probe their potential and risks. It means recognizing that with clear rules of the road, we can all travel faster and more safely. Most importantly, for AI to deliver on its promise, we must find sustainable, respectful, and effective ways to listen to new and different voices in the AI conversation.
Rebecca Finlay is CEO of Partnership on AI.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.