Hope for responsible technological progress

Silicon states

Global AI collaboration can bridge our digital divides and share digital benefits equitably.

OVERVIEW

Governments worldwide are intensifying investments in silicon chip production and the development of their own “sovereign” AI, seeing it as a strategic national asset. A zero-sum “AI arms race” risks exacerbating the digital and other divides, leaving developing countries further behind, as many lack the resources, digital infrastructure, energy access and skills to fully leverage the benefits of AI for development. A more equitable path would be greater collaboration towards more inclusive AI, as the Global Digital Compact envisages, to leverage its value and deliver benefits for all.

SIGNALS

If cars had improved at the rate semiconductor chips have since 1960, you could be driving at 200 times the speed of light. ChatGPT reached 1 million users within 5 days of its launch (it took Netflix 3 years). AI is changing our world so fast, with such profound implications for societies and economies, that many governments are investing in building their own systems. The CEO of silicon chip producer Nvidia has urged countries to develop their own “sovereign AI”, using their own infrastructure, data, workforce and business networks. The Abu Dhabi state-backed AI company promises enterprises and governments complete control of their data. China has invested heavily in AI education and now produces half the world’s top AI researchers.

Some experts argue against thinking of AI as an arms race. One thousand tech leaders even signed a petition in March 2023 urging a pause in AI development – but with no noticeable impact on new product releases. Yet this “AI arms race” risks exacerbating the digital and other divides. AI offers unique opportunities for developing countries (including in agriculture, healthcare, pollution control and education), but they are much less ready to take advantage of them. Investments in energy access, digital infrastructure and skills, as well as robust policies for AI and data protection, are needed first. Without them, the concentration of development and ownership of AI will widen the north/south gap and worsen current inequities.

Ethical governance is crucial as AI grows more powerful, as recognized by the EU’s imminent AI Act, a comprehensive regulatory framework, and the US Executive Order on AI. India now requires government approval of new AI models, reversing its previous hands-off approach. China’s cyberspace regulator has promised to work with Africa on governance. Regional efforts towards ethical AI governance include the Santiago Declaration to Promote Ethical AI among 20 countries of Latin America and the Caribbean, and the African Union’s Continental AI Strategy for Africa.

With broad recognition that responsible governance is needed, some propose a global legal framework, following the 2023 Bletchley Park conference, or enhanced international governance of AI. The challenge is to regulate AI for safety and ethics without stifling innovation or dampening the extraordinary opportunities AI applications offer for sustainable development, while navigating differing views among countries. This is the balance that the proposed Global Digital Compact and the 2024 UN General Assembly resolution on AI strive to strike in steering AI towards global goods.

SO WHAT FOR DEVELOPMENT?

AI is racing ahead of attempts to regulate it. Future generations risk being locked into the unintended consequences and risks of AI systems that aren’t fully understood or adequately governed. A flood of undetectable AI-generated content could fuel values-based divides, eroding social cohesion, and access to essential services could be jeopardized if AI is used to exploit weaknesses in critical infrastructure. In the extreme, uncontrolled AI could represent an existential threat to humanity.

A rights-based approach to AI development is essential, establishing safeguards that protect data and privacy, and mitigate or eliminate AI biases. This is hard (for example, users’ data assimilated into large language models cannot be removed). Because AI is trained on existing datasets and patterns of behaviour, people using AI models trained in other contexts, on foreign datasets and in different languages, are at a disadvantage. “We really need data that speaks to Africa itself”. Nigeria, for example, is developing its own multilingual large language model (the basis for generative AI tools), able to work across five indigenous languages and develop new datasets from them.

“Algorithm activism” movements like the Algorithmic Justice League are trying to harness the power of technology for the public good, illuminating the social implications and harms of AI. The open-source AI movement, exemplified by Meta’s goal to build the best open models with Llama 3, has the potential to democratize access to AI, though some argue that advanced models should be regulated to prevent misuse.

Developing robust and inclusive AI governance frameworks, especially at the national level, will require a significant increase in AI awareness and education, and in some contexts other supportive infrastructure too, such as stronger STEM education, startup financing and government digital capacity building. These investments are necessary to preserve the choices of future generations to shape and use AI to their own ends.