Artificial Intelligence, Beyond Algorithms: An Evolving Regulatory Dilemma

There are numerous approaches to regulating the development and use of artificial intelligence. We write this blog to share our findings about regulations aimed at mitigating the risks and maximizing the benefits of this technology.

June 15, 2024
UNDP Argentina

Artificial Intelligence (AI) is here to stay. Of course, it’s not entirely new; it’s already part of our daily lives, closely related to software development and the use of data—lots of data. Yet, it also carries numerous implications that we still don’t fully understand. Hence the challenge of choosing among the various alternatives for regulating the development and use of AI, maximizing its potential, and mitigating its risks. At Co_Lab, the UNDP Acceleration Lab in Argentina, we believe that the first step on this journey is to understand the policies and regulations that directly or indirectly aim in that direction. For this reason, we wrote this blog to share our findings and to invite you to tell us what other regulations you would add to further these objectives.

Principles and Recommendations

First and foremost, there are guidelines and declarations with recommendations and principles to follow for the safe, reliable, ethical, and human-centered use and development of AI. Some examples include the OECD principles, those of UNESCO, the UN Roadmap for Digital Cooperation, the report of its advisory body, and the recent resolution of the General Assembly.

Efforts also come from governments, academia, civil society, and the private sector. One pioneering initiative is the set of principles compiled at the Asilomar Conference (2017) by the Future of Life Institute. The same organization later, in 2023, released the open letter drawing attention to the risks associated with the unknown aspects of AI and recommending a halt to its development for at least six months. From our region, there have been warnings emphasizing the importance of respecting people’s rights and avoiding biases inherent in the origin of data. Efforts should be directed towards developing AI tools aligned with human rights, valuing the “Latin American cultural heritage” and the regulatory sovereignty of the countries in the region, as highlighted in the Montevideo Declaration.

All these international documents share points of contact and even overlap. Common principles include responsibility and accountability to ensure effective oversight and evaluation of AI models; human oversight; transparency and explainability, to prevent AI systems from becoming “black boxes”; equity and inclusive, sustainable growth; and security and the protection of privacy and personal data.

Regulation Models

As we can see, there are more questions than certainties regarding the implications of AI and how to shape regulatory frameworks to address them. A key question is: Do we need a general law to regulate everything related to AI, or rather specific laws tailored to the sector where the technology is applied? The first option could facilitate a uniform and cohesive approach to the principles governing AI development within a country. At the same time, it can provide a comprehensive view of the risks and benefits associated with this technology. An example of this approach is the European Union’s (EU) proposed AI Act, which is based on defining potential risks in order to establish cross-cutting standards for the development, use, and commercialization of AI.

On the other hand, a focus on the specific effects of AI in each sector (e.g., healthcare, labor, or security) can support a specialized approach to risks and benefits. Additionally, it facilitates the sectoral integration of AI, leveraging the institutional frameworks of each area for regulation and monitoring. The United States (US) Executive Order on Artificial Intelligence aligns closely with this approach. The Order assigns each government entity, in accordance with its jurisdiction, specific tasks—to be fulfilled within specific timeframes—related to the creation of guidelines and standards for the responsible development and implementation of AI within its area of activity. The regulation also focuses on monitoring large AI models and requires the companies and individuals developing them to report their activities to the relevant national agencies.

The Argentine Experience

Building upon international principles such as those of UNESCO and the OECD, efforts have been made in Argentina to discuss and advance AI regulation. In this regard, the precedent of the National Artificial Intelligence Plan of 2019 is worth mentioning. Other initiatives have aimed at establishing institutional frameworks to work on and promote AI in the country, including coordinating efforts between the national government, universities, companies, and labor unions. On the national coordination front, an inter-ministerial committee has also been created to pursue a multi-sectoral approach to the ethical and sustainable development of AI in the country. National efforts have also included fostering spaces that support the development of AI in the Spanish language, as well as publishing recommendations for carrying out public innovation projects based on AI.

In general, these documents originate from the executive branch of government and are published during times of government transition, which draws attention to the political timing of developing regulatory frameworks for AI and the challenges such transitions pose.

The Regulatory Archipelago

The national initiatives mentioned above are complemented by scattered laws and regulations that, although they come from other fields and do not specifically address AI, offer guidance and directives on how to handle situations related to this technology. For example, the Personal Data Protection Law (Law 25.326) is a fundamental regulation for data management, while the Intellectual Property Law (Law 11.723) serves as a framework for questions about AI and its relationship with intellectual property rights. Although there is no legal vacuum, the reality is that a specific AI law has not yet been enacted.

These initial findings on policies and regulations aimed at mitigating the risks and enhancing the benefits of AI show that the landscape is varied. Some policies, resolutions, and norms specifically address AI, while others deal with particular aspects of it. It is also apparent that international guidelines and recommendations have shaped national initiatives, as the most recent laws and resolutions have tended to align with them.

It becomes evident that the process of regulating AI raises questions that require pluralistic debate and collaborative efforts. Should new laws related to AI be enacted? How should the use of AI be managed in specific areas? What adjustments are needed in existing regulations? How can regulations on AI be constructed to address the needs and challenges specific to our country and region? How should responsibilities be distributed and assigned in the construction of ethical AI? These are all questions that remain open to our collective attention and action.

The Co_Lab surveyed policies, regulations, and laws, both national and international (to which Argentina has adhered), with potential direct or indirect impact on aspects related to AI. This survey was informed by specialized literature, data from the Ministry of Justice of the Nation, and input from experts. Access the survey here and let us know which other regulations you would add. This is a collective exercise, and we invite you to help us complete and expand these initial findings.

*We appreciate the collaboration and guidance of Natalia Zuazo in the process of curating regulations for this blog. We also thank Milagros Crispillo, a Co_Lab Exploration Intern, for her support in the bibliographic research.