Very recently, the Argentine Information Technology Subsecretariat, which is part of the Chief of Staff Office, issued Resolution 2/2023 approving a set of recommendations for trustworthy artificial intelligence (“AI”), specifically directed to the public sector.
This Resolution came amid very active worldwide discussions on the use of artificial intelligence across all industries, particularly the use of generative AI and tools such as ChatGPT (in its different versions).
In that connection, Argentina still has no specific general legislation regulating the use, development and/or deployment of AI. The buzzwords "AI" and "artificial intelligence" can be found in the recitals of many laws and regulations, but there is still no specific guidance in that respect. For example, communications from the Argentine Central Bank refer to certain obligations and requirements (including conducting an impact assessment). At the same time, Argentina has adhered to the UNESCO Recommendation on the Ethics of Artificial Intelligence.
With all of this as background, the recommendations aim to compile and provide tools for those carrying out innovation projects through technology, specifically those involving the use of AI. They seek to provide a framework for the technological adoption of AI focused on individuals and their rights. As anticipated, the recommendations are directed specifically at the public sector; nonetheless, in the absence of other guidelines directed to the private sector or mandatory applicable regulations, it is reasonable to expect that they could also work as non-mandatory guidelines for the private sector.
In general, the recommendations focus on establishing a set of ethical principles to guarantee the protection of fundamental rights, respect democratic values, prevent or reduce risks, and foster innovation and people-centred design. To establish and conceptualize these principles, the recommendations are structured around the lifecycle of AI projects.
In that connection, they establish a preparatory stage that deals with how artificial intelligence projects should be conceived and what measures are recommended before starting the AI cycle. The guidelines highlight, among others, building an interdisciplinary team, running awareness campaigns, conducting a pre-mortem analysis and defining the model's scope.
Among others, the guidelines highlight the principles of proportionality and harmlessness, safety and security, equity and non-discrimination, sustainability, the right to privacy and data protection, oversight and human decision-making, transparency and explainability, and responsibility and accountability.
Within the AI cycle, the recommendations are divided into four stages: "Design and data modelling" (Stage 1), "Verification/Validation" (Stage 2), "Implementation" (Stage 3), and "Operation and maintenance" (Stage 4). Finally, a closing section sets out the ethical issues that should be considered outside the AI cycle.
In that connection, the recommendations emphasize the difference between execution and responsibility, making it clear that although the execution of a task or service may be delegated to algorithms within an AI project, the decision, and therefore the responsibility, should always rest with the organization or individual controlling the development and deployment.
Regarding measures to be taken during the AI cycle, Stage 1 (Design and data modelling) and Stage 2 (Verification/Validation) address measures intended to reduce risks and ensure transparency, accountability, data quality and the elimination of modelling biases. The guidelines recommend several tools to meet these objectives, such as signing ethical commitments, involving data scientists and implementing records of processing activities.
They also suggest measures to be taken within Stage 3 (Implementation) depending on whether the implementation is made on-premises (on the organization's own infrastructure), via cloud services or a combination of both, aiming to guarantee an adequate degree of information security and traceability of actions and decisions. For Stage 4 (Operation and maintenance), the guidelines recommend certain actions to guarantee the availability, continuity and sustainability of the service provided by this technology, such as ensuring system performance, adopting improvements in response to biases and ethical incidents, and establishing control procedures for access, update and authentication management.
In the last section, the guidelines raise several issues related to the post-cycle of AI, recognizing that each stage requires constant assessment of changes and risks, the appointment of individuals responsible for containing and remedying any harm generated by artificial intelligence, and the proper recording of accountability and responsibility actions for learning and process improvement.
As previously mentioned, these guidelines are a first set of recommendations aimed at the public sector, which follow, in many respects, international principles similar to those of UNESCO (and others).
It remains to be seen how and whether they will be followed by the public sector, and whether they will carry any weight in the private sector as well. At the same time, many expect the different public regulators, including, for example, the Data Protection Authority, to work together with the Information Technology Subsecretariat and other bodies to produce a more comprehensive set of recommendations that would tackle AI from many different angles, including privacy and data protection as well as intellectual property rights.
Article provided by INPLP member: Diego Fernandez (Marval O’Farrell Mairal, Argentina)
Dr. Tobias Höllwarth (Managing Director INPLP)