In early May 2024, the Chilean government introduced a bill to regulate AI. The purpose of the bill is to promote the creation, development, innovation, and deployment of AI systems in the service of human beings, respecting democratic principles, the rule of law, and people's fundamental rights against the harmful effects that certain uses could cause. It sets out requirements and obligations for AI developers and deployers regarding specific uses of AI.
The proposed regulation shall apply to:
- Suppliers that introduce AI systems into the Chilean market or put them into service in the national territory.
- AI deployers domiciled in Chile.
- Suppliers and developers of AI systems domiciled abroad, when the information generated by the AI system is used in Chile.
- Importers and distributors of AI systems, as well as authorized representatives of AI system suppliers domiciled in Chile.
Nevertheless, it shall not apply to AI systems developed and used for national defense purposes; research, testing, and development activities on AI systems prior to their introduction to the market or release into service; or AI components provided under open-source licenses.
Moreover, the bill follows the EU AI Act by defining four levels of risk for AI systems: unacceptable, high, limited, and no clear risk. Unacceptable-risk systems include subliminal manipulative systems; AI that exploits people's vulnerabilities to induce harmful behavior; biometric categorization systems based on sensitive data; social classification based on social behavior; real-time remote biometric identification in public spaces; systems for the indiscriminate extraction of facial images; and systems that evaluate people's emotions.
An AI system will be considered high risk when it presents a significant risk of harm to health, safety, fundamental rights, consumer rights, or the environment, regardless of whether it has been introduced into the market or put into service, and whether it is intended to be used as a safety component of a product or is itself a product.
Furthermore, the bill sets out specific requirements for high-risk AI systems. They must have a risk management system, data governance, technical documentation, records of all activities and security measures, transparency (information that gives users and recipients a reasonable understanding of how the system works), human oversight, and cybersecurity.
Enforcement of this regulation shall correspond to the Data Protection Agency. It is important to keep in mind that the data protection bill creating that Agency is still in Congress and, once approved, will be subject to a two-year vacancy period (vacancia legal) before entering into force.
Finally, infringements are classified as very severe, severe, and minor, with fines of up to USD 1,400,000.
Article provided by INPLP member: Macarena Gatica (Alessandri Abogados, Chile)
Dr. Tobias Höllwarth (Managing Director INPLP)