Introduction
The Government of Indonesia is preparing a Presidential Regulation on Artificial Intelligence Ethics (“Draft Regulation”), which is expected to be issued later this year. The Draft Regulation builds upon ethical principles previously introduced under Ministry of Communication and Informatics (now known as the Ministry of Communication and Digital Affairs – “MOCDA”) Circular Letter No. 9 of 2023 on AI Ethics, as well as Law No. 27 of 2022 on Personal Data Protection (“PDP Law”).
Who Must Comply?
The Draft Regulation will apply to the following parties:
- End users;
- Sectoral Actors, comprising data providers, AI developers, AI system providers, and AI system operators; and
- Ministries, Government Agencies and other Public Institutions.
Risk-Classification of AI Systems
The Draft Regulation adopts a tiered risk-classification approach, under which AI systems are regulated based on the risks they pose to human rights, safety, and public interests:
a. Unacceptable risk: AI systems that threaten or endanger user safety or human rights. Such systems are prohibited.
b. High-risk: AI systems that process specific personal data under the PDP Law or may have a significant impact on human rights, safety, or essential public services. Such systems are subject to enhanced requirements and supervision.
c. Low-risk: AI systems posing minimal or no threat to human rights or safety. Such systems must still implement appropriate safeguards to ensure responsible use.
Safeguard Measures to Implement
To address the above risks, Sectoral Actors must implement safeguards throughout the development, implementation, and use of AI systems, while Users remain responsible for ensuring ethical and responsible use.
At a minimum, safeguards must address: (a) the promotion of benefit and protection for humans, the environment, and the state; (b) the avoidance of harm and misuse; (c) transparency and accountability; (d) fairness and proportionality; (e) inclusivity and diversity; (f) system security and reliability through adequate technical competence; (g) respect for intellectual property and culture; and (h) effective governance and meaningful human oversight and control.
Monitoring and Evaluation
The Draft Regulation introduces monitoring and evaluation obligations in relation to AI use and development, including:
a. Self-assessment by Sectoral Actors using the standardized questionnaire set out in the Draft Regulation, which can be further adapted to meet the specific needs of each sector.
b. Incident reporting through government institutions that receive complaints from users regarding ethical breaches and forward them to Sectoral Actors for corrective action.
c. Periodic sector reporting by government institutions to the MOCDA.
d. An integrated monitoring system to be developed by the MOCDA to consolidate incident and sectoral reports for cross-sector oversight.
What to Prepare?
The Draft Regulation reflects the Indonesian Government’s increasing focus on AI governance. Businesses deploying AI within their products or services should begin preparing to identify and classify AI-related risks, implement appropriate safeguards, conduct internal assessments, and anticipate potential reporting requirements.
The Draft Regulation provides a two-year transition period for compliance. Although it does not expressly stipulate sanctions, enforcement risks may arise under existing laws, including the Copyright Law, the Electronic Information and Transactions Law, and the PDP Law, making AI ethics compliance part of broader regulatory risk management.
Article provided by INPLP members: Reagan Roy Teguh (Makarim & Taira S., Indonesia)
Co-author: Mr. Demi Narendra Soegandi
Dr. Tobias Höllwarth (Managing Director INPLP)
