
Israel’s Interministerial Team on AI in the Financial Sector: From Interim to Final Report


Israel’s interministerial team, formed to address the implications of artificial intelligence (AI) in the financial sector, has completed a comprehensive review culminating in a Final Report, published after extensive public consultation. While the report focuses on financial services, the authors explicitly note that its findings on data protection, governance, and transparency apply across industries. The Final Report significantly relaxes the team’s initial position on data protection.

Background and Context

In late 2023, Israel established an interministerial team comprising representatives from the Ministry of Justice, the Ministry of Finance, the Bank of Israel, the Israel Securities Authority, the Capital Market Authority, and the Competition Authority. The team’s mandate was to enable innovation while safeguarding financial stability, consumer rights, and privacy.

It is important to note that, under the Israeli Protection of Privacy Law (PPL), the only legal bases for processing personal data are informed consent and explicit legal authority. While the law allows consent to be implied, regulatory guidance and case law have historically shown a strong preference for explicit consent, sometimes coupled with the additional requirement that consent be given freely. This legal context shaped both the interim and final recommendations, particularly regarding how to balance innovation with privacy protection.

In addition to consent, the PPL provides statutory defences in specific circumstances where a breach is committed in “good faith”, including where there is a legal, moral, social, or professional duty; where the breach is necessary to protect a legitimate personal interest (courts have found that banks may have a legitimate “personal” interest); where the breach occurs in the ordinary course of lawful work; and where the breach serves the public interest. Until recently, the regulator’s position was that these defences do not apply, or apply only narrowly, to data processing.

In December 2024, the team released an Interim Report and invited public feedback. The draft emphasized:

  1. Explicit (opt-in) consent for most uses of AI in financial services; statutory defences were mentioned only in passing.
  2. A risk-based approach tailored to financial services.
  3. Leveraging existing regulatory powers rather than introducing sweeping new laws.
  4. Explainability obligations for high-risk decisions.
  5. Anonymization and privacy-enhancing technologies (PETs) as recommended tools, though institutions received no particular advantage for using them.
  6. Human oversight for critical decisions affecting individuals.
  7. Disclosure requirements for AI use in financial products.

 

Public Consultation and Stakeholder Feedback

The consultation period drew responses from banks, insurers, fintech companies, and global tech providers. Key themes included:

  1. Concerns over mandatory opt-in consent for most AI uses.
  2. Calls for at least GDPR-style legal bases for AI model training, with exceptions for sensitive data.
  3. Arguments that innovation should be given greater weight relative to fairness and privacy.

 

Final Report: Key Changes

Published in December 2025, the Final Report reflects significant adjustments:

  1. Consent Framework Refined: Opt-in consent now applies only to high-risk or unexpected uses. The mere use of AI does not “necessarily” require consent, and routine AI applications using existing data may not require renewed consent.
  2. Recognition of Anonymization: Anonymization is acknowledged as a mitigating factor in consent assessments (e.g., an opt-out might be sufficient, or consent may not be needed at all where data is properly anonymized) and in assessing the extent to which defences apply, signalling a significant shift in the regulator's position on defences.
  3. Recommendation to Enact (Expanded) Legal Bases: The recommendations include enacting GDPR-style legal bases, as well as specific legal bases for AI training, reducing reliance on consent alone.
  4. Governance Reinforced: Institutions must map AI uses, assess risks, and implement governance measures, including bias prevention.
  5. Explainability and Oversight: Maintained as core principles, calibrated for proportionality.

 

Article provided by INPLP member: Eyal Roy Sage (AYR Lawyers, Israel)

 

 
