The use of artificial intelligence (AI) in the judiciary will now be subject to safeguards ensuring that human judgment remains paramount, under a framework adopted by the Supreme Court (SC).
In a resolution dated Feb. 18, 2026, issued in A.M. No. 25-11-28-SC, the Court approved the “Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary,” anchored on the principles of fairness, accountability, and transparency.
According to the SC, the framework supports “the public’s faith and confidence in the independence and impartiality of the judicial system.”
Central to the framework is the principle that AI must remain “human-centered,” serving only to assist and enhance human cognitive functions, not supplant judicial reasoning.
“The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice,” the framework states.
The framework provides that AI tools require prior authorization from the SC En Banc and will be implemented in phases, beginning with pilot testing. It also states that such tools must not be used as the sole basis for adjudicatory decisions, with human decision-makers retaining responsibility for independent legal reasoning and final judgments.
It further requires disclosure of AI use, including the tool and version, purpose, level of involvement, human oversight, and accountability for the output, covering functions such as legal research, summarization, transcription, translation, citation generation, proofreading, and data processing.
In addition, it provides that a comprehensive risk assessment must be conducted before deployment and prohibits the use of systems that may harm stakeholders, violate rights, or undermine the rule of law.
The framework also mandates compliance with data protection standards, stating that confidential, privileged, or sensitive information must not be processed using AI tools without express authority. It likewise provides for training programs to address risks such as algorithmic and automation bias.
Under the framework, a permanent Committee on Human-Centered Augmented Intelligence will be created to guide the design, development, and ethical use of AI tools.
The framework applies to members of the judiciary, court personnel, litigants, and third-party providers involved in AI systems.
Developed by a working group chaired by Senior Associate Justice Marvic M.V.F. Leonen, the framework draws from international standards, including ASEAN and UNESCO guidelines.
The Court said the framework supports the Strategic Plan for Judicial Innovations 2022–2027, which seeks to build a technology-driven judiciary that is transparent, accountable, and accessible.