Businesses that conduct effective assessments of their AI systems can better harness the technology’s potential to boost innovation, productivity and growth. 

This is according to a policy paper published by global accountancy body the Association of Chartered Certified Accountants (ACCA) and global professional services organisation EY.  


The report, titled AI Assessments: Enhancing Confidence in AI, explores the emerging field of AI assessments. 

It covers a wide range of AI evaluations, including technical, governance and compliance assessments, as well as more traditional assurance and audits.  

It highlights the role these assessments play in evaluating whether AI systems are well-governed, compliant with laws and regulations, and meet the expectations of business leaders and users. 

The paper argues that effective AI assessments enable businesses to deploy AI systems that are more likely to be effective, reliable and trusted.  


It also addresses the challenges associated with these emerging types of AI assessments and identifies key elements needed to make them robust and meaningful for different users. 

As AI adoption accelerates worldwide, AI assessments, whether voluntary or mandated, are increasingly being considered by businesses, investors, insurers and policymakers to build and enhance trust in the technology.  

The ACCA notes that the publication of this paper comes at a time when the policy landscape related to AI assessment is evolving. 

The paper details three emerging types of AI assessments: governance assessments, conformity assessments and performance assessments.  

Governance assessments evaluate the internal governance structures surrounding AI systems.  

Conformity assessments determine compliance with applicable laws, regulations and standards.  

Performance assessments measure AI systems against predefined quality and performance metrics. 

ACCA chief executive Helen Brand said: “As AI scales across the economy, the ability to trust what it says is not just important, it is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business.” 

EY global vice-chair of assurance Marie-Laure Delarue added: “AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks.  

“It is hard to overstate the importance of ensuring safe and effective adoption of AI. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI’s full potential as a driver of growth and prosperity.” 

The report also outlines challenges that hinder the robustness and effectiveness of some AI assessment frameworks and explains how these can be managed through well-specified objectives, clearly defined methodologies and criteria, and competent, objective and professionally accountable providers. 

It offers concrete suggestions for business leaders and policymakers.  

Business leaders should consider the role AI assessments, including voluntary ones, can play in enhancing corporate governance, risk management, and confidence in AI systems among customers and employees. 

Policymakers are encouraged to clearly define the purpose, components, methodology and criteria of AI assessments, and to support AI assessment standards that are compatible with those in other countries and minimally burdensome for businesses.  

They should also support capacity-building in the market to provide high-quality, consistent and cost-effective assessments, the industry body said.