Major players are ignoring governance in the race to dominate the AI market, despite greater investor engagement and regulatory scrutiny, according to analysis released ahead of the World Economic Forum.
While 38% of major technology companies publish their ethical AI principles, none disclose the results of human rights impact assessments (HRIA), which the World Benchmarking Alliance (WBA) said exposed “weak accountability” across the sector.
Beyond the leading firms in the AI supply chain, major players in industries that rely on digital technology have embedded AI systems throughout their operations over the past 18 months.
“Companies in every layer of the AI value chain have a responsibility to act as stewards of good governance. Any company that develops, procures or deploys these systems can affect risks and rights downstream,” said WBA.
Less than a fifth (19%) of the 200 tech firms analysed had committed to any regional or international AI frameworks or incorporated respect for human rights into their principles.
AI governance standards are still emerging in major jurisdictions, with most frameworks including ethical AI principles among their core expectations, covering firms’ approach to fairness, transparency, accountability, privacy and safety in the technology’s development and deployment.
This week, the UK announced new laws, a regulatory investigation and threatened further sanctions against social media platform X after its Grok AI chatbot was used to create non-consensual intimate images.
In Europe, use of AI is regulated by the EU AI Act, which introduces risk-based rules to ensure AI is trustworthy, safe and respects fundamental rights.
The WBA study reported a fall in the number of firms issuing ethical AI principles, with just nine publishing these for the first time in 2025, compared with 19 in 2024. Major firms such as ASML, Oracle, SK Hynix and TSMC, and platforms such as Spotify and Uber, still have no public AI principles.
WBA benchmarked the ethical AI practices of 200 large tech firms as part of a wider study of 2,000 global companies’ impact on people and planet.
Noting the lack of information on HRIAs, WBA said firms should “aim high” given the scope for AI tools to distort public discourse, amplify misinformation, facilitate surveillance and act as vehicles for discrimination and gender-based violence.
“Companies must connect high-level principles to deeper transparency and be willing to disclose HRIA results, which are essential for understanding who is affected and how, and what risks require mitigation,” the WBA report said.
It builds on a recent report by the WBA’s Collective Impact Coalition for Ethical Artificial Intelligence (AI CIC), a group of stakeholders – including 64 investors with US$11.3 trillion in assets under management – seeking to ensure that digital technology companies integrate human rights and ethical considerations into AI development, deployment and procurement.
According to the CIC report, more companies are creating governance structures dedicated to ethical AI, but “considerable gaps” remain in transparency around implementation.
While 52 out of 76 companies (68%) had responded to investor outreach by the coalition, major corporate players – notably US-based ‘hyperscalers’ – “elude dialogue”.
Steering committee members of the CIC’s investor group include Candriam, Amundi, Boston Common Asset Management and Fidelity International.
AI has been a high-profile topic in recent years at the World Economic Forum, which holds its annual meeting in Davos next week.
