A pioneering report by the Thomson Reuters Foundation and UNESCO sheds light on the way 3,000 companies approach AI

by NNW Bureau

UNESCO and the Thomson Reuters Foundation are launching today a global report, “Responsible AI in practice”, based on information gathered from 3,000 companies about their adoption of Artificial Intelligence and related strategies. It finds that, as AI development and adoption accelerate rapidly in the private sector, nearly half of companies (44%) reported having an AI strategy. One in ten companies has also publicly committed to adhering to an AI governance framework.

Almost half of the 3,000 companies in our study have already realized that, to benefit from the transformative power of AI, they need to have an AI strategy. The more visionary 10% of them have also realized the importance of developing, deploying and using AI responsibly and ethically and adhering to an internationally recognized AI governance framework, and are moving fast in that direction. AI systems that abide by human rights, are respectful of the environment and do not discriminate are not only “the right” thing to do, but also the smart one. They make AI-based products and services better and companies more competitive, and at the same time they help build trust with and by the public.

Lidia Brito, UNESCO’s Assistant Director-General for Social and Human Sciences

While companies increasingly communicate ambition, principles and oversight structures, it is far less clear where, inside the company, AI is actually deployed, how risks are controlled in practice, and who is accountable when systems fail. Organisations tend to describe governance at a conceptual level and rarely disclose how AI is used at a day-to-day, operational level. More evidence about dedicated resources, escalation pathways and monitoring mechanisms would make it easier to understand how risks are managed once AI is deployed.

The same dynamic extends beyond core governance. For example, ethical and environmental considerations tend to be framed as principles but only occasionally linked to measurable processes or management practices: only 11% of companies said they evaluate the environmental impact, and only 7% the human rights impact, of the AI they use.

AI is no longer a niche technical topic – it is a core governance issue. Without robust oversight of how businesses are adopting AI, we risk causing significant downstream harm to the environment and wider society. Business leaders could also be sleepwalking towards costly surprises like fines, forced feature rollbacks, delayed product launches and reputational hits to their brand.

Antonio Zappulla, Thomson Reuters Foundation CEO

Need for Human Oversight

Amid growing public concern around automated decision-making, only 12% of companies were found to have a policy ensuring human oversight of AI systems. Such policies most often cover recruitment or public-facing systems, such as customer service chatbots or apps deciding on social benefit allocation.

Only about one in six to seven companies could identify the person within their organization responsible for the ethical risks that may emerge at different stages of the AI lifecycle.

Need for Data Governance and Training

For three quarters of companies, the report found no evidence of policies for checking the quality of training data used in AI systems. Fewer than one in five companies said they had conducted privacy or data protection impact assessments specifically in relation to AI, and just one in five had policies governing how data is shared with third-party AI vendors.

While 30% of companies claimed to have AI training programmes, only 12% of companies offered structured training with comprehensive coverage. Companies often acknowledged the importance of workforce transition and skills development, yet seldom showed how these programmes affect workers’ learning outcomes, or how workers’ concerns can be raised and addressed.

Taken together, the report suggests that the central challenge of responsible, ethical AI is no longer awareness but operationalisation. As privately developed or deployed AI systems shape more of daily life, transparency must move beyond technical descriptions and show how accountability works: who makes decisions, how issues are escalated, and what remedies exist when things go wrong. Just as we expect openness and accountability from government, the private sector should meet comparable transparency standards for AI. This will work for the benefit of all, as adhering to transparency practices and using ethical, inclusive AI builds trust with investors and consumers.

READ MORE: https://www.unesco.org/en/articles/pioneering-report-thomson-reuters-foundation-and-unesco-sheds-light-way-3000-companies-approach-ai?hub=701
