Embedding AI Without Literacy Embeds Risk

As companies rush to embed AI into their operations, the governance debate is stalling. Regulators deliberate over directives, policymakers argue over frameworks and engineers debate technical controls. These questions matter, but they overlook the most immediate driver of responsible AI governance: the people who use these systems every day. Without investing in their workforce, organizations risk damaging their operations and finding themselves liable when things go wrong.
AI adoption is not waiting for governance to catch up
Companies are integrating AI tools wherever they can to capture efficiency and revenue gains, even without governance structures in place. Recent news from the UK illustrates this tension between governance and innovation. In the same week that the Treasury Committee warned that the financial sector’s embrace of AI risks causing “serious harm” to society and the economy, Lloyds Banking Group announced that AI adoption had increased its 2025 revenue by £50 million ($66.8 million).
The governance risk, then, is not just that AI is advancing rapidly. It is that AI is being embedded in workplaces where employees are not equipped to understand its limitations, failure modes or compliance implications. That gap is where new governance concerns arise.
The governance risks of using AI without literacy
The most predictable result of poorly governed AI adoption is what practitioners call “shadow AI.” Without formal training, workers turn to unauthorized consumer-grade tools to complete professional tasks, often without disclosure. In the UK, 81 percent of AI users do not disclose their use of AI to management. Sensitive business data can be fed into public models that store or reuse inputs for further training, creating new regulatory and reputational risks.
The problem deepens when employees don’t fully understand how AI works. Users may treat AI as a fact-retrieval search engine rather than the pattern-based prediction engine it is, failing to critically evaluate the accuracy of its outputs. Take, for example, the widely reported cases of licensed attorneys submitting AI-generated “hallucinations” in court filings. When users cannot evaluate AI outputs effectively, their employers bear the consequences, eroding the trust of customers and regulators.
Bias presents another governance challenge. AI systems reproduce patterns found in their training data. If employees fail to recognize discriminatory effects, they risk embedding systemic bias in consequential decisions. In 2021, this issue gained prominence in the US when automated mortgage-lending systems were reported to be 80 percent more likely to reject housing applications from Black applicants. Similar failures have appeared in algorithms used to screen welfare claims and job applications. From a governance perspective, this creates significant ethical, legal and reputational risks, not to mention wider implications for human rights and social justice.
Even where no harm occurs, low-literacy deployment limits the return on investment. Technology rollout is not the same as digital transformation. Without redesigned workflows and trained employees, AI produces isolated productivity gains rather than company-wide impact.
Building governance from the ground up
In Europe, this ground-up principle has already been codified. The EU AI Act embeds AI literacy as a legal requirement for staff involved in operating AI systems. In the absence of similar regulation in the US, companies must lead this effort themselves. Based on our experience supporting organizations with AI governance, a reliable path forward rests on three interlocking pillars.
The first is AI literacy, differentiated by role. For managers, literacy means knowing what questions to ask: How do we test for bias? Who is accountable for model performance? When should human review override AI outputs? Leaders must be able to assess whether AI is the right answer to a business problem, rather than simply the easy one.
For technical teams, AI literacy means responsible data management, model validation, performance monitoring and documentation. For end-users in other roles, such as recruiters using AI assessment tools, marketers writing AI-assisted campaigns or analysts using generative AI as a research assistant, literacy is practical and procedural. It includes understanding which tools are approved, verifying outputs, knowing how to raise concerns and applying human judgment.
The organizations we have worked with that are ahead of the curve tailor literacy training by role, treating it as a functional skill that carries accountability.
The second pillar is updated policies and procedures. Clear acceptable use policies reduce the likelihood of shadow AI, prevent overreliance on AI outputs and clarify accountability for AI-assisted decisions.
Policies governing AI procurement and the AI supply chain also need review. AI vendors should be subject to systematic due diligence covering training data management, bias mitigation procedures, monitoring capabilities and contractual clarity on liability. As we have written in the context of business sustainability, even well-intentioned organizations can undermine their governance efforts by relying on poorly audited supply chains.
The third pillar is a clear accountability framework across the AI lifecycle. This may include AI governance committees, responsible AI leads, board-level risk oversight or independent assurance providers. The structure will vary by organization size and sector. What matters is that responsibility is clearly assigned, and that governance is integrated into product development, procurement, compliance and risk management rather than treated as a separate exercise.
Responsible AI governance as an investment, not a hindrance
AI governance discussions will continue at the regulatory level. Standards will evolve, and enforcement priorities will shift. Many of these factors remain outside the control of any single company. Workforce capability does not.
Reframing AI governance in terms of workforce investment, updated policies and clear accountability puts agency back in the hands of business leaders. It also offers a positive counterweight to concerns about AI-driven job displacement: rather than sidelining workers, responsible AI governance empowers and upskills them. Organizations that take this seriously will be better placed to maintain the trust of customers, regulators and the public as scrutiny of AI adoption continues to grow.
Amelia Williams is a Senior Research Impact Officer at Trilateral Research with expertise in science communication at the intersection of emerging technologies, environmental issues, ethics and policy. At Trilateral, she supports the development and implementation of policy-related research projects, media outreach, and industry engagement.
