What AI Reveals About Judgment, Risk and Reputation

In Shakespeare’s Othello, first performed in 1604, the villain Iago offers a stark reminder of the worth of a good name: “Good name in man and woman … is the immediate jewel of their souls.” And when that good name is stolen, he adds, it “makes me poor indeed.” Four hundred years later, the value of a good name has not changed. Reputation remains one of the most precious and fragile assets any person or institution, large or small, can possess. Deloitte Australia learned this the hard way.
In an “independent assurance review” commissioned by the Australian Department of Employment and Workplace Relations, Deloitte delivered a report that was later found to contain fabricated references and quotations. Deloitte admitted to using generative AI to help produce it. But whatever human review existed clearly failed at the most basic tasks: checking the work and verifying sources before the final product went to the client.
The consequences were swift and costly. The report was corrected and reissued, and Deloitte agreed to repay part of its fee. It is tempting to frame this as a simple story about AI “hallucinations” and sloppy quality control. But that interpretation misses the deeper point. This was not just a technical failure. It was a lapse in judgment, the hardest thing to preserve in the fog of speed, confidence and convenience.
And this is where AI becomes insidiously dangerous: not because it makes us careless, but because it can give us the illusion of courage. AI is not making us brave. It can, however, make us feel brave, capable of bold and even heroic action. Courage and bravery are not the same thing, though both involve acting in the face of fear or danger. Courage carries an additional requirement: judgment. It asks not just “Can I do this?” but “Should I, and on what basis?”
Aristotle famously placed courage between two vices: cowardice and rashness. The coward retreats from danger; the rash person charges in without thinking. The courageous person acts, but only after deliberation, seeing the danger clearly and choosing to proceed anyway.
AI is changing the texture of courageous action. It can reduce the friction of acting so dramatically that the act feels easy, bold, even fearless. Drafts appear in seconds. The report looks polished. Quotes arrive pre-packaged. The user feels a surge of confidence: we’ve got this.
But confidence is not courage. And speed is not judgment.
What AI often sells is a kind of artificial courage: the feeling of having decided without the burden of deciding, the sense of accomplishment with the effort reduced or obscured.
Deloitte’s episode of what might be called “cubicle heroism” was not malicious. It was ordinary. It reflected the quiet thrill of doing more, faster; the allure of authoritative-sounding prose; the assumption that review can be light because the output looks reliable. The result, however, was anything but reliable.
And haste, especially when amplified by an institution’s credibility, can make us “poor indeed.” The temptation is only intensifying.
McKinsey’s global survey, The State of AI in 2025: Agents, Innovation and Transformation, published in November 2025, reports that 88 percent of respondents say their organizations regularly use AI in at least one business function, up from 78 percent a year earlier.
The same research points to a shift beyond large language models used as “predictive text” engines and toward agentic systems that can plan and execute multi-step workflows. McKinsey reports that 23 percent of respondents say their organizations are already scaling AI agents in at least one function, while another 39 percent are experimenting with them.
McKinsey’s conclusion is clear: organizations with an ambitious AI agenda see the greatest benefits. This is exactly where the danger is acute.
Because the more AI can do, the easier it becomes for humans to stop judging. Agents don’t just suggest; they act. And when action is cheap, organizations begin to confuse activity with results, and motion with progress.
AI can expand what is possible at astonishing speed. It can also shrink the space in which we pause, question and verify. That shrinkage is the real danger, because judgment lives in that space. Judgment is not a footnote or a caveat; it is the whole game. If AI is the lever, judgment is the fulcrum. Without it, the lever does not lift; it merely moves, usually further than intended.
Judgment determines when to use AI. Not every task deserves automation. Some work matters precisely because it forces deliberation: strategy, hiring, operational decisions, clinical or legal judgment and high-stakes reputational communication. Using AI there is not inherently wrong, but it raises the required level of review rather than lowering it.
Judgment shapes how to use AI, with AI generating options and humans providing direction. Clear purpose, constraints and context become more important, not less, as systems grow more autonomous. “Do this for me” is no longer enough. “Do this within these constraints, to this standard of evidence, and with this level of certainty” is the better approach.
Judgment filters truth from noise. The Deloitte episode is not an outlier; it is a predictable failure mode to which every AI user is vulnerable. Generative AI can be fluently and confidently wrong in ways that look credible. If we treat fluency as accuracy, we will ship errors at scale.
Judgment protects what must remain human. Trust, accountability and moral responsibility do not automate well. Neither does leadership. We can delegate the writing; we cannot delegate the ownership.
True courage in the age of AI will rarely look dramatic, but it should not be invisible. It will be procedural, transparent and sometimes frustratingly slow.
It will look like this:
- The courage to slow down when the tool makes it easy to accelerate.
- The courage to verify when the output sounds polished enough to ship.
- The courage to disclose when AI has meaningfully shaped the work product.
- The courage to say “I don’t know” rather than accept a plausible-sounding answer.
- The courage to accept short-term friction to avoid long-term reputational damage.
This is not an anti-AI position. It is a pro-accountability one. As a published author, speaker and frequent contributor of opinion pieces and articles, these are the standards I hold myself to. As a board director, often in discussions about how AI can improve the bottom line, these are the same standards I ask of others.
AI will provide speed, scale and integration. People must provide insight, values and context. When those come together, the results can be remarkable. When they don’t, AI simply amplifies the haste.
The paradox of AI is that it can make us feel fearless while quietly putting our work, and our reputation, at risk. It can make acting easier while making the consequences harder to contain. It can make us look competent while eroding the habits of verification, doubt and reasoning that earned credibility and competence in the first place.
So, the important question is not whether AI will make us braver. It won’t.
The real question is whether we will keep exercising judgment when AI offers a convincing substitute: the illusion of courage, confidence without accountability, speed without scrutiny, output without ownership. Because that is how a “good name” is tarnished. And that is how, in Iago’s words, we can be made “poor indeed.”




