Good Judgment Is a Competitive Advantage in the Age of AI

By Matt Dallisson, 02/10/2023

It wasn’t that long ago that AI was considered the domain of elite experts or data scientists. Companies spoke effusively about its potential to transform business, but only a fraction of their employees had access to it. With generative AI, the game has changed: suddenly everyone is a potential programmer thanks to tools like OpenAI’s ChatGPT, Google’s Bard, or Anthropic’s Claude 2. But as we navigate these technological leaps, there’s still a fundamental, longstanding capability every organization will need in order to realize AI’s true potential: judgment.

That idea of judgment in the age of AI was a central tenet of the work of our late colleague and visionary friend Alessandro Di Fiore. Alessandro believed in putting humans at the center and considered technology a way to help people augment their creativity, autonomy, and critical thinking. As a frequent contributor to HBR (and the former chairman of Harvard Business Review Italia), he often reflected on how innovation, leadership, and AI go hand in hand. And in this 2018 article, he argued that as AI became more accessible to all employees, judgment would become as crucial as any technical skill.

Alessandro argued that judgment will be the real competitive advantage for organizations as AI systems become a common operating standard. But reinforcing skills alone won’t develop better judgment. Companies will need to radically rethink how they view and deploy judgment in order to adapt to the pace of change. Specifically, Alessandro saw three facets of judgment as critical to this moment:

Distribution of Judgment

This speaks to the idea of who has the authority or permission to exercise judgment. As an early advocate of the democratization of judgment, Alessandro knew that making good decisions was not something confined to the C-suite. As knowledge, data, and technologies become more widely distributed, judgment must also be more widely distributed.

Companies need to understand how to leverage and scale generative AI; trying to prohibit access will ultimately be a futile effort. Making sure your workers can access these tools and unlock their value is part of the transformation, provided they do so in a secure and controlled manner. This requires trust and communication. But leaders should recognize that an increasing number of new use cases and practices are bound to emerge bottom-up rather than be driven top-down.

Exercise of Judgment

This speaks to the act or process of deciding or forming an opinion. Alessandro considered judgment a continuous process rather than a single moment:

Judgment is not only exercised in the moment of making a decision, assessing data and information. Judgment is broader than that, and it starts with asking the right questions, framing the right problem, evaluating the broader context. Judgment is co-creative; it’s a journey.

This concept of judgment is even more true today. The rise of generative AI expands judgment beyond discrete decisions — it’s now a collaborative human-machine process.

With AI chatbots, the significance of contextual interaction is evident. Judgment emerges through integrated human-AI dialogue, not in separate spheres. Our recent ChatGPT experiment on how generative AI can enhance 10 popular management practices is a case in point. In a meaningful dialogue with ChatGPT, we had to exercise judgment before, during, and after prompting: providing the right context, crafting the best chain of prompts, and carefully interpreting the recommendations.

The experiment confirmed Alessandro’s intuition: the best outcomes arise at the intersection of human and machine intelligence. The future of sound judgment lies in this symbiotic co-creation process. Such a shift demands a comprehensive people transformation and reskilling effort, equipping workers with the judgment skills essential for the AI era.

Control of Judgment

This speaks to the systems or processes in place to oversee or check decisions. Traditional methods of control are quickly growing out of date. Strict top-down oversight can stifle agility in this new paradigm. Yet totally unconstrained autonomy poses its own risks if principles of ethical AI development and deployment are not ingrained across the organization.

The solution consists of two parts: first, building trust and responsibility into the system with a code of ethics for fair, safe, and sustainable use, one that prevents AI models from producing inaccurate information or generating responses that contradict your company’s values; second, training users to set the right context for human-AI decision-making. That second part involves explaining appropriate boundaries for prompts and how to frame inquiries responsibly. Rather than micro-approving choices, leadership should focus on empowering workers at all levels with these skills.

In this vision, control transforms from bureaucratic gatekeeping to fostering collective responsibility. As Alessandro wrote, “Leaders have in first person a duty to set the right context and conditions to empower employees to make more autonomous decisions with the help of data and technologies. Giving freedom is good. But helping them exercise their freedom is more crucial.”

Alessandro’s vision underscores the need for sound judgment as AI reshapes society. His legacy remains a constant source of inspiration as we work towards a future where humans and technologies collaborate seamlessly, fostering innovation and progress.

And the future remains ours to shape through vision, ethics, and responsible innovation.
