Maximum AI value will be unlocked by humans and AI working together, not by automating people out of the loop. “There is far more opportunity in augmenting humans to do new tasks rather than automating what they can already do,” explained economist Erik Brynjolfsson in his seminal paper, The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. With this framing, AI becomes a digital partner that helps humans make better decisions and achieve better outcomes.
These ideas counter the common business approach of using technology to automate away human costs. After years of implementing AI as co-founder of the Cloverpop decision intelligence platform, I’ve come across many frameworks for prioritizing AI initiatives using traditional business measures like return on investment, time to market, technical feasibility and business risk. However, all of these frameworks lack a filtering step to reveal projects that meaningfully augment human capabilities.
To bridge that gap, I propose an intuitive and radically human-centric framework for making decisions about AI projects using shared human values of love, forgiveness and compassion. We can apply this framework to AI projects by asking three questions:
While these values are foreign to AI, they are inherent to people, so this framework surfaces projects that maximally align with human values and culls those that do not. Culling low-value automation projects removes substantial downside risk, and highlighting opportunities to augment humans with AI will unlock the greatest business value.
As an acid test, imagine these two stories in the press:
The first story warns of AI’s dystopian downside, while the second promises true AI innovation. So, let’s consider specific examples of how we might use love, forgiveness and compassion when deciding on AI initiatives.
Using AI tools to help us do our jobs and improve our lives is an enormous productivity opportunity. However, there is a slippery slope between using AI to clean up a thoughtful employee review and writing a deceptively convincing review based on a few words dashed out by a manager in the minute before their next meeting. We must be careful.
Authenticity matters when love is required, including love for employees, customers and other stakeholders, and even for competitors and regulators. We all know what love is. Pragmatically, humans must remain in the loop whenever love is involved.
For example, we should take extraordinary care before using AI to:
Instead, we can look for ways AI can help provide more loving experiences, products and services. For example, we can use AI to:
Using AI to unlock human creativity, connect people to help and improve safety are loving strategies. How often do we have a chance to bring more love into our companies? Let’s seize the opportunity AI gives us.
Automating situations that are likely to require human forgiveness dehumanizes us. We cannot forgive a driverless car that kills a pedestrian, because the vehicle cannot receive forgiveness: our forgiveness does not affect it. Automated harm lessens us because of this missing reciprocity.
We must avoid using AI to control areas of our businesses where forgiveness reigns. In a negative sense, it is inauthentic for AI to ask for and receive forgiveness when it has no human standing and cannot correct itself or make amends. In a positive sense, wanting and receiving forgiveness is a powerful individual and organizational motivator to drive change. Thus, when automation distances us from our mistakes or numbs our feelings, it destroys the business benefits of forgiveness. This prescription includes forgiveness for mistakes or personal harm by employees, customers, organizations and all stakeholders.
For instance, we must be cautious before using AI to:
Instead, let’s look for ways AI can make forgiveness faster and easier. For example, we can use AI to:
Forgiveness benefits the forgiver and the forgiven. If we care to help our employees, customers and broader society, we can give them no greater gift and receive no greater motivation for positive change.
AI hides inhuman complexity beneath deceptively simple faux-human exteriors. Yet, despite its mind-boggling complexity, there is no ghost in the machine. If those ghosts were real, they could feel our pain and wish for solace. Machines cannot.
We will thrive by avoiding AI control of situations where compassion is necessary. Our ability to perceive suffering and our desire to relieve it are the essence of human dignity. We must not separate our awareness of suffering from our desire to give relief. In practical terms, compassion combines love and forgiveness, and it delivers the business benefits of both.
So, AI should not have a controlling seat at the table when we:
Instead, we can use AI to dignify our companies by making it safer and easier for us to feel compassion. For example, we can use AI to:
AI cannot be compassionate. But it can help unleash compassion that often lies dormant in our companies.
Love, forgiveness and compassion are the foundation of human experience. Our businesses take on dangerous downside risks and miss enormous opportunities to create value if we ignore these precious human values in the race to implement AI.
We must not diminish ourselves by allowing AI to curtail our feelings of love, forgiveness and compassion, and we must protect ourselves against the dehumanization caused by AI-created harm. We can thrive by using AI to unlock more opportunities to love, forgive and be compassionate, choosing AI projects that align with those human values. Finally, we must “never let a good crisis go to waste.” Business situations that involve love, forgiveness and compassion represent powerful engines of positive change. Let’s fuel business growth by elevating human capabilities with the power of AI.