
The Human-AI Relationship Relies on Trust

G. Craig Vachon discusses the application of AI across various arenas of the human life cycle, emphasizing the limits of AI given the narrow experience it gains in each dimension.

The future of AI depends on the technology complementing human intelligence rather than supplanting it. 

If we are being thoughtful, we don’t want AI to gain human-like intelligence. Instead, we should want AI to augment and complement human intelligence. 

Human intelligence is terrific. Until it is not. We humans forget things. We misremember things. We get stubborn and won’t learn new things. Our brains are designed to stop us from paying too much attention. We deprioritize difficult things. We like what we like, and will rationalize for those things. Human intelligence is amazing (understanding context, being creative, problem solving), until it isn’t. 

And yet, AI today gives us only the thinnest veneer of humanity. It is enormously narrow in its scope. And we desire AI to help solve humanity's challenges. AI shows us correlation, not causation. But its ability to surface correlations across huge data sets is an enormous strength, as long as we humans understand what the AI has actually learned.

AI’s challenges are many, but the two most significant are:

1) it often perpetuates the biases of the humans who trained it;

2) AI often fails the common-sense test due to the lack of context.

For example, a COVID AI diagnostic agent zeroed in on the typefaces different hospitals used to label scans, and concluded that the fonts used by hospitals with more severe caseloads were good predictors of COVID risk.
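The font anecdote above can be sketched as a toy example (all data and feature names here are hypothetical, not from the actual diagnostic system): a naive learner that picks whichever single feature best predicts the training labels cannot distinguish a causal signal from a spurious one that happens to correlate perfectly in the training set.

```python
# Toy illustration of "shortcut" learning: in the training data, the
# severe-case hospital happened to use font "B", so the font correlates
# with the label exactly as well as the genuine medical signal.

def best_single_feature(records, features):
    """Return the feature whose value agrees most often with the label."""
    def accuracy(f):
        return sum(r[f] == r["covid"] for r in records) / len(records)
    return max(features, key=accuracy)

train = [
    {"lung_opacity": 1, "font_b": 1, "covid": 1},
    {"lung_opacity": 1, "font_b": 1, "covid": 1},
    {"lung_opacity": 0, "font_b": 0, "covid": 0},
    {"lung_opacity": 0, "font_b": 0, "covid": 0},
]

# Both features score 100% on this data, so the learner cannot tell the
# spurious correlate from the causal one; it may well pick the font.
picked = best_single_feature(train, ["font_b", "lung_opacity"])

# Deployment: a new hospital uses font "B" for everything, including a
# healthy scan. A model that learned the font now predicts incorrectly.
new_scan = {"lung_opacity": 0, "font_b": 1}
prediction = new_scan[picked]
```

Nothing in the training data alone distinguishes the two features; only a human with context can see that a font cannot cause disease, which is exactly the common-sense gap described above.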

Right now, AI needs to earn our trust. And yet, trust is only gained through shared common experiences.
Humans need to interact with AI in real time in order to share the experience of its learning and training (reinforcement learning (RL) with human-in-the-loop (HITL) feedback). This creates an apprentice-like relationship: humans can correct, steer, and tweak the AI's training, explain the results, and thereby trust the ongoing process to ensure value alignment and social benefit. In short, humans act as mentors to AI. We think this is the next generation of AI (AI 2.0?).
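The apprentice-style loop described above can be sketched in a few lines (all function names and data here are hypothetical, for illustration only): the model proposes an answer, a human mentor reviews it, and every human-approved answer is fed back as new training signal.

```python
# Minimal human-in-the-loop sketch: model proposes, mentor corrects,
# corrections become training data for the next round.

def hitl_loop(model_predict, human_review, update, inputs):
    """Run each input past the model, let a human correct it, learn from it."""
    corrections = []
    for x in inputs:
        proposed = model_predict(x)
        approved = human_review(x, proposed)    # mentor accepts or overrides
        if approved != proposed:
            corrections.append((x, approved))   # disagreement -> new signal
        update(x, approved)                     # reinforce the agreed answer
    return corrections

# Toy "model": remembers the last human-approved label per input, else says 0.
memory = {}

def model_predict(x):
    return memory.get(x, 0)

def update(x, label):
    memory[x] = label

# Hypothetical mentor who knows that "spam" should be labeled 1.
def human_review(x, proposed):
    return 1 if x == "spam" else proposed

corrections = hitl_loop(model_predict, human_review, update,
                        ["ham", "spam", "spam"])
# The first "spam" is corrected by the mentor; once that feedback is
# applied, the model gets the second "spam" right on its own.
```

The key property is visible in the loop: every human intervention is both a correction and a shared experience of the training, which is what lets the mentor explain and trust the resulting behavior.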

The way forward is not for AI to supplant human intelligence, but to augment it. Human intelligence has weaknesses that AI (and rules-based, heuristic systems) can complement. The end goal must be that AI elevates humanity. AI 2.0 must include RL with HITL to ensure that humans trust it and ultimately benefit.

