
Behind AI Lies Innocent Human Beings

What’s the component of AI that’s most often neglected? The humans.


After years of AI buzz, the technology has taken real shape. We now have systems that can write, design, translate, and recommend for us. Not only that, we’re told we’re headed toward a future where AI could replace us all.


But even amid this momentum, recent trends show the market has become oversaturated. With such a heavy focus on innovation, nobody is talking about the most important part: the people who take responsibility for these machines.


When it comes to AI, we tend to focus on algorithms, datasets, and convenience. We think about the tasks the tools can do, but not about the people behind them. And if society at large is forgotten, the technology may end up defining us rather than serving us.


A big part of the issue is trust. One recent global study by KPMG International found that roughly 80% of people said they would be more willing to trust AI systems when solid organizational assurance mechanisms were in place. In other words, people aren’t saying they trust AI outright. They’re saying they want AI rooted in trust, transparency, and human accountability.


“When people talk about AI, they often forget the obvious: there are still humans behind it. I built CodeBoxx to bring that truth back into the spotlight. In business, every algorithm, every product, every line of code must serve human curiosity, intent, and problem-solving. Like every other tech breakthrough we experienced, I believe AI was never meant to replace people; it’s here to amplify what we’re capable of. That’s the mission that drives me: keeping humanity at the center of every technology we create,” adds Nicolas Genest, CEO and Founder of CodeBoxx.


Humans are critical for purpose and context. In healthcare, for instance, a machine might be able to detect a heart defect, but it can’t provide the treatment or perform the surgery. In that scenario, a human is still needed to save the patient’s life.


The same holds for any other form of workplace automation. The more capable systems become, the more they must rely on humans to maintain accuracy and integrity. If we build AI with humanity in mind, critical responsibilities, like a patient’s life, won’t be overlooked.


On the other hand, AI without human-driven values could look devastating. Imagine a world where machines had the final say in hiring, education, or lending, without considering the lived experiences of the people affected. Or a society where humans simply stopped coming into work because the robots handled it all. Without people at the center, humanity risks losing itself entirely.


This is why inclusion must be treated as a foundation for AI development; it’s not enough for humans to sit on the sidelines. To make systems that actually work, they must be designed by a wide range of voices, backgrounds, and perspectives. That way, AI doesn’t just get smarter; it carries less bias and becomes more transparent.


The important question is: what happens next? First, we have to accept that humans must remain central to modern innovation. That means building technology with people, their values, and their beliefs in mind, and then layering AI on top to support them. Second, it requires proper education and oversight. Before products launch, AI systems need rigorous research and testing. And humans must still make the tough decisions when AI cannot.


Today, let’s stop avoiding what has been missing for years: human dignity. AI is not a future concept; it’s already embedded in our everyday lives, so why not reframe the rules so that people can catch up? Otherwise, without proactive human-centered innovation, the world might default to a machine-only era.


One day, AI and humans might work in harmony. But only if that shift begins now.


So the opportunity awaits. Will people embrace the moment?
