Personal Articles

Why AI Hallucinates—and What That Reveals About Human Intelligence

Recently, OpenAI released a study exploring why AI hallucinates. They discovered that hallucinations are a direct consequence of a fundamental aspect of machine learning: the success metric. When success is defined by accuracy, AI tends to hallucinate more. Instead of acknowledging a lack of information and requesting clarification, the drive for higher accuracy leads the model to generate an answer in the hope that it might be correct. After all, there's always a chance that a lucky guess could be right.

This is where probability outweighs reasoning. OpenAI also discussed potential solutions to this issue. Under the proposed scoring, an AI that confidently provides a wrong answer is penalized more heavily than one that simply admits it lacks sufficient information. This adjustment highlights a deeply human trait: humility. While expressed in varying degrees and forms, humility is something all humans possess; it guides and shapes how we learn throughout our lives.
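
To make the incentive concrete, here is a small, hypothetical sketch in Python (my own illustration, not OpenAI's actual evaluation code). It compares a plain-accuracy score with one that penalizes confident wrong answers, and shows why a model that is only 20% sure does better by guessing under the first rule and by abstaining under the second.

    # Toy illustration (not OpenAI's code): how the scoring rule changes
    # whether guessing or abstaining is the better strategy for the model.

    def accuracy_score(answer, correct):
        """Plain accuracy: 1 for a right answer, 0 for anything else,
        including an honest 'I don't know'."""
        return 1.0 if answer == correct else 0.0

    def penalized_score(answer, correct, penalty=2.0):
        """Penalize confident wrong answers; abstaining costs nothing."""
        if answer is None:  # the model admits it doesn't know
            return 0.0
        return 1.0 if answer == correct else -penalty

    p_right = 0.2  # the model is only 20% sure of the right answer

    # Expected score under plain accuracy:
    guess_accuracy = (p_right * accuracy_score("right", "right")
                      + (1 - p_right) * accuracy_score("wrong", "right"))  # 0.2
    abstain_accuracy = accuracy_score(None, "right")                       # 0.0
    # Guessing always looks at least as good, so the model guesses.

    # Expected score with the penalty:
    guess_penalized = (p_right * penalized_score("right", "right")
                       + (1 - p_right) * penalized_score("wrong", "right"))  # -1.4
    abstain_penalized = penalized_score(None, "right")                       # 0.0
    # Admitting uncertainty now wins, which is exactly the "humility"
    # the adjusted metric rewards.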

This episode is one of many that demonstrate the limits of mimicking human intelligence in machines. The act of learning and acquiring intelligence goes beyond even our own understanding. What does it truly take to learn? To be knowledgeable? And do these questions matter if the ultimate goal is simply to have a machine perform like a human? Or is the real goal to have it perform better?

The truth is, when we examine this case, where something as seemingly straightforward as using accuracy as a success metric led to one of the most persistent challenges in AI over the past five years, we begin to realize that learning itself may involve elements we don’t fully understand. And these elements can have consequences that are more visible than the concepts behind them.

The question of how to measure intelligence and understanding has long been studied in pedagogy. Should we evaluate a child based solely on correct answers, regardless of how they arrived at them? Or should we assess their reasoning process? And if so, how can we truly know what that process was?

AI raises countless questions because, when examined closely, the intelligence part is something humanity itself hasn’t fully defined. Interestingly, researchers found that prompts encouraging reasoning and step-by-step thinking tend to produce more accurate AI responses. Isn’t that similar to how we guide children? Leading them through their thought process often results in better answers.
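
As a hypothetical illustration of that parallel (the wording and the ask_model helper below are mine, not taken from any particular study or API), here is what the two prompting styles might look like side by side:

    # Toy illustration of the two prompting styles; `ask_model` is a
    # stand-in for whatever chat model API you happen to use.

    question = "A school buys pencils in packs of 12. How many packs cover 150 students?"

    direct_prompt = question + "\nAnswer with a single number."

    step_by_step_prompt = (
        question + "\n"
        "Think through the problem step by step, explain your reasoning, "
        "and only then give the final number."
    )

    # In practice you would send both prompts to the same model and compare:
    # direct_answer = ask_model(direct_prompt)
    # reasoned_answer = ask_model(step_by_step_prompt)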

The ability to connect diverse subjects and areas of knowledge is essential to understanding AI, and this is just one example. To grow as a professional in this field, technical skills are important, but I would argue that the ability to look beyond numbers, models, and screens is what will truly define success in the AI era.

Why Should We Care About AI Ethics?

Ethics is the study of right and wrong. It is a set of moral principles that guide human behavior. In everyday life, ethics helps us navigate complex decisions, from how we treat others to how we use power and resources. In the context of technology, and especially artificial intelligence (AI), ethics becomes even more critical because the decisions made by machines can affect millions of people, often invisibly.

AI ethics is a multidisciplinary field that explores how to design, develop, and use AI systems in ways that are fair, transparent, and aligned with human values. It asks questions like: Who is responsible when an AI system causes harm? How do we ensure AI doesn’t reinforce bias or discrimination? Can we trust AI to make decisions that affect our lives? These questions are already shaping policies, products, public opinion, and, increasingly, our daily lives.

AI is transforming society: from how we work and learn to how we communicate and make decisions. But with great power comes great responsibility. If we don’t think critically about how AI is used, we risk reinforcing existing inequalities, losing control over our data, and creating systems that are efficient but unjust.

In a world where many rush to use AI for shortcuts, those who pause to understand its impact will stand out. Caring about AI ethics is not just about avoiding harm; it’s about building a better future. It’s a mindset that reflects maturity, responsibility, and leadership. By learning how to use AI thoughtfully and transparently, you’re gaining a skill that will shape your identity as a professional and as a human being. Employers, educators, and communities are looking for people who can navigate this new era with integrity. Students who understand the ethical dimensions of AI will be better prepared to lead in their fields, make informed choices, and contribute to a more inclusive and equitable digital world.

This text was written as part of the AI Literacy pathway from St. Lawrence College, ON, Canada. Full work can be seen at: URSLC/exai

Contact

Location

Kingston, ON, Canada

Email

jessicaynsato@gmail.com

LinkedIn

linkedin.com/in/jessica-sato