
Interest in Artificial Intelligence (AI) and related technologies is growing. AI has won wide acclaim for its capacity to support rapid decision-making and to handle large volumes of data efficiently.
In Africa, there is growing interest in the potential of AI to enhance value chains across both the public and private sectors. Ultimately, this could act as a driver of economic and social transformation on the continent.
In Zimbabwe, there is growing excitement around AI-related activity. Educational institutions, for example, are beginning to recognize the need to update their curricula to keep pace with a changing technological environment.
Ultimately, the aim is a workforce skilled in using technologies such as AI to advance the Education 5.0 initiative towards Vision 2030.
Also notable is the rise of hubs such as the Zimbabwe AI Lab and Tech Hub Harare, which aim to nurture tech entrepreneurs, with AI tools expected to play a significant role.
The ICT minister has also emphasized the importance of forming a multi-stakeholder AI committee to serve as an advisory body for players in Zimbabwe's knowledge and ICT sector.
However, even with these developments, we must remain vigilant. A key concern is a phenomenon known as AI hallucinations. Simply put, an AI hallucination is the misplaced confidence with which an AI system presents information that seems reliable but is in fact incorrect, contradictory, or entirely fabricated.
Some have described this as nonsensical output that requires thorough checking before it is shared or published.
A research paper in the journal Humanities and Social Sciences analyzed 243 cases of AI-generated misinformation and identified several recurring types of error.
These included factual errors, contradictions, lapses in logic and reasoning, and even fabricated content. The findings underline the need to apply an extra layer of scrutiny to any information obtained through AI tools.
A double layer of false confidence is at work, coming from both the AI system and the end user.
First, the AI system relies heavily on the questions and prompts supplied by the user to generate its responses. Much can go wrong in this process.
Some outputs may be grounded in real facts and events, while others are tenuous, speculative, or simply wrong. The danger is that all of it is delivered with equal confidence, leaving the user to work out what is true.
The second layer of false confidence comes from the human user. Here, dynamics such as impression management come into play, as people may use information, verified or not, to push a particular viewpoint or to create a favorable impression. This often happens when users fail to interrogate or verify AI-generated content before relying on it to make decisions or bolster their arguments.
Against this background, it is no surprise that problems have already surfaced in parts of Zimbabwe's economy as a result of the uncritical use of AI. Take the legal profession, for instance.
Recently, the Supreme Court of Zimbabwe received an apology after a filing was submitted that contained false case law and incorrect interpretations produced by AI. The problem has also hit the legal community in South Africa, where judges have rebuked lawyers for citing case law that does not exist, hallucinated by AI tools.
In one prominent example, an acting judge directly linked the fabricated legal references in a submission to the use of AI-generated material.
What action should we take?
First, we need to acknowledge the real benefits of AI technologies in simplifying our work and daily lives.
Given their growing popularity, it is clear we are only beginning to tap their full potential. These tools will keep advancing and become ever more embedded in our everyday routines.
Second, the responsibility and duty of care when using AI rests squarely with us, the end users. Relying on unverified information is unacceptable and can land us in risky, even legally precarious, situations.
I asked ChatGPT, a widely used generative AI platform, for guidance on managing the problem of AI hallucinations.
The reply stated: "When working with AI, ensure you cross-check its information against reliable outside sources, examine the evidence or citations it offers, and consider it as a helper, not the ultimate source of truth."
Third, and most importantly, we should not surrender our autonomy and critical thinking in the face of technological advances. Our trust in AI outputs should be tempered with healthy skepticism and a willingness to question them.
We must not allow ourselves to be led by the machines, letting our capacity for critical thought weaken under the influence of AI-generated content.
Staying alert and exercising human judgment are essential skills for today and tomorrow.
We are truly at the height of a technological moral panic, an era in which our capacity to exercise our executive functions is diminishing just when we need it most. It is an age in which false voices abound, spreading at the click of a button, often without verification or reflection.
Yet this is precisely the time when we must be most careful and rise to the challenge of building the skills and habits that demonstrate a commitment to honesty, critical thinking, and verification.
This is what defines us as humans, and it is how we exercise our agency.
Artificial intelligence can be a powerful partner, but humans must remain the ones who decide what is accurate, reliable, and worthy of trust.
- Chinyamurindi is a professor in the Department of Applied Management, Administration, and Ethical Leadership at the University of Fort Hare in South Africa. He writes in his personal capacity. These weekly New Perspectives articles, published in the Zimbabwe Independent, are coordinated by Lovemore Kadenge, an independent consultant, managing consultant at Zawale Consultants (Pvt) Ltd, former president of the Zimbabwe Economics Society, and former president of the Chartered Governance & Accountancy in Zimbabwe (CGI Zimbabwe). — [email protected] or mobile: +263 772 382 852.