
When Scholarship Is Fabricated: A Nigerian Professor’s Encounter With an AI-Generated Ghost Citation


By Matthew Eloyi

For Prof. Moses Ochonu, a routine Google alert turned into something far more unsettling: a glimpse into what he fears may be the future of academic publishing if urgent safeguards are not put in place.

Ochonu, a professor of African History at Vanderbilt University, had received a notification that his work had been cited in a newly published article. For scholars, such alerts are common, often offering a moment of validation or, at the very least, curiosity about how one’s research is being engaged. But this time, something was off.

Clicking through to the article, Ochonu located the in-text citation bearing his name. It referenced a 2019 publication that he did not recognise. At first, the confusion was almost humorous. Had he forgotten his own work? Was there another academic publishing under his name?

The answer, he soon realised, was more troubling. Scrolling down to the reference list, he found the full citation: “Empathizing with the Oppressor: The Case of Nigerian Political Stockholm Syndrome.” It was attributed to him. But, he insists, it does not exist.

What he encountered, he believes, is a fabricated citation, likely generated by artificial intelligence and inserted into a scholarly article without verification.

Ochonu’s experience is not an isolated anomaly but part of a growing global concern in academic circles. With the increasing use of AI tools in research and writing, scholars and publishers are grappling with a new phenomenon: “hallucinated” references.

These are citations that look entirely legitimate (complete with plausible titles, dates, and authors) but have no basis in reality.

In Ochonu’s case, the phantom article was convincing enough to be included in a published work. That it bore his name added another layer of concern, raising questions not just about accuracy but about intellectual identity and reputational risk.

What has particularly alarmed the historian is where the article originated. According to him, the piece was authored by a Nigerian scholar and published in a Nigerian journal.

For Ochonu, this detail points to a deeper structural issue. Academic publishing in Nigeria has long faced scrutiny over peer review standards, editorial rigour, and the proliferation of lesser-known journals with inconsistent quality controls.

The introduction of AI into this already fragile ecosystem, he suggests, could accelerate a decline in standards if not carefully managed.

His reaction was stark: a warning that unchecked use of AI could “kill whatever is left of integrity in scholarly publishing” in the country.

The emergence of generative AI has transformed how knowledge is produced, accessed, and disseminated. For many researchers, it offers efficiency, helping to summarise texts, suggest structures, or even generate drafts.

But as Ochonu’s experience illustrates, the technology also carries risks, particularly when outputs are accepted uncritically.

Unlike traditional errors, AI-generated inaccuracies can be systematic and difficult to detect at a glance. A fabricated citation does not immediately appear false; it blends seamlessly into the conventions of academic writing.

This raises pressing questions: Who is responsible for verifying sources? Are authors relying too heavily on AI tools? And are journals equipped to detect such issues during peer review?

At its core, the issue is not just about one false citation. It is about trust. Academic publishing relies on a chain of credibility: authors produce work grounded in verifiable sources; reviewers scrutinise it; editors ensure standards are upheld. When any link in that chain weakens, the entire system is at risk.

For younger scholars, especially in environments where pressure to publish is intense, the temptation to cut corners using AI tools may be significant. Without strong institutional checks, such practices could become normalised.

Ochonu’s discovery reads less like an isolated grievance and more like an early warning signal. His experience underscores the need for Nigerian universities, journal editors, and researchers to rethink their engagement with emerging technologies. It calls for stricter editorial policies, improved peer review mechanisms, and greater awareness of AI’s limitations.
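One partial safeguard journals could adopt is automated reference screening: checking each cited title against a trusted bibliographic record before publication. The sketch below, in Python, is a simplified illustration of the idea; the small local list of verified titles is a stand-in for what would, in practice, be a query to a real bibliographic database or service.

```python
from difflib import SequenceMatcher

# Illustrative stand-in for a verified bibliographic index; a real
# screening tool would query a bibliographic database instead.
VERIFIED_TITLES = [
    "Colonial Meltdown: Northern Nigeria in the Great Depression",
    "Africa in Fragments: Essays on Nigeria, Africa, and Global Africanity",
]

def best_match(title: str, index: list[str]) -> tuple[str, float]:
    """Return the closest indexed title and its similarity ratio (0 to 1)."""
    scored = [(t, SequenceMatcher(None, title.lower(), t.lower()).ratio())
              for t in index]
    return max(scored, key=lambda pair: pair[1])

def screen_reference(title: str, index: list[str],
                     threshold: float = 0.85) -> bool:
    """True if some indexed title is close enough to count as verified."""
    _, score = best_match(title, index)
    return score >= threshold

# The phantom citation described in this article finds no close match:
suspect = ("Empathizing with the Oppressor: "
           "The Case of Nigerian Political Stockholm Syndrome")
print(screen_reference(suspect, VERIFIED_TITLES))  # False
```

A check like this cannot prove a reference is fabricated, only that it could not be confirmed; flagged citations would still need a human editor to follow up.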

Because if a scholar can be cited for a paper he never wrote, the implications extend far beyond personal inconvenience. They strike at the very foundation of knowledge production.

In the end, the question is not whether AI will shape the future of academic publishing; it already is. The real question is whether institutions will adapt quickly enough to ensure that, in the process, truth is not replaced by something that merely looks like it.
