This article is republished from Science and Culture Today.
COSM 2025 hosted a fascinating session on artificial intelligence (AI) with Uli Homann, Corporate Vice President in the cloud and AI business at Microsoft. Homann took us through a brief history of AI, reminding us that it was only three years ago — in November 2022 — that OpenAI first released ChatGPT to the public, and everyone went “gaga.”
But Homann quickly brought us back to reality. He said that the promise that ChatGPT represented artificial general intelligence (AGI) — “a ‘superbeing’ that would know everything” — is “fake.” Instead, he wryly commented that what we got was “autocomplete with ambition.” Homann nonetheless noted that “ambition and autocomplete” is “amazing.” Of course he’s right, but our current versions of AI are not the true AGI that everyone had been hoping for.
Thus, Homann was very clear that ChatGPT and other modern AIs cannot perform “reasoning.” He further said that AI has never had an “original thought” — a point we at Discovery would agree with, though we would add that AI has never even had any kind of a “thought,” full stop.
At bottom, AI is a very complex method of probabilistically predicting which numbers, pixels, or words are going to come next. But it doesn’t have any reasoning power to comprehend the meaning of what it’s processing.
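As a toy illustration of that kind of next-token prediction (my own sketch, not anything from Homann's talk), here is a tiny bigram model that "predicts" the next word purely from observed frequencies. It has no grasp of meaning at all; it simply counts which word has most often followed the current one:

```python
from collections import Counter

# Illustrative only: a minimal bigram "language model" built from a
# nine-word corpus. Real systems work over billions of tokens with
# learned probabilities, but the principle is the same: predict the
# likeliest continuation, with no comprehension of what it means.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(next_word("the"))  # "cat" follows "the" twice, "mat" only once
```

The output is fluent-looking continuation without any understanding, which is exactly the gap between prediction and reasoning the article is describing.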
The Assistant Phase
Homann’s presentation also gave insights into how Big Tech companies approached the release of AI, and he offered hints about where it’s going in the future.
Currently he says we are in the “assistant phase” of AI, where AI is given to the public to help us answer questions or solve problems. In this phase, we go to AI with a question, and it gives us an answer. It’s like the Star Trek computer, always on call, waiting to cater to your intellectual needs. Crucially, however, he noted that in this phase the AI “relies on you to provide the trigger in how you interact with it.”
But when AI assistants were first released, AI companies faced a problem: people were hesitant. According to Homann, many people were afraid of AIs, probably because of movies like The Terminator, and were reluctant to use them. Tech companies are aware of this hesitancy, so they have deliberately marketed these AIs to the public as the friendly and benign "copilot" or "assistant" — here to cater to your needs, not take over the world.

But this phase also has had its downsides. According to Homann, in this phase AI performance gains “didn’t always translate into measurable revenue growth.”
There have also been major problems with AI hallucinations. He said that when GPT-4 was first released in 2023, it had a hallucination rate of about 11 percent. GPT-5 has since reduced hallucinations to below 1 percent, but even so, hallucinations remain a problem that isn't yet fully solved.
I’ve seen this problem firsthand. Over the last 12 months, I’ve taught courses and graded student essay assignments. I could easily tell when AI had been used: some submissions included AI-hallucinated quotations that didn’t actually exist. This problem is more common than you might expect.
Three Levels of AI
Finally, Homann presented three levels of human interaction with AI.
In Level 1, humans ask AI for answers. This is exactly where we are now. We control what AI does, and it’s basically just giving us information and advice. This is the “assistant phase” mentioned above.
What’s coming next is Level 2, he said: “Human assigns, AI executes.” Most of the general public doesn’t quite have access to this yet, but I can imagine how in the future it could help me with my current needs. Right now I have about 8,000 scans of archival documents with useless numerical filenames that I’m going through one at a time to coherently rename and organize into folders. It’s a slow and arduous process. In an AI-Level-2 world, perhaps I could instruct an AI embedded within my laptop’s operating system to “Rename all these files according to the following file-naming protocols.” It would be done in seconds, saving me about a month’s worth of work!
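For what it's worth, the renaming chore itself is the kind of thing a Level 2 system would presumably script behind the scenes. A hypothetical sketch of what that might amount to (the folder, the `.jpg` extension, and the `archive_00001`-style naming scheme are my own illustrative assumptions, not an actual archival protocol):

```python
from pathlib import Path

def rename_scans(folder, prefix="archive"):
    """Rename numerically named scans in `folder` to a zero-padded,
    prefixed scheme, preserving their sorted order.

    Hypothetical example only: the glob pattern and naming convention
    are placeholders for whatever protocol the archive actually uses.
    """
    scans = sorted(Path(folder).glob("*.jpg"))
    for i, scan in enumerate(scans, start=1):
        scan.rename(scan.with_name(f"{prefix}_{i:05d}.jpg"))
```

The difference in a Level 2 world is not that such a script is newly possible, but that a plain-English instruction would be enough to generate and execute it.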
Level 3 is where it starts to get interesting. Here, “Humans and AI assign tasks to each other.” Perhaps this is already happening to some extent when people use AI schedule-makers to prioritize their workflow. For now, though, it’s completely optional: you can choose whether or not to follow the AI-generated task schedule. But imagine an AI assigning you tasks, and it isn’t optional.
This suggests a hypothetical AI Level 4 where “AI tells humans what to do.” This is my dystopian idea, and Homann didn’t explore this option in his presentation. But as a thought experiment, imagine that some tech executive assigns some AI its own managerial power to tell subordinate human employees what to do. Those rats in a corporate maze must now execute orders from their AI boss, or they’re out on the street looking for a job. I suspect many folks would prefer the Terminator version of AI to the tyranny of the soulless AI corporate boss!
An Objective Paradigm
What I appreciated about Homann’s presentation was its clarity and the paradigm it offered for seeing the progression of where AI has been, where it is now, and where it’s going next. He was clear-eyed about AI’s strengths and limitations, and also direct in telling us that AI cannot perform “reasoning” or have “original thoughts.” AI is a great tool, which has many strengths and many weaknesses. I only wish more of its advocates were as objective as Uli Homann was at COSM.
