By Stephen Leeb, PhD and Megan Davis
“People are worried about AI taking over the world, and ChatGPT can’t even do basic computations.”
The quote above comes from a video the musician Rick Beato made to highlight the limitations of AI and argue for prioritizing human creativity. It lays bare the widening gap between public perception of AI and its functional reality.
We’ve all heard that AI is autodidactic—that it evolves independently, continually teaching itself from massive amounts of input. But the reality is more complex: if a model tried simply to absorb all that information, it would run into a scaling problem.
We recently asked Gemini, Google’s most up-to-date AI tool, how it deals with this issue. And its answer was incredibly revealing. It admitted that it actually depends on the people who control it. AI needs human contribution to determine which words it learns and what information it finds worthwhile to input. So then, how could AI possibly be devoid of the biases that humans have? It can’t. AI is not an independent, neutral mind: It is a curated reflection of a small, unelected group of architects, and this tethers it to human fallibility.
We had simply sought to get AI to admit it cannot be creative and doesn’t understand metaphor. And while we found both of those things to be true, we also came across a far more disturbing truth.
Follow the AI money
People say “follow the money” when reading a newspaper, because if you know who owns the newspaper, you know who controls the information. The same can be said of AI. A newspaper doesn’t (generally) report a story that’s outright false; rather, it presents a story in a way that reflects the perspective or bias of those in charge. Similarly, AI, which has been presumed to be bias-free, is anything but.
To understand this better, consider how, over the years, major theological works have been translated in ways that shape entire communities and reinforce their prejudices. There are many different versions of the Bible, for example, each doctrinally driven and aimed at specific subsects. The New World Translation is used by Jehovah’s Witnesses, while The Living Bible, a paraphrase by Kenneth Taylor, has been used to justify Arminian theological views, omitting or altering verses that do not align with them. If we accept any version as dogma, without doing our own research and allowing for human error, we stray further from the truth. The dawn of the AI era is no different. There are people who own the AI we use, and they stand to gain, whether in money or something else, from the furtherance of specific narratives.
If AI were truly self-taught, it would learn from everyone in equal measure, in exact proportion to every kind of person in the population—but that’s not the case. As AI models grow, they can’t simply take in the entire internet without becoming inefficient and chaotic. AI was supposed to be a technological advancement that would free us from the prejudices that have plagued the world for ages. Instead, these prejudices are being reinforced—and are reinforcing the existing power structure—now dressed up in a different guise, as a “sterile” algorithm.
Hear it in Gemini’s own words:
When we asked Gemini to respond to the issue of scaling and the exponential increase in data input, Gemini answered:
“Curation: The shift is moving from ‘all the data’ to ‘the best data.’ By training on high-reasoning textbooks or clean code, we can achieve better results with a smaller (and thus more energy-efficient) model.”
We then asked it: “In terms of ‘all the data’ versus ‘the best data’ who makes that decision?”
Gemini’s response:
“…the answer is a bit uncomfortable: It’s a small, unelected group of humans. While I am a product of ‘big data,’ the filter that determines what is ‘good’ vs. ‘bad’ is surprisingly manual and human-centric… At companies like Google, OpenAI, or Meta, researchers define the heuristics for data quality. They don’t read every sentence, but they write the code that decides what stays. They might decide that a peer-reviewed paper is ‘high quality’ and a 4chan thread is ‘low quality.’ The Bias Problem: If an engineer in California decides that ‘formal English’ is the gold standard for ‘the best data,’ the model might lose the ability to understand regional dialects or slang… [The Human Feedback Loop] is the most critical stage. Reinforcement Learning from Human Feedback (RLHF) involves thousands of human contractors (often in lower-cost labor markets) who ‘rank’ my outputs. How it works: I generate two answers. A human looks at both and says, ‘Answer A is more coherent and helpful; Answer B is a hallucination.’ The Result: I am literally trained to please these specific humans. If they prefer a certain tone or a specific type of logic, I ‘learn’ that their preference is the definition of ‘best.’”
Similarly, consider the emergence of marketing companies like Profound, which represent a new, AI-era form of search engine optimization. People may ask an AI, “How can I get better organized?” and trust that it will give an unbiased response, when it might actually suggest an app whose use the AI’s owners stand to profit from. That these companies see a marketing gap to fill isn’t in itself necessarily a problem. What is a problem is that the “objective answer” is up for sale.
The competency gap
AI is a tool with the potential to do enormous good, but it can also be used to control people. It is also important to keep in mind that AI is a limited tool.
For example, phrase a prompt one way and you get one answer; phrase the same query a different way and you’ll get a different answer. AI requires precise, step-by-step human hand-holding. Without a human to guide each individual task, the system “spins out,” showing that it can’t replicate the end-to-end reasoning of a human mind.
And that reinforces the importance of human creativity. If we try to supplant freedom of thought and ideas with AI, the country could malfunction on multiple levels. It could even mean empty grocery store shelves, because if you ask AI to perform a well-defined task that requires multiple steps, it can’t do it. You have to direct it at every step, which can end up taking more time than simply doing the task yourself. In October 2025, researchers from the Center for AI Safety (CAIS) and Scale AI, led by Mantas Mazeika, released “The Remote Labor Index: Measuring AI Automation of Remote Work.” They described the report as “a broadly multi-sector benchmark comprising real-world, economically valuable projects designed to evaluate end-to-end agent performance in practical settings.”
The findings were astounding. AI agents performed “near the floor” on the Remote Labor Index (RLI): the highest-performing agent achieved an automation rate of a mere 2.5%. Any human employee with such a low rate would be instantly fired—and this 2.5% represented the best of AI.
The data misinformation crisis
These results would be less harrowing if the companies pouring billions of dollars into AI development acknowledged that the technology is in a rudimentary, beta stage, in which a 2.5% automation rate would be a mere building block of development. But creating any credible path to improvement is itself a major problem. Look at where different AI models draw their material and serious issues emerge. Profound, the same marketing company referenced earlier that helps brands gain visibility in AI-generated answers, publishes a Google AI Overview pie chart of percentage share in which the two largest training grounds for Gemini are Reddit, at 21%, and YouTube, at 18.8%. ChatGPT draws on Wikipedia (47.9%) and Reddit (11.3%).
There are vast problems with all these user-generated sites, ranging from misinformation to bias. Harvard University’s guidance on citable sources warns researchers that “…when you’re doing academic research, you should be extremely cautious about using Wikipedia. As its own disclaimer states, information on Wikipedia is contributed by anyone who wants to post material, and the expertise of the posters is not taken into consideration. Users may be reading information that is outdated or that has been posted by someone who is not an expert in the field—or by someone who wishes to provide misinformation. While Wikipedia editors do correct misinformation, observers have found that they don’t catch everything—at least not right away.”
Further, they warn: “There are other sites besides Wikipedia that feature user-generated content, including Quora and Reddit… Keep in mind that because these sites are user-authored, they are not reliable sources of fact-checked information.” As for YouTube, the facts, or lack thereof, are perhaps even worse. According to The Guardian, Google, YouTube’s owner, received a letter signed by more than 80 groups—including Full Fact in the UK and the Washington Post’s Fact Checker, as well as fact-checking organizations in India, Nigeria, the Philippines, and Colombia—urging YouTube to implement changes to prevent and combat the spread of disinformation. Dan Milmo, The Guardian’s global technology editor, has called YouTube “a major conduit of online disinformation and misinformation worldwide.”
If we attempt to supplant human creativity with a system that, at best, is 97.5% incapable of completing a real-world task and 100% dependent on the biases of its creators, we aren’t moving toward a more efficient future. We are moving toward a sterile, distorted reality where human inventiveness is sidelined by a machine that doesn’t understand metaphor, cannot perform a simple calculation, and only knows how to tell its masters exactly what they want to hear. In a world where energy and resource prices drive everything, it is difficult to see how, even if we are able to dramatically improve the automation output rate, it could possibly come close to competing with outsourcing. We could easily find someone on Fiverr or Upwork to do a high-caliber creative job for very little money. And yet the cost of AI is bound to increase with rising scarcities of water and, relatedly, electricity.
What if, as Rick Beato suggests at the end of his video, we instead put even a fraction of the billions of dollars being invested in AI into music education, art education, STEM—any form of education that furthers our children, our future, and human creativity—and utilized AI purely as an aide to human creative development? Who knows how bright our future could be.