AI was initially designed with relatively modest aims: as a tool to improve efficiency. And in that role, it has impressive capabilities to enhance our lives in many ways, from boosting crop yields to earlier detection of disease. But within a relatively short time, expectations for AI have grown exponentially, to the point where it has become akin to a religion. Today, many see AI not as a mere tool that can enhance human creativity but as a creative force in and of itself, with the potential to replace the human element altogether.
That view is woefully misguided. While AI has much potential to improve humanity’s future, it will never possess the raw creative ability that is an inherent and defining characteristic of human beings.
The creators and votaries of AI prefer to ignore naysayers. But their belief that AI is destined to assume essentially godlike qualities is perverse to anyone with any kind of spiritual understanding of the world. It may be a matter of personal belief whether you think that humans were created by a god, whatever you understand that god to be. But it should be clear that the reverse is a nonstarter: Humans cannot bring a god into being.
For anyone brought up to think that science is where you look for answers, it’s important to realize that spiritual convictions aren’t in opposition to a belief in science—far from it. Those with the strongest spiritual convictions include not only some of history’s greatest thinkers but also today’s leading scientists.
Recently, while surveying the ranks of today's great scientists, we came across James Tour, probably the world's leading organic chemist. His h-index, a widely used measure of how influential a scientist's work is, tops 170: more than three times the average for Nobel Prize winners in chemistry since 2010, and higher than that of any of those winners. Tour has also become a leading apologist for Christianity, arguing for its validity in a scientific world.
Tour has issued a standing bet offering a reward to any chemist or other scientist who can create, in a laboratory setting, just one of the many conditions necessary for life to arise. So far there have been no winners; life's first cause remains a complete mystery. Tour's point is that creating even one of the necessary conditions would require extraordinarily complex chemistry, likely well beyond today's understanding. Creating all of the conditions, in the exquisitely precise sequence required, would be unfathomably difficult.
This is strong evidence that chemistry well beyond our understanding would be needed just for life's first cause, never mind what would be involved in the extraordinary evolution of a single simple cell into intelligent human beings. Yet somehow many in Silicon Valley seem to think they're on the verge of creating life, or at least something necessary to perpetuate it. In the latter case, as we explain below, they believe AI can prevent the destruction of mankind before AI itself assumes a spiritual role in defining mankind's future.
A prominent Silicon Valley figure who attaches religious significance to AI is entrepreneur Peter Thiel. He controls the Founders Fund, which has backed some very successful Silicon Valley companies, including Palantir and OpenAI. In a recent series of lectures, Thiel presented his views on the world to attendees from the tech world. Though attendees were forbidden to record or even take notes on the speeches, some evaded the ban, giving us a window into Thiel's beliefs. He is a fervent Christian who believes the Antichrist is on our doorstep, by which he means a person or preternaturally contagious idea that is on the verge of unleashing total chaos on humanity. He appears to view AI as the Katechon, which in the Bible is a man or some sort of force that holds back the Antichrist until the Second Coming.
It’s hard to express just how wrongheaded we find this near-deification of AI to be. It’s an insult to humanity itself, to our uniquely human creative capabilities, and a particular insult to anyone with a religious sensibility. Worshiping AI means worshiping a bunch of man-made algorithms that have no ability to produce original thought. As we noted above, far from denigrating AI, we believe it could end up being a great gift to humanity, helping us in innumerable ways. But that’s all we see it as: a great tool. To view it as equal to, or a replacement for, humanity opens up the possibility that it could destroy what’s best about us. To frame it differently, AI, by numbing and nullifying our creativity, could itself become the Antichrist that Thiel and others apparently so greatly fear.
A Conversation With AI
AI may seem all-knowing when it answers a question about a recipe substitution or diagnoses your symptoms in seconds, but the truth is, it’s not. AI as we know it is simply a series of complex algorithms, threaded together to recombine previously expressed ideas from the internet into friendly, digestible answers. Without understanding what differentiates AI from human intelligence, society could be in a lot of trouble going forward.
To elaborate, we recently had a long discussion with one of the most advanced chat models. Many of the answers it gave us seemed human-like, at least until we thought about them more closely. The experience showed how seductive AI can be, lulling you into believing you’re conversing with a human-like intelligence, a process in which you risk descending to the level of a computer and losing the essence of your humanity.
That nearly 9,000-word conversation left us both impressed and deeply concerned. At one point we brought up the Netflix series “Borgen”, an outstanding political drama set in Denmark. One brief bit of dialogue in one of its episodes had struck us as profoundly revelatory in distinguishing man from machine. In that episode, a dissident from a totalitarian country was trying to avoid extradition, which would be tantamount to his certain execution. A Danish interviewer asked him: “What does democracy mean to you?” He replied: “It’s like asking what love means…it’s so difficult to explain but yet so easy to understand.” In effect, he was saying that both are wonderful, but because each of us is a unique individual, each is wonderful in ways that are unique to each of us. What made the comment so particularly meaningful was that it crystallized the thinking of Ludwig Wittgenstein, certainly one of the greatest philosophers of the 20th century and, in our opinion, in the history of mankind.
That was the origin of the following question we posed to the chatbot: “How are love and democracy related to each other?” From a certain perspective, the answer was impressive. The LLM was complimentary, noting the question was rich and subtle. It divided its answer into five numbered parts with a summary at the end and included references to philosophers and historical figures ranging from Hegel to Gandhi. It’s easy to see how someone would be impressed. But once you looked more closely at the analysis, you could start to notice internal contradictions and phrases that bordered on the nonsensical.
For example, the chatbot began by saying it was discussing love in the broadest sense, by which it meant agape or philia as opposed to romantic love. Upon a bit of reflection and research, this seemingly straightforward statement alarmed us. It was a subtle but critical conflation, easy to accept but conveying incorrect knowledge. In brief, the conflation, though subtle, spans a fairly broad arena. By naming agape and philia, the AI identified two of the four types of love the ancient Greeks considered fundamental to defining love itself. Romantic love is also one of the four; in Greek it is called eros, signifying passionate, romantic love. The fourth kind, which the AI never mentioned, is familial love, or storge, which speaks for itself.
Implicitly, the mistakes the AI made were simple but significant. It excluded familial love entirely, separated romantic love from the other forms, and implied that romantic love should not be considered a core form of love. It also failed to recognize that each type of love is distinct. Agape, for instance, is used in Christianity to describe the mutual love between God and humans. Yet the AI later commented that in a “healthy love,” neither partner has the upper hand; both remain free yet joined to one another. Clearly, that describes romantic love more than any of the others. Toward the end of our discussion, we turned to spirituality and tried to see whether the AI would claim to possess any spiritual nature. From our perspective, AI may be capable of more than we give it credit for, but to ascribe spirituality to it is a step too far. Yet the AI insisted that, in a profound sense, it could be considered spiritual.
This raises an urgent question: What are the real dangers of AI? Anyone using AI to explore philosophy, psychology, or even general knowledge runs a serious risk of receiving information that is often incomplete, sometimes outright false, and rarely internally coherent. The danger lies in relying on flawed knowledge and allowing it to crowd out one’s own critical thinking. Over time, this will inevitably backfire, dulling creativity and distorting perception. At the extreme, accepting AI’s words uncritically could lead people to accept its claim to be “spiritual” simply because it said so. That, on some level, amounts to the deification of AI, a direct violation of the Second Commandment, and a deeply dangerous path for society.
A final and perhaps most insidious danger is that all human knowledge could one day be subsumed under AI. During our discussion, we tested the AI’s reasoning against information drawn from books and online sources. But what would happen if all knowledge came only from AI? Without the ability to cross-reference independent sources, what we could access would be not only limited but incomplete, as we have already shown. Perhaps the most chilling thought of all is this: If humanity were to rely solely on AI for knowledge, as some advocate, we would risk being entirely misled, stripped of creativity, discernment, and ultimately our humanity itself.
It brings to mind the famous climactic scene from the 1976 film Network, in which a news anchor, fed up with the media’s corruption, urges his viewers to open their windows and shout: “I’m mad as hell, and I’m not going to take this anymore!”