Should AI Appreciate its own Ignorance?

Tom Burton



Since the origins of the quest for artificial intelligence (AI), there has been a debate about what is unique to human intelligence and behaviour and what can be meaningfully replicated by technology. In this article we discuss these arguments and the ramifications of 'ignorance' as it is expressed by current AI models.


To what Extent can Artificial Intelligence Match or Surpass Human Intelligence?


This article approaches the question by posing philosophical questions about the current limitations of AI capabilities, and whether those limitations could have significant consequences if we empower AI agents with too much responsibility.


Two recent podcast series provide useful, complementary insights into both the current progress towards Artificial General Intelligence (AGI) and the important role of ignorance in our own cognitive abilities. The first is Season 3 of 'Google DeepMind: The Podcast', presented by Hannah Fry, which describes the current state of the art in AI. The second is Season 2 of the BBC's 'The Long History of… Ignorance', presented by Rory Stewart, which explores our own philosophical relationship with ignorance.


A Celebration of Ignorance


Rory Stewart’s podcast is a fascinating exploration of the value that we gain from ignorance. It is based on the thesis that ignorance is not just the absence of intelligence. It feeds humility and is essential to the most creative endeavours that humans have achieved. To ignore ignorance is to put complex human systems, such as government and society, into peril.


The key question we pose is whether or not current AI appreciates its ignorance. That is, can it recognise that it doesn’t know everything? Can AI embrace, respect and correctly recognise its own ignorance: not just learning through hindsight but becoming wiser, and being fundamentally influenced, when it makes decisions and offers conclusions, by the fact that it is doing so from a position of ignorance?


The Rumsfeldian Trinity of Knowns


The late Donald Rumsfeld is most popularly remembered for his theory of knowns. He observed that there are things we know we know; things we know we don’t know; and things we don’t know we don’t know.


Stewart makes multiple references to this in his podcast. At the time that Rumsfeld made the statement it was widely reported as a blunder—as a statement of the blindingly obvious. Since then, the trinity of knowns has entered the discourse of a variety of fields and is widely quoted and used in epistemological systems and enquiries. Let us take each category in turn and consider how AI treats or understands it.


Understanding our 'known knowns' is relatively easy. We would suggest that current AI is better than any of us at knowing what it knows.


We also put forward that 'known unknowns' should be pretty straightforward for AI. If you ask a human a question and they don't know the answer, it is easy for them to report this as an unknown. In fact, young children deal with this task without issue. AI should also be able to handle this concept. Both human and artificial intelligence will sometimes make things up when the facts to support an answer aren’t known, but that should not be an insurmountable problem to solve.


As Rumsfeld was trying to convey, it is the final category of 'unknown unknowns' that tends to pose a threat. These are missing facts that you cannot easily deduce as missing. This includes situations where you have no reason to believe that 'something' (in Rumsfeld's case, a threat) might exist.


It is an area of huge misunderstanding in human logic and reasoning, such as accepting that the world is flat because nobody has yet considered that it might be spherical. It is expecting Isaac Newton to understand the concept of particle physics and the existence of the Higgs boson when he theorises about gravity. Or following one course of action because there was no reason to believe that there might be another available: all evidence in my known universe points to Plan A, so Plan A must be the only viable option.
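To make the asymmetry between the three categories concrete, here is a minimal sketch in Python. The knowledge base, the questions and the labels are invented purely for illustration, not a model of how any real AI system represents knowledge; the structural point is that the third category can never appear in the agent's own lookup tables.

```python
# A minimal sketch of the Rumsfeldian trinity from a single agent's viewpoint.
# The facts and questions below are illustrative assumptions only.

KNOWN_FACTS = {"capital_of_france": "Paris"}   # known knowns
OPEN_QUESTIONS = {"alien_life_exists"}         # known unknowns

def classify(query: str) -> str:
    """Report the agent's epistemic status for a question it has been asked."""
    if query in KNOWN_FACTS:
        return f"known known: the answer is {KNOWN_FACTS[query]}"
    if query in OPEN_QUESTIONS:
        return "known unknown: the question is recognised, the answer is not"
    # The moment a question arrives here it stops being an unknown unknown:
    # merely asking it converts it into a known unknown. The true unknown
    # unknowns are the queries this function is never called with, which is
    # why no amount of introspection lets the agent enumerate them.
    return "newly surfaced unknown: this was an unknown unknown until asked"

for q in ("capital_of_france", "alien_life_exists", "higgs_boson_mass"):
    print(f"{q} -> {classify(q)}")
```

Running it shows why introspection alone cannot surface unknown unknowns: the only way one enters the function is by being asked, at which point it has already ceased to be one.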


In experiments with ChatGPT, there is good reason to believe that it can be humble; that it recognises it doesn’t know everything. But the models seem far more focused on coping with 'known unknowns' than recognising the existence of 'unknown unknowns'. When asked how it handles unknown unknowns, it explained that it would ask clarifying questions or acknowledge when something is beyond its knowledge. These appear to be techniques for dealing with known unknowns and not unknown unknowns.
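Anyone wishing to repeat this kind of probe can do so in a few lines. The sketch below uses the OpenAI Python SDK; the model name, prompt wording and interpretation are our own illustrative assumptions rather than the exact experiment described above.

```python
# A hedged sketch of probing a chat model about its own ignorance,
# using the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

probe = (
    "Before answering, state which of Rumsfeld's categories this question "
    "falls into for you: known known, known unknown, or unknown unknown. "
    "Question: what risks to my plan have I not thought to ask you about?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat-capable model would do
    messages=[{"role": "user", "content": probe}],
)
print(response.choices[0].message.content)
# As noted above, replies tend to describe clarifying questions and knowledge
# cut-offs -- strategies for known unknowns -- rather than any mechanism for
# surfacing unknown unknowns.
```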


The More we Learn, the More we Understand How Much we Don’t Know


Through early life, in our progression from childhood to adulthood, we are taught that the more you know and understand, the more successful you will be. Not knowing a fact or principle is nothing to be proud of; it should be addressed by learning the missing knowledge, and then by learning even more to avoid failure in the future. In education we are encouraged to value knowledge more than anything else.


But as we get older, we learn with hindsight from the mistakes born of ill-informed decisions. In the process, we become more conscious of how little we actually know. If AI in its current form does not appreciate or respect this fundamental concept of ignorance, then we should ask what flaws might exist in its decision-making and reasoning.


The Peril of Hubris


To feel that we can understand all aspects of a complex system is hubris. Rory Stewart touches on this from his experience in government. It is a fallacy to believe that we should be able to solve really difficult systemic problems just by understanding more detail and storing more facts about the characteristics of society.


As Stewart notes, this leads to brittle, deterministic solutions based on the known facts, with only a measure of tolerance for the 'known unknowns'. The vulnerability of such solutions to the 'law of unintended consequences' is proven repeatedly when a solution is found to be fundamentally flawed because of facts that were never, and probably could never be, anticipated.


These unknown unknowns might be known elsewhere, but remain out of sight to the person making the decision. Some unknown unknowns might be revealed by speaking to the right experts or pursuing the right lines of enquiry. However, many things are universally unknown at any moment in time. There are laws of physics today that were unknown unknowns to scientists only a few decades earlier.


The Basis of True Creativity


Stewart dedicates an entire episode to ignorance’s contribution to creativity, bringing in the views and testimony of great artists of our time, like Antony Gormley. If creativity is more than the incremental improvement of what has existed before, how can it be possible without being mindful of the expanse of everything you don’t know?


This is not a new theory. If you search for “the contribution that ignorance makes to human thinking and creativity” you will find numerous sources that discuss it, with references ranging from Buddhism to Charles Dickens. Stewart describes Gormley’s process of trying to empty his mind of everything in order to set the conditions for creativity. Creativity is vital to more than creating works of art. It is an essential part of complex decision-making. We use metaphors like 'brainstorming' and 'blue-sky thinking' to describe the state of opening your mind and not being constrained by bias, preconception or past experience. This is useful, not just to come up with new solutions, but also to 'war game' previously unforeseen scenarios that might present hazards to those solutions.


What would you Entrust to a Super-Genius?


So, if respecting and appreciating our undefined and unbounded ignorance is vital to making good and responsible decisions as humans, where does this leave AI? Is AI currently able to learn from hindsight – not just learn the corrected fact, but learn from the very act of being wrong? In turn, from this learning, can it be more conscious of its shortcomings when considering things with foresight? Or are we creating an arrogant super-genius unscarred by its mistakes of the past and unable to think outside the box? How will this hubris affect the advice it offers and the decisions it takes?


What if we lived in a village where the candidates for leader were a wise, humble elder and a know-it-all? The wise elder had experienced many different situations, including war, famine, joy and happiness; they had improvised solutions to the problems they faced, and had learnt in the process that a closed mind stifles creativity; they knew the mistakes they had made, and therefore knew their eternal limitations. The village 'genius' was young and highly educated, having been to the finest university in the land. They knew everything ever written in a book, and they had never been conscious of making a bad decision.


Who would you vote for to be your leader?


Conclusion


The concepts described here are almost certainly being dealt with by teams at Google DeepMind and the other AI companies. They shouldn’t be insurmountable. The current models may have a degree of caution built into them to dampen the more extreme enthusiasm. But we’d argue that caution when making decisions based on what you know is not the same as creatively exploring the 'what if' scenarios in the vast expanse of what you don’t know.


We should be cautious of the advice we take from these models and what we empower them to do—until we are satisfied that they are wise and creative as well as intelligent. Some tasks don’t require wisdom or creativity, and we can and should exploit the benefits that these technologies bring in this context. But does it take both qualities to decide which ones do? We leave you with that little circular conundrum to ponder.

