Risk Management in an AI World

Tom Burton



Why do we trust computers?


AI is a constant feature in the news these days, but a couple of news items in recent weeks might have struck you as worthy of more thought. First was the announcement by Meta and OpenAI that they will shortly be releasing models that ‘think’ more like people and are able to consider the consequences of their decisions. And the second was an article in the FT that the speed of AI development is outstripping the development of methods to assess risk.


These two developments, and the tension between them, relate to a paradoxical quirk of human nature: why do we trust computers more than humans?


If there is a reliable basis of trust in a person or in a piece of technology, then the level of risk being taken can be more clearly understood. Without a sound basis of trust, this risk becomes increasingly uncertain.


In this article, Tom Burton, a cyber security expert and technology thought leader, addresses the historical roots of this dilemma, and also answers the following:


  • Why is digital risk such a challenging concept?
  • How will AI make this problem more complicated?
  • What principles could be put in place to manage trust in an era of AI?


Where does this bias originate?


Why is it that a human is more likely to implicitly trust what a computer tells them than what another human tells them?


Before you hit back with disagreement, consider this scenario: How would you react if a gentleman dressed in the regalia of an African prince turned up at your door offering untold riches without any conditions? 


Many over the years have been taken in by exactly that offer received by email. Phishing, fake news on social media, and numerous other socially engineered deceptions rely on this digital bias, which has been the subject of plenty of research.


When Tom Burton was responsible for information systems, information management, and information exploitation in his Army Headquarters, he found it striking how many people assumed a unit’s location on a screen was 100% accurate. They would treat a similar ‘sticky’ marker on a physical map with caution, recognising that there was implicit uncertainty in the accuracy of the ‘reported’ location, and that the unit in question might have moved significantly since making that report. Yet they would happily zoom in to the greatest detail on a screen and ask why A Squadron or B Company was on the east side of the track rather than the west.


This implicit trust has striking implications for many aspects of our digital lives, and will be brought into even sharper focus with the widespread adoption of AI applications.

Is tech more like a hammer or a human?


Tom has a theory. Humans are inherently fallible, deceitful and unpredictable. We make mistakes, sometimes intentionally; sometimes due to tiredness, emotions or bias. And we have spent at least 300,000 years reaffirming this model of each other.


Machines are considered predictable and deterministic. No matter how many times you enter two large numbers into a calculator, you expect them to be added correctly and consistently.


At least subconsciously, we treat the output of a computer as more like the product of a hammer than of a human: a predictable tool producing the result it was programmed for.


But even in the case of conventional, non-AI technology, this perspective is a fallacy. Computers are designed and programmed by fallible humans. Mistakes are made, and those mistakes are transferred to the code, and in turn, the results that this code produces. The more complex the code, the less certainty there will be of accurate and consistent results. 


A ‘truthful’ response may also depend on sharing the perspective of the person who designed the system. If the designer interpreted an ambiguous problem differently from the user, the probability that the results will be misinterpreted rises significantly.


People treat their digital tools as being as predictable as a hammer, but too often those tools behave more like the humans who created them.


This situation is only likely to get more extreme with AI. Technology is actively being designed to operate more like humans: to learn, and to apply insight from that learning in new situations. The question asked of a system today might well produce a different answer if asked again in the future, because the information and ‘experiences’ that answer is based on will change. In exactly the same way, if we ask a human the same question ten years apart, we are not surprised by a different answer, particularly if we are seeking an opinion.
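The contrast can be sketched in a few lines of Python. This is a toy illustration with hypothetical names, not a depiction of any real AI system: the calculator-style function is pure, while the ‘learning’ system’s answer also depends on accumulated state that changes between calls.

```python
# A calculator behaves like a pure function: same inputs, same output, forever.
def add(a, b):
    return a + b

# A learning system's answer also depends on accumulated 'experience',
# so the same question can yield different answers over time.
# (Toy illustration; all names are hypothetical.)
class LearningAdvisor:
    def __init__(self):
        self.experience = []  # grows as the system 'learns'

    def learn(self, observation):
        self.experience.append(observation)

    def answer(self, question):
        # Toy policy: the answer reflects the most recent experience.
        if not self.experience:
            return "no opinion yet"
        return f"{question}: based on {self.experience[-1]}"

assert add(2, 3) == add(2, 3) == 5  # deterministic, always

advisor = LearningAdvisor()
first = advisor.answer("best route")
advisor.learn("road closed on route A")
second = advisor.answer("best route")
assert first != second  # same question, different answer once the state changes
```

The same question gives a different answer once the internal state has changed, which is precisely the property that makes such systems useful, and their risk profile human-like.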


How does this affect the risks of employing increasingly advanced technologies?


If technological tools are increasingly becoming more similar to humans than hammers, then how does this affect risk? The diversity and unpredictability of humans are things with which society is familiar and has been managing for some time; so let's look at the similarities because, after all, the aim is to replace people with technologies that operate in a similar way.


It's known that people misunderstand tasks because language is ambiguous, and interpretation is based on an individual’s perspective. Everyone has different value systems, influencing where focus is placed and where corners might be cut. At an extreme, these different values may lead to behaviour that is negligent or even malicious. People can be subverted or coerced to do things. All of these behaviours have parallels with complex technology, and AI in particular.


Ambiguity will always create uncertainty and risk. AI models are based on value systems that are intended to steer them towards the most desired outcome; but those value systems may be imperfect, especially when defined in the past for unforeseen situations in the future. And it's known that technology can be compromised to produce undesirable outcomes.


But it is important to note that there are some fundamental differences as well. Groups and organisations tend to have inherent dampers that reduce extremes (though geopolitics might provide evidence against this). Recruiting one person to do a task might result in a ‘good egg’ or a bad one. But recruiting a team of ten increases the chance that different perspectives will challenge extreme behaviours. Greater diversity increases this effect. This does not eliminate risk, and a very strong character might be able to influence the entire team, but it introduces some resistance. However, if the ‘team’ comprises instances of the same AI model, feeding from the same knowledge base, using the same value systems and learning directly from each other, it might operate more like an echo chamber; as seen with runaway trading algorithms that are tipped out of control by the positive feedback of their value systems.


Are digital risk and business risk the same?


Assuming the trajectory of technology continues into the age of AI, intelligent tools will be used wherever possible to do tasks currently done by humans. Over time, every aspect of business will be decided or influenced by digital systems, using digital tools, operating on digital objects, to produce outcomes that will be digital in nature before they transition into the physical world.


Consequently, there will not be many risks that do not have a very significant digital element. It could therefore be argued that managing cyber-, information- or digital-risk (whichever term you prefer) will be inseparable from the majority of business risks. In the future, the current construct of a CISO function managing information risk separately from many of the other corporate risk areas might seem quaint. It is doubtful that any area of business risk management will be able to claim it ‘doesn’t do technology’, and it will be more important than ever for technology risk to be managed with an intimate and universal understanding of the business.


Applying human risk management to artificial intelligence


We can improve our understanding of risk by considering technology components, at least at a conceptual level, as people. Society is already there in many respects and, as AI solutions emerge over the years and decades to come, this convergence is only going to accelerate. An AI model’s decisions are based on an unpredictable array of inputs that will change over time. They are based on a set of values that needs to be maintained in line with business and ethical values. But most importantly, these models will learn: from their own experiences, and from each other. This sounds far more like a human actor than a hammer.


Tom Burton suggests that we can take lessons from managing human risk and apply them to digital risk. He suggests the following measures that can be immediately adopted by businesses:


  • Initiation: When embarking on an initiative, time needs to be taken to consider the inherent risks faced. Not just the discrete risks within the initiative but also the more systemic risks that need to be avoided.


  • Recruitment: When selecting the types of technology to be employed, it is necessary to decide what is meant by trust, and where technology will be applied versus where a human in the loop is desired. Consideration needs to be given to the frame of reference used to define and measure trust, what external evidence can be relied upon, and how much needs to be reinforced with one's own due diligence. For instance, government regulation and certification of AI models may provide a baseline of trust, but in the more sensitive and high-risk areas of business, 'interviews' and tests will likely need to be applied.


  • Design: The more risk that can be designed out, the easier (and cheaper) it will be to manage the residual risk in operations. The concept of Secure by Design is important now but will become essential as the progression continues. Until more is understood about how these systems will operate, learn, and develop over time, ensuring the equivalent of segregation of duties is crucial. Segmentation is too often ignored today, with broad, flat networks, but applying it will be vital to contain risk in the future.


  • Operations: In operations, just as with people, it is necessary to prepare for the worst. This is not just about monitoring an environment; it is also about maintaining an understanding of risk and war-gaming new scenarios as they emerge. The military planning process always includes the question: "Has the situation changed?" This discipline needs to be industrialised in the way systems are managed, maintained, and evolved. The most obvious 'big issue' on the horizon is the point when operationalised quantum computing comes to the fore; but there will be others as well, and adaptation will be required to overcome them.
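The segregation-of-duties idea in the ‘Design’ measure above can be expressed as a simple policy check. The sketch below is a conceptual illustration with hypothetical action and agent names, not a prescription: no single agent, human or AI, may both request and approve the same action.

```python
# Toy segregation-of-duties gate: the requester of an action may never
# also be its approver. All action and agent names are illustrative.
def approve(action, requester, approver):
    if requester == approver:
        raise PermissionError("segregation of duties: requester cannot self-approve")
    return f"{action} approved by {approver}"

# Two distinct agents: the action passes the gate.
result = approve("release payment", requester="agent-A", approver="agent-B")

# The same agent on both sides of the transaction is blocked.
try:
    approve("release payment", requester="agent-A", approver="agent-A")
    blocked = False
except PermissionError:
    blocked = True
```

The point of the check is structural rather than clever: even if both ‘agents’ are instances of the same AI model, forcing the decision through two independently accountable identities introduces the resistance that a diverse human team provides naturally.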


Summary: Optimism is Good, but Hope is not a Strategy


There is a lot to be optimistic about in the future. There will be change, and the need to adapt, but the pace of change and the breadth of its impact demands that we take an objective approach to understanding and managing risk—hope is not a strategy.


If we do not understand something, then our trust in it must decrease as a consequence. This does not mean that we should not employ it; after all, the trust we have in our people and our partners isn't binary. But we put controls and frameworks in place to limit the damage that people can do proportionate to this trust.


We need to treat technologies that demonstrate human traits in a similar way.

About Tom Burton


With over 20 years of experience in business, IT, and security leadership roles, including several C-suite positions, Tom has an acute ability to distil and simplify complex security problems, from high-altitude discussions about business risk with the board, to detailed discussions about architecture, technology good practice, and security remediation with delivery teams. With a tenacious drive to enhance cyber security and efficiency, Tom has spent a significant amount of time in the Defence, Aerospace, Manufacturing, Pharmaceuticals, High Tech, and Government industries, and has developed an approach based on applying engineering principles to deliver sustainable business change. 


If you would like to speak to Tom or anyone from the Cyber Security team, please use the form below.

About Cambridge Management Consulting


Cambridge Management Consulting (Cambridge MC) is an international consulting firm that helps companies of all sizes have a better impact on the world. Founded in Cambridge, UK, initially to help the start-up community, Cambridge MC has grown to over 160 consultants working on projects in 20 countries.


Our capabilities focus on supporting the private and public sector with their people, process and digital technology challenges.


For more information visit www.cambridgemc.com or get in touch below.

