AI Regulation: Can Policy Keep Up with its Potential?

Anthony Aarons

Though the term ‘Artificial Intelligence’ (AI) was first coined by John McCarthy in the 1950s, its popularity in day-to-day life, conversation, and particularly in business has seen an unprecedented explosion since the start of this decade. The power and capacity of the technology has grown in step, as the chart below demonstrates through the recent rapid evolution of AI’s capabilities across reading, writing, analysis, and generation.

It comes as no surprise, then, that this increased prevalence has been accompanied by commensurate anxieties regarding AI’s potential uses and their risks. These anxieties largely surround the question mark hanging over this growth; in an interview with Adam Grant, Sam Altman, CEO of OpenAI, claimed that there are ‘huge unknowns of how this is [going to] play out’: ‘[AI is] not [going to] be as big of a deal as people think, at least in the short term. Long term, everything changes.’


The antidote to these concerns? Robust, standardised governance to regulate this potential and bring light to these unknowns. AI at its core is a tremendously powerful force for facilitating ingenuity, efficiency, and creation, but more exacting control is required to ensure that it remains directed towards socially beneficial values and uses, and to limit the downside imposed by serious unchecked risks.


The general assumption, spurred by quotes like Altman’s above, is that AI is an unstoppable inevitability and a possible march towards dystopia. This is not the case: AI remains bound to the people who define and use it. In this article, we will look at the ways in which these people have approached its governance and the differing attitudes towards how important this is.

Current Approaches to Regulating Artificial Intelligence

Currently, the overarching values which govern many countries’ and unions’ approaches to developing and implementing AI are the OECD’s AI Principles, which comprise five considerations adherents must keep in mind, the ‘Principles’, and five recommendations the OECD makes to policy makers for applying these in practice.


With the objective of keeping AI humane, transparent, accountable, and aligned with human rights, the OECD’s Principles are as follows:


  • Inclusive growth, sustainable development, and wellbeing: AI platforms should be developed for the benefit of people and the planet.


  • Human rights and democratic values, including fairness, and privacy: AI engineers should internalise the law, human rights, and democratic values in their projects.


  • Transparency and explainability: AI developers should disclose meaningful information and context surrounding their products.


  • Robustness, safety, and security: AI systems should be secure throughout their entire lifecycle so that there is no risk to safety at any point.


  • Accountability: AI actors should be accountable for the proper functioning of their products and platforms.


To this end, the OECD makes the following recommendations to help nations, unions, and organisations put these values into practice:


  • Investing in AI research and development: Governments should encourage research and development into AI tools through public and private investment.


  • Fostering an inclusive AI-enabling ecosystem: Governments should ensure an inclusive, dynamic, sustainable digital ecosystem to cultivate AI development.


  • Shaping an enabling interoperable governance and policy environment for AI: Governments should promote agile, interoperable governance that accelerates the path from research to deployment.


  • Building human capacity and preparing for labour market transformation: Governments should collaborate with stakeholders to prepare for the changes AI will bring to work and society.


  • International co-operation for trustworthy AI: Governments should work with both the OECD and each other to maximise the opportunities of AI.


At the time of writing, 48 countries and one union have signed up as adherents to the OECD’s principles, representing widespread recognition of the importance of governance. However, the ways in which these nations and bodies have filtered the principles into their own attempts at policy and regulation do not always demonstrate an enthusiasm to make governance imminent or even obligatory. The EU AI Act, for example, was one of the first attempts to govern the usage of AI at an international level, yet its rollout has been continually delayed; and Australia has laid out eight AI Ethics Principles and an AI Safety Standard, yet both are completely voluntary.


What this comes down to is evident: a twofold concern between amplifying the innovation which the OECD promotes in order to maximise the potential of AI, and ensuring that this potential remains balanced and ethical in line with the OECD’s emphasis on safety and security. Moreover, the removal of restrictions in some regions, coupled with clear misunderstandings surrounding the borderless nature of AI, means that businesses and governments are focused on being first to market rather than first to establish values and provide the necessary guardrails for this immensely powerful technology.

The Importance of Innovation

For many nations and legislative bodies, the reluctance to govern AI too stringently is attributed to a commitment to maintaining the breathing room for innovators to make the most of its promise and wide-ranging potential. The UK, for example, has resisted publishing a comprehensive, regulated AI act for fear of weakening the country’s attractiveness to AI companies. The UK Prime Minister, Keir Starmer, stated that ‘Instead of over-regulating these new technologies, we’re seizing the opportunities they offer’, and has unveiled the UK’s AI Opportunities Action Plan. This avoids the risk-based approach of the EU, which several commentators have criticised for being too prescriptive, in favour of sustained economic growth by giving businesses the incentive to innovate and invest.


These commitments are founded on the principles of empowering talent and spreading the transformative effects of AI in order to drive economic growth, benefit public services, and increase personal opportunities. By allowing AI entrepreneurs the power to realise their ideas without the buffer of regulation, the objective is to ensure a positive direction of these innovations from the inside out instead of projecting it externally through governance and law. 


Similarly, the US, which aligns with the UK in its pro-innovation attitude towards AI, currently lacks an overarching governing principle surrounding AI, and US President Donald Trump has gone so far as to issue an ‘Executive Order for Removing Barriers to American Leadership in AI’ which repeals all previous policies or directives regarding its regulation. This makes it easier for developers to create AI products, platforms, and tools, especially when it comes to trialling and testing early models which risk-focused regulations would otherwise decelerate. However, given the very nature of AI and the experimentation required to develop it, the rush to deploy has left many businesses floundering with failed implementations, huge costs with little to show for them, and, in some cases, serious damage to their brands and business infrastructure.



However, as we shall cover in the next section, it is vital that this lack of regulatory control is mitigated through sound governance principles, and by societal pressure ensuring that the development of AI is defined by what it should do, not what it could do.

The Pace of Change vs the Pace of Learning

This prioritisation of innovation over regulation raises concerns about the Pace of Change compared to the Pace of Learning.


Here, the Pace of Change refers to technological evolution, while the Pace of Learning represents our own human capacity to understand and remain current with these developments. As the above diagram displays, the two intersected in the 1950s, but the introduction of compute (the technology which established the roots of AI) caused the gap between them to widen to an almost exponential degree.


This acceleration is encouraged by policies such as the US’s and the UK’s which, while we hope they will ultimately bring good through the technologies and industries they enable, also have the potential to bring risk or harm if unchecked. In the US, the ‘Executive Order for Removing Barriers to American Leadership in AI’ replaced a more risk-based policy introduced during Joe Biden’s presidency, the ‘Executive Order for the Safe, Secure, and Trustworthy Development and Use of AI’. Similarly, the UK’s AI Opportunities Action Plan was unveiled in lieu of ‘The Artificial Intelligence (Regulation) Bill’, an attempt to introduce stricter legislation on its use which has been continually delayed or dismissed.


Even for governing bodies which are making attempts to regulate AI and its applications, there are concerns about how successful these are in keeping stride with the Pace of Change. The EU AI Act, for example, has been criticised for its delayed and ambiguous timeline. Though it was published in March 2024, the first provisions only went into effect in February of this year, and there is uncertainty as to the rollout of its subsequent stages, providing the Pace of Change with a significant head start.



Despite this, the desire and motivation for robust governance are still present. While the EU AI Act’s phases take time to implement, the EU has introduced an ‘AI Pact’ in the interim which organisations can sign to display their endorsement of the EU AI Act before it officially goes into effect. Thus far, over 200 organisations have signed this pact, representing a commitment to balancing AI innovation with security and the protection of human dignity and rights.

Conclusion

From an analysis of the different attitudes towards regulating AI laid out in this article, it is clear that there is a balance to be struck between maximising the innovative potential of AI to make a positive change, and ensuring that this change remains strictly positive through robust and holistic governance. After all, it is not necessarily the AI tools and platforms themselves which pose the biggest risk, but those who develop and use them; by adhering to safe and secure legislation, they can ensure that their products are engineered with people-forward principles at the forefront.



At Cambridge Management Consulting, we are equipped with the knowledge, expertise, and experience to ensure that your AI strategies remain compliant with policy and regulation to avoid penalties, and that they are built around the safety of your people and data. Get in touch now to build an approach to AI that balances safety with success.


With over 25 years of experience in AI and leadership management, Anthony Aarons is an Associate for AI & Risk at Cambridge Management Consulting.


Get in touch with Anthony on LinkedIn or use the Contact Form below.
