AI Regulation: Can Policy Keep Up with its Potential?

Anthony Aarons




Though the term ‘Artificial Intelligence’ (AI) was first coined by John McCarthy in the 1950s, its presence in day-to-day life, conversation, and particularly in business has exploded since the start of this decade. The power and capacity of the technology have grown with it, as the chart below demonstrates through the recent rapid evolution of AI’s capabilities across reading, writing, analysis, and generation.

It comes as no surprise, then, that this increased prevalence has brought with it commensurate anxieties about AI’s potential uses and their risks. These anxieties largely centre on the uncertainty of what follows this growth; in an interview with Adam Grant, Sam Altman, CEO of OpenAI, claimed that there are ‘huge unknowns of how this is [going to] play out’: ‘[AI is] not [going to] be as big of a deal as people think, at least in the short term. Long term, everything changes.’


The antidote to these concerns? Robust, standardised governance to regulate this potential and shed light on these unknowns. At its core, AI is a tremendously powerful force for ingenuity, efficiency, and creation, but more exacting control is required to ensure that it remains directed towards socially beneficial values and uses, and to limit the downside posed by serious unchecked risks.


The general assumption, spurred by quotes like Altman’s above, is that AI is an unstoppable inevitability and a possible march towards dystopia. This is not the case: AI is bound to the people who define and use it. In this article, we will look at how these people have approached its governance, and at the differing attitudes towards how important that governance is.

Current Approaches to Regulating Artificial Intelligence

Currently, the overarching values which govern many countries’ and unions’ approaches to developing and implementing AI are the OECD’s AI Principles. These comprise five considerations which adherents must keep in mind (the ‘Principles’) and five recommendations the OECD makes to policymakers for applying them in practice.


With the objective of keeping AI humane, transparent, accountable, and aligned with human rights, the OECD’s Principles are as follows:


  • Inclusive growth, sustainable development, and wellbeing: AI platforms should be developed for the benefit of people and the planet.


  • Human rights and democratic values, including fairness, and privacy: AI engineers should internalise the law, human rights, and democratic values in their projects.


  • Transparency and explainability: AI developers should disclose meaningful information and context surrounding their products.


  • Robustness, safety, and security: AI systems should be secure throughout their entire lifecycle so that there is no risk to safety at any point.


  • Accountability: AI actors should be accountable for the proper functioning of their products and platforms.


To this end, the OECD makes the following recommendations for nations, unions, and organisations to demonstrate these values:


  • Investing in AI research and development: Governments should encourage research and development into AI tools through public and private investment.


  • Fostering an inclusive AI-enabling ecosystem: Governments should ensure an inclusive, dynamic, sustainable digital ecosystem to cultivate AI development.


  • Shaping an enabling interoperable governance and policy environment for AI: Governments should promote agility in order to accelerate research to deployment.


  • Building human capacity and preparing for labour market transformation: Governments should collaborate with stakeholders to prepare for the changes AI will bring to work and society.


  • International co-operation for trustworthy AI: Governments should work with both the OECD and each other to maximise the opportunities of AI.


At the time of writing, 48 countries and one union have signed up as adherents to the OECD’s Principles, representing widespread recognition of the importance of governance. However, the ways in which these nations and bodies have translated the Principles into their own policy and regulation do not always demonstrate an enthusiasm to make them imminent or even obligatory. The EU AI Act, for example, was one of the first attempts to govern the usage of AI at an international level, yet its rollout has been continually delayed; and Australia has laid out eight AI Ethics Principles and an AI Safety Standard, yet both are completely voluntary.


What this comes down to is a two-fold concern: amplifying the innovation which the OECD promotes in order to maximise the potential of AI, while ensuring that this potential remains balanced and ethical, in line with the OECD’s emphasis on safety and security. Moreover, the removal of restrictions in some regions, coupled with clear misunderstandings about the borderless nature of AI, means that businesses and governments are focused on being first to market rather than first to establish values and provide the guardrails needed to control this immensely powerful technology.

The Importance of Innovation

For many nations and legislative bodies, the reluctance to govern AI too stringently stems from a commitment to preserving the breathing room innovators need to make the most of AI’s promise and wide-ranging potential. The UK, for example, has resisted publishing a comprehensive AI act for fear of weakening the country’s attractiveness to AI companies. UK Prime Minister Keir Starmer stated that ‘Instead of over-regulating these new technologies, we’re seizing the opportunities they offer’, and has unveiled the UK’s AI Opportunities Action Plan. This avoids the risk-based approach of the EU, which some have criticised as too prescriptive, in favour of sustained economic growth, giving businesses the incentive to innovate and invest.


These commitments are founded on the principles of empowering talent and spreading the transformative effects of AI. By giving AI entrepreneurs the freedom to realise their ideas without the buffer of regulation, the objective is to ensure a positive direction for these innovations from the inside out, rather than imposing it externally through governance and law.


Similarly, the US, which aligns with the UK’s pro-innovation attitude towards AI, currently lacks an overarching governing framework for AI, and US President Donald Trump has gone so far as to issue an ‘Executive Order for Removing Barriers to American Leadership in AI’ which repeals all previous policies and directives regarding its regulation. This makes it easier for developers to create AI products, platforms, and tools, especially when trialling and testing early models which risk-focused regulations would otherwise decelerate. However, given the experimental nature of AI development, the rush to deploy has left many businesses floundering, with failed implementations, huge costs with little to show for them, and, in some cases, serious damage to their brands and business infrastructure.


As we shall cover in the next section, however, it is vital that this lack of regulatory control is mitigated through sound governance principles, as well as societal pressures whereby the development of AI is defined by what it should do, not what it could do.

The Pace of Change vs the Pace of Learning

This prioritisation of innovation over regulation raises concerns about the Pace of Change compared to the Pace of Learning.


Here, the Pace of Change refers to technological evolution, while the Pace of Learning represents our human capacity to understand and keep current with these developments. The two intersected in the 1950s, but the introduction of compute (the technology which established the roots of AI) caused the gap between them to widen to an almost exponential degree.


This acceleration is encouraged by policies such as the US’s and the UK’s which, while we hope they will ultimately bring good through the technologies and industries they foster, also have the potential to bring risk or harm if left unchecked. In the US, the ‘Executive Order for Removing Barriers to American Leadership in AI’ replaced a more risk-based policy introduced during Joe Biden’s presidency, the ‘Executive Order for the Safe, Secure, and Trustworthy Development and Use of AI’. Similarly, the UK’s AI Opportunities Action Plan was unveiled in place of ‘The Artificial Intelligence (Regulation) Bill’, an attempt to introduce stricter legislation on AI’s use which has been continually delayed or dismissed.


Even among governing bodies which are attempting to regulate AI and its applications, there are concerns about how successfully these efforts keep stride with the Pace of Change. The EU AI Act, for example, has been criticised for its delayed and ambiguous timeline. Though it was published in March 2024, its first provisions only took effect in February 2025, and there is uncertainty over the rollout of its subsequent stages, giving the Pace of Change a significant head start.


Despite this, the desire for robust governance remains. While the EU AI Act’s phases take time to implement, the EU has introduced an interim ‘AI Pact’ which organisations can sign to signal their endorsement of the EU AI Act before it officially goes into effect. So far, over 200 organisations have signed the Pact, representing a commitment to balancing AI innovation with security and the protection of human dignity and rights.

Conclusion

From the different attitudes towards regulating AI laid out in this article, it is clear that a balance must be struck between maximising AI’s innovative potential for positive change, and ensuring that this change remains strictly positive through robust and holistic governance. After all, it is not necessarily the AI tools and platforms themselves which pose the biggest risk, but those who develop and use them; by adhering to safe and secure legislation, developers can ensure that their products are engineered with people-first principles at the forefront.


At Cambridge Management Consulting, we have the knowledge, expertise, and experience to ensure that your AI strategies remain compliant with policy and regulation to avoid penalties, and that they are built around the safety of your people and data. Get in touch now to build an approach to AI that balances safety with success.
