Legislating AI: A Comparison between the EU and the UK

Rachi Weerasinghe



The EU AI Act


In March of this year, the European Union published its Artificial Intelligence Act, establishing a common regulatory and legal framework for AI across the EU.


Two significant features of this Act are the definition and prohibition of AI practices that pose an ‘unacceptable risk’, and the requirement for developers and ‘implementers’ to register high-risk AI models and maintain technical documentation of the model and its training results.


The AI Act is the first comprehensive AI legal framework in the world. It will help to shape the digital future of the EU and guarantee the safety and fundamental rights of people and businesses. 


Who Does it Apply to?


The Act applies to any marketing or use of AI within the EU, regardless of whether the providers or developers are established there or elsewhere. While this effectively makes the Act global in scope, its reach will depend heavily on how effectively authorities can enforce it outside the EU.


A Risk-Based Approach


The EU’s AI Act adopts a risk-based approach which categorises AI systems into different risk levels (Unacceptable, High, Limited, and Minimal Risk), and imposes corresponding regulatory requirements.

Unacceptable Risk


AI systems that pose a threat to safety, livelihoods, or individual rights will be banned. This includes, for example, government social scoring and voice-assisted toys promoting dangerous behaviour.


High Risk


AI systems are considered High Risk if they profile individuals, i.e. automatically process personal data to assess aspects of a person’s life. Consequently, AI systems used in the following areas are categorised as High Risk:

  1. Critical infrastructure, such as transport;
  2. Education, where outcomes could affect someone’s career, e.g. exam scoring;
  3. Safety components, such as AI in robot-assisted surgery;
  4. Employment, where AI affects selection, e.g. CV sorting;
  5. Essential services, such as credit scoring that may affect eligibility for a loan;
  6. Law enforcement, e.g. evidence evaluation;
  7. Migration services, which could affect asylum claims;
  8. Democratic processes, such as searching court rulings.


High-risk AI technologies will be subject to strict obligations before they are allowed onto the market.


Limited Risk


Limited Risk covers the risks associated with a lack of transparency in AI. The AI Act mandates transparency to build trust with users: for example, users must be told when they are interacting with an AI system, such as a chatbot. Providers must also label AI-generated content, including AI-generated text or media intended to inform the public, as well as audio and video content that constitutes a deepfake.


Minimal Risk


The AI Act permits unrestricted use of minimal-risk AI, such as AI-enabled video games and spam filters. This category encompasses most AI systems currently in use in the EU.
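As a purely illustrative sketch (not part of the Act itself, and using hypothetical names), the four-tier structure described above can be modelled as a simple mapping from risk category to its headline regulatory consequence:

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the AI Act's four risk tiers (not an official taxonomy)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical one-line summary of the obligation attached to each tier,
# paraphrased from the descriptions in this article.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited from the EU market",
    RiskTier.HIGH: "conformity assessment, registration, and technical documentation",
    RiskTier.LIMITED: "transparency duties, e.g. disclosure and content labelling",
    RiskTier.MINIMAL: "no additional obligations",
}


def obligation_for(tier: RiskTier) -> str:
    """Look up the headline obligation for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligation_for(RiskTier.HIGH))
```

The point of the sketch is simply that obligations attach to the tier, not to the individual system: classify once, and the regulatory consequence follows.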


Key Objectives of the EU Act


The EU’s AI Act is comprehensive and wide-reaching; however, its primary principles and objectives can be summarised under three main purposes: Regulation, Trust, and Innovation.


Regulation


As mentioned above, the AI Act aims to create the first-ever legal framework for AI, addressing the risks and challenges posed by its recent, rapid evolution, including the outright banning of systems deemed harmful. This is particularly pertinent for high-risk AI systems used in critical areas such as education and employment, whose safety is to be maintained through conformity assessments, human oversight, risk management, and more. The Act also provides for significant penalties for non-compliance, including fines of up to €35m or 7% of global revenue, whichever is higher, which will be managed by a governance structure involving multiple entities such as the European AI Office, national authorities, and market surveillance authorities. While this is a relatively complex ecosystem and may require further funding to succeed, the Act aligns with existing EU laws on data protection, privacy, and confidentiality, ensuring a cohesive regulatory environment.
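To illustrate the penalty arithmetic only: the Act’s headline fines are capped at the larger of a fixed amount and a share of global revenue (the €35m / 7% figures are the maximums quoted above; the Act defines lower brackets for other infringement types, which this hypothetical helper ignores):

```python
def max_fine_eur(global_revenue_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 revenue_share: float = 0.07) -> float:
    """Return the larger of a fixed cap and a share of global revenue.

    Illustrative only: the EU AI Act's headline maximum is EUR 35m or 7% of
    worldwide annual turnover, whichever is higher; other infringement
    categories carry lower brackets not modelled here.
    """
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)


# A company with EUR 1bn global revenue: 7% = EUR 70m, exceeding the EUR 35m floor
print(f"€{max_fine_eur(1_000_000_000):,.0f}")  # → €70,000,000
```

For smaller firms the fixed cap dominates, which is why the dual formulation matters: the percentage term scales the deterrent with company size rather than letting large providers treat a flat fine as a cost of doing business.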


Trust


In regulating what has become a fast-growing industry, the Act promotes trust and transparency by making AI more human-centric, with renewed respect for fundamental rights, safety, and ethical principles. It imposes requirements ensuring that AI systems which interact with humans are clearly identified as such, and mandates documentation and logging for high-risk AI systems. This is particularly salient for generative AI models, for which specific regulations have been introduced to ensure compliance with EU copyright laws and ethical standards. The aim is to keep the development of AI models on a trajectory that is ethical, beneficial to society, and contributes positively to social and economic well-being.


Innovation


The Act also maintains the momentum of AI’s development by fostering innovation and competitiveness. This will be particularly beneficial for SMEs and start-ups, with measures to reduce administrative burdens and to promote international cooperation and collaboration. Furthermore, the Act encourages the use of regulatory sandboxes and real-world testing environments to develop and train innovative AI systems.


The UK Government AI Framework: 5 Core Principles


The UK announced its own response to AI regulation in February of this year, in which the Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, described her aim to produce a ‘bold and considered approach that is strongly pro-innovation and pro-safety’. The framework acknowledges the rapid growth of AI while being grounded in a risk-based approach similar to its EU counterpart. To address key concerns surrounding societal harms, misuse, and autonomy risks, it puts forward five core cross-sectoral principles to mitigate potential dangers:


  1. Safe, Secure & Robust: AI applications should function securely, safely, and robustly, with risks carefully managed.
  2. Appropriate Transparency: Organisations developing and deploying AI should be able to communicate the context in which it is used as well as how the system operates.
  3. Fairness: AI should comply with existing laws such as the Equality Act 2010 and UK GDPR, and not discriminate against individuals or create unfair commercial outcomes.
  4. Accountability & Governance: Measures are needed to ensure the appropriate oversight and clear accountability for the ways in which AI is used.
  5. Contestability & Redress: People need clear routes through which to dispute harmful outcomes or decisions generated by AI.


In following these values, the UK hopes to fulfil its goal ‘to make the UK a great place to build and use AI that changes our lives for the better’.


Key Differences between the UK and EU AI Legislations


The primary difference between the EU’s and the UK’s respective approaches to AI regulation is that, where the former requires a new European agency, national authorities, and numerous registration and compliance processes in order to operate, the latter is much more flexible. The UK framework works by asking existing regulatory bodies to interpret and adapt its principles within their own sectors.


As such, the UK framework is arguably the more practical approach, given that such bodies are likely already considering the impact of AI. The EU AI Act, on the other hand, can be considered ‘top-heavy’: it may become bogged down in administration, able to focus only on the largest or most scandalous AI incidents. It also risks becoming outdated quickly as AI evolves and outpaces regulation.


One way to visualise this is that the EU offers a horizontal, top-down approach, while the UK is operating a more agile, vertical system – in other words, the EU is prescriptive whereas the UK is principles-based.


This is not to say that the UK’s approach is without its drawbacks, however. It requires current regulators to become AI-savvy quickly, and delegates the interpretation of principles to the discretion of each body. This may produce patchy or inconsistent approaches across sectors, and gives companies more opportunities to exploit gaps.


Summary


To conclude: though the respective EU and UK approaches to regulating the swift development of AI share certain similarities and notable differences, each with strengths and weaknesses relative to its goals, both represent a global interest in keeping AI within a strict legal and ethical framework. This is important for maintaining the safety and transparency of an industry that has the potential to introduce irreparable risk, while also increasing its momentum and encouraging innovation in a way that is pragmatic, beneficial, and principled.


How we Can Help


With increased regulation come further considerations and heightened scrutiny on your business. AI is an increasingly prevalent and useful tool, so to make sure you are utilising it to its full potential while remaining compliant with worldwide standards, contact Cambridge Management Consulting. Our Digital & Innovation team is equipped with combined decades of real-world experience and an acute, up-to-date knowledge of market trends, regulations, and technologies, ensuring your business makes the most of our evolving digital landscape. Contact Rachi Weerasinghe to learn more.

