AI Regulation: Can Policy Keep Up with its Potential?

Anthony Aarons

Though the term ‘Artificial Intelligence’ (AI) was first coined by John McCarthy in the 1950s, its prevalence in day-to-day life, conversation, and particularly in business has exploded since the start of this decade. With this has grown the power and capacity of the technology itself; the chart below illustrates the recent rapid evolution of AI’s capabilities across reading, writing, analysis, and generation.

It comes as no surprise, then, that this increased prevalence has been matched by commensurate anxieties about AI’s potential uses and their risks. These largely centre on the question mark hanging over this growth; in an interview with Adam Grant, Sam Altman, CEO of OpenAI, claimed that there are ‘huge unknowns of how this is [going to] play out’: ‘[AI is] not [going to] be as big of a deal as people think, at least in the short term. Long term, everything changes.’


The antidote to these concerns? Better, more robust, standardised governance to regulate this potential and bring light to these unknowns. AI at its core is a tremendously powerful force for ingenuity, efficiency, and creation, but more exacting control is required to ensure that it remains directed towards socially beneficial values and uses, and to limit the downside posed by serious, unchecked risks.


The general assumption, spurred by quotes like Altman’s above, is that AI is an unstoppable inevitability and a possible march towards dystopia. This is not the case: AI is fettered to the people who define it and use it. In this article, we will look at the ways in which these people have approached its governance and the differing attitudes towards how important this is.

Current Approaches to Regulating Artificial Intelligence

Currently, the overarching values which govern many countries’ and unions’ approaches to developing and implementing AI are the OECD’s AI Principles. These comprise five considerations that adherents must keep in mind, the ‘Principles’, and five recommendations the OECD makes to policy makers for putting them into practice.


With the objective of keeping AI humane, transparent, accountable, and aligned with human rights, the OECD’s Principles are as follows:


  • Inclusive growth, sustainable development, and wellbeing: AI platforms should be developed for the benefit of people and the planet.


  • Human rights and democratic values, including fairness and privacy: AI engineers should internalise the law, human rights, and democratic values in their projects.


  • Transparency and explainability: AI developers should disclose meaningful information and context surrounding their products.


  • Robustness, safety, and security: AI systems should be secure throughout their entire lifecycle so that there is no risk to safety at any point.


  • Accountability: AI actors should be accountable for the proper functioning of their products and platforms.


To this end, the OECD makes the following recommendations for nations, unions, and organisations to demonstrate these values:


  • Investing in AI research and development: Governments should encourage research and development into AI tools through public and private investment.


  • Fostering an inclusive AI-enabling ecosystem: Governments should ensure an inclusive, dynamic, sustainable digital ecosystem to cultivate AI development.


  • Shaping an enabling interoperable governance and policy environment for AI: Governments should promote agility in order to accelerate research to deployment.


  • Building human capacity and preparing for labour market transformation: Governments should collaborate with stakeholders to prepare for the changes AI will bring to work and society.


  • International co-operation for trustworthy AI: Governments should work with both the OECD and each other to maximise the opportunities of AI.


At the time of writing, 48 countries and one union have signed up as adherents to the OECD’s Principles, representing widespread recognition of the importance of governance. However, the ways in which these nations and bodies have filtered the Principles into their own policies and regulations do not always demonstrate the enthusiasm to make them imminent or even obligatory. The EU AI Act, for example, was one of the first attempts to govern the usage of AI at an international level, yet its rollout has been continually delayed; and Australia has laid out eight AI Ethics Principles and an AI Safety Standard, yet both are completely voluntary.


What this comes down to is evident: a two-fold concern between amplifying the innovation which the OECD promotes in order to maximise the potential of AI, and ensuring that this potential remains balanced and ethical, in line with the OECD’s emphasis on doing so safely and securely. Moreover, the removal of restrictions in some regions, coupled with clear misunderstandings of the borderless nature of AI, means that businesses and governments are focused on being first to market rather than first to set values and provide the guardrails needed to control this immensely powerful technology.

The Importance of Innovation

For many nations and legislative bodies, the reluctance to govern AI too stringently comes down to a commitment to preserving the space for innovators to make the most of its promise and wide-ranging potential. The UK, for example, has resisted publishing a comprehensive, regulated AI act for fear of weakening the country’s attractiveness to AI companies. The Prime Minister of the UK, Keir Starmer, stated that ‘Instead of over-regulating these new technologies, we’re seizing the opportunities they offer’, and has unveiled the UK’s AI Opportunities Action Plan. This avoids the risk-based approach of the EU, which some have criticised as too prescriptive, in favour of sustained economic growth driven by giving businesses the incentive to innovate and invest.


These commitments are founded on the principles of empowering talent and spreading the transformative effects of AI in order to drive economic growth, benefit public services, and increase personal opportunities. By giving AI entrepreneurs the freedom to realise their ideas without the buffer of regulation, these governments aim to steer innovation in a positive direction from the inside out, rather than imposing it externally through governance and law.


Similarly, the US, which aligns with the UK’s pro-innovation attitude towards AI, currently lacks an overarching governing framework for AI, and President Donald Trump has gone so far as to issue an ‘Executive Order for Removing Barriers to American Leadership in AI’, which repeals previous policies and directives regarding its regulation. This makes it easier for developers to create AI products, platforms, and tools, especially when it comes to trialling and testing early models which risk-focused regulations would otherwise decelerate. However, given the very nature of AI and the experimentation required to develop it, the rush to deploy has left many businesses floundering: failed implementations, huge costs with little to show for them, and, in some cases, serious damage to their brands and business infrastructure.



As we shall cover in the next section, however, it is vital that this lack of regulatory control is mitigated through sound governance principles, as well as societal pressure to ensure that the development of AI is defined by what it should do, not what it could do.

The Pace of Change vs the Pace of Learning

The prioritisation of innovation over regulation raises concerns about the Pace of Change compared to the Pace of Learning.


Here, the Pace of Change refers to technological evolution, while the Pace of Learning represents our human capacity to understand and keep up with these developments. As the diagram above shows, the two intersected in the 1950s, but the introduction of compute (the technology which established the roots of AI) caused the gap between them to widen to an almost exponential degree.


This acceleration is encouraged by policies such as the US’s and the UK’s which, while we hope they will ultimately bring good through the technologies and industries they foster, also have the potential to bring risk or harm if left unchecked. In the US, the ‘Executive Order for Removing Barriers to American Leadership in AI’ replaced a more risk-based policy introduced during Joe Biden’s presidency, the ‘Executive Order for the Safe, Secure, and Trustworthy Development and Use of AI’. Similarly, the UK’s AI Opportunities Action Plan was unveiled in lieu of ‘The Artificial Intelligence (Regulation) Bill’, an attempt to introduce stricter legislation on AI’s use which has been continually delayed or dismissed.


Even for governing bodies which are attempting to regulate AI and its applications, there are concerns about how successfully these efforts keep stride with the Pace of Change. The EU AI Act, for example, has been criticised for its delayed and ambiguous timeline. Though it was published in March 2024, the first provisions only came into effect in February 2025, and there is uncertainty as to the rollout of its subsequent stages, giving the Pace of Change a significant head start.



Despite this, the desire for robust governance remains. While the EU AI Act’s phases take time to implement, the EU has introduced an ‘AI Pact’ in the interim, which organisations can sign to display their endorsement of the EU AI Act before it officially goes into effect. Thus far, over 200 organisations have signed the pact, representing a commitment to balancing AI innovation with security and the protection of human dignity and rights.

Conclusion

From an analysis of the different attitudes towards regulating AI laid out in this article, it is clear that there is a balance to be struck between maximising the innovative potential of AI to make a positive change, and ensuring that this change remains strictly positive through robust and holistic governance. After all, it is not necessarily the AI tools and platforms themselves which pose the biggest risk, but those who develop and use them; by adhering to safe and secure legislation, they can ensure that their products are engineered with people-forward principles at the forefront.



At Cambridge Management Consulting, we are equipped with the knowledge, expertise, and experience to ensure that your AI strategies remain compliant with policies and regulation to avoid penalties, and that they are built around the safety of your people and data. Get in touch now to build an approach to AI that balances safety with success.


With over 25 years of experience in AI and leadership management, Anthony Aarons is an Associate for AI & Risk at Cambridge Management Consulting.


Get in touch with Anthony on LinkedIn or use the Contact Form below.

