What Is the Third Way to Successful AI Adoption?

Tom Burton



Not a single day passes without exposure to news or opinion about artificial intelligence (AI). It may be positive, espousing the benefits of a future world run by technology, or it may be more pessimistic, describing the many dangers to society. Either way, you can’t ignore it.


Like many topics that dominate the media, it has become extremely polarised. I’ve found that a significant number of organisations currently fall into one of two diametrically opposed tribes:


  • Adoption at All Costs: At one end of the spectrum we have those organisations that are charging in with boundless enthusiasm — “Let’s switch on Copilot and see what happens” or “Move fast and break things is our dictum”. But what if the thing that gets broken is the entire organisation?


  • Cautious Paralysis: At the other end, we have those organisations that want to make sure they fully understand what they are doing before they put the business at risk. They might be choosing to do nothing until the technology has been proven by the early adopters. Or they may be trialling a variety of Proofs of Concept (PoCs) while never getting through Proof of Value (PoV) and into production. They may comfort themselves by dreaming of a future where they can sit smugly saying “I told you so”, surrounded by the devastation of early adopters. But the alternative future is one where they have been left behind and are struggling to catch up.


Like most polarised topics, surely the wise move is to chart a course between the two extremes. Tony Blair was famous for his consistent adoption of 'The Third Way'. This article describes a similar approach to AI adoption, and the management of risk on the route to the benefits that lie ahead.


Stage 1: Definition & Risk


While the approach being described could, and I’d argue should, be applied to all AI initiatives, including machine learning (ML) and narrow AI, this article focuses more on Generative AI (GenAI).


The risks from careless employment of GenAI are greater, and the ease with which it can be adopted – particularly compared to ML – means it is less likely to have robust governance wrapped around its adoption.


What Problems Are You Trying to Solve with AI?


Our objective should be tangible, measurable benefits delivered through the employment of AI with a tolerable degree of risk. We want to do this again and again, consistently, and efficiently. One mode of failure would be wasting resources in experiments that never get into production to earn value. Another mode would be the deployment of an initiative into production but with devastating consequences from risks that we didn’t consider.


To get to that objective we need to decide what problems we are really trying to solve. It may seem obvious, but I suspect a lot of experimentation at the moment is being led by the technology and not by a genuine business need. What problems does your business have? Would you like to take cost out of your overheads? Do you want to differentiate your market offering with a more personalised customer experience? Perhaps you want to increase your marketing ROI by scaling up content with tailored messages that respond to changing sentiment and market news?


These 'Mid-sized Hairy Audacious Goals' (MHAGs) are aspirations. The mind should be open and not encumbered by the art of the possible. But MHAGs are too broad or ill-defined to execute. They need to be broken down into bounded, measurable initiatives that can be assessed, triaged and executed on their own. By breaking the MHAGs into a discrete set of initiatives you keep them loosely coupled, so that if one isn’t currently possible the others can proceed.


The remainder of this article focuses on how to conduct this assessment, triage and execution.


Define the Solution


You can’t design what you haven’t defined. The objectives of each initiative need to be defined. What are you planning to achieve and how are you going to do it? What part does technology play, what type of technology, what data and information will it need, what will it do to these inputs, and what output will it generate? What part will people play, and what other resources and infrastructure will be involved?


It is also important to define what good looks like. What would a high-quality transactional output or outcome be, and how would you recognise a low-quality one? What is the quality threshold? This will be essential when you are doing the PoC and PoV, because it lets you objectively determine whether the concept can be achieved to the desired level of quality.


Don’t stop at the threshold though; also define what the more challenging scenarios or edge cases will be. Using a solution to calculate 2+2 to prove that AI can do mathematics is not going to be a particularly representative test if the business objective is to be able to solve fourth-order differential equations.
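To make the idea of quality thresholds and edge cases concrete, here is a minimal sketch of a PoC evaluation harness in Python. Everything in it is illustrative: `solve` is a hypothetical stand-in for the AI solution under test, and the cases and threshold would come from your own definition of 'good'.

```python
# Sketch of a PoC evaluation harness: run baseline and edge-case inputs
# through the candidate solution and score the results against a
# pre-agreed quality threshold.

def solve(prompt: str) -> str:
    """Hypothetical placeholder for the AI solution under test."""
    return "4" if prompt == "2+2" else "unknown"

# Each case pairs an input with a checker that encodes 'what good looks like'.
test_cases = [
    ("2+2", lambda out: out.strip() == "4"),                 # trivial baseline
    ("d^4y/dx^4 = 0, y(0)=1", lambda out: out != "unknown"), # harder edge case
]

QUALITY_THRESHOLD = 0.9  # agreed before the PoC starts, not after

def pass_rate(cases):
    passed = sum(1 for prompt, check in cases if check(solve(prompt)))
    return passed / len(cases)

rate = pass_rate(test_cases)
print(f"pass rate: {rate:.0%}, meets threshold: {rate >= QUALITY_THRESHOLD}")
```

The point of agreeing the checkers and the threshold up front is that the PoC verdict becomes an objective measurement rather than a matter of opinion after the fact.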


Each initiative, no matter how trivial, is going to require time and resource and will also attract some risk. Just like any business case, the benefits need to be defined to justify the cost, and to be validated in the PoV.


What Could Possibly Go Wrong?


There will be risks of many different types. It is far better to consider these risks before committing significant resources, than to wait for them to impact in production where the consequences may be dire.


Most risks are manageable, but by considering them at the outset you can decide whether 'the juice is worth the squeeze'. Are the likelihood and consequence of a risk too great for the benefits? By understanding the risks up-front you can also design tests in the PoC and PoV to assess whether the safety and quality concerns are likely to manifest themselves.


Model Safety & Quality


We are used to considering technology to be logical, predictable and consistent. If you feed numbers into your calculator you can rely on the answers that are produced. The most you might worry about is whether there are any rounding errors, but the number of significant digits produced means this is unlikely to be a significant concern.


Modern AI systems are not deterministic in this way. The answer you get will depend on the way you phrase the question, the data the model was trained on, and whether it has learnt from any new data since you last asked. This introduces a number of risks to the quality of the output and the safety of using that output to carry out an action.


  • Hallucinations: When a model doesn’t have sufficient information to respond to a particular prompt, current GenAI models will invent what they consider the most believable response. Lawyers have been sanctioned by judges for citing fictional case law and precedent that was generated by GenAI. This is now more widely recognised, but there is not yet a reliable solution to the problem; it is a core characteristic of the way these GenAI models operate. What would be the consequences of a hallucination generated in your initiative?


  • Bias: GenAI models are trained on huge volumes of information — primarily from the internet — that has been generated by people over the years (notwithstanding the risk of Model Collapse below). We know that people are biased, whether consciously or subconsciously. Even if the authors didn’t produce biased information, the sheer volume of material carries biases in its own right. For instance, far more material is available on the internet about some demographics than others. Certain demographics are more likely to be portrayed in a particular light than others. These biases can flow through into the conclusions that the models produce. What would be the consequences to your business if it acted on an initiative based on biased outputs?


  • Model Collapse: GenAI models produce progressively lower-quality outputs when they are trained on the output of other AI models. This effect, called Model Collapse, has been demonstrated in research. Over time, more AI-generated content will be published, and as it becomes a greater proportion of the total training data there is a risk that it starts to undermine the models themselves. Similar in effect to hallucinations and bias, it has the potential to create systemic weaknesses with significantly greater impacts.


  • Unintended Behaviour: Not even the engineers who design the tools can truly explain why they behave in the ways that they sometimes do. Early research has already demonstrated behaviours that are of concern. In one example, the latest versions of ChatGPT and DeepSeek were pitted against a powerful chess engine and instructed to ‘win against a powerful chess engine’. On multiple occasions the AI beat the opponent by illegally modifying the location of the pieces — cheating, for want of a better word. The rationale the AI gave was that “The task is to ‘win against a powerful chess engine’ – not necessarily to win fairly in a chess game”.


  • Fundamental Limitations in Models: While we might treat AI tools as analogous to human intelligence and reasoning, this is a deception. There are fundamental differences between human cognitive capabilities and the way that current AI reasons. These models have no comprehension of truth, as the hallucination examples above illustrate. They also have no appreciation of the fact that there is more that they don’t know than what they do (we wrote an earlier article about this area of risk). This places limitations on the capabilities of these models, which can have a direct impact on the safety of the decisions they make.


Information & Data Security


Initiatives will generally require us to provide our own information and connect the AI solutions up to other systems we use. This introduces new data and information security risks that need to be considered. They are much the same as any other changes made to your digital services, but there are some that are more specific to the use of GenAI:


  • Contextual Blindness: If the information you provide for the model to analyse and use as the basis of its output doesn’t define any context then the model won’t be able to differentiate context and act accordingly. For example, let’s say you are a consultancy firm and have confidential information about a significant number of clients. You have been asked by one of your clients for advice on their strategy and use a GenAI tool, perhaps Microsoft Copilot, to do the analysis. If this tool has access to all clients’ information and there is nothing to identify the client that a document relates to, there is the risk that it will employ and possibly reproduce confidential information from another client when conducting the research. This could very easily breach confidentiality and non-disclosure clauses when released to the requesting client.
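One practical mitigation for contextual blindness is to tag every piece of information with its context, such as the client it belongs to, and filter on that tag before anything reaches the model. The Python sketch below illustrates the principle; the names (`Document`, `retrieve_for_client`) are illustrative, not from any particular product.

```python
# Sketch of a context guard: tag every document with the client it belongs
# to, and filter by that tag before any content is passed to a GenAI tool.

from dataclasses import dataclass

@dataclass
class Document:
    client_id: str  # explicit context the model cannot infer on its own
    text: str

corpus = [
    Document("client_a", "Client A five-year strategy draft"),
    Document("client_b", "Client B confidential pricing model"),
]

def retrieve_for_client(client_id: str, docs: list[Document]) -> list[str]:
    """Only documents explicitly tagged for this client reach the model."""
    return [d.text for d in docs if d.client_id == client_id]

context = retrieve_for_client("client_a", corpus)
print(context)  # Client B's material is excluded by construction
```

The design choice here is that confidentiality is enforced before the model is involved at all, rather than hoping the model will respect boundaries it cannot see.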


  • Data Use in Model Training: If you are using a multi-tenanted or SaaS AI tool then you need to find out how customer data is used to train the model. If training from customer data is isolated and only used in analysis and responses for that customer, then there should not be a problem. But if general training makes use of customer data (this tends to be the default when people are using the free versions) then there is the risk that your own or your client’s confidential information may leak into responses to other customers’ transactions.


  • Scope Creep: GenAI tools have a voracious appetite for data, and once data has been consumed there may be no way to erase it. You should therefore constrain the information a tool can access, limiting it to what it needs for the task or intended purpose. After all, if you recruited a new employee to write your marketing copy you wouldn’t give them access to every piece of information in your organisation.


  • Specific AI Attack Modes: There are a variety of malicious tactics that can be used to attack, manipulate or otherwise corrupt AI models and their data. The following three are the most frequent concerns:


  • Data Poisoning: The attacker deliberately corrupts the data used to train the model, causing it to produce inaccurate or deliberately misleading outputs.


  • Prompt Injection: Like SQL Injection attacks on web applications, carefully crafted entries are made in the hope that they will get through any input validation and cause the model to generate incorrect outputs, release sensitive information, or trigger deliberately damaging actions.


  • Model Inversion: The attacker extracts sensitive information or training data from the model by using the outputs generated in response to specific inputs to make inferences about the knowledge the model was trained on.


  • Conventional Cyber Risk: Whether you are hosting the technology yourself or using a SaaS service, there will be all the normal cyber risks that need to be considered and controlled. Depending on the nature of your business and the data you are giving the tools access to, there may be additional regulatory obligations to meet as well, such as privacy regulations if the datasets contain personal information.
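To illustrate the prompt-injection risk above, here is a deliberately simple Python sketch of two basic precautions: keeping trusted instructions separate from untrusted input, and screening input for obvious override attempts. This shows the principle only, not a complete control; real defences are layered, and no input filter is reliable on its own.

```python
# Illustrative input handling for prompt injection: delimit untrusted
# content and reject obvious instruction-override attempts before the
# text reaches the model.

import re

SYSTEM_INSTRUCTIONS = "Summarise the customer message. Never reveal internal data."

# A crude screen for one well-known attack phrasing; real attacks vary widely.
SUSPICIOUS = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

def build_prompt(user_input: str) -> str:
    if SUSPICIOUS.search(user_input):
        raise ValueError("input rejected: possible prompt injection")
    # Untrusted content is clearly delimited, never appended as instructions.
    return f"{SYSTEM_INSTRUCTIONS}\n<user_input>\n{user_input}\n</user_input>"

print(build_prompt("Please summarise my order issue."))
```

Even this toy example makes the architectural point: the application, not the model, should decide what counts as an instruction.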


Does Your Business Have What It Takes?


So, you have decided what your objectives are, how the solution will achieve them, how you will recognise 'good' in the more challenging circumstances, and what risks your organisation may be exposed to. You still haven’t started to implement anything, but you do need to assess whether you have the data and other resources necessary to make it work.


Availability of data and information is the dominant issue here. GenAI has made this simpler than conventional ML, where the quality, structure and availability of datasets at scale present a high bar to clear. But even with GenAI, you will need quality, structured (or at least unambiguous) data at scale to produce reliable answers to all but the most trivial problems. And beyond availability you will also need to confirm your authority to use it. The technology may be immature, but legislation and regulation have hardly even got out of the starting blocks.


Data isn’t the only cost though. There will be implementation and recurring costs for the technology, and for any other contributions made by people. But the most frequently overlooked costs are the business changes that will be required to realise the benefits. What policies and procedures will need to be changed? What training and education will be necessary? Will the transition be directed or encouraged? What impact will the change have on other areas of the business, and does it need to be coordinated with a broader programme of change to avoid inefficiency simply being moved around rather than eliminated? Change is hard, and applying new technology to the same old processes is just a more expensive way of doing the same old thing.


Stage 2: Triage, Prioritise and Execute


Many of the original initiatives identified in the first stage may already have been discarded, or deferred to a future point when the technology has improved further. But hopefully you have a few initiatives that appear viable and worthwhile. You should also be confident that you have considered the pitfalls, understand how you will manage the risks, and have a high probability of being able to take most of them quickly through PoC, PoV and into production.


Prove the Concept


In the PoC you are seeking to prove that the hypothesis is possible. A fully working solution is tested at low scale to verify whether it meets the objectives. The more challenging edge cases are run to see whether they still meet the quality thresholds.


Prove the Value


You can now scale up the trial, using it for its intended purpose, but with tight oversight. Full trust in the output still can’t be taken for granted. Frequently you will want to conduct the PoV in parallel with the legacy approach to identify divergence and quality issues. Or you may put a manual check in place if no legacy approach exists. The test is whether the solution is reliably delivering the expected benefits for the anticipated costs.
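A parallel run of this kind can be as simple as comparing the two outputs case by case. The Python sketch below is illustrative: `ai_output` and `legacy_output` are hypothetical stand-ins for the results of the new solution and the legacy process on the same inputs.

```python
# Sketch of a PoV parallel-run check: run the AI solution alongside the
# legacy process on the same inputs and measure how often they diverge.

ai_output =     ["approve", "reject", "approve", "approve"]
legacy_output = ["approve", "reject", "reject",  "approve"]

# Indices where the two approaches disagree.
divergences = [i for i, (a, b) in enumerate(zip(ai_output, legacy_output)) if a != b]
divergence_rate = len(divergences) / len(ai_output)

print(f"diverged on cases {divergences}, rate {divergence_rate:.0%}")
# Divergent cases go to a human reviewer before any benefit is claimed.
```

Tracking the divergence rate over the PoV gives an objective signal for when oversight can safely be relaxed.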


Manage in Production


And now we have the opportunity to reap the rewards. We know that it works, are confident that we have the risks covered, have a change programme in place to drive adoption and transition, and are ready to go.


It is necessary to remain cautious though. The risks identified up front may still be present, despite gaining confidence that they are tolerable through PoC and PoV. A level of supervision and monitoring to ensure that what was predicted is borne out in reality may be wise. Models learn, and so they change. The way one behaved yesterday does not guarantee that it will behave in the same way tomorrow. You also may have overlooked certain risks.


As suggested in a previous article of ours, the concept of trust in AI has many similarities with the level of trust we have in people. Trust is established over time; there will always be a limit to the level of trust you have, and any trust needs to be regularly revalidated.


Conclusion


The aim of the approach described in this article is to successfully adopt AI, delivering the greatest benefits while managing the consequential risks. By following this approach consistently an organisation can minimise the investments that never get into production.


Highlighting the risks that arise from AI adoption does not deny the benefits that can be delivered; it enables those benefits to be realised with the greatest chance of success.


How Cambridge MC Can Help


Navigating between unchecked enthusiasm and cautious paralysis requires a balanced, strategic approach that maximises value while managing risk at every stage. At Cambridge Management Consulting, our Data and AI services are designed to help you chart this 'Third Way,' by ensuring your AI initiatives are aligned with genuine business needs, governed by pragmatic risk management, and positioned for measurable success.


Partner with us to:


  • Unlock tangible business value from your data and AI investments.


  • Implement AI solutions with strong governance, security, and ethical oversight.


  • Move confidently from proof of concept to production, minimising wasted resources and maximising ROI.


  • Build trust in AI by establishing transparent, repeatable processes that deliver consistent results.


Let us help you realise the full potential of AI: safely, strategically, and sustainably.


Discover more about our Data and AI services or use the contact form below to get in touch with your query.



A Data centre in a field
by Stuart Curzon 22 August 2025
Discover how Deep Green, a pioneer in decarbonised data centres, partnered with Cambridge Management Consulting to expand its market presence through an innovative, sustainability‑driven go‑to‑market strategy | READ CASE STUDY
Crystal ball on  a neon floor
by Jason Jennings 21 August 2025
Discover how digital twins are revolutionising project management. This article explores how virtual replicas of physical systems are helping businesses to simulate outcomes, de-risk investments and enhance decision-making.
A vivid photo of the skyline of Stanley on the Falkland Islands
by Cambridge Management Consulting 20 August 2025
Cambridge Management Consulting (Cambridge MC) and Falklands IT (FIT) have donatede £3,000 to the Hermes/Viraat Heritage Trust to support the learning and development of young children in the Falkland Islands.
A modern office building on a wireframe floor with lava raining from the sky in the background
by Tom Burton 29 July 2025
What’s your organisation’s type when it comes to cyber security? Is everything justified by the business risks, or are you hoping for the best? Over the decades, I have found that no two businesses or organisations have taken the same approach to cybersecurity. This is neither a criticism nor a surprise. No two businesses are the same, so why would their approach to digital risk be? However, I have found that there are some trends or clusters. In this article, I’ve distilled those observations, my understanding of the forces that drive each approach, and some indicators that may help you recognise it. I have also suggested potential advantages and disadvantages. Ad Hoc Let’s start with the ad hoc approach, where the organisation does what it thinks needs to be done, but without any clear rationale to determine “How much is enough?” The Bucket of Sand Approach At the extreme end of the spectrum is the 'Bucket of Sand' option which is characterised by the belief that 'It will never happen to us'. Your organisation may feel that it is too small to be worth attacking or has nothing of any real value. However, if an organisation has nothing of value, one wonders what purpose it serves. At the very least, it is likely to have money. But it is rare now that an organisation will not hold data and information worth stealing. Whether this data is its own or belongs to a third party, it will be a target. I’ve also come across businesses that hold a rather more fatalistic perspective. Most of us are aware of the regular reports of nation-state attacks that are attempting to steal intellectual property, causing economic damage, or just simply stealing money. Recognising that you might face the full force of a cyber-capable foreign state is undoubtedly daunting and may encourage the view that 'We’re all doomed regardless'. 
If a cyber-capable nation-state is determined to have a go at you, the odds are not great, and countering it will require eye-watering investments in protection, detection and response. But the fact is that they are rare events, even if they receive disproportionate amounts of media coverage. The majority of threats that most organisations face are not national state actors. They are petty criminals, organised criminal bodies, opportunistic amateur hackers or other lower-level actors. And they will follow the path of least resistance. So, while you can’t eliminate the risk, you can reduce it by applying good security and making yourself a more challenging target than the competition. Following Best Practice Thankfully, these 'Bucket of Sand' adopters are less common than ten or fifteen years ago. Most in the Ad Hoc zone will do some things but without clear logic or rationale to justify why they are doing X rather than Y. They may follow the latest industry trends and implement a new shiny technology (because doing the business change bit is hard and unpopular). This type of organisation will frequently operate security on a feast or famine basis, deferring investments to next year when there is something more interesting to prioritise, because without business strategy guiding security it will be hard to justify. And 'next year' frequently remains next year on an ongoing basis. At the more advanced end of the Ad Hoc zone, you will find those organisations that choose a framework and aim to achieve a specific benchmark of Security Maturity. This approach ensures that capabilities are balanced and encourages progressive improvement. However, 'How much is enough?' remains unanswered; hence, the security budget will frequently struggle for airtime when budgets are challenged. It may also encourage a one-size-fits-all approach rather than prioritising the assets at greatest risk, which would cause the most significant damage if compromised. 
Regulatory-Led The Regulatory-Led organisation is the one I’ve come across most frequently. A market regulator, such as the FCA in the UK, may set regulations. Or the regulator may be market agnostic but have responsibility for a particular type of data, such as the Information Commissioner’s Office’s interest in personal data privacy. If regulatory compliance questions dominate most senior conversations about cyber security, the organisation is probably in this zone. Frequently, this issue of compliance is not a trivial challenge. Most regulations don’t tend to be detailed recipes to follow. Instead, they outline the broad expectations or the principles to be applied. There will frequently be a tapestry of regulations that need to be met rather than a single target to aim for. Businesses operating in multiple countries will likely have different regulations across those regions. Even within one country, there may be market-specific and data-specific regulations that both need to be applied. This tapestry is growing year after year as jurisdictions apply additional regulations to better protect their citizens and economies in the face of proliferating and intensifying threats. In the last year alone, EU countries have had to implement both the Digital Operational Resilience Act (DORA) and Network and Infrastructure Security Directive (NIS2) , which regulate financial services businesses and critical infrastructure providers respectively. Superficially, it appears sensible and straightforward, but in execution the complexities and limitations become clear. Some of the nuances include: Not Everything Is Regulated The absence of regulation doesn’t mean there is no risk. It just means that the powers that be are not overly concerned. Your business will still be exposed to risk, but the regulators or government may be untroubled by it. Regulations Move Slowly Cyber threats are constantly changing and evolving. 
As organisations improve their defences, the opposition changes their tactics and tools to ensure their attacks can continue to be effective. In response, organisations need to adjust and enhance their defences to stay ahead. Regulations do not respond at this pace. So, relying on regulatory compliance risks preparing to 'Fight the last war'. The Tapestry Becomes Increasingly Unwieldy It may initially appear simple. You review the limited regulations for a single region, take your direction, and apply controls that will make you compliant. Then, you expand into a new region. And later, one of your existing jurisdictions introduces an additional set of regulations that apply to you. Before you know it, you must first normalise and consolidate the requirements from a litany of different sets of rules, each with its own structure, before you can update your security/compliance strategy. Most Regulations Talk about Appropriateness As mentioned before, regulations rarely provide a recipe to follow. They talk about applying appropriate controls in a particular context. The business still needs to decide what is appropriate. And if there is a breach or a pre-emptive audit, the business will need to justify that decision. The most rational justification will be based on an asset’s sensitivity and the threats it is exposed to — ergo, a risk-based rather than a compliance-based argument. Opportunity-Led Many businesses don’t exist in heavily regulated industries but may wish to trade in markets or with customers with certain expectations about their suppliers’ security and resilience. These present barriers to entry, but if overcome, they also offer obstacles to competition. The expectations may be well defined for a specific customer, such as DEF STAN 05-138 , which details the standards that the UK Ministry of Defence expects its suppliers to meet according to a project’s risk profile. Sometimes, an entire market will set the entry rules. 
The UK Government has set Cyber Essentials as the minimum standard to be eligible to compete for government contracts. The US has published NIST 800-171 to detail what government suppliers must meet to process Controlled Unclassified Information (CUI). Businesses should conduct due diligence on their suppliers, particularly when they provide technology, interface with their systems or process their data. Regulations, such as NIS2, are increasingly demanding this level of Third Party Risk Management because of the number of breaches and compromises originating from the supply chain. Businesses may detail a certain level of certification that they consider adequate, such as ISO 27001 or a System & Organization Controls (SOC) report. By achieving one or more of these standards, new markets may open up to a business. Good security becomes a growth enabler. But just like with regulations, if the security strategy starts with one of these standards, it can rapidly become unwieldy as a patchwork quilt of different entry requirements builds up for other markets. Risk-Led The final zone is where actions are defined by the risk the business is exposed to. Being led by risk in this way should be natural and intuitive. Most of us might secure our garden shed with a simple padlock but would have several more secure locks on the doors to our house. We would probably also have locks on the windows and may add CCTV cameras and a burglar alarm if we were sufficiently concerned about the threats in our area. We may even install a secure safe inside the house if we have some particularly valuable possessions. These decisions and the application of defences are all informed by our understanding of the risks to which different groups of assets are exposed. The security decisions you make at home are relatively trivial compared to the complexity most businesses face with digital risk. 
Over the decades, technology infrastructures have grown, often becoming a sprawling landscape where the boundaries between one system and another are hard to determine. In the face of this complexity, many organisations talk about being risk-led but, in reality, operate in one of the other zones. There is no reason why an organisation can’t progressively transform from an Ad Hoc, Regulatory-Led or Opportunity-Led posture into a Risk-Led one. This transformation may need to include a strategy to enhance segmentation and reduce the sprawling landscape described above. Risk-Led also doesn’t mean applying decentralised, bespoke controls on a system-by-system basis. The risk may be assessed against the asset or a category of assets, but most organisations usually have a framework of standard controls and policies to apply or choose from. The test to tell whether an organisation genuinely operates in the Risk-Led zone is whether they have a well-defined Risk Appetite. This policy is more than just the one-liner stating that they have a very low appetite for risk. It should typically be broken down into different categories of risk or asset types; for instance, it might detail the different appetites for personal data risk compared to corporate intellectual property marked as 'In Strict Confidence'. Each category should clarify the tolerance, the circumstances under which risk will be accepted, and who is authorised to sign off. I’ve seen some exceptionally well-drafted risk appetite policies that provide clear direction. Once in place, any risk review can easily understand the boundaries within which they can operate and determine whether the controls for a particular context are adequate. I’ve also seen many that are so loose as to be unactionable or, on as many occasions, have not been able to find a risk appetite defined at all. In these situations, there is no clear way of determining 'How much security is enough'. 
Organisations operating in this zone will frequently still have to meet regulatory requirements and individual customer or market expectations. However, this regulatory or commercial risk assessment can take the existing strategy as the starting point and review the relevant controls for compliance. That may prompt an adjustment to security in certain places. But when challenged, you can defend your strategy because you can trace decisions back to the negative outcomes you are attempting to prevent — and this intent is in everyone’s common interest. Conclusions Which zone does your business occupy? It may exist in more than one — for instance, mainly aiming for a specific security maturity in the Ad Hoc zone but reinforced for a particular customer. But which is the dominant zone that drives plans and behaviour? And why is that? It may be the right place for today, but is it the best approach for the future? Apart from the 'Bucket of Sand' approach, each has pros and cons. I’ve sought to stay balanced in how I’ve described them. However, the most sustainable approach is one driven by business risk, with controls that mitigate those risks to a defined appetite. Regulatory compliance will probably constitute some of those risks, and when controls are reviewed against the regulatory requirements, there may be a need to reinforce them. Also, some customers may have specific standards to meet in a particular context. However, the starting point will be the security you believe the business needs and can justify before reviewing it through a regulatory or market lens. If you want to discuss how you can improve your security, reduce your digital risk, and face the future with confidence, get in touch with Tom Burton, Senior Partner - Cyber Security, using the below form.