Achieving ultra-reliable networks in 2022: the challenge of delivering 5G and low latency

Jon Wilton

Capacity Europe 2021

This article was written to support a panel discussion at Capacity Europe 2021 on 21 October in London. The panel is titled ‘How Digitalisation is Driving the Interconnectivity Landscape’, and it is being moderated by Charles Orsel des Sagets, Managing Partner at Cambridge Management Consulting for Europe and LATAM. Contributions to the article were made by Ivo Ivanov, CEO of DE-CIX International; Tim Passingham, Chairman at Cambridge Management Consulting; and Eric Green, Senior Partner at Cambridge Management Consulting. Thank you to everyone involved.  


How the pandemic has shaped the roadmap for internet connectivity in Europe

The pandemic has brought into sharp focus the importance of reliable and flexible networks at home. The switch to remote working, and the speed with which meetings from our home desks became the norm, brought with it surges in internet traffic and demand for reliable, stable connections.


Networks across Europe coped well, for the most part, but questions hang over the near horizon about how carriers will adapt and scale for both a growing remote workforce and the predicted rise of new technologies.


5G is part of the answer (but only a part) and its development and rollout will coincide with and drive innovations in IoT, autonomous vehicles, and AI. These emerging technologies also exist within larger tectonic shifts in society and culture, including increasing digitalisation, virtualisation and autonomy in services; and the beginnings of the decentralised application of blockchain technology.


As our society embraces digitalisation, a process accelerated by Covid-19, we ask: what are the major challenges carriers face heading into 2022 as they deliver on the twin fronts of infrastructure demand and customer expectation?


We will discuss the issue of low latency, particularly in meeting customer expectations shaped by video-centric content, remote working, IoT and gaming. We will also explore metrics for customer experience, and conclude with the impact of 5G technology.

Measuring Quality of Experience for internet connectivity

With the surge in internet traffic during lockdowns potentially marking the start of a sustained rise in demand, driven by online gaming and growing markets for game streaming and VR, there is a spotlight on latency as a key indicator of customer experience.


Let us first talk more broadly about indicators of network quality in 2021 and beyond.

Digital Equality - The widening speed gap in Europe

Europe’s internet speeds have increased by more than 50% in the last 18 months. However, this progress has come with widening gaps between urban and rural areas, and between Northern Europe and South-Eastern Europe.


The UK, too, lags behind many of its Western European neighbours when it comes to average internet speed: it was placed 47th in a study conducted in 2020. In fact, the average broadband speed in the UK was less than half the Western European average.


The EU has a stated goal to be the most connected continent by 2030. It has already taken steps towards this by ending roaming charges and introducing a price cap on intra-EU communications. The key goal is for every European household to have access to high-speed internet coverage by 2025 and gigabit connectivity by 2030.


The elevation of internet access to a necessary human right is of course encouraging, and so are the targets set by the EU. However, for these targets to be truly meaningful, there needs to be progress on a number of challenges to connectivity across Europe. Redefining the metrics we use to track this progress is also vital.

Measuring Quality of Experience (QoE)

There are a variety of problems with measuring internet speeds in a comparable way. Usually, ISPs present average speeds in Mbps over a given period, or sometimes the percentage of the plan speed achieved over that period.


As bandwidth in many European countries moves towards, and beyond, 100 Mbps, this proxy is becoming a weaker indicator of user experience.


There are also a number of key reasons why figures published by an ISP might be misleading compared to actual user experience. 


Some of these problems are as follows:


  • Lab-testing of internet speed does not replicate the real-world chain of devices/hosts involved in sending and receiving packets


  • Averages of Mbps ignore speeds at peak periods, when the network is congested and bandwidth is throttled (illustrated in the sketch after this list)


  • The ‘plan speed’ does not reflect actual speeds experienced in a household, where packet queuing and WiFi congestion affect users differently on the customer LAN


  • This metric ignores latency, which is becoming a better signal of internet experience in an age of video streaming and online gaming (more on this below)
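To illustrate why a headline average can be misleading, here is a minimal sketch (the hourly figures are invented for illustration, not real measurements) comparing an all-day mean with the throughput actually experienced during the evening peak:

```python
# Illustrative sketch: why a headline "average Mbps" can hide peak-hour congestion.
# The sample values below are hypothetical, not real ISP measurements.

hourly_mbps = {
    3: 98, 9: 95, 13: 92, 17: 88,    # off-peak hours: close to the plan speed
    19: 41, 20: 37, 21: 39, 22: 55,  # evening peak: congestion and throttling bite
}

average = sum(hourly_mbps.values()) / len(hourly_mbps)
peak = [v for hour, v in hourly_mbps.items() if 19 <= hour <= 22]
peak_average = sum(peak) / len(peak)

print(f"Headline average:     {average:.0f} Mbps")       # looks healthy
print(f"Evening peak (19-22): {peak_average:.0f} Mbps")  # what users actually feel
```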


Many voices in the industry are pushing for more holistic Quality of Experience (QoE) metrics to complement the current set of Quality of Service (QoS) measurements.


The difference between QoS and QoE is comparable to measuring the success of a call centre by how many calls it concludes in a given day: that metric completely ignores whether the caller’s problem was resolved, or how satisfied the caller felt about the interaction, the ‘experience’.


Research shows that users are happy when a website loads in under two seconds (QoE). If network management is calibrated with this information, bandwidth saved can be allocated elsewhere if necessary (QoS). 


Thus, one characteristic of QoE is the recognition that, above a certain threshold, better QoS often does not noticeably improve the user’s perception or experience of the service.
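As a toy illustration of this threshold effect (the two-second figure comes from the research cited above; the other bands are assumptions made purely for illustration), the sketch below maps a measured page-load time onto a coarse QoE rating; note that improving load time from 1.5 s to 0.5 s changes nothing from the user’s point of view:

```python
# Toy illustration of the QoE threshold effect: above a certain QoS level,
# further improvement does not change the user's perception.
# The two-second "satisfied" threshold comes from the research cited above;
# the other bands are illustrative assumptions.

def qoe_rating(page_load_seconds: float) -> str:
    """Map a measured page-load time (QoS) onto a coarse QoE rating."""
    if page_load_seconds <= 2.0:
        return "satisfied"
    if page_load_seconds <= 5.0:
        return "tolerating"
    return "frustrated"

for load_time in (0.5, 1.5, 2.5, 6.0):
    print(f"{load_time:>4.1f}s -> {qoe_rating(load_time)}")
# 0.5s and 1.5s both map to "satisfied": extra QoS spent below the
# threshold could be reallocated elsewhere on the network.
```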


This has some important ramifications in terms of design. For example, services such as online gaming rely on low latency far more than video streaming, where buffering protocols absorb lag. QoE can be used to design SLAs and network management that are specific to the needs of an individual service. 


If network providers can find practical ways of gathering QoE data, it can be used to build Autonomic Network Management (ANM) capabilities: AI-driven systems that tune network performance in real time in response to user experience.

Low latency and packet loss

Bandwidth has generally been king in the history of communication networks. Low latency has lagged behind (pun intended) as a priority when networks are upgraded.

What is latency?

From a QoE perspective, latency can be roughly defined as ‘the delay between a user’s action and the response of a web application’ – in QoS terms, this is the time taken for a data packet to make a round trip to and from a server (round trip delay). 


Latency is affected by many variables, but the main four are:


  • Transmission medium: The physical path between the start and end points; for example, a copper-based network is much slower than fibre-optic.


  • Network management: The efficiency of routers and other devices or software that manage incoming traffic


  • Propagation: The further apart two nodes are in the network, the higher the latency; every 100 miles of fibre-optic cable is estimated to add roughly 1 ms of latency (applied in the measurement sketch after this list)


  • Storage delays: Accessing stored data will generally increase latency
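As a rough illustration of the QoS definition above, the sketch below times a TCP handshake to estimate round-trip delay and then applies the 1 ms-per-100-miles rule of thumb (the host name and port are placeholders; a production tool would use dedicated probes and repeated measurements):

```python
# Minimal sketch: measure round-trip delay by timing a TCP handshake,
# then apply the rough "1 ms of latency per 100 miles of fibre" rule of thumb.
# "example.com" and port 443 are placeholders, not a recommended target.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return an approximate round-trip time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the completed handshake is our round trip
    return (time.perf_counter() - start) * 1000

rtt = tcp_rtt_ms("example.com")
# Very rough upper bound on one-way fibre distance implied by propagation alone
# (ignores routing, queuing and processing delays, so it overestimates distance).
miles_upper_bound = (rtt / 2) * 100
print(f"RTT ~{rtt:.1f} ms, implying at most ~{miles_upper_bound:.0f} miles of fibre one way")
```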

Jitter

There are two types of latency issue.


One is the ‘lag’ (delay) we defined above, and the other is ‘jitter’, the variations in latency that can make connections unreliable. Jitter is usually caused by network congestion, poor packet queuing and configuration errors.

Packet loss

Packet loss also impacts the QoE ‘perception’ of latency. Packet loss occurs when packets of data do not reach their intended destination. It is commonly caused by congestion and hardware issues, and tends to be more frequent over WiFi, where environmental factors and weak signal come into play. The effect of packet loss is worse for real-time services such as video, voice and gaming, and worse still for traffic that does not use a protocol such as TCP to detect and re-send dropped packets.
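The three signals can be derived from the same set of probes. A minimal sketch, using hypothetical sample values: mean latency is the average of the successful round trips, jitter is the variation between consecutive ones, and packet loss is the share of probes that never returned.

```python
# Minimal sketch: derive mean latency, jitter and packet loss from probe results.
# None marks a probe that never returned (a lost packet); values are in ms.
# The sample data is hypothetical.

probes = [21.4, 22.1, None, 20.9, 35.6, 21.7, None, 22.3]

returned = [p for p in probes if p is not None]
loss_rate = (len(probes) - len(returned)) / len(probes)
mean_latency = sum(returned) / len(returned)
# Jitter here = mean absolute difference between consecutive successful probes
# (a simplified form of the interarrival jitter used in RTP-style monitoring).
jitter = sum(abs(a - b) for a, b in zip(returned, returned[1:])) / (len(returned) - 1)

print(f"Mean latency: {mean_latency:.1f} ms")
print(f"Jitter:       {jitter:.1f} ms")
print(f"Packet loss:  {loss_rate:.0%}")
```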

Why is low latency so important now?

“All areas of business and private life rely more heavily today than ever before on digital applications. The latency-sensitivity of these applications is not only a hallmark of quality and a guarantee of commercial productivity, but also – in critical use cases – a lifeline”
—Ivo Ivanov, CEO of DE-CIX International

Recent technological innovations all tend to require lower latency. Cloud applications, mobile gaming, virtual/augmented reality, and the smart home rely on real-time monitoring and fast signal-to-action responsiveness. The growth of IoT and a world of interconnected sensors dictate that networks have a consistently low latency, below human reaction speeds.


  • Human beings: 250 milliseconds responding to a visual stimulus 
  • 4G latency: 200 milliseconds
  • 5G latency: 1 millisecond


Consider the safety implications when your car can react 250 times faster than you. At 100 km/h, a typical driver’s overall reaction time (closer to a second once perception and decision-making are included) translates into a reaction distance of around 30 m. With a 1 millisecond (1 ms) reaction time, an autonomous car can brake with a reaction distance of about 3 cm.
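A quick back-of-envelope check of those figures (reaction distance is simply speed multiplied by reaction time; the reaction times are the ones quoted above):

```python
# Back-of-envelope check: reaction distance = speed x reaction time.
speed_kmh = 100
speed_ms = speed_kmh * 1000 / 3600  # ~27.8 m/s

for label, reaction_s in [("human driver (~1 s overall)", 1.0),
                          ("human visual reflex (250 ms)", 0.250),
                          ("5G-connected vehicle (1 ms)", 0.001)]:
    distance_m = speed_ms * reaction_s
    print(f"{label:30s} -> {distance_m * 100:7.1f} cm travelled before reacting")
# ~2780 cm (about 28 m) for a one-second driver reaction, ~695 cm for the raw
# visual reflex, and ~2.8 cm at 1 ms: roughly the 30 m vs 3 cm comparison above.
```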


End-to-end latency infographic for different use cases

The relationship of latency and user experience to geography

The maximum affordable latency for a decent end-user experience with today’s general-use applications is around 65 milliseconds; however, a latency of no more than 20 milliseconds is needed to perform these daily activities with the level of performance users now expect. Translating this into distance, content and applications need to be as close to the user as possible. Geographically speaking, applications like interactive online gaming and live streaming in HD/4K need to be hosted less than 1,200 km from the user. The applications our digital future will be built on will demand much lower latency still, in the range of 1-3 milliseconds: smart IoT applications and critical real-time applications, like autonomous driving, need to be served from within 50-80 km of the user.
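These distance figures follow roughly from the propagation speed of light in fibre. A hedged sketch of the rule of thumb, assuming around 5 microseconds per kilometre one way and reserving part of the budget for routing, queuing and processing (the overhead allowances are illustrative assumptions):

```python
# Rough sketch: convert a round-trip latency budget into a maximum distance
# between user and application, assuming ~5 microseconds per km one way in
# optical fibre (~200,000 km/s in glass). The overhead allowances below are
# illustrative assumptions for routing, queuing and server processing.

FIBRE_MS_PER_KM_ONE_WAY = 0.005

def max_distance_km(latency_budget_ms: float, overhead_ms: float) -> float:
    propagation_budget = max(latency_budget_ms - overhead_ms, 0)
    return propagation_budget / (2 * FIBRE_MS_PER_KM_ONE_WAY)  # round trip

print(f"20 ms budget, 8 ms overhead   -> ~{max_distance_km(20, 8):.0f} km")    # ~1,200 km
print(f" 2 ms budget, 1.5 ms overhead -> ~{max_distance_km(2, 1.5):.0f} km")   # ~50 km
```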

How networks can reduce latency

There are a variety of ways of lowering latency. Businesses can pay for dedicated private networks and links that deliver extremely reliable and stable connections. This is also one of the few solutions that tackles performance gaps in the ‘middle mile’ of the internet (the network infrastructure that connects last-mile, local networks to high-speed network service providers).


Any service that uses the public internet backbone will run into problems of inefficient routing due to:


  • Border Gateway Protocol (BGP) routing, which has no congestion avoidance


  • Least-cost routing policies


  • Transmission Control Protocol (TCP): a blunt-tool protocol that reacts strongly to congestion and throttles throughput (see the sketch after this list)
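On the last point, the widely cited Mathis approximation relates achievable TCP throughput to round-trip time and packet loss: throughput is roughly MSS × 1.22 / (RTT × √loss). A sketch of what even modest loss does to a long-distance flow (the MSS and RTT values are illustrative assumptions):

```python
# Sketch of the Mathis approximation for TCP throughput:
#   throughput ~ (MSS * 8 * 1.22) / (RTT * sqrt(loss_rate))
# It shows why TCP "throttles" hard when a congested path drops packets.
# The MSS and RTT values are illustrative assumptions.
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    throughput_bps = (mss_bytes * 8 * 1.22) / ((rtt_ms / 1000) * math.sqrt(loss_rate))
    return throughput_bps / 1e6

MSS = 1460  # bytes, a typical Ethernet-path maximum segment size
RTT = 80    # ms, e.g. a long-distance path over the public internet
for loss in (0.0001, 0.001, 0.01):
    print(f"loss {loss:.2%}: ~{tcp_throughput_mbps(MSS, RTT, loss):.0f} Mbps ceiling")
```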

SD-WAN, latency and efficient network management


One other solution is offered by the latest breed of SD-WAN software. SD-WAN operates as a virtual overlay of the internet, testing and identifying the best routes via a feedback loop of metrics. SD-WAN can potentially limit packet loss and decrease latency by sending data over pre-approved optimal routes. MPLS does something similar, labelling traffic to ensure it is handled on a priority basis; but this service is more expensive than SD-WAN and its architecture is not suited to cloud connectivity.
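As a toy model of that feedback loop (the path names, metrics and weights are invented for illustration and do not represent any particular vendor’s algorithm), an SD-WAN controller might score each candidate underlay and steer traffic accordingly:

```python
# Toy model of SD-WAN path selection: score candidate underlay paths on
# measured latency, jitter and loss, then steer traffic to the best one.
# Path names, metrics and weights are invented for illustration only.

paths = {
    "mpls":       {"latency_ms": 28, "jitter_ms": 1.5, "loss_pct": 0.05},
    "internet-1": {"latency_ms": 22, "jitter_ms": 6.0, "loss_pct": 0.40},
    "internet-2": {"latency_ms": 35, "jitter_ms": 2.0, "loss_pct": 0.10},
}

def score(metrics: dict) -> float:
    # Lower is better; loss is weighted heavily for real-time traffic.
    return metrics["latency_ms"] + 2 * metrics["jitter_ms"] + 50 * metrics["loss_pct"]

best = min(paths, key=lambda name: score(paths[name]))
for name, metrics in paths.items():
    print(f"{name:10s} score {score(metrics):6.1f}")
print(f"Steering real-time traffic over: {best}")
```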


SD-WAN is a hybrid solution, meaning that the software overlay can route traffic over a range of networks, including MPLS, dedicated lines and the public internet. WAN management also includes a host of virtualised network tools that optimise network efficiency, including removing redundant data (deduplication), compression, and caching (where frequently used data is stored closer to the end user).


To find out more about the range of network infrastructure and SD-WAN services offered by Cambridge Management Consulting, visit our website.

5G promises ultra-low latency

Infographic of transmission distance 5G vs 4G

5G promises to lead us into a world of ultra-low latency, paving the way for robotics, IoT, autonomous cars, VR and cloud gaming. For this to become a reality, new infrastructure must be installed; this requires significant investment from governments and telecoms companies. Most countries need to install much more fibre to deal with the backhaul of data. 


During the transition, the current 4G network will need to support 5G and there will be a combination of new and old tech, patches and upgrades to masts. Edge computing will eventually move data-centres closer to users, also contributing to lower latency. It could be many years before we see the kinds of low-latency connections that have been promised. 

How 5G and 'network slicing' will end high latency

With the fifth generation of cellular data, gigabit bandwidth should become the norm, and the frame length (the time spent waiting to put bits into the channel) will be drastically reduced. 5G moves up the electromagnetic spectrum to make use of millimetre waves (mmWave), which have much greater capacity but poorer propagation characteristics. These millimetre waves can easily be blocked by a wall, or even a person or a tree. Therefore, operators will use a combination of low-, mid- and high-band spectrum to support different use cases.


The mid- to long-term solution to propagation restrictions is that 5G will require a network of small cells, as well as the cell towers to support them (NG-RAN architecture). Small cells can be located on lampposts, on the sides of buildings, and within businesses and public buildings. They will enable the ‘densification’ of networks, broadcasting high-capacity millimetre waves primarily in urban areas. Because optical fibre may not be available at all sites, wireless backhaul will be a common option for small cells.


Edge computing will further support this near-user vision. Using off-the-shelf servers, and smaller data centres closer to the cell towers, edge computing can ensure low latency and high bandwidth. 


Infrastructure requirements of 5G (infographic)

“As latency requirements get lower and lower, it becomes more and more important to bring interconnection services as close to people and businesses as possible, everywhere. Latency truly is the new currency for the exciting next generation of applications and services” 

—Ivo Ivanov, CEO of DE-CIX International

What is network slicing?

The key innovation enabling the full potential of 5G architecture to be realised is network slicing. This technology adds an extra dimension by allowing multiple logical networks to simultaneously overlay a shared physical network infrastructure. This creates end-to-end virtual networks that include both networking and storage functions. 


Operators can effectively manage diverse 5G use cases with differing throughput, latency and availability demands by ‘slicing’ network resources and tailoring them to multiple users.
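A minimal sketch of what this tailoring might look like, using the three standard 5G service categories (the numerical targets are indicative figures, not 3GPP-mandated values):

```python
# Minimal sketch: network slices as differing service-level targets carved
# out of one shared physical network. The three categories are the standard
# 5G service types; the numerical targets are indicative, not 3GPP-mandated.
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float
    min_throughput_mbps: float
    availability_pct: float

slices = [
    NetworkSlice("eMBB  (enhanced mobile broadband)",    20.0, 100.0, 99.9),
    NetworkSlice("URLLC (ultra-reliable low latency)",    1.0,   1.0, 99.999),
    NetworkSlice("mMTC  (massive machine-type comms)",  100.0,   0.1, 99.0),
]

def meets_latency_need(slice_: NetworkSlice, app_latency_need_ms: float) -> bool:
    """Would this slice meet an application's latency requirement?"""
    return slice_.max_latency_ms <= app_latency_need_ms

for s in slices:
    print(f"{s.name:40s} suits a 5 ms control loop: {meets_latency_need(s, 5.0)}")
```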

What is realistic progress for 5G in 2022?


According to the California-based company Grand View Research, the global 5G infrastructure market size, valued at $1.9bn in 2019, is projected to reach $496.6bn by 2027.


There are, however, significant costs associated with 5G roll-out, as well as complications arising from planning regulations (for small cells in the UK alone, a separate planning application has to be filed for each cell) and the need to alleviate public health fears about the technology.


There is also the persistent issue of digital equality (closing the digital divide). There is a risk the divide could widen further if 5G services are concentrated only in cities, as economics will almost certainly dictate.


The EU recently announced its Path to the Digital Decade, a concrete plan to achieve the digital transformation of society and the economy by 2030.


Read more about the Path to the Digital Decade.

“The European vision for a digital future is one where technology empowers people. So today we propose a concrete plan to achieve the digital transformation. For a future where innovation works for businesses and for our societies. We aim to set up a governance framework based on an annual cooperation mechanism to reach targets in the areas of digital skills, digital infrastructures, digitalisation of businesses and public services.” 

—Margrethe Vestager, Executive Vice President for ‘A Europe Fit for the Digital Age’

5G has been dubbed by some the next industrial revolution. If all the technologies it is intended to drive are realised within the next decade, that could certainly be the case. What is achievable in the short term, however, is less clear, and progress could be slowed by infrastructural barriers and rising costs.


As we head into 2022, there needs to be significant work to upgrade legacy systems to integrate with the rollout of 5G, and an acceleration in laying fibre-optic cable to deal with the backhaul of data from the proliferation of 5G cells.


While 5G leads the technological improvement of the network, lowering latency at the network edge also needs to be a primary goal. Operators must focus on latency as one element (albeit a key element) of a holistic strategy to improve the mobile internet experience, and measure this against a robust QoE framework.

Contributors

Thanks to Ivo Ivanov, CEO of DE-CIX International; Charles Orsel des Sagets, Managing Partner, Cambridge MC; Eric Green, Senior Partner, Cambridge MC; and Tim Passingham, Chairman, Cambridge MC, who all made contributions to this article. Special thanks to Ivo Ivanov, for his quotes.


Thanks to Karl Salter, web designer and graphic designer, for infographics.


You can find out more about Ivo Ivanov on LinkedIn and DE-CIX via their website.


Read bios for Charles Orsel des Sagets, Tim Passingham and Eric Green.

About Us

Cambridge Management Consulting is a specialist consultancy drawing on an extensive network of global talent. We are your growth catalyst, assembling a team of experts to focus on the specific challenges of your market.

 

With an emphasis on digital transformation, we add value to any business attempting to scale by combining capabilities such as marketing acceleration, digital innovation, talent acquisition and procurement. 

 

Founded in Cambridge, UK, we created a consultancy to cope specifically with the demands of a fast-changing digital world. Since then, we’ve gone international, with offices in Cambridge, London, Paris and Tel Aviv, 100 consultants in 17 countries, and clients all over the world.


Find out more about our SD-WAN and network architecture consultancy services.


Find out more about our digital transformation services and full list of capabilities.

