Jenna Goldstein
When organisations embark on artificial intelligence (AI) transformation, the focus is often on the new: new capabilities, new efficiencies, and new ways of working. Yet, the responsible management of legacy systems and processes is just as critical to long-term success. Far from a technical afterthought, decommissioning is a strategic imperative that underpins business continuity, operational resilience, and the realisation of AI’s promised benefits.
In the rush to implement cutting-edge AI solutions, it is all too easy to overlook the need to retire outdated systems and processes. This oversight can lead to duplication, inefficiency, and increased risk. Many organisations find themselves running old and new operations in parallel for far longer than anticipated, draining resources and complicating business activity.
There is also a human dimension to this challenge. As AI automates tasks previously performed by people, questions arise about workforce planning, skills redeployment, and the long-term continuity of organisational knowledge. The temptation to focus solely on the shiny new technology must be resisted; a holistic approach is required.
At Berkeley’s ‘AI: beyond the pilot’ panel event, a senior AI leader shared his view on the risk of being overly optimistic about AI capabilities.
“You have optimism bias in the system, where AI is being bought by people who have a little bit of knowledge but don’t necessarily understand the associated risks,” he said.
“As you would with any other tool you’re implementing, you need to look at the downsides as well. You need to understand the contestability and operational resilience of your systems. If you’re using AI, you need to assess what’s going to happen if it stops working and how to recover from that.”
This is a crucial point. AI systems, like any other technology, can fail. They may be disrupted by data quality issues, integration problems, or changes in business requirements. Without a clear plan for business continuity, organisations risk significant operational disruption if an AI system goes offline or underperforms.
Business continuity must be at the heart of any decommissioning strategy. This means not switching off legacy systems until there is full confidence in the replacement.
Another panellist, a chief digital officer for a global consumer products company, shared their experience of this issue. “We've simplified about 20% of our local applications across different markets and it's been a huge undertaking. Part of it was ensuring that we had business continuity throughout, not stopping something until the confidence was there on the replacement.”
This approach requires rigorous testing, phased rollouts, and contingency planning, with a controlled acceptance into service of new operations and retirement of the old.
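The phased-rollout pattern described above can be sketched in code. The following is a minimal, illustrative sketch only — the function and parameter names (`make_phased_router`, `rollout_fraction`, and the system callables) are hypothetical and not drawn from the article — showing one way to keep the legacy path live as the default, route a growing slice of traffic to the new AI system, and fall back to legacy on failure until confidence in the replacement is established.

```python
import random


def make_phased_router(new_system, legacy_system, rollout_fraction=0.1):
    """Build a request router for a controlled acceptance into service.

    A configurable fraction of requests is routed to the new AI system;
    everything else, plus any request the new system fails on, goes to
    the legacy system. Names here are illustrative, not prescriptive.
    """
    def route(request):
        if random.random() < rollout_fraction:
            try:
                # Phased-rollout slice: try the replacement system first.
                return new_system(request)
            except Exception:
                # Contingency plan: fall back to the legacy path,
                # so an AI outage does not interrupt the service.
                return legacy_system(request)
        # Legacy remains the default until confidence is established.
        return legacy_system(request)

    return route
```

In this sketch, `rollout_fraction` would be raised gradually as testing builds confidence, and the legacy system would be retired only after the new system has run at full traffic for an agreed stable period — mirroring the principle of not stopping something until confidence in the replacement is there.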
It also demands close collaboration between IT, business units, and support functions such as risk management and compliance. Only by working together can organisations ensure that critical operations are maintained throughout the transition.
Decommissioning is not just a technical exercise; it has significant financial and operational implications. Maintaining legacy systems alongside new AI solutions can be costly, both in terms of direct expenses and the opportunity cost of delayed transformation. Conversely, retiring systems too quickly can expose the organisation to operational risks and potential service interruptions.
There is also the question of “firing” AI agents and retiring digital assets. The aforementioned chief digital officer said, “If I turn off an agent that I've built a business case around, what does that mean and how do I prevent firing an agent that equals a write-off in the future? … We're looking at total cost of ownership, and the impact of creating AI as an asset for us for the future.”
As AI becomes more embedded in business processes, organisations must consider the total cost of ownership, including the costs of ongoing maintenance, upgrades, and eventual decommissioning. This holistic view is essential for effective budgeting and risk management.
Putting suitable controls in place from the outset can help organisations manage these risks. Guardrails to control the proliferation of AI agents, together with clear product ownership and accountability for their creation and maintenance, are appropriate measures to mitigate potential future problems.
Effective decommissioning requires robust risk management and governance frameworks. These should include:
Comprehensive risk assessments to identify potential points of failure and their impact on operations
Clear decision-making processes for when and how to retire legacy systems
Regular reviews and audits to ensure that decommissioning plans remain aligned with business objectives and regulatory requirements
Transparent communication with stakeholders, including employees, customers, and partners
By embedding these practices into their AI transformation programmes, organisations can mitigate risks and build resilience.
Leaders play a pivotal role in ensuring that decommissioning and business continuity are not neglected. They must set the tone from the top, allocate resources, and hold teams accountable for delivering on both the promise of AI and the practicalities of transition. This includes fostering a culture of realism, balancing ambition with caution and innovation with operational discipline.
By planning for the retirement of legacy systems, ensuring operational resilience, and managing the human and financial dimensions of change, organisations can unlock the full value of AI while safeguarding their core operations.