Integration and Divestment: It’s All About the Data!

As part of a team, I was recently involved in the re-launch of a major UK bank onto British high streets. It was the personal culmination of 20 months’ hard graft on a complex divestment programme, following an equally challenging 18 months on the preceding integration programme. Both periods provided insights into the challenges of large-scale programmes and the opportunity to see things through the lens of both IT and the business. Stepping back, there are a number of important issues and challenges to highlight for firms looking to do the same, and one moment on the project in particular is worth revisiting.

The divestment and integration were huge undertakings, both in terms of the scale of hardware and software change and in terms of data. From a hardware and software perspective, the main challenge centres on gaining a clear understanding of what the target solution should be. While this might seem a simple task at first glance, for a financial services organisation IT change is never simple. As business functions grow and develop, as strategic decisions are made, and as technology itself transitions to a legacy state, financial services platforms evolve over time into highly complex organisms.

As a result, the more technical elements of large-scale programmes – be it ‘scale and remediate’ or ‘partition, clone and build’ of the target systems – are highly complex activities that require clear design and careful delivery. Invariably this requires heavy lifting across all IT activity, from development through to testing – system testing, system integration testing (SIT), non-functional testing and so on. As this portion of the programme involves physical change, it would naturally appear to be the most complicated activity. However, personal experience suggests the reality can be different.

The integration programme began in a small meeting room in the Moorgate district of the City of London. It commenced with a debate about resources with, among others, the programme’s overarching business test lead, who stressed the complexities of the data. “It’s all about the data,” he repeated adamantly. It’s fair to say that, at the time, the majority of the audience were more concerned with the delivery of infrastructure and code. But on reflection, were it possible to turn back time, I’d stand up in that meeting and declare: “I’m with the business test lead… it’s all about the data!”

Abstract versus Tangible

Part of the difficulty when it comes to testing data is that it feels somewhat abstract. As such, it is hard to quantify and qualify compared with more tangible deliverables such as mainframes, servers and code. You can count infrastructure and you can measure code, but it is more difficult to articulate data as a concept.

Nonetheless, data shapes an organisation by underpinning its strategic thinking and decision making. If an IT system were the organs, then data would be the blood pumping through them. So, when faced with a large-scale change programme involving a heavy bout of data testing, what should you consider?

  1. The Requirement: While this sounds obvious, it proves much harder to quantify than some realise. ‘How much data do you need?’, ‘How many accounts?’ or ‘How much of x, y and z?’ are common questions. Test professionals will cite different approaches to pinning the requirement down – such as boundary value analysis (see the first sketch after this list) – but be aware that some functions, such as risk and finance, will invariably require large-volume data sets to validate macro-level objectives such as distributions, strategy analysis and modelling. Always consider macro-level data outcomes and be clear about where sampling is, and is not, sufficient.
  2. Staging the Requirement: Invariably a data cut, or some other data staging activity, will be needed to support testing. Care and thought should be given to when the cut is taken and how it is aged, to ensure it can meet the business test outcomes. Consideration should be given to month-end and quarter-end processing, which are quite different from daily processing; the cut should mimic the timings of the event itself. Transactional activity also needs due attention so that realistic account behaviour can be synthesised (a simple aging sketch follows this list).
  3. Quality of the Data: ‘What does good look like?’ is the invariable question. Cleansing activities need to be carried out, and any data manipulation activity, such as a conversion, needs to be checked and verified thoroughly (a reconciliation sketch follows this list).
  4. Test the Outcome: Data testing needs an environment, so all the hardware and software challenges listed at the outset should already be resolved. Stable code-sets and configuration management are absolutely vital: they ensure the foundations are solid for testing to be executed. Issues with code or infrastructure will invariably make the triage of data issues extremely difficult.
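
To make the boundary value point in item 1 concrete, here is a minimal sketch in Python. The £1,000,000 balance cap and penny granularity are assumptions for illustration only, not figures from the programme.

```python
from decimal import Decimal

def boundary_values(minimum: Decimal, maximum: Decimal, step: Decimal) -> list[Decimal]:
    """Classic boundary-value set for a numeric field: just outside, on and
    just inside each limit, plus a nominal mid-range value."""
    return [
        minimum - step,           # just below the lower bound (expect rejection)
        minimum,                  # on the lower bound
        minimum + step,           # just inside the lower bound
        (minimum + maximum) / 2,  # a nominal in-range value
        maximum - step,           # just inside the upper bound
        maximum,                  # on the upper bound
        maximum + step,           # just above the upper bound (expect rejection)
    ]

# Illustrative only: an assumed product rule capping a balance at £1,000,000,
# held to the penny.
for value in boundary_values(Decimal("0.00"), Decimal("1000000.00"), Decimal("0.01")):
    print(value)
```

The point of the sketch is that boundary cases define functional coverage, but they say nothing about the volumes risk and finance need for distributions and modelling – which is why both views of ‘the requirement’ matter.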
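
As a sketch of the aging step in item 2, the fragment below shifts every date field in a hypothetical cut of account records by the gap between the day the cut was taken and the day of the rehearsal. The field names are invented for illustration, and a simple day offset will not preserve month-end semantics on its own; real aging rules are usually more involved.

```python
from datetime import date

# Hypothetical date fields on an account record; real extracts will differ.
DATE_FIELDS = ("open_date", "last_statement_date", "next_charge_date")

def age_records(records, cut_taken_on: date, rehearsal_date: date):
    """Shift each date field by the gap between when the cut was taken and
    when the rehearsal runs, so scheduled events still fire during the test.
    Note: a plain day offset does not preserve month-end or quarter-end
    semantics; those usually need explicit rules on top of this."""
    offset = rehearsal_date - cut_taken_on
    for record in records:
        aged = dict(record)
        for field in DATE_FIELDS:
            if aged.get(field) is not None:
                aged[field] = aged[field] + offset
        yield aged

sample = [{
    "account_id": "A1",
    "open_date": date(2010, 5, 14),
    "last_statement_date": date(2013, 3, 31),
    "next_charge_date": date(2013, 4, 30),
}]

for rec in age_records(sample, cut_taken_on=date(2013, 4, 2), rehearsal_date=date(2013, 10, 1)):
    print(rec)
```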
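
For the conversion checks in item 3, control totals are a common starting point. The sketch below assumes two CSV extracts (pre- and post-conversion) with hypothetical account_id and balance columns, and simply reconciles row counts, account keys and a balance total; real verification would go much further.

```python
import csv
from decimal import Decimal

def control_totals(path, key_field="account_id", amount_field="balance"):
    """Simple control totals for an extract: row count, the set of account
    keys and the sum of a monetary column."""
    keys, total, rows = set(), Decimal("0"), 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows += 1
            keys.add(row[key_field])
            total += Decimal(row[amount_field])
    return rows, keys, total

def reconcile(source_path, converted_path):
    """Flag the obvious breaks: dropped or invented accounts and any drift
    in the balance control total across the conversion."""
    s_rows, s_keys, s_total = control_totals(source_path)
    c_rows, c_keys, c_total = control_totals(converted_path)
    print(f"rows: {s_rows} -> {c_rows}")
    print(f"balance total: {s_total} -> {c_total}")
    print("missing after conversion:", sorted(s_keys - c_keys))
    print("unexpected new accounts:", sorted(c_keys - s_keys))

# Usage (paths are placeholders): reconcile("source_extract.csv", "converted_extract.csv")
```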

When it comes to data in enterprise environments, there is a huge amount more you could explore, from mitigation strategies and creative approaches to the use of tools, modelling software and so forth. However, that is for another article. What is evident, having been on both sides of the fence in IT and the business, is that data testing is both absolutely vital and extremely hard. The importance of data and data testing needs to be recognised right at the outset of an integration or divestment programme in order to plan and direct efforts appropriately. At the end of the day, it really is all about the data.
