The saying that ‘nothing is more constant than change’ applies equally to project and operational situations. If you want to introduce new IT systems smoothly, you would be well advised to adopt it as a mantra and test, test and test again before putting new systems live.
Broad IT concepts, such as the governance structures or the change management approach of the IT Infrastructure Library (ITIL) publications, show what can be achieved with some forward thinking and planning. Within treasury departments, and indeed any organisation, common organisational awareness is crucial if the inevitable changes that new IT systems go through, traversing several phases from development to eventual productive deployment, are to be safely navigated.
Comprehensive treasury management system (TMS) solutions are likewise often characterised by their ability to act as effective automated payment transaction conduits, as well as their capacity to support strategic decisions: for example, how to supply a treasury department with sufficient liquidity reserves at all times, or how to provide key risk management figures and functionality.
An increasing level of integration and automation of processes affecting a corporation’s treasury activities means a higher level of business dependency on a TMS’s faultless operation. Failures can have dramatic implications. This is why changes to productive environments should be kept to a minimum and implemented only by following defined processes. Changes of a more comprehensive nature, going beyond this minimum, are mostly realised by project organisations.
Within an initial project scope, whether a minor overhaul or a new TMS implementation, changes are usually less repetitive, more innovative and of vital importance to the entire system’s operability. Typical examples include customising settings for all system components, as well as code changes for bug fixes, enhancement developments or even entire smaller patch upgrades. The integration of such changes must be thoroughly tested and documented.
While staging and testing are often used synonymously in this context, testing refers to a systematic, well-documented approach to verifying specific system functionality. Staging in the broader sense means a gradual procedure based on dedicated approvals within a deployment process. The validity of the test results depends on a data basis and functional specification identical to a reference environment, usually the ‘to-be’ production environment after go-live. Thus, in the narrow sense, staging follows the functional testing activities: it is the integrative validation of changes on a separate instance under conditions that match the target environment as closely as possible.
For debugging and traceability reasons, the entire process ideally follows a four-step approach, comprising separate development, testing and staging instances, followed by the reference environment, in this procedural sequence. While the staging instance is mainly intended for validating changes under target conditions, there are deliberate limits to how closely it can resemble the reference environment. On the one hand, outbound interfaces (for instance bank confirmations or payment files and messages) must be routed to test queues to ensure the system’s isolation. On the other hand, there are cost considerations: licences for real-time market data feeds in a non-productive environment, for example, appear disproportionate.
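The four-step sequence described above can be sketched as a simple promotion gate, here in Python. The environment names and the `Change` record are illustrative assumptions for this sketch, not features of any particular TMS:

```python
from dataclasses import dataclass, field

# Ordered environments of the four-step approach: a change may only
# move to the next instance once the current one has been signed off.
PIPELINE = ["development", "testing", "staging", "production"]

@dataclass
class Change:
    """A customising setting, bug fix or patch moving through the pipeline."""
    name: str
    approvals: list = field(default_factory=list)  # environments already signed off

    def promote(self) -> str:
        """Record sign-off for the current stage and return the next environment."""
        current = len(self.approvals)
        if current >= len(PIPELINE) - 1:
            raise RuntimeError(f"{self.name} is already in production")
        self.approvals.append(PIPELINE[current])
        return PIPELINE[current + 1]

change = Change("patch-upgrade")
print(change.promote())  # development signed off -> "testing"
print(change.promote())  # testing signed off    -> "staging"
print(change.promote())  # staging signed off    -> "production"
```

The point of the sketch is simply that each environment must be passed in order; a change cannot reach production without a recorded sign-off on every preceding instance.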
While specific settings, concerning SWIFT communication or the firewall configuration for example, deliberately distinguish the staging instance from the reference environment, measures should also be taken to systematically align the staging, development and test instances with the target. In this ‘back-staging’, the target environment is regularly copied to the preceding environments and adjusted only in respect of the specifically necessary settings. This is intended to provide continuously similar conditions to the target environment for each purpose of development, functional and integration testing.
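Back-staging can be illustrated as a copy of the target configuration with only the environment-specific settings overridden afterwards. The setting names below (interface queues, market data feeds) are hypothetical examples chosen for this sketch:

```python
import copy

# Hypothetical production configuration, regularly copied back to the
# preceding development, test and staging instances.
production_config = {
    "payment_interface": "swift.live.queue",
    "market_data_feed": "realtime",
    "accounting_export": "erp.live.endpoint",
}

# Settings that must deliberately differ from production, to keep the
# non-productive instances isolated and to avoid disproportionate costs.
OVERRIDES = {
    "staging": {"payment_interface": "swift.test.queue",
                "market_data_feed": "end-of-day-snapshot"},
    "testing": {"payment_interface": "swift.test.queue",
                "market_data_feed": "static-fixture",
                "accounting_export": "erp.test.endpoint"},
}

def back_stage(target: dict, environment: str) -> dict:
    """Copy the target configuration, then apply only the overrides
    specifically necessary for the given environment."""
    config = copy.deepcopy(target)
    config.update(OVERRIDES.get(environment, {}))
    return config

staging = back_stage(production_config, "staging")
# Outbound payments now route to a test queue; all other settings match production.
```

The design point is that the copy comes first and the overrides second, so every setting not explicitly listed stays identical to the target environment.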
According to this approach, changes must not be promoted to the target environment unless they have first been comprehensively and integratively tested, documented and approved through the staging procedure, preferably by at least one responsible senior person. What sounds like a matter of course is, in practice, subject to compromise, particularly in emergency IT installations driven by compliance demands or a system failure.
With each decision to deviate from the minimum standard, a risk assessment should be undertaken, considering the consequences a TMS failure could have: for instance, on a corporate’s ability to settle its financial transactions or on the integrity of its accounting logic. Quite frequently, it is a prior deviation from the standardised quality assurance process and staging procedure that causes a later failure, so corporates would be well advised to stick to the procedure.