The severe electrical storm that struck Washington, DC and northern Virginia in the US recently brought down the physical servers that were housing some cloud-based Amazon services and information, adversely impacting some treasuries and other organisations. It took several hours for the affected sites to be restored. The event provoked an immediate flood of comment about the robustness and reliability of the cloud computing model, especially for housing sensitive and critical applications and their related databases, such as treasury management systems (TMS).
The advent of phenomena such as technology hosting, the application service provider (ASP) model, software-as-a-service (SaaS) and cloud computing in general has, in various ways, provided an opportunity for treasuries and other organisations to outsource their IT. Such outsourcing may be driven primarily by cost comparisons versus in-house alternatives, or by the perceived mitigation of a major source of operational risk. Alternatively, it may be seen as a way to compensate for a lack of in-house IT resource or expertise, or be driven by a combination of all these factors.
More and more treasury applications are now cloud hosted, and there is a general trend in the IT industry towards cloud-based solutions, primarily because the cloud is seen as a cost-effective option. A cloud deployment can be thought of as a bank of virtual servers: the cost is based on actual usage of physical processors and disks rather than on upfront capital expenditure, and it is this direct linkage between cost and usage that creates the observed cost-effectiveness benefit. Ongoing fees are often well worth paying when responsibility for IT staffing falls to someone else as well.
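The cost-and-usage linkage described above can be made concrete with some simple arithmetic. The sketch below compares an amortised in-house server against pay-per-use cloud pricing; all figures (capital cost, hourly rate, usage hours) are hypothetical illustrations, not vendor pricing.

```python
# Illustrative only: every figure below is a hypothetical assumption,
# not a quote from any cloud provider or hardware vendor.

def in_house_monthly_cost(capex, amortisation_months, monthly_opex):
    """Upfront server spend spread over its useful life, plus fixed running costs."""
    return capex / amortisation_months + monthly_opex

def cloud_monthly_cost(hours_used, rate_per_hour):
    """Usage-based pricing: pay only for the processor-hours actually consumed."""
    return hours_used * rate_per_hour

# A lightly used treasury application: roughly 6 compute-hours per business day,
# 22 business days per month.
in_house = in_house_monthly_cost(capex=36_000, amortisation_months=36, monthly_opex=800)
cloud = cloud_monthly_cost(hours_used=6 * 22, rate_per_hour=2.50)

print(f"In-house: {in_house:,.2f} per month; cloud: {cloud:,.2f} per month")
```

For an intermittently used application the usage-based model wins easily; the comparison narrows as utilisation approaches 24/7, which is one reason the evaluation must be done per workload rather than assumed.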
According to Matthew Eastwood, an analyst at the research firm IDC, commenting on a similar 2011 cloud crash that adversely affected end users, incidents such as storm-caused outages "will force a conversation in the industry". Eastwood believes that the discussion needs to centre on which data and computer operations might safely be sent off to the cloud, and which should be kept inside the corporate walls for business continuity purposes. How should treasurers view cloud hosting in the wake of the latest outage on the eastern seaboard of the US, and what lessons can be learnt?
In my opinion the core issues raised by these storm-caused outages are not ultimately cloud-specific problems: they relate to the appropriate level of robustness and redundancy required for the technological support of a critical financial function, such as treasury. The business risk evaluation process is ultimately the same, regardless of whether an in-house, hosted or cloud-based technology solution is being analysed. And of course cost is a crucial factor: what is it worth paying to cut back the probability of system outage to an insignificant level? Given the appropriate investment, the cloud can be secured as effectively as any current alternative.
The impact of a serious technology outage on treasury operations is very much time-related: notably, a failure as payment cut-off time approaches is likely to have significant cost consequences, and there are likely to be other critical points in a treasury’s workflow at which a system failure would cause immediate problems. The way to minimise such risks is to invest in sufficient redundancy and system replication so that a data centre failure simply means the application is switched to another site. This is just as relevant to a cloud solution as to any other – but the treasurer who has ignored this possibility and signed up for a cloud solution without looking into the disaster management realities is vulnerable to a nasty shock. Selecting cloud hosting on the sole basis of price is no smarter than awarding a bank mandate on the basis of transaction cost alone.
Users of cloud-based treasury services should understand the service levels that their supplier guarantees, and they should understand enough about the technology platform to evaluate the risk that failure of a particular physical data centre poses to their continuity of service. In looking at alternatives, the extremely high levels of ‘Six Sigma’ dependability built into big banking technology are beyond the budgets of most corporate treasury departments, but treasurers using the cloud should satisfy themselves that they can confidently expect continuity of service in almost all situations.
Avoiding dependency on a single site data centre is an obvious necessity. Assurance of swift service restoration is another must; ‘swift’ can vary from real time to sufficiently rapid to avoid significant problems.
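The value of avoiding a single-site dependency can be shown with basic availability arithmetic. Assuming (hypothetically) that each site is up 99.5% of the time and that site failures are independent, two replicated sites fail together only when both are down at once:

```python
# Hypothetical availability figures, for illustration only.
HOURS_PER_YEAR = 8760

single_site = 0.995                       # one data centre: ~0.5% chance of being down
dual_site = 1 - (1 - single_site) ** 2    # down only if BOTH independent sites fail

downtime_single = (1 - single_site) * HOURS_PER_YEAR
downtime_dual = (1 - dual_site) * HOURS_PER_YEAR

print(f"Single site: {single_site:.4f} available, ~{downtime_single:.1f} h/year down")
print(f"Dual site:   {dual_site:.6f} available, ~{downtime_dual:.2f} h/year down")
```

On these assumed numbers, replication cuts expected downtime from roughly 44 hours a year to around a quarter of an hour. The independence assumption is the catch: the storm outage showed that sites in the same region can fail together, which is why geographic separation matters as much as the second site itself.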
If I have given the impression that users of cloud-based treasury technology should invest in suitable levels of service protection – and thus give back some of the cost savings achieved by taking the cloud option – that would be a fair conclusion. The risk evaluation will differ in detail between organisations, but the cost of mitigating the risk should be seen as an insurance premium that protects the continuity of treasury operations.
If cloud deployment of treasury technology is properly evaluated and implemented, it provides a perfectly viable alternative. It’s simply a case of understanding the real risks, and managing them.