In response to market conditions over the past few years, executive management must stay on top of debt and liquidity issues. As a result, treasury professionals have found that their reporting roles are becoming more important than ever before; information that was once supplied on a quarterly basis must now be available daily. This has forced most treasury departments to restructure their delivery methods for reports and opt for quicker, more readily available alternatives. One of the best ways that our office has found to deal with this need is to move away from the traditional spreadsheet reporting methods relied on in the past and toward a more adaptable model based around a specialised reporting database, online analytical processing (OLAP) analysis tools and web-based delivery methods. Doing this has enhanced the information we gather and enables our department to adapt its reporting focus in a matter of minutes rather than hours, as well as allowing us to issue ‘dashboard’ style treasury reports on a daily basis. It also enables us to slice, dice and analyse our data in much more sophisticated ways than in the past.
Previously, our treasury reports were theoretically pulled directly out of our enterprise resource planning (ERP) system into Excel. This may sound straightforward but it was actually a very cumbersome process for several reasons. In common with many companies in our industry (Prologis is a real estate investment trust (REIT)), we manage and use an unusually large number of bank accounts and other treasury products, many with a specialised or restricted purpose. We therefore needed our IT department to build and modify very complex, specialised queries so as to categorise and present all this information correctly. This could rarely be accomplished with the sort of turnaround times that we needed. The data tables in the ERP system had more of an operational focus and were not really designed with executive reporting in mind, and so didn’t highlight the data attributes that are important to us for reporting purposes. For reasons of system security and performance, we in treasury could not access ERP data directly and so we were reliant on IT professionals who – rightly – were mainly focused on daily operational matters (ensuring that payment files were delivered to the bank on time, checking that cash journals had been created correctly and so forth) and who often lacked the in-depth understanding of treasury strategy and capital markets that is necessary to design the types of report that would be useful to senior management. This led to a time-consuming, iterative process of report generation that could be frustrating for IT and treasury alike. It could take months to achieve results, and even then the structure and format of the reports were often not suitable for senior executives and, therefore, required a lot of ‘off-line’ modification in Excel.
Another problem related to our international operations. Prologis has regional treasury centres in Europe and Japan that use the same ERP platform as our North America operations, but on separate, decentralised databases. Because of this structure, the queries designed by IT could only extract data related to the North American business. We were therefore reliant upon our treasury colleagues in Europe and Japan to send us spreadsheets with data to be incorporated into our global reports. This approach worked reasonably well when we only had to issue reports on a monthly or quarterly basis, but as we faced increasing demand for daily information, it became much too cumbersome and time-consuming, both for the corporate treasury function and for the regional treasury centres.
Finally, there were some types of data, for example information relating to interest rates, debt maturities and security, which were very important to our treasury reports, but that were not directly captured in the ERP system. Like the international data, this information had to be manually grafted onto the reports at the end, rather than pulled in directly from a database.
These difficulties put us in a situation where we were essentially ‘hand-crafting’ our treasury reports from scratch each time they needed to be refreshed. As well as being time consuming and inefficient, this process was very inflexible. If there was an error in the underlying data, or if senior management wanted to see the data displayed in a different way, we would often have to reconstruct the entire report. This situation also meant that it could be difficult and time-consuming to create an audit trail between the information on a report and the underlying data source.
Taking Control of the Treasury Reporting Process
To solve these problems, we decided that we needed to take direct control of our treasury reporting process. Essentially there was a gap we needed to bridge between the people in treasury who understood the data, and the underlying sources of that data. While IT remains critical to our treasury operations, including payment execution, daily cash positioning and treasury accounting, treasury has largely taken direct control over the reporting ‘layer’ of our technology stack. This has given us the fine-grained control over report content that we need, the ability to turn around changes in our reports quickly and efficiently, and (for our company) unprecedented analytical capabilities. The core of this new process is a treasury reporting database that contains a range of key treasury information, including bank accounts and balances, line of credit use and cost, debt and exchange rates, all on a global basis and updated daily. With this approach, we have much more control over our data; our only reliance on the ERP system is a daily ‘information dump’ into the treasury reporting database that uses a very simple query, which is easy to maintain. Once we have pulled this information from the ERP system, we have full control over how it gets aggregated, analysed and presented for reporting purposes. We have the ability to custom-build our own queries, either directly from the database or using OLAP tools as described below. Now, if we need a report containing only a subset of the available fields, we can write a query against the reporting database to pull the information together. If the report criteria change, we can quickly rebuild our query to gather different information. This change from fixed ERP queries to flexible queries against our reporting database has saved us time and made reporting more efficient. It also creates a clear and easily reproducible audit trail between the report and the source data.
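The pattern described above can be sketched in a few lines. The example below uses Python's built-in sqlite3 as a stand-in for a reporting database; the table name, columns and figures are all invented for illustration and are not the actual Prologis schema or data.

```python
import sqlite3

# Hypothetical reporting table fed by a daily 'information dump' from the ERP.
# All names and figures here are illustrative, not a real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE balances (
        as_of_date TEXT, region TEXT, entity TEXT,
        currency TEXT, bank TEXT, available_cash REAL
    )
""")
conn.executemany(
    "INSERT INTO balances VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("2011-03-01", "North America", "Fund A", "USD", "Bank 1", 1_500_000.0),
        ("2011-03-01", "Europe",        "Fund B", "EUR", "Bank 2",   750_000.0),
        ("2011-03-01", "Japan",         "Fund C", "JPY", "Bank 3", 90_000_000.0),
    ],
)

# A report that needs a different cut of the data is just a new SELECT,
# not a change request to IT: here, cash by region for one day.
rows = conn.execute(
    "SELECT region, SUM(available_cash) FROM balances "
    "WHERE as_of_date = ? GROUP BY region ORDER BY region",
    ("2011-03-01",),
).fetchall()
for region, total in rows:
    print(region, total)
```

Changing the report criteria means editing the SELECT clause rather than rebuilding a specialised ERP query, which is the flexibility gain described above.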
The next step we undertook was to involve a data structure known as an OLAP Cube. An OLAP Cube is a very specialised type of non-relational data source intended primarily for reporting and analytical purposes. It has the capability to aggregate, evaluate and present data from a variety of different perspectives. Many readers will be familiar with pivot tables in Excel; an OLAP Cube is essentially a large and complex pivot table that is stored permanently as a stand-alone data structure. The Cube can then be queried, either on a spreadsheet or using specialised software (we use the ProClarity product). By using an OLAP Cube, we are able to ‘dimension’ our data; in other words, we can slice and dice it into categories that are useful for reporting or analytical purposes. For us, these categories include geographical region, legal entity, currency and institution/bank. For example, we might be asked what the available cash was on a certain day in a certain region, for a particular division and held in a certain currency. Using our old reporting approach, this type of ad hoc question would have been very difficult to answer without a lot of time, e-mails and secondary analysis. Now, using a quick pivot table query against one of our Cubes, we can extract this information in seconds and send it to management.
Now suppose we want to know a weekly history of available cash in that same region. It would take a minute to pull this information from the pivot table already built. Without the Cube, a new spreadsheet would need to be constructed and the analyst would need to research the amounts to populate the sheet one day at a time. The information in our Cubes has a full history so it can be extracted in any combination of dimensions within a few minutes without including other, unnecessary information. The Cube structure also enables the information to be easily graphed and charted for use in presentations and other documents.
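The dimensioned slicing and history queries described in the last two paragraphs can be illustrated with a toy fact table in plain Python. The field names, funds and figures below are invented for illustration; a real OLAP Cube would also pre-aggregate the data rather than scan rows, but the slice-by-dimensions idea is the same.

```python
# Toy fact table: each row is one balance observation with its dimensions.
# Dates, regions, currencies and amounts are all invented.
facts = [
    # (date, region, currency, available_cash)
    ("2011-02-21", "Europe", "EUR",    450_000.0),
    ("2011-02-21", "Europe", "GBP",    200_000.0),
    ("2011-02-28", "Europe", "EUR",    500_000.0),
    ("2011-02-28", "Japan",  "JPY", 90_000_000.0),
    ("2011-03-07", "Europe", "EUR",    550_000.0),
]

def slice_cash(facts, **filters):
    """Sum available cash over rows matching the given dimension values."""
    keys = ("date", "region", "currency")
    return sum(
        row[-1]
        for row in facts
        if all(dict(zip(keys, row)).get(k) == v for k, v in filters.items())
    )

# Ad hoc question: Europe cash, in EUR, on 28 February.
print(slice_cash(facts, date="2011-02-28", region="Europe", currency="EUR"))

# History for the same region: one filter per date, no new spreadsheet.
dates = sorted({row[0] for row in facts})
series = {d: slice_cash(facts, date=d, region="Europe") for d in dates}
print(series)
```

Because the full history sits in one structure, the weekly series is just a repeated slice over the date dimension, which is why the cube answers in minutes what previously required rebuilding a spreadsheet one day at a time.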
Our final change was to take the reports that we were running and post them out on our treasury intranet site. We are no longer forced to e-mail separate reports to separate people. We simply post all of the reports on one site on a daily basis and anyone who needs to access the information can do so whenever they want. We control access to the reports by restricting who can access the intranet site; this actually makes the reports more secure because, under the old approach, we had no control over how e-mails were forwarded. Users, by bookmarking the relevant page on the intranet site, can always be sure that they are looking at the most up-to-date version of a report. We have also included a full archive of reports on the site, so we no longer have to e-mail missed reports on request; instead they can be directly accessed by the end user. When a report has been refreshed, we can now remind users by sending out a link to the report rather than an e-mail attachment. This is a big improvement; the size of some of the old reports was so large that they would clog up users’ e-mail boxes and many recipients would delete them as soon as they had read them – resulting in the loss of any prior day reports that they might have needed. Recently, we have launched an online daily treasury dashboard that leverages all the various improvements described above. The dashboard has a portal-style interface with information about a range of treasury activities (e.g., cash balances, liquidity, debt maturities and hedge portfolio) as well as capital markets data (e.g., interest rates, exchange rates and stock and bond prices). Users can click links on the portal to drill down into successive levels of detail about each of the subject matter areas.
We are starting to design more reports for our internal departmental use in addition to our external requirements. We have recently started to load monthly bank fee statements into our database. This allows us to monitor fees on our accounts in aggregate and also highlight exceptional and unusual charges at an individual bank account level. Most importantly, we are not locked into a single reporting model anymore or forced to try and predict the business’ reporting needs months or years in advance. The agile reporting platform we now have allows for continuous adaptation and evolution in response to changing business conditions and the changing needs and demands of senior management.