The Road to Big Data

Big Data is inescapable in today’s financial services (FS) industry, as ever-increasing volumes of customer and transactional information flow through the organisation. The FS sector is one of the most data-driven industries, yet various analysts estimate that between 80% and 90% of the data held in a bank’s datacentre – call logs, weblogs, emails and documents among it – is never analysed. In addition, relatively new sources of unstructured data continuously spew ‘digital exhaust’ and contribute to what is known as ‘Big Data’: blogs, online news, weather feeds, Twitter, YouTube and Facebook, to name just a few.

There is a potential goldmine of information in both types of data source. Combining data from traditional sources with data from these newer, less familiar sources presents financial services institutions (FSIs) with new opportunities: not only to comply with stringent regulatory requirements, but also to derive valuable insight for business advantage. The cost of missing out is high; in a recent Oracle survey, C-level FSI executives estimated that their companies were forgoing an average of 12% of additional annual revenue because they could not draw actionable insight from the data their organisations already collect.

If the first tenet of Big Data is the data itself, the second is the industrialisation and commercialisation of a slew of technologies such as Hadoop, Hive, Sqoop, Flume, Pig and Mahout, which enable businesses to sift through massive quantities of unstructured and structured content. The potential benefits are tremendous, from the ability to respond to regulatory pressures faster, more efficiently and more cheaply, to the creation of targeted products, services and marketing campaigns. At a consumer level, utilising Big Data can improve loyalty. For example, should a customer make a large purchase and reach their credit limit, they may migrate to a competing card provider for their next purchase. However, if the bank picks up a social media status update in which the customer announces a planned purchase, it can prepare and pre-approve a loan for that purchase, decreasing the likelihood that the customer will turn to a competing bank.

Fully grasping Big Data is a big undertaking, best achieved by assessing the organisation’s current position and prioritising which challenge should be tackled first to meet the stated goal. There are three steps that FSIs can take to begin to manage and leverage Big Data successfully:

Step 1: Targeting a specific problem to solve

Depending on the organisation, the following areas are strong candidates for taking advantage of Big Data:

  • Next Best Offer: Banks can use predictive analytics on a combination of data to create a series of targeted offers for customers, and make these offers available in real time at the next point of customer interaction.
  • Pricing Management: Banks, capital markets institutions and insurance firms can use information from both types of data source to price products for individual customers; taking into account risk, capital, cost allocations, transfer pricing and a multitude of additional dimensions.
  • Payments Analytics: Banks can offer value-added services to their merchant customers by analysing payments cleared by them and alerting them of opportunities if they see certain patterns emerge.
  • Fraud, Anti-Money Laundering (AML), Trader and Broker Compliance: All categories of fraud detection suffer from over-zealous software that generates a high number of false positives, creating a significant operational burden because each alert must be analysed manually. Tuning the software to reduce the number of alerts produces the opposite problem, with real fraud going unreported. Combining seemingly unrelated data from both types of source offers the potential to catch fraudulent activity earlier than current methods allow. In the case of internal fraud monitoring, trader and broker compliance software can combine trading activity with additional data points from sources such as social media, short message service (SMS) and email, and build graph analyses from them to detect patterns that traditional tools cannot.
  • Reputational Risk: All institutions worry about their reputation and getting feedback on newly-adopted policies and newly-launched products. Most of the ‘noise’ around a brand comes from new data sources, and Big Data technologies can be very effective at quickly gathering this information for analysis.
  • Credit Risk: Credit risk valuations are very computationally intensive. Thousands of risk factors need to be modelled, with each risk factor taking thousands of stochastic paths. This problem is traditionally solved on very large grids of compute nodes; Big Data technologies offer a significantly cheaper alternative, offloading these computations onto large numbers of loosely-coupled Hadoop nodes.
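
Why does this workload map so naturally onto loosely-coupled nodes? Because each simulation path is independent of every other, the work is embarrassingly parallel: any subset of paths can run on any node and the results merged afterwards. The sketch below illustrates the idea with a toy single-step default model; the exposures and default probabilities are hypothetical, and a production system would use far richer stochastic models, but the parallel structure is the same.

```python
import random
import statistics

def simulate_portfolio_loss(exposures, default_probs, n_paths=10_000, seed=42):
    """Toy Monte Carlo credit-loss simulation. Each path independently
    samples defaults for every obligor, so paths can be sharded across
    loosely-coupled compute nodes (e.g. as map tasks) and merged later."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_paths):
        # An obligor defaults on this path with its given probability;
        # the path's loss is the sum of defaulted exposures.
        loss = sum(e for e, p in zip(exposures, default_probs) if rng.random() < p)
        losses.append(loss)
    return losses

exposures = [100.0, 250.0, 50.0]    # hypothetical exposures at default
default_probs = [0.02, 0.01, 0.05]  # hypothetical one-year default probabilities

losses = simulate_portfolio_loss(exposures, default_probs)
expected_loss = statistics.mean(losses)
var_99 = sorted(losses)[int(0.99 * len(losses))]  # empirical 99% value-at-risk
```

Splitting `n_paths` across many workers and pooling the per-worker loss lists gives the same distribution, which is why a cluster of commodity Hadoop nodes can substitute for a dedicated compute grid here.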

Step 2: Deciding on a Data Strategy

The essence of Big Data is bringing disparate sources of data together – some structured, some not, some persistent, some streaming – all at the right instant of time. Unlike traditional data management, where the structure of the data is fixed when it is written – commonly known as ‘schema on write’ – Big Data defers the application of metadata to the time of consumption, or ‘schema on read’. This creates a new series of challenges: determining not only which data to persist and where, but also how to locate the persisted data when it is needed, all in real time.
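
The contrast between the two approaches can be sketched in a few lines. This is a minimal illustration, not any particular product’s API: the schema, field names and payloads are invented for the example. Under schema on write, a record that violates the schema is rejected at insert time; under schema on read, raw payloads are stored as-is and structure is imposed only when a query interprets them.

```python
import json

def write_with_schema(store, record):
    """Schema on write: enforce a fixed structure when the record is stored."""
    schema = {"account_id": str, "amount": float}  # hypothetical schema
    for field, ftype in schema.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"schema violation on {field!r}")
    store.append(record)

def read_with_schema(raw_store, field):
    """Schema on read: interpret each raw payload at consumption time,
    tolerating payloads that do not parse or lack the requested field."""
    for raw in raw_store:
        try:
            yield json.loads(raw)[field]
        except (json.JSONDecodeError, KeyError):
            continue  # unstructured or incomplete data is skipped, not rejected

structured = []
write_with_schema(structured, {"account_id": "A-1", "amount": 99.5})

raw = ['{"account_id": "A-2", "amount": 12.0}', 'not json at all', '{"tweet": "..."}']
amounts = list(read_with_schema(raw, "amount"))  # only the parseable record yields a value
```

The trade-off is visible in the reader: schema on read accepts everything up front, so the cost of cleansing and interpretation is paid, repeatedly, at query time.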

It is critical to identify sources of structured data within an institution, and to unify these sources. Their metadata is well-known and bringing this data into a unified data model is a good first step. Identifying additional data needs, based on the use cases identified above in step 1, will bring this second source of data closer to the point of realisation. Figuring out where to persist this data will depend on the use case.

Step 3: Applications: Build or Buy?

For the identified use cases, applications can be built in-house or bought. A significant number of vendors supply packages that can generate Next Best Offer recommendations or identify fraud. The ones that stand out are those that seamlessly merge the traditional world of well-known data, analytics and visualisation with the new world of seemingly innumerable data sources, utilising Big Data technologies to generate new insights.

Ultimately, Big Data is here to stay. Those FSIs that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, are well-positioned to make the most of it.
