It all started two decades ago in the early Nineties, when financial planning and analysis (FP&A) systems, enterprise resource planning (ERP) and centralised corporate data warehouses were all the rage. The concept was simple: companies needed all of their data in one place for a “single version of the truth”. The concept was easier to outline on a whiteboard than to implement, though, especially when organisations were transforming their business through events such as growth by acquisition.
This was the era when finance departments came closer to becoming the organisation’s technical gurus than most would have preferred, although some people enjoyed it enough to make it their career. They were the brave heroes, sitting at their computers deciphering the error messages that popped up during long financial consolidation jobs. To the credit of these hard-working people and the new technologies, their work delivered a tremendous boost in productivity.
The Big Data Era
Big Data is the next generation of information management and business analytics, poised to deliver top-line revenue growth cost-efficiently for enterprises. The greatest part of this phenomenon is the rapid pace of analytics innovation and change. Where we are today is not where we’ll be in just two years’ time, and definitely not where we’ll be in the next decade.
This new age didn’t suddenly emerge and is no overnight phenomenon. It has been coming for a while, developing deep roots and many branches. In fact, if you speak with most information industry veterans, Big Data has been around for decades for those firms that have been handling tons of transactional data over the years, even dating back to the mainframe era.
The reasons for this new age are varied and complex. So what’s different? As my co-authors and I defined in our book ‘Big Data, Big Analytics’, here are three examples of what’s new:
- Computing perfect storm: Big Data analytics are the natural result of four major global trends: Moore’s Law, which basically says that technology always gets cheaper; mobile computing, represented by the smartphone or tablet in your hand; social networking in the form of Facebook, Foursquare, Pinterest etc.; and cloud computing, where you no longer have to own hardware or software, as you can rent or lease someone else’s.
- Data perfect storm: Volumes of transactional information have been around for decades at most big firms, but the floodgates have now opened: data is arriving with unprecedented volume, velocity and variety – the ‘three Vs’. This ‘perfect storm’ makes analysing and processing that data extremely complex and cumbersome with current information management and analytics technologies and practices.
- Convergence perfect storm: A further perfect storm is also underway. Its elements – traditional information management and analytics software and hardware; open-source technology that is free and non-proprietary; and commodity hardware (cheaper alternatives to high-end kit) – are merging to create new options for IT and business executives to address Big Data analytics.
Traditionally, data, especially operational data, is ‘structured’: it is entered into a database with a predefined type, for example character, numeric or date. Over the past two decades, data has increasingly become ‘unstructured’ as the sources of data have proliferated beyond operational applications. Unstructured data is basically information that does not have a predefined data model or does not fit well into a relational database. Unstructured information is typically text-heavy, but may contain data such as dates, numbers and facts as well.
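As a minimal illustration of the distinction (the record and note below are invented examples): a structured record fits a predefined schema and can be queried directly, whereas the facts buried in free text have to be parsed out.

```python
import re

# A structured record: each field has a predefined type and maps to a table column.
structured_order = {"order_id": 1001, "amount": 249.99, "currency": "GBP"}

# Unstructured data: free text with no predefined data model, though it still
# contains dates, numbers and facts.
note = "Customer called on 2013-05-14 complaining that order 1001 (249.99 GBP) arrived late."

# Extracting the embedded facts requires parsing rather than a simple query.
date = re.search(r"\d{4}-\d{2}-\d{2}", note).group()
amount = float(re.search(r"\d+\.\d{2}", note).group())

print(date, amount)  # 2013-05-14 249.99
```

Real text mining is of course far messier than a pair of regular expressions, but the contrast holds: the structured record is ready for analysis, the note is not.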
The Big Data trend is to use tools and techniques that analyse information you already have, such as financial planning and analysis (FP&A) and enterprise resource planning (ERP) data, and combine it with external sources such as social media, demographic data, website trends, spending insights and other information. For example, what about using social data for sentiment analysis to assess the reputation of a firm you are acquiring or divesting? Or perhaps it’s mining blogs and customer service comments to figure out whether there is a pricing problem with one of your product lines.
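A toy sketch of the sentiment idea, assuming a simple word-list approach rather than any particular product or library; the word lists and posts are invented:

```python
# Lexicon-based sentiment scoring over social media posts about a firm.
# Counting positive minus negative words is the crudest possible approach,
# but it shows the shape of the analysis.
POSITIVE = {"great", "reliable", "love", "recommend"}
NEGATIVE = {"overpriced", "slow", "avoid", "disappointed"}

def sentiment(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Great service, would recommend",
    "Totally overpriced and slow support, avoid",
]
scores = [sentiment(p) for p in posts]
print(scores)  # [2, -3]
```

Aggregating such scores over thousands of posts, and tracking them over time, is the sort of reputation signal the acquisition example above is after.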
This movement is exciting, but the same old reality applies – preparing the information for analytics is the hardest part. Data cleansing is the foundation of a successful analytics project, and it becomes even more difficult when it comes to predictive analytics; that is, forward-looking predictions rather than rear-view-mirror reporting.
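A minimal sketch of what a cleansing pass might look like, with invented records and an assumed policy of dropping incomplete rows (one of several possible choices):

```python
# Toy cleansing pass: normalise names, parse amounts, drop duplicates
# and records with missing values before any analysis runs.
raw = [
    {"customer": " Acme Ltd ", "amount": "1,200.50"},
    {"customer": "acme ltd", "amount": "1,200.50"},   # duplicate after normalisation
    {"customer": "Globex", "amount": None},           # missing value
]

def clean(records):
    seen, out = set(), []
    for r in records:
        if r["amount"] is None:
            continue  # drop incomplete records (a deliberate policy choice)
        name = r["customer"].strip().lower()
        key = (name, r["amount"])
        if key in seen:
            continue  # drop duplicates that only differ in formatting
        seen.add(key)
        out.append({"customer": name,
                    "amount": float(r["amount"].replace(",", ""))})
    return out

cleaned = clean(raw)
print(cleaned)  # [{'customer': 'acme ltd', 'amount': 1200.5}]
```

Even this trivial example forces decisions – drop or impute missing values, which spelling of a name to keep – and those decisions multiply quickly at real-world scale.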
Success and Data Scientists
Aside from IT-centric data cleansing tools and techniques, there are many technologies and approaches that are helping professionals manage this challenge. Sometimes that comes in the form of a service, such as a firm that serves up clean and organised social media data insights through a user-friendly interface. For a more hands-on approach, there are visualisation tools to assess the information both quantitatively and visually. This is helpful when you are trying to find out what information, both structured and unstructured, you have to support a particular business problem. How much information is there? Are there missing variables? What’s the best way to extract this information for analysis?
What is the ‘secret sauce’? Over the past decade, companies like Facebook, Google, LinkedIn and eBay have built treasured businesses on the skills of a new breed of data scientist. These individuals are breaking traditional barriers by leveraging new technology and approaches to capture and analyse the data that drives their business. Data scientists create value by combining deep maths, science and computer science backgrounds to address specific business problems. They have the skills to build models that represent the data and its patterns.
Ultimately, it all comes down to focus and alignment. The best of intentions can go awry if the goals of the data scientists and the end business consumers are not aligned. This is even more critical given the highly iterative nature of analytics, which demands that the producers and consumers of analysis work closely together on a continuous basis. It’s important to have the right skills in the IT roles, and business people who actually understand the data and how it should be applied.
Leaders need to define the business priorities and problems to be solved, and set out road maps that are time-bound, measurable and achievable. As obvious as it may seem, without focus and direction no process or technology will make a difference. This is the only way a firm can truly capitalise on the exciting Big Data era.