What a Naval Historian Can Teach Us About Data


08/03/2018

Everyone knows that data is important, but do we really appreciate how important access to correct data is for the successful execution of an investment strategy?

It sometimes feels that the financial services industry has only recently woken up to the idea that data is critical to the efficient running of the business – witness the plethora of Data Management Offices which have shot up, and the multi-million dollar Data Strategy projects being undertaken across the industry. The hard deadline of MiFID II added further urgency to these initiatives, significantly increasing the expectations on the quantity and quality of data reported to regulators. Is this going to be a continually increasing cost of doing business?

A key part of the rollout of every new IT system is sorting out the data that it will ingest, process and output. Typically, resources are set aside in the project plan to source the additional data, validate it, load it into the new system and maintain it on an ongoing basis. When this effort is under-resourced, the project is likely to falter, requiring additional people to fix the data issues or delaying delivery to allow time to implement additional data-handling capabilities. In the worst case, the project can fail outright because the data simply cannot be sorted out.

Another consequence of an out-of-control data regime is the effect on what people actually spend their time doing. Employees hired to perform one role inevitably end up spending a significant amount of their time just sorting out the data – for example, portfolio managers spending an hour each morning calculating their current holdings, middle office teams maintaining strategy tags, or the back office manually pulling data from disparate sources to produce a custom client report. This is absurd: a portfolio manager should be focused on managing money, not data.

I have personal experience of this. Nearly 20 years ago I joined a team building an order generation application. Fast forward 20 years and the application has spread throughout the business. It is used as part of client reporting processes, as part of the compliance monitoring function, and as a data source for a number of spreadsheets – all this in addition to its core role as a portfolio rebalancing and trading system. The reason: it was the only place where data from multiple sources was pulled together, integrated, cleaned and made available in a flexible and performant application.

So why is this such a difficult problem to solve? I suspect one of the key reasons relates to “Parkinson’s law of triviality”, coined by the naval historian and author Cyril Northcote Parkinson.

In “High Finance, or the Point of Vanishing Interest”, the third chapter of his 1958 book Parkinson’s Law, Parkinson describes a fictional finance committee meeting with a three-item agenda: the first is the signing of a £10 million contract to build a reactor, the second a proposal to build a £350 bicycle shed for the clerical staff, and the third a proposal to spend £21 a year on refreshments for the Joint Welfare Committee.

  1. The £10 million sum is both too large and too technical for most members to grasp, so discussion of this item passes in two and a half minutes. One committee member proposes a completely different plan, which nobody is willing to accept because planning is already far advanced; another, who does understand the topic, has concerns, but does not feel he can explain them to the rest of the committee.
  2. The bicycle shed is a subject the committee understands, and the amount is within their comfort zone. Committee member Mr Softleigh says that an aluminium roof is too expensive and they should use asbestos. Mr Holdfast wants galvanised iron. Mr Daring questions the need for the shed at all. Holdfast disagrees. Parkinson then writes: “The debate is fairly launched. A sum of £350 is well within everybody’s comprehension. Everyone can visualise a bicycle shed. Discussion goes on, therefore, for forty-five minutes, with the possible result of saving some £50. Members at length sit back with a feeling of accomplishment.”
  3. Parkinson then describes the third agenda item, writing: “There may be members of the committee who might fail to distinguish between asbestos and galvanised iron, but every man there knows about coffee – what it is, how it should be made, where it should be bought – and whether indeed it should be bought at all. This item on the agenda will occupy the members for an hour and a quarter, and they will end by asking the secretary to procure further information, leaving the matter to be decided at the next meeting.”

As in this fictional committee meeting, some aspects of a data project are easily grasped by most people – creating a full data dictionary, say, or listing and modelling all the data types – and so attract more than their fair share of comment and discussion. Meanwhile the more complex aspects, which could lead to something truly innovative and future-proof, are lost as people concentrate on solving what is in front of them.

Another problem is that any project which purports to solve a business’s data problems is likely to be quickly overrun with previously unknown requirements from across the organisation, leading to confused priorities and a “you can’t please everyone all the time” situation.

So where is the innovation to be found? At FINBOURNE, we believe a data system should excel in a number of key areas, and we are striving to make our product, LUSID, the industry leader. We believe the data system of the future should include:

  • Bi-temporality – the ability to access all data in the system as it was at any point in the past, i.e. to “roll back” to the state of the system at a previous time (see the sketch after this list)
  • Efficient service-based real-time access to the latest data, avoiding the need for client applications to maintain a local cache
  • Notifications for all data changes, allowing client applications to react to changes in real-time
  • Ability to record a “signed-off” dataset, for example (but not limited to) month-end official positions
  • Ability to segregate the official data from data used for other purposes, for example what-if analysis or User Acceptance Testing
  • Flexibility to add new data points quickly and easily, with enough controls on access to fields to avoid a spaghetti of dependencies across the business. For example, a desk-specific pricing hierarchy would be stored centrally in LUSID, but only accessible for the particular desk that created it.
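
Bi-temporality is the item on this list that is hardest to retrofit, and it is easier to see in code than in prose. The sketch below is a toy illustration of the idea in plain Python – it is not the LUSID API, and the class and method names are invented for the example. Every write is stamped with the system time at which it arrived, nothing is ever overwritten, and any query can therefore be replayed “as at” a past moment. (A full bi-temporal model also tracks a separate effective date per record; this sketch shows only the as-at axis described above.)

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass(frozen=True)
class Version:
    """One write to the store: a value plus the system time it arrived."""
    value: float
    as_at: datetime


class BitemporalStore:
    """Toy append-only store: values are superseded, never overwritten.

    Illustrative only -- these names are invented for this example,
    not taken from FINBOURNE's implementation.
    """

    def __init__(self) -> None:
        self._history: dict[str, list[Version]] = {}

    def upsert(self, key: str, value: float, as_at: Optional[datetime] = None) -> None:
        """Record a new version; all earlier versions remain queryable."""
        stamp = as_at or datetime.now(timezone.utc)
        self._history.setdefault(key, []).append(Version(value, stamp))

    def get(self, key: str, as_at: Optional[datetime] = None) -> Optional[float]:
        """Return the value as the system knew it at `as_at` (default: now)."""
        stamp = as_at or datetime.now(timezone.utc)
        seen = [v for v in self._history.get(key, []) if v.as_at <= stamp]
        return max(seen, key=lambda v: v.as_at).value if seen else None


# A position is booked, then corrected; both states stay reproducible.
store = BitemporalStore()
store.upsert("FUND-1/VOD LN", 10_000, datetime(2018, 3, 1, tzinfo=timezone.utc))
store.upsert("FUND-1/VOD LN", 12_500, datetime(2018, 3, 5, tzinfo=timezone.utc))

print(store.get("FUND-1/VOD LN", datetime(2018, 3, 2, tzinfo=timezone.utc)))  # 10000
print(store.get("FUND-1/VOD LN"))  # 12500, the latest view
```

Note that because writes are appends rather than updates, a “signed-off” dataset falls out almost for free: record the as-at timestamp at the moment of sign-off, and any later query against that timestamp reproduces the month-end view exactly.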

We’re seeing a real change in attitudes towards data. “Good enough” is no longer good enough. Taking months to source new datasets or add data points is unacceptable. Clients will soon demand access to their own data via APIs. Regulators will ask for more real-time feeds.

This is already happening in retail banking with the Open Banking Initiative. Imagine something similar for investments: your pensions and investments across all your providers accessible in a simple and secure way, allowing you to see what you want when you want, to grant access to your financial adviser, and to change providers with a single click. We can imagine it, and we’re thrilled to be helping the asset management industry stay one step ahead of these changes.

If anything here has interested you, or you want to hear more about our plans and ideas, please don’t hesitate to get in touch.
