Fund start-up pain and making mistakes for the long term
This post is the second in a series looking at investment data management topics that come up regularly in our work with fund managers.
We’ve spoken with around 100 small hedge funds and asset managers specifically to learn about their experiences setting up platforms and operations, and growing the business.
This post concerns the fund set-up process itself and how some fairly innocuous decisions early on can have huge implications later.
Names, locations and dates have been changed to protect the innocent.
Decisions for the long term
Let’s say you’re an asset manager founded a decade ago, managing around $1 billion and trading a variety of asset classes. You started life with a simple fund management system and a fund accountant, and used Excel for order management, analytics and everything else. In those days, that was enough to raise funds and get licensed. Assets grew initially but have remained steady for the past few years, whilst the costs of staff, licences and, most significantly, regulatory compliance have rocketed.
You’d love to put controls around Excel usage for data storage, tracking and entitlements, and to bring the fund platform in-house. The primary aim is to own your data, so that additional reporting and asset classes can be added immediately and at no extra cost (rather than in six months and at great expense); you’d also like to enhance your data analytics capability, add some strategies and perhaps some visualisation tools.
None of these changes are trivial in terms of expense, expertise and time, and after years of working with closed vendor systems, your team has specialised in VBA and pivot tables. Each time a fund or asset class requiring new functionality is added, you consider an in-house solution again, but migrating the transaction history off the fund platform appears too complex a task; instead, specialist external tools are bolted on over the years. Simplifying this architecture now is far too complex to contemplate. Even replacing the order process (i.e. getting off Excel) is challenging, since many order management systems want to up-sell portfolio capability too.
Expanding the business
As a systematic trader, you are growing into new strategies across multiple asset classes, platforms and regions. Most of your staff are adept at coding and your technical setup reflects that. Your fund platform was developed in-house, is open-source and worked well at first, albeit with a limited instrument set, and it handled everything downstream: positions and risk were straightforward, as was the reconciliation between the blotter and the fund accountant, and regulatory reporting was likewise trivial for exchange-traded contracts. But over time, as the fund grew and added trading venues and instruments, process complexity increased.
The prime broker set expanded, with the number of feeds ingested daily expanding into the hundreds.
Whilst you can cope with this, position breaks are starting to occur too frequently for comfort; you are gradually losing control of your reconciliation process and you can’t seem to retain anyone to maintain the platform or coordinate the workflow.
You have plenty of in-house technical capability, but it’s not in systems architecture and whilst you know what the solution should look like, you are loath to begin work on it without a precise design.
External vendors are offering to solve the mapping, and to some extent the reconciliation, but the implementation time is horrible, the cost per connection is prohibitive and you are concerned about access to your data once it is ‘external’.
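The reconciliation problem described above is, at its core, about netting internal movements per instrument and comparing them against what each counterparty reports. As a rough illustration only (the function and field names here are hypothetical, not any vendor's API), a movement-level break check can be sketched like this:

```python
from collections import defaultdict

def detect_breaks(internal_movements, broker_positions, tolerance=1e-6):
    """Net internal movements per instrument and compare against
    broker-reported positions, flagging any differences as breaks."""
    internal = defaultdict(float)
    for instrument, quantity in internal_movements:
        internal[instrument] += quantity

    breaks = {}
    for instrument in internal.keys() | broker_positions.keys():
        diff = internal.get(instrument, 0.0) - broker_positions.get(instrument, 0.0)
        if abs(diff) > tolerance:
            breaks[instrument] = diff
    return breaks

# Two fills in AAPL net to 300; the broker reports 250 -> a break of +50
moves = [("AAPL", 500.0), ("AAPL", -200.0), ("VOD.L", 1000.0)]
broker = {"AAPL": 250.0, "VOD.L": 1000.0}
print(detect_breaks(moves, broker))  # {'AAPL': 50.0}
```

The sketch is trivial for one feed; the operational pain in the scenario comes from running this across hundreds of daily feeds, each with its own identifiers and formats, which is exactly the mapping problem the vendors are quoting on.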
You are a global fund manager, with more than $100 billion under management. Over the years, you have acquired several other major funds and somewhere along the journey, the cost, complexity and time needed to fully integrate products and funds to the original platform became prohibitive.
The current landscape is one of siloed systems and teams, each specialising and perhaps working hard to become the core solution. You have no common data representation, so normalising for risk and reporting is a major headache. Without that normalisation, you simply cannot add new asset classes without bolting on yet more specialist systems, bi-temporal investigation of breaks is near impossible, and modifications or upgrades are hugely labour-intensive and slow.
You have launched a multi-year project to resolve this. There are external vendors specialising in enterprise-scale projects, but you have heard horror stories about costs, timelines and end-state flexibility, asset-class coverage and data usability, so you have decided to handle the build in-house. Your tech team assures you the capability exists, although you do not know what the time frame is, what the solution will look like, or how to assess whether it is fit for purpose.
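The "common data representation" problem amounts to writing an adapter per silo that maps each system's records into one canonical shape, so risk and reporting can run across all of them. A minimal sketch, with entirely illustrative field names and a dummy ISIN (no real vendor schema is implied):

```python
# Each silo gets an adapter into one canonical record shape.
def from_equity_system(row):
    return {"instrument": row["ticker"], "quantity": row["shares"],
            "currency": row["ccy"], "source": "equity_silo"}

def from_fixed_income_system(row):
    # Bond systems typically carry nominal amounts rather than share counts.
    return {"instrument": row["isin"], "quantity": row["nominal"],
            "currency": row["currency"], "source": "fi_silo"}

equity_rows = [{"ticker": "VOD.L", "shares": 1000.0, "ccy": "GBP"}]
fi_rows = [{"isin": "GB0000000000", "nominal": 250000.0, "currency": "GBP"}]

canonical = ([from_equity_system(r) for r in equity_rows] +
             [from_fixed_income_system(r) for r in fi_rows])
for record in canonical:
    print(record["source"], record["instrument"], record["quantity"])
```

Two adapters are easy; the enterprise-scale difficulty in the scenario is that every acquired fund brings more silos, each adapter must be kept in sync as upstream systems change, and the canonical model itself must be extensible enough to absorb asset classes nobody anticipated.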
The common thread through all these scenarios (and indeed almost all our conversations with clients) is that of specialisation. Building a state-of-the-art data platform takes the brightest minds in the city, 2-3 years, millions in investment and decades of expertise as to how things should be done – we know, because we’ve done it.
The solution needs to incorporate an extensible data model, to allow for new asset classes without re-design; it needs a separate entitlements engine to control and track usage; it needs to be accessible via open APIs to address connectivity; it must be bi-temporal for regulatory reporting, audit and scenario analysis; and it must operate at a cashflow (or movement) level of granularity to truly support reconciliation.
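The bi-temporal requirement is worth unpacking: every movement carries both the date it economically occurred (effective time) and the date the system learned about it (as-at time), so you can ask "what did we believe on day X about our position on day Y". A minimal sketch of the idea, with hypothetical names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Movement:
    instrument: str
    quantity: float
    effective_at: date   # when the movement economically occurred
    recorded_at: date    # when the system learned of it

def position(movements, instrument, effective_at, as_at):
    """Position as of `effective_at`, using only information
    that was known on `as_at` -- the bi-temporal query."""
    return sum(
        m.quantity for m in movements
        if m.instrument == instrument
        and m.effective_at <= effective_at
        and m.recorded_at <= as_at
    )

# A trade booked on Jan 1, plus a late correction booked on Jan 3
history = [
    Movement("AAPL", 500.0, date(2024, 1, 1), date(2024, 1, 1)),
    Movement("AAPL", -50.0, date(2024, 1, 1), date(2024, 1, 3)),
]
# What we believed on Jan 2, versus what we believe after the correction:
print(position(history, "AAPL", date(2024, 1, 2), date(2024, 1, 2)))  # 500.0
print(position(history, "AAPL", date(2024, 1, 2), date(2024, 1, 4)))  # 450.0
```

Because corrections are appended rather than overwritten, the Jan 2 view is reproducible forever, which is what makes break investigation, audit and regulatory restatement tractable.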
Getting this platform right sooner rather than later saves many a painful decision about the least bad solution further down the line.
If you want to chat with us about your data management problems, we’re happy to lend an ear – get in touch today.