Unless steps are taken, it’s a problem that is only going to continue to grow. Data sprawl and fragmentation will reach the stage where they become a drain on resources and a dampener on staff productivity.
A new approach to data management
The main issue here is one of scale – not so much the sheer volume of data involved as how we go about managing it. Instead of taking a joined-up approach, most companies divide their data into more manageable chunks, misleadingly referred to as “systems”, each with its own infrastructure – typically hardware-based and dedicated to a single function such as backup, disaster recovery, archiving or analytics – with little or no data sharing between these fragmented stores.
Not only is this costly, but it can also lead to multiple copies of the same data propagating across these silos. In an ESG survey, 73 percent of respondents reported that their organisation stores data in multiple public clouds today in addition to its own data centres. Massive volumes of copied data are spread everywhere, and that, in turn, can lead to operational issues caused by inconsistencies between those copies and in how they’re encoded, stored and accessed. It’s also hugely expensive.
Then there’s also the little matter of visibility of data.
Believe it or not, most organisations have little knowledge of where most of their data resides, let alone who owns and uses it, or whether it contains sensitive information. In other words, most of their data is “dark”, making it a considerable risk to the business.
How, for example, do you police compliance if you don’t know what data you’ve got, or whether it’s even all in the same area of jurisdiction? Likewise, how do you protect against ransomware attacks or – at a more mundane level – manage, optimise and make the best use of this most valuable of resources on a day-to-day basis?
Taking a more effective approach
The typical response is to tackle these issues in the same piecemeal manner, on a per-application or per-function basis.
Don’t – it will only make the problem worse, for example, by creating silos within silos.
Take a look instead at how the big boys do it. Global players like Amazon, Google and Facebook, with exabytes of data to cope with, ought to have huge data issues, but don’t. That’s because they’ve sidestepped the hardware-led “system” route in favour of a unified data management approach, using modern technology to avoid these very modern problems.
The details, of course, differ, but each has opted to build an infrastructure using commodity hardware on which everything is virtualised and software-defined. Moreover, as part of that software-defined architecture, each has a modern data management layer comprising the following three key elements:
- A single unified file system to store data consistently regardless of location, technology or application requirements
- A single logical control plane to manage it
- The ability to run and expose the data services needed by applications regardless of how they consume them
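To make the three elements above concrete, here is a minimal Python sketch of the idea – all class and method names are hypothetical, purely for illustration, not any vendor’s actual API. A single catalogue acts as the logical control plane over multiple storage backends, and content-addressed storage means two applications writing the same data end up sharing one copy rather than creating silos:

```python
import hashlib


class StorageBackend:
    """One physical store (an on-prem array, a cloud bucket, etc.)."""

    def __init__(self, name):
        self.name = name
        self.blobs = {}  # content digest -> raw bytes

    def put(self, digest, data):
        self.blobs[digest] = data

    def get(self, digest):
        return self.blobs[digest]


class UnifiedDataLayer:
    """A single logical namespace over many backends.

    The catalogue is the control plane: it always knows where every
    logical path lives, so no data is "dark". Content-addressing means
    the same bytes written under two paths (say, a backup copy and an
    analytics copy) are stored only once per backend.
    """

    def __init__(self, backends):
        self.backends = {b.name: b for b in backends}
        self.catalogue = {}  # logical path -> (backend name, digest)

    def write(self, path, data, backend="primary"):
        digest = hashlib.sha256(data).hexdigest()
        self.backends[backend].put(digest, data)  # deduplicated by digest
        self.catalogue[path] = (backend, digest)

    def read(self, path):
        backend, digest = self.catalogue[path]
        return self.backends[backend].get(digest)

    def locate(self, path):
        # Visibility: the control plane can answer "where is this data?"
        return self.catalogue[path][0]
```

Writing the same report under a backup path and an analytics path results in two catalogue entries but a single stored blob – the shared-pool effect the article describes, in miniature.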
This approach not only saves money by creating, in effect, a single shared data storage pool, it’s also much more efficient, secure and easier to manage. There’s much less risk of multiple copies being created, for example. The same backup and archiving technologies can also be applied across the entire infrastructure estate, so it’s all highly visible – no dark data, no matter where it’s stored.
Make the necessary investments
But (I hear you cry) the likes of Google, Amazon and Facebook have the money and the technical resources to do that. We don’t and we’ve also got lots of legacy solutions which we’d have to upgrade or replace which would be both expensive and disruptive to the business.
The answer to which is, yes, there is a cost and, yes, there will be disruption, but no more than with a lot of other IT investments. How much is your existing legacy setup hurting your business in terms of total cost of ownership, and lack of insight? Plus, you don’t, necessarily, have to sweep everything aside in one fell swoop in order to copy the big boys.
Demand is building, and the IT industry is rapidly waking up to this need, making it increasingly possible, and easy, for businesses of any size to deploy a modern data management solution. Moreover, because these solutions are software-defined, they really can be implemented and managed across an existing infrastructure with minimal disruption, whether on-premises, in the cloud or over a hybrid mix of the two.
Which leaves the question of whether you think it’s worth it, so ponder on this. Data sprawl, data fragmentation and siloed data are big problems already, and if you don’t do something they will only get worse. They are already acknowledged as a significant roadblock to business transformation, making it all the more imperative to tackle them sooner rather than later if you have aspirations to digitise your business.
That something is modern data management. Modern data management platforms leverage new technologies that are future-proofed and designed to operate with legacy technologies too, overcoming the issues caused by point products. And, by bringing your data back under proper control to support risk mitigation and compliance, you also get a more holistic view of your data for business insight and new service development. Can you really afford not to?