It’s incredible how things have changed
The tech industry’s ability to innovate has been transformed – but its handling of data has not kept pace.
Back in the late 2000s, I recall requesting the purchase of a server to build a new product capability. After weeks of form-filling and meetings to discuss the proposals, risks and costs (quoted at more than $100k over a two-year period), I was informed that, even if approved, there would be a four-to-six-month lead time before the server was made available to my team.
Since those days, the tech industry’s ability to innovate, experiment, fail fast and scale has been transformed by the emergence of platforms such as AWS, GCP and Azure, which give us near-instant access to compute, storage and a wealth of other services.
The beauty of these services lies in the virtualisation of assets. When an EC2 instance is set up in AWS, for example, no physical server is being provisioned. It’s a virtual server that shares physical disk and hardware with other virtual servers, but to the user it appears and behaves (for the most part) like the real thing. The advantage is that the server is available the moment it is needed, allowing organisations to scale up and down extremely rapidly. Immediate access to compute and storage also lets organisations innovate and experiment with new product and service offerings in a low-risk way, so they can respond quickly to market events, changing regulations and opportunities to get ahead of their competition.
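To make the contrast with those months of lead time concrete, here is a minimal sketch of provisioning a virtual server on demand, using Python and boto3. It assumes AWS credentials are already configured; the AMI ID, region and instance type are placeholders rather than recommendations.

```python
import boto3

# Minimal sketch: request a virtual server on demand.
# Assumes AWS credentials are configured; the AMI ID and region are placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Virtual server {instance_id} requested – available in minutes, not months.")
```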
So, on the compute and storage side, organisations are spoiled for choice. When it comes to data, however, the industry at large hasn’t modernised in the same way. There’s a term for this, “data gravity”, originally coined by Dave McCrory in 2010:
“Data, if large enough, can be virtually impossible to move.”
That is to say, despite existing in the digital realm, data is notoriously difficult to access, difficult to move, and difficult to store. The tools used across the industry reflect the challenge of data behaving like a physical asset: ETL (extract, transform, load) and ELT (extract, load, transform) tools are still hugely popular for moving and transforming data, and the continued proliferation of centralising data warehouses reinforces the idea of data as a ‘physical’ asset to be stored. Useful tools like Kafka have, of course, improved how we move data around at high volume and high velocity, but they still treat data as if it were a physical thing.
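To illustrate what ‘treating data as a physical thing’ looks like in practice, here is a deliberately simple, generic ETL sketch in plain Python. The file, table and column names are hypothetical; the point is that the data is extracted, reshaped and then copied wholesale into another store.

```python
import csv
import sqlite3

# A deliberately simple, generic ETL step (hypothetical names throughout):
# extract rows from a source CSV, transform them, load a copy into a warehouse table.
def run_etl(source_csv: str = "orders.csv", warehouse_db: str = "warehouse.db") -> None:
    # Extract: read the source data out of its original location.
    with open(source_csv, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: e.g. normalise amounts from pounds to pence.
    transformed = [(row["order_id"], int(float(row["amount"]) * 100)) for row in rows]

    # Load: physically copy the (transformed) data into another store.
    with sqlite3.connect(warehouse_db) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount_pence INTEGER)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?)", transformed)


if __name__ == "__main__":
    run_etl()
```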
Within a single company or organisation, these challenges are significant enough. But as more and more issues (climate change and vulnerable customers, for example) require multiple companies or organisations to cooperate, the challenges of ‘data gravity’ are compounded by further concerns around information security and data sovereignty.
All of this means that when organisations want to build data-centric products or services (“where data is the primary and permanent asset, and applications come and go”), most are stuck in the late 2000s with long lead times and high risk.
But wait!
What if we could virtualise data in the same way that we virtualise compute and storage? What if data sources were made available to us, in our own environment, at the click of a button? Then we could experiment rapidly and cheaply with new solutions, validating any data source’s usefulness without the complexity of building data pipelines. In fact, in a 2011 follow-up article entitled ‘Defying Data Gravity’, Dave McCrory states:
“So to make this work, we have to fake the location and presence of the data to make our services and applications appear to have all of the data beneath them locally.”
This sounds awfully like virtualisation, doesn’t it?
Furthermore, what if, in the process of sharing data, we also retained sovereignty over our data systems? Then we could share data without worrying about giving system access to external parties, confident that bad actors wouldn’t be able to attack our valuable assets.
Wouldn’t this totally transform how business is done? IOTICS enables exactly what I’ve described above.
Using their own individual IOTICSpace, IOTICS’ customers create real-time, dynamic data ecosystems in which multiple participating organisations can virtualise individual assets as digital twins and choose with whom to share those virtual assets, using fine-grained access controls. These controls ensure that data owners share only what they want to share, and only with those they choose to give access to. Leveraging Semantic Web technology, IOTICSpace not only enables the virtualisation of these assets, but also makes them findable (through search and query) and relatable (through a federated knowledge graph of linked data) across the ecosystem – something that existing data virtualisation tools cannot do today.
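To give a flavour of what the Semantic Web and linked data bring here, the conceptual sketch below describes a virtual asset – a digital twin of a wind turbine, say – as linked-data metadata, using Python and rdflib. The namespaces and properties are invented purely for illustration; this is not the IOTICS API.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import RDFS

# Conceptual sketch only: the namespaces and properties below are illustrative,
# not the IOTICS API.
EX = Namespace("http://example.org/twins/")
SCHEMA = Namespace("https://schema.org/")

g = Graph()
turbine = URIRef(EX["turbine-42"])

# Describe the digital twin as linked-data metadata.
g.add((turbine, RDF.type, EX.DigitalTwin))
g.add((turbine, RDFS.label, Literal("Offshore wind turbine 42")))
g.add((turbine, SCHEMA.location, Literal("Dogger Bank, North Sea")))
g.add((turbine, EX.sharesFeed, EX["power-output-feed"]))

print(g.serialize(format="turtle"))
```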
IOTICSpace gives an organisation a dedicated environment to which only it has access, while enabling it to join decentralised, networked ecosystems of other spaces and share data for cooperative endeavours. To facilitate that sharing, the peer-to-peer network brokers interactions between virtual assets in the background – data is never centralised, and participants never have direct access to anyone else’s systems.
Organisations that wish to find and consume data use their own IOTICSpace to search for, find and access the virtual assets of interest to them. In this way, IOTICS facilitates the “faking” of the location and presence of someone else’s data in your own local environment, enabling multiple organisations to cooperate rapidly, securely and in an agile way on the development of new solutions.
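Continuing the conceptual sketch above (again with invented namespaces, not the IOTICS API), ‘searching for virtual assets of interest’ can be pictured as a query over that shared linked-data metadata, run from your own environment against the graph g built earlier:

```python
# Continuing the sketch above: find digital twins by querying their metadata.
query = """
PREFIX ex: <http://example.org/twins/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?twin ?label WHERE {
    ?twin a ex:DigitalTwin ;
          rdfs:label ?label .
}
"""

for row in g.query(query):
    print(f"Found twin {row.twin} labelled '{row.label}'")
```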
IOTICS is transforming the way that we build data-centric applications by enabling truly collaborative, secure data ecosystems.
If you want to know more, reach out to me on LinkedIn and I’d be delighted to talk more about what we’re up to.
Quote resources: