Distributed applications require distributed protocols, and in today's decentralized world there's arguably nothing better to build upon than blockchains. Since their inception, a number of additional protocols have been built on top of blockchains to extend their use beyond the transactional boundaries of simply exchanging cryptocurrencies. However, due to the complexity of linking multiple transactions together in a sensible way, very few (if any) have been capable of storing large amounts of structured data directly on a blockchain.
Six months ago we made a discovery that got us started on a project. That project evolved into a platform. That platform is now acquiring partnerships for further experimentation. We call it CORTEX - and it's not only capable of storing data directly on a blockchain, it can also be used to deploy several different data models - from graphs and documents to more traditional relational schemas.
At its core is our blockchain-agnostic data-storage method, which combines the predictability of hierarchical deterministic (HD) key derivation with standard (38-byte-limited) OP_RETURN functionality. This allows us to define a schema that uses different branches of keys for different things. One branch could, for example, represent database names, with each of its descendants representing one of many databases. Each of their descendants could then do the same to represent individual database tables or collection names, with one branch of each of their descendants then used to represent field names, another branch representing field types, one representing values, and so on.
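As a rough illustration of how such a schema might map onto HD derivation paths, here is a minimal Python sketch. The branch indices (0 for field names, 1 for types, 2 for values) and the path layout are invented for demonstration and are not Cortex's actual scheme.

```python
# Hypothetical mapping of database structure onto BIP32-style
# derivation paths. All branch indices here are illustrative only.

def database_path(db: int) -> str:
    """One branch of the master key holds databases; each child
    index identifies one database."""
    return f"m/0/{db}"

def table_path(db: int, table: int) -> str:
    """Each database's descendants represent its tables or collections."""
    return f"{database_path(db)}/{table}"

def field_name_path(db: int, table: int, field: int) -> str:
    """Under each table, branch 0 holds field names..."""
    return f"{table_path(db, table)}/0/{field}"

def field_type_path(db: int, table: int, field: int) -> str:
    """...branch 1 holds field types..."""
    return f"{table_path(db, table)}/1/{field}"

def value_path(db: int, table: int, field: int, row: int) -> str:
    """...and branch 2 holds the values themselves, one per row."""
    return f"{table_path(db, table)}/2/{field}/{row}"
```

The key derived at each path would identify the transaction carrying that piece of data, so the entire structure hangs off a single master key.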
By doing this, we are able to distribute a practically unlimited amount of structured data across a vast array of seemingly random transactions - all linked directly to a single master key. The public key allows anyone who holds it, together with the schema, to recreate and view all of the data within that instance (any data not encrypted prior to being encoded on the blockchain), whereas the private key is required to add, edit, or decrypt data, or to update schemas.
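To make the distribution step concrete, here is a hedged Python sketch of splitting one serialized record into payloads that fit the 38-byte OP_RETURN limit cited above. The record format and function name are our own inventions, not Cortex internals.

```python
OP_RETURN_LIMIT = 38  # byte budget per transaction, as cited above

def chunk_record(data: bytes, limit: int = OP_RETURN_LIMIT) -> list:
    """Split one serialized record into OP_RETURN-sized pieces.
    Each piece would be embedded in its own transaction, keyed by a
    deterministic child of the master key, so a reader who knows the
    key and schema can locate, reorder, and reassemble the chunks."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

record = b'{"name": "Ada Lovelace", "role": "admin"}'  # 41 bytes
chunks = chunk_record(record)                          # 38 + 3 bytes
assert b"".join(chunks) == record                      # lossless reassembly
```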
Since data entry is transactional by default and every event is recorded on the blockchain, there is an inherently immutable audit trail detailing every change made to any data within any of the databases linked to that single master key.
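One way to picture that audit trail: the current state of a database is just a replay of every recorded change in order. The event shape below is hypothetical, but the property it demonstrates follows from the immutable, append-only log.

```python
def replay(events):
    """Fold an ordered, append-only event log into current state.
    Because the underlying transactions are immutable, the same log
    always replays to the same state, and any historical version can
    be rebuilt simply by stopping the replay early."""
    state = {}
    for key, value in events:  # each event is a (key, value) write
        state[key] = value
    return state

log = [("name", "alice"), ("role", "user"), ("role", "admin")]
assert replay(log) == {"name": "alice", "role": "admin"}
assert replay(log[:2]) == {"name": "alice", "role": "user"}  # point-in-time view
```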
While the protocols behind our platform can be complex, we've developed a simplified interface to package them. Using Cortex, individuals and organizations can easily encode, store, and share data across multiple blockchains, as well as access and manage any blockchain protocol or module from a single place with simple point-and-click, drag-and-drop commands. It looks and feels much like a hosting or database control panel.
Although still in the early stages of its development, we're building Cortex around a small group of select partners. Based on their feedback and challenges, we've been building the platform with three main requirements in mind that we all agree form the foundation for any distributed application or decentralized service:
01 Key pair generation, logistics and public broadcasting
02 Decentralized identities, role-management, and authorization
03 On-chain data storage (used as the central data repository of truth for all Cortex events)
Because we're building Cortex with modularity in mind, each of these components can be easily replaced should you require something that is not 100% distributed. For example, you could replace the DB module to use MySQL or MongoDB rather than a blockchain. You could even replace our identity module with OAuth or your own customized login system. This enables organizations to create their own experiences around any of their existing products and services, while keeping the benefits of a blockchain as the underlying storage engine, as a way to distribute trust, or simply for added two-factor security.
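The kind of module swapping described above can be sketched as an interface that application code depends on, with interchangeable backends behind it. The interface and class names here are hypothetical, not the real Cortex API.

```python
from abc import ABC, abstractmethod

class StorageModule(ABC):
    """Minimal storage contract a Cortex-style platform might expose."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(StorageModule):
    """Stand-in for a conventional backend such as MySQL or MongoDB;
    a blockchain-backed module would implement the same two methods."""

    def __init__(self):
        self._rows = {}

    def put(self, key: str, value: bytes) -> None:
        self._rows[key] = value

    def get(self, key: str) -> bytes:
        return self._rows[key]

# Application code depends only on the interface, so swapping the
# blockchain-backed module for a conventional one is a one-line change.
store: StorageModule = InMemoryStore()
store.put("users/0/name", b"alice")
```

The same pattern applies to the identity module: anything satisfying the login contract, whether OAuth or a custom system, can slot in without touching the rest of the application.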
In its current state, Cortex is a sandbox environment for creating entirely new concepts. As development continues, we'll be inviting a number of partners to the platform to help us identify new and interesting non-financial use cases for the future of distributed data.