Reliable, secure and decentralized information – The issue of data is central to the adoption of Web3. The KYVE Network project is at the forefront of decentralized solutions, endorsed by some of the most popular and demanding protocols, which already use its architecture and tooling. After revealing its Cosmos SDK network on the Korellia incentivized testnet, KYVE today unveiled its Data Pipeline, a tool designed to make life easier for developers across the blockchain ecosystem.
Data Pipeline, fuel for Web3
The public beta goes live today, December 5, 2022, and from this date the community can finally put the Data Pipeline to the test in practice. On the menu: new capabilities for moving data between Web2 and Web3 that are set to shake up the daily habits of the professionals who handle this data. In short, Data Pipeline offers an easy, customizable access point to anyone who wants to take it up.
Users, analysts, data engineers, software developers, researchers and protocols will all benefit from this new approach, which lets them work with KYVE's verified data without having to worry about retrieving the original source data. With this product, Web2 data can easily be sourced from blockchain networks and vice versa, from Web3 to Web2 and back.
Complex mechanics, simplified use
In practice, Data Pipeline lets anyone extract data from the pools of information collected and aggregated by KYVE and then import it into the data backend of their choice. The team has made sure to offer a complete and efficient tool, compatible with the most popular backends, such as Snowflake, BigQuery, S3 and MongoDB. The tool is built on an ELT (extract, load, transform) framework via the Airbyte platform, a technology that lets you shape the data however you like once it has been exported to the backend of your choice.
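To make the idea concrete, here is a minimal sketch of the extract-and-load pattern that such a pipeline automates, assuming a hypothetical KYVE pool endpoint and MongoDB as the destination. The URL and field names are placeholders for illustration only; the actual Data Pipeline wires these steps up for you through Airbyte connectors, so no such code is needed to use it.

```python
# Minimal sketch of the extract-and-load idea behind a data pipeline.
# The endpoint and field names below are assumptions made for illustration.
import requests
from pymongo import MongoClient

# Hypothetical KYVE pool query endpoint (placeholder URL).
POOL_API = "https://api.example-kyve-endpoint.network/pools/0/bundles"

def extract(limit: int = 100) -> list[dict]:
    """Pull a page of validated bundle metadata from a data pool."""
    response = requests.get(POOL_API, params={"limit": limit}, timeout=30)
    response.raise_for_status()
    return response.json().get("bundles", [])

def load(records: list[dict]) -> None:
    """Load the raw records into a MongoDB collection (one possible backend)."""
    client = MongoClient("mongodb://localhost:27017")
    collection = client["kyve"]["bundles"]
    if records:
        collection.insert_many(records)

if __name__ == "__main__":
    load(extract())
```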
Unlike other, often overly complex solutions, no coding skills are needed to use it. Data Pipeline was designed to be as simple and smooth as possible so that everyone can find and effectively use the reliable data they need, whether to retrieve data from KYVE and integrate it into a project, feed a validation node, or anything else.
Extremely customizable implementation
Starting today, to use Data Pipeline, just visit KYVE's GitHub, download the code and follow the step-by-step guide to run it on your own server. Once installed, you can immediately select a custom source pulled from one of KYVE's data pools, customize the sync settings to suit your needs, and feed all the data you need into your own protocol.
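For illustration, the kind of choices involved might look like the sketch below, written here as a plain Python dictionary. Every field name is an assumption made for the example; the real options are defined through the step-by-step guide and the Airbyte interface rather than in code.

```python
# Illustrative only: a hypothetical sync configuration of the kind you might
# define after installing Data Pipeline. All field names are assumptions.
sync_config = {
    "source": {
        "connector": "kyve",
        "pool_id": 0,              # which KYVE data pool to pull from
        "start_key": None,         # optionally resume from a given bundle
    },
    "destination": {
        "connector": "mongodb",    # Snowflake, BigQuery, S3, ... also possible
        "uri": "mongodb://localhost:27017",
        "database": "kyve",
    },
    "schedule": {
        "sync_mode": "incremental",   # only fetch bundles added since last run
        "frequency_hours": 6,
    },
}
```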
KYVE works with raw, public data, which means that once you import it through Data Pipeline you can transform it as you see fit to suit your use case. This modularity comes from the underlying Airbyte architecture, which provides the collection technology as well as a very easy-to-use connector. In addition, again thanks to Airbyte, synchronization and update schedules are fully customizable.
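As a rough example of what that transformation step could look like once the raw records have landed in a backend, the snippet below reshapes hypothetical bundle records loaded into MongoDB. The field names are assumptions and would need to be adapted to the pool you actually sync.

```python
# Sketch of a post-load transformation (the "T" in ELT). The field names are
# assumptions about what a raw record might contain.
import pandas as pd
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["kyve"]["bundles"]

# Keep only the columns this hypothetical use case cares about.
raw = list(collection.find({}, {"_id": 0, "id": 1, "storage_id": 1, "to_key": 1}))
df = pd.DataFrame(raw)

# Example reshaping: rename columns and drop incomplete rows.
df = df.rename(columns={"storage_id": "arweave_tx", "to_key": "last_height"}).dropna()
print(df.head())
```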
Since the birth of the KYVE project, its Web3 "data pool" solution has allowed data providers to standardize and validate data streams from blockchain networks, then back them up on permanent storage solutions such as Arweave. Its architecture thus already guaranteed the immutability of these resources over time. Today, the brand-new Data Pipeline adds scalability and availability to a protocol that is so necessary for the Web3 revolution we are all waiting for. To complete the picture, the team is waiting for you!
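Because validated bundles end up on permanent storage, they remain retrievable at any time. As a simple illustration, the snippet below fetches an archived object from the public Arweave gateway; the transaction ID is a placeholder standing in for a real KYVE bundle.

```python
# Fetch an archived object back from permanent storage via the Arweave gateway.
# The transaction ID below is a placeholder, not a real bundle.
import requests

ARWEAVE_GATEWAY = "https://arweave.net"
tx_id = "<arweave-transaction-id-of-a-kyve-bundle>"  # placeholder

response = requests.get(f"{ARWEAVE_GATEWAY}/{tx_id}", timeout=60)
response.raise_for_status()
bundle_bytes = response.content  # the archived data, as stored permanently
print(f"Fetched {len(bundle_bytes)} bytes from Arweave")
```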
To not miss any of the latest developments, join the KYVE community on Twitter, Discord and Telegram.