In the current blockchain landscape, many business scenarios require real-time access to both on-chain and off-chain data for further processing. Examples include SmartMoney transaction monitoring based on data filtering, and real-time TVL (Total Value Locked) trading decisions based on data statistics. These scenarios demand support for rich streaming datasets, and Chainbase provides an open, dataset-based real-time Sync module to address such challenges.
Currently, most real-time message streaming solutions on the market are based primarily on WebSocket or Webhook models, neither of which can guarantee "exactly once" semantics, i.e., that each message is consumed exactly once.
WebSocket can achieve real-time communication, but network instability and the complexity of message transmission may lead to issues such as message misordering, loss, or duplication.
Furthermore, HTTP requests are stateless, so they guarantee neither reliability nor ordering across requests. If errors occur during network transmission, or if the server fails to handle retries and idempotence correctly, requests may be duplicated or lost, again failing to satisfy "exactly once" semantics.
In contrast, Kafka offers the advantage of supporting message replay. As a message queue, Kafka persists messages to disk; thus, while guaranteeing "at least once" delivery, it also allows messages to be replayed on demand, which is the basis for "effectively once" processing.
This is particularly important for streaming applications built on the premise of no data loss. Only systems with message replay capabilities can ensure the reliability and consistency of data processing, meeting "exactly once" semantics. For example, in real-time computation, log collection, and analysis scenarios, if errors occur, processing can be restarted from scratch. This makes Kafka better suited to complex real-time data processing requirements.
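The interplay of replay and idempotence can be illustrated with a toy in-memory model (plain Python, no real broker): a list stands in for the persisted log, the consumer tracks a committed offset, and deduplication by message ID turns at-least-once redelivery into effectively-once processing. All names here are illustrative, not Chainbase or Kafka APIs.

```python
# Toy model: a persisted log + offset tracking + idempotent consumption.
# Shows why a replayable log can achieve "effectively once" processing
# even when a crash forces some messages to be redelivered.

log = [  # stands in for a Kafka partition persisted on disk
    {"id": "tx-1", "value": 10},
    {"id": "tx-2", "value": 20},
    {"id": "tx-3", "value": 30},
]

processed_ids = set()   # idempotence: remember what we already handled
results = []
committed_offset = 0    # last durably committed position in the log

def consume(from_offset, crash_after=None):
    """Read the log from an offset; optionally 'crash' before committing."""
    global committed_offset
    for offset in range(from_offset, len(log)):
        msg = log[offset]
        if msg["id"] not in processed_ids:   # skip duplicates on replay
            processed_ids.add(msg["id"])
            results.append(msg["value"])
        if crash_after is not None and offset == crash_after:
            return  # crash: this message's offset was never committed
        committed_offset = offset + 1

# First run crashes after handling offset 1 without committing it.
consume(committed_offset, crash_after=1)
# Restart replays from the last committed offset; tx-2 is redelivered,
# but the idempotence check prevents double processing.
consume(committed_offset)

print(results)  # [10, 20, 30] -- each message processed exactly once
```

A WebSocket or webhook pipeline has no equivalent of the `log` list to replay from, which is why a dropped message there is simply gone.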
Kafka is a distributed stream processing platform designed for high-capacity, low-latency data transfer and persistent storage. It employs a publish-subscribe message queue model and relies on distributed log storage to process messages. Topics serve as destinations for message publication and can be divided into multiple partitions distributed across different storage nodes. Producers are responsible for publishing messages to specific topics, while consumers fetch messages from Kafka based on subscribed topics and partitions. Kafka uses offsets to track consumption state and ensure message order, while also utilizing distributed log storage for message persistence.
In our managed Kafka service, to guarantee end-to-end message ordering in blockchain scenarios, we restrict each topic to a single partition.
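A minimal consumer for such a single-partition topic might look like the sketch below, using the open-source kafka-python client. The topic name, broker address, and consumer group are placeholders rather than actual Chainbase endpoints; the JSON-decoding helper is split out so it works (and can be tested) without a live broker.

```python
import json

def decode_message(raw: bytes) -> dict:
    """Decode one Kafka message value (UTF-8 JSON) into a dict."""
    return json.loads(raw.decode("utf-8"))

def consume_transactions(topic: str, servers: str) -> None:
    """Read messages from a single-partition topic in order.
    Requires `pip install kafka-python` and a reachable broker.
    """
    from kafka import KafkaConsumer  # imported here so the helper above
                                     # stays usable without the dependency
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=servers,
        group_id="demo-group",            # placeholder consumer group
        auto_offset_reset="earliest",     # start from the oldest message
        enable_auto_commit=False,         # commit only after processing
    )
    for record in consumer:
        tx = decode_message(record.value)
        print(record.offset, tx.get("hash"))
        consumer.commit()                 # mark the message as processed

# The helper on a sample payload:
sample = b'{"hash": "0xabc", "value": "1000000000000000000"}'
print(decode_message(sample)["hash"])  # 0xabc
```

Because the topic has exactly one partition, the offsets seen in the loop increase monotonically, giving strict end-to-end ordering.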
Blockchain Data ETL to Kafka
Currently, we support data acquisition from multiple chains (Ethereum, Binance Smart Chain, Polygon, etc.) and data types (logs, transactions, transfers, etc.) via Kafka topics.
We have built our own RPC nodes for multiple chains, providing stable and reliable raw data with field formatting consistent with leading ETL tools; you can refer to our documentation for specific schema details. Blockchain data is transparent and highly composable. We provide raw data types such as blocks, logs, transactions, traces, and contracts, which users can freely combine based on their own business scenarios.

To better meet the needs of blockchain transaction monitoring and other scenarios, we have enriched and aggregated data such as Transfers to provide a more comprehensive and insightful view of transaction behavior on the chain. In the future, we will also provide more enriched data types to help customers obtain real-time on-chain data faster and more accurately.
Kafka Operation and Maintenance
Our professional operations team has years of experience in massive data operations.
With our managed Kafka service, customers can get started immediately thanks to simplified management, automated deployment, and comprehensive monitoring and maintenance. Specifically:
- Simplified Management: Managed Kafka relieves customers of management burdens. It offers automated cluster configuration, deployment, monitoring, and maintenance, reducing the operational team's workload.
- High Reliability: Managed Kafka provides a high availability and redundancy mechanism, ensuring message persistence and reliability. With backup and fault recovery strategies, it can effectively address hardware failures or other unexpected situations.
- Elastic Scalability: Managed Kafka dynamically adjusts capacity and throughput according to demand, providing elasticity and scalability. Through a simple control interface or API, you can increase or decrease the size of your Kafka cluster as the workload changes.
- Security Hardening: Managed Kafka offers data encryption, authentication, and access control features. This ensures data confidentiality and integrity, helping organizations meet compliance requirements.
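For instance, a client connecting to such a hardened cluster would typically authenticate over TLS with SASL credentials. The sketch below shows a confluent-kafka-python style configuration dict; the broker address, username, and password are placeholders, and the exact SASL mechanism depends on how the cluster is provisioned:

```python
# Illustrative client configuration for a SASL/TLS-secured Kafka cluster
# (keys follow the librdkafka naming used by confluent-kafka-python).
# The endpoint and credentials below are placeholders, not real values.
secure_config = {
    "bootstrap.servers": "broker.example.com:9093",  # TLS port (placeholder)
    "security.protocol": "SASL_SSL",   # encrypt traffic + authenticate
    "sasl.mechanisms": "PLAIN",        # or SCRAM-SHA-256/512, per cluster
    "sasl.username": "YOUR_API_KEY",
    "sasl.password": "YOUR_API_SECRET",
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
}

# Passed to a consumer like so (requires `pip install confluent-kafka`):
# from confluent_kafka import Consumer
# consumer = Consumer(secure_config)
print(secure_config["security.protocol"])  # SASL_SSL
```

With `SASL_SSL`, both the credential exchange and the message payloads travel encrypted, which is what makes per-customer access control meaningful.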
End-to-End Data Consistency
Delivery mechanisms like WebSocket connections or webhook callbacks cannot by themselves guarantee "end-to-end data consistency." For example, when a client sends a data packet to the server, the packet traverses multiple routers and network devices; at any one of them, congestion or technical issues may cause the packet to be lost. Lost packets never reach the server, resulting in data loss. The server may wait for the packets within a specific time interval and, if they are not received in time, consider the data lost. In contrast, Kafka gives our servers a realistic path to "end-to-end data consistency."
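On the producing side, those end-to-end guarantees start with how messages are written. A producer configured for durability waits for all in-sync replicas to acknowledge each write and enables idempotence so that broker-side retries cannot introduce duplicates. A sketch of such a configuration, using librdkafka-style keys and a placeholder broker address:

```python
# Illustrative producer settings aimed at no-loss, no-duplicate writes
# (librdkafka-style keys as used by confluent-kafka-python).
reliable_producer_config = {
    "bootstrap.servers": "broker.example.com:9092",  # placeholder address
    "acks": "all",                # wait for all in-sync replicas to confirm
    "enable.idempotence": True,   # retries cannot create duplicate messages
    "retries": 2147483647,        # keep retrying transient failures
    "max.in.flight.requests.per.connection": 5,  # ordering preserved
                                                 # when idempotence is on
}

print(reliable_producer_config["acks"])  # all
```

With these settings, a network hiccup between producer and broker triggers a retry instead of silent loss, and idempotence ensures the retry does not double-write.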
Integrating Kafka into ETL (Extract, Transform, Load) tools, whether open-source or commercial, is a common practice. Kafka, as a high-performance, scalable distributed message queue system, provides reliable data transfer and persistent storage. Integrating Kafka into ETL offers several advantages.
Firstly, Kafka, as a reliable data pipeline, can be used as a data source or destination in the ETL process. Kafka's message queue model ensures data sequencing and reliability. Secondly, Kafka's high throughput and low latency characteristics enable ETL to process and transfer large volumes of data more quickly. Moreover, Kafka can be integrated with other components and tools, such as real-time processing engines (like Apache Flink and Apache Spark Streaming) and open-source ETL tools like Airbyte, and streaming databases like RisingWave. Through Kafka integration, ETL tools can implement real-time data processing, data streaming, and data exchange functions, helping organizations manage and analyze massive data more effectively and support real-time decision-making and insights.
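As a minimal illustration of Kafka inside an ETL step: extract transfer records from an upstream topic, transform them by filtering for large amounts, and load the survivors onto a downstream topic. In the sketch below the transform is a pure function so it runs without a broker; the topic names, field names, and threshold are invented for the example.

```python
# Minimal extract-transform-load step over Kafka-style messages.
# The transform is pure Python; the surrounding Kafka I/O is sketched
# in comments since it needs a live broker (topic names are invented).
import json
from typing import Optional

WHALE_THRESHOLD = 10**21  # 1,000 tokens at 18 decimals (illustrative)

def transform(raw: bytes) -> Optional[bytes]:
    """Keep only transfers at or above the threshold; None drops the message."""
    transfer = json.loads(raw)
    if int(transfer["value"]) >= WHALE_THRESHOLD:
        return json.dumps(transfer).encode("utf-8")
    return None

# With a broker, the extract/load loop would look roughly like this:
#   a consumer subscribes to "erc20-transfers", a producer writes the
#   survivors to "whale-transfers", and offsets are committed only after
#   the produce succeeds.

batch = [
    b'{"from": "0xaaa", "to": "0xbbb", "value": "5000000000000000000"}',
    b'{"from": "0xccc", "to": "0xddd", "value": "2000000000000000000000"}',
]
loaded = [out for raw in batch if (out := transform(raw)) is not None]
print(len(loaded))  # 1 -- only the second transfer passes the filter
```

The same shape scales up directly: swap the pure filter for a Flink or Spark Streaming job and the lists for real topics, and the pipeline becomes a production ETL flow.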
Combining open datasets with Kafka's real-time subscription and synchronization provides a variety of data sources, analysis models, and real-time analysis scenarios for blockchain data analysis systems, while also enabling accurate data verification and promoting data sharing and openness. This combination provides a more comprehensive and accurate view of blockchain data, drives technological advancement, and accelerates problem-solving.
Chainbase is an open Web3 data infrastructure for accessing, organizing, and analyzing on-chain data at scale. It helps people better utilize on-chain data by combining rich datasets with open computing technologies in one data platform. Chainbase's ultimate goal is to increase data freedom in the crypto world.
More than 5,000 developers actively use our platform as their data backend, sending over 200Mn data requests per day, and integrate Chainbase into their main workflows. Additionally, we work with ~10 top-tier public chains as a first-tier validator, managing over US$500Mn in tokens non-custodially as a validator provider. Find out more at: chainbase.com
Want to learn more about Chainbase?
Sign up for a free account, and check out our documentation.