Send new DeSo transactions over AMQP to a message broker (aka a firehose) #1443
This PR adds a config string, "AMQP_PUSH_DEST" (env file) / "amqp_push_dest" (yml), which enables logic to drop new DeSo blockchain txns onto a message queue as blocks are committed. The node waits until it is fully in sync before it starts pushing and sets a flag indicating that AMQP push is enabled. This basically creates a firehose of DeSo transactions.
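To make the flow concrete, here is a rough sketch of what the publishing path can look like using the github.com/rabbitmq/amqp091-go client. This is illustrative only: the helper name publishCommittedTxn, the example txn JSON, and the hash value are assumptions, not the PR's actual code.

```go
// Sketch of the AMQP publishing path, assuming the
// github.com/rabbitmq/amqp091-go client. Helper names and the txn JSON
// shape are illustrative, not the actual PR code.
package main

import (
	"context"
	"encoding/json"
	"os"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func publishCommittedTxn(ch *amqp.Channel, txnHash string, txn map[string]interface{}) error {
	body, err := json.Marshal(txn)
	if err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Publish to the default exchange with the queue name as the routing key.
	// The TxnHash rides along in the MessageId property (see below).
	return ch.PublishWithContext(ctx, "", "block_txns", false, false, amqp.Publishing{
		ContentType: "application/json",
		MessageId:   txnHash,
		Body:        body,
	})
}

func main() {
	// AMQP_PUSH_DEST, e.g. amqp://user:pass@rabbitmq:5672/%2F
	conn, err := amqp.Dial(os.Getenv("AMQP_PUSH_DEST"))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		panic(err)
	}
	defer ch.Close()

	// Declare the queue; durable vs. transient is up to the operator.
	if _, err := ch.QueueDeclare("block_txns", true, false, false, false, nil); err != nil {
		panic(err)
	}

	// In the node this would run for each user-generated txn in a committed
	// block (skipping types like block/validator rewards), and only once the
	// node is fully synced.
	_ = publishCommittedTxn(ch, "example-txn-hash", map[string]interface{}{"TxnType": "SUBMIT_POST"})
}
```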
Any message broker that supports the AMQP standard should work. I used a RabbitMQ container on the same node to receive the messages. Consumers can subscribe to the broker and get a push of new transactions as they are written to the blockchain and the broker queue.
The advantage of this is that you don't need much storage: no TxIndex and no state consumer or handler are required. Message brokers are very efficient, so there is almost no overhead. Our server has been running for a few days now without any issues or crashes.
It ignores some transaction types, like validator rewards, and focuses on user-generated transactions.
The message data is easy-to-read JSON. The broker can also be on another server. Some message values can look a bit different from what you may be used to, since they come straight from the source.
A queue called block_txns will be created automatically. The queue can be durable or transient; that is up to you. A consumer can ack a received txn so you don't get it again.
Each message contains a data payload with a set of default fields, and the message properties will contain a MessageId, which is the TxnHash (you can look it up in a DeSo explorer).
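On the receiving end, a minimal consumer sketch (again using github.com/rabbitmq/amqp091-go, with an example broker URL as an assumption) that reads from block_txns, prints the TxnHash from MessageId, and acks each message could look like this:

```go
// Minimal consumer sketch for the block_txns queue using
// github.com/rabbitmq/amqp091-go; connection details are illustrative.
package main

import (
	"fmt"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://user:pass@rabbitmq:5672/%2F")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		panic(err)
	}
	defer ch.Close()

	// Match the durability the publisher side declared (durable here).
	if _, err := ch.QueueDeclare("block_txns", true, false, false, false, nil); err != nil {
		panic(err)
	}

	// autoAck=false so we explicitly ack each txn once processed,
	// which means the broker won't redeliver it.
	msgs, err := ch.Consume("block_txns", "", false, false, false, false, nil)
	if err != nil {
		panic(err)
	}

	for msg := range msgs {
		// MessageId carries the TxnHash; the body is the txn JSON.
		fmt.Printf("txn %s: %s\n", msg.MessageId, msg.Body)
		msg.Ack(false)
	}
}
```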
If you need help converting those, let me know.
If you want to use this before core merges it (maybe they won't), you can use the branch to compile a custom core and upload a custom-compiled backend with this core to Docker.
A RabbitMQ example to add to your mainnet YML file:
Add this to your backend ENV settings:
- AMQP_PUSH_DEST=amqp://yyy:xxx@rabbitmq:5672/%2F
Then add the RabbitMQ service below to your mainnet docker yml file.
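As a sketch, a typical RabbitMQ service entry could look like the fragment below; the image tag, container name, credentials, and port mappings are placeholders to adjust, and the credentials must match what you put in AMQP_PUSH_DEST.

```yaml
# Illustrative fragment; merge the rabbitmq entry into your existing services: block.
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    restart: unless-stopped
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    environment:
      # Must match the user/pass in AMQP_PUSH_DEST.
      - RABBITMQ_DEFAULT_USER=yyy
      - RABBITMQ_DEFAULT_PASS=xxx
```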