Infrastructure
These are the execution environments for the domain logic described previously. Hiveio is designed to manage infrastructure as code through a set of base Docker images. These images are meant to be extended with your domain logic and any additional code dependencies. Depending on the type of container, a few opinions are baked in as well.
Specialized Containers
Base images for specific service types provide the boilerplate service definition for your application and ensure a common interface is maintained between the service and your actor(s). For CQRS/ES architectures, some opinions on storage solutions have also been made.
- Base
  - The least opinionated of them all. It wraps your actor(s) to provide a straightforward interface and standardizes the network payload before calling your actor to perform its work (a minimal actor sketch follows this list).
- Producer
  - Supports the creation of unordered messages in a CQRS/ES implementation. Validation here is only superficial, and messages are queued in batches by default for increased performance.
- Consumer
  - Defines message consumption for a CQRS/ES implementation. It is highly recommended that these services be isolated to message consumption only, though they can also support queries against the data.
- Stream Processor
  - Supports a variety of needs in a CQRS/ES implementation. Domain validation can be achieved through the transaction cache dependency (Redis), and CQRS/ES Process Managers and Sagas can be implemented here as well.
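To make the common interface concrete, here is a minimal sketch of the kind of actor these containers wrap. The names (`ActorMessage`, `OrderActor`, `perform`) are illustrative assumptions rather than the framework's actual API; the point is that each container normalizes the incoming network payload and hands it to a single entry point like this.

```typescript
// Hypothetical sketch of an actor the specialized containers would wrap.
// Names and shapes are illustrative, not the framework's actual API.

interface ActorMessage {
  type: string                      // e.g. 'CreateOrder'
  payload: Record<string, unknown>
}

interface ActorResult {
  model: Record<string, unknown>    // updated domain state
  event?: ActorMessage              // event to publish (Producer / Stream Processor)
}

class OrderActor {
  // The container standardizes the network payload into an ActorMessage
  // before delegating here, regardless of which service type is in use.
  async perform (message: ActorMessage, model: Record<string, unknown>): Promise<ActorResult> {
    switch (message.type) {
      case 'CreateOrder':
        return {
          model: { ...model, status: 'created', items: message.payload.items },
          event: { type: 'OrderCreated', payload: message.payload }
        }
      default:
        throw new Error(`unrecognized message type: ${message.type}`)
    }
  }
}

export { OrderActor, ActorMessage, ActorResult }
```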
Unified Transaction Log
The unified transaction log is the centralized storage solution at the foundation of the CQRS/ES pattern. Think of it as the backbone of your central nervous system: the different microservice types described above are the limbs and organs that connect to it. The transaction log's job is to handle multiple inputs and outputs to each of these microservice types while providing the persistence layer. Events are stored here once they have been validated by their producers and are read from here by their consumers.
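As a rough illustration of that flow, the sketch below appends a validated event to a Kafka topic and replays it from a separate consumer group. It uses the kafkajs client, and the broker address, topic name, and event shape are assumptions made for the example, not settings taken from the base images.

```typescript
// Illustrative only: a validated event is appended to the log by a producer
// service and replayed by a consumer service. Broker and topic are assumptions.
import { Kafka } from 'kafkajs'

const kafka = new Kafka({ clientId: 'example-service', brokers: ['localhost:9092'] })

async function appendEvent (): Promise<void> {
  const producer = kafka.producer()
  await producer.connect()
  await producer.send({
    topic: 'order-events',
    messages: [{ key: 'order-1', value: JSON.stringify({ type: 'OrderCreated', items: ['book'] }) }]
  })
  await producer.disconnect()
}

async function replayEvents (): Promise<void> {
  const consumer = kafka.consumer({ groupId: 'order-view-builder' })
  await consumer.connect()
  await consumer.subscribe({ topic: 'order-events', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ message }) => {
      // A Consumer service would project this event into its query model here.
      console.log(JSON.parse(message.value?.toString() ?? '{}'))
    }
  })
}

appendEvent().then(replayEvents).catch(console.error)
```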
Our last opinions are made here, with Kafka as the unified transaction log and Redis as the cache. The Stream Processor implements a solution based on the eXtended Architecture (XA) distributed transaction model, using snapshot isolation and two-phase commit techniques to provide domain validation and event ordering guarantees. The Redis implementation leverages the Redlock algorithm as the distributed locking mechanism that supports these techniques.
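The sketch below illustrates only the locking half of that approach, in a simplified single-node form: acquire a short-lived lock on an aggregate's stream before validating and appending, then release it with a compare-and-delete. Redlock proper performs the same acquisition against a majority of independent Redis nodes; the ioredis client, key naming, and TTL here are assumptions for illustration, not the Stream Processor's actual implementation.

```typescript
// Simplified single-node illustration of the locking idea behind Redlock:
// hold a short-lived lock on an aggregate's stream while validating and
// appending, so events for that stream stay ordered. Redlock itself acquires
// the same lock on a majority of independent Redis nodes.
import Redis from 'ioredis'
import { randomUUID } from 'crypto'

const redis = new Redis()  // assumes Redis on localhost:6379

async function withStreamLock (streamId: string, work: () => Promise<void>): Promise<void> {
  const key = `lock:${streamId}`   // hypothetical key naming
  const token = randomUUID()       // token so we only ever release our own lock

  // Acquire: SET NX with a TTL, so a crashed holder's lock eventually expires.
  const acquired = await redis.set(key, token, 'PX', 5000, 'NX')
  if (acquired !== 'OK') throw new Error(`stream ${streamId} is locked, retry later`)

  try {
    // Validate against the cached snapshot and append the event while locked.
    await work()
  } finally {
    // Release: delete the key only if we still own it (compare-and-delete).
    const releaseScript = `
      if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
      end
      return 0`
    await redis.eval(releaseScript, 1, key, token)
  }
}

export { withStreamLock }
```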