As mentioned in data flow, in a wolkenkit application you typically have a client with a task-based UI. This may be a static website, a mobile application, or anything else. Every time the user performs a task, the client sends one or more commands to wolkenkit.
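Conceptually, a command is a named message addressed to an aggregate within a context. The sketch below illustrates what such a message might contain; the field layout, the `buildCommand` helper, and the `communication`/`message`/`send` names are illustrative assumptions, since the actual wire format is handled by the wolkenkit client SDK:

```javascript
// Illustrative only: assemble a command message addressed to a
// hypothetical 'send' command of a 'message' aggregate that lives
// in a 'communication' context.
function buildCommand (contextName, aggregateName, name, data) {
  return {
    context: { name: contextName },
    aggregate: { name: aggregateName },
    name,
    data
  };
}

const command = buildCommand('communication', 'message', 'send', {
  text: 'Hello wolkenkit!'
});
```

The important point is the task-based shape: the client does not issue generic updates, it expresses the user's intent (`send`) together with the data that intent needs.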
In wolkenkit, there are multiple servers that make up an application. The public-facing server is called the broker, since it acts as the gateway to a wolkenkit application and handles commands, events, and queries. This reflects CQRS, which separates writing (sending commands) from reading (subscribing to events and querying read models):
When the broker receives a command, it forwards it to a message queue, the so-called command bus. Once the command has been enqueued, the broker acknowledges to the client that the command was received and accepted. Then another server, which is responsible for the write model, fetches the previously accepted commands from the command bus and handles them. This server is called the core.
To handle a command, the core replays the required aggregate from the event store and hands over the command, together with the replayed aggregate, to the command handler that you provided as part of your application's write model, which was modeled using domain-driven design (DDD). As a result, the command handler publishes one or more events.
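A write model ties together initial state, command handlers, and event handlers. The following is a minimal sketch, loosely modeled on a chat-style domain; the `send`/`sent` names and the exact handler signatures are illustrative assumptions, not guaranteed to match any particular wolkenkit version:

```javascript
// A 'message' aggregate: the command handler receives the replayed
// aggregate and the command, and publishes events as its result.
const initialState = {
  text: undefined
};

const commands = {
  // Assumed handler signature: (aggregate, command).
  send (message, command) {
    message.events.publish('sent', {
      text: command.data.text
    });
  }
};

const events = {
  // Replaying 'sent' events rebuilds the aggregate's state.
  sent (message, event) {
    message.setState({
      text: event.data.text
    });
  }
};

module.exports = { initialState, commands, events };
```

Note that the command handler never mutates state directly; it only publishes events, and the corresponding event handler applies them, which is what makes replaying from the event store possible.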
These events are written to the event store using event sourcing and then sent back to the broker via another message queue, the so-called event bus. The broker updates any lists stored inside the list store using the projections you defined in your application's read model, and finally pushes the events to the client:
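A projection maps incoming events onto operations on a list. The sketch below shows the idea; the `messages` list, its fields, and the fully qualified event name `communication.message.sent` are illustrative assumptions:

```javascript
// Fields describe the shape of the items in the list.
const fields = {
  text: { initialState: '' }
};

// Each projection handles one event type and updates the list,
// so reads never have to replay the event store.
const projections = {
  'communication.message.sent' (messages, event) {
    messages.add({
      text: event.data.text
    });
  }
};

module.exports = { fields, projections };
```

Because projections are plain functions of events, the broker can keep the list store up to date incrementally as events arrive on the event bus.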
Complementary to the basic processing of commands and events described so far, there are also additional services for advanced use cases, e.g. running workflows, storing large files, and authenticating users.
To run workflows, the core sends published events not only to the broker, but also to another server called flows. For that, it uses a dedicated message queue called the flow bus. Whenever the flow server receives an event, it runs reactions for that event based on your application's stateless and stateful flows.
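A stateless flow is essentially a map from event names to reactions. The sketch below is a hypothetical shape: the `when` export, the `mark.asDone()` callback, and the event name are assumptions for illustration and may differ between wolkenkit versions:

```javascript
// React to a 'sent' event, e.g. to trigger a side effect such as
// sending a notification, then mark the reaction as done.
const when = {
  'communication.message.sent' (event, mark) {
    console.log(`Message sent: ${event.data.text}`);
    mark.asDone();
  }
};

module.exports = { when };
```

Unlike projections, reactions are about side effects and follow-up commands, not about maintaining lists; that is why they run on a separate server fed by the flow bus.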
For storing large files, wolkenkit provides a file storage service called depot. It provides its own API and client SDK to store and retrieve files. Hence, it is independent of the broker and the core, and the client must address depot separately.
To authenticate users, wolkenkit uses OpenID Connect, which means that it relies on an external identity provider, such as Auth0 or Keycloak.
All the aforementioned application servers (broker, core and flows) and infrastructure services (event store, list store, depot and the various message queues) run as individual processes, which makes any wolkenkit application a distributed system by default:
Running on Docker
For each process of a wolkenkit application there is a Docker base image. When starting an application using the CLI, these base images are taken as the foundation to build custom Docker images specific to your application. These application-specific images then contain your application's code and configuration.
Finally, the images are run as containers that are connected to each other via a virtual network. Since the application containers (broker, core and flows) run your application's code, they may need a few npm modules. To avoid having to install these modules into every single application container, they are installed only once, into a shared Docker container named node-modules, which the other containers then use as a volume.
Finding the code
The code for wolkenkit is located in repositories on GitHub. On Docker Hub, there is an automated build for each repository that builds the respective Docker image:
| Part | Repository | Docker image |
| ---- | ---------- | ------------ |
| Depot client SDK | wolkenkit-depot-client-js | n/a |
| Shared npm modules | wolkenkit-box-node-modules | wolkenkit-node-modules |