How To Solve Edge Container Orchestration - An Introduction to Bacalhau's Architecture
Learn how Bacalhau works under the hood (7 min)
Using data in a production environment can be quite a challenge. Putting it to work typically requires a painful process of orchestrating container deployments. This is where our open-source project Bacalhau comes in handy.
Bacalhau helps you manage your container orchestration, allowing you to bring your compute to where your data resides. In the following, we explain how this is accomplished and what architecture is used in the background.
Bacalhau is a peer-to-peer network of interconnected nodes that enables decentralized communication between computers. The network features two distinct types of nodes: Requester and Compute nodes. Together they constitute the peer-to-peer network, employing a gossiping mechanism (dotted lines) to establish connections and exchange vital information.
Requester Node: This node takes on several critical responsibilities: managing user requests, assessing available compute nodes, routing job requests to suitable nodes, and supervising the entire job lifecycle.
Compute Node: The compute node, on the other hand, functions as the central workhorse within the Bacalhau ecosystem. It executes the assigned jobs and delivers the corresponding results. Once the results are produced, it publishes them to a specified remote destination (e.g. S3) and notifies the requester of the job completion. Different compute nodes can be harnessed for various job types, based on their capabilities and resources, making them suitable for distinct job requirements. For example, only a few machines may have GPUs, which you'd sub-select for an ML training job.
Above you can see a rough schematic of the architecture behind Bacalhau, with nodes highlighted in blue on the right side.
Users engage with the Bacalhau network through the Bacalhau Command-Line Interface (CLI). The CLI forwards requests to a designated Requester Node within the network, sending them as JSON over HTTP.
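As a sketch, submitting a job to a particular requester node from the CLI might look like the following. The hostname here is a placeholder; the global --api-host and --api-port flags select which requester node receives the HTTP request:

```shell
# Point the CLI at a specific requester node (placeholder address);
# the CLI serializes the job spec to JSON and sends it over HTTP.
bacalhau --api-host requester.example.com --api-port 1234 \
  docker run ubuntu echo "hello from Bacalhau"
```

If the flags are omitted, the CLI falls back to its configured defaults.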
Let’s dig a bit deeper into the interfaces provided by Bacalhau and see how different parts interact in the network.
Transport: The transport interface is responsible for sending messages about jobs being created, accepted and executed to other compute nodes. It also manages the identity of individual Bacalhau nodes to ensure that messages are only delivered to authorized nodes, which improves network security. To achieve this, the transport interface uses a protocol called bprotocol, which is a point-to-point scheduling protocol that runs over libp2p.
Executor: The executor, a key component of the Bacalhau network, manages job execution and ensures jobs use local storage. It integrates input and output storage volumes into the job during its run. Once the job finishes, the executor combines stdout, stderr, and named output volumes into a results folder, which is then uploaded to a remote location.
Storage Provider: Different storage providers in the network can present the CID (Content Identifier) to the executor in various ways. For example, a POSIX storage provider might represent the CID as a POSIX filesystem, while a library storage provider could stream the CID's contents through a library call. Given this flexibility, storage providers and Executor implementations are loosely coupled, allowing the POSIX and library storage providers to be used across multiple executors.
Publisher: The publisher uploads the job's final results to a remote access point like S3 or IPFS for client access. Below, you can see an example from our earlier network illustration.
Job Submission: When jobs are sent to the requester node, it picks several compute nodes fit for the job and prompts them to execute it. Each job has a concurrency setting, indicating how many nodes should run it simultaneously. The job may also specify volumes, such as certain CIDs. A compute node can bid on a job if it has the required data locally. This means that a node operator can define granular rules about the jobs they are willing to run.
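For illustration, a submission that asks for several simultaneous executions might look like this sketch; the CID and the concurrency value are placeholders for your own job:

```shell
# Ask for the job to run on 3 compute nodes at once; nodes that
# already hold the referenced CID locally are natural candidates to bid.
bacalhau docker run \
  --concurrency 3 \
  -i ipfs://QmExampleCid:/inputs \
  ubuntu bash -c 'wc -l /inputs/*'
```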
Job Acceptance: When bids from compute nodes arrive back at the requester node, it decides which bids to accept or reject. This decision is based on the previous reputation of each compute node and many other factors, such as locality, hardware resources, and cost. The requester node can also use HTTP or exec hooks to decide whether to accept a bid.
Job Execution: Once accepted bids are received by compute nodes, they will execute the job using the executor designed for that job and the storage providers that the executor has mapped. For example, a job could use the WASM executor, or a library storage volume. This would result in a POSIX mount of the storage into a running container or a WASM-style syscall to stream the storage bytes into the WASM runtime. Each executor handles storage in a different way, even though each job references the same storage volumes.
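As a sketch of the WASM path, Bacalhau ships a dedicated subcommand; the module filename below is a placeholder, and _start is the conventional WASM entry point:

```shell
# Run a WASM module on the network; the WASM executor streams mounted
# storage to the module rather than POSIX-mounting it into a container.
bacalhau wasm run main.wasm _start \
  -i ipfs://QmExampleCid:/inputs
```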
Publishing: When the results are ready, the publisher uploads the raw results folder that is currently residing on the compute node. The publisher interface mainly consists of a single function responsible for uploading the local results folder to a designated location and returning a storage reference to where it has been uploaded.
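Once the publisher has uploaded the results, a client can follow the returned storage reference to fetch them. A typical CLI flow might look like this sketch (the job ID is a placeholder):

```shell
# Inspect the job's state, then download the published results folder
# (stdout, stderr, and named output volumes) to the local machine.
bacalhau describe j-1234abcd
bacalhau get j-1234abcd --output-dir ./results
```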
Networking: It is possible to run Bacalhau completely disconnected from the main Bacalhau network. This allows you to run private workloads without risking running on public nodes or inadvertently sharing your data outside of your organization. The isolated network will not connect to any public network. Read more on networking.
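A minimal sketch of starting such an isolated node is below; the flags are illustrative, so consult the networking documentation for the full private-cluster setup:

```shell
# Start a node acting as both requester and compute, without peering
# to the public Bacalhau network.
bacalhau serve --node-type requester,compute --peer none
```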
Input/Output Volumes: A job includes the concept of input/output volumes, and the Docker/WASM executor implementations support it. This means you can specify CIDs, URLs, or S3 objects as input paths, and also write results to an output volume. In the example below, the input volume flag -i s3://mybucket/logs-2023-04* mounts all S3 objects in the bucket with the logs-2023-04 prefix within the Docker container at the location /input (root). Output volumes are mounted to the Docker container at the specified location: any content written to /output_folder will be made available within the "apples" folder in the job results CID. Once the job has run on the executor, the contents of stderr will be added to any named output volumes the job has used. This can be seen in the following example.
bacalhau docker run \
  -i s3://mybucket/logs-2023-04*:/input \
  -o apples:/output_folder \
  ubuntu \
  bash -c 'ls /input > /output_folder/file.txt'
After this architectural overview of Bacalhau, you can now submit your first job to Bacalhau, onboard existing Docker or WASM workloads to work with Bacalhau, or set up your own private Bacalhau network.
Stay tuned as we continue to enhance and expand Bacalhau's capabilities. We are committed to delivering groundbreaking updates and improvements. We're also looking for help in several areas; if you're interested in contributing, you can always reach out to us via Slack or Email.
Thanks for reading our blog post! Subscribe for free to receive new updates.