How To Solve Edge Container Orchestration - An Introduction Into Bacalhau Architecture

Learn how Bacalhau is working in the background (7 min)

Michael Hoepler
Oct 19, 2023

Introduction

Using data in a production environment can be quite a challenge. Putting it to work typically requires a painful container-orchestration process. This is where our open-source project Bacalhau comes in handy.

Bacalhau helps you manage your container orchestration, allowing you to bring your compute to where your data resides. In this post, we explain how this is accomplished and which architecture makes it possible.

Nodes

Bacalhau is a peer-to-peer network of interconnected nodes that enables decentralized communication among computers. Requester and Compute nodes collaboratively constitute this network, employing a gossiping mechanism to establish connections and exchange vital information. The network features two distinct types of nodes.

  1. Requester Node: This node takes on various critical responsibilities, including managing user requests, assessing available compute nodes, routing job requests to suitable nodes, and supervising the entire job lifecycle.

  2. Compute Node: The compute node, on the other hand, functions as the central workhorse within the Bacalhau ecosystem. It executes the assigned jobs and delivers the corresponding results. Once the results are produced, it publishes them to a specified remote destination (e.g. S3) and notifies the requester of the job completion. Different compute nodes can be harnessed for various job types, based on their capabilities and resources, making them suitable for distinct job requirements. For example, only a few machines may have GPUs, which you’d sub-select for an ML training job, as shown in the sketch below.
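
For instance, requesting a GPU at submission time restricts bidding to compute nodes that actually have one available. A minimal sketch of such a sub-selection, assuming the v1.x CLI (the CUDA image and nvidia-smi command are only illustrative):

# Request one GPU; only compute nodes advertising a GPU will bid.
bacalhau docker run \
  --gpu 1 \
  nvidia/cuda:11.0.3-base-ubuntu20.04 \
  nvidia-smi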

[Figure: a rough schematic of the architecture behind Bacalhau, with the nodes highlighted in blue on the right side.]

Users can engage with the Bacalhau network through the Bacalhau Command-Line Interface (CLI). The CLI forwards requests to a designated Requester Node within the network, sending them as JSON over HTTP.
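
For example, a submission as simple as the following is turned by the CLI into a JSON job specification and POSTed to the requester node's HTTP API (the exact endpoint path depends on your Bacalhau version):

# The CLI builds a JSON job spec from these arguments and
# sends it to the requester node over HTTP.
bacalhau docker run ubuntu echo "Hello, Bacalhau"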

Interfaces

Let’s dig a bit deeper into the interfaces provided by Bacalhau and see how different parts interact in the network.

  1. Transport: The transport interface is responsible for sending messages about jobs being created, accepted and executed to other compute nodes. It also manages the identity of individual Bacalhau nodes to ensure that messages are only delivered to authorized nodes, which improves network security. To achieve this, the transport interface uses a protocol called bprotocol, which is a point-to-point scheduling protocol that runs over libp2p.

  2. Executor: The executor, a key component of the Bacalhau network, manages job execution and ensures jobs use local storage. It integrates input and output storage volumes into the job during its run. Once the job finishes, the executor combines stdout, stderr, and named output volumes into a results folder, which is then uploaded to a remote location.

  3. Storage Provider: Different storage providers in the network can present the CID (Content Identifier) to the executor in various ways. For example, a POSIX storage provider might represent the CID as a POSIX filesystem, while a library storage provider could stream the CID's contents through a library call. Given this flexibility, storage providers and executor implementations are loosely coupled, allowing the POSIX and library storage providers to be used across multiple executors.

  4. Publisher: The publisher uploads the job's final results to a remote access point like S3 or IPFS for client access. A sketch of submitting a job and retrieving its published results follows this list.
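
Assuming the v1.x CLI, retrieving what the publisher uploaded can look roughly like this (--id-only, --wait, and --output-dir are v1.x flags; check bacalhau --help on your version):

# Submit a job, wait for it to finish, and capture just its ID.
JOB_ID=$(bacalhau docker run --id-only --wait ubuntu echo "hello")

# Download the published results (stdout, stderr, and any
# output volumes) into ./job-results.
bacalhau get "$JOB_ID" --output-dir ./job-results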

Job Lifecycle

  1. Job Submission: When jobs are sent to the requester node, it picks several compute nodes fit for the job and prompts them to execute it. Each job has a concurrency setting, indicating how many nodes should run it simultaneously. The job may also specify volumes, such as certain CIDs. A compute node can bid on a job if it has the required data locally. This means that a node operator can define granular rules about the jobs they are willing to run.

  2. Job Acceptance: When bids from compute nodes arrive back at the requester node, it decides which bids to accept or reject. This decision is based on the previous reputation of each compute node and many other factors, such as locality, hardware resources, and other costs. Like the compute nodes, the requester node can use HTTP or exec hooks to decide whether it wants to accept a bid.

  3. Job Execution: Once accepted bids are received by compute nodes, they will execute the job using the executor designed for that job and the storage providers that the executor has mapped. For example, a job could use the docker executor, WASM executor, or a library storage volume. This would result in a POSIX mount of the storage into a running container or a WASM-style syscall to stream the storage bytes into the WASM runtime. Each executor will handle storage in a different way, even though each job mentions the same storage volumes.

  4. Publishing: When the results are ready, the publisher uploads the raw results folder that is currently residing on the compute node. The publisher interface mainly consists of a single function responsible for uploading the local results folder to a designated location and returning a storage reference to where it has been uploaded.

  5. Networking: It is possible to run Bacalhau completely disconnected from the main Bacalhau network. This allows you to run private workloads without the risk of running on public nodes or inadvertently sharing your data outside of your organization. The isolated network will not connect to any public network (a minimal setup sketch follows the example at the end of this list). Read more on networking.

  6. Input/Output Volumes: A job includes the concept of input/output volumes, and the Docker/WASM executor implementations support it. This means you can specify CIDs, URLs, or S3 objects as input paths, and also write results to an output volume. In the example below, the input volume flag -i s3://mybucket/logs-2023-04*:/input mounts all S3 objects in the bucket mybucket with the logs-2023-04 prefix within the Docker container at the location /input. Output volumes are mounted to the Docker container at the specified location; here, any content written to /output_folder will be made available within the "apples" folder in the job results CID. Once the job has run on the executor, the contents of stdout and stderr will be added to any named output volumes the job has used. This can be seen in the following example.

bacalhau docker run \
  -i s3://mybucket/logs-2023-04*:/input \
  -o apples:/output_folder \
  ubuntu \
  bash -c 'ls /input > /output_folder/file.txt'
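
And if you want to try the disconnected setup from the Networking step, the first node of a private network can be started roughly like this, assuming the v1.x CLI (--node-type, --peer, and --private-internal-ipfs are version-specific flags; check bacalhau serve --help):

# Start a combined requester+compute node that peers with nobody,
# so it never joins the public Bacalhau network.
bacalhau serve \
  --node-type requester,compute \
  --peer none \
  --private-internal-ipfs

On startup, the command prints the peer address and environment variables that additional nodes and clients need in order to join this private network.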


What’s Next?

After this architectural overview of Bacalhau, you can now submit your first job to Bacalhau, onboard existing Docker or WASM workloads to work with Bacalhau, or set up your own private Bacalhau network.

Stay tuned as we continue to enhance and expand Bacalhau's capabilities. We are committed to delivering groundbreaking updates and improvements. We're looking for help in several areas. If you're interested in helping out, there are many ways to contribute, and you can always reach out to us via Slack or Email.

The Bacalhau project is available today as Open Source Software. The public GitHub repo can be found here. If you like the project, please give us a star ⭐

If you need professional support from experts, you can always reach out to us via Slack or Email.


⭐️ GitHub

Slack

Website

Thanks for reading our blog post! Subscribe for free to receive new updates.
