Subliminal is a versatile task management system that enables the storage, retrieval, and execution of task-type requests and their execution records across a robust microservice architecture built on Google Cloud Services. Developed primarily as a learning endeavour, this project showcases a number of advanced technologies, including:

- GCP and Docker Fundamentals
- Rust Programming Paradigms and Design Structures
- Microservice System Concepts
- DevOps Workflow Processes and Deployment Strategies
At its core, Subliminal functions as a normal CRUD (Create, Read, Update, Delete) application, designed to facilitate various operations on `Task` objects stored within a database. Additionally, it provides the capability to dispatch these tasks to one or more executor services. These executors, whether hosted in the cloud or locally, execute tasks while asynchronously updating the datastore to reflect execution progress and results.
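The lifecycle described above can be sketched as a small state machine. Everything here except the `TaskStatus::Passed` variant (which appears in the executor example later in this README) is illustrative, not the crate's actual API:

```rust
// Illustrative sketch of a task execution record as stored in the datastore.
// Variant and field names other than `TaskStatus::Passed` are assumptions.
#[derive(Debug, Clone, PartialEq)]
#[allow(dead_code)]
enum TaskStatus {
    Pending,
    Running,
    Passed,
    Failed,
}

struct ExecutionRecord {
    status: TaskStatus,
}

impl ExecutionRecord {
    // A new record starts out pending until an executor picks it up.
    fn new() -> Self {
        Self { status: TaskStatus::Pending }
    }

    // An executor updates the record asynchronously as it works.
    fn advance(&mut self, next: TaskStatus) {
        self.status = next;
    }
}

fn main() {
    let mut rec = ExecutionRecord::new();
    rec.advance(TaskStatus::Running);
    rec.advance(TaskStatus::Passed);
    println!("final status: {:?}", rec.status);
}
```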
The overall structure of Subliminal is built upon the foundation laid within the `subliminal` crate, accessible via crates.io. This crate houses the key utilities necessary for constructing each service within the broader Subliminal ecosystem and for executing tasks sourced from the Message Queue:

```mermaid
flowchart LR
    A[API] <--> B[Datastore]
    A <--> C[Task Message Queue]
    C --> D[Executor]
    D --> E((Worker))
    D --> F((Worker))
```
The `Datastore`, `API`, and `Message Queue` service implementations are designed to be generic, with no dependencies on user data. As such, they can be (and are) distributed as ready-to-run Docker images on Docker Hub (here and here).
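Because the images are generic, they can also be pulled and run locally for experimentation. A minimal sketch, assuming the environment variable names used in the Cloud Run deployment commands below (port and project values are illustrative):

```shell
# Pull the prebuilt images from Docker Hub
docker pull brokenfulcrum/subliminal-datastore:latest
docker pull brokenfulcrum/subliminal-api:latest

# Run the datastore locally; env var names assumed from the deployment below
docker run -d -p 8080:8080 \
  -e RUST_LOG=debug \
  -e GOOGLE_PROJECT_ID=test-project \
  brokenfulcrum/subliminal-datastore:latest
```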
```
# Authenticate and create a GCP project
gcloud auth login
gcloud projects create test-project --name="Test Project"
gcloud config set project test-project

# Enable the required services
gcloud services enable pubsub.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable firestore.googleapis.com

# Deploy the Datastore and API services from the prebuilt images
gcloud run deploy test-project-datastore --image docker.io/brokenfulcrum/subliminal-datastore:latest --allow-unauthenticated --max-instances 1 --port 8080 --set-env-vars "RUST_LOG=debug" --set-env-vars "GOOGLE_PROJECT_ID=test-project"
gcloud run deploy test-project-api --image docker.io/brokenfulcrum/subliminal-api:latest --allow-unauthenticated --max-instances 1 --port 8080 --set-env-vars "USE_TLS=true" --set-env-vars "RUST_LOG=debug" --set-env-vars "GOOGLE_PROJECT_ID=test-project" --set-env-vars "DATASTORE_ADDRESS=https://test-project-datastore.run.app:443"

# Create the Firestore database
gcloud firestore databases create --location=us-west2

# Create the Pub/Sub topics and subscriptions
gcloud pubsub topics create TaskExecutionUpdates
gcloud pubsub subscriptions create TaskExecutionUpdatesSubscription --topic TaskExecutionUpdates --topic-project test-project --push-endpoint https://test-project-api.run.app/taskexecutionupdate
gcloud pubsub topics create TestTaskExecutionQueue
gcloud pubsub subscriptions create TestTaskExecutionQueueSubscription --topic TestTaskExecutionQueue --topic-project test-project --enable-exactly-once-delivery
```
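Once the commands above complete, the wiring can be sanity-checked with standard gcloud listing commands:

```shell
# Confirm the Cloud Run services, topics, and subscriptions all exist
gcloud run services list
gcloud pubsub topics list
gcloud pubsub subscriptions list
```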
Unlike the `Datastore` and `API` services, the `Executor` service is specific to the user: this is where they define the `Task`s to execute. For that reason, the user must implement their own `Executor` service and "attach" it to their `subliminal` GCP project.

Luckily, this is made very simple through the `ExecutionNodeBuilder` within the `subliminal` crate:
```
// NOTE: the import paths below are assumptions based on the crate's described API
use std::thread;
use serde::Deserialize;
use serde_json::json;
use subliminal::{ExecutionNodeBuilder, Task, TaskResultData, TaskStatus};

// 1. Define a task (Deserialize lets the consumer decode it from the queue)
#[derive(Deserialize)]
pub struct TestStruct {
    pub test: String,
}

// 2. Implement the Task trait on the task struct
impl Task for TestStruct {
    fn execute(&self) -> TaskResultData {
        thread::sleep(std::time::Duration::from_secs(5));
        TaskResultData {
            result_status: TaskStatus::Passed,
            result_data: Some(json!({"test": "Hello World!"})),
        }
    }
}

#[tokio::main] // assuming a tokio async runtime
async fn main() {
    // 3. Define the execution node, mapping the execution channel to the deserialization type
    let node = ExecutionNodeBuilder::new(3, GOOGLE_PROJECT_ID, UPDATES_TOPIC)
        .await
        .with_consumer::<TestStruct>("TestStructExecutionRequests-sub");

    // 4. Start the node
    node.build().await.unwrap();
}
```
This allows the user to create an `Executor` with an associated consumer that monitors messages on the `TestStructExecutionRequests-sub` subscription, deserializing the received data into an instance of `TestStruct` and queueing it for execution within the internal `Dispatcher`.