
Plutomi

Plutomi is a multi-tenant applicant tracking system that streamlines your entire application process with automated workflows at any scale.


Introduction

Plutomi was inspired by my experience at Grubhub, where managing the recruitment, screening, and onboarding of thousands of contractors every month was a significant challenge. Many of these processes were manual, making it difficult to adapt to our constantly changing business needs. I set out to create an open, flexible, and customizable platform that could automate and streamline these workflows.

Plutomi allows you to create applications for anything from jobs to program enrollments, each with customizable stages where you can set up rules and automated workflows based on applicant responses or after a certain time period. Here's an example of what a delivery driver application might look like:

  1. Questionnaire - Collects basic applicant info. Applicants are moved to the waiting list if not completed in 30 days.
  2. Waiting List - Pool of idle applicants.
  3. Document Upload - Collects required documents like licenses and insurance.
  4. Final Review - Manual compliance check.
  5. Ready to Drive - Applicants who have completed all stages and are approved. Triggers account creation in an external system via webhook.
  6. Account Creation Failures - Holds applicants whose account creation failed, allowing for troubleshooting and resolution.
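
As a rough illustration, a stage with a time-based rule like the first one above could be modeled along these lines. This is a minimal sketch in Rust; the type, field, and stage names are hypothetical, not Plutomi's actual schema:

// Hypothetical sketch of a stage with automated rules.
// Names and fields are illustrative, not Plutomi's real data model.
struct Stage {
    name: String,
    rules: Vec<Rule>,
}

enum Rule {
    // Move idle applicants to another stage after a number of days.
    MoveAfterInactivity { days: u32, target_stage: String },
    // Notify an external system via webhook when this stage is reached.
    Webhook { url: String },
}

fn main() {
    let questionnaire = Stage {
        name: "Questionnaire".into(),
        rules: vec![Rule::MoveAfterInactivity {
            days: 30,
            target_stage: "Waiting List".into(),
        }],
    };
    println!("{} has {} rule(s)", questionnaire.name, questionnaire.rules.len());
}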

Architecture

Plutomi follows a modular monolith architecture, featuring a Remix frontend and an Axum API written in Rust. All core services rely on a single primary OLTP database, MySQL, which holds all operational data rather than splitting it between consumers or services. Blob storage is managed by Cloudflare R2, while search and analytics are powered by OpenSearch and ClickHouse. Valkey provides caching and rate limiting.
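
To make the single-database approach concrete, here is a minimal sketch of how a handler in the monolith might share one MySQL connection pool across modules, assuming the axum, sqlx, and tokio crates. The names, routes, and connection string are illustrative, not taken from Plutomi's codebase:

use axum::{extract::State, routing::get, Router};
use sqlx::MySqlPool;

#[derive(Clone)]
struct AppState {
    // One shared OLTP pool backs every module in the monolith.
    db: MySqlPool,
}

async fn health(State(state): State<AppState>) -> &'static str {
    match sqlx::query("SELECT 1").execute(&state.db).await {
        Ok(_) => "ok",
        Err(_) => "database unreachable",
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connection string uses the local docker-compose defaults described
    // under "Running Locally"; the database name is an assumption.
    let db = MySqlPool::connect("mysql://admin:password@localhost:3306/plutomi").await?;
    let app = Router::new()
        .route("/health", get(health))
        .with_state(AppState { db });
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
    axum::serve(listener, app).await?;
    Ok(())
}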

Infrastructure and Third-Party Tools

We run on Kubernetes (K3s) and currently manage our infrastructure with AWS CDK. We use SES to send emails and normalize the resulting email events into our Kafka topics. Optional components include Linkerd for service mesh, Axiom for logging, and Cloudflare for CDN.
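
As a loose sketch of what normalizing an SES event might look like before it reaches Kafka (the payload field names and struct names are assumptions for illustration, not Plutomi's actual mapping):

use serde_json::Value;

// Internal, normalized email event. Names are illustrative only.
#[derive(Debug)]
struct EmailEvent {
    message_id: String,
    kind: String, // e.g. "Send", "Delivery", "Bounce"
}

// Map a raw SES notification into the internal shape before producing it
// to a Kafka topic. The payload fields here are assumptions about SES's
// JSON format, not a description of the real mapping.
fn normalize(raw: &Value) -> Option<EmailEvent> {
    Some(EmailEvent {
        message_id: raw.pointer("/mail/messageId")?.as_str()?.to_string(),
        kind: raw.get("eventType")?.as_str()?.to_string(),
    })
}

fn main() {
    let raw: Value =
        serde_json::from_str(r#"{"eventType":"Bounce","mail":{"messageId":"abc-123"}}"#).unwrap();
    println!("{:?}", normalize(&raw));
}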

Event Streaming Pipeline

Our event streaming pipeline, modeled after Uber's architecture, is powered by Kafka. All event processing is managed by independent consumers written in Rust, which communicate with Kafka rather than directly with each other or the API.

For each entity, we maintain a main Kafka topic along with corresponding retry and dead letter queue (DLQ) topics to handle failures gracefully:

  • Main Topic: The initial destination for all events.

  • Retry Topic: Messages that fail processing in the main topic are rerouted here. Retries implement exponential backoff to prevent overwhelming the system.

  • Dead Letter Queue (DLQ): If a message fails after multiple retries, it's moved to the DLQ for further investigation. Once underlying issues are resolved (e.g., code fixes, service restoration), the messages are reprocessed by moving them back into the retry topic in a controlled manner, ensuring they do not disrupt live traffic.
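
The topic-routing logic could be sketched roughly as follows; the topic suffixes, retry counter, and backoff formula are assumptions for illustration, not Plutomi's actual implementation:

// Decide where a failed message should go next. The attempt counter and
// topic names are illustrative; the real consumers track retries with
// exponential backoff before giving up.
fn next_topic(current: &str, attempts: u32, max_retries: u32) -> String {
    match current {
        t if t.ends_with("-dlq") => t.to_string(), // stays put until an operator re-drives it
        _ if attempts >= max_retries => format!("{}-dlq", base_topic(current)),
        _ => format!("{}-retry", base_topic(current)),
    }
}

fn base_topic(topic: &str) -> &str {
    topic.trim_end_matches("-retry").trim_end_matches("-dlq")
}

// Exponential backoff before re-consuming from the retry topic.
fn backoff(attempts: u32) -> std::time::Duration {
    std::time::Duration::from_secs(2u64.saturating_pow(attempts))
}

fn main() {
    assert_eq!(next_topic("applicants", 1, 5), "applicants-retry");
    assert_eq!(next_topic("applicants-retry", 5, 5), "applicants-dlq");
    println!("wait {:?} before the next attempt", backoff(3));
}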

For more details on the event streaming pipeline and to view the events, refer to EVENT_STREAMING_PIPELINE.md.

Running Locally

Prerequisites:

To set up your datasources, run docker compose up -d to start the services defined in docker-compose.yaml. This will start MySQL, Kafka (with the required topics), and KafkaUI on ports 3306, 9092, and 9000, respectively.

Credentials for all datasources are admin and password.

Then, copy the .env.example file to .env and execute the run.sh script, which runs the MySQL migrations (via the migrator service) and starts the api and web services along with the Kafka consumers.

$ cp .env.example .env
$ ./scripts/run.sh

You can also run any service individually:

$ ./scripts/run.sh --service <web|api|migrator|consumers>

Deploying

Plutomi is designed to be flexible and can be deployed anywhere you can get your hands on a server; we recommend at least three. All Docker images are available on Docker Hub. Check out DEPLOYING.md for more information.

Questions / Troubleshooting

Some common issues are documented in TROUBLESHOOTING.md. If you're wondering why certain architectural decisions were made, check the decisions folder; the answer might be in there.

If you have other questions, feel free to open a discussion or issue, or contact me on X @notjoswayski or via email at jose@plutomi.com.