Setting up a Distributed System with Redundancy using Vite + React, FastAPI, RabbitMQ, and Keycloak
Learn how to build a scalable, redundant distributed system with React, FastAPI, RabbitMQ, and Keycloak. Covers architecture, authentication, communication, and deployment.
Introduction
Modern applications must be resilient, secure, and scalable. This blog post walks you through setting up a redundant distributed system using a modern frontend with Vite + React, a scalable backend with FastAPI, asynchronous communication via RabbitMQ, and Keycloak for centralized authentication and access control.
We’ll address architecture, redundancy, frontend/backend communication, authentication, and deployment.
Table of Contents
- System Architecture Overview
- Frontend Setup with Vite + React
- Backend Setup with FastAPI
- Backend Communication via RabbitMQ
- Authentication and Access Control with Keycloak
- High Availability and Fault Tolerance
- Deployment and Orchestration
- Conclusion
- References
System Architecture Overview
Redundant System Design
To achieve true redundancy, each critical component of the system (frontend, backend, message broker, and auth server) must be deployed in multiple instances. This ensures that if one instance fails, others can take over without service interruption. Load balancers are used to distribute traffic evenly and detect failures.
The system consists of:
- Multiple frontend (React) and backend (FastAPI) instances
- A reverse proxy (e.g., NGINX, Traefik) for load balancing
- RabbitMQ cluster for async messaging
- Keycloak for centralized user authentication
Logical Separation of Concerns
Each part of the system is responsible for a well-defined set of operations. This division promotes scalability and eases debugging. By decoupling UI, logic, and infrastructure, teams can work in parallel and optimize each service independently.
- Frontend (Client): Handles UI, user sessions, and requests to the backend.
- Backend (Application Server): Processes business logic, API, WebSocket endpoints.
- Infrastructure Services: RabbitMQ (messaging), Keycloak (auth), PostgreSQL/Redis (optional).
Frontend Setup with Vite + React
Vite Project Configuration
Instead of creating a new Vite project from scratch, we use a custom frontend project developed specifically for this chat application: AbelGRubio/frontend-chat. It comes pre-configured with React, socket integration, and authentication logic.
To get started, simply clone the repository and install the dependencies:
git clone https://github.com/AbelGRubio/frontend-chat.git
cd frontend-chat
npm install
npm run dev
Redundant Deployment Practices
In distributed deployments, multiple frontend replicas should serve static assets. Tools like NGINX or cloud load balancers can balance traffic between them. Ensure that cache busting and versioning are properly configured so that clients always receive up-to-date files.
- Build the production bundle with vite build
- Serve via multiple NGINX containers behind a load balancer
- Use CDN or shared storage for static file sync
Authenticated Requests
Use axios or fetch with the token from Keycloak:
const token = keycloak.token;
axios.get('/api/data', {
  headers: { Authorization: `Bearer ${token}` }
});
Keycloak Integration
npm install keycloak-js
import Keycloak from 'keycloak-js';
Keep the keycloak-js configuration in a shared keycloak.ts file.
Backend Setup with FastAPI
This is the backend service for the chat application, designed to work alongside the frontend project: AbelGRubio/frontend-chat. Built with FastAPI, this backend handles:
- WebSocket connections for real-time messaging
- User session tracking
- Authentication integration (compatible with Keycloak or token-based auth)
- Scalable architecture suitable for containerization and deployment
It provides all necessary APIs and WebSocket endpoints to support the interactive chat features on the frontend.
Running FastAPI Instances
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
or
gunicorn -k uvicorn.workers.UvicornWorker app.main:app
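If you prefer a programmatic entry point, the same thing can be done from Python. This is only a sketch and assumes the application object lives at app.main:app:
# run.py -- hypothetical entry point, equivalent to the uvicorn CLI invocation above
import uvicorn

if __name__ == "__main__":
    # workers > 1 requires passing the app as an import string, as done here
    uvicorn.run("app.main:app", host="0.0.0.0", port=8000, workers=4)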
REST & WebSocket Endpoints
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.get("/data")
async def get_data():
    return {"status": "ok"}

@app.websocket("/ws")
async def websocket_endpoint(ws: WebSocket):
    await ws.accept()
    await ws.send_text("Connected")
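The WebSocket endpoints and the RabbitMQ manager shown later both broadcast through a connection manager. The repository's own implementation is not reproduced here; the following is a minimal sketch of what such a manager might look like:
# Minimal WebSocket connection manager (sketch, not the repository's exact code)
from fastapi import WebSocket

class ConnectionManager:
    def __init__(self) -> None:
        self.active: list[WebSocket] = []

    async def connect(self, ws: WebSocket) -> None:
        # Accept the socket and track it for later broadcasts
        await ws.accept()
        self.active.append(ws)

    def disconnect(self, ws: WebSocket) -> None:
        if ws in self.active:
            self.active.remove(ws)

    async def broadcast(self, message: str) -> None:
        # Send the message to every connected client
        for ws in self.active:
            await ws.send_text(message)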
JWT Validation Middleware
This approach uses a custom middleware to decode and verify tokens issued by Keycloak, enabling seamless integration with FastAPI while supporting role-based access control. Each endpoint can rely on the middleware to ensure authentication, simplifying the enforcement of security policies across the application.
from fastapi import Request
from fastapi.responses import JSONResponse
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.responses import Response
from ..configuration import KEYCLOAK_OPENID
class AuthMiddleware(BaseHTTPMiddleware):
    """
    Middleware to handle authentication for incoming HTTP requests.

    This middleware intercepts requests and determines if they need
    authentication based on a predefined list of paths that can bypass
    authentication (e.g., docs, health check).

    If authentication is required, it checks for the presence of a valid API
    key or a valid authorization token (JWT), decoding it via Keycloak OpenID
    service.

    Attributes:
        __jump_paths__ (list): List of URL paths that do not require
            authentication.
        __auth__ (str): Header key name for authorization token.
    """

    __jump_paths__ = ['/docs', '/openapi.json', '/redoc',
                      '/health', '/favicon.ico']
    __auth__ = 'authorization'

    def __init__(self, *args, **kwargs):
        """
        Initialize the middleware by calling the parent class initializer.
        """
        super().__init__(*args, **kwargs)

    @staticmethod
    def unauthorised(
            code: int = 401, msg: str = 'Unauthorised') -> JSONResponse:
        """
        Return a JSON response indicating an unauthorized access attempt.

        :param code: HTTP status code to return (default 401).
        :param msg: Message to include in the response body
            (default 'Unauthorised').
        :return: JSONResponse with status code and message.
        """
        return JSONResponse(status_code=code, content=msg)

    def _is_jump_url_(self, request: Request) -> bool:
        """
        Check if the requested URL path is in the list of paths that do not
        require auth.

        :param request: The incoming HTTP request.
        :return: True if the path should bypass authentication, False otherwise.
        """
        return request.url.path in self.__jump_paths__

    def decode_token(self, token: str):
        """
        Decode a JWT token after stripping 'Bearer ' prefix.

        :param token: Raw token string from the authorization header.
        :return: Decoded token payload (usually a dict).
        """
        token_ = token.replace('Bearer ', '')
        payload = KEYCLOAK_OPENID.decode_token(token_)
        return payload

    def get_header_token(self, request: Request) -> str:
        """
        Get the authorization token from the request headers.

        :param request: The incoming HTTP request.
        :return: Authorization header value or empty string if missing.
        """
        return request.headers.get(self.__auth__, '')

    def get_user_config(self, request: Request) -> dict | None:
        """
        Extract user configuration by decoding the JWT token from the request.

        :param request: The incoming HTTP request.
        :return: Decoded token payload dict if valid, else None.
        """
        token = self.get_header_token(request)
        try:
            decode_token = self.decode_token(token)
            return decode_token
        except Exception:
            return None

    def is_auth(self, request: Request) -> dict | None:
        """
        Check whether the request is authenticated.

        Currently implemented by trying to get user config from token.

        :param request: The incoming HTTP request.
        :return: User configuration dict if authenticated, else None.
        """
        return self.get_user_config(request)

    async def dispatch(
            self, request: Request, call_next: RequestResponseEndpoint
    ) -> Response:
        """
        Process incoming HTTP request, enforcing authentication if required.

        If the request path is in the bypass list, it proceeds without checks.
        Otherwise, it verifies authentication and returns an unauthorized
        response if authentication fails.

        :param request: The incoming HTTP request.
        :param call_next: The next middleware or request handler callable.
        :return: Response from next handler or unauthorized response.
        """
        if self._is_jump_url_(request):
            return await call_next(request)
        response = self.unauthorised()
        if self.is_auth(request):
            response = await call_next(request)
        return response
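To enable the middleware, register it on the FastAPI application. The KEYCLOAK_OPENID object imported above is presumably a python-keycloak KeycloakOpenID client; the configuration below is only a sketch with placeholder URL, realm, client, and secret, not the repository's exact code:
# configuration.py -- hypothetical values; adjust server URL, realm, client, and secret
from keycloak import KeycloakOpenID

KEYCLOAK_OPENID = KeycloakOpenID(
    server_url="https://keycloak.localhost/",   # assumed Keycloak base URL
    realm_name="chat",
    client_id="backend",
    client_secret_key="<client-secret>",
)

# main.py
from fastapi import FastAPI
from .middleware import AuthMiddleware  # module path assumed

app = FastAPI()
app.add_middleware(AuthMiddleware)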
Backend Communication via RabbitMQ
Install RabbitMQ Cluster
Use Docker or Kubernetes. The command below starts a single node with the management plugin, which is enough for development; a production cluster needs several nodes that share an Erlang cookie and are joined with rabbitmqctl join_cluster:
docker run -d --hostname rabbit1 --name rabbit1 -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Async Messaging with aio-pika
We use a custom RabbitMQ manager to handle connection and disconnection events in a controlled and centralized way. This allows us to maintain a reliable asynchronous messaging flow using aio-pika, ensuring that resources are properly managed and reconnections are handled gracefully when needed.
import asyncio
from typing import Optional
from aio_pika import connect, Message, Channel, Queue
from aio_pika.exceptions import AMQPConnectionError
class RabbitMQManager:
    """
    Asynchronous manager for interacting with RabbitMQ using `aio-pika`.

    This class provides functionality to connect, publish, and consume
    messages from RabbitMQ queues or exchanges. It includes built-in
    retry logic and optional WebSocket broadcasting support for
    real-time notifications.

    :param rabbitmq_url: Connection URL for RabbitMQ.
    :param manager: WebSocket manager instance used for broadcasting messages
        to clients.
    :param max_retries: Maximum number of retry attempts for connecting to
        RabbitMQ.
    :param logger: Optional logger for debug and error logging.
    """

    def __init__(self, rabbitmq_url: str, manager, max_retries: int = 3,
                 logger=None):
        self.rabbitmq_url = rabbitmq_url
        self.manager = manager
        self.max_retries = max_retries
        self.connection = None
        self.channel: Optional[Channel] = None
        self.queue: Optional[Queue] = None
        self.logger = logger

    async def connect(self) -> bool:
        """
        Attempt to establish a connection with RabbitMQ, retrying on failure.

        :return: True if the connection is successful, False otherwise.
        """
        for attempt in range(self.max_retries):
            try:
                self.connection = await connect(self.rabbitmq_url)
                self.channel = await self.connection.channel()
                return True
            except AMQPConnectionError as e:
                if self.logger:
                    self.logger.warning(f"Attempt {attempt + 1} failed: {e}")
                await asyncio.sleep(2)
        await self.manager.broadcast("Redundancy service is not available.")
        return False

    async def publish_message(self, queue_name: str, message: str):
        """
        Publish a message to a RabbitMQ queue.

        :param queue_name: Name of the target queue.
        :param message: The message string to publish.
        """
        if not self.channel:
            if self.logger:
                self.logger.warning(
                    "No RabbitMQ channel available. Attempting to reconnect..."
                )
            if not await self.connect():
                return
        try:
            await self.channel.default_exchange.publish(
                Message(body=message.encode()),
                routing_key=queue_name,
            )
            if self.logger:
                self.logger.debug(
                    f"Message published to {queue_name}: {message}")
        except Exception as e:
            if self.logger:
                self.logger.error(f"Failed to publish message: {e}")

    async def publish_message_to_exchange(
            self, exchange_name: str, message: str, routing_key: str = ''):
        """
        Publish a message to a RabbitMQ exchange.

        :param exchange_name: Name of the exchange.
        :param message: The message string to publish.
        :param routing_key: Optional routing key.
        """
        if not self.channel:
            if self.logger:
                self.logger.warning(
                    "No RabbitMQ channel available. Attempting to reconnect..."
                )
            if not await self.connect():
                return
        try:
            exchange = await self.channel.declare_exchange(
                exchange_name, type='fanout')
            await exchange.publish(
                Message(body=message.encode()),
                routing_key=routing_key
            )
            if self.logger:
                self.logger.debug(
                    f"Message published to exchange "
                    f"{exchange_name}: {message}")
        except Exception as e:
            if self.logger:
                self.logger.error(
                    f"Failed to publish message to exchange: {e}")

    async def consume_messages(self, queue_name: str):
        """
        Consume messages from a RabbitMQ queue and broadcast them via
        WebSockets.

        :param queue_name: Name of the queue to consume from.
        """
        if not self.channel:
            if self.logger:
                self.logger.warning(
                    "No RabbitMQ channel available. Attempting to reconnect..."
                )
            if not await self.connect():
                return
        try:
            self.queue = await self.channel.declare_queue(
                queue_name, durable=True)
            async for message in self.queue:
                async with message.process():
                    if self.logger:
                        self.logger.debug(
                            f"Received message: {message.body.decode()}")
                    await self.manager.broadcast(message.body.decode())
        except Exception as e:
            if self.logger:
                self.logger.error(f"Failed to consume messages: {e}")
            await self.manager.broadcast(
                "Redundancy service is not available.")

    async def consume_messages_from_exchange(self, exchange_name: str):
        """
        Consume messages from a RabbitMQ exchange and broadcast them via
        WebSockets.

        :param exchange_name: Name of the exchange to consume from.
        """
        if not self.channel:
            if self.logger:
                self.logger.warning(
                    "No RabbitMQ channel available. Attempting to reconnect..."
                )
            if not await self.connect():
                return
        try:
            exchange = await self.channel.declare_exchange(
                exchange_name, type='fanout')
            queue = await self.channel.declare_queue('', exclusive=True)
            await queue.bind(exchange)
            async for message in queue:
                async with message.process():
                    await self.manager.broadcast(message.body.decode())
        except Exception as e:
            if self.logger:
                self.logger.error(
                    f"Failed to consume messages from exchange: {e}")
            await self.manager.broadcast(
                "Redundancy service is not available.")
Messaging Patterns
Each pattern fits different communication needs:
- fanout: broadcast to every bound queue; useful for logs or system-wide notifications
- topic: pattern-based routing to multiple services based on routing-key tags
- direct: route a message to one specific queue or consumer
A short aio-pika sketch of a topic exchange follows.
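The exchange, queue, and routing-key names below are illustrative; adapt them to your own topology:
# Topic exchange sketch with aio-pika: "chat.*" matches "chat.room1", "chat.room2", ...
import asyncio
from aio_pika import connect, Message, ExchangeType

async def main():
    connection = await connect("amqp://user:password@localhost/")
    channel = await connection.channel()

    exchange = await channel.declare_exchange("events", ExchangeType.TOPIC)
    queue = await channel.declare_queue("chat-events", durable=True)
    await queue.bind(exchange, routing_key="chat.*")

    # Any consumer bound with a matching pattern receives this message
    await exchange.publish(Message(b"user joined"), routing_key="chat.room1")
    await connection.close()

asyncio.run(main())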
Authentication and Access Control with Keycloak
High Availability Setup
- Use an external database (PostgreSQL)
- Run Keycloak behind a reverse proxy (NGINX)
- Run multiple instances in cluster mode
Realm and Role Management
Realms in Keycloak isolate authentication domains, ideal for multi-tenant apps. Clients represent services or apps, and roles define what actions users can perform. This setup enables flexible, centralized access control.
- Create a realm named chat
- Add clients frontend and backend
- Define roles such as admin and user as needed (a provisioning sketch follows below)
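The same setup can also be scripted. This is a rough sketch using python-keycloak's admin client; the admin credentials and server URL are placeholders, and exact constructor arguments vary between library versions:
# Hypothetical realm/client/role provisioning with python-keycloak
from keycloak import KeycloakAdmin

admin = KeycloakAdmin(
    server_url="https://keycloak.localhost/",  # assumed admin URL
    username="admin",
    password="admin",
    realm_name="master",
)

admin.create_realm(payload={"realm": "chat", "enabled": True}, skip_exists=True)
admin.realm_name = "chat"  # switch to the new realm
admin.create_client(payload={"clientId": "frontend", "publicClient": True}, skip_exists=True)
admin.create_client(payload={"clientId": "backend"}, skip_exists=True)
admin.create_realm_role(payload={"name": "admin"}, skip_exists=True)
admin.create_realm_role(payload={"name": "user"}, skip_exists=True)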
Token Handling
- Frontend: refresh tokens automatically using @react-keycloak
- Backend: validate tokens with pyjwt and cache the JWKS (see the sketch below)
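A possible validation helper with PyJWT's PyJWKClient, assuming the chat realm from earlier; the JWKS URL and audience are placeholders to adapt:
# JWT validation sketch with PyJWT; PyJWKClient caches the fetched key set
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://keycloak.localhost/realms/chat/protocol/openid-connect/certs"
jwks_client = PyJWKClient(JWKS_URL)

def verify_token(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="account",  # adjust to your client's audience mapping
    )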
High Availability and Fault Tolerance
Horizontal Scaling
Scaling horizontally means adding more instances rather than increasing the resources of a single one. Stateless applications scale easily; if sessions are used, store them in an external store such as Redis (see the sketch after the list below).
- Scale containers (Kubernetes replicas or Docker Compose services)
- Stateless frontend/backend
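A minimal sketch of keeping per-user state in Redis so any replica can serve the next request; redis.asyncio ships with redis-py 4.2+, and the key names and URL here are illustrative:
# Shared session/state store so backend replicas stay stateless
import redis.asyncio as redis

r = redis.from_url("redis://redis:6379/0")

async def remember_session(user_id: str, session_data: str) -> None:
    # Expire after one hour
    await r.set(f"session:{user_id}", session_data, ex=3600)

async def load_session(user_id: str) -> str | None:
    value = await r.get(f"session:{user_id}")
    return value.decode() if value else None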
Health Checks
Readiness and liveness probes ensure that containers are only included in load balancing when they are actually ready, and restarted if they become unresponsive. This contributes greatly to fault tolerance and smooth deployment rollouts.
- FastAPI: expose a /health endpoint (see the sketch below)
- NGINX / Kubernetes readiness probes
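A minimal /health endpoint for FastAPI; keep it cheap so probes stay fast:
# Health endpoint used by the load balancer or Kubernetes probes
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def health() -> dict:
    # Only confirms the process is serving requests; add dependency checks if needed
    return {"status": "ok"}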
Resilience Patterns
- Use tenacity for retries (see the sketch below)
- Apply timeout middleware
- Circuit breakers via pybreaker or a custom implementation
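A sketch of a retried publish with tenacity; the queue name and the wrapped call are illustrative and reuse the RabbitMQManager shown earlier:
# Retry with exponential backoff around a flaky operation
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=10))
async def publish_with_retry(rabbit: RabbitMQManager, message: str) -> None:
    # Raises after three failed attempts, letting the caller decide what to do
    await rabbit.publish_message("chat-messages", message)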
Deployment and Orchestration
Dockerization
version: '3.9'

services:
  backend-1:
    image: agrubio/backend-python:latest
    env_file:
      - .env
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backend.rule=Host(`back.localhost`) && PathPrefix(`/api`)"
      - "traefik.http.routers.backend.entrypoints=websecure"
      - "traefik.http.routers.backend.tls=true"
      - "traefik.http.routers.backend.tls.certresolver=letsencrypt"
      - "traefik.http.services.backend.loadbalancer.server.port=8000"
    volumes:
      - ./chat.db:/app/src/chat.db
    networks:
      - web

  backend-2:
    image: agrubio/backend-python:latest
    env_file:
      - .env
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backend.rule=Host(`back.localhost`) && PathPrefix(`/api`)"
      - "traefik.http.routers.backend.entrypoints=websecure"
      - "traefik.http.routers.backend.tls=true"
      - "traefik.http.routers.backend.tls.certresolver=letsencrypt"
      - "traefik.http.services.backend.loadbalancer.server.port=8000"
    volumes:
      - ./chat.db:/app/src/chat.db
    networks:
      - web

  front:
    image: agrubio/frontend-python:latest
    env_file:
      - .env
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.front.rule=Host(`front.localhost`) && PathPrefix(`/`)"
      - "traefik.http.routers.front.entrypoints=websecure"
      - "traefik.http.routers.front.tls=true"
      - "traefik.http.routers.front.tls.certresolver=letsencrypt"
      - "traefik.http.services.front.loadbalancer.server.port=80"
    networks:
      - web

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # RabbitMQ
      - "15672:15672"  # RabbitMQ Management UI
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password
    networks:
      - web

  traefik:
    image: traefik:v3.4.0-rc1
    container_name: traefik
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./traefik/dynamic.yml:/etc/traefik/dynamic.yml:ro
      - traefik_acme:/letsencrypt/
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - web
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.localhost`)"
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.routers.traefik.entrypoints=websecure"
      - "traefik.http.routers.traefik.tls=true"

volumes:
  chat_db:
  traefik_acme:

networks:
  web:
Kubernetes or Docker Compose
- Define services, volumes, and networks
- Use Helm charts or docker-compose.yaml
Secrets and Env Configs
- Use .env files or Kubernetes Secrets (see the sketch below)
- Mount configs via volumes
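A small sketch of reading those values in one place on the backend; the variable names and defaults are assumptions:
# Centralized environment configuration
import os

RABBITMQ_URL = os.getenv("RABBITMQ_URL", "amqp://user:password@rabbitmq:5672/")
KEYCLOAK_SERVER_URL = os.getenv("KEYCLOAK_SERVER_URL", "https://keycloak.localhost/")
KEYCLOAK_REALM = os.getenv("KEYCLOAK_REALM", "chat")

# With Docker Compose these come from the .env file; in Kubernetes, from Secrets
# exposed as environment variables or mounted files.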
CI/CD
CI/CD pipelines automate testing, building, and deployment. Use environment-specific branches, secrets management, and image tagging to ensure consistency and traceability between development, staging, and production.
- GitLab CI/CD or GitHub Actions
- Use pipelines for:
- Testing (pytest)
- Docker builds
- Push to registry
- Deploy to cluster
Conclusion
This architecture enables scalable, redundant, and secure applications. With Vite + React, FastAPI, RabbitMQ, and Keycloak, you gain powerful tooling to build modern distributed systems that can handle growth, failures, and strict access control.
Key takeaways:
- Modularize for maintainability
- Offload async jobs to RabbitMQ
- Use Keycloak for unified auth
- Monitor and scale wisely