
It started with a simple question from our first Objectives & Roadmap session: "Can I run Rhesis on my laptop without dealing with cloud credentials?"
"Of course," I thought. "We already have Dockerfiles for every service. Just throw them in a docker-compose.yml and call it a day. Should take an afternoon."
Spoiler: It took three weeks.
And not because Docker Compose is hard, but because making a multi-service Gen AI platform work locally exposed assumptions we didn't even know we'd made.
Our production setup was beautiful. Backend talking to Cloud SQL. Worker pulling from Redis. Frontend calling APIs. Everything ran on GCP Cloud Run and Cloud SQL, with traffic handled by load balancers. It just worked.

Figure 1: Architectural diagram of Rhesis components
We then attempted to run it all on a single laptop.
The most challenging problem we faced, the one that took us nearly a week to solve properly, was getting the frontend talking to the backend.
The issue?
Next.js is a hybrid framework. Half of it runs server-side (in the Next.js container), and half runs client-side (in your browser). They need to talk to the same FastAPI backend, but they're in completely different network contexts.
Here's what was happening:
We'd fix it for the browser, and server-side rendering would break. We'd fix server-side rendering, and client-side API calls would fail. It felt like playing whack-a-mole with network requests.
Our initial attempts were... creative:
Attempt 1: Use localhost:8080 everywhere. Result: Server-side rendering failed silently. Pages would work on refresh (client-side) but break on first load (server-side).
Attempt 2: Use backend:8080 everywhere. Result: The browser was unable to resolve the hostname. Dev tools full of ERR_NAME_NOT_RESOLVED errors.
Attempt 3: Use host.docker.internal. Result: This special Docker hostname felt like a hacky workaround, added extra complexity to our configuration, and we weren't confident it would work reliably across all environments.
We needed a solution that was clean, explicit, and worked for both contexts without special Docker magic.
Each service had its own .env file. Seven environment variables for database config. Five more for Redis. Twelve authentication variables (Auth0, JWT, NextAuth) scattered across different files. When something didn't work, we played "guess which service has the wrong REDIS_URL" for hours.
Our first Docker Compose file looked like this:
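What follows is a representative sketch rather than the exact file; the service names match our stack, but the paths and values are illustrative. Every service carried its own copy of the same environment block:

```yaml
services:
  backend:
    build: ./apps/backend   # hypothetical path
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgresql://rhesis:rhesis@postgres:5432/rhesis
      REDIS_URL: redis://redis:6379/0
      JWT_SECRET: local-dev-secret
      # ...a dozen more...

  worker:
    build: ./apps/worker    # hypothetical path
    environment:
      DATABASE_URL: postgresql://rhesis:rhesis@postgres:5432/rhesis  # pasted again
      REDIS_URL: redis://redis:6379/0                                # pasted again
      JWT_SECRET: local-dev-secret                                   # pasted again
      # ...and again for the frontend, and the docs site...
```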
Copy-paste hell. And every time we updated one service, we'd forget to update another.
Five services. Five different ideas about which port to use. The backend wanted 8080. So did the worker's health check. The docs site? Also 8080. It was like musical chairs, but for TCP ports, and nobody was playing music.
Local development would randomly fail with "port already in use" errors. We'd hunt down processes with lsof, kill them, restart the containers, and hope for the best.
The backend would start before PostgreSQL was ready. The worker would crash because Redis hadn't initialized. The frontend would make API calls to a backend that didn't exist yet.
We tried sleep commands. We tried restart policies. We tried our best, but nothing worked consistently.
Our first instinct was to keep it simple, with a minimal configuration, just to get things running.
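Roughly this (a sketch from memory, not the actual commit): images and ports, and nothing else.

```yaml
services:
  postgres:
    image: postgres:16      # won't even boot without POSTGRES_PASSWORD
  redis:
    image: redis:7
  backend:
    build: ./apps/backend   # hypothetical path
    ports:
      - "8080:8080"
  frontend:
    build: ./apps/frontend  # hypothetical path
    ports:
      - "3000:3000"
```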
It failed immediately. Environment variables were missing. Services couldn't find each other. We had five containers that might as well have been on different planets.
The lesson: Docker Compose isn't magic. It needs to understand how your services interact with each other.
Then someone on the team remembered YAML anchors. You know, those &anchor and *alias things you see in examples but never actually use?
Turns out they're perfect for this. We created reusable config blocks:
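Something like this, at the top of docker-compose.yml (the values are illustrative). The `x-` prefix tells Compose to ignore the key, and the `&anchor` makes the block reusable:

```yaml
x-database-env: &database-env
  DATABASE_URL: postgresql://rhesis:rhesis@postgres:5432/rhesis

x-redis-env: &redis-env
  REDIS_URL: redis://redis:6379/0
```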
Now our service definitions looked like this:
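Each service just merges the shared blocks with YAML's merge key (a sketch using the anchors above):

```yaml
services:
  backend:
    environment:
      <<: [*database-env, *redis-env]   # one source of truth

  worker:
    environment:
      <<: [*database-env, *redis-env]   # same truth, zero duplication
```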
One change, multiple services updated. Configuration became a design pattern instead of a chore.
The lesson: When you find yourself copy-pasting, there's probably a better way.
This was the breakthrough moment for the Next.js/FastAPI communication problem.
The insight: Next.js requires two distinct URLs, depending on where the code is running. We created a utility that detects the execution context:
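A minimal sketch of that utility; the function and environment variable names here are illustrative, not the exact ones in our codebase:

```typescript
// apiUrl.ts: resolve the backend URL for the current execution context.
// Assumes two variables wired up in docker-compose.yml (shown further down):
//   BACKEND_URL          server-side, Docker-internal (http://backend:8080)
//   NEXT_PUBLIC_API_URL  client-side, via the published port (http://localhost:8080)
export function getApiUrl(): string {
  // `window` exists only in the browser; during server-side rendering it's undefined.
  const isServer = typeof window === "undefined";

  const url = isServer
    ? process.env.BACKEND_URL ?? "http://backend:8080"
    : process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8080";

  // macOS resolves `localhost` to IPv6 (::1), but many services listen only on
  // IPv4, so we normalize to 127.0.0.1 to be safe on every platform.
  return url.replace("://localhost", "://127.0.0.1");
}
```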
Now, when Next.js server-side rendering calls the API, it uses http://backend:8080 (the Docker service name). When browser JavaScript calls the API, it uses http://localhost:8080 (accessible from outside Docker).
But we didn't stop there. We also added Next.js rewrites to make the routing seamless:
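In next.config.js, a rewrite proxies any relative /api/* call to the backend (a sketch; the env variable matches the utility above):

```javascript
// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        source: "/api/:path*",
        destination: `${process.env.BACKEND_URL ?? "http://backend:8080"}/:path*`,
      },
    ];
  },
};
```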
Now the frontend can call /api/users (a relative URL), and Next.js automatically routes it to the correct backend depending on context. No hardcoded URLs in application code.
In docker-compose.yml, we configure both URLs:
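Something like this on the frontend service (variable names match the sketches above):

```yaml
frontend:
  environment:
    # Server-side rendering runs inside the Compose network...
    BACKEND_URL: http://backend:8080
    # ...while the browser reaches the backend through the published port.
    NEXT_PUBLIC_API_URL: http://localhost:8080
```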
Bonus fix: We also discovered that macOS resolves localhost to IPv6 (::1) by default, but many services only listen on IPv4 (127.0.0.1). Our URL resolver automatically converts localhost to 127.0.0.1 to avoid this issue across all platforms.
The lesson: Next.js hybrid rendering is powerful, but it requires thinking about network topology from two perspectives simultaneously. Once we embraced that, the solution became obvious.
We stopped letting services race each other at startup and actually implemented proper health checks:
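In Compose terms (a sketch; the user, intervals, and retry counts are illustrative):

```yaml
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U rhesis"]
      interval: 5s
      retries: 10

  redis:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      retries: 10

  backend:
    depends_on:
      postgres:
        condition: service_healthy   # wait for a passing check, not just "started"
      redis:
        condition: service_healthy
```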
The backend would wait for PostgreSQL to be ready. The frontend would wait for the backend. Suddenly, docker-compose up became reliable.
The lesson: Don't assume services will be ready just because they've started.
We couldn't eliminate all environment variables—auth credentials and encryption keys are genuinely required. But we could organize them intelligently.
We restructured the .env.example into three clear categories:
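Roughly this shape; RHESIS-DEFINED is the category name from our file, while the other two labels are paraphrased and the variables shown are examples:

```bash
# REQUIRED: credentials we can't invent for you (Auth0, JWT, NextAuth)
AUTH0_CLIENT_ID=
AUTH0_CLIENT_SECRET=
JWT_SECRET=

# RHESIS-DEFINED: internal wiring, sensible defaults (see below)

# OPTIONAL: extra integrations, safe to leave empty
```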
For the RHESIS-DEFINED variables, we added sensible defaults:
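For example (values match the networking sketches earlier; the real file has more):

```bash
# RHESIS-DEFINED: works out of the box for local development
DATABASE_URL=postgresql://rhesis:rhesis@postgres:5432/rhesis
REDIS_URL=redis://redis:6379/0
BACKEND_URL=http://backend:8080
NEXT_PUBLIC_API_URL=http://localhost:8080
```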
Now developers only need to focus on the required credentials. Everything else has smart defaults that work out of the box for local development. Want to customize? Override it. Don't care? It works.
The lesson: You can't eliminate configuration, but you can make it obvious which parts actually need attention.
The biggest "aha" moment was understanding that there are actually three network contexts, not two:
- The backend reaching PostgreSQL: postgres:5432 (container to container)
- Server-side Next.js reaching the backend: backend:8080 (both in Docker)
- The browser reaching the backend: localhost:8080 (the browser is on the host machine)

This seems obvious in retrospect, but it took us embarrassingly long to realize: in Docker Compose, service names become DNS hostnames.
When the backend needs Redis, it doesn't connect to localhost:6379. It connects to redis:6379. The service name is the hostname.
Once we internalized this, all our networking issues disappeared.
We wanted local development to be fast: change code, see results, no rebuilds.
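The trick is bind mounts (the paths here are hypothetical, but the pattern is standard):

```yaml
backend:
  volumes:
    # Edits on the host show up inside the container immediately
    - ./apps/backend/src:/app/src
    - ./sdk:/app/sdk
```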
Now changes to the backend or SDK are instantly available inside the container. No more "build-test-repeat" cycles eating up hours.
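Then there's the restart policy. One line per service, something like:

```yaml
restart: unless-stopped
```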
This one line made development so much more pleasant. Container crash? It restarts. Machine reboots? Services come back up. It's the difference between Docker Compose being a toy and being a serious local development environment.
After three weeks of iteration, here's what stuck with us:
Inside Docker, backend resolves to the backend container. Outside Docker (like in a browser), it doesn't exist. This seems obvious in retrospect, but it's a common pitfall that can waste hours of debugging.

We're not done. There are challenges we're still figuring out:
Secrets Management: Currently, we use .env files for local development. It works, but it's not great for teams. How do you securely share credentials without committing them to Git? We're exploring solutions like pass, 1password-cli, and Docker secrets, but haven't landed on "the way" yet.
Resource Limits: Docker Compose can eat all your RAM if you let it. We've added some basic limits, but tuning them for different machine specs is still manual. A 2020 MacBook Air shouldn't have the same limits as a 2024 M3 Max.
Multi-Platform Builds: Container images can behave differently across ARM vs. x86 and macOS vs. Linux. We're working on improving CI testing across platforms so that the "works on my machine" problem is truly resolved.
Observability: Running locally is great until something breaks and you have no idea which container is misbehaving. We have health checks, but proper logging, tracing, and metrics for local development are still on the roadmap.
Want to run Rhesis locally? It's genuinely this simple now:
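Assuming Docker is installed (and double-check the repository URL against our docs; the one below is our best guess at the canonical location):

```bash
git clone https://github.com/rhesis-ai/rhesis.git
cd rhesis
cp .env.example .env   # fill in the REQUIRED credentials
docker compose up
```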
Visit http://localhost:3000 and you're running the full platform: frontend, backend, worker, PostgreSQL, Redis, and docs, all talking to each other, all on your laptop.
If you encounter any issues (or have ideas for improvement), we'd love to hear from you. Join the Rhesis Discord and let us know what worked, what didn't, and what you'd like to see next.
Getting Rhesis to run locally wasn't just about Docker Compose. It was about accessibility. About making it possible for anyone, whether you're a Fortune 500 enterprise evaluating our platform or a solo developer experimenting with Gen AI testing, to run Rhesis without friction.
It's about trust. When you can run our entire platform on your machine, inspect the code, poke at the APIs, and understand how it works, that's transparency. That's open source done right.
And honestly? It's made us better developers. When setting up locally takes five minutes instead of five hours, we iterate faster. We test more. We break less.
If you're building a multi-service platform and don't have a one-command local setup yet, I highly recommend making one. Future you (and your contributors) will thank you.
Want to learn more about Rhesis? Check out app.rhesis.ai or dive into the docs at docs.rhesis.ai.
Have questions about our Docker Compose setup? The full configuration is open-source. Feel free to use it, fork it, or learn from it.
Want to contribute? We'd love your help. Whether it's fixing a typo, adding a feature, or improving our Docker setup, all contributions are welcome. Start with our guide.