Testing the Bridges
A Story of Two Services
Our “code hike” usually spans more than one repository. In a modern stack, it’s common to have a high-performance core in Rust and a developer-friendly edge or orchestration layer in Python or TypeScript.

The problem is that the “trail” between these two often disappears. You’re looking at a Python HTTP GET call; you know it hits the “Kitchen” service, but you’re blind to the implementation details on the other side. You’re crossing a bridge without knowing its load capacity.
In this article, we will explore what Unfault & fault can do to help you get more clarity.
The Setup: The Kitchen and the Waiter
Let’s look at a concrete example:
- The Kitchen (Rust/Axum): A service that processes food orders.
- The Waiter (Python/FastAPI): An edge API that customers use to place orders. It calls the Kitchen via an internal network over HTTP.
In your IDE, you’re looking at the Waiter’s create_order function. You notice that it calls the Kitchen’s /orders route. But the question is: what happens if the Kitchen is overwhelmed and slow to respond?
Traditionally, to understand the impact of a change, you’d have to run a performance test with both services up. That’s an expensive exercise, so you might defer it until “a problem is raised” and fall back on an overly defensive strategy: catching all sorts of errors and logging whatever is thrown.
The issue is that by doing this, you never properly reason about how you ought to handle the failure. You might throw in a retry or a circuit breaker for good measure, but how do you tune either?
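To illustrate the tuning question: even a simple retry has knobs (attempt count, backoff, jitter) that only make sense against a measured failure profile. A minimal pure-Python sketch, with hypothetical names and illustrative numbers:

```python
import random
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on connection errors with exponential backoff.

    attempts and base_delay are the knobs you cannot tune blindly:
    they encode how long you believe the downstream brownout lasts.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retry budget exhausted, surface the failure
            # Exponential backoff with jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Pick the wrong `attempts` and you amplify the overload; pick the wrong `base_delay` and you hold requests open longer than your callers will wait. The experiment below is what gives you real numbers to tune against.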
Walking the Path with Unfault
Because Unfault has parsed both codebases into a single semantic graph, the “map” is unified. In VS Code, using the Unfault LSP, you can hover over the internal URL in your Python code and see the downstream impact.
```
$ unfault graph
Workspace: waiter
Nodes: 3 functions, 0 classes, 3 routes, 1 remote servers
Edges: 0 calls, 2 http calls, 0 imports
Listens on: 8080

Entry points:
  GET /health
  └─ health
  POST /orders
  └─ create_order
     └─ → POST localhost:8081/orders (kitchen)
  GET /kitchen/status
  └─ kitchen_status
     └─ → GET localhost:8081/status (kitchen)

Graph ready. Run 'unfault ask' to explore.
```
The graph shows you that POST /orders in Python maps directly to cook_order() in Rust. You’ve just successfully oriented yourself across the language barrier.
When the Trail is Blocked: Introducing fault
Now that we see the bridge, we need to test it. What happens if the Kitchen
service experiences a “brownout”? What if that synchronous lock in Rust causes a
2-second delay?
This is where we use the fault add-on.
Instead of setting up a complex chaos engineering experiment, we can trigger a fault directly from the context of our code. Since Unfault knows the execution path, we can inject a failure specifically into that link.
```
# Injecting a 5s delay into the specific path we just discovered
unfault addon fault plan -q "latency 5000ms for 30s" --target http://127.0.0.1:8081
```
The Discovery
Now, we’re ready to send some traffic to the Waiter service endpoint:
```
$ curl -X POST -H "Content-Type: application/json" \
    http://0.0.0.0:8080/orders \
    -d '{"dish": "pizza"}'

{"detail":"Failed to connect to kitchen service: "}
```
By injecting this delay, we notice something immediately: the Python Waiter service starts timing out, but it doesn’t do so gracefully. Because we didn’t configure a specific timeout in our Python httpx client, the worker threads stay open, waiting for the Rust “oven” to clear.
Under even moderate load, our “Waiter” service, the thing our customers actually see, crashes entirely.
What’s Next?
We’ve identified a critical weakness in our execution path without ever leaving our development environment. We didn’t need a staging environment or a production incident to find this “ghost”.
In the final article of this series, we’ll look at how we use these Findings to close the loop, turning a discovered “fault” into a permanent “Fact” in our graph to ensure the bridge stays strong.