Record HTTP request/response pairs in a controlled environment, inspect a captured request, replay it against another target, and diff the result before promoting a fix.
Security notice: Replay is intended for development, staging, canary, and incident-response environments. Do not expose the admin endpoints on the public internet.
Replay is most useful when:
- behavior differs between staging and local
- you need to reproduce a regression using a real traffic sample
- you want to rerun critical requests before promoting a new version to canary
- you are asking, “why did this request work yesterday but break today?” and want a time-machine-style answer
Enable the canonical replay feature in your application:
```toml
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["extras-replay"] }
```

On the CLI side, `cargo-rustapi` is enough; replay commands are part of the default installation:

```bash
cargo install cargo-rustapi
```

For the smallest practical setup, start with an in-memory store:
```rust
use rustapi_rs::extras::replay::{InMemoryReplayStore, ReplayConfig, ReplayLayer};
use rustapi_rs::prelude::*;

#[rustapi_rs::get("/api/users")]
async fn list_users() -> Json<Vec<&'static str>> {
    Json(vec!["Alice", "Bob"])
}

#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let replay = ReplayLayer::new(
        ReplayConfig::new()
            .enabled(true)
            .admin_token("local-replay-token")
            .ttl_secs(900)
            .skip_path("/health")
            .skip_path("/ready")
            .skip_path("/live"),
    )
    .with_store(InMemoryReplayStore::new(200));

    RustApi::auto()
        .layer(replay)
        .run("127.0.0.1:8080")
        .await
}
```

This setup:
- enables replay recording
- protects the admin endpoints with a bearer token
- excludes probe endpoints from recording
- keeps entries for 15 minutes
- stores at most 200 records in memory
Now send requests to the application as usual. The replay middleware captures request/response pairs without changing your application code.
The recording flow looks like this:
- the request passes through
- request metadata and eligible body fields are stored
- response status, headers, and capturable body content are stored
- the record becomes accessible through the admin API and CLI
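The size limiting behind "capturable body content" can be sketched as a bounded copy. This is an illustration only, not the middleware's actual implementation; `capture_body` is a hypothetical helper that mirrors what a `max_request_body`-style limit does:

```rust
/// Illustrative sketch: copy at most `max` bytes of a body for storage,
/// recording whether the capture was cut short. The real middleware
/// applies its own configured limits.
fn capture_body(body: &[u8], max: usize) -> (Vec<u8>, bool) {
    if body.len() <= max {
        (body.to_vec(), false)
    } else {
        (body[..max].to_vec(), true)
    }
}

fn main() {
    let (stored, truncated) = capture_body(b"hello world", 5);
    // Only the first 5 bytes are kept, and the truncation is flagged.
    assert_eq!(stored, b"hello");
    assert!(truncated);
}
```

The point of the flag is that a truncated record is still useful for inspection, but you know not to trust it byte-for-byte.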
For a first look, the CLI is the easiest path:
```bash
# List recent replay entries
cargo rustapi replay list -s http://localhost:8080 -t local-replay-token

# Filter to a specific endpoint only
cargo rustapi replay list -s http://localhost:8080 -t local-replay-token --method GET --path /api/users --limit 20
```

The list output shows these fields:
- replay ID
- HTTP method
- path
- original response status code
- total duration
Once you find the suspicious request, open the full record:
```bash
cargo rustapi replay show <id> -s http://localhost:8080 -t local-replay-token
```

This command typically shows:
- the original request method and URI
- stored headers
- the captured request body
- the original response status/body
- metadata such as duration, client IP, and request ID
You can now run the same request against your local fix, staging, or canary environment:
```bash
cargo rustapi replay run <id> -s http://localhost:8080 -t local-replay-token -T http://localhost:3000
```

Practical uses include:
- verifying that the local fix really resolves the incident
- checking whether staging still matches the previous production behavior
- replaying critical endpoints as a pre-deploy smoke test
This is where the real magic happens: compare the replayed response with the original response.
```bash
cargo rustapi replay diff <id> -s http://localhost:8080 -t local-replay-token -T http://staging:8080
```

The diff output looks for differences in:
- status code
- response headers
- JSON body fields
That lets you catch subtler regressions too, such as “it still returned 200, but the payload changed.”
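The shape of that comparison can be sketched in a few lines. The `Resp` struct and `diff` function below are illustrative stand-ins, not rustapi-rs types; they only show the three areas the real diff inspects:

```rust
use std::collections::BTreeMap;

/// Simplified response shape, for illustration only.
struct Resp {
    status: u16,
    headers: BTreeMap<String, String>,
    body_fields: BTreeMap<String, String>, // flattened JSON fields
}

/// Collect human-readable differences in status, headers, and body fields.
fn diff(original: &Resp, replayed: &Resp) -> Vec<String> {
    let mut out = Vec::new();
    if original.status != replayed.status {
        out.push(format!("status: {} -> {}", original.status, replayed.status));
    }
    for (k, v) in &original.headers {
        match replayed.headers.get(k) {
            Some(rv) if rv == v => {}
            Some(rv) => out.push(format!("header {k}: {v} -> {rv}")),
            None => out.push(format!("header {k}: missing in replay")),
        }
    }
    for (k, v) in &original.body_fields {
        match replayed.body_fields.get(k) {
            Some(rv) if rv == v => {}
            Some(rv) => out.push(format!("body.{k}: {v} -> {rv}")),
            None => out.push(format!("body.{k}: missing in replay")),
        }
    }
    out
}

fn main() {
    // Same status, but a body field changed — exactly the subtle
    // regression the diff is meant to catch.
    let original = Resp {
        status: 200,
        headers: BTreeMap::new(),
        body_fields: BTreeMap::from([("name".to_string(), "Alice".to_string())]),
    };
    let replayed = Resp {
        status: 200,
        headers: BTreeMap::new(),
        body_fields: BTreeMap::from([("name".to_string(), "Bob".to_string())]),
    };
    assert_eq!(diff(&original, &replayed), vec!["body.name: Alice -> Bob".to_string()]);
}
```

A status-only comparison would report nothing here; comparing body fields is what surfaces the change.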
During an incident or regression, the recommended flow is:
- Start recording: enable replay in staging/canary with a short TTL.
- Capture the example: send the real request that triggers the problem so it is recorded.
- List: find the right entry with `cargo rustapi replay list`.
- Inspect: validate the request/response pair with `cargo rustapi replay show`.
- Try the fix: rerun the entry against your local build or release candidate with `run`.
- Diff it: use `diff` to confirm the behavior changed as expected.
- Turn it off: disable replay recording after the incident or keep the TTL short.
In short: capture → inspect → replay → diff → promote.
All admin endpoints require this header:
```
Authorization: Bearer <admin_token>
```
| Method | Path | Description |
|---|---|---|
| GET | `/__rustapi/replays` | List recordings |
| GET | `/__rustapi/replays/{id}` | Show a single entry |
| POST | `/__rustapi/replays/{id}/run?target=URL` | Replay the request against another target |
| POST | `/__rustapi/replays/{id}/diff?target=URL` | Replay the request and generate a diff |
| DELETE | `/__rustapi/replays/{id}` | Delete an entry |
```bash
# List recordings
curl -H "Authorization: Bearer local-replay-token" \
  "http://localhost:8080/__rustapi/replays?limit=10"

# Show a single entry
curl -H "Authorization: Bearer local-replay-token" \
  "http://localhost:8080/__rustapi/replays/<id>"

# Replay against another target
curl -X POST -H "Authorization: Bearer local-replay-token" \
  "http://localhost:8080/__rustapi/replays/<id>/run?target=http://staging:8080"

# Replay and generate a diff
curl -X POST -H "Authorization: Bearer local-replay-token" \
  "http://localhost:8080/__rustapi/replays/<id>/diff?target=http://staging:8080"
```

These are the `ReplayConfig` options you will adjust most often:
```rust
use rustapi_rs::extras::replay::ReplayConfig;

let config = ReplayConfig::new()
    .enabled(true)
    .admin_token("local-replay-token")
    .store_capacity(1_000)         // keep at most 1,000 records
    .ttl_secs(7_200)               // expire records after 2 hours
    .sample_rate(0.5)              // record roughly half of eligible requests
    .max_request_body(131_072)     // capture up to 128 KiB of request body
    .max_response_body(524_288)    // capture up to 512 KiB of response body
    .record_path("/api/orders")
    .record_path("/api/users")
    .skip_path("/health")
    .skip_path("/metrics")
    .redact_header("x-custom-secret")
    .redact_body_field("password")
    .redact_body_field("credit_card")
    .admin_route_prefix("/__admin/replays");
```

By default, these headers are stored as `[REDACTED]`:
- `authorization`
- `cookie`
- `x-api-key`
- `x-auth-token`
JSON body redaction works recursively; for example, a `password` field is masked even inside nested objects.
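That recursive behavior can be illustrated with a tiny stand-in JSON tree. The `Json` enum and `redact` function here are hypothetical, written only to show how nested masking works; rustapi-rs operates on real JSON bodies:

```rust
/// Minimal stand-in for a parsed JSON value, for illustration only.
#[derive(Debug, PartialEq)]
enum Json {
    Str(String),
    Object(Vec<(String, Json)>),
    Array(Vec<Json>),
}

/// Recursively replace the value of any field named in `redacted`,
/// no matter how deeply it is nested.
fn redact(value: &mut Json, redacted: &[&str]) {
    match value {
        Json::Object(fields) => {
            for (key, val) in fields.iter_mut() {
                if redacted.contains(&key.as_str()) {
                    *val = Json::Str("[REDACTED]".to_string());
                } else {
                    redact(val, redacted);
                }
            }
        }
        Json::Array(items) => {
            for item in items.iter_mut() {
                redact(item, redacted);
            }
        }
        Json::Str(_) => {}
    }
}

fn main() {
    // {"user": {"name": "Alice", "password": "hunter2"}}
    let mut body = Json::Object(vec![(
        "user".to_string(),
        Json::Object(vec![
            ("name".to_string(), Json::Str("Alice".to_string())),
            ("password".to_string(), Json::Str("hunter2".to_string())),
        ]),
    )]);
    redact(&mut body, &["password"]);
    // The nested password is masked even though it is not top-level.
    if let Json::Object(fields) = &body {
        if let Json::Object(user) = &fields[0].1 {
            assert_eq!(user[1].1, Json::Str("[REDACTED]".to_string()));
        }
    }
}
```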
If you want the records to survive a developer-machine restart, use the filesystem store:
```rust
use rustapi_rs::extras::replay::{
    FsReplayStore, FsReplayStoreConfig, ReplayConfig, ReplayLayer,
};

let config = ReplayConfig::new()
    .enabled(true)
    .admin_token("local-replay-token");

let fs_store = FsReplayStore::new(FsReplayStoreConfig {
    directory: "./replay-data".into(),
    max_file_size: Some(10 * 1024 * 1024),
    create_if_missing: true,
});

let replay = ReplayLayer::new(config).with_store(fs_store);
```

If you want to use Redis, object storage, or an enterprise audit backend, implement the `ReplayStore` trait:
```rust
use async_trait::async_trait;
use rustapi_rs::extras::replay::{
    ReplayEntry, ReplayQuery, ReplayStore, ReplayStoreResult,
};

#[derive(Clone)]
struct MyCustomStore;

#[async_trait]
impl ReplayStore for MyCustomStore {
    async fn store(&self, entry: ReplayEntry) -> ReplayStoreResult<()> {
        let _ = entry;
        Ok(())
    }

    async fn get(&self, id: &str) -> ReplayStoreResult<Option<ReplayEntry>> {
        let _ = id;
        Ok(None)
    }

    async fn list(&self, query: &ReplayQuery) -> ReplayStoreResult<Vec<ReplayEntry>> {
        let _ = query;
        Ok(vec![])
    }

    async fn delete(&self, id: &str) -> ReplayStoreResult<bool> {
        let _ = id;
        Ok(false)
    }

    async fn count(&self) -> ReplayStoreResult<usize> {
        Ok(0)
    }

    async fn clear(&self) -> ReplayStoreResult<()> {
        Ok(())
    }

    async fn delete_before(&self, timestamp_ms: u64) -> ReplayStoreResult<usize> {
        let _ = timestamp_ms;
        Ok(0)
    }

    fn clone_store(&self) -> Box<dyn ReplayStore> {
        Box::new(self.clone())
    }
}
```

After setting up replay, run this short check:
- send a request to the application
- use `cargo rustapi replay list -t <token>` to confirm the entry appears
- use `cargo rustapi replay show <id> -t <token>` to verify the stored body/header data
- use `cargo rustapi replay diff <id> -t <token> -T <target>` to compare the results
If these four steps succeed, the workflow is ready.
The replay system includes several safeguards:
- Disabled by default: it starts with `enabled(false)`.
- Admin token required: admin endpoints require a bearer token.
- Header redaction: sensitive headers are masked.
- Body field redaction: JSON fields can be selectively masked.
- TTL enforced: old records are cleaned up automatically.
- Body size limits: request/response capture is size-limited.
- Bounded storage: the in-memory store is limited with FIFO eviction.
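The FIFO-eviction idea behind the bounded in-memory store can be sketched with a queue. This is an illustration only, not how `InMemoryReplayStore` is actually implemented:

```rust
use std::collections::VecDeque;

/// Illustrative bounded store: once `capacity` entries are held,
/// inserting a new one evicts the oldest (FIFO).
struct BoundedStore {
    capacity: usize,
    entries: VecDeque<String>, // entry IDs, oldest at the front
}

impl BoundedStore {
    fn new(capacity: usize) -> Self {
        Self { capacity, entries: VecDeque::new() }
    }

    /// Insert an entry ID, returning the evicted ID if the store was full.
    fn insert(&mut self, id: String) -> Option<String> {
        let evicted = if self.entries.len() == self.capacity {
            self.entries.pop_front()
        } else {
            None
        };
        self.entries.push_back(id);
        evicted
    }
}

fn main() {
    let mut store = BoundedStore::new(2);
    assert_eq!(store.insert("a".into()), None);
    assert_eq!(store.insert("b".into()), None);
    // Capacity reached: inserting "c" evicts the oldest entry, "a".
    assert_eq!(store.insert("c".into()), Some("a".into()));
}
```

The practical consequence: under heavy traffic, old records disappear quickly, so list and inspect interesting entries soon after capturing them.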
Recommendations:
- do not enable replay behind a publicly exposed production ingress
- use a short TTL
- add application-specific secret fields to the redaction list
- monitor memory usage if you use a large-capacity in-memory store
- consider turning replay recording off after the incident