
Possible memory leak #191

Open

Description

@Risers

Recently I was trying to figure out why my app keeps allocating memory without bound. After cutting the code down to only the problematic parts, I was left with something like this:

use std::str::FromStr;
use axum::routing::get;
use axum::Router;
use http::StatusCode;
use tracing::info;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::{EnvFilter, Layer};
use axum_tracing_opentelemetry::middleware::{OtelAxumLayer, OtelInResponseLayer};
use dotenvy::dotenv;
use init_tracing_opentelemetry::tracing_subscriber_ext::build_otel_layer;

#[tokio::main]
async fn main() {
    dotenv().ok();
    info!("starting server");
    serve().await.expect("server error");
}

async fn serve() -> anyhow::Result<()> {
    let otel_log_level = EnvFilter::from_str(format!("{}", "debug").as_str())
        .expect("error parsing otel log level from config file")
        .add_directive("otel::tracing=trace".parse()?);
    let (layer, guard) = build_otel_layer()?; // `guard` must stay in scope for the OTel pipeline's lifetime
    let subscriber = tracing_subscriber::registry()
        .with(layer.with_filter(otel_log_level));
    tracing::subscriber::set_global_default(subscriber)?;

    let app = Router::new()
        .route("/healthz", get(healthz))
        .layer(OtelInResponseLayer::default())
        .layer(OtelAxumLayer::default());

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
    info!("server listening on {}", listener.local_addr()?);
    axum::serve(listener, app).await?;
    Ok(())
}

async fn healthz() -> Result<String, StatusCode> {
    Ok("ok".to_string())
}

My env looks like this:
OTEL_SERVICE_NAME=test
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="grpc"

I'm using the Jaeger all-in-one container. The trace data is reaching it correctly.
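
For completeness, a minimal sketch of the kind of check that confirms the variables above are visible to the process (the helper below is illustrative only, not part of the repro):

// Illustrative helper, not part of the reproduction: prints the
// OTLP-related environment variables at startup so a missing or
// misspelled variable can be ruled out.
fn log_otel_env() {
    for key in [
        "OTEL_SERVICE_NAME",
        "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT",
        "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL",
    ] {
        match std::env::var(key) {
            Ok(value) => println!("{key}={value}"),
            Err(_) => println!("{key} is not set"),
        }
    }
}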

When running this Python script:

import aiohttp
import asyncio
async def fetch(session, url):
    """Perform a single GET request."""
    async with session.get(url) as response:
        return await response.text(), response.status

async def perform_requests(session, url, num_requests):
    """Perform GET requests asynchronously."""
    tasks = [fetch(session, url) for _ in range(num_requests)]
    return await asyncio.gather(*tasks)

async def main():
    url = "http://localhost:8080/healthz"
    num_requests = 1000
    async with aiohttp.ClientSession() as session:
        print(f"Performing {num_requests} GET requests to {url}...")
        # Keep hammering the endpoint in batches of `num_requests` requests.
        for _ in range(1000000):
            await perform_requests(session, url, num_requests)


if __name__ == "__main__":
    asyncio.run(main())

I see the memory used by my Rust application grow by about a megabyte every few seconds, seemingly without bound, until it consumes all of my machine's memory.
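
To watch the growth from inside the process rather than through the OS task manager, something like the sketch below could be dropped into serve() before axum::serve (Linux-only and illustrative; it just reads the resident set size from /proc/self/statm every few seconds):

// Illustrative RSS logger (Linux-only), not part of the reproduction.
// The second field of /proc/self/statm is the resident set size in pages.
fn spawn_rss_logger() {
    tokio::spawn(async {
        let page_size: u64 = 4096; // assumption: 4 KiB pages
        loop {
            if let Ok(statm) = tokio::fs::read_to_string("/proc/self/statm").await {
                if let Some(rss_pages) = statm
                    .split_whitespace()
                    .nth(1)
                    .and_then(|field| field.parse::<u64>().ok())
                {
                    println!("rss: {} MiB", rss_pages * page_size / 1024 / 1024);
                }
            }
            tokio::time::sleep(std::time::Duration::from_secs(5)).await;
        }
    });
}

Reading /proc directly keeps the measurement dependency-free; any external memory profiler would show the same trend.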

Library versions:
axum-tracing-opentelemetry = "0.24.1"
init-tracing-opentelemetry = { version = "0.24.1", features = ["tracing_subscriber_ext"] }

Metadata

Labels

bug: Something isn't working
