[aiohttp] - add raw setup (no-proxy) #9807


Draft · wants to merge 1 commit into master

Conversation

@Reskov Reskov commented Apr 11, 2025

I decided to revert nginx as the default aiohttp proxy until we figure out the root cause of the performance degradation.

I added server.py, which creates a socket and spawns worker processes via multiprocessing with port reuse. Performance is comparable to the gunicorn setup, so I set it as the default.
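The server.py approach can be sketched roughly as follows. This is an illustrative outline, not the actual PR code: the handler, function names, and worker count are assumptions, and it assumes Linux's SO_REUSEPORT so each worker can bind its own socket on the shared port.

```python
# Sketch: aiohttp workers sharing one port via SO_REUSEPORT.
# Illustrative only -- not the actual server.py from this PR.
import multiprocessing
import socket

from aiohttp import web


def make_reuseport_socket(port: int) -> socket.socket:
    """Create a TCP listening socket that allows port reuse."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", port))
    sock.listen(1024)
    return sock


async def json_handler(request: web.Request) -> web.Response:
    return web.json_response({"message": "Hello, World!"})


def run_worker(port: int) -> None:
    # Each worker binds its own SO_REUSEPORT socket; the kernel
    # load-balances incoming connections across the workers.
    app = web.Application()
    app.router.add_get("/json", json_handler)
    web.run_app(app, sock=make_reuseport_socket(port), access_log=None)


if __name__ == "__main__":
    workers = [
        multiprocessing.Process(target=run_worker, args=(8080,))
        for _ in range(multiprocessing.cpu_count())
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Compared with routing through gunicorn or nginx, this removes the proxy hop entirely, which is consistent with the per-request latency numbers below.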

no proxy

./tfb --mode benchmark --test aiohttp --type json --concurrency-levels=32 --duration=30

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   428.04us    1.14ms  18.79ms   93.37%
    Req/Sec    33.68k     4.58k   51.45k    70.33%
  Latency Distribution
     50%  117.00us
     75%  208.00us
     90%  611.00us
     99%    6.52ms
  6039483 requests in 30.07s, 0.97GB read
Requests/sec: 200877.25

gunicorn

./tfb --mode benchmark --test aiohttp-gunicorn --type json --concurrency-levels=32 --duration=30;

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   437.00us    1.16ms  19.93ms   93.31%
    Req/Sec    32.77k     8.33k   58.91k    71.85%
  Latency Distribution
     50%  121.00us
     75%  214.00us
     90%  646.00us
     99%    6.61ms
  5880114 requests in 30.10s, 0.94GB read
Requests/sec: 195363.94
Transfer/sec:     32.05MB

nginx

./tfb --mode benchmark --test aiohttp-nginx --type json --concurrency-levels=32 --duration=30;

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   315.43us  232.91us  12.23ms   89.58%
    Req/Sec    16.36k     0.92k   22.52k    69.46%
  Latency Distribution
     50%  280.00us
     75%  398.00us
     90%  526.00us
     99%    0.85ms
  2935901 requests in 30.10s, 506.78MB read
Requests/sec:  97539.85
Transfer/sec:     16.84MB

@Dreamsorcerer (Contributor)

Might be worth waiting for some results first, just to verify we see the same difference in performance in the benchmarks.

@Reskov Reskov marked this pull request as draft April 11, 2025 11:50
@Reskov (Contributor, Author) commented Apr 11, 2025

Yeah, good point! Let's wait a week for the benchmark run to start and finish. I've converted this to a draft.
