
[aiohttp] - add raw setup (no-proxy) #9807


Merged: 1 commit merged into TechEmpower:master on Apr 22, 2025

Conversation

@Reskov (Contributor) commented Apr 11, 2025:

I decided to revert nginx as the default aiohttp proxy until we figure out the root cause of the performance degradation.

I added server.py, which creates a listening socket and spawns worker processes via multiprocessing with port reuse. Performance is comparable to the gunicorn setup, so I set it as the default.
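
The pattern is roughly the following. This is a minimal sketch, not the PR's actual server.py; the handler, port, and worker count are illustrative assumptions:

import multiprocessing
import socket

from aiohttp import web


async def json_handler(request):
    return web.json_response({"message": "Hello, World!"})


def run_worker(sock):
    # Each worker process runs its own event loop and serves on the shared socket.
    app = web.Application()
    app.router.add_get("/json", json_handler)
    web.run_app(app, sock=sock, access_log=None)


if __name__ == "__main__":
    # Create one listening socket with SO_REUSEPORT (the "port reuse" above)
    # and share it with every worker, which all accept connections on it.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", 8080))

    # Passing the socket to child processes relies on the "fork" start method
    # (the Linux default), so the file descriptor is inherited, not pickled.
    workers = [
        multiprocessing.Process(target=run_worker, args=(sock,))
        for _ in range(multiprocessing.cpu_count())
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

Because each worker accepts directly on the shared socket, there is no proxy or master process on the request path.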

no proxy

./tfb --mode benchmark --test aiohttp --type json --concurrency-levels=32 --duration=30

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   428.04us    1.14ms  18.79ms   93.37%
    Req/Sec    33.68k     4.58k   51.45k    70.33%
  Latency Distribution
     50%  117.00us
     75%  208.00us
     90%  611.00us
     99%    6.52ms
  6039483 requests in 30.07s, 0.97GB read
Requests/sec: 200877.25

gunicorn

./tfb --mode benchmark --test aiohttp-gunicorn --type json --concurrency-levels=32 --duration=30;

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   437.00us    1.16ms  19.93ms   93.31%
    Req/Sec    32.77k     8.33k   58.91k    71.85%
  Latency Distribution
     50%  121.00us
     75%  214.00us
     90%  646.00us
     99%    6.61ms
  5880114 requests in 30.10s, 0.94GB read
Requests/sec: 195363.94
Transfer/sec:     32.05MB

nginx

./tfb --mode benchmark --test aiohttp-nginx --type json --concurrency-levels=32 --duration=30;

---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   315.43us  232.91us  12.23ms   89.58%
    Req/Sec    16.36k     0.92k   22.52k    69.46%
  Latency Distribution
     50%  280.00us
     75%  398.00us
     90%  526.00us
     99%    0.85ms
  2935901 requests in 30.10s, 506.78MB read
Requests/sec:  97539.85
Transfer/sec:     16.84MB

@Dreamsorcerer (Contributor) commented:

Might be worth waiting for some results first, just to verify we see the same difference in performance in the benchmarks.

@Reskov marked this pull request as draft on April 11, 2025 at 11:50
@Reskov (Contributor, Author) commented Apr 11, 2025:

Yeah, good point! Let's wait a week until the benchmark run starts and finishes. I'll convert this to a draft.

@Dreamsorcerer (Contributor) commented:

Yeah, we're seeing a 20-70% drop in the nginx benchmarks. Anyway, my original idea was to deploy without a proxy, so this is probably the best option regardless.

@Reskov marked this pull request as ready for review on April 21, 2025 at 13:31
@Dreamsorcerer (Contributor) commented:

Could maybe also try some of the things here:
https://gist.github.com/denji/8359866

But if we're trying to fine-tune nginx's performance, then it seems a bit pointless as a benchmark for aiohttp. So maybe it's better to just remove it...

@Reskov (Contributor, Author) commented Apr 22, 2025:

Yeah, we can try to tune the configuration settings, but there will always be some proxy overhead.
The gist you shared is interesting, thanks. I applied its configuration params locally and saw an RPS increase at higher concurrency, but throughput is still almost half that of the gunicorn or raw-socket configurations.
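
The directives involved look roughly like this, in nginx config syntax. This is an illustrative subset in the spirit of the gist; the upstream name, ports, and values are assumptions, not the exact config tested:

# Hypothetical tuning subset in the spirit of the linked gist.
worker_processes auto;

events {
    worker_connections 65535;
    multi_accept on;
    use epoll;
}

http {
    access_log off;
    keepalive_requests 1000000;

    upstream aiohttp_backend {
        server 127.0.0.1:8081;
        keepalive 32;  # reuse upstream connections instead of opening one per request
    }

    server {
        listen 8080 reuseport;
        location / {
            proxy_pass http://aiohttp_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # enable keep-alive to the upstream
        }
    }
}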

Also, I'd like to keep the nginx run as a separate aiohttp-nginx test, both to compare different proxy setups and because nginx is documented as the preferred setup for aiohttp (https://docs.aiohttp.org/en/stable/deployment.html#nginx-gunicorn):

... But nothing is free: running aiohttp application under gunicorn is slightly slower.
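
For reference, the deployment that quote describes runs the app under gunicorn's aiohttp worker class, invoked roughly like this (module and app names are illustrative):

gunicorn my_app:app --bind 0.0.0.0:8080 --worker-class aiohttp.GunicornWebWorker --workers 4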

It's probably worth updating the documentation, because we're seeing the opposite result 😀

Nginx local before

---------------------------------------------------------
 Running Warmup json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   331.25us  396.66us  23.69ms   96.70%
    Req/Sec    16.63k     2.08k   18.95k    89.72%
  Latency Distribution
     50%  275.00us
     75%  388.00us
     90%  522.00us
     99%    1.32ms
  2979037 requests in 30.00s, 514.23MB read
Requests/sec:  99290.99
Transfer/sec:     17.14MB
---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   406.48us  566.29us  20.41ms   96.55%
    Req/Sec    14.34k     2.65k   20.78k    67.83%
  Latency Distribution
     50%  313.00us
     75%  460.00us
     90%  647.00us
     99%    2.26ms
  2571949 requests in 30.10s, 443.96MB read
Requests/sec:  85446.46
Transfer/sec:     14.75MB
STARTTIME 1745287993
ENDTIME 1745288023
Benchmark results:
{'results': [{'endTime': 1745288023,
              'latencyAvg': '406.48us',
              'latencyMax': '20.41ms',
              'latencyStdev': '566.29us',
              'startTime': 1745287993,
              'totalRequests': 2571949}]}

Nginx local after applying the gist conf

---------------------------------------------------------
 Running Warmup json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   319.13us  307.96us  14.57ms   95.39%
    Req/Sec    16.63k     0.99k   30.65k    79.69%
  Latency Distribution
     50%  274.00us
     75%  391.00us
     90%  520.00us
     99%    0.93ms
  2981661 requests in 30.10s, 494.77MB read
Requests/sec:  99059.42
Transfer/sec:     16.44MB
---------------------------------------------------------
 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   313.05us  328.09us  13.34ms   96.50%
    Req/Sec    17.06k     1.18k   20.71k    74.50%
  Latency Distribution
     50%  269.00us
     75%  379.00us
     90%  500.00us
     99%    0.91ms
  3062869 requests in 30.10s, 508.25MB read
Requests/sec: 101757.34
Transfer/sec:     16.89MB
STARTTIME 1745289100
ENDTIME 1745289130
Benchmark results:
{'results': [{'endTime': 1745289130,
              'latencyAvg': '313.05us',
              'latencyMax': '13.34ms',
              'latencyStdev': '328.09us',
              'startTime': 1745289100,
              'totalRequests': 3062869}]}

Nginx TFB run

Run ID: 9efd8d95-b908-41b4-8635-f918fccda2aa
commit: 585fcb6a62b13e71d52a745e956f0d377b56cf2e

@Dreamsorcerer (Contributor) commented:

"compare between different proxies"

The problem is that this isn't really related to aiohttp. If every framework does that, it's going to create a huge amount of additional run time. I'd assume there are other projects that benchmark the proxies themselves and compare performance between them.

@msmith-techempower merged commit 21caa53 into TechEmpower:master on Apr 22, 2025
3 checks passed
litongjava pushed a commit to litongjava/FrameworkBenchmarks that referenced this pull request May 30, 2025