[aiohttp] - add raw setup (no-proxy) #9807
Conversation
Might be worth waiting for some results first, just to verify we see the same difference in performance in the benchmarks.
Yeah, good point! Let's wait a week until the benchmark run starts and finishes. I'll convert this to a draft.
Yeah, we're seeing a 20-70% drop in the nginx benchmarks. Anyway, my original idea was to deploy without a proxy, so this is probably the best option regardless.
Could maybe also try some things here: But if we're trying to fine-tune the performance of nginx, then it seems a bit pointless as a benchmark for aiohttp. So maybe it's better to just remove it...
Yeah, we can try to tune the configuration settings, but there will always be some proxy overhead. I would also like to keep the nginx run as a separate aiohttp-nginx test so we can compare different proxy setups, and because nginx is documented as the preferred setup for aiohttp (https://docs.aiohttp.org/en/stable/deployment.html#nginx-gunicorn).
It is probably worth updating the documentation, because we are seeing the opposite result 😀
Nginx local before
Nginx local after applying gist conf
Nginx Run TFB Run ID: 9efd8d95-b908-41b4-8635-f918fccda2aa
The problem is that this isn't really related to aiohttp. If every framework does that, it will add a huge amount of extra run time. I'd assume there are other projects that benchmark the proxies themselves and compare performance between them.
I decided to revert nginx as the default aiohttp proxy until we figure out the root cause of the performance degradation. I added server.py, which creates a socket and spawns multiple processes with port reuse. Performance is comparable to the gunicorn setup, so I set it as the default. The commands I used are below, followed by a sketch of the server.py pattern.
no proxy
./tfb --mode benchmark --test aiohttp --type json --concurrency-levels=32 --duration=30
gunicorn
./tfb --mode benchmark --test aiohttp-gunicorn --type json --concurrency-levels=32 --duration=30;
nginx
./tfb --mode benchmark --test aiohttp-nginx --type json --concurrency-levels=32 --duration=30;
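For reference, here is a minimal sketch of the pattern that comment describes (not the actual server.py from this PR): bind one listening socket with SO_REUSEPORT, then start one aiohttp worker process per CPU serving on that shared socket. It assumes a Linux host (SO_REUSEPORT and the fork start method); the handler, port, and process count are illustrative.

import multiprocessing
import socket

from aiohttp import web

PORT = 8080


async def json_handler(request):
    # Illustrative handler mirroring the TFB /json test payload.
    return web.json_response({"message": "Hello, World!"})


def make_socket():
    # One socket is bound in the parent; SO_REUSEPORT lets the kernel
    # balance accepted connections across the worker processes.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", PORT))
    sock.listen(1024)
    return sock


def worker(sock):
    app = web.Application()
    app.router.add_get("/json", json_handler)
    # Each worker runs its own event loop and serves on the shared socket.
    web.run_app(app, sock=sock, access_log=None)


if __name__ == "__main__":
    # "fork" keeps the bound socket inherited by the children (Linux-only assumption).
    multiprocessing.set_start_method("fork")
    sock = make_socket()
    workers = [
        multiprocessing.Process(target=worker, args=(sock,))
        for _ in range(multiprocessing.cpu_count())
    ]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

Run it directly (python server.py) and point the no-proxy benchmark at PORT; whether this matches the PR's actual server.py is an assumption based on the description above.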