Replies: 3 comments 1 reply
-
Would you be able to provide a sample Node.js project so we can try to reproduce this?
-
Aren't you using a _list endpoint or a JS reduce?
-
Seems to be abandoned, so just leaving this here: CouchDB does not do any enqueueing of requests as described by the OP. There are multiple ways to explain this behavior, but we would need the client source for that.
-
When doing parallel requests to CouchDB, it seems to wait before sending all the responses. I get an acceptable response time with 1 request (about 600 ms), whereas when I do 10 parallel requests (small reads), all answers return within the same time frame (all requests return in 3.5 seconds; even the fastest takes 3.5 seconds).
Description
I have a Node.js backend application that connects to a CouchDB database.
I then have an API route that reads a small object via 2 separate requests to 2 databases (each object is a JSON document with about 3 values, so very small).
When I do only 1 concurrent request on this route, it behaves normally (within acceptable time frames): the request takes about 600 ms to complete.
When I do 10 concurrent requests on this route, something seems off: all requests take (almost) the same time (within a 200 ms range), about 3.6 seconds.
To make sure the issue is CouchDB and not Node.js, I created a "simple route" that responds "OK" directly without contacting CouchDB. When doing 10 concurrent requests on this route, each request takes about 200 to 700 ms to execute. The histogram of the request times looks fine.
I also tried scaling out the Node.js backend and the behavior was the same.
Steps to Reproduce
I use https://github.com/rakyll/hey to benchmark my Node.js backend.
I have a simple Express.js app that uses nano, the official Node.js library, to contact CouchDB.
I then created a simple POST route that does 2 separate queries to fetch 2 simple objects with about 3 to 4 key-value pairs each.
I then queried my Node.js backend with
hey -n 10 -c 10 -q 1 -m POST <url>
which means 10 concurrent requests with no more than 1 request at a time per worker (of which there are 10).
I tried setting the {nodelay, true} parameter in the socket_options of httpd and chttpd in the CouchDB config, but it did not change anything. The CouchDB config is the default one of the latest official Docker image; to my knowledge, almost everything is commented out.
To summarise:
Expected Behaviour
I would expect a spread in my response times when doing multiple concurrent requests. It's understandable that they take more time overall, but the fastest request should theoretically still be around 600 ms.
It should not take longer for the fastest reply to arrive (I think CouchDB is queuing responses before sending them) whether there are 1, 10, 20 or even 100 concurrent requests.
Your Environment
All the requests were executed on a VM running Ubuntu 18.04.
The CouchDB version is 3.1.0; it was also tested with version 2.3.
Node.js and CouchDB run on the same VM, which has 8 cores and 4 GB of RAM.
The disk latencies and network latencies were monitored during testing and nothing was out of the ordinary.
When doing 10 requests, the CouchDB CPU usage spikes to around 720% (7.2 cores used), and only once it returns to normal (3% CPU usage) do I get the results. I do not think it should be this CPU-intensive for only 10 concurrent requests.
Feel free to ask for more details. The website that will be using this DB needs to support at least 100 req/s, which does not seem possible given these benchmark results.