
Commit b4ca994

Merge pull request #75 from ConsenSys/README-link-to-polling-doc
Reduce verbiage on polling in the README...
2 parents 0fa9d86 + a0f64ee


README.md

Lines changed: 4 additions & 35 deletions
```diff
@@ -112,41 +112,7 @@ UUID which can subsequently be used to get status and results.
 
 The closer these two parameters are to the actual time range that is
 needed by analysis, the faster the response will get reported back
-after completion on the server end. Below we explain
-
-* why we have these two parameters,
-* why giving good guesses helps response in reporting results,
-* how you can get good guesses.
-
-Until we have a websocket interface so the server can directly
-pass back results without any additional action required on the server
-side, your REST API requires the client to poll for status. We have
-seen that this polling can cause a lot of overhead, if not done
-judiciously. So, each request is allowed up to 10 status probes.
-
-We have seen that _no_ analysis request will finish in less than a
-certain period of time. Since the number of probe per analysis is
-limited, it doesn't make sense to probe before the fastest
-analysis-completion time.
-
-The 10 status probes are done in geometrically increasing time
-intervals. The first interval is the shortest and the last interval is
-the longest. The response rate at the beginning is better than the
-response rate at the end, in terms of how much additional time it
-takes before the analysis completion is noticed.
-
-However this progression is not fixed. Instead, it takes into account
-the maximum amount of time you are willing to wait for a result.
-
-In other words, the shorter the short period of time you give for the
-maximum timeout, the shorter the geometric succession of the 10 probes
-allotted to an analysis request will be.
-
-To make this clear, if you only want to wait a maximum of two minutes, then
-the first delay will be 0.3 seconds, while the delay before last poll
-will be about half a minute. If on the other hand you want to wait up
-to 2 hours, then the first delay will be 9 seconds, and the last one will
-be about 15 minutes.
+after completion on the server end.
 
 Good guessing of these two parameters reduces the
 unnecessary probe time while providing good response around the declared
@@ -170,6 +136,9 @@ If you are making an analysis within an IDE which saves reports of
 past runs, such as truffle or VSCode, the timings can be used for
 estimates.
 
+Read more about this [Polling the API to Obtain Job Status](https://docs.mythx.io/en/latest/main/building-security-tools-on-mythx.html?polling-the-api-to-obtain-job-status) in the [MythX API Developer Guide](https://docs.mythx.io/en/latest/main/building-security-tools-on-mythx.html).
+
+
 # See Also
 
 * [example directory](https://github.com/ConsenSys/armlet/tree/master/example)
```
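
The paragraphs removed above describe how armlet spreads a fixed budget of 10 status probes over geometrically increasing delays, scaled to the caller's maximum wait time. As a rough illustration of that idea (not armlet's actual implementation), the sketch below computes such a delay schedule and uses it in a polling loop; `pollDelays`, `pollUntilDone`, the 1.75 growth ratio, and the `'Finished'` status string are illustrative assumptions rather than armlet's API.

```typescript
// Minimal sketch (not armlet's code) of the scheme the removed text describes:
// split a caller-supplied maximum wait time into 10 geometrically increasing
// poll delays, so early probes come quickly and later probes are spaced
// farther apart. The 1.75 growth ratio is an assumed value.
function pollDelays(maxWaitMs: number, probes = 10, ratio = 1.75): number[] {
  // Choose the first delay so that all `probes` delays sum to roughly
  // maxWaitMs (geometric series sum: first * (ratio^probes - 1) / (ratio - 1)).
  const first = (maxWaitMs * (ratio - 1)) / (Math.pow(ratio, probes) - 1);
  return Array.from({ length: probes }, (_, i) => first * Math.pow(ratio, i));
}

// Hypothetical polling loop: wait out each delay, then probe a status
// endpoint; stop as soon as the analysis reports completion or the
// 10-probe budget is exhausted.
async function pollUntilDone(
  getStatus: () => Promise<string>,
  maxWaitMs: number
): Promise<boolean> {
  for (const delayMs of pollDelays(maxWaitMs)) {
    await new Promise(resolve => setTimeout(resolve, delayMs));
    if ((await getStatus()) === 'Finished') {
      return true;
    }
  }
  return false; // probe budget exhausted before the analysis finished
}
```

The exact figures quoted in the removed text (0.3 seconds up to about half a minute for a two-minute budget, 9 seconds up to about 15 minutes for a two-hour budget) come from armlet's own formula; the sketch only reproduces the overall shape of the progression, with the whole schedule stretching or shrinking to fit the declared timeout.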
