README.md: 4 additions & 35 deletions
@@ -112,41 +112,7 @@ UUID which can subsequently be used to get status and results.
 
 The closer these two parameters are to the actual time range that is
 needed by analysis, the faster the response will get reported back
-after completion on the server end. Below we explain
-
-* why we have these two parameters,
-* why giving good guesses helps response in reporting results,
-* how you can get good guesses.
-
-Until we have a websocket interface so the server can directly
-pass back results without any additional action required on the server
-side, your REST API requires the client to poll for status. We have
-seen that this polling can cause a lot of overhead, if not done
-judiciously. So, each request is allowed up to 10 status probes.
-
-We have seen that _no_ analysis request will finish in less than a
-certain period of time. Since the number of probe per analysis is
-limited, it doesn't make sense to probe before the fastest
-analysis-completion time.
-
-The 10 status probes are done in geometrically increasing time
-intervals. The first interval is the shortest and the last interval is
-the longest. The response rate at the beginning is better than the
-response rate at the end, in terms of how much additional time it
-takes before the analysis completion is noticed.
-
-However this progression is not fixed. Instead, it takes into account
-the maximum amount of time you are willing to wait for a result.
-
-In other words, the shorter the short period of time you give for the
-maximum timeout, the shorter the geometric succession of the 10 probes
-allotted to an analysis request will be.
-
-To make this clear, if you only want to wait a maximum of two minutes, then
-the first delay will be 0.3 seconds, while the delay before last poll
-will be about half a minute. If on the other hand you want to wait up
-to 2 hours, then the first delay will be 9 seconds, and the last one will
-be about 15 minutes.
+after completion on the server end.
 
 Good guessing of these two parameters reduces the
 unnecessary probe time while providing good response around the declared
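The paragraphs removed above describe how the client schedules its status probes: up to 10 polls whose delays grow geometrically and are stretched or compressed to fit the maximum time you are willing to wait. Below is a minimal sketch of one way such a schedule could be computed; the growth ratio and the rule that the delays sum to the maximum timeout are illustrative assumptions, not the client's actual constants.

```python
# Illustrative sketch of a geometric polling schedule (NOT the client's
# real algorithm): 10 delays that grow by a fixed ratio and together
# add up to the caller's maximum timeout.

def poll_schedule(max_timeout_secs, probes=10, ratio=1.7):
    """Return `probes` delays forming a geometric progression that sums
    to max_timeout_secs (the ratio and summing rule are assumptions)."""
    first = max_timeout_secs * (ratio - 1) / (ratio ** probes - 1)
    return [first * ratio ** i for i in range(probes)]


if __name__ == "__main__":
    for timeout in (2 * 60, 2 * 60 * 60):  # two minutes, two hours
        delays = poll_schedule(timeout)
        print(f"timeout={timeout}s: first={delays[0]:.1f}s, last={delays[-1]:.1f}s")
```

For a two-minute timeout this toy schedule starts with a sub-second delay and ends with a delay of under a minute, which is in the same ballpark as the 0.3-second/half-minute figures quoted in the removed text; the real client's constants differ.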
@@ -170,6 +136,9 @@ If you are making an analysis within an IDE which saves reports of
 past runs, such as truffle or VSCode, the timings can be used for
 estimates.
 
+Read more about this in [Polling the API to Obtain Job Status](https://docs.mythx.io/en/latest/main/building-security-tools-on-mythx.html?polling-the-api-to-obtain-job-status) in the [MythX API Developer Guide](https://docs.mythx.io/en/latest/main/building-security-tools-on-mythx.html).
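If you poll for job status yourself, a loop like the following sketch walks through such a schedule and stops as soon as the job reports completion. The endpoint path and JSON field names here are assumptions for illustration only; the authoritative interface is documented in the MythX API Developer Guide linked above.

```python
# Hypothetical polling loop; the URL shape and response fields are assumed
# for illustration and may not match the real MythX API.
import json
import time
import urllib.request


def wait_for_result(base_url, job_uuid, delays, headers=None):
    """Sleep through each scheduled delay, probing job status in between."""
    for delay in delays:
        time.sleep(delay)
        req = urllib.request.Request(
            f"{base_url}/analyses/{job_uuid}",  # assumed status endpoint
            headers=headers or {},
        )
        with urllib.request.urlopen(req) as resp:
            status = json.load(resp).get("status")
        if status in ("Finished", "Error"):
            return status
    return "Timed out"
```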