In the following parse method I want the spider to issue a SeleniumRequest for every link on a page that matches the rules I specified in the Scrapy LinkExtractor `le`. It seems to me that no matter what wait_time I pass, it does the same thing. Am I doing something wrong? Does the wait_time argument need a wait_until to work correctly? Or something else?
def parse(self, response):
    for link in self.le.extract_links(response):
        yield SeleniumRequest(
            url=link.url,
            callback=self.parse,
            wait_time=40,
        )
def process_request(self, request, spider):
    """Process a request using the selenium driver if applicable"""
    [...]
    if request.wait_until:
        WebDriverWait(self.driver, request.wait_time).until(request.wait_until)
I don't see another reference to wait_time in the code.
It looks like wait_time is indeed ignored when no wait_until is specified. When wait_until is given, the WebDriver waits for that condition, but only for a maximum of wait_time seconds.
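To make the gating concrete, here is a minimal, self-contained sketch of that middleware logic using stand-in classes (the `FakeRequest` and `FakeWebDriverWait` names are illustrative stubs, not the real scrapy-selenium or Selenium APIs). It shows that `wait_time` is only consulted when `wait_until` is also set:

```python
class FakeRequest:
    """Stand-in for SeleniumRequest: carries wait_time and wait_until."""
    def __init__(self, wait_time=None, wait_until=None):
        self.wait_time = wait_time
        self.wait_until = wait_until


class FakeWebDriverWait:
    """Stand-in for selenium's WebDriverWait; records performed waits."""
    calls = []

    def __init__(self, driver, timeout):
        self.timeout = timeout

    def until(self, condition):
        # Record the timeout instead of actually polling a browser.
        FakeWebDriverWait.calls.append(self.timeout)


def process_request(request):
    # Mirrors the middleware snippet above: the wait (and therefore
    # wait_time) only takes effect when wait_until is set.
    if request.wait_until:
        FakeWebDriverWait(None, request.wait_time).until(request.wait_until)


process_request(FakeRequest(wait_time=40))                      # no wait_until: nothing happens
process_request(FakeRequest(wait_time=40, wait_until=lambda d: True))  # waits up to 40 s
print(FakeWebDriverWait.calls)  # [40] -- only the second request triggered a wait
```

So passing `wait_time=40` alone never reaches the `WebDriverWait` call; pairing it with a `wait_until` condition (e.g. one of Selenium's expected conditions) is what activates the timeout.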