publication metadata as well as full PDF files from **PubMed** or from preprint
**medRxiv**, **bioRxiv** and **chemRxiv**. It provides a streamlined interface to scrape metadata and comes
with simple postprocessing functions and plotting routines for meta-analysis.

Since v0.2.4 `paperscraper` also supports scraping PDF files directly! Thanks to [@daenuprobst](https://github.com/daenuprobst) for suggestions!

## Getting started
```py
medrxiv()   # Takes ~30min and should result in ~35 MB file
biorxiv()   # Takes ~1h and should result in ~350 MB file
chemrxiv()  # Takes ~45min and should result in ~20 MB file
```

*NOTE*: Once the dumps are stored, please make sure to restart the python interpreter so that the changes take effect.
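The dumps are newline-delimited JSON (`.jsonl`) files of paper metadata, one record per line. As a minimal, stdlib-only sketch of the kind of postprocessing mentioned above (this is an illustration, not part of `paperscraper`'s API, and the `title`/`abstract` field names are assumptions about the dump schema), a dump can be filtered by keyword like this:

```python
import json


def filter_dump(jsonl_path, keyword):
    """Return metadata records whose title or abstract mention `keyword`.

    Assumes each line of the dump is a JSON object with (at least)
    'title' and 'abstract' fields -- adjust to the actual schema.
    """
    matches = []
    with open(jsonl_path) as fp:
        for line in fp:
            record = json.loads(line)
            text = f"{record.get('title', '')} {record.get('abstract', '')}"
            if keyword.lower() in text.lower():
                matches.append(record)
    return matches
```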
*NOTE*: If you experience API connection issues (`ConnectionError`): since v0.2.12 requests are retried automatically, and you can raise the retry count from its default of 10, as in `biorxiv(max_retries=20)`. Thanks to [@memray](https://github.com/memray) for the contribution!
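The retry behavior can be pictured with a small stdlib-only sketch (a simplified illustration of the idea, not `paperscraper`'s actual implementation; the exponential backoff is an assumption):

```python
import time


def with_retries(fetch, max_retries=10, base_delay=0.1):
    """Call `fetch()` until it succeeds, retrying on ConnectionError.

    Sleeps base_delay * 2**attempt between attempts and re-raises the
    last error once max_retries attempts have failed. Illustration only.
    """
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Raising `max_retries` simply gives a flaky endpoint more chances to respond before the error is surfaced.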
Since v0.2.5 `paperscraper` also allows scraping {med/bio/chem}rxiv for specific dates! Thanks to [@achouhan93](https://github.com/achouhan93) for contributions!