This is designed to scrape the data from the GitHub [dependency graph](https://g
1. Clone the repo
2. Run `yarn` - Installs dependencies
3. Run `npx tsc` - Compiles `index.ts`
4. Create a blank `dependents.json` file containing only an empty JSON object `{}` (see the `echo` one-liner in the sketch at the end of this section)
5. Run the scraper: `node index.js repoOwner/repo dependents.json`
- The command-line arguments for the scraper are as follows:
   1. (`githubOwnerAndRepo`) `repoOwner/repo` - The owner/repo path shown in the GitHub URL when on the repo page, e.g. for this repo it would be `spacesailor24/github-dependents-scraper`
   2. (`dependentsFile`) `anything.json` - The file can be named anything, but it must contain valid JSON and end with the `.json` extension
   3. (`resumeCrawl`) `true` or `false` - GitHub will eventually rate-limit the crawler; this flag resumes the crawl from where it left off before hitting the rate-limit page. Do not pass it when first starting a scrape; pass `true` only when continuing from an incomplete run.
- So if the crawler dies because of rate limiting, you'd start it up again with:
```bash
node index.js repoOwner/repo dependents.json true
```
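
Putting the steps together, a fresh end-to-end run might look like the following sketch. The clone URL is inferred from the repo name above, and the `echo` line is just one way to create the blank file required by step 4:

```bash
# Install, compile, and start a fresh crawl (steps 1-5 above)
git clone https://github.com/spacesailor24/github-dependents-scraper.git
cd github-dependents-scraper
yarn                        # install dependencies
npx tsc                     # compile index.ts to index.js
echo '{}' > dependents.json # blank dependents file holding an empty object
node index.js spacesailor24/github-dependents-scraper dependents.json
```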