This project provides a set of helper utilities that pull live data directly from USA Cycling to determine potential upgrade eligibility for cyclists. It can run against live data, or dump raw results and re-use them from a cache on later runs.
- Fetch race results from USA Cycling via live endpoints
- Analyze results to determine upgrade points
- Generate detailed reports on upgrade eligibility
This project uses `uv` for dependency management and installation. To install all required dependencies based on `pyproject.toml` and `uv.lock`, run:

```bash
uv venv
uv sync
```

This will create a virtual environment and install all locked dependencies.
This script uses a Click-based CLI. To run it, use:
```bash
python main.py --athlete_name "Firstname Lastname" --cat "4" --lookback 12mo
```

You can also use environment variables or a `.env` file to supply these options:

- `ATHLETE_NAME`
- `CATEGORY`
- `LOOKBACK`
- `DISCIPLINE`
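For example, a `.env` file might look like the following; the values shown here are placeholders for illustration only:

```
ATHLETE_NAME="Firstname Lastname"
CATEGORY="4"
LOOKBACK="12mo"
DISCIPLINE="cx"
```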
Below is a summary of all the available command-line parameters:
- `--athlete_name`
  The name of the athlete to search for on the USA Cycling results website.
  Default: Sourced from the `ATHLETE_NAME` environment variable if not provided.
- `--category` / `--cat`
  The category filter to apply (allowed choices: `"1"`, `"2"`, `"3"`, `"4"`, `"5"`).
  Default: Sourced from the `CATEGORY` environment variable if not provided.
- `--lookback`
  Specifies a lookback period for filtering results. Accepts flexible formats such as `2y`, `36mo`, etc.
  Default: Sourced from the `LOOKBACK` environment variable if not provided.
  Callback: Processes the input to compute the date from which results should be considered (see the sketch after this list).
- `--discipline`
  Determines the racing discipline to filter the results. Allowed choices include `cx`, `road`, and `cyclocross`.
  Default: Sourced from the `DISCIPLINE` environment variable if not provided.
  Callback: Converts input appropriately (e.g., `"cyclocross"` becomes `"cx"`).
- `--dump`
  A flag to enable dumping of raw results to JSON files.
  Usage: Include `--dump` to trigger this behavior.
- `--use-cached`
  A flag to instruct the script to use cached results instead of scraping new data.
  Usage: Include `--use-cached` to trigger this behavior.
  Note: Cannot be used simultaneously with `--dump`.
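To illustrate how the `--lookback` and `--discipline` callbacks described above might be implemented, here is a minimal Click sketch. The function names and parsing rules are assumptions for illustration, not code from this repository:

```python
from datetime import datetime, timedelta

import click


def parse_lookback(ctx, param, value):
    """Turn a flexible lookback string (e.g. '2y', '36mo') into a cutoff datetime."""
    if value is None:
        return None
    if value.endswith("mo"):
        days = int(value[:-2]) * 30   # approximate month length is enough for filtering
    elif value.endswith("y"):
        days = int(value[:-1]) * 365
    else:
        raise click.BadParameter("expected a value like '12mo' or '2y'")
    return datetime.now() - timedelta(days=days)


def normalize_discipline(ctx, param, value):
    """Map aliases such as 'cyclocross' onto the canonical 'cx' code."""
    return {"cyclocross": "cx"}.get(value, value)


@click.command()
@click.option("--lookback", envvar="LOOKBACK", callback=parse_lookback)
@click.option("--discipline", envvar="DISCIPLINE",
              type=click.Choice(["cx", "road", "cyclocross"]),
              callback=normalize_discipline)
def cli(lookback, discipline):
    cutoff = lookback.strftime("%Y-%m-%d") if lookback else "the beginning of time"
    click.echo(f"Considering {discipline} results since {cutoff}")


if __name__ == "__main__":
    cli()
```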
- Scrape live racing results for a single athlete.
- Structure the results into a DataFrame.
- For each row, navigate to the hyperlink and scrape additional details.
- Union the newly scraped data with the original DataFrame.
- Filter results based on lookback period, category, and other criteria (a sketch of this step appears after this list).
- Apply an upgrade-points algorithm to each remaining result:
- Evaluate conditions to compute upgrade points.
- Sum the upgrade points and compare with the upgrade threshold.
- Output a final summary:
- From-to categories
- Time period analyzed
- Total upgrade points obtained
- Upgrade point threshold
- Races that did and did not contribute to the total points
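A rough sketch of the filter-and-sum step, assuming hypothetical column names, sample rows, and a placeholder threshold (the project's actual schema and thresholds may differ):

```python
import pandas as pd

# Hypothetical schema and sample data for illustration only.
results = pd.DataFrame({
    "race_date": pd.to_datetime(["2024-09-15", "2024-10-06", "2023-01-08"]),
    "category": ["4", "4", "4"],
    "upgrade_points": [3, 2, 5],
})

cutoff = pd.Timestamp("2024-01-01")     # e.g. computed from --lookback
eligible = results[(results["race_date"] >= cutoff) & (results["category"] == "4")]

total_points = eligible["upgrade_points"].sum()
threshold = 20                          # placeholder upgrade threshold

print(f"Total upgrade points: {total_points} (threshold: {threshold})")
print("Upgrade eligible!" if total_points >= threshold else "Not yet eligible.")
```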
```mermaid
flowchart TD
    A[Scrape athlete's results] --> B[Structure into DataFrame]
    B --> C[For each result, navigate hyperlink & scrape details]
    C --> D[Union: combine detailed data with DataFrame]
    D --> E[Filter results by lookback & category]
    E --> F[Apply upgrade points logic]
    F --> G[Sum points & compare to threshold]
    G --> H[Generate final output]
```
For any questions or issues, please open an issue on GitHub.