
Conversation

@beaufour
Owner

Summary

  • Use extras parameter in getPhotos() to fetch URLs and metadata in bulk
  • Add helper functions to extract URLs from extras and download directly
  • Skip per-photo API calls when pre-fetched data is available
  • Reduces per-photo API calls from 3-4 to zero additional calls

Details

When downloading a photoset, the initial paginated getPhotos() call now requests URL data for multiple sizes (url_o, url_l, url_c, etc.) along with original_format and date_taken via the extras parameter. This eliminates the need for:

  • photo._getOutputFilename() - extension determined from URL
  • photo._getLargestSizeLabel() - URL availability indicates size availability
  • photo.save() API overhead - direct HTTP download using pre-fetched URL

Test plan

  • Run existing test suite: uv run pytest -v
  • Manual test with a small photoset to verify downloads work
  • Compare API call counts before/after with -v flag

Closes #64

🤖 Generated with Claude Code

beaufour and others added 2 commits January 25, 2026 14:15
- Use extras parameter in getPhotos() to fetch URLs and metadata in bulk
- Add helper functions to extract URLs from extras and download directly
- Skip per-photo API calls when pre-fetched data is available
- Reduces API calls from 3-4 per photo to 0 additional calls

This significantly speeds up large photoset downloads and reduces
the likelihood of hitting API rate limits.

Closes #64

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add tests for _get_url_from_extras() function
- Add tests for _get_extension_from_url() function
- Add tests for _download_file() function
- Add test for download using prefetched URL flow

Co-Authored-By: Claude Opus 4.5 <[email protected]>

Development

Successfully merging this pull request may close these issues.

Optimize downloading photosets
