Summary of Changes

Hello @YangSen-qn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request primarily upgrades the project version.
storagev2/retrier/retrier.go (Outdated)
```go
	}
	return DontRetry
}
} else if os.IsTimeout(unwrapedErr) {
	return RetryRequest
} else if unwrapedErr == io.EOF {
	return RetryRequest
```
Semantic Concern: Retrying io.EOF May Be Incorrect
Treating io.EOF as retryable contradicts Go's standard semantics. Per the Go docs, io.EOF signals a graceful end of input, not a failure. Normal EOFs (a server closing the connection after a complete response) will trigger unnecessary retries.
Issues:
- Retry storms: Legitimate EOFs will retry 3x per host across multiple hosts (6-12+ total attempts)
- Performance: Adds 150-300ms latency overhead per request
- Resource waste: Large uploads re-buffer on each retry
Suggestion: Consider using io.ErrUnexpectedEOF instead, or add context checking to only retry EOF during request phase (not after response headers received). If EOF retry is intentional for specific scenarios, please add a code comment explaining why.
Note: The code already handles "unexpected EOF" via string matching at line 183.
```go
} else if os.IsTimeout(unwrapedErr) {
	return RetryRequest
} else if unwrapedErr == io.EOF {
	return RetryRequest
```
Security: Potential DoS Amplification
Malicious servers can exploit this by repeatedly sending premature EOFs, forcing the client into up to 3 retries × N hosts attempts per request. For large file uploads, this amplifies bandwidth consumption and memory usage.
Mitigations needed:
- Add request body size limits for EOF retries (e.g., skip retry for requests >10MB)
- Implement exponential backoff for EOF retries instead of fixed 50-100ms delay
- Consider rate limiting or circuit breaker per endpoint
Current safeguards (context deadlines, retry limits) help but don't fully prevent resource exhaustion.
CHANGELOG Documentation Issues

The CHANGELOG is missing the main code change and contains inaccuracies. Missing: the io.EOF retry fix (commit d7ec7dc), which is the actual functional change in this release. Please update the CHANGELOG to accurately reflect all 5 commits in this release.
No description provided.