fix: prevent header duplication & guard JSON parsing in auth proxy #33
danielbrunt57 wants to merge 9 commits into alandtse:main
Conversation
Important: Review skipped (draft detected). Please check the settings in the CodeRabbit UI.
Walkthrough
Constructor extended to accept …
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Proxy as AuthCaptureProxy
    participant Upstream
    participant BG as BackgroundTaskMgr
    Client->>Proxy: Send HTTP request (headers + body)
    Proxy->>Proxy: Make shallow copy -> req_headers
    Proxy->>Proxy: Inspect req_headers Content-Type
    alt Content-Type is JSON
        Proxy->>Proxy: try parse JSON (catch JSONDecodeError/ValueError)
    else
        Proxy->>Proxy: preserve raw/form data or multipart handling
    end
    Proxy->>Upstream: Forward request with headers=req_headers and body
    Upstream-->>Proxy: Return response
    Proxy->>BG: Register/track any stop/start background task
    Proxy-->>Client: Return response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed checks
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
authcaptureproxy/auth_capture_proxy.py (1)
467-474: Catch httpx timeout exceptions. Request timeouts from httpx will bubble up uncaught and cause 500 errors. Add an exception handler for httpx.TimeoutException to return a friendly error response.

Suggested fix:

```diff
 except ClientConnectionError as ex:
     return await self._build_response(
         text=f"Error connecting to {site}; please retry: {ex}"
     )
+except httpx.TimeoutException as ex:
+    return await self._build_response(
+        text=f"Error connecting to {site}; request timed out: {ex}"
+    )
 except TooManyRedirects as ex:
     return await self._build_response(
         text=f"Error connecting to {site}; too may redirects: {ex}"
     )
```
🤖 Fix all issues with AI agents
In `@authcaptureproxy/auth_capture_proxy.py`:
- Line 6: When JSON decoding of the incoming aiohttp Request fails, the code currently drops the body. Instead, read and cache the raw bytes via `await request.read()` (safe because aiohttp caches reads) and store them in a `raw_body` variable. Then, in the dispatch branch that currently sends an empty body, call `getattr(self.session, method)(site, content=raw_body, headers=req_headers, follow_redirects=True)` so the proxied request forwards the original bytes. Update the logic around JSONDecodeError handling and the branch that checks `raw_body`/the request body to use this `raw_body` when present (referencing `request`, `raw_body`, `self.session`, `method`, `req_headers`, and `site`).
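The suggested guard can be sketched as follows. `FakeRequest` is a hypothetical stand-in for `aiohttp.web.Request`, used here only so the logic is runnable in isolation; the real change would operate on the proxy's actual request object:

```python
import asyncio
import json


class FakeRequest:
    """Hypothetical stand-in for aiohttp.web.Request (illustration only)."""

    def __init__(self, body: bytes, content_type: str) -> None:
        self._body = body
        self.content_type = content_type

    async def read(self) -> bytes:
        # aiohttp caches the body internally, so repeated reads are safe.
        return self._body


async def parse_body(request: FakeRequest):
    """Cache the raw bytes first, then attempt JSON decoding.

    On decode failure, raw_body is kept so the proxied request can
    still forward the original bytes instead of an empty body.
    """
    raw_body = await request.read()
    data = None
    if request.content_type == "application/json":
        try:
            data = json.loads(raw_body)
        except (json.JSONDecodeError, ValueError):
            data = None  # fall back to forwarding raw_body upstream
    return data, raw_body


good = asyncio.run(parse_body(FakeRequest(b'{"a": 1}', "application/json")))
bad = asyncio.run(parse_body(FakeRequest(b"not-json", "application/json")))
print(good[0])  # {'a': 1}
print(bad[0])   # None
print(bad[1])   # b'not-json'
```

With `raw_body` in hand, the dispatch branch would then pass `content=raw_body` to the session call, as the comment suggests.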
Refactor error handling for connection issues in auth_capture_proxy.py to provide more specific error messages.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@authcaptureproxy/auth_capture_proxy.py`:
- Around lines 467-478: The except blocks for ClientConnectionError, httpx.TimeoutException, and TooManyRedirects in the function using `self._build_response` have inconsistent indentation, causing a syntax error, and also contain a typo. Fix the indentation so all except clauses align with the surrounding try/except block (the three except lines and their `return await self._build_response(...)` bodies should use the same indentation level), and correct the message in the TooManyRedirects handler from "too may redirects" to "too many redirects".
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@authcaptureproxy/auth_capture_proxy.py`:
- Line 403: The call to `asyncio.create_task(self.stop_proxy(3))` can be garbage-collected. Modify the class so `__init__` defines a container such as `self._background_tasks: set[asyncio.Task] = set()`. When scheduling, use `task = asyncio.create_task(self.stop_proxy(3))` and add it to `self._background_tasks`; also add a done callback to remove the task from `self._background_tasks` (or await/clean up elsewhere) so the task reference is retained until completion and won't be GC'd. Refer to the `stop_proxy` coroutine, `__init__`, and the new `_background_tasks` attribute when making this change.
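A minimal sketch of this pattern, with placeholder class and method names rather than the proxy's actual API:

```python
import asyncio


class TaskHolderDemo:
    """Keeps strong references to background tasks so the event loop
    cannot garbage-collect them before they finish."""

    def __init__(self) -> None:
        self._background_tasks: set[asyncio.Task] = set()

    async def stop_proxy(self, delay: float) -> str:
        await asyncio.sleep(delay)
        return "stopped"

    def schedule_stop(self, delay: float) -> asyncio.Task:
        task = asyncio.create_task(self.stop_proxy(delay))
        self._background_tasks.add(task)  # retain a reference until done
        task.add_done_callback(self._background_tasks.discard)
        return task


async def main() -> str:
    demo = TaskHolderDemo()
    task = demo.schedule_stop(0)
    result = await task
    await asyncio.sleep(0)  # let the done callback run
    assert not demo._background_tasks  # reference dropped after completion
    return result


print(asyncio.run(main()))  # stopped
```

The `add_done_callback(set.discard)` idiom is the one the asyncio documentation itself recommends for fire-and-forget tasks.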
- Around lines 398-399: The regex used for path normalization mistakenly uses r"/ +" (slash + space) instead of collapsing consecutive slashes. Update the pattern to r"/+" wherever it's used (specifically in the comparisons that normalize `self._proxy_url.with_path(...)`, e.g., inside the conditional comparing `URL(str(request.url)).path` with `re.sub(...)`) so that multiple adjacent slashes are reduced to one. Apply the same change to the other two occurrences noted (the checks at the two similar comparisons referenced in the comment).
- Fix regex pattern for path normalization: r"/ +" should be r"/+".
- Add an instance variable to hold background tasks.
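The difference between the two patterns can be seen directly (the sample path is hypothetical, not taken from the proxy's routes):

```python
import re

path = "//oauth//signin"

# Buggy pattern: matches a slash followed by one or more spaces,
# so it never collapses consecutive slashes.
buggy = re.sub(r"/ +", "/", path)

# Corrected pattern: collapses runs of slashes into a single one.
fixed = re.sub(r"/+", "/", path)

print(buggy)  # //oauth//signin  (unchanged)
print(fixed)  # /oauth/signin
```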
Fixes runaway header duplication during auth flow by isolating header mutation and guarding JSON parsing.
Resolves CloudFront 500 / Amazon error page during login.
Resolves: #3308