@github-actions (bot) commented Dec 8, 2025

Fixed dependency issues for Python 3.10

| Name | Version | ID | Fix Versions |
|------|---------|----|--------------|
| urllib3 | 2.5.0 | CVE-2025-66418 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2025-66471 | 2.6.0 |

### CVE-2025-66418

**Impact.** urllib3 supports chained HTTP encoding algorithms for response content according to RFC 9110 (e.g., `Content-Encoding: gzip, zstd`). However, the number of links in the decompression chain was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps, leading to high CPU usage and massive memory allocation for the decompressed data.

**Affected usages.** Applications and libraries using urllib3 version 2.5.0 and earlier for HTTP requests to untrusted sources, unless they disable content decoding explicitly.

**Remediation.** Upgrade to at least urllib3 v2.6.0, in which the library limits the number of links to 5. If upgrading is not immediately possible, use `preload_content=False` and ensure that `resp.headers["content-encoding"]` contains a safe number of encodings before reading the response content.

### CVE-2025-66471

**Impact.** urllib3's streaming API is designed for efficient handling of large HTTP responses by reading the content in chunks rather than loading the entire response body into memory at once. When streaming a compressed response, urllib3 can perform decoding or decompression based on the HTTP `Content-Encoding` header (e.g., `gzip`, `deflate`, `br`, or `zstd`). The library must read compressed data from the network and decompress it until the requested chunk size is met; any resulting decompressed data that exceeds the requested amount is held in an internal buffer for the next read operation. The decompression logic could cause urllib3 to fully decode a small amount of highly compressed data in a single operation. This can result in excessive resource consumption (high CPU usage and massive memory allocation for the decompressed data; CWE-409) on the client side, even if the application only requested a small chunk of data.

**Affected usages.** Applications and libraries using urllib3 version 2.5.0 and earlier to stream large compressed responses or content from untrusted sources. `stream()`, `read(amt=256)`, `read1(amt=256)`, `read_chunked(amt=256)`, and `readinto(b)` are examples of `urllib3.HTTPResponse` method calls using the affected logic, unless decoding is disabled explicitly.

**Remediation.** Upgrade to at least urllib3 v2.6.0, in which the library avoids decompressing data that exceeds the requested amount. If your environment contains a package providing Brotli encoding, also upgrade to at least Brotli 1.2.0 or brotlicffi 1.2.0.0; these versions are enforced by the `urllib3[brotli]` extra in the patched versions of urllib3.

**Credits.** The issue was reported by @Cycloctane. Supplemental information was provided by @stamparm during a security audit performed by 7ASecurity and facilitated by OSTIF.
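The interim header check suggested in the CVE-2025-66418 remediation can be sketched as follows. `MAX_ENCODINGS` and `encoding_chain_length` are illustrative names chosen here, not part of urllib3's API; the commented-out request code is shown for context only, since it requires a live connection.

```python
MAX_ENCODINGS = 5  # mirrors the chain limit urllib3 v2.6.0 enforces


def encoding_chain_length(content_encoding: str) -> int:
    """Count the codings listed in a Content-Encoding header value.

    e.g. "gzip, zstd" -> 2; an empty or missing header -> 0.
    """
    return len([c for c in content_encoding.split(",") if c.strip()])


# Hedged usage with urllib3 (network call, shown for illustration only):
#
# resp = urllib3.request("GET", url, preload_content=False)
# chain = resp.headers.get("Content-Encoding", "")
# if encoding_chain_length(chain) > MAX_ENCODINGS:
#     resp.release_conn()  # refuse to decode a suspicious chain
# else:
#     body = resp.read()
```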
| Name | Skip Reason |
|------|-------------|
| py-tes | URL requirements cannot be pinned to a specific package version |

Fixed dependency issues for Python 3.11

The findings and skip reasons are identical to those listed for Python 3.10 above.

| Name | Version | ID | Fix Versions |
|------|---------|----|--------------|
| urllib3 | 2.5.0 | CVE-2025-66418 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2025-66471 | 2.6.0 |

| Name | Skip Reason |
|------|-------------|
| py-tes | URL requirements cannot be pinned to a specific package version |

Fixed dependency issues for Python 3.12

The findings and skip reasons are identical to those listed for Python 3.10 above.

| Name | Version | ID | Fix Versions |
|------|---------|----|--------------|
| urllib3 | 2.5.0 | CVE-2025-66418 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2025-66471 | 2.6.0 |

| Name | Skip Reason |
|------|-------------|
| py-tes | URL requirements cannot be pinned to a specific package version |

Fixed dependency issues for Python 3.13

The findings and skip reasons are identical to those listed for Python 3.10 above.

| Name | Version | ID | Fix Versions |
|------|---------|----|--------------|
| urllib3 | 2.5.0 | CVE-2025-66418 | 2.6.0 |
| urllib3 | 2.5.0 | CVE-2025-66471 | 2.6.0 |

| Name | Skip Reason |
|------|-------------|
| py-tes | URL requirements cannot be pinned to a specific package version |

@Cycloctane

Your action PRs are triggering spam notifications. Please avoid this.

@github-actions (bot) force-pushed the `create-pull-request/patch-audit-constraints` branch from 4965caa to cfae4a1 on December 15, 2025 at 12:13
@github-actions (bot) changed the title from "Updated constraints due security reasons (triggered on 2025-12-08T12:12:20+00:00 by 362fc1e375bd42010637a3f6a07ef075a7285b23)" to "Updated constraints due security reasons (triggered on 2025-12-15T12:13:24+00:00 by 362fc1e375bd42010637a3f6a07ef075a7285b23)" on Dec 15, 2025