Updated constraints due to security reasons (triggered on 2025-12-15T12:13:24+00:00 by 362fc1e375bd42010637a3f6a07ef075a7285b23) #10
Commits:

- Fixed dependency issues for Python 3.10
- Fixed dependency issues for Python 3.11
- Fixed dependency issues for Python 3.12
- Fixed dependency issues for Python 3.13

The updated constraints correspond to the following two urllib3 security advisories.
## Unbounded number of links in the decompression chain

urllib3 supports decoding responses whose `Content-Encoding` header declares a chain of multiple encodings (e.g., `Content-Encoding: gzip, zstd`). However, the number of links in the decompression chain was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps, leading to high CPU usage and massive memory allocation for the decompressed data.

### Affected usages

Applications and libraries using urllib3 version 2.5.0 and earlier for HTTP requests to untrusted sources, unless they disable content decoding explicitly.

### Remediation

Upgrade to at least urllib3 v2.6.0, in which the library limits the number of links in the chain to 5. If upgrading is not immediately possible, use `preload_content=False` and ensure that `resp.headers["content-encoding"]` contains a safe number of encodings before reading the response content.
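As an illustration of this interim workaround (a sketch, not code from the advisory; the URL and the `MAX_ENCODING_LINKS` name are placeholders), the header check might look like this:

```python
# Sketch of the pre-2.6.0 workaround described above. The limit of 5
# mirrors the cap enforced by patched urllib3; adjust to your own policy.
import urllib3

MAX_ENCODING_LINKS = 5  # matches the limit applied by urllib3 v2.6.0+

http = urllib3.PoolManager()
resp = http.request("GET", "https://example.com/data", preload_content=False)
try:
    # Content-Encoding lists chained encodings separated by commas,
    # e.g. "gzip, zstd" means zstd was applied on top of gzip.
    encodings = [
        enc.strip()
        for enc in resp.headers.get("content-encoding", "").split(",")
        if enc.strip()
    ]
    if len(encodings) > MAX_ENCODING_LINKS:
        raise ValueError(f"suspicious decompression chain: {encodings}")
    body = resp.read()  # safe to decode: chain length already checked
finally:
    resp.release_conn()
```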
## Excessive decompression of streamed response data

urllib3 transparently decodes response bodies according to the `Content-Encoding` header (e.g., `gzip`, `deflate`, `br`, or `zstd`). The library must read compressed data from the network and decompress it until the requested chunk size is met. Any resulting decompressed data that exceeds the requested amount is held in an internal buffer for the next read operation. The decompression logic could cause urllib3 to fully decode a small amount of highly compressed data in a single operation. This can result in excessive resource consumption (high CPU usage and massive memory allocation for the decompressed data; CWE-409) on the client side, even if the application only requested a small chunk of data.

### Affected usages

Applications and libraries using urllib3 version 2.5.0 and earlier to stream large compressed responses or content from untrusted sources. `stream()`, `read(amt=256)`, `read1(amt=256)`, `read_chunked(amt=256)`, and `readinto(b)` are examples of `urllib3.HTTPResponse` method calls using the affected logic, unless decoding is disabled explicitly.
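To make the affected pattern concrete, here is a hedged sketch (URLs are placeholders) contrasting a small read on urllib3 <= 2.5.0 with the explicit opt-out of passing `decode_content=False`, which returns the raw compressed bytes and leaves decompression to the caller:

```python
# Sketch contrasting an affected call pattern with the explicit opt-out.
import urllib3

http = urllib3.PoolManager()

# Affected on urllib3 <= 2.5.0: even a 256-byte read may decompress a
# highly compressed payload in full before returning the small chunk.
resp = http.request("GET", "https://example.com/archive", preload_content=False)
chunk = resp.read(amt=256)
resp.release_conn()

# Opt-out: decode_content=False yields the compressed bytes as-is, so the
# client controls how (and whether) decompression happens.
resp = http.request("GET", "https://example.com/archive", preload_content=False)
for raw in resp.stream(amt=65536, decode_content=False):
    ...  # handle compressed bytes yourself (e.g., with a bounded decoder)
resp.release_conn()
```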
### Remediation

Upgrade to at least urllib3 v2.6.0, in which the library avoids decompressing data that exceeds the requested amount. If your environment contains a package facilitating the Brotli encoding, upgrade to at least Brotli 1.2.0 or brotlicffi 1.2.0.0 as well. These versions are enforced by the `urllib3[brotli]` extra in the patched versions of urllib3.
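For environments where the upgrade needs to be verified, a minimal sketch (assuming Python 3.8+ for `importlib.metadata` and simple dotted numeric version strings) could assert the patched minimums:

```python
# Minimal sketch: assert installed versions meet the patched minimums.
# Assumes plain dotted numeric versions (no pre-release suffixes).
from importlib.metadata import PackageNotFoundError, version

MINIMUMS = {"urllib3": (2, 6, 0), "Brotli": (1, 2, 0), "brotlicffi": (1, 2, 0, 0)}

for pkg, floor in MINIMUMS.items():
    try:
        installed = tuple(int(p) for p in version(pkg).split(".")[: len(floor)])
    except PackageNotFoundError:
        continue  # package absent; the Brotli packages are optional
    if installed < floor:
        raise RuntimeError(f"{pkg} {version(pkg)} is below the patched minimum {floor}")
```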
### Credits

The issue was reported by @Cycloctane. Supplemental information was provided by @stamparm during a security audit performed by 7ASecurity and facilitated by OSTIF.