[fix] Fixed broken settings and issues with drf-yasg#9
Conversation
nemesifier
commented
Apr 25, 2026
- Fixed exclude pattern
- Fixed various issues with settings not working as expected
- Simplified get_setting
- Excluded swagger-ui* files (from drf-yasg)
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 26 minutes and 25 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the … We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration — Configuration used: Organization UI · Review profile: ASSERTIVE · Plan: Pro · Run ID: …
📒 Files selected for processing (9)
📝 Walkthrough

This PR refactors the configuration system for the minicompress plugin. The main changes include: (1) modifications to the …

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
⚠️ Caution: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error)
✅ Passed checks (4 passed)
```python
content = self._read_file_content(path)
if content is None:
    continue
processed_count += 1
```
CRITICAL: Logic error in `processed_count` placement. The increment happens BEFORE content decoding (lines 106-109). If a file fails to decode (`UnicodeDecodeError`), it is still counted toward `max_files` but no minification occurs.

Files that fail decoding should NOT count toward the limit. Move `processed_count += 1` to after the successful minification (after line 128, where `minified_files[path] = minified_path` is set).
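The suggested fix can be sketched in isolation. Here `read` and `minify` are hypothetical stand-ins for the storage helpers, injected as callables so the counting logic is visible on its own; this is a sketch of the review's suggestion, not the plugin's actual code:

```python
def process_minification(paths, read, minify, max_files=1000):
    """Sketch: a file counts toward max_files only after it has been
    successfully minified, so unreadable or undecodable files do not
    consume the processing budget."""
    minified_files = {}
    processed_count = 0
    for path in paths:
        if processed_count >= max_files:
            break
        content = read(path)
        if content is None:
            continue  # unreadable: never counted
        try:
            minified_files[path] = minify(content)
        except UnicodeDecodeError:
            continue  # decode failure: not counted either
        processed_count += 1  # moved here, after successful minification
    return minified_files
```

With this placement, a run over many broken files still processes up to `max_files` good ones instead of exhausting the cap on failures.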
Code Review Summary

Status: 1 Issue Found | Recommendation: Consider fixing before merge

Overview
Issue Details (click to expand): WARNING

Incremental Review (7ba2e9f..41c9599): No new changes since last review. Same diff as previous commit.
Note: The `processed_count` placement issue in …
Files Reviewed (5 files)

Reviewed by kimi-k2.5 · 299,067 tokens
Coverage Report for CI Build 24939046595

Coverage increased (+1.1%) to 93.575%.

Details:
- Uncovered Changes: No uncovered changes found.
- Coverage Regressions: 22 previously-covered lines in 2 files lost coverage.
- Coverage Stats

💛 - Coveralls
Force-pushed 1e151ef to 9e02542 (Compare)
Force-pushed 0e4ab3e to 13b6d41 (Compare)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
README.rst (1)
149-150: ⚠️ Potential issue | 🟡 Minor: Stale exclusion summary in "Supported File Types".

This sentence still lists only `*.min.*` and `*-min.*`, but the documented defaults at lines 122-124 now also include `*swagger-ui-*`, `*.gz`, `*.br`, and `*.zip`. Either drop the duplicate enumeration here or update it to match.

📝 Proposed fix

```diff
-Files matching ``*.min.*`` or ``*-min.*`` patterns are excluded from
-processing.
+Files matching ``MINICOMPRESS_EXCLUDE_PATTERNS`` (e.g., ``*.min.*``,
+``*-min.*``, ``*swagger-ui*``, ``*.gz``, ``*.br``, ``*.zip``) are
+excluded from processing.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.rst` around lines 149-150, update the stale exclusion sentence "Files matching ``*.min.*`` or ``*-min.*`` patterns are excluded from processing." in README.rst so it matches the documented defaults (include ``*swagger-ui-*``, ``*.gz``, ``*.br``, and ``*.zip``) or remove the duplicate sentence entirely; locate the sentence text and either expand it to list the full set of default exclusions or replace it with a short pointer like "See the default exclusions above" to avoid duplication.

django_minify_compress_staticfiles/storage.py (2)
99-103: 🧹 Nitpick | 🔵 Trivial: Inconsistent semantics for `processed_count` between minify and compress paths.

In `process_minification` the counter is incremented immediately after a successful read (line 103) — i.e., it caps attempted files. In `process_compression` the counter is incremented only after gzip+brotli have completed (line 185) — i.e., it caps successful files (an exception inside the `try` skips the increment). The new test `test_max_files_limit_applies_to_attempted_not_reduced` codifies the new minify semantics, but the compress path was not changed to match.

Pick one definition for "processed" (most projects want "attempted", since the limit is a guardrail against runaway work) and apply it consistently — otherwise users tuning `MAX_FILES_PER_RUN` will see different effective behavior depending on which mixin runs.

Also applies to: 183-185
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@django_minify_compress_staticfiles/storage.py` around lines 99 - 103, The processed_count semantics are inconsistent: process_minification increments processed_count immediately after a successful _read_file_content (counting attempted files) while process_compression only increments after gzip/brotli succeed (counting only fully successful files); change process_compression to match process_minification by incrementing processed_count right after a successful _read_file_content call (before performing gzip/brotli and before any try/except that could skip increments), ensuring both process_minification and process_compression treat "processed" as attempted files and use the same processed_count variable consistently.
138-189: 🧹 Nitpick | 🔵 Trivial: Hoist per-file `get_setting` reads out of the compression loop.

`get_setting("GZIP_COMPRESSION")` and `get_setting("BROTLI_COMPRESSION")` are read once per file in the loop, even though they are global toggles. Read them once before the loop (and reuse the value at line 142), eliminating both the redundant `getattr` to Django settings and the function-call overhead per file.

Same pattern for `_read_file_content`'s `get_setting("MAX_FILE_SIZE")` (line 196), which is hit on every file read.

♻️ Proposed fix sketch

```diff
 def process_compression(self, paths, allow_min=False):
     """Process compression for given paths."""
     if not get_setting("ENABLED"):
         return {}
-    if not (get_setting("GZIP_COMPRESSION") or get_setting("BROTLI_COMPRESSION")):
+    gzip_enabled = bool(get_setting("GZIP_COMPRESSION"))
+    brotli_enabled = bool(get_setting("BROTLI_COMPRESSION"))
+    if not (gzip_enabled or brotli_enabled):
         return {}
     compressed_files = {}
-    max_files = get_setting("MAX_FILES_PER_RUN") or 1000
+    max_files = get_setting("MAX_FILES_PER_RUN")
     processed_count = 0
     ...
-    if get_setting("GZIP_COMPRESSION"):
+    if gzip_enabled:
         ...
-    if get_setting("BROTLI_COMPRESSION"):
+    if brotli_enabled:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@django_minify_compress_staticfiles/storage.py` around lines 138 - 189, In process_compression, hoist global toggles by calling get_setting("GZIP_COMPRESSION"), get_setting("BROTLI_COMPRESSION"), and get_setting("MAX_FILES_PER_RUN") once before the loop (e.g., assign gzip_enabled, brotli_enabled, max_files) and reuse those variables instead of calling get_setting(...) per file; likewise, ensure _read_file_content reads get_setting("MAX_FILE_SIZE") once (or accept max_file_size as an argument) so per-file logic in process_compression and the helper _read_file_content uses the cached values (refer to process_compression, gzip_compress, brotli_compress, and _read_file_content to locate the reads) and remove the redundant get_setting calls inside the loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@django_minify_compress_staticfiles/conf.py`:
- Around line 53-60: The validate_settings() function currently only checks
value <= 0 and is type-blind; update it to explicitly ensure each setting is an
int and > 0 (raise ImproperlyConfigured with a clear message when not an int or
when <= 0) for the existing names ("MIN_FILE_SIZE", "MAX_FILE_SIZE") and extend
the validated set to include "MAX_FILES_PER_RUN" and the compression level
settings (the COMPRESSION_LEVEL_* keys used elsewhere), using get_setting() to
fetch values and referencing ImproperlyConfigured on failure; this ensures
floats, bools, and non-numeric strings produce a clear error rather than
TypeError or silent acceptance.
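A minimal sketch of the stricter validation described above. The `ImproperlyConfigured` class here is a stand-in for `django.core.exceptions.ImproperlyConfigured`, and the exact `COMPRESSION_LEVEL_*` key names are assumptions; the real code would use the plugin's own setting keys:

```python
class ImproperlyConfigured(Exception):
    """Stand-in for django.core.exceptions.ImproperlyConfigured."""

POSITIVE_INT_SETTINGS = (
    "MIN_FILE_SIZE",
    "MAX_FILE_SIZE",
    "MAX_FILES_PER_RUN",
    "COMPRESSION_LEVEL_GZIP",    # assumed names for the
    "COMPRESSION_LEVEL_BROTLI",  # COMPRESSION_LEVEL_* settings
)

def validate_settings(get_setting):
    """Each listed setting must be an int (bool is rejected explicitly,
    since bool subclasses int) and strictly positive."""
    for name in POSITIVE_INT_SETTINGS:
        value = get_setting(name)
        if isinstance(value, bool) or not isinstance(value, int):
            raise ImproperlyConfigured(
                f"{name} must be a positive integer, "
                f"got {type(value).__name__}: {value!r}"
            )
        if value <= 0:
            raise ImproperlyConfigured(f"{name} must be > 0, got {value}")
```

The explicit `isinstance(value, bool)` check matters because `isinstance(True, int)` is `True` in Python, so a bare int check would silently accept booleans.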
- Around line 32-34: Replace the pattern string "*swagger-ui-*" in the exclusion
list in conf.py with "*swagger-ui*" so that files like "swagger-ui.css" and
"swagger-ui.js" are matched; locate the exclusion list containing the literal
"*swagger-ui-*" entry and update it to "*swagger-ui*" (no code beyond changing
that string).
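The difference between the two patterns is easy to verify with `fnmatch`, which is the matching style the prompt names for these exclude patterns:

```python
from fnmatch import fnmatch

# "*swagger-ui-*" requires a trailing hyphen, so the main bundle
# files shipped by drf-yasg slip through the exclusion:
assert not fnmatch("swagger-ui.css", "*swagger-ui-*")
assert not fnmatch("swagger-ui.js", "*swagger-ui-*")

# "*swagger-ui*" catches both the bare files and the hyphenated ones:
assert fnmatch("swagger-ui.css", "*swagger-ui*")
assert fnmatch("swagger-ui-bundle.js", "*swagger-ui*")
```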
In `@django_minify_compress_staticfiles/storage.py`:
- Line 90: The current code uses max_files = get_setting("MAX_FILES_PER_RUN") or
1000 which silently replaces explicit falsy values (e.g., 0 or negative) with
1000; remove the "or 1000" fallback in both places where
get_setting("MAX_FILES_PER_RUN") is used (the assignment to max_files and the
second occurrence) and instead add validation in validate_settings() to assert
MAX_FILES_PER_RUN is an integer > 0 (raise/configuration error if not), so
explicit user values are respected and invalid values fail loudly.
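Why the `or 1000` fallback is dangerous is a one-liner to demonstrate: `or` falls back on any falsy value, not just on "unset". The `require_positive_int` helper below is a hypothetical sketch of the validate-loudly approach (using `ValueError` as a stand-in for a configuration error):

```python
DEFAULT = 1000

# `or` cannot tell "explicitly configured 0" apart from "not configured":
assert (None or DEFAULT) == 1000   # intended fallback
assert (0 or DEFAULT) == 1000      # user's explicit 0 silently overridden!
assert (500 or DEFAULT) == 500     # normal case

def require_positive_int(value, name="MAX_FILES_PER_RUN"):
    """Sketch: reject anything that is not an integer > 0 instead of
    silently substituting a default."""
    if isinstance(value, bool) or not isinstance(value, int) or value <= 0:
        raise ValueError(f"{name} must be an integer > 0, got {value!r}")
    return value

assert require_positive_int(500) == 500
```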
In `@django_minify_compress_staticfiles/utils.py`:
- Around line 140-149: The extension-normalization logic duplicated in
utils.should_process and MinicompressStorage.should_process_compression should
be extracted to FileManager as a single reusable property (e.g., a
cached_property named processable_extensions) that returns a normalized list of
truthy keys when supported_extensions is a dict or list(...) otherwise; update
utils.should_process to call FileManager.processable_extensions and update
MinicompressStorage.should_process_compression to use the same property (keeping
the existing call to should_process_file and preserving exclude_patterns via
getattr(self, "exclude_patterns", None) or [] so both call sites use the shared,
cached normalization logic).
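A sketch of the shared normalization this prompt proposes. The class and property names follow the review's suggestion; treat the structure as an assumption about the real codebase, not its actual API:

```python
from functools import cached_property

class FileManager:
    def __init__(self, supported_extensions):
        self.supported_extensions = supported_extensions

    @cached_property
    def processable_extensions(self):
        """Single, cached home for extension normalization: a dict form
        keeps only keys whose value is truthy; any other iterable is
        materialized into a list as-is."""
        if isinstance(self.supported_extensions, dict):
            return [ext for ext, on in self.supported_extensions.items() if on]
        return list(self.supported_extensions)
```

Both call sites would then read `manager.processable_extensions`; `cached_property` ensures the normalization runs at most once per instance.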
In `@tests/test_conf.py`:
- Around line 26-29: The test test_none_value_falls_back_to_default only checks
the boolean ENABLED; expand it to also verify get_setting treats an explicit
None as unset for other types by parametrizing or adding assertions for at least
one numeric setting (e.g., MIN_FILE_SIZE, MAX_FILE_SIZE, COMPRESSION_LEVEL_GZIP)
and one collection setting (e.g., SUPPORTED_EXTENSIONS or EXCLUDE_PATTERNS) when
override_settings(MINICOMPRESS_ENABLED=None) or equivalent per-setting overrides
are used; ensure each assertion compares get_setting("<SETTING_NAME>") to
DEFAULT_SETTINGS["<SETTING_NAME>"] so sentinel handling in get_setting and
DEFAULT_SETTINGS is covered for non-boolean types (refer to get_setting,
DEFAULT_SETTINGS, and test_none_value_falls_back_to_default).
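The sentinel behavior the expanded test should pin down can be sketched like this (the `DEFAULT_SETTINGS` values here are illustrative, not the plugin's real defaults):

```python
_UNSET = object()  # sentinel: distinguishes "missing" from an explicit None

DEFAULT_SETTINGS = {  # illustrative values only
    "ENABLED": True,
    "MIN_FILE_SIZE": 200,
    "EXCLUDE_PATTERNS": ["*.min.*", "*-min.*"],
}

def get_setting(name, user_settings):
    """An explicit None in user settings is treated the same as
    'not set at all' and falls back to the default, for every type."""
    value = user_settings.get(name, _UNSET)
    if value is _UNSET or value is None:
        return DEFAULT_SETTINGS[name]
    return value
```

Note that falsy-but-not-None values (like `False` or `0`) must still be returned as-is; only `None` and "missing" trigger the fallback.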
In `@tests/test_storage.py`:
- Around line 314-329: The test's docstring checks that
MINICOMPRESS_MAX_FILES_PER_RUN caps files attempted, but the current assertion
only enforces an upper bound on mocked_read.call_count; add a lower-bound
assertion to ensure the loop actually attempted up to the cap. After calling
self.minifier.process_minification(paths) add an assertion like
self.assertGreaterEqual(mocked_read.call_count, 2) (or assertEqual to 2 if you
want exact behavior) referencing the mocked _read_file_content call_count and
the MINICOMPRESS_MAX_FILES_PER_RUN=2 setting so process_minification is
validated for attempted-file limits.
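The point about bounding the mock from both sides can be shown with a toy loop. The cap of 2 mirrors the test's `MINICOMPRESS_MAX_FILES_PER_RUN=2`; `read` is a stand-in for the mocked `_read_file_content`:

```python
from unittest import mock

read = mock.Mock(return_value=b"body { color: red }")

max_files, processed = 2, 0
for path in ["a.css", "b.css", "c.css"]:
    if processed >= max_files:
        break
    read(path)       # every attempted file goes through the reader
    processed += 1

# The upper bound alone would pass even if the loop never ran;
# the lower bound proves the cap was actually reached.
assert read.call_count <= 2
assert read.call_count >= 2
```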
In `@tests/test_utils.py`:
- Around line 133-171: DEFAULT_SETTINGS["EXCLUDE_PATTERNS"] currently misses the
intended swagger-ui pattern, so update the pattern in conf.py (the
DEFAULT_SETTINGS / EXCLUDE_PATTERNS entry) to include a proper fnmatch like
"*swagger-ui*" (fix whatever broken quoting/formatting at the existing lines
around the EXCLUDE_PATTERNS definition), and add a unit test calling
should_process_file("swagger-ui.css", ["css"],
DEFAULT_SETTINGS["EXCLUDE_PATTERNS"]) asserting False to tests/test_utils.py
(alongside the other exclude-pattern tests) to prevent regressions; reference
the DEFAULT_SETTINGS/EXCLUDE_PATTERNS symbol in conf.py and the
should_process_file helper in tests.
---
Outside diff comments:
In `@django_minify_compress_staticfiles/storage.py`:
- Around line 99-103: The processed_count semantics are inconsistent:
process_minification increments processed_count immediately after a successful
_read_file_content (counting attempted files) while process_compression only
increments after gzip/brotli succeed (counting only fully successful files);
change process_compression to match process_minification by incrementing
processed_count right after a successful _read_file_content call (before
performing gzip/brotli and before any try/except that could skip increments),
ensuring both process_minification and process_compression treat "processed" as
attempted files and use the same processed_count variable consistently.
- Around line 138-189: In process_compression, hoist global toggles by calling
get_setting("GZIP_COMPRESSION"), get_setting("BROTLI_COMPRESSION"), and
get_setting("MAX_FILES_PER_RUN") once before the loop (e.g., assign
gzip_enabled, brotli_enabled, max_files) and reuse those variables instead of
calling get_setting(...) per file; likewise, ensure _read_file_content reads
get_setting("MAX_FILE_SIZE") once (or accept max_file_size as an argument) so
per-file logic in process_compression and the helper _read_file_content uses the
cached values (refer to process_compression, gzip_compress, brotli_compress, and
_read_file_content to locate the reads) and remove the redundant get_setting
calls inside the loop.
In `@README.rst`:
- Around line 149-150: Update the stale exclusion sentence "Files matching
``*.min.*`` or ``*-min.*`` patterns are excluded from processing." in README.rst
so it matches the documented defaults (include ``*swagger-ui-*``, ``*.gz``,
``*.br``, and ``*.zip``) or remove the duplicate sentence entirely; locate the
sentence text and either expand it to list the full set of default exclusions or
replace it with a short pointer like "See the default exclusions above" to avoid
duplication.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 94a31bd1-895e-4eb1-b04a-74dfa12751c5
📒 Files selected for processing (9)
- README.rst
- django_minify_compress_staticfiles/conf.py
- django_minify_compress_staticfiles/storage.py
- django_minify_compress_staticfiles/utils.py
- tests/test_conf.py
- tests/test_integration.py
- tests/test_security.py
- tests/test_storage.py
- tests/test_utils.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: finish
- GitHub Check: Kilo Code Review
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Code must be formatted using the `openwisp-qa-format` command
Avoid unnecessary blank lines - use them sparingly, only where PEP 8 requires them (e.g., between class methods, between imports and code)
Ensure `is_safe_path()` is used for file path validation to prevent path traversal vulnerabilities
Python code must pass isort, black, and flake8 validation checks
All Python files must end with a blank line
Files:
- tests/test_security.py
- tests/test_storage.py
- django_minify_compress_staticfiles/conf.py
- tests/test_utils.py
- tests/test_integration.py
- tests/test_conf.py
- django_minify_compress_staticfiles/utils.py
- django_minify_compress_staticfiles/storage.py
**/test*.py
📄 CodeRabbit inference engine (AGENTS.md)
Keep test coverage above 90% - every new feature must include tests
Files:
- tests/test_security.py
- tests/test_storage.py
- tests/test_utils.py
- tests/test_integration.py
- tests/test_conf.py
**/*.rst
📄 CodeRabbit inference engine (AGENTS.md)
ReStructuredText files must have valid syntax as validated by QA checks
Files:
README.rst
🔇 Additional comments (4)
tests/test_security.py (1)
241-260: LGTM. Good regression test for the `0` vs falsy-fallback bug. The patch targets `gzip.GzipFile`/`brotli.compress` at module level, which works because `storage.py` imports `gzip` and `brotli` and references attributes on them rather than rebinding into its own namespace.

tests/test_storage.py (1)
118-128: LGTM. Solid coverage for the `keep_bang_comments=bool(get_setting(...))` change in `storage.py`. Tests both CSS and JS minifier paths.

tests/test_integration.py (1)
90-123: LGTM. Clean coverage of the new gate-and-validate behavior: `ENABLED=False` short-circuits both pipelines, and `MIN_FILE_SIZE`/`MAX_FILE_SIZE` boundary values raise `ImproperlyConfigured` at storage instantiation. If `validate_settings()` is later extended (e.g., to cover `MAX_FILES_PER_RUN`), please add matching tests here.

tests/test_conf.py (1)
57-62: LGTM! The added assertion correctly locks in the new `*swagger-ui-*` exclude pattern that matches the expanded `DEFAULT_SETTINGS["EXCLUDE_PATTERNS"]`.
Some files in drf-yasg, a dependency used in the OpenWISP ecosystem, are already minified but don't carry the `.min.js` suffix and hence don't trigger the exclusion; we can safely exclude these files from minification.
Force-pushed 13b6d41 to 274b408 (Compare)
@coderabbitai full review

✅ Actions performed: Full review triggered.
Force-pushed 274b408 to 7ba2e9f (Compare)
```python
        processed_count += 1
    except Exception as e:
        logger.error(f"Failed to compress {path}: {e}")
        continue
```
WARNING: Inconsistent `processed_count` placement. In `process_minification` (line 103), the count increments after reading but before processing. Here, it increments only after successful compression.

Files that fail compression won't count toward `MAX_FILES_PER_RUN`. For consistent rate limiting, move `processed_count += 1` to after line 157 (after the content read, before compression).
Force-pushed 7ba2e9f to 41c9599 (Compare)
@coderabbitai resolve

✅ Actions performed: Comments resolved and changes approved.