
chore(deps): update langchain requirement from <0.1.11 to <0.3.28 #13

Open
dependabot[bot] wants to merge 435 commits into main from
dependabot/pip/langchain-lt-0.3.28

Conversation


dependabot[bot] commented on behalf of GitHub on Nov 24, 2025

Updates the requirements on langchain to permit the latest version.

Commits
  • bdf1cd3 fix(langchain): update deps
  • 77c9819 fix(text-splitters): update langchain-core version to 0.3.72
  • 7f015b6 fix(text-splitters): update lock for release
  • 0e139fb release(langchain): 0.3.27 (#32227)
  • 622bb05 fix(langchain): class HTMLSemanticPreservingSplitter ignores the text inside ...
  • 56dde3a feat(langchain): v1 scaffolding (#32166)
  • bd3d649 release(core): 0.3.72 (#32214)
  • fb5da83 fix(core): Dereference Refs for pydantic schema fails in tool schema generati...
  • a7d0e42 docs: fix typos in documentation (#32201)
  • 3496e17 feat(langchain): add ruff rules PL (#32079)
  • Additional commits viewable in compare view

You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Note
Automatic rebases have been disabled on this pull request as it has been open for over 30 days.

- Add Functional.md with 78 functional requirements organized by page
- Add Functional_Requirements_Status.md with implementation status assessment (74% operational)
- Integrate USE_CASES.md content into PRD.md (Section 7)
- Update REASONING_ENGINE_OVERVIEW.md to reflect full integration across all agents
- Add USE_CASES_OPERATIONAL_STATUS.md for detailed operational analysis
- Update forecast sample data files
…tatus

- Remove redundant/outdated architecture documentation files
- Update Functional_Requirements_Status.md with positive language
- Remove negative statements and comparisons
- Delete completed TODO documents and redundant summaries
- Update ADR-001, ADR-002, and ADR-003 dates from 2024-01-01 to 2025-09-12
- Dates now reflect actual file creation dates from git history
… README

- Remove Rationale section from ADR-002
- Add comprehensive acronyms and abbreviations table to README.md
- Include important terms: RAG, MCP, NIMs, LLM, GPU, cuVS, cuML, RAPIDS, RBAC, JWT, OCR
- Enforce JWT_SECRET_KEY in production (fails to start if not set)
- Allow development default with warnings for local development
- Remove debug endpoint and password logging
- Add security notes to README, DEPLOYMENT, QUICK_START, and docs/secrets.md
- Create comprehensive SECURITY_REVIEW.md document
- Update CORS configuration to be environment-based
- Remove information disclosure in error messages
- Create SOFTWARE_INVENTORY.md with all third-party packages
- Include version, license, license URL, author, source, and distribution method
- Add automated generation script (scripts/tools/generate_software_inventory.py)
- Query PyPI and npm registries for package metadata
- Remove duplicates and format into markdown tables
- Include license summary and regeneration instructions
…t-Warehouse

- Remove duplicate entries from requirements.txt (aiohttp, httpx, websockets)
- Add missing psutil>=5.9.0 dependency (used in production monitoring)
- Update all UI references from 'Warehouse Assistant' to 'Multi-Agent-Intelligent-Warehouse'
- Update login page, layout, chat interfaces, and all documentation pages
- Add requirements audit report and automated audit script
- Update package.json, README.md, and startup scripts
- Fix Node.js cache dependency path from ui/web to src/ui/web
- Update all npm install/lint/test paths to use src/ui/web
- Update CodeQL actions from v3 to v4 (deprecation fix)
- Add required permissions to security job for SARIF upload
- Fix coverage report path in codecov action
Security fixes:
- Remove hardcoded password hash from SQL schema (security vulnerability)
- Replace with secure user creation via setup script
- Update all documentation to emphasize secure user creation practices

Code quality improvements:
- Fix SQL schema: use ENUM types and constants instead of duplicated literals
- Fix Dockerfile: sort package names alphabetically and merge RUN instructions
- Fix CI/CD: correct Node.js paths (ui/web -> src/ui/web) and update CodeQL actions

Documentation updates:
- Add security warnings about not hardcoding credentials
- Update README.md, DEPLOYMENT.md, QUICK_START.md, docs/secrets.md
- Emphasize use of setup script for secure user creation
- Add production security best practices
Security fixes:
- Add validation for horizon_days parameter (max 365 days) to prevent DoS attacks
- Add validation for batch SKU list size (max 100 SKUs) to prevent DoS attacks
- Implement defense in depth: validation at Pydantic model, service method, and endpoint levels
- Add Field constraints and field_validator decorators for input validation
- Add logging when limits are exceeded

This prevents attackers from causing denial of service by:
- Setting extremely large horizon_days values in forecast requests
- Sending batch requests with thousands of SKUs
- Exploiting loop boundaries to exhaust server resources
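The service-level layer of the defense-in-depth checks described above can be sketched in plain Python (the limits of 365 days and 100 SKUs come from the commit message; the function name and log wording are assumptions — the real change also enforces these limits via Pydantic `Field` constraints and `field_validator` decorators):

```python
import logging

logger = logging.getLogger(__name__)

MAX_HORIZON_DAYS = 365  # limit from the commit message
MAX_BATCH_SKUS = 100    # limit from the commit message

def validate_forecast_request(horizon_days: int, skus: list[str]) -> None:
    # Reject oversized horizons before any expensive forecasting loop runs
    if not 1 <= horizon_days <= MAX_HORIZON_DAYS:
        logger.warning("horizon_days limit exceeded: %d", horizon_days)
        raise ValueError(f"horizon_days must be between 1 and {MAX_HORIZON_DAYS}")
    # Cap batch size so a single request cannot exhaust server resources
    if len(skus) > MAX_BATCH_SKUS:
        logger.warning("batch size limit exceeded: %d SKUs", len(skus))
        raise ValueError(f"at most {MAX_BATCH_SKUS} SKUs per batch request")
```

Repeating the same check at the Pydantic model and endpoint levels means a bypass of any single layer still leaves the limits enforced.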
Security fixes:
- Add _sanitize_log_data() helper function to prevent log injection attacks
- Sanitize all user input before logging (removes newlines, control chars)
- Base64 encode suspicious data containing control characters
- Truncate long strings to prevent log flooding
- Apply sanitization to 80+ logger statements across document.py and action_tools.py

Protected inputs:
- File names, document types, document IDs
- Search queries, metadata, file paths
- Error messages and exception strings
- All user-provided data used in logging

This prevents attackers from:
- Injecting newlines to forge log entries
- Inserting malicious log content
- Compromising log integrity and audit trails
Security fixes:
- Add _sanitize_log_data() helper function to prevent log injection attacks
- Sanitize all user input before logging (removes newlines, control chars)
- Base64 encode suspicious data containing control characters
- Truncate long strings to prevent log flooding
- Apply sanitization to 45+ logger statements across chat.py and reasoning.py

Protected inputs:
- Chat messages (req.message)
- Session IDs (req.session_id)
- Reasoning types (req.reasoning_types, rt)
- Error messages and exception strings
- Safety violation details
- Result data and structured responses
- Traceback information

This prevents attackers from:
- Injecting newlines to forge log entries
- Inserting malicious log content
- Compromising log integrity and audit trails
Code quality fix:
- Remove redundant characters from regex pattern
- Characters \r, \n, \t are already covered by \x00-\x1f range
- Simplified to [\x00-\x1f] which matches all control characters (0-31)
- Added comment explaining coverage

Files updated:
- src/api/routers/reasoning.py
- src/api/routers/chat.py
- src/api/routers/document.py
- src/api/agents/document/action_tools.py

Improves regex efficiency and removes code duplication.
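The redundancy is easy to verify: `\r` (0x0D), `\n` (0x0A), and `\t` (0x09) all fall inside the `\x00-\x1f` control-character range, so the shorter pattern matches exactly the same characters:

```python
import re

redundant = re.compile(r"[\r\n\t\x00-\x1f]")   # before the fix
simplified = re.compile(r"[\x00-\x1f]")        # after the fix

# Both patterns agree on every character code in the 8-bit range
assert all(
    bool(redundant.match(chr(c))) == bool(simplified.match(chr(c)))
    for c in range(256)
)
```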
Code quality fix:
- Replace duplicated string literal 'horizon_days must be at least 1' (3x)
- Define constant ERROR_HORIZON_DAYS_MIN at module level
- Update all 3 occurrences to use the constant

Benefits:
- Single source of truth for error message
- Easier maintenance and refactoring
- Reduces risk of inconsistent error messages
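A sketch of the pattern (the constant name comes from the commit message; the validation function shown is a hypothetical stand-in for the three real call sites):

```python
# Single source of truth for the error message, defined at module level
ERROR_HORIZON_DAYS_MIN = "horizon_days must be at least 1"

def validate_horizon_days(horizon_days: int) -> int:
    if horizon_days < 1:
        # Every call site now raises the same, centrally defined message
        raise ValueError(ERROR_HORIZON_DAYS_MIN)
    return horizon_days
```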
Code quality improvement:
- Extract nested ternary operator into clear if-elif-else structure
- Improve readability of confidence level determination
- Make order of operations explicit and easier to understand

Before: nested ternary 'High' if > 0.8 else 'Medium' if > 0.6 else 'Low'
After: clear if-elif-else statements with separate variable

Benefits:
- More readable and maintainable code
- Easier to debug and modify
- Clearer intent and logic flow
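The refactor can be sketched as follows (the thresholds 0.8 and 0.6 come from the commit message; the function name is an assumption):

```python
def confidence_level(score: float) -> str:
    # Before: 'High' if score > 0.8 else 'Medium' if score > 0.6 else 'Low'
    # After: explicit branches make the order of comparisons obvious
    if score > 0.8:
        level = "High"
    elif score > 0.6:
        level = "Medium"
    else:
        level = "Low"
    return level
```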
Remove temporary security review document that was created for
pre-scan preparation. Security fixes have been applied and
documented in other locations.
Code quality improvement:
- Extract nested parsing logic into separate helper methods
- Create _parse_hours_range() for range format (e.g., '4-8 hours')
- Create _parse_single_hours() for single hours format (e.g., '4 hours')
- Create _parse_minutes() for minutes format (e.g., '30 minutes')
- Simplify main function with early returns and method delegation

Benefits:
- Reduced cognitive complexity from 19 to ~8-10 (below 15 threshold)
- Improved readability and maintainability
- Easier to test individual parsing methods
- Better separation of concerns

All functionality preserved and tested.
Code quality improvement:
- Extract datetime parsing logic into _parse_datetime_field() helper
- Extract datetime restoration logic into _restore_datetime_fields() helper
- Use early return for file not found case to reduce nesting
- Flatten control flow and remove deeply nested try-except blocks

Benefits:
- Reduced cognitive complexity from 26 to ~9-10 (below 15 threshold)
- Improved readability and maintainability
- Easier to test individual datetime parsing methods
- Better separation of concerns

All functionality preserved.
Code quality improvement:
- Add asyncio import for async operations
- Wrap blocking file operations (os.path.exists, os.path.getsize) with asyncio.to_thread()
- Keep string operations synchronous as they don't block
- Function now properly uses async features instead of being unnecessarily async

Benefits:
- Properly async: uses await for blocking I/O operations
- Non-blocking: file system operations run in thread pool
- Maintains async contract: function is truly asynchronous
- Better performance: doesn't block the event loop during file checks
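A minimal sketch of the pattern (the function name and return shape are placeholders, not the project's actual signature):

```python
import asyncio
import os

async def check_file(path: str) -> dict:
    # Blocking file-system calls run in the default thread pool via
    # asyncio.to_thread, so the event loop stays free while they complete
    exists = await asyncio.to_thread(os.path.exists, path)
    size = await asyncio.to_thread(os.path.getsize, path) if exists else 0
    # Pure string operations stay synchronous: they never block
    return {"name": os.path.basename(path), "exists": exists, "size": size}
```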
Training enhancement:
- Add Gradient Boosting model training
- Add Ridge Regression model training
- Add Support Vector Regression (SVR) model training
- Update model name mapping to include all 6 models

Fixes:
- RAPIDS GPU forecasting script now trains all 6 models (was only 3)
- Matches Phase 3 Advanced training expectations
- All models now saved to database and appear in Forecasting UI

Models trained:
1. Random Forest
2. Linear Regression
3. XGBoost
4. Gradient Boosting (new)
5. Ridge Regression (new)
6. Support Vector Regression (new)
- Update phase1_phase2_forecasts.json with latest training results
- Update rapids_gpu_forecasts.json with all 6 models
Code quality improvements:
- Remove unused document_record parameter from _start_document_processing
- Add await asyncio.sleep(0) to make _start_document_processing truly async
- Use asyncio.to_thread for _save_status_data in _get_processing_status
- Both functions now properly use async features

Benefits:
- Functions are truly asynchronous and non-blocking
- No unused parameters
- Maintains async contract for callers
- Better performance with thread pool for I/O operations
Code quality improvement:
- Remove unused local variable document_record
- Clean up dead code that was never used
- Improve code readability and maintainability

The variable was created but never used after removing
the parameter from _start_document_processing function.
Code quality improvement:
- Remove unused user_id parameter from upload_document
- Remove unused metadata parameter from upload_document
- Update caller in document.py router to match new signature

Note: user_id and metadata are still passed to background task
where they are actually used, so functionality is preserved.
Code quality improvement to reduce duplication:
- Extract error handling pattern into _create_error_response() helper
- Extract PIL Image conversion pattern into _convert_pil_image_to_metadata() helper
- Replace 8 duplicated error handling blocks with helper method calls
- Replace 2 duplicated PIL Image conversion blocks with helper method

Benefits:
- Reduced code duplication substantially (previously ~20.9%)
- Better maintainability: centralized error handling and image conversion
- Consistent error responses across all functions
- Easier to update: changes in one place affect all usages
- Removed ~40+ lines of duplicated code
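The extracted error-handling helper might look like this (the response shape — success/operation/error keys — and the example caller are assumptions, not the project's actual schema):

```python
import logging

logger = logging.getLogger(__name__)

def _create_error_response(operation: str, exc: Exception) -> dict:
    # One place to log and shape errors keeps all endpoints consistent
    logger.error("%s failed: %s", operation, type(exc).__name__)
    return {"success": False, "operation": operation, "error": str(exc)}

def extract_text(data: bytes) -> dict:
    try:
        return {"success": True, "text": data.decode("utf-8")}
    except Exception as exc:
        # Previously a duplicated try/except block in eight functions
        return _create_error_response("extract_text", exc)
```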
Code quality improvement to further reduce duplication:
- Add model name constants (MODEL_SMALL_LLM, MODEL_LARGE_JUDGE, MODEL_OCR)
- Extract QualityScore creation into _create_quality_score_from_validation()
- Extract mock data response creation into _create_mock_data_response()
- Extract empty extraction response creation into _create_empty_extraction_response()
- Replace 5+ hardcoded model name strings with constants
- Replace 60+ lines of duplicated QualityScore creation logic
- Replace 6+ duplicated mock data response patterns
- Replace 3+ duplicated empty extraction response patterns

Benefits:
- Reduced code duplication significantly (~100+ lines removed)
- Better maintainability: centralized model names and response creation
- Consistent responses across all functions
- Easier to update: changes in one place affect all usages
- Extract quality score extraction from validation dict into helper method
- Extract quality score extraction from validation object into helper method
- Replace 40+ lines of duplicated nested if statements with 2 method calls
- Fix unused loop variable: change 'for i in range(num_items)' to 'for _'

Benefits:
- Reduced code duplication by ~40 lines
- Improved maintainability: quality extraction logic centralized
- Better readability: complex nested if statements replaced with clear calls
- Consistent quality score extraction across all code paths
- Create _extract_quality_from_dict_value() helper to unify extraction
  from dict, object, or primitive values
- Simplify _extract_quality_score_from_validation_dict() using early
  returns and helper method
- Extract quality extraction from extraction data into
  _extract_quality_from_extraction_data() helper method
- Simplify _extract_quality_score_from_validation_object() to use
  unified helper

Benefits:
- Removed 3+ duplicated quality extraction patterns
- Unified quality extraction logic into reusable helpers
- Reduced nested if statements with early returns
- Improved maintainability: single source of truth for quality extraction
Remove unused assignment to local variable 'overall_status' on line 732.
The value is never used after assignment since the code only uses
overall_status_str and status_info['status'] after that point.

This improves code cleanliness and removes unnecessary operations.
Extract common patterns into reusable helper functions:
- _parse_json_form_data(): Parse JSON from form data with error handling
- _handle_endpoint_error(): Standardized HTTPException creation
- _check_result_success(): Check result success and raise HTTPException if failed
- _update_stage_completion(): Update document status after stage completion
- _handle_stage_error(): Handle errors during document processing stages

Benefits:
- Reduced duplication from 49% to ~15% (estimated)
- Removed 100+ lines of duplicated code
- Standardized error handling across all endpoints
- Centralized status update logic for background processing
- Improved maintainability: changes in one place affect all usages
- Better readability: complex patterns replaced with clear function calls
T-DevH and others added 27 commits on December 17, 2025 at 00:58
- Fix SyntaxError: invalid syntax in cell 9 (line 172)
- Remove orphaned else statement that didn't match any if
- Move brev_model = None assignment to if choice == "1" block
- Resolves syntax error preventing notebook execution
- All 16 code cells now pass syntax validation
- Fix issue where dependencies are skipped when user follows best practice
- When venv exists and user is already in it, skip_setup=True
- Added dependency check even when skip_setup=True
- Checks for key packages (fastapi, asyncpg, pydantic)
- Prompts to install dependencies if missing
- Ensures dependencies are always installed regardless of venv creation method
- Resolves issue where manual venv creation skipped dependency installation
- Add actual commented line '# start_backend()' to Step 11 code cell
- Instruction said to uncomment but line was missing
- Users can now uncomment the line to start backend in notebook
- Resolves issue where instruction referenced non-existent code
- Remove redundant kernel restart instruction after kernel registration
- Message was not important and could confuse users
- Kernel registration success message is sufficient
- Set VIRTUAL_ENV environment variable for subprocess
- Update PATH to include venv bin directory
- Set PYTHONPATH to include project root
- Detect if already in venv and use sys.executable
- Fixes ModuleNotFoundError when running training/forecasting scripts
- Ensures backend server has access to all installed packages (asyncpg, etc.)

This matches the behavior of start_server.sh which sources the venv
- Update setup_rapids_gpu.sh to auto-detect CUDA version (matches install_rapids.sh)
  - Detects CUDA 11.x or 12.x and installs correct RAPIDS packages (cu11/cu12)
  - Removes hardcoded cu12 dependency
- Add CUDA version check in notebook Step 3
  - Detects CUDA version via nvcc or nvidia-smi
  - Checks if RAPIDS packages match installed CUDA version
  - Warns about version mismatches and suggests fixes
  - Provides installation guidance for missing RAPIDS
- Update README.md with CUDA version requirements
  - Documents CUDA 12.x recommended, CUDA 11.x supported
  - Notes auto-detection during RAPIDS installation
  - Explains backward compatibility with CUDA 13.x

Fixes potential issues when users have different CUDA versions (11.x, 12.x, 13.x)
Ensures RAPIDS packages match the installed CUDA toolkit version
- Load .env variables before starting Docker Compose (CRITICAL fix for TimescaleDB)
  - TimescaleDB was hanging because POSTGRES_PASSWORD wasn't available
  - Now loads all .env variables and passes them to subprocess
- Configure TimescaleDB port (5432 -> 5435) automatically
- Clean up existing containers before starting (prevents conflicts)
- Pass environment variables to docker compose subprocess
- Set working directory (cwd) for docker compose commands
- Use environment variables for postgres_user and postgres_db
- Better error messages with helpful tips if POSTGRES_PASSWORD is missing

This fixes the hanging issue when running Step 6 in the notebook.
The function now matches the behavior of scripts/setup/dev_up.sh
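A simplified sketch of the fix: load the `.env` file into a dict, merge it over the current environment, and hand the result to the `docker compose` subprocess. The variable name `POSTGRES_PASSWORD` comes from the commit message; the parsing here is a minimal assumption (no quoting rules) and the function names are placeholders:

```python
import os
import subprocess

def load_env_file(path: str = ".env") -> dict[str, str]:
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Drop inline comments so '5435  # port' parses as '5435'
            env[key.strip()] = value.split("#", 1)[0].strip()
    return env

def docker_compose_up(project_dir: str) -> None:
    env = {**os.environ, **load_env_file(os.path.join(project_dir, ".env"))}
    if "POSTGRES_PASSWORD" not in env:
        # Without this, the TimescaleDB container hangs waiting for a password
        raise RuntimeError("POSTGRES_PASSWORD missing - add it to .env")
    # env= and cwd= are the two pieces the original call was missing
    subprocess.run(["docker", "compose", "up", "-d"],
                   env=env, cwd=project_dir, check=True)
```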
- Add notice that project downloads and installs additional 3rd party OSS
- Remind users to review license terms before use
- Standard compliance notice for open source dependencies
- Regenerated inventory using generate_software_inventory.py script
- Updated date to 2025-12-18
- Added new packages: @tanstack/react-query (replacing react-query), fast-equals
- Updated packages: @mui/x-data-grid (5.17.26 -> 7.22.0), React (19.2.3)
- Scanned all dependency files: requirements.txt, requirements.docker.txt,
  scripts/requirements_synthetic_data.txt, pyproject.toml, package.json files
- Total: 104 unique packages (removed 27 duplicates)
- Includes dev dependencies and transitive dependencies
Step 9 - Generate Demo Data:
- Load .env variables before running demo data scripts
- Pass environment variables to subprocess (needed for database connection)
- Set working directory (cwd) for proper script execution
- Strip inline comments from .env values
- Better error reporting with more context

Step 11 - ValueError with int() parsing:
- Add helper functions _getenv_int() and _getenv_float() in nim_client.py
- Strip comments from env var values before parsing (handles '120.  # Timeout in seconds')
- Fix LLM_CLIENT_TIMEOUT, LLM_TEMPERATURE, LLM_MAX_TOKENS, LLM_TOP_P parsing
- Fix GUARDRAILS_TIMEOUT parsing in guardrails_service.py
- Fix MAX_REQUEST_SIZE and MAX_UPLOAD_SIZE parsing in app.py
- Add safe parsing with fallback to defaults on ValueError

This fixes the ValueError when .env files contain inline comments like:
LLM_CLIENT_TIMEOUT=120.  # Timeout in seconds

And ensures demo data scripts have access to database credentials.
- Add comprehensive testing guide (docs/testing/NOTEBOOK_TESTING_GUIDE.md)
  - Two testing approaches: clean environment vs quick validation
  - Complete testing checklist for all steps
  - Common issues to test (missing .env, inline comments, CUDA versions)
  - Recommended testing workflow
  - Instructions for testing on same machine
- Add test_notebook_syntax.sh script
  - Validates notebook JSON structure
  - Checks Python syntax in all cells
  - Verifies required functions are present
  - Quick validation before manual testing
- Add automatic Python 3.10+ detection when Python 3.9 is detected
- Add automatic error detection for missing Python development headers
- Add symlink workaround for Python.h and related header files
- Improve certificate error handling with automatic retry
- Add cleanup script for fresh notebook testing
- Fix requirements.txt path resolution to use project root
- Enhance error messages with clear installation instructions
- Add test_notebook_from_scratch.sh for clean notebook testing
- Update complete_setup_guide.ipynb with additional improvements
- Move POSTGRES_PASSWORD comment above the variable
- Move REDIS_PASSWORD comment above the variable
- Move LLM_CLIENT_TIMEOUT comment above the variable
- Move LLM_CACHE_TTL_SECONDS comment above the variable

.env files don't support inline comments (comments on same line as variable).
Comments must be on separate lines above the variables for proper parsing.
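The fix can be illustrated with a before/after `.env` fragment (variable names taken from the commit message; the values are placeholders):

```
# Broken: the inline comment becomes part of the parsed value
POSTGRES_PASSWORD=secret  # database password

# Fixed: the comment sits on its own line above the variable
# database password
POSTGRES_PASSWORD=secret
```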
- Reset notebook execution count to null
- Add trailing newlines to Python files for consistency
- Regenerated SOFTWARE_INVENTORY.md with current date
- Scanned all dependency files:
  * requirements.txt
  * requirements.docker.txt
  * scripts/requirements_synthetic_data.txt
  * pyproject.toml
  * package.json (root)
  * src/ui/web/package.json
- Total: 104 unique packages (77 Python, 44 Node.js)
- Add step-by-step troubleshooting checklist
- Include verification steps for backend, database, and user creation
- Provide API testing commands
- Document common issues and solutions
- Add quick reference for default credentials and endpoints
- Add notebook as Option 1 (recommended for first-time users) in Quick Start
- Add command-line setup as Option 2 (for experienced users)
- Include link to notebooks/setup/complete_setup_guide.ipynb
- Highlight notebook features: automated validation, interactive setup, error handling
- Update both README.md and DEPLOYMENT.md with consistent messaging
…e information

- Generated from license audit report (License_Audit_Report.xlsx)
- Includes 283 packages across 18 license types
- Contains full license texts for MIT, Apache 2.0, BSD licenses
- Added script to regenerate: scripts/tools/generate_license_3rd_party.py
- Add pull request workflow guidelines
- Include sign-off requirements for commits
- Add Developer Certificate of Origin (DCO) 1.1
- Remove PRD.md
- Remove docs/DEVELOPMENT.md
- Remove docs/License_Audit_Report.xlsx (replaced by LICENSE-3rd-party.txt)
- Remove docs/TROUBLESHOOTING_LOGIN_401.md
- Remove docs/Package_Inventory.xlsx
- Remove docs/.~lock.License_Audit_Report.xlsx# (lock file)
- Remove docs/analysis/ folder and contents
- Remove PyMuPDF>=1.23.0 (AGPL license incompatible with proprietary code)
- Add pdf2image==1.17.0 (MIT) for PDF to image conversion
- Add pdfplumber==0.11.8 (MIT) for PDF text extraction
- Update nemo_retriever.py to use pdf2image instead of fitz
- Update local_processor.py to use pdfplumber instead of fitz
- Update error messages in action_tools.py
- Resolves AGPL licensing conflict with NVIDIA proprietary code

BREAKING CHANGE: Requires poppler-utils system package for pdf2image
Install with: sudo apt-get install poppler-utils (Ubuntu/Debian)
Updates the requirements on [langchain](https://github.com/langchain-ai/langchain) to permit the latest version.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain-box==0.1.0...langchain==0.3.27)

---
updated-dependencies:
- dependency-name: langchain
  dependency-version: 0.3.27
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] force-pushed the dependabot/pip/langchain-lt-0.3.28 branch from 5e677ed to e709181 on December 22, 2025 at 21:27

dependabot[bot] commented on behalf of GitHub on Mar 9, 2026

A newer version of langchain exists, but since this PR has been edited by someone other than Dependabot I haven't updated it. You'll get a PR for the updated version as normal once this PR is merged.
