lgtm-ai is your AI-powered code review companion. It generates code reviews using your favorite LLMs and helps human reviewers with detailed, context-aware reviewer guides. Supports GitHub, GitLab, and major AI models including GPT-4, Claude, Gemini, and more.
To generate a review for a pull request, point lgtm at its URL:

```sh
lgtm review --pr-url "https://gitlab.com/your-repo/-/merge-requests/42" \
    --ai-api-key $OPENAI_API_KEY \
    --git-api-key $GITLAB_TOKEN \
    --model gpt-4.1 \
    --publish
```
This will generate a review of the merge request.
Similarly, to generate a reviewer guide:

```sh
lgtm guide --pr-url "https://gitlab.com/your-repo/-/merge-requests/42" \
    --ai-api-key $OPENAI_API_KEY \
    --git-api-key $GITLAB_TOKEN \
    --model gpt-4.1 \
    --publish
```
This will generate a reviewer guide for the merge request.
You can install lgtm-ai from PyPI:

```sh
pip install lgtm-ai
```
Or you can use the official Docker image:
```sh
docker pull elementsinteractive/lgtm-ai
```
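The image lets you run a review without installing anything locally. A minimal sketch, assuming the image exposes the `lgtm` CLI as its entrypoint (the environment variable names are the ones documented in the configuration section below):

```sh
docker run --rm \
    -e LGTM_AI_API_KEY="$OPENAI_API_KEY" \
    -e LGTM_GIT_API_KEY="$GITLAB_TOKEN" \
    elementsinteractive/lgtm-ai \
    review --pr-url "https://gitlab.com/your-repo/-/merge-requests/42" --model gpt-4.1
```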
lgtm reads the given pull request and feeds it to several AI agents to generate a code review or a reviewer guide. The philosophy of lgtm is to keep the models out of the picture and totally configurable, so that you can choose which model to use based on pricing, security, data privacy, or whatever is important to you.
If instructed (with the option `--publish`), lgtm will publish the review or guide to the pull request page as comments.
Reviews generated by lgtm will be assigned a score, using the following scale:
Score | Description |
---|---|
LGTM 👍 | The PR is generally ready to be merged. |
Nitpicks 🤓 | There are some minor issues, but the PR is almost ready to be merged. |
Needs Work 🔧 | There are some issues with the PR, and it is not ready to be merged. The approach is generally good, the fundamental structure is there, but there are some issues that need to be fixed. |
Needs a Lot of Work 🚨 | Issues are major, overarching, and/or numerous. However, the approach taken is not necessarily wrong. |
Abandon ❌ | The approach taken is wrong, and the author needs to start from scratch. The PR is not ready to be merged as is at all. |
For each review, lgtm may create several inline comments, pointing out specific issues within the PR. These comments belong to a category and have a severity. You can configure which categories you want lgtm to take a look at (see the configuration section below). The available categories are:
Category | Description |
---|---|
Correctness 🎯 | Does the code behave as intended? Identifies logical errors, bugs, incorrect algorithms, broken functionality, or deviations from requirements. |
Quality ✨ | Is the code clean, readable, and maintainable? Evaluates naming, structure, modularity, and adherence to clean code principles (e.g., SOLID, DRY, KISS). |
Testing 🧪 | Are there sufficient and appropriate tests? Includes checking for meaningful test coverage, especially for edge cases and critical paths. Are tests isolated, reliable, and aligned with the behavior being verified? |
Security 🔒 | Does the code follow secure programming practices? Looks for common vulnerabilities such as injection attacks, insecure data handling, improper access control, hardcoded credentials, or lack of input validation. |
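If you only care about a subset of these categories, the `categories` option in the configuration file (described below) narrows the review accordingly. A minimal sketch:

```toml
# lgtm.toml: only report correctness and security comments
categories = ["Correctness", "Security"]
```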
There are three available severities for comments:
- LOW 🔵
- MEDIUM 🟡
- HIGH 🔴
lgtm aims to work with as many services as possible, and that includes remote repository providers. At the moment, lgtm supports:

- GitHub
- GitLab

lgtm automatically detects the provider from the URL passed to `--pr-url`.
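For example, both of these URLs are routed to the right provider without any extra flags (the URLs are the ones used in the examples in this README):

```sh
# GitLab merge request
lgtm review --pr-url "https://gitlab.com/your-repo/-/merge-requests/42" ...

# GitHub pull request
lgtm review --pr-url "https://github.com/group/repo/pull/1" ...
```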
lgtm supports several AI models, so you can hook up your preferred LLM to perform reviews for you. The supported models are listed below, grouped by provider.
Check out the OpenAI platform page to see all available models provided by OpenAI. To use OpenAI LLMs, you need to provide lgtm with an API key, which can be generated in the OpenAI platform for your project or your user.
Supported OpenAI models
These are the main supported models, though the CLI may support additional ones due to the use of pydantic-ai.
Model name | Description |
---|---|
gpt-4.1 * | Powerful and reliable model for detailed code analysis. Strong at coding tasks. |
gpt-4.1-mini | Lightweight variant of GPT-4.1 offering faster responses and lower cost—ideal for iterative or high-volume reviews. |
gpt-4.1-nano | Ultra-light model focused on speed and affordability—best for basic code checks or initial feedback. |
gpt-4o * | Cutting-edge model with strong reasoning and code capabilities—ideal for detailed, context-aware reviews. |
gpt-4o-mini | Streamlined GPT-4o variant optimized for fast, cost-effective feedback on code. |
o4-mini | - |
o3-mini | - |
o3 | - |
o1-preview | - |
o1-mini | - |
o1 | - |
gpt-4-turbo | - |
gpt-4 | - |
gpt-3.5-turbo | - |
chatgpt-4o-latest | - |
Check out the Gemini developer docs to see all models provided by Google. To use Gemini LLMs, you need to provide lgtm with an API key; see the Gemini developer docs for instructions on generating one.
These are the main supported models, though the CLI may support additional ones due to the use of pydantic-ai. Gemini model names are timestamped, so be sure to always use the latest model of each family, if possible.
Supported Google Gemini models
Model name | Description |
---|---|
gemini-2.5-pro-preview-05-06 | Most advanced publicly available Gemini model. Strong code reasoning and long-context support. Ideal for complex or large reviews. |
gemini-2.0-pro-exp-02-05 | High-performing general-purpose model. Balances accuracy and efficiency—ideal for robust reviews without 2.5's higher cost. |
gemini-2.0-flash | Optimized for low-latency, lower-cost analysis. Ideal for iterative feedback and smaller reviews. |
gemini-1.5-pro | Proven performer with solid context and reasoning. Still excellent for general code understanding. |
gemini-1.5-flash | Lightweight and fast—suited for real-time or continuous code review loops. |
Check out Anthropic documentation to see which models they provide. lgtm works with a subset of Claude models. To use Anthropic LLMs, you need to provide lgtm with an API Key, which can be generated from the Anthropic Console.
Supported Anthropic models
These are the main supported models, though the CLI may support additional ones due to the use of pydantic-ai.
Model name | Description |
---|---|
claude-3-opus-latest | Anthropic's most advanced model. Highest accuracy, reasoning, and context length—ideal for deep code reviews or complex projects. |
claude-3-7-sonnet-latest | High-performance version of Sonnet 3.7. Stronger accuracy and broader context window than 3.5—best for nuanced reviews or mid-to-large projects. |
claude-3-5-sonnet-latest | Optimized for efficiency while maintaining solid accuracy. Better latency and cost-performance—ideal for scalable or frequent reviews. |
claude-3-5-haiku-latest | Lightweight and fast. Designed for quick iterations and basic checks—ideal for continuous or CI-based review flows. |
Check out the Mistral documentation to see all models provided by Mistral.
To use Mistral LLMs, you need to provide lgtm with an API key, which can be generated from Mistral's La Plateforme.
Supported Mistral AI models
These are the main supported models, though the CLI may support additional ones due to the use of pydantic-ai.
Model name | Description |
---|---|
mistral-large-latest | Mistral's top-tier reasoning model for high-complexity tasks. |
mistral-small | Lightweight and fast. Best used for simple syntax or formatting checks where cost and speed are priorities. |
codestral-latest | Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction, and test generation. |
Check out the DeepSeek documentation to see all models provided by DeepSeek.
At the moment, lgtm only supports DeepSeek through https://api.deepseek.com; other providers and custom URLs are not supported. However, this is on our roadmap!
To get an API key for DeepSeek, create one at DeepSeek Platform.
Supported DeepSeek models
Model name | Description |
---|---|
deepseek-chat | General-purpose LLM optimized for chat and code assistance. Ideal for standard reviews and developer interactions. |
deepseek-reasoner | Advanced model specialized in reasoning and problem solving—ideal for complex code analysis and critical thinking tasks. |
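A review with DeepSeek then looks just like the earlier examples, only with a DeepSeek model name (a sketch reusing the flags shown above; the token variables are illustrative placeholders):

```sh
lgtm review --pr-url "https://github.com/group/repo/pull/1" \
    --ai-api-key "$DEEPSEEK_API_KEY" \
    --git-api-key "$GITHUB_TOKEN" \
    --model deepseek-chat \
    --publish
```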
You can run lgtm against a model available at a custom URL (say, models running with ollama at http://localhost:11434/v1). These models need to be OpenAI-compatible. In that case, you need to pass the option `--model-url` (and you can choose to skip the option `--ai-api-key`). Check out the pydantic-ai documentation for more information about how lgtm interacts with these models.
```sh
lgtm review \
    --pr-url https://github.com/group/repo/pull/1 \
    --model llama3.2 \
    --model-url http://localhost:11434/v1 \
    ...
```
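If you have not pulled the model yet, something like this should make it available first (assuming a standard ollama install, which serves an OpenAI-compatible API at http://localhost:11434/v1 by default):

```sh
ollama pull llama3.2   # fetch the model weights locally
ollama serve           # start the local server, if it is not already running
```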
lgtm is meant to be integrated into your CI/CD pipeline, so that PR authors can choose to request reviews by running the necessary pipeline step.
For GitLab, you can use this `.gitlab-ci.yml` step as inspiration:

```yaml
lgtm-review:
  image:
    name: docker.io/elementsinteractive/lgtm-ai
    entrypoint: [""]
  stage: ai-review
  needs: []
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual
  script:
    - lgtm review --pr-url ${MR_URL} --git-api-key ${LGTM_GIT_API_KEY} --ai-api-key ${LGTM_AI_API_KEY} -v
  variables:
    MR_URL: "${CI_PROJECT_URL}/-/merge_requests/${CI_MERGE_REQUEST_IID}"
```
For GitHub, we plan to provide a GitHub Action soon. In the meantime, check out this repo's lgtm workflow, with which you can trigger reviews in PRs by commenting `/lgtm review`.
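A minimal sketch of such a workflow, assuming the PyPI package and the secret names shown below (the trigger filter and job steps are illustrative, not this repo's actual workflow):

```yaml
name: lgtm-review
on:
  issue_comment:
    types: [created]

jobs:
  review:
    # Only run for "/lgtm review" comments made on pull requests
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/lgtm review')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - run: pip install lgtm-ai
      - run: >
          lgtm review
          --pr-url "${{ github.event.issue.pull_request.html_url }}"
          --git-api-key "${{ secrets.GITHUB_TOKEN }}"
          --ai-api-key "${{ secrets.LGTM_AI_API_KEY }}"
          --publish
```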
You can customize how lgtm works by passing CLI arguments on invocation, or by using the lgtm configuration file.
lgtm uses a `.toml` file for its configuration. It will autodetect a `lgtm.toml` file in the current directory, or you can pass a specific file path with the CLI option `--config <path>`. These are the available options at the moment:
- `technologies`: You can specify, as a list of free strings, which technologies lgtm specializes in. This can be helpful for directing the reviewer towards specific technologies. By default, lgtm won't assume any technology and will just review the PR considering itself an "expert" in it.
- `categories`: lgtm will, by default, evaluate several areas of the given PR (`Quality`, `Correctness`, `Testing`, and `Security`). You can choose any subset of these (e.g., if you are only interested in `Correctness`, you can configure `categories` so that lgtm does not evaluate the other areas).
- `model`: Choose which AI model you want lgtm to use.
- `model_url`: When not using one of the specific supported models from the providers mentioned above, you can pass a custom URL where the model is deployed.
- `exclude`: Instruct lgtm to ignore certain files. This is important to reduce noise in reviews, but also to reduce the amount of tokens used for each review (and to avoid running into token limits). You can specify file patterns (`exclude = ["*.md", "package-lock.json"]`).
- `output_format`: Format of the terminal output of lgtm. Can be `pretty` (default), `json`, or `markdown`.
- `silent`: Do not print the review in the terminal.
- `publish`: If `true`, lgtm will post the review as comments on the PR page.
- `ai_api_key`: API key to call the selected AI model. Can be given as a CLI argument, or as an environment variable (`LGTM_AI_API_KEY`).
- `git_api_key`: API key to post the review in the source system of the PR. Can be given as a CLI argument, or as an environment variable (`LGTM_GIT_API_KEY`). It must not be empty when using a non-local model.
- `ai_retries`: How many times to retry calls to the LLM when they do not succeed. By default, this is set to 1 (no retries at all).
- `additional_context`: TOML array of extra context to send to the LLM. It supports setting the context directly in the `context` field, passing a relative file path so that lgtm downloads it from the repository, or passing any URL from which to download the context. Each element of the array must contain `prompt`, and either `context` (directly injecting context) or `file_url` (directing lgtm to download it from there).
Example `lgtm.toml`:

```toml
technologies = ["Django", "Python"]
categories = ["Correctness", "Quality", "Testing", "Security"]
exclude = ["*.md"]
model = "gpt-4.1"
silent = false
publish = true
ai_retries = 1

[[additional_context]]
prompt = "These are the development guidelines for the team, ensure the PR follows them"
file_url = "https://my.domain.com/dev-guidelines.md"

[[additional_context]]
prompt = "CI pipeline for the repo. Do not report issues that this pipeline would otherwise catch"
file_url = ".github/workflows/pr.yml"

[[additional_context]]
prompt = "Consider these points when making your review"
context = '''
- We avoid using libraries and rely mostly on the stdlib.
- We follow the newest syntax available for Python (3.13).
'''
```
Alternatively, lgtm also supports `pyproject.toml` files; you just need to nest the options inside `[tool.lgtm]`.
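For instance, part of the `lgtm.toml` example above would look like this in `pyproject.toml` (a sketch; only the nesting under `[tool.lgtm]` changes):

```toml
[tool.lgtm]
technologies = ["Django", "Python"]
exclude = ["*.md"]
model = "gpt-4.1"
publish = true
```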
When selecting options, lgtm follows this preference order: CLI options > `lgtm.toml` > `pyproject.toml`.
This project uses `just` recipes to do all the basic operations (testing the package, formatting the code, etc.).
Installation:

```sh
brew install just
# or
snap install --edge --classic just
```
It requires poetry.
These are the available commands for the justfile:

```text
Available recipes:
    help                       # Shows list of recipes.
    venv                       # Generate the virtual environment.
    clean                      # Cleans all artifacts generated while running this project, including the virtualenv.
    test *test-args=''         # Runs the tests with the specified arguments (any path or pytest argument).
    t *test-args=''            # alias for `test`
    test-all                   # Runs all tests including coverage report.
    format                     # Format all code in the project.
    lint                       # Lint all code in the project.
    pre-commit *precommit-args # Runs pre-commit with the given arguments (defaults to install).
    spellcheck *codespell-args # Spellchecks your markdown files.
    lint-commit                # Lints commit messages according to conventional commit rules.
```
To run the tests of this package, simply run:

```sh
# All tests
just t

# A single test
just t tests/test_dummy.py

# Pass arguments to pytest like this
just t -k test_dummy -vv
```
`poetry` is the tool we use for managing requirements in this project. The generated virtual environment is kept within the directory of the project (in a directory named `.venv`), thanks to the option `POETRY_VIRTUALENVS_IN_PROJECT=1`. Refer to the poetry documentation to see the list of available commands.
As a short summary:
- Add a dependency: `poetry add foo-bar`
- Remove a dependency: `poetry remove foo-bar`
- Update a dependency (within constraints set in `pyproject.toml`): `poetry update foo-bar`
- Update the lockfile with the contents of `pyproject.toml` (for instance, when getting a conflict after a rebase): `poetry lock`
- Check if `pyproject.toml` is in sync with `poetry.lock`: `poetry lock --check`
In this project we enforce conventional commits guidelines for commit messages. The usage of commitizen is recommended, but not required. Story numbers (JIRA, etc.) must go in the scope section of the commit message. Example message:
```text
feat(#<issue-number>): add new feature x
```
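If you use commitizen, it can build such a message interactively (a usage sketch assuming a standard commitizen installation):

```sh
cz commit   # interactive prompt that assembles a conventional commit message
```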
Merge requests must be approved before they can be merged to the `main` branch, and all the steps in the `ci` pipeline must pass.
This project includes an optional pre-commit configuration. Note that all necessary checks are always executed in the ci pipeline, but configuring pre-commit to execute some of them can be beneficial to reduce late errors. To do so, simply execute the following just recipe:
```sh
just pre-commit
```
Feel free to create GitHub Issues for any feature request, bug, or suggestion!
Thanks goes to these wonderful people (emoji key):
- Sergio Castillo 💻 🎨 🤔 🚧
- Jakub Bożanowski 💻 🤔 🚧
- Sacha Brouté 💻 🤔
- Daniel 🤔
- Rooni 💻
This project follows the all-contributors specification. Contributions of any kind welcome!