docs: add LoRA fine-tuning tutorial #3601

Draft

chiajunglien wants to merge 2 commits into AI-Hypercomputer:jackyf/feat/lora-nnx from CIeNET-International:emma/lora-tutorial-final

Conversation

@chiajunglien

Description

Start with a short description of what the PR does and how this is a change from
the past.

The rest of the description includes relevant details and context, examples:

  • why is this change being made,
  • the problem being solved and any relevant context,
  • why this is a good solution,
  • some information about the specific implementation,
  • shortcomings of the solution and possible future improvements.

If the change fixes a bug or a GitHub issue, please include a link, e.g.:
FIXES: b/123456
FIXES: #123456

Notice 1: Once all tests pass, the "pull ready" label will automatically be assigned.
This label is used for administrative purposes. Please do not add it manually.

Notice 2: For external contributions, our settings currently require an approval from a MaxText maintainer to trigger CI tests.

Tests

Please describe how you tested this change, and include any instructions and/or
commands to reproduce.

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

```sh
scan_layers=True
```

Your fine-tuned model checkpoints will be saved here: `$BASE_OUTPUT_DIRECTORY/$RUN_NAME/checkpoints`.
Collaborator


Also instruct the usage of the maxtext_lora_to_hf script.
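A sketch of what that instruction could look like, assembled from the script path and flag names quoted elsewhere in this thread (none of these are verified against the repo, and `<step_number>` stays a placeholder the reader must fill in):

```sh
# Hypothetical sketch: export the fine-tuned LoRA adapter to Hugging Face format.
# Script path and flags mirror the snippets quoted later in this review.
python3 maxtext/checkpoint_conversion/maxtext_to_hf_lora.py \
  maxtext/configs/post_train/sft.yml \
  model_name="${PRE_TRAINED_MODEL?}" \
  load_parameters_path="${BASE_OUTPUT_DIRECTORY?}/${RUN_NAME?}/checkpoints/<step_number>/items" \
  base_output_directory="${BASE_OUTPUT_DIRECTORY?}/hf_lora_adapter"
```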

```sh
learning_rate="${LEARNING_RATE?}" \
weight_dtype="${WEIGHT_DTYPE?}" \
dtype="${DTYPE?}" \
profiler=xplane \
```
Collaborator


I think we don't need to use the profiler in the tutorial.


Your fine-tuned model checkpoints will be saved here: `$BASE_OUTPUT_DIRECTORY/$RUN_NAME/checkpoints`.

## (Optional) Export Fine-tuned LoRA to Hugging Face Format
Collaborator

@RexBearIU, Apr 10, 2026


"Convert" would be more appropriate.


```sh
python3 maxtext/checkpoint_conversion/maxtext_to_hf_lora.py \
maxtext/configs/post_train/sft.yml \
```
Collaborator


Could we remove maxtext/configs/post_train/sft.yml?


```sh
python3 maxtext/checkpoint_conversion/hf_lora_to_maxtext.py \
maxtext/configs/post_train/sft.yml \
```
Collaborator


Could we remove maxtext/configs/post_train/sft.yml?

```sh
maxtext/configs/post_train/sft.yml \
model_name="${PRE_TRAINED_MODEL?}" \
load_parameters_path="${BASE_OUTPUT_DIRECTORY?}/${RUN_NAME?}/checkpoints/<step_number>/items" \
base_output_directory="${BASE_OUTPUT_DIRECTORY?}/hf_lora_adaptor" \
```
Collaborator


The rest of the file uses "adapter", so we should align.

If your LoRA adapter is currently in Hugging Face format, you must convert it to MaxText format before it can be loaded. Use the provided conversion script:

```sh
python3 maxtext/checkpoint_conversion/hf_lora_to_maxtext.py \
```
Collaborator


We should use `python3 -m maxtext.checkpoint_conversion.hf_lora_to_maxtext` to align with the training command.
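For reference, the suggested module-form invocation would look like this (a sketch; it assumes the script is importable as a module, which is not verified here):

```sh
# Module-form invocation; same arguments as the script-path form, only the entry point changes.
python3 -m maxtext.checkpoint_conversion.hf_lora_to_maxtext ...
```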

After completing the fine-tuning process, your LoRA weights are stored in MaxText/Orbax format. To use these weights with the Hugging Face ecosystem (e.g., for inference or sharing), convert them back using the `maxtext_lora_to_hf.py` script.

```sh
python3 maxtext/checkpoint_conversion/maxtext_to_hf_lora.py \
```
Collaborator


We should use `python3 -m maxtext.checkpoint_conversion.maxtext_to_hf_lora` to align with the training command.
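For reference, the suggested module-form invocation for the export direction would be (a sketch; module path assumed from the suggestion, not verified against the repo):

```sh
# Module-form invocation; same arguments as the script-path form, only the entry point changes.
python3 -m maxtext.checkpoint_conversion.maxtext_to_hf_lora ...
```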
