Merged

Commits (103)
f10285e
support prompts or token IDs in VLLMClient and update API request han…
qgallouedec Mar 5, 2026
7d2bb67
test
qgallouedec Mar 5, 2026
3b356ac
consistency
qgallouedec Mar 5, 2026
82c4508
fix
qgallouedec Mar 5, 2026
3ea2fcf
another fix
qgallouedec Mar 5, 2026
445f4ba
fix docstring
qgallouedec Mar 5, 2026
8c6c88d
Add support for multi-modal inputs in VLLMClient and vllm_serve
qgallouedec Mar 5, 2026
f617b2d
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 6, 2026
eaffd67
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 6, 2026
f3f6a5d
Move `rollout_func` from `_generate_single_turn` to `_generate`
qgallouedec Mar 6, 2026
d417543
fix style
qgallouedec Mar 6, 2026
4b927d6
support multi-image
qgallouedec Mar 6, 2026
029fc1f
style
qgallouedec Mar 6, 2026
20b4039
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 6, 2026
b8e3912
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 6, 2026
07181cb
Fix handling of images in OnlineDPOTrainer to ensure proper structure…
qgallouedec Mar 7, 2026
6ff1e56
Merge branch 'main' into vllm-accept-token-ids
qgallouedec Mar 7, 2026
9f340e4
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 7, 2026
d138be7
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 7, 2026
09128d6
Move tokenization before vLLM generation call
qgallouedec Mar 7, 2026
7fd1711
Fix deadlock issue by ensuring images are always gathered in VLLMGene…
qgallouedec Mar 7, 2026
3ab04b0
Unify tokenization across all generation backends in _generate_single…
qgallouedec Mar 7, 2026
5d6d067
Extract tokenization out of _generate_single_turn into _tokenize_prompts
qgallouedec Mar 7, 2026
b4d2c34
Enhance multimodal input handling in GRPO and RLOO trainers by adding…
qgallouedec Mar 7, 2026
4922362
style
qgallouedec Mar 7, 2026
37c48b3
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
3375aea
Fix re-tokenization bug in tool-calling loop by concatenating token IDs
qgallouedec Mar 7, 2026
638f88a
Enhance _tool_call_loop to support multimodal inputs by adding images…
qgallouedec Mar 7, 2026
9825358
Refactor generation methods in GRPO and RLOO trainers to remove unuse…
qgallouedec Mar 7, 2026
65d62db
Refactor GRPOTrainer generation methods to remove unused extra_fields…
qgallouedec Mar 7, 2026
d1685b1
multimodal
qgallouedec Mar 7, 2026
71de8c0
fix
qgallouedec Mar 7, 2026
0a264a2
Fix tokenization padding issue in GRPOTrainer to handle unpadded inpu…
qgallouedec Mar 7, 2026
0aa0e30
style
qgallouedec Mar 7, 2026
b490357
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
6fd47dc
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 7, 2026
8fecba1
align rloo
qgallouedec Mar 7, 2026
6c093dd
style
qgallouedec Mar 7, 2026
a9a91c7
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 7, 2026
934aae7
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 7, 2026
7e863e1
fix
qgallouedec Mar 7, 2026
f033e63
revert doc modif
qgallouedec Mar 9, 2026
5a1f609
Merge branch 'vllm-accept-token-ids' into vllm-support-image-with-raw…
qgallouedec Mar 9, 2026
1eb3540
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 9, 2026
498a564
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
be2ff99
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
5df2069
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
ae8767f
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 9, 2026
d3f7971
Merge branch 'main' into vllm-support-image-with-raw-token
qgallouedec Mar 9, 2026
319d52a
simplify multimodal
qgallouedec Mar 9, 2026
d5e1906
Merge branch 'main' into vllm-support-image-with-raw-token
qgallouedec Mar 9, 2026
4ccadcf
Merge branch 'vllm-support-image-with-raw-token' into move-rollout-func
qgallouedec Mar 9, 2026
2a80df9
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
a0df552
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
3350588
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
19ffe9e
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 9, 2026
0558dc9
Merge branch 'main' into move-rollout-func
qgallouedec Mar 9, 2026
6ebb681
Merge branch 'move-rollout-func' into vllm-generate-with-token-ids
qgallouedec Mar 9, 2026
93640e4
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 9, 2026
1c009b0
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 9, 2026
0c1fe0f
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 9, 2026
97a813b
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
83ab9bd
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
408fb2e
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
087b5e9
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
ade2831
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
258e0a8
Update trl/trainer/grpo_trainer.py
qgallouedec Mar 10, 2026
ef96048
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
0ee6495
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
bb6dc69
Update trl/trainer/grpo_trainer.py
qgallouedec Mar 10, 2026
0effa0d
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
fad1fdd
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
f2d1e01
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
b35f250
Remove unused chat/tool configuration parameters from VLLM and RLOO t…
qgallouedec Mar 10, 2026
040e392
Update trl/generation/vllm_generation.py
qgallouedec Mar 10, 2026
ca2cae3
Update trl/trainer/rloo_trainer.py
qgallouedec Mar 10, 2026
fee553d
Merge branch 'main' into vllm-generate-with-token-ids
qgallouedec Mar 10, 2026
90df2de
Merge branch 'vllm-generate-with-token-ids' into unify-tokenization-g…
qgallouedec Mar 10, 2026
f36c0ea
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
8678382
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
fdaa90a
fix
qgallouedec Mar 10, 2026
6f10cd2
style
qgallouedec Mar 10, 2026
533c337
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
50418e0
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
7e7e3b3
Merge branch 'main' into unify-tokenization-generate
qgallouedec Mar 10, 2026
31d8a0c
Merge branch 'unify-tokenization-generate' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
e88987f
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
8b4f6af
Merge branch 'main' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
a704d89
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
81cf273
Merge branch 'main' into extract-tokenize-prompts
qgallouedec Mar 10, 2026
918686b
Remove dead code: eliminate prompt tokenization logic from GRPOTraine…
qgallouedec Mar 10, 2026
9b8de83
remove unused extra_fields from _generate_single_turn return value
qgallouedec Mar 10, 2026
6c8f55c
style
qgallouedec Mar 10, 2026
130d974
Merge branch 'extract-tokenize-prompts' into fix-retokenization-tool-…
qgallouedec Mar 10, 2026
8b27397
properly merge upstream
qgallouedec Mar 10, 2026
6c9db28
fix
qgallouedec Mar 10, 2026
441725b
Merge branch 'main' into fix-retokenization-tool-loop
qgallouedec Mar 13, 2026
367a79e
align with main
qgallouedec Mar 13, 2026
f3f0f8d
fix
qgallouedec Mar 14, 2026
5147625
Merge branch 'main' into fix-retokenization-tool-loop
qgallouedec Mar 14, 2026
10708ca
Merge branch 'main' into fix-retokenization-tool-loop
qgallouedec Mar 16, 2026
f81f6a9
Merge branch 'main' into fix-retokenization-tool-loop
qgallouedec Mar 18, 2026
f74b5d1
Merge branch 'main' into fix-retokenization-tool-loop
qgallouedec Mar 19, 2026
50 changes: 38 additions & 12 deletions tests/test_grpo_trainer.py
@@ -162,17 +162,44 @@ def test_compute_entropy_all_masked(self):
class TestGRPORolloutDispatch:
def _make_trainer(self):
trainer = object.__new__(GRPOTrainer)
trainer.accelerator = SimpleNamespace(device=torch.device("cpu"), is_main_process=True)
trainer.accelerator = SimpleNamespace(
device=torch.device("cpu"),
is_main_process=True,
gather=lambda t: t,
)
trainer.args = SimpleNamespace(report_to=[])
trainer.model = SimpleNamespace(training=True)
trainer.state = SimpleNamespace(global_step=2)
trainer.state = SimpleNamespace(global_step=2, num_input_tokens_seen=0)
trainer._last_loaded_step = 1
trainer.use_vllm = False
trainer.use_transformers_paged = False
trainer.vllm_generation = SimpleNamespace(sync_weights=MagicMock())
trainer.processing_class = SimpleNamespace(
batch_decode=MagicMock(return_value=["decoded"]),
)
trainer.tools = None
trainer.eos_token_id = 2
trainer.pad_token_id = 0
trainer._metrics = {
"train": {
"num_tokens": [],
**{
k: []
for k in [
"completions/mean_length",
"completions/min_length",
"completions/max_length",
"completions/clipped_ratio",
"completions/mean_terminated_length",
"completions/min_terminated_length",
"completions/max_terminated_length",
]
},
}
}
return trainer

def test_generate_single_turn_prefers_rollout_func(self):
def test_generate_prefers_rollout_func(self):
trainer = self._make_trainer()
trainer.rollout_func = MagicMock(
return_value={
@@ -183,33 +210,32 @@ def test_generate_single_turn_prefers_rollout_func(self):
}
)

prompt_ids, completion_ids, logprobs, extra_fields = trainer._generate_single_turn(["prompt"])
result = trainer._generate(["prompt"])

assert prompt_ids == [[1]]
assert completion_ids == [[2]]
assert logprobs == [[-0.1]]
assert extra_fields == {"env_mask": [[1]]}
assert result[0] == [[1]] # prompt_ids
assert result[1] == [[2]] # completion_ids
assert result[2] == [[1]] # tool_mask (from env_mask)
trainer.rollout_func.assert_called_once_with(["prompt"], trainer)

def test_generate_single_turn_rollout_func_syncs_vllm_weights_when_needed(self):
def test_generate_rollout_func_syncs_vllm_weights_when_needed(self):
trainer = self._make_trainer()
trainer.use_vllm = True
trainer.rollout_func = MagicMock(
return_value={"prompt_ids": [[1]], "completion_ids": [[2]], "logprobs": [[0.0]]}
)

trainer._generate_single_turn(["prompt"])
trainer._generate(["prompt"])

trainer.vllm_generation.sync_weights.assert_called_once()
assert trainer._last_loaded_step == trainer.state.global_step
trainer.rollout_func.assert_called_once_with(["prompt"], trainer)

def test_generate_single_turn_rollout_func_raises_when_required_keys_are_missing(self):
def test_generate_rollout_func_raises_when_required_keys_are_missing(self):
trainer = self._make_trainer()
trainer.rollout_func = MagicMock(return_value={"prompt_ids": [[1]], "completion_ids": [[2]]})

with pytest.raises(ValueError, match="rollout_func must return keys"):
trainer._generate_single_turn(["prompt"])
trainer._generate(["prompt"])


class TestGRPOTrainer(TrlTestCase):
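The rollout dispatch tests above pin down a small contract: `rollout_func` receives the raw prompts plus the trainer, must return `prompt_ids`, `completion_ids`, and `logprobs` (a missing key raises `ValueError`), and may return extra per-token fields such as `env_mask`, which surfaces as the tool mask. A minimal conforming sketch (the function name and constant token IDs are illustrative, not part of the PR):

def custom_rollout_func(prompts: list[str], trainer) -> dict:
    # Placeholder IDs; any generation backend that yields token IDs works here.
    prompt_ids = [[1, 2, 3] for _ in prompts]
    completion_ids = [[4, 5] for _ in prompts]
    logprobs = [[-0.1, -0.2] for _ in prompts]  # one logprob per completion token
    return {
        "prompt_ids": prompt_ids,                # required
        "completion_ids": completion_ids,        # required
        "logprobs": logprobs,                    # required
        "env_mask": [[1, 1] for _ in prompts],   # optional extra field, exposed as the tool mask
    }

When `use_vllm` is set, `_generate` syncs vLLM weights before dispatching to the function.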
198 changes: 197 additions & 1 deletion tests/test_vllm_client_server.py
@@ -18,7 +18,7 @@

import pytest
from packaging.version import Version
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
from transformers.testing_utils import torch_device

from trl.generation.vllm_client import VLLMClient
@@ -31,6 +31,7 @@
kill_process,
require_3_accelerators,
require_torch_multi_accelerator,
require_vision,
require_vllm,
)

Expand Down Expand Up @@ -207,6 +208,31 @@ def multiply(a: int, b: int) -> int:
decoded_prompt = tokenizer.decode(outputs["prompt_ids"][0])
assert "Multiplies two integers." in decoded_prompt

def test_generate_with_token_ids(self):
tokenizer = AutoTokenizer.from_pretrained(self.model_id)
prompts = ["Hello, AI!", "Tell me a joke"]
prompt_token_ids = tokenizer(prompts)["input_ids"]
outputs = self.client.generate(prompt_token_ids)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

# Check that the outputs are lists
assert isinstance(prompt_ids, list)
assert isinstance(completion_ids, list)

# Check that the number of sequences equals the number of prompts
assert len(prompt_ids) == len(prompts)
assert len(completion_ids) == len(prompts)

# Check that prompt_ids match the input token IDs
assert prompt_ids == prompt_token_ids

# Check that the sequences are lists of integers
for seq in prompt_ids:
assert all(isinstance(tok, int) for tok in seq)
for seq in completion_ids:
assert all(isinstance(tok, int) for tok in seq)

def test_generate_with_params(self):
prompts = ["Hello, AI!", "Tell me a joke"]
completion_ids = self.client.generate(prompts, n=2, repetition_penalty=0.9, temperature=0.8, max_tokens=32)[
Expand Down Expand Up @@ -411,6 +437,31 @@ def multiply(a: int, b: int) -> int:
decoded_prompt = tokenizer.decode(outputs["prompt_ids"][0])
assert "Multiplies two integers." in decoded_prompt

def test_generate_with_token_ids(self):
tokenizer = AutoTokenizer.from_pretrained(self.model_id)
prompts = ["Hello, AI!", "Tell me a joke"]
prompt_token_ids = tokenizer(prompts)["input_ids"]
outputs = self.client.generate(prompt_token_ids)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

# Check that the outputs are lists
assert isinstance(prompt_ids, list)
assert isinstance(completion_ids, list)

# Check that the number of sequences equals the number of prompts
assert len(prompt_ids) == len(prompts)
assert len(completion_ids) == len(prompts)

# Check that prompt_ids match the input token IDs
assert prompt_ids == prompt_token_ids

# Check that the sequences are lists of integers
for seq in prompt_ids:
assert all(isinstance(tok, int) for tok in seq)
for seq in completion_ids:
assert all(isinstance(tok, int) for tok in seq)

def test_generate_with_params(self):
prompts = ["Hello, AI!", "Tell me a joke"]
completion_ids = self.client.generate(prompts, n=2, repetition_penalty=0.9, temperature=0.8, max_tokens=32)[
Expand Down Expand Up @@ -536,6 +587,31 @@ def multiply(a: int, b: int) -> int:
decoded_prompt = tokenizer.decode(outputs["prompt_ids"][0])
assert "Multiplies two integers." in decoded_prompt

def test_generate_with_token_ids(self):
tokenizer = AutoTokenizer.from_pretrained(self.model_id)
prompts = ["Hello, AI!", "Tell me a joke"]
prompt_token_ids = tokenizer(prompts)["input_ids"]
outputs = self.client.generate(prompt_token_ids)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

# Check that the outputs are lists
assert isinstance(prompt_ids, list)
assert isinstance(completion_ids, list)

# Check that the number of sequences equals the number of prompts
assert len(prompt_ids) == len(prompts)
assert len(completion_ids) == len(prompts)

# Check that prompt_ids match the input token IDs
assert prompt_ids == prompt_token_ids

# Check that the sequences are lists of integers
for seq in prompt_ids:
assert all(isinstance(tok, int) for tok in seq)
for seq in completion_ids:
assert all(isinstance(tok, int) for tok in seq)

def test_generate_with_params(self):
prompts = ["Hello, AI!", "Tell me a joke"]
completion_ids = self.client.generate(prompts, n=2, repetition_penalty=0.9, temperature=0.8, max_tokens=32)[
Expand Down Expand Up @@ -665,6 +741,31 @@ def multiply(a: int, b: int) -> int:
decoded_prompt = tokenizer.decode(outputs["prompt_ids"][0])
assert "Multiplies two integers." in decoded_prompt

def test_generate_with_token_ids(self):
tokenizer = AutoTokenizer.from_pretrained(self.model_id)
prompts = ["Hello, AI!", "Tell me a joke"]
prompt_token_ids = tokenizer(prompts)["input_ids"]
outputs = self.client.generate(prompt_token_ids)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

# Check that the outputs are lists
assert isinstance(prompt_ids, list)
assert isinstance(completion_ids, list)

# Check that the number of sequences equals the number of prompts
assert len(prompt_ids) == len(prompts)
assert len(completion_ids) == len(prompts)

# Check that prompt_ids match the input token IDs
assert prompt_ids == prompt_token_ids

# Check that the sequences are lists of integers
for seq in prompt_ids:
assert all(isinstance(tok, int) for tok in seq)
for seq in completion_ids:
assert all(isinstance(tok, int) for tok in seq)

def test_generate_with_params(self):
prompts = ["Hello, AI!", "Tell me a joke"]
completion_ids = self.client.generate(prompts, n=2, repetition_penalty=0.9, temperature=0.8, max_tokens=32)[
Expand Down Expand Up @@ -774,3 +875,98 @@ def teardown_class(cls):
# vLLM x pytest (or Popen) seems not to handle process termination well. To avoid zombie processes, we need to
# kill the server process and its children explicitly.
kill_process(cls.server_process)


@pytest.mark.slow
@require_vllm
@require_vision
class TestVLLMClientServerVLM(TrlTestCase):
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"

@classmethod
def setup_class(cls):
# Start the server process
cls.server_process = subprocess.Popen(
["trl", "vllm-serve", "--model", cls.model_id], stdout=subprocess.PIPE, stderr=subprocess.PIPE
)

# Initialize the client (no communicator needed for generation-only tests)
cls.client = VLLMClient(connection_timeout=240, host="localhost")

def test_generate_with_token_ids_and_image(self):
from PIL import Image

processor = AutoProcessor.from_pretrained(self.model_id)
image1 = Image.new("RGB", (64, 64), color="red")
image2 = Image.new("RGB", (64, 64), color="blue")
image3 = Image.new("RGB", (64, 64), color="green")
messages = [
[
{
"role": "user",
"content": [
{"type": "image", "image": image1},
{"type": "image", "image": image2},
{"type": "text", "text": "What are the differences between these two images?"},
],
}
],
[
{
"role": "user",
"content": [
{"type": "image", "image": image3},
{"type": "text", "text": "What is the color of this image?"},
],
}
],
]
prompt_token_ids = processor.apply_chat_template(
conversation=messages, tokenize=True, add_generation_prompt=True
)
outputs = self.client.generate(prompt_token_ids, images=[[image1, image2], [image3]], max_tokens=64)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

assert len(prompt_ids) == 2
assert len(completion_ids) == 2
assert all(isinstance(tok, int) for tok in prompt_ids[0])
assert all(isinstance(tok, int) for tok in completion_ids[0])

def test_generate_with_token_ids_mixed_images(self):
"""Test a batch where one prompt has an image and the other does not."""
from PIL import Image

processor = AutoProcessor.from_pretrained(self.model_id)
image = Image.new("RGB", (64, 64), color="red")
messages = [
[
{
"role": "user",
"content": [{"type": "image", "image": image}, {"type": "text", "text": "Describe this image."}],
}
],
[
{
"role": "user",
"content": [{"type": "text", "text": "What is 1+1?"}],
}
],
]
prompt_token_ids = processor.apply_chat_template(
conversation=messages, tokenize=True, add_generation_prompt=True
)
outputs = self.client.generate(prompt_token_ids, images=[[image], None], max_tokens=64)
prompt_ids = outputs["prompt_ids"]
completion_ids = outputs["completion_ids"]

assert len(prompt_ids) == 2
assert len(completion_ids) == 2
assert all(isinstance(tok, int) for tok in prompt_ids[0])
assert all(isinstance(tok, int) for tok in prompt_ids[1])
assert all(isinstance(tok, int) for tok in completion_ids[0])
assert all(isinstance(tok, int) for tok in completion_ids[1])

@classmethod
def teardown_class(cls):
kill_process(cls.server_process)
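The four copies of `test_generate_with_token_ids` above all exercise the same round trip: tokenize text yourself, pass raw token IDs to `VLLMClient.generate`, and receive the same `prompt_ids` echoed back. A condensed sketch, assuming a `trl vllm-serve` instance is already running on localhost (the model name is illustrative):

from transformers import AutoTokenizer

from trl.generation.vllm_client import VLLMClient

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
client = VLLMClient(host="localhost")
prompt_token_ids = tokenizer(["Hello, AI!", "Tell me a joke"])["input_ids"]
outputs = client.generate(prompt_token_ids, max_tokens=32)
assert outputs["prompt_ids"] == prompt_token_ids  # input token IDs come back unchanged
completions = tokenizer.batch_decode(outputs["completion_ids"])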
4 changes: 3 additions & 1 deletion trl/experimental/online_dpo/online_dpo_trainer.py
@@ -750,7 +750,9 @@ def _generate_vllm_server(self, prompts, images=None):
# prompt individually.
ordered_set_of_prompts = all_prompts[:: self.num_generations]
if has_images:
ordered_set_of_images = all_images[:: self.num_generations]
ordered_set_of_images = [
[img] if img is not None else None for img in all_images[:: self.num_generations]
]
else:
ordered_set_of_images = None
completion_ids = self.vllm_client.generate(
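The one-line fix above is purely a shape change: the vLLM client now expects one image list per prompt, so a bare image is wrapped in a single-element list and a missing image stays `None`. Illustrated with stand-in images:

from PIL import Image

img_a = Image.new("RGB", (8, 8), color="red")   # stand-in images for illustration
img_b = Image.new("RGB", (8, 8), color="blue")
all_images = [img_a, None, img_b]
ordered_set_of_images = [[img] if img is not None else None for img in all_images]
assert ordered_set_of_images == [[img_a], None, [img_b]]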
19 changes: 12 additions & 7 deletions trl/generation/vllm_client.py
@@ -201,7 +201,7 @@ def check_server(self, total_timeout: float = 0.0, retry_interval: float = 2.0):

def generate(
self,
prompts: list[str],
prompts: list[str] | list[list[int]],
images: list | None = None,
n: int = 1,
repetition_penalty: float = 1.0,
@@ -219,10 +219,11 @@
Generates model completions for the provided prompts.

Args:
prompts (`list[str]`):
List of text prompts for which the model will generate completions.
images (`list[PIL.Image]`, *optional*):
List of PIL Images to send along with the prompts.
prompts (`list[str]` or `list[list[int]]`):
List of text prompts or list of token ID lists for which the model will generate completions.
images (`list[list[PIL.Image] | None]`, *optional*):
List of image lists for VLM support. Each element is a list of PIL images for the corresponding prompt,
or `None` if no images for that prompt.
n (`int`, *optional*, defaults to `1`):
Number of completions to generate for each prompt.
repetition_penalty (`float`, *optional*, defaults to `1.0`):
@@ -265,8 +266,12 @@
"""
url = f"{self.base_url}/generate/"

# Convert PIL images to base64 strings
images = [pil_to_base64(img) for img in images] if images else None
# Convert PIL images to base64 strings. Each element is a list of images for the corresponding prompt,
# or None if no images for that prompt.
if images:
images = [
[pil_to_base64(img) for img in img_list] if img_list is not None else None for img_list in images
]

response = self.session.post(
url,
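Putting the updated signature together: `generate` now accepts either text prompts or token-ID lists, plus a per-prompt list of images. A hedged end-to-end sketch mirroring the VLM tests (server assumed already running; model name illustrative):

from PIL import Image
from transformers import AutoProcessor

from trl.generation.vllm_client import VLLMClient

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
client = VLLMClient(host="localhost")
red = Image.new("RGB", (64, 64), color="red")
messages = [
    [{"role": "user", "content": [
        {"type": "image", "image": red},
        {"type": "text", "text": "Describe this image."},
    ]}],
    [{"role": "user", "content": [{"type": "text", "text": "What is 1+1?"}]}],
]
prompt_token_ids = processor.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True
)
# One image list for the first prompt, None for the text-only prompt.
outputs = client.generate(prompt_token_ids, images=[[red], None], max_tokens=64)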