Commit 858a0db

make style and update the example.

1 parent: b39aa94

File tree

1 file changed: +8 -4 lines changed


transformers/transformers_simple.py

Lines changed: 8 additions & 4 deletions
@@ -3,8 +3,8 @@
 with hyperparameter optimization using Optuna. In this example, we fine-tune a lightweight
 pre-trained BERT model on a small subset of the IMDb dataset to classify movie reviews as
 positive or negative. We optimize the validation accuracy by tuning the learning rate
-and batch size. To learn more about transformers' hyperparameter search,
-you can check the following documentation:
+and batch size. To learn more about transformers' hyperparameter search,
+you can check the following documentation:
 https://huggingface.co/docs/transformers/en/hpo_train.
 """
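The docstring above says the example tunes the learning rate and batch size with Optuna through transformers' hyperparameter search. A minimal sketch of what such a search-space function looks like, assuming a `Trainer` constructed with `model_init` and illustrative parameter ranges (the script's actual ranges are not shown in this diff):

```python
# Hypothetical Optuna search space for Trainer.hyperparameter_search.
# The ranges below are assumptions for illustration, not the script's values.
def optuna_hp_space(trial):
    # Optuna passes a Trial object; we sample the two hyperparameters
    # the docstring mentions: learning rate and per-device batch size.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [8, 16, 32]
        ),
    }


# With a Trainer built using model_init=..., the search would be launched as:
# best_run = trainer.hyperparameter_search(
#     direction="maximize", backend="optuna", hp_space=optuna_hp_space, n_trials=10
# )
```

`hyperparameter_search` returns a `BestRun` whose `hyperparameters` dict holds the winning values; see the linked hpo_train documentation for the full interface.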

@@ -36,8 +36,12 @@ def tokenize(batch):
     return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)


-tokenized_train = train_dataset.map(tokenize, batched=True).select_columns(["input_ids", "attention_mask", "label"])
-tokenized_valid = valid_dataset.map(tokenize, batched=True).select_columns(["input_ids", "attention_mask", "label"])
+tokenized_train = train_dataset.map(tokenize, batched=True).select_columns(
+    ["input_ids", "attention_mask", "label"]
+)
+tokenized_valid = valid_dataset.map(tokenize, batched=True).select_columns(
+    ["input_ids", "attention_mask", "label"]
+)


 metric = evaluate.load("accuracy")
