Using hyperparameter-search in Trainer

This branch hasn't been merged yet, but I want to use optuna in my workflow. Although I have tried it, I want to confirm the usage. @sgugger (firstly, thanks for the PR) could you please provide instructions on what changes I need to make to get it working (like defining the search space, getting results on it, and finding the best hyperparams)? I want to confirm that I'm using it in the right manner. Also, is the implementation complete?


Hi there!

This is a work in progress, so I'd hold off a tiny bit before starting to use it (I'll actually make some changes today). I'll add an example in the PR once I'm done (hopefully by the end of the day) so you (and others) can start playing with it and give us potential feedback, but be prepared for some slight changes in the API as we polish it (we want to support other hp-search platforms such as Ray).


Thanks for the reply. I'll look forward to the example and to using it. I'll hopefully try to contribute if I come across some rough edges. Trainer changes a lot; my inherited trainer code breaks most of the time after each update, so I'm prepared for it :wink:.

Ok, done for today, and I paved the way to support Ray as well (not working right now, though). There is an example on a regression problem in the README because I didn't want to launch my GPU setup. Will add a real example soon, but it should be enough to get you going.

Could you please tell me where that README is? I checked your recent commits on both the trainer_optuna branch and master and didn't see it.

Sorry, not the README; I meant the first post of the PR.


I've put up a real example now.

What are the pros/cons of optuna vs. Ray?

Both work with the API. I haven't used either long enough to have a strong opinion, but basically, Ray would be better if you have multiple GPUs and optuna might be better with just one, from what I understood.


FYI, this has been merged into master. Here is an example of use:

from nlp import load_dataset, load_metric
from transformers import AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = load_dataset('glue', 'mrpc')
metric = load_metric('glue', 'mrpc')

def encode(examples):
    outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True)
    return outputs

encoded_dataset = dataset.map(encode, batched=True)
# Won't be necessary once this PR is merged into master, since the Trainer will do it automatically
encoded_dataset.set_format(columns=['attention_mask', 'input_ids', 'token_type_ids', 'label'])

def model_init():
    return AutoModelForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    return metric.compute(predictions=predictions, references=labels)

# Evaluate during training and a bit more often than the default to be able to prune bad trials early.
# Disabling tqdm is a matter of preference.
training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=500, disable_tqdm=True)
trainer = Trainer(
    args=training_args,
    data_collator=DataCollatorWithPadding(tokenizer),
    train_dataset=encoded_dataset["train"], 
    eval_dataset=encoded_dataset["validation"], 
    model_init=model_init,
    compute_metrics=compute_metrics,
)

# Default objective is the sum of all metrics when metrics are provided, so we have to maximize it.
trainer.hyperparameter_search(direction="maximize")

This will use optuna or Ray Tune, depending on which you have installed. If you have both, it will use optuna by default, but you can pass backend="ray" to use Ray Tune. Note that you need a source installation of nlp to make the example work.
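For instance, to force a specific backend and number of trials explicitly (a small sketch; it assumes ray[tune] is installed and uses the n_trials argument of hyperparameter_search):

# Explicitly pick the Ray Tune backend and launch 10 trials.
trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=10)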

To customize the hyperparameter search space, you can pass a function hp_space to this call. Here is an example if you want to search higher learning rates than the default with optuna:

def my_hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
        "seed": trial.suggest_int("seed", 1, 40),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [4, 8, 16, 32, 64]),
    }

trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space)

and with Ray:

def my_hp_space_ray(trial):
    from ray import tune

    return {
        "learning_rate": tune.loguniform(1e-4, 1e-2),
        "num_train_epochs": tune.choice(range(1, 6)),
        "seed": tune.choice(range(1, 41)),
        "per_device_train_batch_size": tune.choice([4, 8, 16, 32, 64]),
    }

trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space_ray, backend="ray")

If you want to customize the objective to minimize/maximize, pass along a function to compute_objective:

def my_objective(metrics):
    # Your elaborate computation here; as a concrete example,
    # maximize the F1 score reported at evaluation time
    # (compute_metrics above reports it as "eval_f1").
    return metrics["eval_f1"]

trainer.hyperparameter_search(direction="maximize", compute_objective=my_objective)

Thanks, I was following this PR. I wanted to know which types of hyperparams can be tuned with this approach. Does it work with the default ones only (training_args)? What if we have a custom param that we want to tune (for instance, a lambda in an objective function)?


The hyperparams you can tune must be in the TrainingArguments you passed to your Trainer. If you have custom ones that are not in TrainingArguments, just subclass TrainingArguments and add them in your subclass, as in the sketch below.
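For example (a minimal sketch; my_lambda is a made-up custom hyperparameter, not something in the library):

from dataclasses import dataclass, field
from transformers import TrainingArguments

@dataclass
class MyTrainingArguments(TrainingArguments):
    # Custom hyperparameter, e.g. the weight of an extra loss term.
    my_lambda: float = field(default=0.1, metadata={"help": "Weight of the extra loss term."})

def my_custom_hp_space(trial):
    # Every key returned here must match a field on the (subclassed) TrainingArguments.
    return {
        "my_lambda": trial.suggest_float("my_lambda", 0.01, 1.0, log=True),
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
    }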

The hp_space function indicates the hyperparameter search space (see the code of the default for optuna or Ray in trainer_utils.py and adapt it to your needs) and the compute_objective function should return the objective to minimize/maximize.
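For reference, the default objective behaves roughly like this (a simplified sketch, not the exact library code):

import copy

def default_objective(metrics):
    metrics = copy.deepcopy(metrics)
    # Take out the loss and bookkeeping keys; sum whatever metrics remain.
    loss = metrics.pop("eval_loss", None)
    _ = metrics.pop("epoch", None)
    # With no metrics left, fall back to the loss (to minimize);
    # otherwise return the sum of the metrics (to maximize).
    return loss if len(metrics) == 0 else sum(metrics.values())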


Thank you so much! But I have a problem when defining the Trainer. It said, "__init__() got an unexpected keyword argument 'model_init'". Does the Trainer not recognize the model_init argument?

I think this error leads to the next one, which appears when I call the hyperparameter_search method. It said, "'Trainer' object has no attribute 'hyperparameter_search'".

What should I do? Very sorry for the very newbie question :pray: and thank you in advance.

This is new, so you need an installation from source to use it. Otherwise, it will be in the next release, coming soon.
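In case it helps, a source install can be done with something like:

pip install git+https://github.com/huggingface/transformers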


Alright, I'm waiting for it! :rocket:

FYI, you can pip install now to use this feature. No need to build from source.


Oh yeah, thank you, it seems to be released now. But I'm still getting a problem with the hyperparameter_search method. I set my backend parameter to 'optuna' but the error said: "You picked the optuna backend, but it is not installed. Use pip install optuna.", even though I'd already pip-installed it before the hyperparameter_search code line. The same happened when I set the backend parameter to 'ray'. Have I made a mistake? I run my code in Google Colab, by the way.

It means that it is not installed in your current environment. If you are using notebooks, you have to restart the kernel. Python needs to reload the libraries to see which ones are available.


Oh yeah, it works now. Thank you so much! :grin:

I wonder if Sylvain or others might have advice on how to make the hyperparameter search more efficient or manageable, time- and resource-wise.

I've tried slimming down the dataset (500K rows to 90K rows), reducing the number of parameters to tune (to just one, the number of epochs), and changing the "direction" to "minimize" instead of "maximize".

Is there something else I can do, aside from further cutting down the size of the dataset? I'm running trials on Colab Pro with GPU/high-RAM enabled, and the current version looks like it'll take about 7 hours (perfectly fine for others, I'm sure).

I don't suppose there's an equivalent of RandomizedSearchCV for the Trainer?