About Hyperparameter Search with Ray Tune

When using hyperparameter_search() we can choose between several backends, such as Ray Tune, Optuna, or HyperOpt.
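For example, selecting the Ray backend looks roughly like this (a minimal sketch; model_init, train_ds and eval_ds are placeholders for your own model factory and datasets):

    from ray import tune
    from transformers import Trainer, TrainingArguments

    trainer = Trainer(
        model_init=model_init,  # a function that returns a fresh model for each trial
        args=TrainingArguments(output_dir="hp_search"),
        train_dataset=train_ds,
        eval_dataset=eval_ds,
    )

    # Search space for the Ray backend: a dict of tune.* samplers keyed by TrainingArguments fields.
    def ray_hp_space(trial):
        return {
            "learning_rate": tune.loguniform(1e-6, 1e-4),
            "per_device_train_batch_size": tune.choice([8, 16, 32]),
        }

    best_run = trainer.hyperparameter_search(
        hp_space=ray_hp_space,
        backend="ray",
        n_trials=10,
        direction="minimize",
    )
    print(best_run.hyperparameters)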

With the Ray backend, the hyperparameter search calls tune.run().
However, Ray now recommends a newer entry point, Tuner.fit(), which was added in Ray 2.0.

From the Ray docs (Tune Experiment Results (tune.ResultGrid), Ray 2.43.0):

Note: An ExperimentAnalysis is the output of the tune.run API. It's now recommended to use Tuner.fit, which outputs a ResultGrid object.
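For comparison, a minimal sketch of the newer API (assuming Ray >= 2.0; the toy objective just stands in for an actual training run):

    from ray import tune

    def objective(config):
        # Toy objective; returning a dict reports it as the trial's final result.
        return {"objective": (config["x"] - 2) ** 2}

    tuner = tune.Tuner(
        objective,
        param_space={"x": tune.uniform(-5.0, 5.0)},
        tune_config=tune.TuneConfig(metric="objective", mode="min", num_samples=20),
    )
    results = tuner.fit()             # returns a ResultGrid instead of an ExperimentAnalysis
    best = results.get_best_result()  # metric/mode are taken from TuneConfig
    print(best.config, best.metrics["objective"])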

Are there any plans to implement the Tuner API for the Ray backend in the future?


I think you'll have to ask the Ray developers on GitHub.

I think it should be a new feature.

Looking at the source code, it is still calling tune.run():

transformers/src/transformers/integrations/integration_utils.py at main · huggingface/transformers

    analysis = ray.tune.run(
        dynamic_modules_import_trainable,
        config=trainer.hp_space(None),
        num_samples=n_trials,
        **kwargs,
    )
    best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope)
    best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config, analysis)
    if _tb_writer is not None:
        trainer.add_callback(_tb_writer)
    return best_run
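If this were migrated, I imagine it would look roughly like the following (a hypothetical sketch, not actual transformers code, reusing the variables from the snippet above; the extra **kwargs such as schedulers, resources and ray_scope would still need to be mapped onto TuneConfig/RunConfig, which is probably the real work in such a change):

    tuner = ray.tune.Tuner(
        dynamic_modules_import_trainable,
        param_space=trainer.hp_space(None),
        tune_config=ray.tune.TuneConfig(
            metric="objective",
            mode=direction[:3],
            num_samples=n_trials,
        ),
    )
    results = tuner.fit()                    # ResultGrid
    best_result = results.get_best_result()  # metric/mode come from TuneConfig
    best_run = BestRun(
        best_result.metrics.get("trial_id"),  # trial_id is usually among the auto-filled result fields
        best_result.metrics["objective"],
        best_result.config,
        results,
    )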