Optuna: A hyperparameter optimization framework¶
Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
Key Features¶
Optuna has modern functionalities as follows:
Lightweight, versatile, and platform agnostic architecture
Handle a wide variety of tasks with a simple installation that has few requirements.
Pythonic search spaces
Define search spaces using familiar Python syntax including conditionals and loops.
Efficient optimization algorithms
Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
Easy parallelization
Scale studies to tens or hundreds of workers with little or no changes to the code.
Quick visualization
Inspect optimization histories from a variety of plotting functions.
Basic Concepts¶
We use the terms study and trial as follows:
Study: optimization based on an objective function
Trial: a single execution of the objective function
Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svm_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed for the automation and the acceleration of optimization studies.
import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm

# Define an objective function to be minimized.
def objective(trial):
    # Invoke suggest methods of a Trial object to generate hyperparameters.
    regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
    if regressor_name == 'SVR':
        svr_c = trial.suggest_loguniform('svr_c', 1e-10, 1e10)
        regressor_obj = sklearn.svm.SVR(C=svr_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)

    X, y = sklearn.datasets.load_boston(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor_obj.fit(X_train, y_train)
    y_pred = regressor_obj.predict(X_val)

    error = sklearn.metrics.mean_squared_error(y_val, y_pred)

    return error  # An objective value linked with the Trial object.

study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.
Communication¶
GitHub Issues for bug reports, feature requests and questions.
Gitter for interactive chat with developers.
Stack Overflow for questions.
Contribution¶
Any contributions to Optuna are welcome! When you send a pull request, please follow the contribution guide.
Reference¶
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).
Installation¶
Optuna supports Python 3.6 or newer.
We recommend installing Optuna via pip:
$ pip install optuna
You can also install the development version of Optuna from the master branch of the Git repository:
$ pip install git+https://github.com/optuna/optuna.git
You can also install Optuna via conda:
$ conda install -c conda-forge optuna
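As a quick sanity check of the installation, you can import Optuna and print its version:
$ python -c "import optuna; print(optuna.__version__)"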
Tutorial¶
If you are new to Optuna or want a general introduction, we highly recommend the video below.
Key Features¶
Showcases Optuna’s Key Features.
Lightweight, versatile, and platform agnostic architecture¶
Optuna is entirely written in Python and has few dependencies. This means you can quickly move on to real examples once you become interested in Optuna.
Quadratic Function Example¶
Usually, Optuna is used to optimize hyperparameters, but as an example, let’s optimize a simple quadratic function: \((x - 2)^2\).
First of all, import optuna.
import optuna
In Optuna, functions to be optimized are conventionally named objective.
def objective(trial):
x = trial.suggest_float("x", -10, 10)
return (x - 2) ** 2
This function returns the value of \((x - 2)^2\). Our goal is to find the value of x that minimizes the output of the objective function. This is the “optimization.” During the optimization, Optuna repeatedly calls and evaluates the objective function with different values of x.
A Trial object corresponds to a single execution of the objective function and is internally instantiated upon each invocation of the function.
The suggest APIs (for example, suggest_float()) are called inside the objective function to obtain parameters for a trial. suggest_float() selects parameters uniformly within the range provided. In our example, from \(-10\) to \(10\).
To start the optimization, we create a study object and pass the objective function to the method optimize() as follows.
study = optuna.create_study()
study.optimize(objective, n_trials=100)
You can get the best parameter as follows.
best_params = study.best_params
found_x = best_params["x"]
print("Found x: {}, (x - 2)^2: {}".format(found_x, (found_x - 2) ** 2))
Out:
Found x: 2.0016016629797964, (x - 2)^2: 2.5653243008501516e-06
We can see that the x value found by Optuna is close to the optimal value of 2.
Note
When used to search for hyperparameters in machine learning, usually the objective function would return the loss or accuracy of the model.
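For instance, when the objective returns an accuracy to be maximized rather than a loss to be minimized, the optimization direction can be set at study creation. A minimal sketch (direction="maximize" is the relevant argument; the objective itself is assumed to return an accuracy):
# Create a study that maximizes the returned objective value
# (e.g., an accuracy) instead of minimizing it.
study = optuna.create_study(direction="maximize")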
Study Object¶
Let us clarify the terminology in Optuna as follows:
Trial: A single call of the objective function
Study: An optimization session, which is a set of trials
Parameter: A variable whose value is to be optimized, such as x in the above example
In Optuna, we use the study object to manage optimization.
Method create_study() returns a study object.
A study object has useful properties for analyzing the optimization outcome.
To get the dictionary of parameter names and parameter values:
study.best_params
Out:
{'x': 2.0016016629797964}
To get the best observed value of the objective function:
study.best_value
Out:
2.5653243008501516e-06
To get the best trial:
study.best_trial
Out:
FrozenTrial(number=82, value=2.5653243008501516e-06, datetime_start=datetime.datetime(2020, 12, 4, 4, 8, 3, 578971), datetime_complete=datetime.datetime(2020, 12, 4, 4, 8, 3, 582270), params={'x': 2.0016016629797964}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=82, state=TrialState.COMPLETE)
To get all trials:
study.trials
Out:
[FrozenTrial(number=0, value=65.70855554403302, datetime_start=datetime.datetime(2020, 12, 4, 4, 8, 3, 315660), datetime_complete=datetime.datetime(2020, 12, 4, 4, 8, 3, 315855), params={'x': -6.106081392635594}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=0, state=TrialState.COMPLETE), FrozenTrial(number=1, value=32.90935193400542, ...), ..., FrozenTrial(number=99, value=7.429115915052298, datetime_start=datetime.datetime(2020, 12, 4, 4, 8, 3, 640670), datetime_complete=datetime.datetime(2020, 12, 4, 4, 8, 3, 643906), params={'x': 4.725640459608035}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=99, state=TrialState.COMPLETE)]
To get the number of trials:
len(study.trials)
Out:
100
By executing optimize() again, we can continue the optimization.
study.optimize(objective, n_trials=100)
To get the updated number of trials:
len(study.trials)
Out:
200
Because the objective function is so simple, the last 100 trials don’t improve the result. However, we can check the result again:
best_params = study.best_params
found_x = best_params["x"]
print("Found x: {}, (x - 2)^2: {}".format(found_x, (found_x - 2) ** 2))
Out:
Found x: 2.0016016629797964, (x - 2)^2: 2.5653243008501516e-06
Pythonic Search Space¶
For hyperparameter sampling, Optuna provides the following features:
optuna.trial.Trial.suggest_categorical() for categorical parameters
optuna.trial.Trial.suggest_int() for integer parameters
optuna.trial.Trial.suggest_float() for floating point parameters
With optional arguments of step and log, we can discretize or take the logarithm of integer and floating point parameters.
import optuna


def objective(trial):
    # Categorical parameter
    optimizer = trial.suggest_categorical("optimizer", ["MomentumSGD", "Adam"])

    # Integer parameter
    num_layers = trial.suggest_int("num_layers", 1, 3)

    # Integer parameter (log)
    num_channels = trial.suggest_int("num_channels", 32, 512, log=True)

    # Integer parameter (discretized)
    num_units = trial.suggest_int("num_units", 10, 100, step=5)

    # Floating point parameter
    dropout_rate = trial.suggest_float("dropout_rate", 0.0, 1.0)

    # Floating point parameter (log)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)

    # Floating point parameter (discretized)
    drop_path_rate = trial.suggest_float("drop_path_rate", 0.0, 1.0, step=0.1)
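Note that step and log cannot be combined for a single parameter: passing both to suggest_float() raises a ValueError (worth confirming against the API reference for your Optuna version).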
Defining Parameter Spaces¶
In Optuna, we define search spaces using familiar Python syntax including conditionals and loops.
You can also use branches and loops that depend on previously suggested parameter values.
For more advanced use cases, see the examples.
Branches:
import sklearn.ensemble
import sklearn.svm


def objective(trial):
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    if classifier_name == "SVC":
        svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
        classifier_obj = sklearn.svm.SVC(C=svc_c)
    else:
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
        classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth)
Loops:
import torch
import torch.nn as nn


def create_model(trial, in_size):
    n_layers = trial.suggest_int("n_layers", 1, 3)

    layers = []
    for i in range(n_layers):
        n_units = trial.suggest_int("n_units_l{}".format(i), 4, 128, log=True)
        layers.append(nn.Linear(in_size, n_units))
        layers.append(nn.ReLU())
        in_size = n_units
    layers.append(nn.Linear(in_size, 10))

    return nn.Sequential(*layers)
The difficulty of optimization increases roughly exponentially with the number of parameters. That is, the number of necessary trials grows exponentially as you add parameters, so it is recommended not to add unimportant parameters.
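As a toy illustration of that scaling (not how Optuna’s samplers actually search): if each of k parameters took n candidate values, an exhaustive grid would already contain n ** k combinations.
# Toy calculation: search space size grows exponentially with the
# number of parameters (n candidate values per parameter, k parameters).
n = 10
for k in (2, 4, 6):
    print(k, "parameters ->", n ** k, "grid combinations")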
Efficient Optimization Algorithms¶
Optuna enables efficient hyperparameter optimization by adopting state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
Sampling Algorithms¶
Samplers continually narrow down the search space using the records of suggested parameter values and evaluated objective values, converging on a region that yields parameters with better objective values.
A more detailed explanation of how samplers suggest parameters is given in optuna.samplers.BaseSampler.
Optuna provides the following sampling algorithms:
Tree-structured Parzen Estimator algorithm implemented in optuna.samplers.TPESampler
CMA-ES based algorithm implemented in optuna.samplers.CmaEsSampler
Grid Search implemented in optuna.samplers.GridSampler
Random Search implemented in optuna.samplers.RandomSampler
The default sampler is optuna.samplers.TPESampler.
Switching Samplers¶
import optuna
By default, Optuna uses TPESampler as follows.
study = optuna.create_study()
print(f"Sampler is {study.sampler.__class__.__name__}")
Out:
Sampler is TPESampler
If you want to use different samplers, for example RandomSampler and CmaEsSampler:
study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
Out:
Sampler is RandomSampler
Sampler is CmaEsSampler
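The same pattern applies to the other samplers. GridSampler is slightly different in construction: it additionally takes an explicit search space. A minimal sketch (the parameter names and candidate values here are illustrative):
# GridSampler requires an explicit search space mapping parameter
# names to their candidate values (illustrative names and values).
search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
print(f"Sampler is {study.sampler.__class__.__name__}")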
Pruning Algorithms¶
Pruners automatically stop unpromising trials at the early stages of the training (a.k.a., automated early-stopping).
Optuna provides the following pruning algorithms:
Asynchronous Successive Halving algorithm implemented in optuna.pruners.SuccessiveHalvingPruner
Hyperband algorithm implemented in optuna.pruners.HyperbandPruner
Median pruning algorithm implemented in optuna.pruners.MedianPruner
Threshold pruning algorithm implemented in optuna.pruners.ThresholdPruner
We use optuna.pruners.MedianPruner in most examples, though it is basically outperformed by optuna.pruners.SuccessiveHalvingPruner and optuna.pruners.HyperbandPruner, as shown in this benchmark result.
Activating Pruners¶
To turn on the pruning feature, you need to call report() and should_prune() after each step of the iterative training.
report() periodically reports intermediate objective values.
should_prune() decides whether to terminate a trial that does not meet a predefined condition.
We recommend using the integration modules for major machine learning frameworks.
The full list is in optuna.integration, and use cases are available in optuna/examples.
import logging
import sys

import optuna
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection


def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0
    )

    alpha = trial.suggest_loguniform("alpha", 1e-5, 1e-1)
    clf = sklearn.linear_model.SGDClassifier(alpha=alpha)

    for step in range(100):
        clf.partial_fit(train_x, train_y, classes=classes)

        # Report intermediate objective value.
        intermediate_value = 1.0 - clf.score(valid_x, valid_y)
        trial.report(intermediate_value, step)

        # Handle pruning based on the intermediate value.
        if trial.should_prune():
            raise optuna.TrialPruned()

    return 1.0 - clf.score(valid_x, valid_y)
Set up the median stopping rule as the pruning condition.
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
Out:
A new study created in memory with name: no-name-5263b7fa-04af-48cd-ba4a-e31f21cc72d8
Trial 0 finished with value: 0.1842105263157895 and parameters: {'alpha': 0.016084331257362926}. Best is trial 0 with value: 0.1842105263157895.
Trial 1 finished with value: 0.2894736842105263 and parameters: {'alpha': 0.00033196551722307354}. Best is trial 0 with value: 0.1842105263157895.
Trial 2 finished with value: 0.052631578947368474 and parameters: {'alpha': 0.007563446430642264}. Best is trial 2 with value: 0.052631578947368474.
Trial 3 finished with value: 0.1578947368421053 and parameters: {'alpha': 6.434101603566283e-05}. Best is trial 2 with value: 0.052631578947368474.
Trial 4 finished with value: 0.23684210526315785 and parameters: {'alpha': 0.0008725002214613266}. Best is trial 2 with value: 0.052631578947368474.
Trial 5 pruned.
Trial 6 finished with value: 0.13157894736842102 and parameters: {'alpha': 0.0008013069445256013}. Best is trial 2 with value: 0.052631578947368474.
Trial 7 pruned.
Trial 8 finished with value: 0.2894736842105263 and parameters: {'alpha': 0.024303444481235635}. Best is trial 2 with value: 0.052631578947368474.
Trial 9 finished with value: 0.07894736842105265 and parameters: {'alpha': 0.0015488949943686919}. Best is trial 2 with value: 0.052631578947368474.
Trial 10 pruned.
Trial 11 pruned.
Trial 12 pruned.
Trial 13 pruned.
Trial 14 finished with value: 0.3421052631578947 and parameters: {'alpha': 0.06884231426085968}. Best is trial 2 with value: 0.052631578947368474.
Trial 15 pruned.
Trial 16 pruned.
Trial 17 pruned.
Trial 18 pruned.
Trial 19 finished with value: 0.26315789473684215 and parameters: {'alpha': 0.034356229852158554}. Best is trial 2 with value: 0.052631578947368474.
As you can see, several trials were pruned (stopped) before they finished all of the iterations.
The format of the message is "Trial <Trial Number> pruned.".
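If you prefer to inspect this programmatically rather than by reading logs, a small sketch using the study from the example above:
# Count pruned trials directly from the study record.
from optuna.trial import TrialState

pruned_trials = [t for t in study.trials if t.state == TrialState.PRUNED]
print(f"{len(pruned_trials)} of {len(study.trials)} trials were pruned.")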
Which Sampler and Pruner Should be Used?¶
From the benchmark results, which are available at optuna/optuna - wiki “Benchmarks with Kurobako”, at least for non-deep-learning tasks, we would say that
For optuna.samplers.RandomSampler, optuna.pruners.MedianPruner is the best.
For optuna.samplers.TPESampler, optuna.pruners.HyperbandPruner is the best.
However, note that the benchmark does not cover deep learning tasks. For deep learning tasks, consult the table below from Ozaki et al., “Hyperparameter Optimization Methods: Overview and Characteristics,” IEICE Trans., Vol. J103-D, No. 9, pp. 615-631, 2020:
Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms
---|---|---
Limited | No | TPE. GP-EI if search space is low-dimensional and continuous.
Limited | Yes | TPE. GP-EI if search space is low-dimensional and continuous.
Sufficient | No | CMA-ES, Random Search
Sufficient | Yes | Random Search or Genetic Algorithm
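Putting the earlier recommendation into code, a minimal sketch pairing TPE sampling with Hyperband pruning (default constructor arguments; tune them for your task):
# Combine the recommended pair for non-deep-learning tasks:
# TPESampler for sampling and HyperbandPruner for pruning.
study = optuna.create_study(
    sampler=optuna.samplers.TPESampler(),
    pruner=optuna.pruners.HyperbandPruner(),
)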
Integration Modules for Pruning¶
To implement a pruning mechanism in a much simpler form, Optuna provides integration modules for the following libraries.
For the complete list of Optuna’s integration modules, see optuna.integration.
For example, XGBoostPruningCallback introduces pruning without directly changing the logic of the training iteration.
(See also the example for the entire script.)
pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])
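Here "validation-error" is the observation key the callback watches: the evaluation-set name passed via evals ("validation") joined with the metric name ("error"). At each boosting round the callback reports that metric to the trial and stops training when the trial should be pruned.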
Easy Parallelization¶
It’s straightforward to parallelize optuna.study.Study.optimize().
If you want to manually execute Optuna optimization:
start an RDB server (this example uses MySQL)
create a study with the --storage argument
share the study among multiple nodes and processes
Of course, you can use Kubernetes as in the kubernetes examples.
To just see how parallel optimization works in Optuna, check the video below.
Create a Study¶
You can create a study using the optuna create-study command.
Alternatively, in a Python script you can use optuna.create_study().
$ mysql -u root -e "CREATE DATABASE IF NOT EXISTS example"
$ optuna create-study --study-name "distributed-example" --storage "mysql://root@localhost/example"
[I 2020-07-21 13:43:39,642] A new study created with name: distributed-example
Then, write an optimization script. Let’s assume that foo.py contains the following code.
import optuna


def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna.load_study(
        study_name="distributed-example", storage="mysql://root@localhost/example"
    )
    study.optimize(objective, n_trials=100)
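Finally, share the study among multiple nodes and processes. A sketch: run the same script from two or more shells; each process loads the same study from the MySQL storage, and Optuna coordinates the trials between them.
# Run in two or more terminals; trials are coordinated through the storage.
$ python foo.py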
Quick Visualization for Hyperparameter Optimization Analysis¶
Optuna provides various visualization features in optuna.visualization to analyze optimization results visually.
This tutorial walks you through this module by visualizing the optimization history of a multi-layer perceptron for MNIST implemented in PyTorch.
import random

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

import optuna
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice

SEED = 42
BATCH_SIZE = 256
DEVICE = torch.device("cpu")
if torch.cuda.is_available():
    DEVICE = torch.device("cuda")
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
DIR = ".."

# Reduce the number of samples for faster build.
N_TRAIN_SAMPLES = BATCH_SIZE * 30
N_VALID_SAMPLES = BATCH_SIZE * 10

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
Out:
<torch._C.Generator object at 0x7fae21d31870>
Before defining the objective function, prepare some utility functions for training.
def train_model(model, optimizer, train_loader):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if batch_idx * BATCH_SIZE >= N_TRAIN_SAMPLES:
            break
        data, target = data.view(data.size(0), -1).to(DEVICE), target.to(DEVICE)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()


def eval_model(model, valid_loader):
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(valid_loader):
            if batch_idx * BATCH_SIZE >= N_VALID_SAMPLES:
                break
            data, target = data.view(data.size(0), -1).to(DEVICE), target.to(DEVICE)
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    accuracy = correct / min(len(valid_loader.dataset), N_VALID_SAMPLES)
    return accuracy
Define the objective function.
def objective(trial):
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
DIR, train=True, download=True, transform=torchvision.transforms.ToTensor()
),
batch_size=BATCH_SIZE,
shuffle=True,
)
valid_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
DIR, train=False, download=True, transform=torchvision.transforms.ToTensor()
),
batch_size=BATCH_SIZE,
shuffle=True,
)
layers = []
in_features = 28 * 28
for i in range(3):
# Optimize the number of units of each layer and the initial learning rate.
out_features = trial.suggest_int("n_units_l{}".format(i), 4, 128)
layers.append(nn.Linear(in_features, out_features))
layers.append(nn.ReLU())
in_features = out_features
layers.append(nn.Linear(in_features, 10))
layers.append(nn.LogSoftmax(dim=1))
model = nn.Sequential(*layers).to(DEVICE)
# Sample the initial learning rate from [1e-5, 1e-1] in log space.
optimizer = torch.optim.Adam(
model.parameters(), trial.suggest_float("lr_init", 1e-5, 1e-1, log=True)
)
for step in range(10):
model.train()
train_model(model, optimizer, train_loader)
accuracy = eval_model(model, valid_loader)
# Report intermediate objective value.
trial.report(accuracy, step)
# Handle pruning based on the intermediate value.
if trial.should_prune():
raise optuna.TrialPruned()
return accuracy
Run hyperparameter optimization with optuna.pruners.MedianPruner.
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.TPESampler(seed=SEED),
pruner=optuna.pruners.MedianPruner(),
)
study.optimize(objective, n_trials=100, timeout=600)
Out:
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../MNIST/raw/train-images-idx3-ubyte.gz
Extracting ../MNIST/raw/train-images-idx3-ubyte.gz to ../MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ../MNIST/raw/train-labels-idx1-ubyte.gz to ../MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ../MNIST/raw/t10k-images-idx3-ubyte.gz to ../MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ../MNIST/raw/t10k-labels-idx1-ubyte.gz to ../MNIST/raw
Done!
Plot functions¶
Visualize the optimization history. See plot_optimization_history() for the details.
plot_optimization_history(study)
Visualize the learning curves of the trials. See plot_intermediate_values() for the details.
plot_intermediate_values(study)
Visualize high-dimensional parameter relationships. See plot_parallel_coordinate() for the details.
plot_parallel_coordinate(study)
Select parameters to visualize.
plot_parallel_coordinate(study, params=["lr_init", "n_units_l0"])
Visualize hyperparameter relationships. See plot_contour() for the details.
plot_contour(study)
Select parameters to visualize.
plot_contour(study, params=["n_units_l0", "n_units_l1"])
Visualize individual hyperparameters as slice plot. See plot_slice() for the details.
plot_slice(study)
Select parameters to visualize.
plot_slice(study, params=["n_units_l0", "n_units_l1"])
Visualize parameter importances. See plot_param_importances() for the details.
plot_param_importances(study)
Visualize the empirical distribution function. See plot_edf() for the details.
plot_edf(study)
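Each of the plot functions above returns a plotly Figure object. Outside Jupyter, a minimal sketch for displaying or saving a figure (the file name is an arbitrary choice):
fig = plot_optimization_history(study)
fig.show()  # Open the interactive figure in a browser.
fig.write_html("optimization_history.html")  # Or save it as a standalone HTML file.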
Recipes¶
Showcases recipes that might help you use Optuna more comfortably.
Saving/Resuming Study with RDB Backend¶
An RDB backend enables persistent experiments (i.e., to save and resume a study) as well as access to the history of studies. In addition, we can run multi-node optimization tasks with this feature, which is described in Easy Parallelization.
In this section, let’s try simple examples running on a local environment with SQLite DB.
Note
You can also utilize other RDB backends, e.g., PostgreSQL or MySQL, by setting the storage argument to the DB’s URL. Please refer to SQLAlchemy’s document for how to set up the URL.
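For illustration, hypothetical storage URLs for such backends might look like the following sketch; the user, password, host, and database names are placeholders, and the required DB driver must be installed separately.
# PostgreSQL (e.g., via the psycopg2 driver).
study = optuna.create_study(storage="postgresql://user:password@localhost/example")
# MySQL (e.g., via the PyMySQL driver).
study = optuna.create_study(storage="mysql+pymysql://user:password@localhost/example")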
New Study¶
We can create a persistent study by calling the create_study() function as follows. An SQLite file example-study.db is automatically initialized with a new study record.
import logging
import sys
import optuna
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study_name = "example-study" # Unique identifier of the study.
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.create_study(study_name=study_name, storage=storage_name)
Out:
A new study created in RDB with name: example-study
To run a study, call the optimize() method, passing an objective function.
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2
study.optimize(objective, n_trials=3)
Out:
Trial 0 finished with value: 1.2064608709485383 and parameters: {'x': 0.9016098730648849}. Best is trial 0 with value: 1.2064608709485383.
Trial 1 finished with value: 76.80548227608128 and parameters: {'x': -6.763873702654624}. Best is trial 0 with value: 1.2064608709485383.
Trial 2 finished with value: 61.153727127696364 and parameters: {'x': -5.82008485425167}. Best is trial 0 with value: 1.2064608709485383.
Resume Study¶
To resume a study, instantiate a Study object, passing the study name example-study and the DB URL sqlite:///example-study.db.
study = optuna.create_study(study_name=study_name, storage=storage_name, load_if_exists=True)
study.optimize(objective, n_trials=3)
Out:
Using an existing study with name 'example-study' instead of creating a new one.
Trial 3 finished with value: 19.404838504974748 and parameters: {'x': -2.4050923378488616}. Best is trial 0 with value: 1.2064608709485383.
Trial 4 finished with value: 3.4154346902013577 and parameters: {'x': 3.8480894702912405}. Best is trial 0 with value: 1.2064608709485383.
Trial 5 finished with value: 6.431332276211713 and parameters: {'x': 4.536007152239858}. Best is trial 0 with value: 1.2064608709485383.
Experimental History¶
We can access the histories of studies and trials via the Study class. For example, we can get all trials of example-study as:
study = optuna.create_study(study_name=study_name, storage=storage_name, load_if_exists=True)
df = study.trials_dataframe(attrs=("number", "value", "params", "state"))
Out:
Using an existing study with name 'example-study' instead of creating a new one.
The trials_dataframe() method returns a pandas DataFrame like:
print(df)
Out:
number value params_x state
0 0 1.206461 0.901610 COMPLETE
1 1 76.805482 -6.763874 COMPLETE
2 2 61.153727 -5.820085 COMPLETE
3 3 19.404839 -2.405092 COMPLETE
4 4 3.415435 3.848089 COMPLETE
5 5 6.431332 4.536007 COMPLETE
A Study object also provides properties such as trials, best_value, and best_params (see also Lightweight, versatile, and platform agnostic architecture).
print("Best params: ", study.best_params)
print("Best value: ", study.best_value)
print("Best Trial: ", study.best_trial)
print("Trials: ", study.trials)
Out:
Best params: {'x': 0.9016098730648849}
Best value: 1.2064608709485383
Best Trial: FrozenTrial(number=0, value=1.2064608709485383, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 21, 798218), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 21, 874075), params={'x': 0.9016098730648849}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=1, state=TrialState.COMPLETE)
Trials: [FrozenTrial(number=0, value=1.2064608709485383, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 21, 798218), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 21, 874075), params={'x': 0.9016098730648849}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=1, state=TrialState.COMPLETE), FrozenTrial(number=1, value=76.80548227608128, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 21, 931130), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 21, 964018), params={'x': -6.763873702654624}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=2, state=TrialState.COMPLETE), FrozenTrial(number=2, value=61.153727127696364, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 22, 71571), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 22, 103566), params={'x': -5.82008485425167}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=3, state=TrialState.COMPLETE), FrozenTrial(number=3, value=19.404838504974748, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 22, 200925), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 22, 295092), params={'x': -2.4050923378488616}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=4, state=TrialState.COMPLETE), FrozenTrial(number=4, value=3.4154346902013577, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 22, 352253), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 22, 383256), params={'x': 3.8480894702912405}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=5, state=TrialState.COMPLETE), FrozenTrial(number=5, value=6.431332276211713, datetime_start=datetime.datetime(2020, 12, 4, 4, 15, 22, 436235), datetime_complete=datetime.datetime(2020, 12, 4, 4, 15, 22, 471140), params={'x': 4.536007152239858}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=6, state=TrialState.COMPLETE)]
User Attributes¶
This feature lets you annotate experiments with user-defined attributes.
Adding User Attributes to Studies¶
A Study object provides a set_user_attr() method to register a pair of key and value as a user-defined attribute. A key is supposed to be a str, and a value can be any object serializable with json.dumps.
import sklearn.datasets
import sklearn.model_selection
import sklearn.svm
import optuna
study = optuna.create_study(storage="sqlite:///example.db")
study.set_user_attr("contributors", ["Akiba", "Sano"])
study.set_user_attr("dataset", "MNIST")
We can access annotated attributes with the user_attrs property.
study.user_attrs # {'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
Out:
{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
A StudySummary object, which can be retrieved by get_all_study_summaries(), also contains user-defined attributes.
study_summaries = optuna.get_all_study_summaries("sqlite:///example.db")
study_summaries[0].user_attrs # {"contributors": ["Akiba", "Sano"], "dataset": "MNIST"}
Out:
{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
See also
The optuna study set-user-attr command, which sets an attribute via the command line interface.
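For example, a sketch of setting the same attribute from the shell; the study name here is a placeholder for a study that already exists in example.db.
$ optuna study set-user-attr --study-name "foo_study" --storage "sqlite:///example.db" --key "dataset" --value "MNIST"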
Adding User Attributes to Trials¶
As with Study, a Trial object provides a set_user_attr() method. Attributes are set inside an objective function.
def objective(trial):
iris = sklearn.datasets.load_iris()
x, y = iris.data, iris.target
svc_c = trial.suggest_loguniform("svc_c", 1e-10, 1e10)
clf = sklearn.svm.SVC(C=svc_c)
accuracy = sklearn.model_selection.cross_val_score(clf, x, y).mean()
trial.set_user_attr("accuracy", accuracy)
return 1.0 - accuracy # return error for minimization
study.optimize(objective, n_trials=1)
We can access annotated attributes as:
study.trials[0].user_attrs
Out:
{'accuracy': 0.9400000000000001}
Note that, in this example, the attribute is not annotated to a Study but to a single Trial.
Command-Line Interface¶
Command | Description
---|---
create-study | Create a new study.
delete-study | Delete a specified study.
dashboard | Launch web dashboard (beta).
storage upgrade | Upgrade the schema of a storage.
studies | Show a list of studies.
study optimize | Start optimization of a study.
study set-user-attr | Set a user attribute to a study.
Optuna provides a command-line interface, as shown in the above table.
Let us assume you are not in an IPython shell and are writing Python script files instead. It is totally fine to write scripts like the following:
import optuna
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2
if __name__ == "__main__":
study = optuna.create_study()
study.optimize(objective, n_trials=100)
print("Best value: {} (params: {})\n".format(study.best_value, study.best_params))
Out:
Best value: 1.999873521068474e-05 (params: {'x': 1.9955280054549804})
However, we can reduce boilerplate code by using our optuna command. Let us assume that foo.py contains only the following code.
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2
Even so, we can invoke the optimization as follows. (Don’t worry about --storage sqlite:///example.db for now; it is described in Saving/Resuming Study with RDB Backend.)
$ cat foo.py
def objective(trial):
x = trial.suggest_uniform('x', -10, 10)
return (x - 2) ** 2
$ STUDY_NAME=`optuna create-study --storage sqlite:///example.db`
$ optuna study optimize foo.py objective --n-trials=100 --storage sqlite:///example.db --study-name $STUDY_NAME
[I 2018-05-09 10:40:25,196] Finished a trial resulted in value: 54.353767789264026. Current best value is 54.353767789264026 with parameters: {'x': -5.372500782588228}.
[I 2018-05-09 10:40:25,197] Finished a trial resulted in value: 15.784266965526376. Current best value is 15.784266965526376 with parameters: {'x': 5.972941852774387}.
...
[I 2018-05-09 10:40:26,204] Finished a trial resulted in value: 14.704254135013741. Current best value is 2.280758099793617e-06 with parameters: {'x': 1.9984897821018828}.
Please note that foo.py only contains the definition of the objective function. By giving the script file name and the method name of the objective function to the optuna study optimize command, we can invoke the optimization.
User-Defined Sampler¶
Thanks to user-defined samplers, you can:
experiment with your own sampling algorithms,
implement task-specific algorithms to refine the optimization performance, or
wrap other optimization libraries to integrate them into Optuna pipelines (e.g., SkoptSampler).
This section describes the internal behavior of sampler classes and shows an example of implementing a user-defined sampler.
Overview of Sampler¶
A sampler has the responsibility to determine the parameter values to be evaluated in a trial.
When a suggest API (e.g., suggest_uniform()) is called inside an objective function, the corresponding distribution object (e.g., UniformDistribution) is created internally. A sampler samples a parameter value from the distribution. The sampled value is returned to the caller of the suggest API and evaluated in the objective function.
To create a new sampler, you need to define a class that inherits BaseSampler. The base class has three abstract methods: infer_relative_search_space(), sample_relative(), and sample_independent().
As the method names imply, Optuna supports two types of sampling: one is relative sampling that can consider the correlation of the parameters in a trial, and the other is independent sampling that samples each parameter independently.
At the beginning of a trial, infer_relative_search_space() is called to provide the relative search space for the trial. Then, sample_relative() is invoked to sample relative parameters from the search space. During the execution of the objective function, sample_independent() is used to sample parameters that don’t belong to the relative search space.
Note
Please refer to the documentation of BaseSampler for further details.
An Example: Implementing SimulatedAnnealingSampler¶
For example, the following code defines a sampler based on Simulated Annealing (SA):
import numpy as np
import optuna
class SimulatedAnnealingSampler(optuna.samplers.BaseSampler):
def __init__(self, temperature=100):
self._rng = np.random.RandomState()
self._temperature = temperature # Current temperature.
self._current_trial = None # Current state.
def sample_relative(self, study, trial, search_space):
if search_space == {}:
return {}
# Simulated Annealing algorithm.
# 1. Calculate transition probability.
prev_trial = study.trials[-2]
if self._current_trial is None or prev_trial.value <= self._current_trial.value:
probability = 1.0
else:
probability = np.exp(
(self._current_trial.value - prev_trial.value) / self._temperature
)
self._temperature *= 0.9 # Decrease temperature.
        # 2. Move the current state to the previous trial if its result is accepted.
if self._rng.uniform(0, 1) < probability:
self._current_trial = prev_trial
# 3. Sample parameters from the neighborhood of the current point.
# The sampled parameters will be used during the next execution of
# the objective function passed to the study.
params = {}
for param_name, param_distribution in search_space.items():
if not isinstance(param_distribution, optuna.distributions.UniformDistribution):
raise NotImplementedError("Only suggest_uniform() is supported")
current_value = self._current_trial.params[param_name]
width = (param_distribution.high - param_distribution.low) * 0.1
neighbor_low = max(current_value - width, param_distribution.low)
neighbor_high = min(current_value + width, param_distribution.high)
params[param_name] = self._rng.uniform(neighbor_low, neighbor_high)
return params
    # The rest is boilerplate unrelated to the SA algorithm.
def infer_relative_search_space(self, study, trial):
return optuna.samplers.intersection_search_space(study)
def sample_independent(self, study, trial, param_name, param_distribution):
independent_sampler = optuna.samplers.RandomSampler()
return independent_sampler.sample_independent(study, trial, param_name, param_distribution)
Note
In favor of code simplicity, the above implementation doesn’t support some features (e.g., maximization). If you’re interested in how to support those features, please see examples/samplers/simulated_annealing.py.
You can use SimulatedAnnealingSampler in the same way as built-in samplers as follows:
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
y = trial.suggest_uniform("y", -5, 5)
return x ** 2 + y
sampler = SimulatedAnnealingSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
best_trial = study.best_trial
print("Best value: ", best_trial.value)
print("Parameters that achieve the best value: ", best_trial.params)
Out:
Best value: -1.6654025004244242
Parameters that achieve the best value: {'x': 0.04510260025187485, 'y': -1.6674367449739045}
In this optimization, the values of the x and y parameters are sampled by the SimulatedAnnealingSampler.sample_relative method.
Note
Strictly speaking, in the first trial, the SimulatedAnnealingSampler.sample_independent method is used to sample parameter values, because intersection_search_space() used in SimulatedAnnealingSampler.infer_relative_search_space cannot infer the search space if there are no complete trials.
API Reference¶
optuna¶
The optuna module is primarily used as an alias for basic Optuna functionality coded in other modules. Currently, two modules are aliased: (1) from optuna.study, functions regarding the Study lifecycle, and (2) from optuna.exceptions, the TrialPruned exception raised when a trial is pruned.
create_study | Create a new Study.
load_study | Load the existing Study.
delete_study | Delete a Study object.
get_all_study_summaries | Get all history of studies stored in a specified storage.
TrialPruned | Exception for pruned trials.
optuna.cli¶
The cli module implements Optuna’s command-line functionality using the cliff framework.
optuna
[--version]
[-v | -q]
[--log-file LOG_FILE]
[--debug]
[--storage STORAGE]

--version
    show program’s version number and exit

-v, --verbose
    Increase verbosity of output. Can be repeated.

-q, --quiet
    Suppress output except warnings and errors.

--log-file <LOG_FILE>
    Specify a file to log output. Disabled by default.

--debug
    Show tracebacks on errors.

--storage <STORAGE>
    DB URL. (e.g. sqlite:///example.db)
create-study¶
Create a new study.
optuna create-study
[--study-name STUDY_NAME]
[--direction {minimize,maximize}]
[--skip-if-exists]

--study-name <STUDY_NAME>
    A human-readable name of a study to distinguish it from others.

--direction <DIRECTION>
    Set direction of optimization to a new study. Set ‘minimize’ for minimization and ‘maximize’ for maximization.

--skip-if-exists
    If specified, the creation of the study is skipped without any error when the study name is duplicated.
This command is provided by the optuna plugin.
dashboard¶
Launch web dashboard (beta).
optuna dashboard
[--study STUDY]
[--study-name STUDY_NAME]
[--out OUT]
[--allow-websocket-origin BOKEH_ALLOW_WEBSOCKET_ORIGINS]

--study <STUDY>
    This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>
    The name of the study to show on the dashboard.

--out <OUT>, -o <OUT>
    Output HTML file path. If it is not given, an HTTP server starts and the dashboard is served.

--allow-websocket-origin <BOKEH_ALLOW_WEBSOCKET_ORIGINS>
    Allow websocket access from the specified host(s). Internally, it is used as the value of bokeh’s --allow-websocket-origin option. Please refer to https://bokeh.pydata.org/en/latest/docs/reference/command/subcommands/serve.html for more details.
This command is provided by the optuna plugin.
delete-study¶
Delete a specified study.
optuna delete-study [--study-name STUDY_NAME]

--study-name <STUDY_NAME>
    The name of the study to delete.
This command is provided by the optuna plugin.
storage upgrade¶
Upgrade the schema of a storage.
optuna storage upgrade
This command is provided by the optuna plugin.
studies¶
Show a list of studies.
optuna studies
[-f {csv,json,table,value,yaml}]
[-c COLUMN]
[--quote {all,minimal,none,nonnumeric}]
[--noindent]
[--max-width <integer>]
[--fit-width]
[--print-empty]
[--sort-column SORT_COLUMN]

-f <FORMATTER>, --format <FORMATTER>
    The output format; defaults to table.

-c COLUMN, --column COLUMN
    Specify the column(s) to include; can be repeated to show multiple columns.

--quote <QUOTE_MODE>
    When to include quotes; defaults to nonnumeric.

--noindent
    Whether to disable indenting the JSON.

--max-width <integer>
    Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.

--fit-width
    Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.

--print-empty
    Print an empty table if there is no data to show.

--sort-column SORT_COLUMN
    Specify the column(s) to sort the data by (columns specified first have priority; non-existing columns are ignored); can be repeated.
This command is provided by the optuna plugin.
study optimize¶
Start optimization of a study. Deprecated since version 2.0.0.
optuna study optimize
[--n-trials N_TRIALS]
[--timeout TIMEOUT]
[--n-jobs N_JOBS]
[--study STUDY]
[--study-name STUDY_NAME]
file
method

--n-trials <N_TRIALS>
    The number of trials. If this argument is not given, as many trials run as possible.

--timeout <TIMEOUT>
    Stop the study after the given number of seconds. If this argument is not given, as many trials run as possible.

--n-jobs <N_JOBS>
    The number of parallel jobs. If this argument is set to -1, the number is set to the CPU count.

--study <STUDY>
    This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>
    The name of the study to start optimization on.

file
    Python script file where the objective function resides.

method
    The method name of the objective function.
This command is provided by the optuna plugin.
study set-user-attr¶
Set a user attribute to a study.
optuna study set-user-attr
[--study STUDY]
[--study-name STUDY_NAME]
--key KEY
--value VALUE

--study <STUDY>
    This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>
    The name of the study to set the user attribute to.

--key <KEY>, -k <KEY>
    Key of the user attribute.

--value <VALUE>, -v <VALUE>
    Value to be set.
This command is provided by the optuna plugin.
optuna.distributions¶
The distributions module defines various classes representing probability distributions, mainly used to suggest initial hyperparameter values for an optimization trial. Distribution classes inherit from a library-internal BaseDistribution, and are initialized with specific parameters, such as the low and high endpoints for a UniformDistribution.
Optuna users should not use distribution classes directly, but instead use utility functions provided by Trial such as suggest_int().
UniformDistribution | A uniform distribution in the linear domain.
LogUniformDistribution | A uniform distribution in the log domain.
DiscreteUniformDistribution | A discretized uniform distribution in the linear domain.
IntUniformDistribution | A uniform distribution on integers.
IntLogUniformDistribution | A uniform distribution on integers in the log domain.
CategoricalDistribution | A categorical distribution.
distribution_to_json | Serialize a distribution to JSON format.
json_to_distribution | Deserialize a distribution in JSON format.
check_distribution_compatibility | A function to check compatibility of two distributions.
optuna.exceptions¶
The exceptions module defines Optuna-specific exceptions deriving from a base OptunaError class. Of special importance for library users is the TrialPruned exception, to be raised if optuna.trial.Trial.should_prune() returns True for a trial that should be pruned.
OptunaError | Base class for Optuna specific errors.
TrialPruned | Exception for pruned trials.
CLIError | Exception for CLI.
StorageInternalError | Exception for storage operation.
DuplicatedStudyError | Exception for a duplicated study name.
optuna.importance¶
The importance module provides functionality for evaluating hyperparameter importances based on completed trials in a given study. The utility function get_param_importances() takes a Study and an optional evaluator as two of its inputs. The evaluator must derive from BaseImportanceEvaluator, and is initialized as a FanovaImportanceEvaluator by default when not passed in. Users implementing custom evaluators should refer to either FanovaImportanceEvaluator or MeanDecreaseImpurityImportanceEvaluator as a guide, paying close attention to the format of the return value from the evaluator’s evaluate() function.
get_param_importances | Evaluate parameter importances based on completed trials in the given study.
FanovaImportanceEvaluator | fANOVA importance evaluator.
MeanDecreaseImpurityImportanceEvaluator | Mean Decrease Impurity (MDI) parameter importance evaluator.
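A minimal usage sketch, assuming study is a Study object that already contains completed trials:
importances = optuna.importance.get_param_importances(study)
# A dictionary mapping parameter names to importance scores.
print(importances)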
optuna.integration¶
The integration module contains classes used to integrate Optuna with external machine learning frameworks.
For most of the ML frameworks supported by Optuna, the corresponding Optuna integration class serves only to implement a callback object and functions, compliant with the framework’s specific callback API, to be called with each intermediate step in the model training. The functionality implemented in these callbacks across the different ML frameworks includes:
Reporting intermediate model scores back to the Optuna trial using optuna.trial.report(),
According to the results of optuna.trial.Trial.should_prune(), pruning the current model by raising optuna.TrialPruned(), and
Reporting intermediate Optuna data such as the current trial number back to the framework, as done in MLflowCallback.
For scikit-learn, an integrated OptunaSearchCV estimator is available that combines scikit-learn BaseEstimator functionality with access to a class-level Study object.
AllenNLP¶
AllenNLPExecutor | AllenNLP extension to use optuna with Jsonnet config file.
dump_best_config | Save JSON config file after updating with parameters from the best trial in the study.
AllenNLPPruningCallback | AllenNLP callback to prune unpromising trials.
Catalyst¶
CatalystPruningCallback | Catalyst callback to prune unpromising trials.
Chainer¶
ChainerPruningExtension | Chainer extension to prune unpromising trials.
ChainerMNStudy | A wrapper of Study to incorporate Optuna with ChainerMN.
fast.ai¶
FastAIPruningCallback | FastAI callback to prune unpromising trials for fastai.
Keras¶
KerasPruningCallback | Keras callback to prune unpromising trials.
LightGBM¶
LightGBMPruningCallback | Callback for LightGBM to prune unpromising trials.
train | Wrapper of LightGBM Training API to tune hyperparameters.
LightGBMTuner | Hyperparameter tuner for LightGBM.
LightGBMTunerCV | Hyperparameter tuner for LightGBM with cross-validation.
MLflow¶
MLflowCallback | Callback to track Optuna trials with MLflow.
MXNet¶
MXNetPruningCallback | MXNet callback to prune unpromising trials.
pycma¶
PyCmaSampler | A Sampler using cma library as the backend.
CmaEsSampler | Wrapper class of PyCmaSampler for backward compatibility.
PyTorch¶
PyTorchIgnitePruningHandler | PyTorch Ignite handler to prune unpromising trials.
PyTorchLightningPruningCallback | PyTorch Lightning callback to prune unpromising trials.
scikit-learn¶
OptunaSearchCV | Hyperparameter search with cross-validation.
scikit-optimize¶
SkoptSampler | Sampler using Scikit-Optimize as the backend.
skorch¶
SkorchPruningCallback | Skorch callback to prune unpromising trials.
TensorFlow¶
TensorBoardCallback | Callback to track Optuna trials with TensorBoard.
TensorFlowPruningHook | TensorFlow SessionRunHook to prune unpromising trials.
TFKerasPruningCallback | tf.keras callback to prune unpromising trials.
XGBoost¶
XGBoostPruningCallback | Callback for XGBoost to prune unpromising trials.
optuna.logging¶
The logging module implements logging using the Python logging package. Library users may be especially interested in setting verbosity levels using set_verbosity() to one of optuna.logging.CRITICAL (aka optuna.logging.FATAL), optuna.logging.ERROR, optuna.logging.WARNING (aka optuna.logging.WARN), optuna.logging.INFO, or optuna.logging.DEBUG.
get_verbosity | Return the current level for Optuna’s root logger.
set_verbosity | Set the level for Optuna’s root logger.
disable_default_handler | Disable the default handler of Optuna’s root logger.
enable_default_handler | Enable the default handler of Optuna’s root logger.
disable_propagation | Disable propagation of the library log outputs.
enable_propagation | Enable propagation of the library log outputs.
optuna.multi_objective¶
optuna.multi_objective.samplers¶
BaseMultiObjectiveSampler | Base class for multi-objective samplers.
NSGAIIMultiObjectiveSampler | Multi-objective sampler using the NSGA-II algorithm.
RandomMultiObjectiveSampler | Multi-objective sampler using random sampling.
MOTPEMultiObjectiveSampler | Multi-objective sampler using the MOTPE algorithm.
optuna.multi_objective.study¶
MultiObjectiveStudy | A study corresponds to a multi-objective optimization task, i.e., a set of trials.
create_study | Create a new MultiObjectiveStudy.
load_study | Load the existing MultiObjectiveStudy.
optuna.multi_objective.trial¶
MultiObjectiveTrial | A trial is a process of evaluating an objective function.
FrozenMultiObjectiveTrial | Status and results of a MultiObjectiveTrial.
optuna.multi_objective.visualization¶
Note
The optuna.multi_objective.visualization module uses plotly to create figures, but JupyterLab cannot render them by default. Please follow this installation guide to show figures in JupyterLab.
plot_pareto_front | Plot the pareto front of a study.
optuna.pruners¶
The pruners module defines a BasePruner class characterized by an abstract prune() method, which, for a given trial and its associated study, returns a boolean value representing whether the trial should be pruned. This determination is made based on stored intermediate values of the objective function, as previously reported for the trial using optuna.trial.Trial.report(). The remaining classes in this module represent child classes, inheriting from BasePruner, which implement different pruning strategies.
BasePruner | Base class for pruners.
MedianPruner | Pruner using the median stopping rule.
NopPruner | Pruner which never prunes trials.
PercentilePruner | Pruner to keep the specified percentile of the trials.
SuccessiveHalvingPruner | Pruner using Asynchronous Successive Halving Algorithm.
HyperbandPruner | Pruner using Hyperband.
ThresholdPruner | Pruner to detect outlying metrics of the trials.
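As an illustration of the interface described above, the following sketch (a hypothetical pruner, not an Optuna built-in) prunes a trial whenever its most recently reported intermediate value falls below a fixed threshold:
import optuna
class ThresholdBelowPruner(optuna.pruners.BasePruner):
    # Prune a trial if its latest intermediate value drops below `min_value`.
    def __init__(self, min_value):
        self._min_value = min_value
    def prune(self, study, trial):
        step = trial.last_step  # Latest step passed to trial.report(); None if nothing reported.
        if step is None:
            return False
        return trial.intermediate_values[step] < self._min_value
study = optuna.create_study(direction="maximize", pruner=ThresholdBelowPruner(min_value=0.1))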
optuna.samplers¶
The samplers module defines a base class for parameter sampling as described extensively in BaseSampler. The remaining classes in this module represent child classes, deriving from BaseSampler, which implement different sampling strategies.
BaseSampler | Base class for samplers.
GridSampler | Sampler using grid search.
RandomSampler | Sampler using random sampling.
TPESampler | Sampler using TPE (Tree-structured Parzen Estimator) algorithm.
CmaEsSampler | A Sampler using CMA-ES algorithm.
PartialFixedSampler | Sampler with partially fixed parameters.
IntersectionSearchSpace | A class to calculate the intersection search space of a Study.
intersection_search_space | Return the intersection search space of the Study.
optuna.storages¶
The storages module defines a BaseStorage class which abstracts a backend database and provides library-internal interfaces to read/write histories of studies and trials. Library users who wish to use storage solutions other than the default in-memory storage should use one of the child classes of BaseStorage documented below.
RDBStorage | Storage class for RDB backend.
RedisStorage | Storage class for Redis backend.
optuna.structs¶
This module is deprecated, with former functionality moved to optuna.trial and optuna.study.
class optuna.structs.TrialState[source]¶
State of a Trial.
PRUNED¶
    The Trial has been pruned with TrialPruned.
Deprecated since version 1.4.0: This class is deprecated. Please use optuna.trial.TrialState instead.

class optuna.structs.StudyDirection[source]¶
Direction of a Study.
NOT_SET¶
    Direction has not been set.
Deprecated since version 1.4.0: This class is deprecated. Please use optuna.study.StudyDirection instead.
class optuna.structs.FrozenTrial(number: int, state: optuna.trial._state.TrialState, value: Optional[float], datetime_start: Optional[datetime.datetime], datetime_complete: Optional[datetime.datetime], params: Dict[str, Any], distributions: Dict[str, optuna.distributions.BaseDistribution], user_attrs: Dict[str, Any], system_attrs: Dict[str, Any], intermediate_values: Dict[int, float], trial_id: int)[source]¶
Warning
Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0.
This class was moved to trial. Please use optuna.trial.FrozenTrial instead.

property distributions¶
    Dictionary that contains the distributions of params.

property duration¶
    Return the elapsed time taken to complete the trial.
    Returns: The duration.

property last_step¶
    Return the maximum step of intermediate_values in the trial.
    Returns: The maximum step of the intermediate values.

report(value: float, step: int) → None[source]¶
    Interface of the report function. Since FrozenTrial is not pruned, this report function does nothing.
    See also: Please refer to should_prune().
    Parameters:
    value – A value returned from the objective function.
    step – Step of the trial (e.g., epoch of neural network training). Note that pruners assume that step starts at zero. For example, MedianPruner simply checks if step is less than n_warmup_steps as the warmup mechanism.

class optuna.structs.StudySummary(study_name: str, direction: optuna._study_direction.StudyDirection, best_trial: Optional[optuna.trial._frozen.FrozenTrial], user_attrs: Dict[str, Any], system_attrs: Dict[str, Any], n_trials: int, datetime_start: Optional[datetime.datetime], study_id: int)[source]¶
Warning
Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0.
This class was moved to study. Please use optuna.study.StudySummary instead.
optuna.study¶
The study module implements the Study object and related functions. A public constructor is available for the Study class, but direct use of this constructor is not recommended. Instead, library users should create and load a Study using create_study() and load_study() respectively.
Study | A study corresponds to an optimization task, i.e., a set of trials.
create_study | Create a new Study.
load_study | Load the existing Study.
delete_study | Delete a Study object.
get_all_study_summaries | Get all history of studies stored in a specified storage.
StudyDirection | Direction of a Study.
StudySummary | Basic attributes and aggregated results of a Study.
optuna.trial¶
The trial module contains Trial related classes and functions.
A Trial instance represents a process of evaluating an objective function. This instance is passed to an objective function and provides interfaces to get parameter suggestions, manage the trial’s state, and set/get user-defined attributes of the trial, so that Optuna users can define a custom objective function through the interfaces. Basically, Optuna users only use it in their custom objective functions.
Trial | A trial is a process of evaluating an objective function.
FixedTrial | A trial class which suggests a fixed value for each parameter.
FrozenTrial | Status and results of a Trial.
TrialState | State of a Trial.
create_trial | Create a new FrozenTrial.
optuna.visualization¶
The visualization module provides utility functions for plotting the optimization process using plotly and matplotlib. Plotting functions generally take a Study object and optional parameters passed as a list to a params argument.
Note
In the optuna.visualization module, the following functions use plotly to create figures, but JupyterLab cannot render them by default. Please follow this installation guide to show figures in JupyterLab.
plot_contour | Plot the parameter relationship as contour plot in a study.
plot_edf | Plot the objective value EDF (empirical distribution function) of a study.
plot_intermediate_values | Plot intermediate values of all trials in a study.
plot_optimization_history | Plot optimization history of all trials in a study.
plot_parallel_coordinate | Plot the high-dimensional parameter relationships in a study.
plot_param_importances | Plot hyperparameter importances.
plot_slice | Plot the parameter relationship as slice plot in a study.
is_available | Returns whether visualization with plotly is available or not.
Note
The following optuna.visualization.matplotlib module uses Matplotlib as a backend.
optuna.visualization.matplotlib¶
Note
The following functions use Matplotlib as a backend.
plot_contour | Plot the parameter relationship as contour plot in a study with Matplotlib.
plot_edf | Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.
plot_intermediate_values | Plot intermediate values of all trials in a study with Matplotlib.
plot_optimization_history | Plot optimization history of all trials in a study with Matplotlib.
plot_parallel_coordinate | Plot the high-dimensional parameter relationships in a study with Matplotlib.
plot_param_importances | Plot hyperparameter importances with Matplotlib.
plot_slice | Plot the parameter relationship as slice plot in a study with Matplotlib.
is_available | Returns whether visualization with Matplotlib is available or not.
FAQ¶
Can I use Optuna with X? (where X is your favorite ML library)¶
Optuna is compatible with most ML libraries, and it’s easy to use Optuna with them. Please refer to examples.
How to define objective functions that have their own arguments?¶
There are two ways to realize it.
First, callable classes can be used for that purpose as follows:
import optuna
class Objective(object):
def __init__(self, min_x, max_x):
# Hold this implementation specific arguments as the fields of the class.
self.min_x = min_x
self.max_x = max_x
def __call__(self, trial):
# Calculate an objective value by using the extra arguments.
x = trial.suggest_uniform("x", self.min_x, self.max_x)
return (x - 2) ** 2
# Execute an optimization by using an `Objective` instance.
study = optuna.create_study()
study.optimize(Objective(-100, 100), n_trials=100)
Second, you can use lambda or functools.partial to create functions (closures) that hold extra arguments. Below is an example that uses lambda:
import optuna
# Objective function that takes three arguments.
def objective(trial, min_x, max_x):
x = trial.suggest_uniform("x", min_x, max_x)
return (x - 2) ** 2
# Extra arguments.
min_x = -100
max_x = 100
# Execute an optimization by using the above objective function wrapped by `lambda`.
study = optuna.create_study()
study.optimize(lambda trial: objective(trial, min_x, max_x), n_trials=100)
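An equivalent sketch using functools.partial instead of lambda:
import functools

study = optuna.create_study()
study.optimize(functools.partial(objective, min_x=min_x, max_x=max_x), n_trials=100)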
Please also refer to the sklearn_additional_args.py example, which reuses the dataset instead of loading it in each trial execution.
Can I use Optuna without remote RDB servers?¶
Yes, it’s possible.
In the simplest form, Optuna works with in-memory storage:
study = optuna.create_study()
study.optimize(objective)
If you want to save and resume studies, it’s handy to use SQLite as the local storage:
study = optuna.create_study(study_name="foo_study", storage="sqlite:///example.db")
study.optimize(objective) # The state of `study` will be persisted to the local SQLite file.
Please see Saving/Resuming Study with RDB Backend for more details.
How can I save and resume studies?¶
There are two ways of persisting studies, depending on whether you are using in-memory storage (default) or remote databases (RDB). In-memory studies can be saved and loaded like usual Python objects using pickle or joblib. For example, using joblib:
import joblib

study = optuna.create_study()
joblib.dump(study, "study.pkl")
And to resume the study:
study = joblib.load("study.pkl")
print("Best trial until now:")
print(" Value: ", study.best_trial.value)
print(" Params: ")
for key, value in study.best_trial.params.items():
print(f" {key}: {value}")
If you are using RDBs, see Saving/Resuming Study with RDB Backend for more details.
How to suppress log messages of Optuna?¶
By default, Optuna shows log messages at the optuna.logging.INFO level. You can change logging levels by using optuna.logging.set_verbosity().
For instance, you can stop showing each trial result as follows:
optuna.logging.set_verbosity(optuna.logging.WARNING)
study = optuna.create_study()
study.optimize(objective)
# Logs like '[I 2020-07-21 13:41:45,627] Trial 0 finished with value:...' are disabled.
Please refer to optuna.logging for further details.
How to save machine learning models trained in objective functions?¶
Optuna saves hyperparameter values with their corresponding objective values to storage, but it discards intermediate objects such as machine learning models and neural network weights. To save models or weights, please use the features of the machine learning library you used.
We recommend saving optuna.trial.Trial.number with a model in order to identify its corresponding trial.
For example, you can save SVM models trained in the objective function as follows:
def objective(trial):
svc_c = trial.suggest_loguniform("svc_c", 1e-10, 1e10)
clf = sklearn.svm.SVC(C=svc_c)
clf.fit(X_train, y_train)
# Save a trained model to a file.
with open("{}.pickle".format(trial.number), "wb") as fout:
pickle.dump(clf, fout)
return 1.0 - accuracy_score(y_valid, clf.predict(X_valid))
study = optuna.create_study()
study.optimize(objective, n_trials=100)
# Load the best model.
with open("{}.pickle".format(study.best_trial.number), "rb") as fin:
best_clf = pickle.load(fin)
print(accuracy_score(y_valid, best_clf.predict(X_valid)))
How can I obtain reproducible optimization results?¶
To make the parameters suggested by Optuna reproducible, you can specify a fixed random seed via the seed argument of RandomSampler or TPESampler as follows:
sampler = TPESampler(seed=10) # Make the sampler behave in a deterministic way.
study = optuna.create_study(sampler=sampler)
study.optimize(objective)
However, there are two caveats.
First, when optimizing a study in distributed or parallel mode, there is inherent non-determinism. Thus it is very difficult to reproduce the same results in such conditions. We recommend executing the optimization of a study sequentially if you would like to reproduce the result.
Second, if your objective function behaves in a non-deterministic way (i.e., it does not return the same value even if the same parameters were suggested), you cannot reproduce an optimization. To deal with this problem, please set an option (e.g., random seed) to make the behavior deterministic if your optimization target (e.g., an ML library) provides it.
How are exceptions from trials handled?¶
Trials that raise exceptions without catching them will be treated as failures, i.e. given the FAIL status.
By default, all exceptions except TrialPruned raised in objective functions are propagated to the caller of optimize(). In other words, studies are aborted when such exceptions are raised.
It might be desirable to continue a study with the remaining trials, however. To do so, you can specify in optimize() which exception types to catch using the catch argument, as shown below. Exceptions of these types are caught inside the study and will not propagate further.
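For example, a sketch that keeps the study running even if some trials raise ValueError:
# Trials raising ValueError are marked FAIL, but the study continues with the remaining trials.
study.optimize(objective, n_trials=100, catch=(ValueError,))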
You can find the failed trials in log messages.
[W 2018-12-07 16:38:36,889] Setting status of trial#0 as TrialState.FAIL because of \
the following error: ValueError('A sample error in objective.')
You can also find the failed trials by checking the trial states as follows:
study.trials_dataframe()
number | state | value | … | params | system_attrs
---|---|---|---|---|---
0 | TrialState.FAIL | | … | 0 | Setting status of trial#0 as TrialState.FAIL because of the following error: ValueError(‘A test error in objective.’)
1 | TrialState.COMPLETE | 1269 | … | 1 |
See also
The catch argument in optimize().
How are NaNs returned by trials handled?¶
Trials that return NaN (float('nan')) are treated as failures, but they will not abort studies.
Trials which return NaN are shown as follows:
[W 2018-12-07 16:41:59,000] Setting status of trial#2 as TrialState.FAIL because the \
objective function returned nan.
What happens when I dynamically alter a search space?¶
Since parameter search spaces are specified in each call to the suggestion API, e.g. suggest_uniform() and suggest_int(), it is possible, within a single study, to alter the range by sampling parameters from different search spaces in different trials. The behavior when the search space is altered is defined by each sampler individually.
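For instance, the following sketch narrows the range of x after the first ten trials; how the earlier trials inform sampling in the new range is up to the sampler:
def objective(trial):
    if trial.number < 10:
        x = trial.suggest_uniform("x", -100, 100)  # Wide range for early trials.
    else:
        x = trial.suggest_uniform("x", -10, 10)  # Narrowed range afterwards.
    return (x - 2) ** 2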
Note
For a discussion about the TPE sampler, see https://github.com/optuna/optuna/issues/822.
How can I use two GPUs for evaluating two trials simultaneously?¶
If your optimization target supports GPU (CUDA) acceleration and you want to specify which GPU is used, the easiest way is to set the CUDA_VISIBLE_DEVICES environment variable:
# On a terminal.
#
# Specify to use the first GPU, and run an optimization.
$ export CUDA_VISIBLE_DEVICES=0
$ optuna study optimize foo.py objective --study-name foo --storage sqlite:///example.db
# On another terminal.
#
# Specify to use the second GPU, and run another optimization.
$ export CUDA_VISIBLE_DEVICES=1
$ optuna study optimize bar.py objective --study-name bar --storage sqlite:///example.db
Please refer to CUDA C Programming Guide for further details.
How can I test my objective functions?¶
When you test objective functions, you may prefer fixed parameter values to sampled ones.
In that case, you can use FixedTrial, which suggests fixed parameter values based on a given dictionary of parameters.
For instance, you can input arbitrary values of \(x\) and \(y\) to the objective function \(x + y\) as follows:
def objective(trial):
x = trial.suggest_uniform("x", -1.0, 1.0)
y = trial.suggest_int("y", -5, 5)
return x + y
objective(FixedTrial({"x": 1.0, "y": -1})) # 0.0
objective(FixedTrial({"x": -1.0, "y": -4})) # -5.0
Using FixedTrial, you can write unit tests as follows:
# A test function of pytest
def test_objective():
assert 1.0 == objective(FixedTrial({"x": 1.0, "y": 0}))
assert -1.0 == objective(FixedTrial({"x": 0.0, "y": -1}))
assert 0.0 == objective(FixedTrial({"x": -1.0, "y": 1}))
How do I avoid running out of memory (OOM) when optimizing studies?¶
If the memory footprint increases as you run more trials, try to periodically run the garbage collector.
Specify gc_after_trial to True when calling optimize() or call gc.collect() inside a callback.
import gc

import optuna

def objective(trial):
x = trial.suggest_uniform("x", -1.0, 1.0)
y = trial.suggest_int("y", -5, 5)
return x + y
study = optuna.create_study()
study.optimize(objective, n_trials=10, gc_after_trial=True)
# `gc_after_trial=True` is more or less identical to the following.
study.optimize(objective, n_trials=10, callbacks=[lambda study, trial: gc.collect()])
There is a performance trade-off for running the garbage collector, which could be non-negligible depending on how fast your objective function otherwise is. Therefore, gc_after_trial is False by default.
Note that the above examples are similar to running the garbage collector inside the objective function, except for the fact that gc.collect() is called even when errors, including TrialPruned, are raised.
Note
ChainerMNStudy does not currently provide gc_after_trial nor callbacks for optimize(). When using this class, you will have to call the garbage collector inside the objective function.