Add Hyperopt example for BERT classifier (#186)
* Add Hyperopt example for BERT classifier

* Address review comments

* Address review comments

* Remove hyperparams from config_downstream

* Add URL for Batch object. Removed unused args

* Add docs for Batch and FieldBatch

* Address review comments
AvinashBukkittu authored Sep 6, 2019
1 parent aa31d85 commit 84db3fd
Showing 9 changed files with 448 additions and 52 deletions.
5 changes: 5 additions & 0 deletions docs/code/data.rst
@@ -130,6 +130,11 @@ Data Loaders
Data Iterators
===============

:hidden:`Batch`
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: texar.torch.data.Batch
:members:

:hidden:`DataIterator`
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: texar.torch.data.DataIterator
24 changes: 24 additions & 0 deletions examples/bert/README.md
@@ -15,6 +15,8 @@ To summarize, this example showcases:
* Building and fine-tuning on downstream tasks
* Use of Texar `RecordData` module for data loading and processing
* Use of Texar `Executor` module for simplified training loops and TensorBoard visualization
* Use of the [Hyperopt](https://github.com/hyperopt/hyperopt) library to tune hyperparameters with
  the `Executor` module

Future work:

@@ -178,3 +180,25 @@ tensorboard --logdir runs/
```

![Visualizing loss/accuracy on TensorBoard](tbx.png)

## Hyperparameter tuning with Executor

To run this example, first install `hyperopt` with the following command:

```commandline
pip install hyperopt
```

`bert_with_hypertuning_main.py` shows how to tune hyperparameters with `Executor` using `hyperopt`.
To run this example, use the following command:

```commandline
python bert_with_hypertuning_main.py
```

In this simple example, the hyperparameters to be tuned are provided as a `dict` in
`bert_hypertuning_config_classifier.py`, which is fed into `objective_func()`. We use the TPE
(Tree-structured Parzen Estimator) algorithm, provided by the `hyperopt` library, to tune the
hyperparameters. The example runs 3 trials to find the best hyperparameter settings. The final
model is saved in the `output_dir` provided by the user. More information about the library can be
found at [Hyperopt](https://github.com/hyperopt/hyperopt).
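
For readers unfamiliar with `hyperopt`, the sketch below illustrates the kind of TPE loop this
example builds on. It is a minimal, self-contained illustration: the search space and the stand-in
objective are assumptions made for demonstration, not the actual space or `objective_func()` in
`bert_with_hypertuning_main.py`, which trains the classifier with `Executor`.

```python
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe

# Illustrative search space (an assumption for this sketch); the actual
# example reads its search space from `bert_hypertuning_config_classifier.py`.
space = {
    "optimizer.lr": hp.loguniform("optimizer.lr", -10, -4),
}

def objective_func(params):
    # Stand-in objective: the real `objective_func()` trains the BERT
    # classifier with `Executor` and derives the loss from validation results.
    loss = (params["optimizer.lr"] - 1e-4) ** 2
    return {"loss": loss, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective_func, space=space, algo=tpe.suggest,
            max_evals=3, trials=trials)  # 3 trials, matching the example
print("Best hyperparameters:", best)
```

On each trial, `fmin` samples a candidate from `space`, calls the objective, and uses TPE to bias
later samples toward promising regions; `best` holds the best values found across the trials.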
2 changes: 1 addition & 1 deletion examples/bert/bert_classifier_main.py
@@ -56,7 +56,7 @@
config_downstream = importlib.import_module(args.config_downstream)
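# Exclude private module attributes and the "hyperparams" entry (the
# hyperopt search space used by the hypertuning example) from the
# downstream config.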
config_downstream = {
k: v for k, v in config_downstream.__dict__.items()
if not k.startswith('__')}
if not k.startswith('__') and k != "hyperparams"}

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

2 changes: 1 addition & 1 deletion examples/bert/bert_classifier_using_executor_main.py
@@ -65,7 +65,7 @@
config_downstream = importlib.import_module(args.config_downstream)
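# Same filtering as in bert_classifier_main.py: the "hyperparams"
# search-space entry is not part of the downstream model config.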
config_downstream = {
k: v for k, v in config_downstream.__dict__.items()
if not k.startswith('__')}
if not k.startswith('__') and k != "hyperparams"}

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
