Commit

Report end_learning_rate_factor that is not tuned during training.
PiperOrigin-RevId: 300204816
jaehlee authored and copybara-github committed Mar 10, 2020
1 parent 2f343d0 commit 014cff1
Showing 1 changed file with 14 additions and 0 deletions.
14 changes: 14 additions & 0 deletions batch_science/README.md
@@ -195,6 +195,18 @@ The fields in `trial_id/metadata.json` have the following meanings:
* `steps`: The number of training steps taken.
* `trial_id`: The trial id within the study.

**Note:** The following studies use a fixed `end_learning_rate_factor`, which
is not reported in the `parameters` field of `trial_id/metadata.json`:

```
batch_science/cifar_10/resnet_8/{nesterov_momentum, sgd}
"end_learning_rate_factor": 1e-2
batch_science/imagenet/vgg_11/nesterov_momentum
"end_learning_rate_factor": 1e-3
```
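As a minimal sketch of how this could be handled when loading trial metadata: the mapping, the `load_metadata` helper, and the exact directory layout between the study directory and `trial_id` are illustrative assumptions, not part of the repository's tooling, and `parameters` is assumed to be a JSON object keyed by hyperparameter name.

```python
import json
import os

# Fixed values from the note above, keyed by study directory (illustrative).
FIXED_END_LR_FACTOR = {
    "cifar_10/resnet_8/nesterov_momentum": 1e-2,
    "cifar_10/resnet_8/sgd": 1e-2,
    "imagenet/vgg_11/nesterov_momentum": 1e-3,
}


def load_metadata(base_dir, study, trial_id):
  """Loads trial metadata and fills in the untuned end_learning_rate_factor."""
  path = os.path.join(base_dir, study, trial_id, "metadata.json")
  with open(path) as f:
    metadata = json.load(f)
  # The factor was held fixed, so it does not appear among the tuned
  # `parameters`; re-attach it for the affected studies.
  if study in FIXED_END_LR_FACTOR:
    metadata["parameters"].setdefault(
        "end_learning_rate_factor", FIXED_END_LR_FACTOR[study])
  return metadata
```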


The `measurements.csv` file contains data for each evaluation performed during
training each trial. It looks like this:

@@ -213,6 +225,8 @@ training each trial. It looks like this:
Note that different models have different metrics available, and that the time
between successive evaluations is not necessarily constant.
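A quick way to see which metrics a given trial provides is to load its `measurements.csv` and inspect the columns; the path below is a hypothetical placeholder, and pandas is assumed to be available.

```python
import pandas as pd

trial_dir = "/path/to/trial_id"  # hypothetical trial directory
df = pd.read_csv(f"{trial_dir}/measurements.csv")

print(df.columns.tolist())  # Available metric columns differ between models.
print(df.head())            # Evaluation intervals are not necessarily constant.
```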



## Summary of all available data

| | Dataset (Base Directory) | Model | Optimizer | Batch Size | Complete Trials | Incomplete Trials | Infeasible Trials |