Add steps and requirements for running evaluation script
ncoop57 committed Oct 9, 2021
1 parent 1711eb3 commit 9e22134
Showing 2 changed files with 25 additions and 0 deletions.
21 changes: 21 additions & 0 deletions evaluation/README.md
@@ -0,0 +1,21 @@
# How to Evaluate

## Human Eval

The following steps are required to run the HumanEval evaluation:
1. Ensure you are using Python 3.7, as required by [human-eval](https://github.com/openai/human-eval). We recommend conda:
```
conda create -n human-eval python=3.7
conda activate human-eval
```
2. Install the dependencies listed in this folder:
```
pip install -r requirements.txt
```
3. Install human-eval by following the instructions on the [human-eval repo](https://github.com/openai/human-eval#usage)


With these requirements in place you can run the `evaluate.py` script:
```
python evaluate.py --model_name_or_path=<model_name_or_path> --human_eval_path=<path/to/human-eval/data/HumanEval.jsonl.gz> --out_path=./model_results
```
So, for example, to evaluate the EleutherAI GPT Neo 125M model:
```
python evaluate.py --model_name_or_path=EleutherAI/gpt-neo-125M --human_eval_path=<path/to/human-eval/data/HumanEval.jsonl.gz> --out_path=./model_results
```
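The problems file passed via `--human_eval_path` is gzipped JSON Lines: one problem per line, with fields such as `task_id` and `prompt` (per the human-eval repo). As a minimal sketch of what loading that file looks like (the function name here is illustrative, not part of `evaluate.py`):

```python
import gzip
import json

def read_problems(path):
    """Load HumanEval-style problems from a gzipped JSON Lines file.

    Returns a dict mapping each task_id to its full problem record.
    """
    problems = {}
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            problems[record["task_id"]] = record
    return problems
```

Each record's `prompt` is the function signature and docstring that the model is asked to complete.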
4 changes: 4 additions & 0 deletions evaluation/requirements.txt
@@ -0,0 +1,4 @@
torch
fastcore
transformers
tqdm
