How to Evaluate

Human Eval

The following steps are required to run the HumanEval evaluation:

  1. Ensure you are using Python 3.7, as required by human-eval. We recommend conda:
     conda create -n human-eval python=3.7
  2. Install the dependencies in this folder:
     pip install -r requirements.txt
  3. Install human-eval by following the instructions in the human-eval repo.

With these requirements in place, you can run the evaluate.py script:

python evaluate.py --model_name_or_path=<model_name_or_path> --human_eval_path=<path/to/human-eval/data/HumanEval.jsonl.gz> --out_path=./model_results

For example, to evaluate EleutherAI's GPT-Neo 125M:

python evaluate.py --model_name_or_path=EleutherAI/gpt-neo-125M --human_eval_path=../dependency_repos/human-eval/data/HumanEval.jsonl.gz --out_path=model_results/
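HumanEval results are reported as pass@k. For reference, here is a minimal sketch of the unbiased pass@k estimator from the Codex paper, which human-eval computes internally; the function name here is illustrative, not part of this repo's API.

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k.

    Estimates the probability that at least one of k samples passes,
    given n total generated samples per problem, of which c passed
    the unit tests. Equivalent to 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k failing samples: every draw of k must contain a pass.
        return 1.0
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail
```

For instance, with 4 samples per problem of which 2 pass, pass@2 is 1 - C(2,2)/C(4,2) = 5/6.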