[Question] Log eval metrics performed during training to files #2602

Open
skandermoalla opened this issue Jan 22, 2025 · 0 comments
Labels
📚 documentation Improvements or additions to documentation ❓ question Seeking clarification or more information

Comments

@skandermoalla
Contributor

In the example scripts, a final trainer.evaluate call performs a last evaluation, and the resulting eval metrics are saved to a file via trainer.save_metrics (https://github.com/huggingface/trl/blob/d4222a1e08def2be56572eb2973ef3bf50143a4f/trl/scripts/dpo.py#L128C1-L131C46).

    if training_args.eval_strategy != "no":
        metrics = trainer.evaluate()
        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

Is there a way to save these metrics to files during the regular evaluations performed by the trainer as well? I couldn't find anything in the TrainingArguments.
Also, are the metrics logged to W&B during these in-training evaluations accumulated over the whole eval dataset, or do they refer to a single eval batch?
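For context, the workaround I'm considering is a custom callback (not a TrainingArguments option, as far as I can tell). A minimal sketch, assuming the transformers TrainerCallback.on_evaluate hook; the eval_metrics.jsonl file name is my own choice, not a TRL/transformers convention:

```python
import json
import os

from transformers import TrainerCallback


class SaveEvalMetricsCallback(TrainerCallback):
    """Append the metrics of every in-training evaluation to a JSONL file."""

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if metrics is None:
            return
        # One JSON record per evaluation, keyed by the current training step.
        record = {"global_step": state.global_step, **metrics}
        path = os.path.join(args.output_dir, "eval_metrics.jsonl")
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")
```

It would then be registered with trainer.add_callback(SaveEvalMetricsCallback()) before trainer.train(). But a built-in option would be nicer if one exists.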

Thanks

@github-actions github-actions bot added ❓ question Seeking clarification or more information 📚 documentation Improvements or additions to documentation labels Jan 22, 2025