📄 Paper | 🤗 Dataset | 📝 Documentation | 🙏 Citation
We present PPTAgent, an innovative system that automatically generates presentations from documents. Drawing inspiration from how humans create presentations, our system employs a two-phase process to ensure high overall quality. Additionally, we introduce PPTEval, a comprehensive framework that evaluates presentations across three dimensions: content, design, and coherence.
> [!TIP]
> 🚀 Get started quickly with our pre-built Docker image - see the Docker instructions.
Demo video: casestudy.mp4
- Dynamic Content Generation: Creates slides with seamlessly integrated text and images
- Smart Reference Learning: Leverages existing presentations without requiring manual annotation
- Comprehensive Quality Assessment: Evaluates presentations through multiple quality metrics
PPTAgent follows a two-phase approach:
- Analysis Phase: Extracts and learns from patterns in reference presentations
- Generation Phase: Develops structured outlines and produces visually cohesive slides
Our system's workflow is illustrated below:
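For readers who prefer code to diagrams, here is a minimal, self-contained sketch of the two-phase idea. All names (`SlideTemplate`, `analyze`, `generate`) are illustrative assumptions and do not correspond to PPTAgent's actual API.

```python
# Illustrative sketch of the analysis/generation split; class and function
# names are hypothetical and not part of PPTAgent's actual API.
from dataclasses import dataclass


@dataclass
class SlideTemplate:
    """A layout pattern learned from a reference presentation."""
    role: str           # e.g. "opening", "bullets", "chart"
    placeholders: list  # element slots the template exposes


def analyze(reference_slides):
    """Analysis phase: group reference slides by role and record their layouts."""
    templates = {}
    for slide in reference_slides:
        templates.setdefault(slide["role"],
                             SlideTemplate(slide["role"], slide["placeholders"]))
    return templates


def generate(outline, templates):
    """Generation phase: map each outline section onto a learned template."""
    slides = []
    for section in outline:
        template = templates.get(section["role"], templates["bullets"])
        slides.append({"template": template.role, "content": section["text"]})
    return slides


reference = [{"role": "bullets", "placeholders": ["title", "body"]}]
outline = [{"role": "bullets", "text": "Key findings of the source document"}]
print(generate(outline, analyze(reference)))
```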
PPTEval evaluates presentations across three dimensions:
- Content: Assesses the accuracy and relevance of slide content.
- Design: Assesses visual appeal and stylistic consistency.
- Coherence: Assesses the logical flow of ideas across slides.
The workflow of PPTEval is shown below:
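As a rough illustration of how scores along these dimensions might be combined, here is a minimal sketch; the 1-5 scale and the result structure are assumptions, not PPTEval's actual output format.

```python
# Minimal sketch of aggregating per-dimension judge scores; the 1-5 scale
# and the result layout are assumptions, not PPTEval's actual output format.
from statistics import mean


def aggregate_scores(judgements):
    """Average the scores each judge assigned to content, design, and coherence."""
    dimensions = ("content", "design", "coherence")
    return {dim: round(mean(j[dim] for j in judgements), 2) for dim in dimensions}


judgements = [
    {"content": 4, "design": 3, "coherence": 5},
    {"content": 5, "design": 4, "coherence": 4},
]
print(aggregate_scores(judgements))
# {'content': 4.5, 'design': 3.5, 'coherence': 4.5}
```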
So you want to contribute? Yay! 🎉
This project is actively maintained! We welcome:
- Issues: Bug reports, feature requests, and questions
- Pull Requests: Code improvements, documentation updates, and fixes
- Discussions: Share your ideas and experiences
Current priorities include:
- Improve test coverage for each module to guard against regressions
- Enhance documentation, both project-level docs and code comments
- Refactor code, for example replacing print statements with a proper logger (see the example below)
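As a concrete example of the logging refactor mentioned above, a typical before/after change looks like this:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def report_progress(slide_index: int) -> None:
    # Before: print("Generated slide", slide_index)
    # After: level-aware, module-scoped logging
    logger.info("Generated slide %d", slide_index)


report_progress(3)
```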
If you find this project helpful, please cite it using the following BibTeX entry:
@article{zheng2025pptagent,
  title={PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides},
  author={Zheng, Hao and Guan, Xinyan and Kong, Hao and Zheng, Jia and Lin, Hongyu and Lu, Yaojie and He, Ben and Han, Xianpei and Sun, Le},
  journal={arXiv preprint arXiv:2501.03936},
  year={2025}
}