Homepage: https://sosp2023.mpi-sws.org/
- Efficient Memory Management for Large Language Model Serving with PagedAttention [Paper] [arXiv] [Code] [Homepage]
  - UC Berkeley & Stanford & UCSD
  - vLLM, PagedAttention
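A minimal sketch of the core PagedAttention idea, assuming a simplified model (class and method names here are illustrative, not the vLLM API): each sequence's KV cache is split into fixed-size blocks, and a per-sequence block table maps logical block indices to physical blocks, so KV memory need not be contiguous and blocks can be allocated and freed individually.

```python
BLOCK_SIZE = 4  # tokens per KV block (illustrative; vLLM uses larger blocks)

class PagedKVCache:
    """Toy block-table allocator in the spirit of PagedAttention."""

    def __init__(self, num_physical_blocks):
        self.free_blocks = list(range(num_physical_blocks))
        self.block_tables = {}  # seq_id -> list of physical block ids

    def append_token(self, seq_id, num_tokens_so_far):
        """Allocate a new physical block when a sequence crosses a block boundary."""
        table = self.block_tables.setdefault(seq_id, [])
        if num_tokens_so_far % BLOCK_SIZE == 0:  # current block full (or first token)
            table.append(self.free_blocks.pop())

    def physical_slot(self, seq_id, token_idx):
        """Translate a logical token position to (physical block id, offset)."""
        block = self.block_tables[seq_id][token_idx // BLOCK_SIZE]
        return block, token_idx % BLOCK_SIZE

    def free(self, seq_id):
        """Return all of a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id))
```

Because allocation is per block rather than per maximum sequence length, fragmentation stays bounded by one block per sequence, which is the memory-efficiency argument of the paper.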
- Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates [Paper] [arXiv] [Code]
  - UMich SymbioticLab & AWS & PKU
- Gemini: Fast Failure Recovery in Distributed Training with In-Memory Checkpoints [Paper]
  - Rice & AWS
- UGache: A Unified GPU Cache for Embedding-based Deep Learning [Personal Notes] [Paper]
  - SJTU
  - Multi-GPU embedding cache that exploits cross-GPU interconnects (NVLink, NVSwitch).
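A hypothetical sketch of the cross-GPU lookup idea noted above (class name, costs, and lookup policy are assumptions for illustration, not the UGache design or API): on a local miss, an embedding cached on a peer GPU can be read over NVLink, which is far cheaper than going back to host memory.

```python
# Illustrative relative access latencies (assumed, not measured values).
LOCAL_COST, NVLINK_COST, HOST_COST = 1, 3, 10

class UnifiedGPUCache:
    """Toy multi-GPU embedding cache: local hit -> peer GPU -> host memory."""

    def __init__(self, num_gpus):
        self.caches = [dict() for _ in range(num_gpus)]  # gpu -> {key: embedding}

    def insert(self, gpu, key, emb):
        self.caches[gpu][key] = emb

    def lookup(self, gpu, key, host_table):
        if key in self.caches[gpu]:                      # hit in local GPU memory
            return self.caches[gpu][key], LOCAL_COST
        for peer, cache in enumerate(self.caches):       # cross-GPU read over NVLink
            if peer != gpu and key in cache:
                return cache[key], NVLINK_COST
        return host_table[key], HOST_COST                # miss everywhere: host fetch
```

Treating all GPU caches as one unified pool is what lets the interconnect absorb misses that a per-GPU cache would send to the host.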
- Bagpipe: Accelerating Deep Recommendation Model Training [Paper]
  - UW-Madison & UChicago