Commit: Merge branch 'main' of github.com:Luodian/Otter into main

Showing 33 changed files with 1,705 additions and 3,535 deletions.
```diff
@@ -1,21 +1,14 @@
-MIT License
+MIT License for Non-Commercial Use
 
-Copyright (c) 2023 Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt.
+Copyright (c) 2023 Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, Ziwei Liu
+S-Lab, Nanyang Technological University
+Microsoft Research, Redmond
 
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
+Permission is hereby granted, free of charge, to any person obtaining a copy of the Otter model and MIMIC-IT Dataset (the "Software"), to use, copy, modify, merge, and distribute copies of the Software, subject to the following conditions:
+1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+2. The Software may not be used for commercial purposes. For the purposes of this license, commercial use includes, but is not limited to: integration of the Software into a product or service that generates revenue, incorporation of the Software into a commercial offering, or using the Software in the course of performing services for which payment is received.
+3. Redistributions of the Software must retain the above copyright notice, this list of conditions, and the following disclaimer.
+4. Neither the names of the copyright holders nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written permission.
 
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
@@ -0,0 +1,26 @@
## 🤗 Hugging Face Model

You can use the 🦩 Flamingo model / 🦦 Otter model as a 🤗 Hugging Face model with only a few lines! One click, and the model configs/weights are downloaded automatically.
```python
from flamingo import FlamingoModel
flamingo_model = FlamingoModel.from_pretrained("luodian/openflamingo-9b-hf", device_map="auto")

from otter import OtterModel
otter_model = OtterModel.from_pretrained("luodian/otter-9b-hf", device_map="auto")
```
The original [OpenFlamingo](https://github.com/mlfoundations/open_flamingo) was developed with [DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel) (DDP) on an A100 cluster. Loading OpenFlamingo-9B onto a single GPU requires **at least 33 GB of GPU memory**, which is only available on A100 GPUs.
To let more researchers without access to A100 machines try training OpenFlamingo, we wrap the OpenFlamingo model into a 🤗 Hugging Face model ([Jinghao](https://king159.github.io/) has submitted a [PR](https://github.com/huggingface/transformers/pull/23063) to huggingface/transformers!). Via `device_map="auto"`, the large model is sharded across multiple GPUs during loading and training. This lets researchers without A100-80G GPUs achieve similar throughput: we tested training on 4x RTX-3090-24G GPUs and model deployment on 2x RTX-3090-24G GPUs. Specific details are below (numbers may vary with CPU and disk performance, as we conducted training on different machines).
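Conceptually, `device_map="auto"` places consecutive layers onto devices until each device's memory budget is exhausted, then spills over to the next one (the real logic lives in 🤗 Accelerate's `infer_auto_device_map`). A minimal, purely illustrative sketch of that greedy placement, with hypothetical layer sizes and device budgets:

```python
def greedy_device_map(layer_sizes_gb, device_budgets_gb):
    """Assign consecutive layers to devices, filling each device's memory
    budget before spilling over to the next one (illustrative sketch only)."""
    device_map = {}
    devices = list(device_budgets_gb)
    used = {d: 0.0 for d in devices}
    idx = 0
    for layer, size in layer_sizes_gb.items():
        # Advance to the next device once the current one cannot fit this layer.
        while idx < len(devices) - 1 and used[devices[idx]] + size > device_budgets_gb[devices[idx]]:
            idx += 1
        device_map[layer] = devices[idx]
        used[devices[idx]] += size
    return device_map

# 33 GB of weights (11 layers x 3 GB) split across two 24 GB cards:
layers = {f"layer_{i}": 3.0 for i in range(11)}
print(greedy_device_map(layers, {"cuda:0": 24.0, "cuda:1": 24.0}))
```

Here the first eight layers land on `cuda:0` and the remaining three on `cuda:1`, which is why a model too big for one 24 GB card still loads on two.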
<div style="text-align:center">
<img src="https://i.postimg.cc/LsNs55zG/table.png" width="100%" height="100%">
</div>
<!-- ---
<div style="text-align:center">
<img src="https://i.postimg.cc/tTcCdcv5/efficiency.png" width="100%" height="100%">
</div> -->
Our Otter model is developed in the same way and is deployed on the 🤗 Hugging Face model hub. It can be hosted on two RTX-3090-24G GPUs and achieves speed similar to one A100-80G machine.
@@ -0,0 +1,44 @@
## 🪩 Serving Demo

We will show you how to host a demo on your own computer using Gradio.
## Preparation

### Download the checkpoints

The 🦦 Otter checkpoint and the 🦩 Open Flamingo checkpoint are auto-downloaded the first time the commands below are run with a Hugging Face checkpoint path.
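Checkpoints pulled from the Hub (e.g. `luodian/otter-9b-hf`) are stored in the standard 🤗 Hugging Face cache directory. If your home partition is small, you can redirect that cache before launching any worker; `HF_HOME` is the standard Hugging Face cache environment variable, and the path below is only a placeholder:

```Shell
# Optional: keep the multi-gigabyte downloaded weights on a large disk
export HF_HOME=/path/to/large/disk/hf_cache
```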
## Start Demo

### Launch a controller

```Shell
python -m pipeline.serve.controller --host 0.0.0.0 --port 10000
```
### Launch a model worker

```Shell
# Init our 🦦 Otter model on GPU
CUDA_VISIBLE_DEVICES=0,1 python -m pipeline.serve.model_worker --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model_name otter --checkpoint_path luodian/otter-9b-hf --num_gpus 2 --limit_model_concurrency 200
# Init our 🦦 Otter video model on GPU
CUDA_VISIBLE_DEVICES=0,1 python -m pipeline.serve.model_worker --controller http://localhost:10000 --port 40002 --worker http://localhost:40002 --model_name otter_video --checkpoint_path checkpoint/otter9B_DC_fullset_16frames/ --num_gpus 2 --limit_model_concurrency 200 --load_bit 16
# Init the original Open Flamingo model on GPU
CUDA_VISIBLE_DEVICES=2,3 python -m pipeline.serve.model_worker --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model_name open_flamingo --checkpoint_path luodian/openflamingo-9b-hf --num_gpus 2 --limit_model_concurrency 200

# Init the original Open Flamingo model on CPU
python -m pipeline.serve.model_worker --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model_name open_flamingo_original --checkpoint_path luodian/openflamingo-9b-hf --num_gpus 0
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...".
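If you script the launch, you can also wait programmatically until a worker's port accepts connections instead of watching the logs. A small helper like this (hypothetical, not part of the repo; the port matches the `--port` you passed to the worker) does the job:

```python
import socket
import time

def wait_for_port(host, port, timeout_s=300, interval_s=2.0):
    """Poll until a TCP port accepts connections; return True once it is
    up, or False if the timeout expires first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # Succeeds only when something is listening on (host, port).
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(interval_s)
    return False

# e.g. wait_for_port("localhost", 40000) before opening the web demo
```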
### Launch a Gradio web server

```Shell
# Image demo
python -m pipeline.serve.gradio_web_server --controller http://localhost:10000 --port 7861
# Video demo
python -m pipeline.serve.gradio_web_server_video --controller http://localhost:10000 --port 7862
```
Now, you can open your browser and chat with the model!