DinoHub/yolov7_pipeline

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors

 
 


YOLOv7 package

Adapted/Forked from WongKinYiu's repo

Last "merge" date: 7th Sept 2022

Changes from original repo

  • YOLOv7 can be used as a package, with a minimal set of requirements for inference-only use

Using YOLOv7 as a Package for Inference

To use YOLOv7 as a package for inference, follow these steps:

Setting Up with Docker Compose (Recommended)

Using Docker Compose is recommended for setting up the environment. Follow these steps:

  1. Clone the Repository: Clone the YOLOv7 repository; it doesn't need to be in the same folder as your main project. Check out the inference branch:

    git clone https://github.com/DinoHub/yolov7_pipeline.git
    cd yolov7_pipeline
    git checkout inference
  2. Edit Configurations:

    Edit the configurations in build/docker-compose.yaml and build/.env accordingly. A good practice is to keep a /data folder for data (images/videos/etc.) and a /models folder for model-related items, e.g., weights or cfgs.

  3. Build and Run Docker Container:

    Build the image and start the services with Docker Compose:

    cd build
    docker-compose up
  4. Execute Scripts:

    Once the container is up, open another terminal and enter the container to execute your scripts. Replace yolov7_inference with your service name if you've changed it in build/docker-compose.yaml.

    cd build
    docker-compose exec yolov7_inference bash

    You can now run inference scripts inside the container. Refer to the Running YOLOv7 for Inference section for details.
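The folder layout suggested in step 2 might be wired into build/docker-compose.yaml roughly like this (the service name, image name, and mount paths below are assumptions for illustration, not the file's actual contents):

```yaml
# Hypothetical sketch; adjust names and paths to the real build/docker-compose.yaml.
services:
  yolov7_inference:
    image: yolov7_inference
    build: .
    volumes:
      - /data:/data      # images, videos, etc.
      - /models:/models  # weights, cfgs
```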

Manual Setup

If you prefer manual setup, follow these steps:

  1. Clone the Repository: Clone the YOLOv7 repository; it doesn't need to be in the same folder as your main project. Check out the inference branch:

    git clone https://github.com/DinoHub/yolov7_pipeline.git
    cd yolov7_pipeline
    git checkout inference
  2. Install the requirements for YOLOv7:

    pip install -r build/requirements.txt
  3. Install YOLOv7 Package:

    Navigate to your project folder and install YOLOv7 as a package.

    python3 -m pip install --no-cache-dir /path/to/yolov7
    

    Alternatively, install it as an editable package so local changes take effect without reinstalling:

    python3 -m pip install -e /path/to/yolov7
    

    Note: /path/to/yolov7 should point to this repo's src/ folder

Running YOLOv7 for Inference

  1. Download Weights:

    Use the provided script to download desired weights:

    cd yolov7/weights
    ./get_weights.sh yolov7 yolov7-e6
  2. Import YOLOv7 Wrapper Class:

    In your code, import the YOLOv7 wrapper class for inference; see scripts/inference.py for example usage:

    from yolov7.yolov7 import YOLOv7
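A minimal usage sketch follows; the constructor signature and detect method here are assumptions rather than the wrapper's confirmed API (scripts/inference.py shows the real usage):

```python
# Usage sketch only: the constructor argument and method below are
# hypothetical, not the confirmed wrapper API -- see scripts/inference.py
# in this repo for the actual usage.
try:
    from yolov7.yolov7 import YOLOv7  # requires the package to be installed
except ImportError:
    YOLOv7 = None  # package not installed in this environment

if YOLOv7 is not None:
    model = YOLOv7(weights="yolov7/weights/yolov7.pt")  # hypothetical signature
    detections = model.detect("test.jpg")               # hypothetical method
    print(detections)
```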

Official YOLOv7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors


Performance

MS COCO

Model      | Test Size | AP^test | AP50^test | AP75^test | batch 1 fps | batch 32 average time
-----------|-----------|---------|-----------|-----------|-------------|----------------------
YOLOv7     | 640       | 51.4%   | 69.7%     | 55.9%     | 161 fps     | 2.8 ms
YOLOv7-X   | 640       | 53.1%   | 71.2%     | 57.8%     | 114 fps     | 4.3 ms
YOLOv7-W6  | 1280      | 54.9%   | 72.6%     | 60.1%     | 84 fps      | 7.6 ms
YOLOv7-E6  | 1280      | 56.0%   | 73.5%     | 61.2%     | 56 fps      | 12.3 ms
YOLOv7-D6  | 1280      | 56.6%   | 74.0%     | 61.8%     | 44 fps      | 15.0 ms
YOLOv7-E6E | 1280      | 56.8%   | 74.4%     | 62.1%     | 36 fps      | 18.7 ms
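The "batch 32 average time" column reads as the average per-image time at batch size 32; if so, its reciprocal gives batch-32 throughput, which is distinct from the "batch 1 fps" column. A quick sanity check in Python:

```python
# Batch-32 average per-image times (ms) from the table above,
# assuming the column is per-image rather than per-batch.
batch32_ms = {
    "YOLOv7": 2.8,
    "YOLOv7-X": 4.3,
    "YOLOv7-W6": 7.6,
    "YOLOv7-E6": 12.3,
    "YOLOv7-D6": 15.0,
    "YOLOv7-E6E": 18.7,
}
for name, ms in batch32_ms.items():
    print(f"{name}: {1000 / ms:.0f} img/s at batch 32")
```

Under that assumption, YOLOv7's 2.8 ms works out to roughly 357 img/s, comfortably above its 161 fps at batch 1, which is the expected effect of batching.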

Citation

@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}

Acknowledgements

