YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.
YOLOv6-nano achieves 35.0 mAP on the COCO val2017 dataset at 1242 FPS, and YOLOv6-s achieves 43.1 mAP at 520 FPS, both measured on an NVIDIA T4 with TensorRT FP16 at batch size 32.
YOLOv6 is composed of the following methods:
- Hardware-friendly Design for Backbone and Neck
- Efficient Decoupled Head with SIoU Loss
Coming soon:
- YOLOv6 m/l/x models
- Deployment for MNN/TNN/NCNN/CoreML...
- Quantization tools
```shell
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
pip install -r requirements.txt
```
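Optionally, confirm that PyTorch (a core requirement of YOLOv6) installed correctly and can see your GPU:

```shell
# prints the torch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```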
First, download a pretrained model from the YOLOv6 release page.
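For example, fetching the small model with wget (the 0.1.0 release tag in this URL is an assumption; check the releases page for the latest assets):

```shell
# download the YOLOv6-s checkpoint from the GitHub release
wget https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6s.pt
```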
Second, run inference with tools/infer.py:

```shell
# --source accepts a single image or a directory of images
python tools/infer.py --weights yolov6s.pt --source img.jpg
```

Use --weights yolov6n.pt for the nano model.
Single GPU:

```shell
python tools/train.py --batch 32 --conf configs/yolov6s.py --data data/coco.yaml --device 0
```

Multi-GPU (DDP mode recommended):

```shell
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
```

To train YOLOv6-n, pass configs/yolov6n.py to --conf instead.
- conf: config file that specifies the network, optimizer, and hyperparameters
- data: prepare the COCO dataset and YOLO-format COCO labels, then specify the dataset paths in data.yaml (a minimal sketch of this file follows the directory tree below)
- make sure your dataset is structured as follows:
```
├── coco
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   └── instances_val2017.json
│   ├── images
│   │   ├── train2017
│   │   └── val2017
│   ├── labels
│   │   ├── train2017
│   │   └── val2017
│   ├── LICENSE
│   └── README.txt
```
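The data yaml maps this directory layout to the trainer. A minimal sketch, assuming the YOLO-style fields used by the repo's own data/coco.yaml (the field names and the data/my_coco.yaml filename here are assumptions; compare against the data/coco.yaml shipped with the repo for the exact schema):

```shell
# write an example dataset config; paths are relative to the YOLOv6 repo root
cat > data/my_coco.yaml <<'EOF'
train: ../coco/images/train2017   # training images
val: ../coco/images/val2017       # validation images
nc: 80                            # number of classes
names: ['person', 'bicycle', 'car']  # class names (truncated here; COCO has 80)
EOF
```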
Reproduce mAP on COCO val2017 dataset:

```shell
python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --task val
```

Use --weights yolov6n.pt to evaluate the nano model.
If your training process is interrupted, you can resume training by:

```shell
# single GPU training
python tools/train.py --resume

# multi GPU training
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --resume
```

You can also pass a checkpoint path to the --resume parameter:

```shell
# replace /path/to/your/checkpoint/path with the checkpoint from which you want to resume training
--resume /path/to/your/checkpoint/path
```
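For example, assuming the default output layout where checkpoints land under runs/train/exp/weights (an assumption; check your actual training output directory):

```shell
# resume single-GPU training from an explicit checkpoint (path is illustrative)
python tools/train.py --resume runs/train/exp/weights/last_ckpt.pt
```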
| Model | Size | mAP val 0.5:0.95 | Speed V100 fp16 b32 (ms) | Speed V100 fp32 b32 (ms) | Speed T4 trt fp16 b1 (fps) | Speed T4 trt fp16 b32 (fps) | Params (M) | Flops (G) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-n | 416 | 30.8 | 0.3 | 0.4 | 1100 | 2716 | 4.3 | 4.7 |
| YOLOv6-n | 640 | 35.0 | 0.5 | 0.7 | 788 | 1242 | 4.3 | 11.1 |
| YOLOv6-tiny | 640 | 41.3 | 0.9 | 1.5 | 425 | 602 | 15.0 | 36.7 |
| YOLOv6-s | 640 | 43.1 | 1.0 | 1.7 | 373 | 520 | 17.2 | 44.2 |
- Comparisons of the mAP and speed of different object detectors are tested on the COCO val2017 dataset.
- Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
- Params and Flops of YOLOv6 are estimated on the deployed models.
- Speed results of other methods were tested in our environment using the official codebase and models when not reported in the corresponding official releases.
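To use the TensorRT/MNN/NCNN runtimes listed below, the PyTorch checkpoint is typically exported to ONNX first. A minimal sketch, assuming the repo's deploy/ONNX/export_onnx.py script and these flag names (verify against the deploy/ONNX directory in the repo):

```shell
# export yolov6s.pt to ONNX at 640x640 with batch size 1
python deploy/ONNX/export_onnx.py --weights yolov6s.pt --img 640 --batch 1
```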
- YOLOv6 NCNN Android app demo: ncnn-android-yolov6 from FeiGeChuanShu
- YOLOv6 ONNXRuntime/MNN/TNN C++: YOLOv6-ORT, YOLOv6-MNN and YOLOv6-TNN from DefTruth
- YOLOv6 TensorRT Python: yolov6-tensorrt-python from Linaom1214
- YOLOv6 TensorRT Windows C++: yolort from Wei Zeng
- YOLOv6 Quantization and Auto Compression example: YOLOv6-ACT from PaddleSlim
- YOLOv6 web demo on Huggingface Spaces with Gradio.
- Tutorial: How to train YOLOv6 on a custom dataset
- Demo of YOLOv6 inference on Google Colab