--- comments: true description: Discover VisionEye's object mapping and tracking powered by Ultralytics YOLO11. Simulate human eye precision, track objects, and calculate distances effortlessly. keywords: VisionEye, YOLO11, Ultralytics, object mapping, object tracking, distance calculation, computer vision, AI, machine learning, Python, tutorial --- # VisionEye View Object Mapping using Ultralytics YOLO11 🚀 ## What is VisionEye Object Mapping? [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) VisionEye offers the capability for computers to identify and pinpoint objects, simulating the observational [precision](https://www.ultralytics.com/glossary/precision) of the human eye. This functionality enables computers to discern and focus on specific objects, much like the way the human eye observes details from a particular viewpoint. ## Samples | VisionEye View | VisionEye View With Object Tracking | VisionEye View With Distance Calculation | | :----------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------: | | ![VisionEye View Object Mapping using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-view-object-mapping-yolov8.avif) | ![VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-object-mapping-with-tracking.avif) | ![VisionEye View with Distance Calculation using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/visioneye-distance-calculation-yolov8.avif) | | VisionEye View Object Mapping using Ultralytics YOLO11 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLO11 | VisionEye View with Distance Calculation using Ultralytics YOLO11 | !!! 
example "VisionEye Object Mapping using YOLO11" === "VisionEye Object Mapping" ```python import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors model = YOLO("yolo11n.pt") names = model.model.names cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) center_point = (-10, h) while True: ret, im0 = cap.read() if not ret: print("Video frame is empty or video processing has been successfully completed.") break results = model.predict(im0) boxes = results[0].boxes.xyxy.cpu() clss = results[0].boxes.cls.cpu().tolist() annotator = Annotator(im0, line_width=2) for box, cls in zip(boxes, clss): annotator.box_label(box, label=names[int(cls)], color=colors(int(cls))) annotator.visioneye(box, center_point) out.write(im0) cv2.imshow("visioneye-pinpoint", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` === "VisionEye Object Mapping with Object Tracking" ```python import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors model = YOLO("yolo11n.pt") cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("visioneye-pinpoint.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) center_point = (-10, h) while True: ret, im0 = cap.read() if not ret: print("Video frame is empty or video processing has been successfully completed.") break annotator = Annotator(im0, line_width=2) results = model.track(im0, persist=True) boxes = results[0].boxes.xyxy.cpu() if results[0].boxes.id is not None: track_ids = results[0].boxes.id.int().cpu().tolist() for box, track_id in zip(boxes, track_ids): annotator.box_label(box, label=str(track_id), color=colors(int(track_id))) annotator.visioneye(box, center_point) out.write(im0) cv2.imshow("visioneye-pinpoint", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` === "VisionEye with Distance Calculation" ```python import math import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator model = YOLO("yolo11n.pt") cap = cv2.VideoCapture("Path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("visioneye-distance-calculation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) center_point = (0, h) pixel_per_meter = 10 txt_color, txt_background, bbox_clr = ((0, 0, 0), (255, 255, 255), (255, 0, 255)) while True: ret, im0 = cap.read() if not ret: print("Video frame is empty or video processing has been successfully completed.") break annotator = Annotator(im0, line_width=2) results = model.track(im0, persist=True) boxes = results[0].boxes.xyxy.cpu() if results[0].boxes.id is not None: track_ids = results[0].boxes.id.int().cpu().tolist() for box, track_id in zip(boxes, track_ids): annotator.box_label(box, label=str(track_id), color=bbox_clr) annotator.visioneye(box, center_point) x1, y1 = int((box[0] + box[2]) // 2), int((box[1] + box[3]) // 2) # Bounding box centroid distance = (math.sqrt((x1 - center_point[0]) ** 2 + (y1 - center_point[1]) ** 2)) / pixel_per_meter text_size, _ = cv2.getTextSize(f"Distance: {distance:.2f} m", cv2.FONT_HERSHEY_SIMPLEX, 
1.2, 3) cv2.rectangle(im0, (x1, y1 - text_size[1] - 10), (x1 + text_size[0] + 10, y1), txt_background, -1) cv2.putText(im0, f"Distance: {distance:.2f} m", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 1.2, txt_color, 3) out.write(im0) cv2.imshow("visioneye-distance-calculation", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` ### `visioneye` Arguments | Name | Type | Default | Description | | ----------- | ------- | ---------------- | ------------------------------ | | `color` | `tuple` | `(235, 219, 11)` | Line and object centroid color | | `pin_color` | `tuple` | `(255, 0, 255)` | VisionEye pinpoint color | ## Note For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below. ##
FAQ ### How do I start using VisionEye Object Mapping with Ultralytics YOLO11? To start using VisionEye Object Mapping with Ultralytics YOLO11, first, you'll need to install the Ultralytics YOLO package via pip. Then, you can use the sample code provided in the documentation to set up [object detection](https://www.ultralytics.com/glossary/object-detection) with VisionEye. Here's a simple example to get you started: ```python import cv2 from ultralytics import YOLO model = YOLO("yolo11n.pt") cap = cv2.VideoCapture("path/to/video/file.mp4") while True: ret, frame = cap.read() if not ret: break results = model.predict(frame) for result in results: # Perform custom logic with result pass cv2.imshow("visioneye", frame) if cv2.waitKey(1) & 0xFF == ord("q"): break cap.release() cv2.destroyAllWindows() ``` ### What are the key features of VisionEye's object tracking capability using Ultralytics YOLO11? VisionEye's object tracking with Ultralytics YOLO11 allows users to follow the movement of objects within a video frame. Key features include: 1. **Real-Time Object Tracking**: Keeps up with objects as they move. 2. **Object Identification**: Utilizes YOLO11's powerful detection algorithms. 3. **Distance Calculation**: Calculates distances between objects and specified points. 4. **Annotation and Visualization**: Provides visual markers for tracked objects. Here's a brief code snippet demonstrating tracking with VisionEye: ```python import cv2 from ultralytics import YOLO model = YOLO("yolo11n.pt") cap = cv2.VideoCapture("path/to/video/file.mp4") while True: ret, frame = cap.read() if not ret: break results = model.track(frame, persist=True) for result in results: # Annotate and visualize tracking pass cv2.imshow("visioneye-tracking", frame) if cv2.waitKey(1) & 0xFF == ord("q"): break cap.release() cv2.destroyAllWindows() ``` For a comprehensive guide, visit the [VisionEye Object Mapping with Object Tracking](#samples). ### How can I calculate distances with VisionEye's YOLO11 model? Distance calculation with VisionEye and Ultralytics YOLO11 involves determining the distance of detected objects from a specified point in the frame. It enhances spatial analysis capabilities, useful in applications such as autonomous driving and surveillance. Here's a simplified example: ```python import math import cv2 from ultralytics import YOLO model = YOLO("yolo11n.pt") cap = cv2.VideoCapture("path/to/video/file.mp4") center_point = (0, 480) # Example center point pixel_per_meter = 10 while True: ret, frame = cap.read() if not ret: break results = model.track(frame, persist=True) for result in results: # Calculate distance logic distances = [ (math.sqrt((box[0] - center_point[0]) ** 2 + (box[1] - center_point[1]) ** 2)) / pixel_per_meter for box in results ] cv2.imshow("visioneye-distance", frame) if cv2.waitKey(1) & 0xFF == ord("q"): break cap.release() cv2.destroyAllWindows() ``` For detailed instructions, refer to the [VisionEye with Distance Calculation](#samples). ### Why should I use Ultralytics YOLO11 for object mapping and tracking? Ultralytics YOLO11 is renowned for its speed, [accuracy](https://www.ultralytics.com/glossary/accuracy), and ease of integration, making it a top choice for object mapping and tracking. Key advantages include: 1. **State-of-the-art Performance**: Delivers high accuracy in real-time object detection. 2. **Flexibility**: Supports various tasks such as detection, tracking, and distance calculation. 3. 
**Community and Support**: Extensive documentation and active GitHub community for troubleshooting and enhancements. 4. **Ease of Use**: Intuitive API simplifies complex tasks, allowing for rapid deployment and iteration. For more information on applications and benefits, check out the [Ultralytics YOLO11 documentation](https://docs.ultralytics.com/models/yolov8/). ### How can I integrate VisionEye with other [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) tools like Comet or ClearML? Ultralytics YOLO11 can integrate seamlessly with various machine learning tools like Comet and ClearML, enhancing experiment tracking, collaboration, and reproducibility. Follow the detailed guides on [how to use YOLOv5 with Comet](https://www.ultralytics.com/blog/how-to-use-yolov5-with-comet) and [integrate YOLO11 with ClearML](https://docs.ultralytics.com/integrations/clearml/) to get started. For further exploration and integration examples, check our [Ultralytics Integrations Guide](https://docs.ultralytics.com/integrations/).
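As a rough illustration of the ClearML route mentioned above: assuming the `clearml` package has been installed and connected to a workspace with `clearml-init`, Ultralytics training runs are then picked up by its built-in ClearML callbacks without any VisionEye-specific code changes. The snippet below is a minimal sketch under those assumptions, using the small `coco8.yaml` demo dataset that ships with Ultralytics.

```python
from ultralytics import YOLO

# Assumes `pip install clearml` and `clearml-init` were run beforehand so that
# Ultralytics' ClearML callbacks can log this experiment automatically.
model = YOLO("yolo11n.pt")

# Short demo run on the coco8 sample dataset bundled with Ultralytics;
# swap in your own data YAML for a real experiment.
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```

The resulting weights can then be dropped into any of the VisionEye scripts shown earlier on this page.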
? Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training: !!! example === "Python" ```python from ultralytics import YOLO model = YOLO("yolo11n.pt") # Load a pre-trained YOLO model model.train(data="path/to/dataset.yaml", epochs=50) # Train on custom dataset ``` === "CLI" ```bash yolo task=detect mode=train model=yolo11n.pt data=path/to/dataset.yaml epochs=50 ``` For detailed dataset formatting and additional options, refer to our [Tips for Model Training](model-training-tips.md) guide. ### What performance metrics should I use to evaluate my YOLO model? Evaluating your YOLO model performance is crucial to understanding its efficacy. Key metrics include [Mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP), [Intersection over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou) (IoU), and F1 score. These metrics help assess the accuracy and [precision](https://www.ultralytics.com/glossary/precision) of object detection tasks. You can learn more about these metrics and how to improve your model in our [YOLO Performance Metrics](yolo-performance-metrics.md) guide. ### Why should I use Ultralytics HUB for my computer vision projects? Ultralytics HUB is a no-code platform that simplifies managing, training, and deploying YOLO models. It supports seamless integration, real-time tracking, and cloud training, making it ideal for both beginners and professionals. Discover more about its features and how it can streamline your workflow with our [Ultralytics HUB](https://docs.ultralytics.com/hub/) quickstart guide. ### What are the common issues faced during YOLO model training, and how can I resolve them? Common issues during YOLO model training include data formatting errors, model architecture mismatches, and insufficient [training data](https://www.ultralytics.com/glossary/training-data). To address these, ensure your dataset is correctly formatted, check for compatible model versions, and augment your training data. For a comprehensive list of solutions, refer to our [YOLO Common Issues](yolo-common-issues.md) guide. ### How can I deploy my YOLO model for real-time object detection on edge devices? Deploying YOLO models on edge devices like NVIDIA Jetson and Raspberry Pi requires converting the model to a compatible format such as TensorRT or TFLite. Follow our step-by-step guides for [NVIDIA Jetson](nvidia-jetson.md) and [Raspberry Pi](raspberry-pi.md) deployments to get started with real-time object detection on edge hardware. These guides will walk you through installation, configuration, and performance optimization.
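To make the edge-deployment answer above concrete, here is a minimal sketch of exporting a trained model to the formats mentioned. It assumes the export machine has the required backends installed (TensorRT for `engine`, the TensorFlow export dependencies for `tflite`); the file names in the comments are simply the defaults Ultralytics writes next to the weights.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # or your custom-trained weights

# TensorRT engine for NVIDIA devices such as Jetson (requires TensorRT on the export machine)
model.export(format="engine")  # creates 'yolo11n.engine'

# TFLite model for lightweight CPU inference, e.g. on Raspberry Pi
model.export(format="tflite")  # creates a .tflite model file

# The exported engine can be loaded back with YOLO() for prediction
trt_model = YOLO("yolo11n.engine")
results = trt_model.predict("https://ultralytics.com/images/bus.jpg")
```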
--- comments: true description: Find best practices, optimization strategies, and troubleshooting advice for training computer vision models. Improve your model training efficiency and accuracy. keywords: Model Training Machine Learning, AI Model Training, Number of Epochs, How to Train a Model in Machine Learning, Machine Learning Best Practices, What is Model Training --- # Machine Learning Best Practices and Tips for Model Training ## Introduction One of the most important steps when working on a [computer vision project](./steps-of-a-cv-project.md) is model training. Before reaching this step, you need to [define your goals](./defining-project-goals.md) and [collect and annotate your data](./data-collection-and-annotation.md). After [preprocessing the data](./preprocessing_annotated_data.md) to make sure it is clean and consistent, you can move on to training your model. <p align="center"> <br> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/GIrFEoR5PoU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe> <br> <strong>Watch:</strong> Model Training Tips | How to Handle Large Datasets | Batch Size, GPU Utilization and <a href="https://www.ultralytics.com/glossary/mixed-precision">Mixed Precision</a> </p> So, what is [model training](../modes/train.md)? Model training is the process of teaching your model to recognize visual patterns and make predictions based on your data. It directly impacts the performance and accuracy of your application. In this guide, we'll cover best practices, optimization techniques, and troubleshooting tips to help you train your computer vision models effectively. ## How to Train a [Machine Learning](https://www.ultralytics.com/glossary/machine-learning-ml) Model A computer vision model is trained by adjusting its internal parameters to minimize errors. Initially, the model is fed a large set of labeled images. It makes predictions about what is in these images, and the predictions are compared to the actual labels or contents to calculate errors. These errors show how far off the model's predictions are from the true values. During training, the model iteratively makes predictions, calculates errors, and updates its parameters through a process called [backpropagation](https://www.ultralytics.com/glossary/backpropagation). In this process, the model adjusts its internal parameters (weights and biases) to reduce the errors. By repeating this cycle many times, the model gradually improves its accuracy. Over time, it learns to recognize complex patterns such as shapes, colors, and textures. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/backpropagation-diagram.avif" alt="What is Backpropagation?"> </p> This learning process makes it possible for the [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) model to perform various [tasks](../tasks/index.md), including [object detection](../tasks/detect.md), [instance segmentation](../tasks/segment.md), and [image classification](../tasks/classify.md). The ultimate goal is to create a model that can generalize its learning to new, unseen images so that it can accurately understand visual data in real-world applications. Now that we know what is happening behind the scenes when we train a model, let's look at points to consider when training a model.
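The predict → measure error → backpropagate → update cycle described above is easiest to see in a few lines of raw PyTorch. This is a toy sketch with made-up tensors, not how Ultralytics trains YOLO11 internally, but every training framework repeats essentially this loop:

```python
import torch
from torch import nn

# Hypothetical toy data: 100 "images" flattened to 64 features, 10 classes
images = torch.randn(100, 64)
labels = torch.randint(0, 10, (100,))

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    predictions = model(images)          # 1. make predictions
    loss = loss_fn(predictions, labels)  # 2. compare predictions to labels (the error)
    optimizer.zero_grad()
    loss.backward()                      # 3. backpropagation computes gradients
    optimizer.step()                     # 4. update weights and biases to reduce the error
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```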
## Training on Large Datasets There are a few different aspects to think about when you are planning on using a large dataset to train a model. For example, you can adjust the batch size, control the GPU utilization, choose to use multiscale training, etc. Let's walk through each of these options in detail. ### Batch Size and GPU Utilization When training models on large datasets, efficiently utilizing your GPU is key. Batch size is an important factor. It is the number of data samples that a machine learning model processes in a single training iteration. Using the maximum batch size supported by your GPU, you can fully take advantage of its capabilities and reduce the time model training takes. However, you want to avoid running out of GPU memory. If you encounter memory errors, reduce the batch size incrementally until the model trains smoothly. With respect to YOLO11, you can set the `batch_size` parameter in the [training configuration](../modes/train.md) to match your GPU capacity. Also, setting `batch=-1` in your training script will automatically determine the [batch size](https://www.ultralytics.com/glossary/batch-size) that can be efficiently processed based on your device's capabilities. By fine-tuning the batch size, you can make the most of your GPU resources and improve the overall training process. ### Subset Training Subset training is a smart strategy that involves training your model on a smaller set of data that represents the larger dataset. It can save time and resources, especially during initial model development and testing. If you are running short on time or experimenting with different model configurations, subset training is a good option. When it comes to YOLO11, you can easily implement subset training by using the `fraction` parameter. This parameter lets you specify what fraction of your dataset to use for training. For example, setting `fraction=0.1` will train your model on 10% of the data. You can use this technique for quick iterations and tuning your model before committing to training a model using a full dataset. Subset training helps you make rapid progress and identify potential issues early on. ### Multi-scale Training Multiscale training is a technique that improves your model's ability to generalize by training it on images of varying sizes. Your model can learn to detect objects at different scales and distances and become more robust. For example, when you train YOLO11, you can enable multiscale training by setting the `scale` parameter. This parameter adjusts the size of training images by a specified factor, simulating objects at different distances. For example, setting `scale=0.5` will reduce the image size by half, while `scale=2.0` will double it. Configuring this parameter allows your model to experience a variety of image scales and improve its detection capabilities across different object sizes and scenarios. ### Caching Caching is an important technique to improve the efficiency of training machine learning models. By storing preprocessed images in memory, caching reduces the time the GPU spends waiting for data to be loaded from the disk. The model can continuously receive data without delays caused by disk I/O operations. Caching can be controlled when training YOLO11 using the `cache` parameter: - _`cache=True`_: Stores dataset images in RAM, providing the fastest access speed but at the cost of increased memory usage. - _`cache='disk'`_: Stores the images on disk, slower than RAM but faster than loading fresh data each time. 
- _`cache=False`_: Disables caching, relying entirely on disk I/O, which is the slowest option. ### Mixed Precision Training Mixed precision training uses both 16-bit (FP16) and 32-bit (FP32) floating-point types. The strengths of both FP16 and FP32 are leveraged by using FP16 for faster computation and FP32 to maintain precision where needed. Most of the [neural network](https://www.ultralytics.com/glossary/neural-network-nn)'s operations are done in FP16 to benefit from faster computation and lower memory usage. However, a master copy of the model's weights is kept in FP32 to ensure accuracy during the weight update steps. You can handle larger models or larger batch sizes within the same hardware constraints. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/mixed-precision-training-overview.avif" alt="Mixed Precision Training Overview"> </p> To implement mixed precision training, you'll need to modify your training scripts and ensure your hardware (like GPUs) supports it. Many modern [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) frameworks, such as [Tensorflow](https://www.ultralytics.com/glossary/tensorflow), offer built-in support for mixed precision. Mixed precision training is straightforward when working with YOLO11. You can use the `amp` flag in your training configuration. Setting `amp=True` enables Automatic Mixed Precision (AMP) training. Mixed precision training is a simple yet effective way to optimize your model training process. ### Pre-trained Weights Using pretrained weights is a smart way to speed up your model's training process. Pretrained weights come from models already trained on large datasets, giving your model a head start. [Transfer learning](https://www.ultralytics.com/glossary/transfer-learning) adapts pretrained models to new, related tasks. Fine-tuning a pre-trained model involves starting with these weights and then continuing training on your specific dataset. This method of training results in faster training times and often better performance because the model starts with a solid understanding of basic features. The `pretrained` parameter makes transfer learning easy with YOLO11. Setting `pretrained=True` will use default pre-trained weights, or you can specify a path to a custom pre-trained model. Using pre-trained weights and transfer learning effectively boosts your model's capabilities and reduces training costs. ### Other Techniques to Consider When Handling a Large Dataset There are a couple of other techniques to consider when handling a large dataset: - **[Learning Rate](https://www.ultralytics.com/glossary/learning-rate) Schedulers**: Implementing learning rate schedulers dynamically adjusts the learning rate during training. A well-tuned learning rate can prevent the model from overshooting minima and improve stability. When training YOLO11, the `lrf` parameter helps manage learning rate scheduling by setting the final learning rate as a fraction of the initial rate. - **Distributed Training**: For handling large datasets, distributed training can be a game-changer. You can reduce the training time by spreading the training workload across multiple GPUs or machines. ## The Number of Epochs To Train For When training a model, an epoch refers to one complete pass through the entire training dataset. During an epoch, the model processes each example in the training set once and updates its parameters based on the learning algorithm. 
Multiple epochs are usually needed to allow the model to learn and refine its parameters over time. A common question that comes up is how to determine the number of epochs to train the model for. A good starting point is 300 epochs. If the model overfits early, you can reduce the number of epochs. If [overfitting](https://www.ultralytics.com/glossary/overfitting) does not occur after 300 epochs, you can extend the training to 600, 1200, or more epochs. However, the ideal number of epochs can vary based on your dataset's size and project goals. Larger datasets might require more epochs for the model to learn effectively, while smaller datasets might need fewer epochs to avoid overfitting. With respect to YOLO11, you can set the `epochs` parameter in your training script. ## Early Stopping Early stopping is a valuable technique for optimizing model training. By monitoring validation performance, you can halt training once the model stops improving. You can save computational resources and prevent overfitting. The process involves setting a patience parameter that determines how many [epochs](https://www.ultralytics.com/glossary/epoch) to wait for an improvement in validation metrics before stopping training. If the model's performance does not improve within these epochs, training is stopped to avoid wasting time and resources. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/early-stopping-overview.avif" alt="Early Stopping Overview"> </p> For YOLO11, you can enable early stopping by setting the patience parameter in your training configuration. For example, `patience=5` means training will stop if there's no improvement in validation metrics for 5 consecutive epochs. Using this method ensures the training process remains efficient and achieves optimal performance without excessive computation.
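The options covered in the sections above (batch size, dataset fraction, multiscale training, caching, AMP, pre-trained weights, learning-rate scheduling, epochs, and patience) all map to arguments of `model.train()`. The sketch below simply combines them for illustration; the dataset path is a placeholder, and in practice you would only set the options you actually need:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Placeholder dataset YAML; replace with your own
model.train(
    data="path/to/dataset.yaml",
    epochs=300,       # common starting point discussed above
    patience=5,       # early stopping after 5 epochs without improvement
    batch=-1,         # auto-select a batch size for the available GPU memory
    cache=True,       # cache images in RAM ('disk' and False are the other options)
    amp=True,         # Automatic Mixed Precision training
    fraction=0.1,     # subset training on 10% of the data for quick iteration
    scale=0.5,        # multiscale augmentation factor
    pretrained=True,  # start from pre-trained weights
    lrf=0.01,         # final learning rate as a fraction of the initial rate
)
```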
## Choosing Between Cloud and Local Training There are two options for training your model: cloud training and local training. Cloud training offers scalability and powerful hardware and is ideal for handling large datasets and complex models. Platforms like Google Cloud, AWS, and Azure provide on-demand access to high-performance GPUs and TPUs, speeding up training times and enabling experiments with larger models. However, cloud training can be expensive, especially for long periods, and data transfer can add to costs and latency. Local training provides greater control and customization, letting you tailor your environment to specific needs and avoid ongoing cloud costs. It can be more economical for long-term projects, and since your data stays on-premises, it's more secure. However, local hardware may have resource limitations and require maintenance, which can lead to longer training times for large models. ## Selecting an Optimizer An optimizer is an algorithm that adjusts the weights of your neural network to minimize the [loss function](https://www.ultralytics.com/glossary/loss-function), which measures how well the model is performing. In simpler terms, the optimizer helps the model learn by tweaking its parameters to reduce errors. Choosing the right optimizer directly affects how quickly and accurately the model learns. You can also fine-tune optimizer parameters to improve model performance. Adjusting the learning rate sets the size of the steps when updating parameters. For stability, you might start with a moderate learning rate and gradually decrease it over time to improve long-term learning. Additionally, setting the momentum determines how much influence past updates have on current updates. A common value for momentum is around 0.9. It generally provides a good balance. ### Common Optimizers Different optimizers have various strengths and weaknesses. Let's take a glimpse at a few common optimizers. - **SGD (Stochastic Gradient Descent)**: - Updates model parameters using the gradient of the loss function with respect to the parameters. - Simple and efficient but can be slow to converge and might get stuck in local minima. - **Adam (Adaptive Moment Estimation)**: - Combines the benefits of both SGD with momentum and RMSProp. - Adjusts the learning rate for each parameter based on estimates of the first and second moments of the gradients. - Well-suited for noisy data and sparse gradients. - Efficient and generally requires less tuning, making it a recommended optimizer for YOLO11. - **RMSProp (Root Mean Square Propagation)**: - Adjusts the learning rate for each parameter by dividing the gradient by a running average of the magnitudes of recent gradients. - Helps in handling the vanishing gradient problem and is effective for [recurrent neural networks](https://www.ultralytics.com/glossary/recurrent-neural-network-rnn). For YOLO11, the `optimizer` parameter lets you choose from various optimizers, including SGD, Adam, AdamW, NAdam, RAdam, and RMSProp, or you can set it to `auto` for automatic selection based on model configuration. ## Connecting with the Community Being part of a community of computer vision enthusiasts can help you solve problems and learn faster. Here are some ways to connect, get help, and share ideas. ### Community Resources - **GitHub Issues:** Visit the [YOLO11 GitHub repository](https://github.com/ultralytics/ultralytics/issues) and use the Issues tab to ask questions, report bugs, and suggest new features. 
The community and maintainers are very active and ready to help. - **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://discord.com/invite/ultralytics) to chat with other users and developers, get support, and share your experiences. ### Official Documentation - **Ultralytics YOLO11 Documentation:** Check out the [official YOLO11 documentation](./index.md) for detailed guides and helpful tips on various computer vision projects. Using these resources will help you solve challenges and stay up-to-date with the latest trends and practices in the computer vision community. ## Key Takeaways Training computer vision models involves following good practices, optimizing your strategies, and solving problems as they arise. Techniques like adjusting batch sizes, mixed [precision](https://www.ultralytics.com/glossary/precision) training, and starting with pre-trained weights can make your models work better and train faster. Methods like subset training and early stopping help you save time and resources. Staying connected with the community and keeping up with new trends will help you keep improving your model training skills. ## FAQ ### How can I improve GPU utilization when training a large dataset with Ultralytics YOLO? To improve GPU utilization, set the `batch_size` parameter in your training configuration to the maximum size supported by your GPU. This ensures that you make full use of the GPU's capabilities, reducing training time. If you encounter memory errors, incrementally reduce the batch size until training runs smoothly. For YOLO11, setting `batch=-1` in your training script will automatically determine the optimal batch size for efficient processing. For further information, refer to the [training configuration](../modes/train.md). ### What is mixed precision training, and how do I enable it in YOLO11? Mixed precision training utilizes both 16-bit (FP16) and 32-bit (FP32) floating-point types to balance computational speed and precision. This approach speeds up training and reduces memory usage without sacrificing model [accuracy](https://www.ultralytics.com/glossary/accuracy). To enable mixed precision training in YOLO11, set the `amp` parameter to `True` in your training configuration. This activates Automatic Mixed Precision (AMP) training. For more details on this optimization technique, see the [training configuration](../modes/train.md). ### How does multiscale training enhance YOLO11 model performance? Multiscale training enhances model performance by training on images of varying sizes, allowing the model to better generalize across different scales and distances. In YOLO11, you can enable multiscale training by setting the `scale` parameter in the training configuration. For example, `scale=0.5` reduces the image size by half, while `scale=2.0` doubles it. This technique simulates objects at different distances, making the model more robust across various scenarios. For settings and more details, check out the [training configuration](../modes/train.md). ### How can I use pre-trained weights to speed up training in YOLO11? Using pre-trained weights can significantly reduce training times and improve model performance by starting from a model that already understands basic features. In YOLO11, you can set the `pretrained` parameter to `True` or specify a path to custom pre-trained weights in your training configuration. This approach, known as transfer learning, leverages knowledge from large datasets to adapt to your specific task. 
Learn more about pre-trained weights and their advantages [here](../modes/train.md). ### What is the recommended number of epochs for training a model, and how do I set this in YOLO11? The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn't observed, you might extend training to 600, 1200, or more epochs. To set this in YOLO11, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).
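Tying this back to the "Selecting an Optimizer" section earlier in the guide: the optimizer, initial learning rate, and momentum can all be passed straight to `model.train()`. The values below are illustrative rather than recommendations for any particular dataset, and the data path is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Explicit optimizer settings; "auto" lets Ultralytics choose based on the model configuration
model.train(
    data="path/to/dataset.yaml",  # placeholder dataset YAML
    epochs=100,
    optimizer="AdamW",  # one of SGD, Adam, AdamW, NAdam, RAdam, RMSProp, or "auto"
    lr0=0.01,           # initial learning rate
    lrf=0.01,           # final learning rate as a fraction of lr0
    momentum=0.9,       # SGD momentum / beta1 for Adam-family optimizers
)
```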
--- comments: true description: Master instance segmentation and tracking with Ultralytics YOLO11. Learn techniques for precise object identification and tracking. keywords: instance segmentation, tracking, YOLO11, Ultralytics, object detection, machine learning, computer vision, python --- # Instance Segmentation and Tracking using Ultralytics YOLO11 🚀 ## What is [Instance Segmentation](https://www.ultralytics.com/glossary/instance-segmentation)? [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) instance segmentation involves identifying and outlining individual objects in an image, providing a detailed understanding of spatial distribution. Unlike [semantic segmentation](https://www.ultralytics.com/glossary/semantic-segmentation), it uniquely labels and precisely delineates each object, crucial for tasks like [object detection](https://www.ultralytics.com/glossary/object-detection) and medical imaging. There are two types of instance segmentation tracking available in the Ultralytics package: - **Instance Segmentation with Class Objects:** Each class object is assigned a unique color for clear visual separation. - **Instance Segmentation with Object Tracks:** Every track is represented by a distinct color, facilitating easy identification and tracking. <p align="center"> <br> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/75G_S1Ngji8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe> <br> <strong>Watch:</strong> Instance Segmentation with Object Tracking using Ultralytics YOLO11 </p> ## Samples | Instance Segmentation | Instance Segmentation + Object Tracking | | :----------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------: | | ![Ultralytics Instance Segmentation](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation.avif) | ![Ultralytics Instance Segmentation with Object Tracking](https://github.com/ultralytics/docs/releases/download/0/ultralytics-instance-segmentation-object-tracking.avif) | | Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 | !!! 
example "Instance Segmentation and Tracking" === "Instance Segmentation" ```python import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors model = YOLO("yolo11n-seg.pt") # segmentation model names = model.model.names cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) while True: ret, im0 = cap.read() if not ret: print("Video frame is empty or video processing has been successfully completed.") break results = model.predict(im0) annotator = Annotator(im0, line_width=2) if results[0].masks is not None: clss = results[0].boxes.cls.cpu().tolist() masks = results[0].masks.xy for mask, cls in zip(masks, clss): color = colors(int(cls), True) txt_color = annotator.get_txt_color(color) annotator.seg_bbox(mask=mask, mask_color=color, label=names[int(cls)], txt_color=txt_color) out.write(im0) cv2.imshow("instance-segmentation", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` === "Instance Segmentation with Object Tracking" ```python from collections import defaultdict import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors track_history = defaultdict(lambda: []) model = YOLO("yolo11n-seg.pt") # segmentation model cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) while True: ret, im0 = cap.read() if not ret: print("Video frame is empty or video processing has been successfully completed.") break annotator = Annotator(im0, line_width=2) results = model.track(im0, persist=True) if results[0].boxes.id is not None and results[0].masks is not None: masks = results[0].masks.xy track_ids = results[0].boxes.id.int().cpu().tolist() for mask, track_id in zip(masks, track_ids): color = colors(int(track_id), True) txt_color = annotator.get_txt_color(color) annotator.seg_bbox(mask=mask, mask_color=color, label=str(track_id), txt_color=txt_color) out.write(im0) cv2.imshow("instance-segmentation-object-tracking", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` ### `seg_bbox` Arguments | Name | Type | Default | Description | | ------------ | ------- | --------------- | -------------------------------------------- | | `mask` | `array` | `None` | Segmentation mask coordinates | | `mask_color` | `RGB` | `(255, 0, 255)` | Mask color for every segmented box | | `label` | `str` | `None` | Label for segmented object | | `txt_color` | `RGB` | `None` | Label color for segmented and tracked object | ## Note For any inquiries, feel free to post your questions in the [Ultralytics Issue Section](https://github.com/ultralytics/ultralytics/issues/new/choose) or the discussion section mentioned below. ## FAQ #
## How do I perform instance segmentation using Ultralytics YOLO11? To perform instance segmentation using Ultralytics YOLO11, initialize the YOLO model with a segmentation version of YOLO11 and process video frames through it. Here's a simplified code example: !!! example === "Python" ```python import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors model = YOLO("yolo11n-seg.pt") # segmentation model cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("instance-segmentation.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) while True: ret, im0 = cap.read() if not ret: break results = model.predict(im0) annotator = Annotator(im0, line_width=2) if results[0].masks is not None: clss = results[0].boxes.cls.cpu().tolist() masks = results[0].masks.xy for mask, cls in zip(masks, clss): annotator.seg_bbox(mask=mask, mask_color=colors(int(cls), True), det_label=model.model.names[int(cls)]) out.write(im0) cv2.imshow("instance-segmentation", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` Learn more about instance segmentation in the [Ultralytics YOLO11 guide](#what-is-instance-segmentation). ### What is the difference between instance segmentation and object tracking in Ultralytics YOLO11? Instance segmentation identifies and outlines individual objects within an image, giving each object a unique label and mask. Object tracking extends this by assigning consistent labels to objects across video frames, facilitating continuous tracking of the same objects over time. Learn more about the distinctions in the [Ultralytics YOLO11 documentation](#samples). ### Why should I use Ultralytics YOLO11 for instance segmentation and tracking over other models like Mask R-CNN or Faster R-CNN? Ultralytics YOLO11 offers real-time performance, superior [accuracy](https://www.ultralytics.com/glossary/accuracy), and ease of use compared to other models like Mask R-CNN or Faster R-CNN. YOLO11 provides a seamless integration with Ultralytics HUB, allowing users to manage models, datasets, and training pipelines efficiently. Discover more about the benefits of YOLO11 in the [Ultralytics blog](https://www.ultralytics.com/blog/introducing-ultralytics-yolov8). ### How can I implement object tracking using Ultralytics YOLO11? To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example: !!! 
example === "Python" ```python from collections import defaultdict import cv2 from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors track_history = defaultdict(lambda: []) model = YOLO("yolo11n-seg.pt") # segmentation model cap = cv2.VideoCapture("path/to/video/file.mp4") w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) out = cv2.VideoWriter("instance-segmentation-object-tracking.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h)) while True: ret, im0 = cap.read() if not ret: break annotator = Annotator(im0, line_width=2) results = model.track(im0, persist=True) if results[0].boxes.id is not None and results[0].masks is not None: masks = results[0].masks.xy track_ids = results[0].boxes.id.int().cpu().tolist() for mask, track_id in zip(masks, track_ids): annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id)) out.write(im0) cv2.imshow("instance-segmentation-object-tracking", im0) if cv2.waitKey(1) & 0xFF == ord("q"): break out.release() cap.release() cv2.destroyAllWindows() ``` Find more in the [Instance Segmentation and Tracking section](#samples). ### Are there any datasets provided by Ultralytics suitable for training YOLO11 models for instance segmentation and tracking? Yes, Ultralytics offers several datasets suitable for training YOLO11 models, including segmentation and tracking datasets. Dataset examples, structures, and instructions for use can be found in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/).
--- comments: true description: Learn to deploy Ultralytics YOLOv8 on NVIDIA Jetson devices with our detailed guide. Explore performance benchmarks and maximize AI capabilities. keywords: Ultralytics, YOLOv8, NVIDIA Jetson, JetPack, AI deployment, performance benchmarks, embedded systems, deep learning, TensorRT, computer vision --- # Quick Start Guide: NVIDIA Jetson with Ultralytics YOLOv8 This comprehensive guide provides a detailed walkthrough for deploying Ultralytics YOLOv8 on [NVIDIA Jetson](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/) devices. Additionally, it showcases performance benchmarks to demonstrate the capabilities of YOLOv8 on these small and powerful devices. <p align="center"> <br> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/mUybgOlSxxA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe> <br> <strong>Watch:</strong> How to Setup NVIDIA Jetson with Ultralytics YOLOv8 </p> <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem.avif" alt="NVIDIA Jetson Ecosystem"> !!! note This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) which is based on NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release of [JP6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60), JetPack release of [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html) which is based on NVIDIA Jetson Nano 4GB running JetPack release of [JP4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461). It is expected to work across all the NVIDIA Jetson hardware lineup including latest and legacy. ## What is NVIDIA Jetson? NVIDIA Jetson is a series of embedded computing boards designed to bring accelerated AI (artificial intelligence) computing to edge devices. These compact and powerful devices are built around NVIDIA's GPU architecture and are capable of running complex AI algorithms and [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models directly on the device, without needing to rely on [cloud computing](https://www.ultralytics.com/glossary/cloud-computing) resources. Jetson boards are often used in robotics, autonomous vehicles, industrial automation, and other applications where AI inference needs to be performed locally with low latency and high efficiency. Additionally, these boards are based on the ARM64 architecture and runs on lower power compared to traditional GPU computing devices. ## NVIDIA Jetson Series Comparison [Jetson Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/) is the latest iteration of the NVIDIA Jetson family based on NVIDIA Ampere architecture which brings drastically improved AI performance when compared to the previous generations. Below table compared few of the Jetson devices in the ecosystem. 
| | Jetson AGX Orin 64GB | Jetson Orin NX 16GB | Jetson Orin Nano 8GB | Jetson AGX Xavier | Jetson Xavier NX | Jetson Nano | | ----------------- | ----------------------------------------------------------------- | ---------------------------------------------------------------- | ------------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------------- | --------------------------------------------- | | AI Performance | 275 TOPS | 100 TOPS | 40 TOPs | 32 TOPS | 21 TOPS | 472 GFLOPS | | GPU | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores | 512-core NVIDIA Volta architecture GPU with 64 Tensor Cores | 384-core NVIDIA Volta™ architecture GPU with 48 Tensor Cores | 128-core NVIDIA Maxwell™ architecture GPU | | GPU Max Frequency | 1.3 GHz | 918 MHz | 625 MHz | 1377 MHz | 1100 MHz | 921MHz | | CPU | 12-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU 3MB L2 + 6MB L3 | 8-core NVIDIA Arm® Cortex A78AE v8.2 64-bit CPU 2MB L2 + 4MB L3 | 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU 1.5MB L2 + 4MB L3 | 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU 8MB L2 + 4MB L3 | 6-core NVIDIA Carmel Arm®v8.2 64-bit CPU 6MB L2 + 4MB L3 | Quad-Core Arm® Cortex®-A57 MPCore processor | | CPU Max Frequency | 2.2 GHz | 2.0 GHz | 1.5 GHz | 2.2 GHz | 1.9 GHz | 1.43GHz | | Memory | 64GB 256-bit LPDDR5 204.8GB/s | 16GB 128-bit LPDDR5 102.4GB/s | 8GB 128-bit LPDDR5 68 GB/s | 32GB 256-bit LPDDR4x 136.5GB/s | 8GB 128-bit LPDDR4x 59.7GB/s | 4GB 64-bit LPDDR4 25.6GB/s" | For a more detailed comparison table, please visit the **Technical Specifications** section of [official NVIDIA Jetson page](https://developer.nvidia.com/embedded/jetson-modules). ## What is NVIDIA JetPack? [NVIDIA JetPack SDK](https://developer.nvidia.com/embedded/jetpack) powering the Jetson modules is the most comprehensive solution and provides full development environment for building end-to-end accelerated AI applications and shortens time to market. JetPack includes Jetson Linux with bootloader, Linux kernel, Ubuntu desktop environment, and a complete set of libraries for acceleration of GPU computing, multimedia, graphics, and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv). It also includes samples, documentation, and developer tools for both host computer and developer kit, and supports higher level SDKs such as DeepStream for streaming video analytics, Isaac for robotics, and Riva for conversational AI. ## Flash JetPack to NVIDIA Jetson The first step after getting your hands on an NVIDIA Jetson device is to flash NVIDIA JetPack to the device. There are several different way of flashing NVIDIA Jetson devices. 1. If you own an official NVIDIA Development Kit such as the Jetson Orin Nano Developer Kit, you can [download an image and prepare an SD card with JetPack for booting the device](https://developer.nvidia.com/embedded/learn/get-started-jetson-orin-nano-devkit). 2. If you own any other NVIDIA Development Kit, you can [flash JetPack to the device using SDK Manager](https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html). 3. 
If you own a Seeed Studio reComputer J4012 device, you can [flash JetPack to the included SSD](https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/), and if you own a Seeed Studio reComputer J1020 v2 device, you can [flash JetPack to the eMMC/SSD](https://wiki.seeedstudio.com/reComputer_J2021_J202_Flash_Jetpack/).
4. If you own any other third-party device powered by an NVIDIA Jetson module, it is recommended to follow [command-line flashing](https://docs.nvidia.com/jetson/archives/r35.5.0/DeveloperGuide/IN/QuickStart.html).

!!! note

    For methods 3 and 4 above, after flashing the system and booting the device, please enter `sudo apt update && sudo apt install nvidia-jetpack -y` in the device terminal to install all the remaining JetPack components needed.
## JetPack Support Based on Jetson Device

The below table highlights NVIDIA JetPack versions supported by different NVIDIA Jetson devices.

|                   | JetPack 4 | JetPack 5 | JetPack 6 |
| ----------------- | --------- | --------- | --------- |
| Jetson Nano       | ✅        | ❌        | ❌        |
| Jetson TX2        | ✅        | ❌        | ❌        |
| Jetson Xavier NX  | ✅        | ✅        | ❌        |
| Jetson AGX Xavier | ✅        | ✅        | ❌        |
| Jetson AGX Orin   | ❌        | ✅        | ✅        |
| Jetson Orin NX    | ❌        | ✅        | ✅        |
| Jetson Orin Nano  | ❌        | ✅        | ✅        |

## Quick Start with Docker

The fastest way to get started with Ultralytics YOLOv8 on NVIDIA Jetson is to run with pre-built Docker images for Jetson. Refer to the table above and choose the JetPack version according to the Jetson device you own.

=== "JetPack 4"

    ```bash
    t=ultralytics/ultralytics:latest-jetson-jetpack4
    sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
    ```

=== "JetPack 5"

    ```bash
    t=ultralytics/ultralytics:latest-jetson-jetpack5
    sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
    ```

=== "JetPack 6"

    ```bash
    t=ultralytics/ultralytics:latest-jetson-jetpack6
    sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
    ```

After this is done, skip to the [Use TensorRT on NVIDIA Jetson section](#use-tensorrt-on-nvidia-jetson).

## Start with Native Installation

For a native instal
lation without Docker, please refer to the steps below. ### Run on JetPack 6.x #### Install Ultralytics Package Here we will install Ultralytics package on the Jetson with optional dependencies so that we can export the [PyTorch](https://www.ultralytics.com/glossary/pytorch) models to other different formats. We will mainly focus on [NVIDIA TensorRT exports](../integrations/tensorrt.md) because TensorRT will make sure we can get the maximum performance out of the Jetson devices. 1. Update packages list, install pip and upgrade to latest ```bash sudo apt update sudo apt install python3-pip -y pip install -U pip ``` 2. Install `ultralytics` pip package with optional dependencies ```bash pip install ultralytics[export] ``` 3. Reboot the device ```bash sudo reboot ``` #### Install PyTorch and Torchvision The above ultralytics installation will install Torch and Torchvision. However, these 2 packages installed via pip are not compatible to run on Jetson platform which is based on ARM64 architecture. Therefore, we need to manually install pre-built PyTorch pip wheel and compile/ install Torchvision from source. Install `torch 2.3.0` and `torchvision 0.18` according to JP6.0 ```bash sudo apt-get install libopenmpi-dev libopenblas-base libomp-dev -y pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torch-2.3.0-cp310-cp310-linux_aarch64.whl pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl ``` Visit the [PyTorch for Jetson page](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048) to access all different versions of PyTorch for different JetPack versions. For a more detailed list on the PyTorch, Torchvision compatibility, visit the [PyTorch and Torchvision compatibility page](https://github.com/pytorch/vision). #### Install `onnxruntime-gpu` The [onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/) package hosted in PyPI does not have `aarch64` binaries for the Jetson. So we need to manually install this package. This package is needed for some of the exports. All different `onnxruntime-gpu` packages corresponding to different JetPack and Python versions are listed [here](https://elinux.org/Jetson_Zoo#ONNX_Runtime). However, here we will download and install `onnxruntime-gpu 1.18.0` with `Python3.10` support. ```bash wget https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl -O onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl pip install onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl ``` !!! note `onnxruntime-gpu` will automatically revert back the numpy version to latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing: `pip install numpy==1.23.5` ### Run on JetPack 5.x #### Install Ultralytics Package Here we will install Ultralytics package on the Jetson with optional dependencies so that we can export the PyTorch models to other different formats. We will mainly focus on [NVIDIA TensorRT exports](../integrations/tensorrt.md) because TensorRT will make sure we can get the maximum performance out of the Jetson devices. 1. Update packages list, install pip and upgrade to latest ```bash sudo apt update sudo apt install python3-pip -y pip install -U pip ``` 2. Install `ultralytics` pip package with optional dependencies ```bash pip install ultralytics[export] ``` 3. Reboot the device ```bash sudo reboot ``` #### Install PyTorch and Torchvision The above ultralytics installation will install Torch and Torchvision. 
However, these 2 packages installed via pip are not compatible to run on Jetson platform which is based on ARM64 architecture. Therefore, we need to manually install pre-built PyTorch pip wheel and compile/ install Torchvision from source. 1. Uninstall currently installed PyTorch and Torchvision ```bash pip uninstall torch torchvision ``` 2. Install PyTorch 2.1.0 according to JP5.1.3 ```bash sudo apt-get install -y libopenblas-base libopenmpi-dev wget https://developer.download.nvidia.com/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl -O torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl ``` 3. Install Torchvision v0.16.2 according to PyTorch v2.1.0 ```bash sudo apt install -y libjpeg-dev zlib1g-dev git clone https://github.com/pytorch/vision torchvision cd torchvision git checkout v0.16.2 python3 setup.py install --user ``` Visit the [PyTorch for Jetson page](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048) to access all different versions of PyTorch for different JetPack versions. For a more detailed list on the PyTorch, Torchvision compatibility, visit the [PyTorch and Torchvision compatibility page](https://github.com/pytorch/vision). #### Install `onnxruntime-gpu` The [onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/) package hosted in PyPI does not have `aarch64` binaries for the Jetson. So we need to manually install this package. This package is needed for some of the exports. All different `onnxruntime-gpu` packages corresponding to different JetPack and Python versions are listed [here](https://elinux.org/Jetson_Zoo#ONNX_Runtime). However, here we will download and install `onnxruntime-gpu 1.17.0` with `Python3.8` support. ```bash wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -O onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl ``` !!! note `onnxruntime-gpu` will automatically revert back the numpy version to latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing: `pip install numpy==1.23.5` ## Use TensorRT on NVIDIA Jetson Out of all the model
export formats supported by Ultralytics, TensorRT delivers the best inference performance when working with NVIDIA Jetson devices and our recommendation is to use TensorRT with Jetson. We also have a detailed document on TensorRT [here](../integrations/tensorrt.md). ### Convert Model to TensorRT and Run Inference The YOLOv8n model in PyTorch format is converted to TensorRT to run inference with the exported model. !!! example === "Python" ```python from ultralytics import YOLO # Load a YOLOv8n PyTorch model model = YOLO("yolov8n.pt") # Export the model to TensorRT model.export(format="engine") # creates 'yolov8n.engine' # Load the exported TensorRT model trt_model = YOLO("yolov8n.engine") # Run inference results = trt_model("https://ultralytics.com/images/bus.jpg") ``` === "CLI" ```bash # Export a YOLOv8n PyTorch model to TensorRT format yolo export model=yolov8n.pt format=engine # creates 'yolov8n.engine' # Run inference with the exported model yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg' ``` ### Use NVIDIA Deep Learning Accelerator (DLA) [NVIDIA Deep Learning Accelerator (DLA)](https://developer.nvidia.com/deep-learning-accelerator) is a specialized hardware component built into NVIDIA Jetson devices that optimizes deep learning inference for energy efficiency and performance. By offloading tasks from the GPU (freeing it up for more intensive processes), DLA enables models to run with lower power consumption while maintaining high throughput, ideal for embedded systems and real-time AI applications. The following Jetson devices are equipped with DLA hardware: - Jetson Orin NX 16GB - Jetson AGX Orin Series - Jetson AGX Xavier Series - Jetson Xavier NX Series !!! example === "Python" ```python from ultralytics import YOLO # Load a YOLOv8n PyTorch model model = YOLO("yolov8n.pt") # Export the model to TensorRT with DLA enabled (only works with FP16 or INT8) model.export(format="engine", device="dla:0", half=True) # dla:0 or dla:1 corresponds to the DLA cores # Load the exported TensorRT model trt_model = YOLO("yolov8n.engine") # Run inference results = trt_model("https://ultralytics.com/images/bus.jpg") ``` === "CLI" ```bash # Export a YOLOv8n PyTorch model to TensorRT format with DLA enabled (only works with FP16 or INT8) yolo export model=yolov8n.pt format=engine device="dla:0" half=True # dla:0 or dla:1 corresponds to the DLA cores # Run inference with the exported model on the DLA yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg' ``` !!! note Visit the [Export page](../modes/export.md#arguments) to access additional arguments when exporting models to different model formats ## NVIDIA Jetson Orin YOLOv8 Benchmarks YOLOv8 benchmarks were run by the Ultralytics team on 10 different model formats measuring speed and [accuracy](https://www.ultralytics.com/glossary/accuracy): PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on Seeed Studio reComputer J4012 powered by Jetson Orin NX 16GB device at FP32 [precision](https://www.ultralytics.com/glossary/precision) with default input image size of 640. ### Comparison Chart Even though all model exports are working with NVIDIA Jetson, we have only included **PyTorch, TorchScript, TensorRT** for the comparison chart below because, they make use of the GPU on the Jetson and are guaranteed to produce the best results. 
All the other exports only utilize the CPU and the performance is not as good as the above three. You can find benchmarks for all exports in the section after this chart.

<div style="text-align: center;">
    <img width="800" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem-2.avif" alt="NVIDIA Jetson Ecosystem">
</div>

### Detailed Comparison Table

The table below represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.

!!! performance

    === "YOLOv8n"

        | Format | Status
        | 4029.36 |
        | TF Lite | ✅ | 260.4 | 0.7479 | 8772.86 |
        | PaddlePaddle | ✅ | 520.8 | 0.7479 | 10619.53 |
        | NCNN | ✅ | 260.4 | 0.7646 | 376.38 |

[Explore more benchmarking efforts by Seeed Studio](https://www.seeedstudio.com/blog/2023/03/30/yolov8-performance-benchmarks-on-nvidia-jetson-devices) running on different versions of NVIDIA Jetson hardware.

## Reproduce Our Results

To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:

!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a YOLOv8n PyTorch model
        model = YOLO("yolov8n.pt")

        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
        results = model.benchmarks(data="coco8.yaml", imgsz=640)
        ```

    === "CLI"

        ```bash
        # Benchmark YOLOv8n speed and accuracy on the COCO8 dataset for all export formats
        yolo benchmark model=yolov8n.pt data=coco8.yaml imgsz=640
        ```

Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results, use a dataset with a large number of images, i.e. `data='coco.yaml'` (5000 val images), rather than a small one like `data='coco8.yaml'` (4 val images).

## Best Practices when using NVIDIA Jetson

When using NVIDIA Jetson, there are a few best practices to follow in order to enable maximum performance on the NVIDIA Jetson running YOLOv8.

1. Enable MAX Power Mode

    Enabling MAX Power Mode on the Jetson will make sure all CPU, GPU cores are turned on.

    ```bash
    sudo nvpmodel -m 0
    ```

2. Enable Jetson Clocks

    Enabling Jetson Clocks will make sure all CPU, GPU cores are clocked at their maximum frequency.

    ```bash
    sudo jetson_clocks
    ```

3. Install Jetson Stats Application

    We can use the jetson stats application to monitor the temperatures of the system components and check other system details, such as CPU, GPU, and RAM utilization, change power modes, set to max clocks, and check JetPack information.

    ```bash
    sudo apt update
    sudo pip install jetson-stats
    sudo reboot
    jtop
    ```

<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/jetson-stats-application.avif" alt="Jetson Stats">

## Next Steps

Congratulations on successfully setting up YOLOv8 on your NVIDIA Jetson! For further learning and support, visit more guides at [Ultralytics YOLOv8 Docs](../index.md)!

## FAQ

### How do I deploy Ultralytics YOLOv8 on NVIDIA Jetson devices?

Deploying Ultralytics YOLOv8 on NVIDIA Jetson devices is a straightforward process. First, flash your Jetson device with the NVIDIA JetPack SDK. Then, either use a pre-built Docker image for quick setup or manually install the required packages. Detailed steps for each approach can be found in sections [Quick Start with Docker](#quick-start-with-docker) and [Start with Native Installation](#start-with-native-installation).

### What performance benchmarks can I expect from YOLOv8 models on NVIDIA Jetson devices?

YOLOv8 models have been benchmarked on various NVIDIA Jetson devices showing significant performance improvements. For example, the TensorRT format delivers the best inference performance. The table in the [Detailed Comparison Table](#detailed-comparison-table) section provides a comprehensive view of performance metrics like mAP50-95 and inference time across different model formats.

### Why should I use TensorRT for deploying YOLOv8 on NVIDIA Jetson?

TensorRT is highly recommended for deploying YOLOv8 models on NVIDIA Jetson due to its optimal performance.
It accelerates inference by leveraging the Jetson's GPU capabilities, ensuring maximum efficiency and speed. Learn more about how to convert to TensorRT and run inference in the [Use TensorRT on NVIDIA Jetson](#use-tensorrt-on-nvidia-jetson) section. ### How can I install PyTorch and Torchvision on NVIDIA Jetson? To install PyTorch and Torchvision on NVIDIA Jetson, first uninstall any existing versions that may have been installed via pip. Then, manually install the compatible PyTorch and Torchvision versions for the Jetson's ARM64 architecture. Detailed instructions for this process are provided in the [Install PyTorch and Torchvision](#install-pytorch-and-torchvision) section. ### What are the best practices for maximizing performance on NVIDIA Jetson when using YOLOv8? To maximize performance on NVIDIA Jetson with YOLOv8, follow these best practices: 1. Enable MAX Power Mode to utilize all CPU and GPU cores. 2. Enable Jetson Clocks to run all cores at their maximum frequency. 3. Install the Jetson Stats application for monitoring system metrics. For commands and additional details, refer to the [Best Practices when using NVIDIA Jetson](#best-practices-when-using-nvidia-jetson) section.
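As a final sanity check after working through the installation and best-practice steps above, you can confirm from Python that the Jetson-specific PyTorch build is active and that the GPU is visible (a minimal sketch; the exact version strings depend on the JetPack release you installed for):

```python
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("Torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())  # expected to be True on a correctly configured Jetson
```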
## Step 3: [Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation) and Splitting Your Dataset After collecting and annotating your image data, it's important to first split your dataset into training, validation, and test sets before performing data augmentation. Splitting your dataset before augmentation is crucial to test and validate your model on original, unaltered data. It helps accurately assess how well the model generalizes to new, unseen data. Here's how to split your data: - **Training Set:** It is the largest portion of your data, typically 70-80% of the total, used to train your model. - **Validation Set:** Usually around 10-15% of your data; this set is used to tune hyperparameters and validate the model during training, helping to prevent [overfitting](https://www.ultralytics.com/glossary/overfitting). - **Test Set:** The remaining 10-15% of your data is set aside as the test set. It is used to evaluate the model's performance on unseen data after training is complete. After splitting your data, you can perform data augmentation by applying transformations like rotating, scaling, and flipping images to artificially increase the size of your dataset. Data augmentation makes your model more robust to variations and improves its performance on unseen images. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/examples-of-data-augmentations.avif" alt="Examples of Data Augmentations"> </p> Libraries like [OpenCV](https://www.ultralytics.com/glossary/opencv), Albumentations, and [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) offer flexible augmentation functions that you can use. Additionally, some libraries, such as Ultralytics, have [built-in augmentation settings](../modes/train.md) directly within its model training function, simplifying the process. To understand your data better, you can use tools like [Matplotlib](https://matplotlib.org/) or [Seaborn](https://seaborn.pydata.org/) to visualize the images and analyze their distribution and characteristics. Visualizing your data helps identify patterns, anomalies, and the effectiveness of your augmentation techniques. You can also use [Ultralytics Explorer](../datasets/explorer/index.md), a tool for exploring computer vision datasets with semantic search, SQL queries, and vector similarity search. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/explorer-dashboard-screenshot-1.avif" alt="The Ultralytics Explorer Tool"> </p> By properly [understanding, splitting, and augmenting your data](./preprocessing_annotated_data.md), you can develop a well-trained, validated, and tested model that performs well in real-world applications. ## Step 4: Model Training Once your dataset is ready for training, you can focus on setting up the necessary environment, managing your datasets, and training your model. First, you'll need to make sure your environment is configured correctly. Typically, this includes the following: - Installing essential libraries and frameworks like TensorFlow, [PyTorch](https://www.ultralytics.com/glossary/pytorch), or [Ultralytics](../quickstart.md). - If you are using a GPU, installing libraries like CUDA and cuDNN will help enable GPU acceleration and speed up the training process. Then, you can load your training and validation datasets into your environment. Normalize and preprocess the data through resizing, format conversion, or augmentation. 
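As a simple illustration of the preprocessing described above, here is a minimal sketch using OpenCV; the target size and the [0, 1] scaling are assumptions and should match whatever your chosen framework expects:

```python
import cv2
import numpy as np


def preprocess_image(path, size=640):
    """Load an image, resize it to a square model input, and scale pixel values to [0, 1]."""
    image = cv2.imread(path)
    image = cv2.resize(image, (size, size))
    return image.astype(np.float32) / 255.0
```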
With your model selected, configure the layers and specify hyperparameters. Compile the model by setting the [loss function](https://www.ultralytics.com/glossary/loss-function), optimizer, and performance metrics. Libraries like Ultralytics simplify the training process. You can [start training](../modes/train.md) by feeding data into the model with minimal code. These libraries handle weight adjustments, [backpropagation](https://www.ultralytics.com/glossary/backpropagation), and validation automatically. They also offer tools to monitor progress and adjust hyperparameters easily. After training, save the model and its weights with a few commands. It's important to keep in mind that proper dataset management is vital for efficient training. Use version control for datasets to track changes and ensure reproducibility. Tools like [DVC (Data Version Control)](../integrations/dvc.md) can help manage large datasets. ## Step 5: Model Evaluation and Model [Finetuning](https://www.ultralytics.com/glossary/fine-tuning) It's important to assess your model's performance using various metrics and refine it to improve [accuracy](https://www.ultralytics.com/glossary/accuracy). [Evaluating](../modes/val.md) helps identify areas where the model excels and where it may need improvement. Fine-tuning ensures the model is optimized for the best possible performance. - **[Performance Metrics](./yolo-performance-metrics.md):** Use metrics like accuracy, [precision](https://www.ultralytics.com/glossary/precision), [recall](https://www.ultralytics.com/glossary/recall), and F1-score to evaluate your model's performance. These metrics provide insights into how well your model is making predictions. - **[Hyperparameter Tuning](./hyperparameter-tuning.md):** Adjust hyperparameters to optimize model performance. Techniques like grid search or random search can help find the best hyperparameter values. - Fine-Tuning: Make small adjustments to the model architecture or training process to enhance performance. This might involve tweaking [learning rates](https://www.ultralytics.com/glossary/learning-rate), [batch sizes](https://www.ultralytics.com/glossary/batch-size), or other model parameters. ## Step 6: Model Testing In this step, you can make sure that your model performs well on completely unseen data, confirming its readiness for deployment. The difference between model testing and model evaluation is that it focuses on verifying the final model's performance rather than iteratively improving it. It's important to thoroughly test and debug any common issues that may arise. Test your model on a separate test dataset that was not used during training or validation. This dataset should represent real-world scenarios to ensure the model's performance is consistent and reliable. Also, address common problems such as overfitting, [underfitting](https://www.ultralytics.com/glossary/underfitting), and data leakage. Use techniques like cross-validation and [anomaly detection](https://www.ultralytics.com/glossary/anomaly-detection) to identify and fix these issues. ## Step 7: [Model Deployment](https://www.ultralytics.com/glossary/model-deployment) Once your model has been thoroughly tested, it's time to deploy it. Deployment involves making your model available for use in a production environment. 
Here are the steps to deploy a computer vision model: - Setting Up the Environment: Configure the necessary infrastructure for your chosen deployment option, whether it's cloud-based (AWS, Google Cloud, Azure) or edge-based (local devices, IoT). - **[Exporting the Model](../modes/export.md):** Export your model to the appropriate format (e.g., ONNX, TensorRT, CoreML for YOLO11) to ensure compatibility with your deployment platform. - **Deploying the Model:** Deploy the model by setting up APIs or endpoints and integrating it with your application. - **Ensuring Scalability**: Implement load balancers, auto-scaling groups, and monitoring tools to manage resources and handle increasing data and user requests. ## Step 8: Monitoring, Maintenance, and Documentation Once your model is deployed, it's important to continuously monitor its performance, maintain it to handle any issues, and document the entire process for future reference and improvements. Monitoring tools can help you track key performance indicators (KPIs) and detect anomalies or drops in accuracy. By monitoring the model, you can be aware of model drift, where the model's performance declines over time due to changes in the input data. Periodically retrain the model with updated data to maintain accuracy and relevance. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/model-monitoring-maintenance-loop.avif" alt="Model Monitoring"> </p> In addition to monitoring and maintenance, documentation is also key. Thoroughly document the entire process, including model architecture, training procedures, hyperparameters, data preprocessing steps, and any changes made during deployment and maintenance. Good documentation ensures reproducibility and makes future updates or troubleshooting easier. By effectively monitoring, maintaining, and documenting your model, you can ensure it remains accurate, reliable, and easy to manage over its lifecycle.
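To make the monitoring loop concrete, here is a minimal sketch of the kind of per-frame statistics (detection count, mean confidence) that could be logged from a deployed Ultralytics model and fed into a dashboard or alerting system; the model path and the logging destination are placeholders:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # placeholder path to your deployed weights


def log_frame_stats(frame):
    """Run inference on one frame and report simple health metrics."""
    results = model.predict(frame, verbose=False)
    boxes = results[0].boxes
    mean_conf = float(boxes.conf.mean()) if len(boxes) else 0.0
    # In production, send these values to your monitoring system instead of printing them
    print(f"detections={len(boxes)}, mean_confidence={mean_conf:.2f}")
```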
--- comments: true description: Learn to extract isolated objects from inference results using Ultralytics Predict Mode. Step-by-step guide for segmentation object isolation. keywords: Ultralytics, segmentation, object isolation, Predict Mode, YOLO11, machine learning, object detection, binary mask, image processing --- # Isolating Segmentation Objects After performing the [Segment Task](../tasks/segment.md), it's sometimes desirable to extract the isolated objects from the inference results. This guide provides a generic recipe on how to accomplish this using the Ultralytics [Predict Mode](../modes/predict.md). <p align="center"> <img src="https://github.com/ultralytics/docs/releases/download/0/isolated-object-segmentation.avif" alt="Example Isolated Object Segmentation"> </p> ## Recipe Walk Through 1. See the [Ultralytics Quickstart Installation section](../quickstart.md) for a quick walkthrough on installing the required libraries. *** 2. Load a model and run `predict()` method on a source. ```python from ultralytics import YOLO # Load a model model = YOLO("yolo11n-seg.pt") # Run inference results = model.predict() ``` !!! question "No Prediction Arguments?" Without specifying a source, the example images from the library will be used: ``` 'ultralytics/assets/bus.jpg' 'ultralytics/assets/zidane.jpg' ``` This is helpful for rapid testing with the `predict()` method. For additional information about Segmentation Models, visit the [Segment Task](../tasks/segment.md#models) page. To learn more about `predict()` method, see [Predict Mode](../modes/predict.md) section of the Documentation. *** 3. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional). ```{ .py .annotate } from pathlib import Path import numpy as np # (2) Iterate detection results (helpful for multiple images) for r in res: img = np.copy(r.orig_img) img_name = Path(r.path).stem # source image base-name # Iterate each object contour (multiple detections) for ci, c in enumerate(r): # (1) Get detection class name label = c.names[c.boxes.cls.tolist().pop()] ``` 1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md#boxes). 2. To learn more about `predict()` results see [Working with Results for Predict Mode](../modes/predict.md#working-with-results) ??? info "For-Loop" A single image will only iterate the first loop once. A single image with only a single detection will iterate each loop _only_ once. *** 4. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right. ![Binary Mask Image](https://github.com/ultralytics/ultralytics/assets/62214284/59bce684-fdda-4b17-8104-0b4b51149aca){ width="240", align="right" } ```{ .py .annotate } import cv2 # Create binary mask b_mask = np.zeros(img.shape[:2], np.uint8) # (1) Extract contour result contour = c.masks.xy.pop() # (2) Changing the type contour = contour.astype(np.int32) # (3) Reshaping contour = contour.reshape(-1, 1, 2) # Draw contour onto mask _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED) ``` 1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks). 2. 
Here the values are cast into `np.int32` for compatibility with `drawContours()` function from [OpenCV](https://www.ultralytics.com/glossary/opencv). 3. The OpenCV `drawContours()` function expects contours to have a shape of `[N, 1, 2]` expand section below for more details. <details> <summary> Expand to understand what is happening when defining the <code>contour</code> variable.</summary> <p> - `c.masks.xy` :: Provides the coordinates of the mask contour points in the format `(x, y)`. For more details, refer to the [Masks Section from Predict Mode](../modes/predict.md#masks). - `.pop()` :: As `masks.xy` is a list containing a single element, this element is extracted using the `pop()` method. - `.astype(np.int32)` :: Using `masks.xy` will return with a data type of `float32`, but this won't be compatible with the OpenCV `drawContours()` function, so this will change the data type to `int32` for compatibility. - `.reshape(-1, 1, 2)` :: Reformats the data into the required shape of `[N, 1, 2]` where `N` is the number of contour points, with each point represented by a single entry `1`, and the entry is composed of `2` values. The `-1` denotes that the number of values along this dimension is flexible. </details> <p></p> <details> <summary> Expand for an explanation of the <code>drawContours()</code> configuration.</summary> <p> - Encapsulating the `contour` variable within square brackets, `[contour]`, was found to effectively generate the desired contour mask during testing. - The value `-1` specified for the `drawContours()` parameter instructs the function to draw all contours present in the image. - The `tuple` `(255, 255, 255)` represents the color white, which is the desired color for drawing the contour in this binary mask. - The addition of `cv2.FILLED` will color all pixels enclosed by the contour boundary the same, in this case, all enclosed pixels will be white. - See [OpenCV Documentation on `drawContours()`](https://docs.opencv.org/4.8.0/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc) for more information. </details> <p></p> ***
5. Next there are 2 options for how to move forward with the image from this point and a subsequent option for each. ### Object Isolation Options !!! example === "Black Background Pixels" ```python # Create 3-channel mask mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR) # Isolate object with binary mask isolated = cv2.bitwise_and(mask3ch, img) ``` ??? question "How does this work?" - First, the binary mask is first converted from a single-channel image to a three-channel image. This conversion is necessary for the subsequent step where the mask and the original image are combined. Both images must have the same number of channels to be compatible with the blending operation. - The original image and the three-channel binary mask are merged using the OpenCV function `bitwise_and()`. This operation retains <u>only</u> pixel values that are greater than zero `(> 0)` from both images. Since the mask pixels are greater than zero `(> 0)` <u>only</u> within the contour region, the pixels remaining from the original image are those that overlap with the contour. ### Isolate with Black Pixels: Sub-options ??? info "Full-size Image" There are no additional steps required if keeping full size image. <figure markdown> ![Example Full size Isolated Object Image Black Background](https://github.com/ultralytics/docs/releases/download/0/full-size-isolated-object-black-background.avif){ width=240 } <figcaption>Example full-size output</figcaption> </figure> ??? info "Cropped object Image" Additional steps required to crop image to only include object region. ![Example Crop Isolated Object Image Black Background](https://github.com/ultralytics/docs/releases/download/0/example-crop-isolated-object-image-black-background.avif){ align="right" } ```{ .py .annotate } # (1) Bounding box coordinates x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32) # Crop image to object region iso_crop = isolated[y1:y2, x1:x2] ``` 1. For more information on [bounding box](https://www.ultralytics.com/glossary/bounding-box) results, see [Boxes Section from Predict Mode](../modes/predict.md/#boxes) ??? question "What does this code do?" - The `c.boxes.xyxy.cpu().numpy()` call retrieves the bounding boxes as a NumPy array in the `xyxy` format, where `xmin`, `ymin`, `xmax`, and `ymax` represent the coordinates of the bounding box rectangle. See [Boxes Section from Predict Mode](../modes/predict.md/#boxes) for more details. - The `squeeze()` operation removes any unnecessary dimensions from the NumPy array, ensuring it has the expected shape. - Converting the coordinate values using `.astype(np.int32)` changes the box coordinates data type from `float32` to `int32`, making them compatible for image cropping using index slices. - Finally, the bounding box region is cropped from the image using index slicing. The bounds are defined by the `[ymin:ymax, xmin:xmax]` coordinates of the detection bounding box. === "Transparent Background Pixels" ```python # Isolate object with transparent background (when saved as PNG) isolated = np.dstack([img, b_mask]) ``` ??? question "How does this work?" - Using the NumPy `dstack()` function (array stacking along depth-axis) in conjunction with the binary mask generated, will create an image with four channels. This allows for all pixels outside of the object contour to be transparent when saving as a `PNG` file. ### Isolate with Transparent Pixels: Sub-options ??? info "Full-size Image" There are no additional steps required if keeping full size image. 
<figure markdown> ![Example Full size Isolated Object Image No Background](https://github.com/ultralytics/docs/releases/download/0/example-full-size-isolated-object-image-no-background.avif){ width=240 } <figcaption>Example full-size output + transparent background</figcaption> </figure> ??? info "Cropped object Image" Additional steps required to crop image to only include object region. ![Example Crop Isolated Object Image No Background](https://github.com/ultralytics/docs/releases/download/0/example-crop-isolated-object-image-no-background.avif){ align="right" } ```{ .py .annotate } # (1) Bounding box coordinates x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32) # Crop image to object region iso_crop = isolated[y1:y2, x1:x2] ``` 1. For more information on bounding box results, see [Boxes Section from Predict Mode](../modes/predict.md/#boxes) ??? question "What does this code do?" - When using `c.boxes.xyxy.cpu().numpy()`, the bounding boxes are returned as a NumPy array, using the `xyxy` box coordinates format, which correspond to the points `xmin, ymin, xmax, ymax` for the bounding box (rectangle), see [Boxes Section from Predict Mode](../modes/predict.md/#boxes) for more information. - Adding `squeeze()` ensures that any extraneous dimensions are removed from the NumPy array. - Converting the coordinate values using `.astype(np.int32)` changes the box coordinates data type from `float32` to `int32` which will be compatible when cropping the image using index slices. - Finally the image region for the bounding box is cropped using index slicing, where the bounds are set using the `[ymin:ymax, xmin:xmax]` coordinates of the detection bounding box. ??? question "What if I want the cropped object **including** the background?" This is a built in feature for the Ultralytics library. See the `save_crop` argument for [Predict Mode Inference Arguments](../modes/predict.md/#inference-arguments) for details. *** 6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown. - **NOTE:** this step is optional and can be skipped if not required for your specific use case. ??? example "Example Final Step" ```python # Save isolated object to file _ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop) ``` - In this example, the `img_name` is the base-name of the source image file, `label` is the detected class-name, and `ci` is the index of the [object detection](https://www.ultralytics.com/glossary/object-detection) (in case of multiple instances with the same class name).
## Full Example code Here, all steps from the previous section are combined into a single block of code. For repeated use, it would be optimal to define a function to do some or all commands contained in the `for`-loops, but that is an exercise left to the reader. ```{ .py .annotate } from pathlib import Path import cv2 import numpy as np from ultralytics import YOLO m = YOLO("yolo11n-seg.pt") # (4)! res = m.predict() # (3)! # Iterate detection results (5) for r in res: img = np.copy(r.orig_img) img_name = Path(r.path).stem # Iterate each object contour (6) for ci, c in enumerate(r): label = c.names[c.boxes.cls.tolist().pop()] b_mask = np.zeros(img.shape[:2], np.uint8) # Create contour mask (1) contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2) _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED) # Choose one: # OPTION-1: Isolate object with black background mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR) isolated = cv2.bitwise_and(mask3ch, img) # OPTION-2: Isolate object with transparent background (when saved as PNG) isolated = np.dstack([img, b_mask]) # OPTIONAL: detection crop (from either OPT1 or OPT2) x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32) iso_crop = isolated[y1:y2, x1:x2] # TODO your actions go here (2) ``` 1. The line populating `contour` is combined into a single line here, where it was split to multiple above. 2. {==What goes here is up to you!==} 3. See [Predict Mode](../modes/predict.md) for additional information. 4. See [Segment Task](../tasks/segment.md#models) for more information. 5. Learn more about [Working with Results](../modes/predict.md#working-with-results) 6. Learn more about [Segmentation Mask Results](../modes/predict.md#masks) ## FAQ ### How do I isolate objects using Ultralytics YOLO11 for segmentation tasks? To isolate objects using Ultralytics YOLO11, follow these steps: 1. **Load the model and run inference:** ```python from ultralytics import YOLO model = YOLO("yolo11n-seg.pt") results = model.predict(source="path/to/your/image.jpg") ``` 2. **Generate a binary mask and draw contours:** ```python import cv2 import numpy as np img = np.copy(results[0].orig_img) b_mask = np.zeros(img.shape[:2], np.uint8) contour = results[0].masks.xy[0].astype(np.int32).reshape(-1, 1, 2) cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED) ``` 3. **Isolate the object using the binary mask:** ```python mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR) isolated = cv2.bitwise_and(mask3ch, img) ``` Refer to the guide on [Predict Mode](../modes/predict.md) and the [Segment Task](../tasks/segment.md) for more information. ### What options are available for saving the isolated objects after segmentation? Ultralytics YOLO11 offers two main options for saving isolated objects: 1. **With a Black Background:** ```python mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR) isolated = cv2.bitwise_and(mask3ch, img) ``` 2. **With a Transparent Background:** ```python isolated = np.dstack([img, b_mask]) ``` For further details, visit the [Predict Mode](../modes/predict.md) section. ### How can I crop isolated objects to their bounding boxes using Ultralytics YOLO11? To crop isolated objects to their bounding boxes: 1. **Retrieve bounding box coordinates:** ```python x1, y1, x2, y2 = results[0].boxes.xyxy[0].cpu().numpy().astype(np.int32) ``` 2. 
**Crop the isolated image:** ```python iso_crop = isolated[y1:y2, x1:x2] ``` Learn more about bounding box results in the [Predict Mode](../modes/predict.md#boxes) documentation. ### Why should I use Ultralytics YOLO11 for object isolation in segmentation tasks? Ultralytics YOLO11 provides: - **High-speed** real-time object detection and segmentation. - **Accurate bounding box and mask generation** for precise object isolation. - **Comprehensive documentation** and easy-to-use API for efficient development. Explore the benefits of using YOLO in the [Segment Task documentation](../tasks/segment.md). ### Can I save isolated objects including the background using Ultralytics YOLO11? Yes, this is a built-in feature in Ultralytics YOLO11. Use the `save_crop` argument in the `predict()` method. For example: ```python results = model.predict(source="path/to/your/image.jpg", save_crop=True) ``` Read more about the `save_crop` argument in the [Predict Mode Inference Arguments](../modes/predict.md#inference-arguments) section.
--- comments: true description: Learn how to manage and optimize queues using Ultralytics YOLO11 to reduce wait times and increase efficiency in various real-world applications. keywords: queue management, YOLO11, Ultralytics, reduce wait times, efficiency, customer satisfaction, retail, airports, healthcare, banks --- # Queue Management using Ultralytics YOLO11 🚀 ## What is Queue Management? Queue management using [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) involves organizing and controlling lines of people or vehicles to reduce wait times and enhance efficiency. It's about optimizing queues to improve customer satisfaction and system performance in various settings like retail, banks, airports, and healthcare facilities. <p align="center"> <br> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/gX5kSRD56Gs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe> <br> <strong>Watch:</strong> How to Implement Queue Management with Ultralytics YOLO11 | Airport and Metro Station </p> ## Advantages of Queue Management? - **Reduced Waiting Times:** Queue management systems efficiently organize queues, minimizing wait times for customers. This leads to improved satisfaction levels as customers spend less time waiting and more time engaging with products or services. - **Increased Efficiency:** Implementing queue management allows businesses to allocate resources more effectively. By analyzing queue data and optimizing staff deployment, businesses can streamline operations, reduce costs, and improve overall productivity. ## Real World Applications | Logistics | Retail | | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------: | | ![Queue management at airport ticket counter using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/queue-management-airport-ticket-counter-ultralytics-yolov8.avif) | ![Queue monitoring in crowd using Ultralytics YOLO11](https://github.com/ultralytics/docs/releases/download/0/queue-monitoring-crowd-ultralytics-yolov8.avif) | | Queue management at airport ticket counter Using Ultralytics YOLO11 | Queue monitoring in crowd Ultralytics YOLO11 | !!! 
example "Queue Management using YOLO11 Example" === "Queue Manager" ```python import cv2 from ultralytics import solutions cap = cv2.VideoCapture("Path/to/video/file.mp4") assert cap.isOpened(), "Error reading video file" w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h)) queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)] queue = solutions.QueueManager( model="yolo11n.pt", region=queue_region, ) while cap.isOpened(): success, im0 = cap.read() if success: out = queue.process_queue(im0) video_writer.write(im0) if cv2.waitKey(1) & 0xFF == ord("q"): break continue print("Video frame is empty or video processing has been successfully completed.") break cap.release() cv2.destroyAllWindows() ``` === "Queue Manager Specific Classes" ```python import cv2 from ultralytics import solutions cap = cv2.VideoCapture("Path/to/video/file.mp4") assert cap.isOpened(), "Error reading video file" w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS)) video_writer = cv2.VideoWriter("queue_management.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h)) queue_region = [(20, 400), (1080, 404), (1080, 360), (20, 360)] queue = solutions.QueueManager( model="yolo11n.pt", classes=3, ) while cap.isOpened(): success, im0 = cap.read() if success: out = queue.process_queue(im0) video_writer.write(im0) if cv2.waitKey(1) & 0xFF == ord("q"): break continue print("Video frame is empty or video processing has been successfully completed.") break cap.release() cv2.destroyAllWindows() ``` ### Arguments `QueueManager` | Name | Type | Default | Description | | ------------ | ------ | -------------------------- | ---------------------------------------------------- | | `model` | `str` | `None` | Path to Ultralytics YOLO Model File | | `region` | `list` | `[(20, 400), (1260, 400)]` | List of points defining the queue region. | | `line_width` | `int` | `2` | Line thickness for bounding boxes. | | `show` | `bool` | `False` | Flag to control whether to display the video stream. | ### Arguments `model.track` {% include "macros/track-args.md" %} ##
## Common Issues ### Installation Errors Installation errors can arise due to various reasons, such as incompatible versions, missing dependencies, or incorrect environment setups. First, check to make sure you are doing the following: - You're using Python 3.8 or later as recommended. - Ensure that you have the correct version of [PyTorch](https://www.ultralytics.com/glossary/pytorch) (1.8 or later) installed. - Consider using virtual environments to avoid conflicts. - Follow the [official installation guide](../quickstart.md) step by step. Additionally, here are some common installation issues users have encountered, along with their respective solutions: - Import Errors or Dependency Issues - If you're getting errors during the import of YOLO11, or you're having issues related to dependencies, consider the following troubleshooting steps: - **Fresh Installation**: Sometimes, starting with a fresh installation can resolve unexpected issues. Especially with libraries like Ultralytics, where updates might introduce changes to the file tree structure or functionalities. - **Update Regularly**: Ensure you're using the latest version of the library. Older versions might not be compatible with recent updates, leading to potential conflicts or issues. - **Check Dependencies**: Verify that all required dependencies are correctly installed and are of the compatible versions. - **Review Changes**: If you initially cloned or installed an older version, be aware that significant updates might affect the library's structure or functionalities. Always refer to the official documentation or changelogs to understand any major changes. - Remember, keeping your libraries and dependencies up-to-date is crucial for a smooth and error-free experience. - Running YOLO11 on GPU - If you're having trouble running YOLO11 on GPU, consider the following troubleshooting steps: - **Verify CUDA Compatibility and Installation**: Ensure your GPU is CUDA compatible and that CUDA is correctly installed. Use the `nvidia-smi` command to check the status of your NVIDIA GPU and CUDA version. - **Check PyTorch and CUDA Integration**: Ensure PyTorch can utilize CUDA by running `import torch; print(torch.cuda.is_available())` in a Python terminal. If it returns 'True', PyTorch is set up to use CUDA. - **Environment Activation**: Ensure you're in the correct environment where all necessary packages are installed. - **Update Your Packages**: Outdated packages might not be compatible with your GPU. Keep them updated. - **Program Configuration**: Check if the program or code specifies GPU usage. In YOLO11, this might be in the settings or configuration. ### Model Training Issues This section will address common issues faced while training and their respective explanations and solutions. #### Verification of Configuration Settings **Issue**: You are unsure whether the configuration settings in the `.yaml` file are being applied correctly during model training. **Solution**: The configuration settings in the `.yaml` file should be applied when using the `model.train()` function. To ensure that these settings are correctly applied, follow these steps: - Confirm that the path to your `.yaml` configuration file is correct. 
- Make sure you pass the path to your `.yaml` file as the `data` argument when calling `model.train()`, as shown below:

    ```python
    model.train(data="/path/to/your/data.yaml", batch=4)
    ```

#### Accelerating Training with Multiple GPUs

**Issue**: Training is slow on a single GPU, and you want to speed up the process using multiple GPUs.

**Solution**: Increasing the [batch size](https://www.ultralytics.com/glossary/batch-size) can accelerate training, but it's essential to consider GPU memory capacity. To speed up training with multiple GPUs, follow these steps:

- Ensure that you have multiple GPUs available.
- Pass the IDs of the GPUs you want to use to the `device` argument when calling `model.train()`, e.g., `device=[0, 1, 2, 3]`.
- Increase the batch size accordingly to fully utilize the multiple GPUs without exceeding memory limits.
- Modify your training command to utilize multiple GPUs:

    ```python
    # Adjust the batch size and other settings as needed to optimize training speed
    model.train(data="/path/to/your/data.yaml", batch=32, device=[0, 1, 2, 3])
    ```

#### Continuous Monitoring Parameters

**Issue**: You want to know which parameters should be continuously monitored during training, apart from loss.

**Solution**: While loss is a crucial metric to monitor, it's also essential to track other metrics for model performance optimization. Some key metrics to monitor during training include:

- Precision
- Recall
- [Mean Average Precision](https://www.ultralytics.com/glossary/mean-average-precision-map) (mAP)

You can access these metrics from the training logs or by using tools like TensorBoard or wandb for visualization. Implementing early stopping based on these metrics can help you achieve better results.

#### Tools for Tracking Training Progress

**Issue**: You are looking for recommendations on tools to track training progress.

**Solution**: To track and visualize training progress, you can consider using the following tools:

- [TensorBoard](https://www.tensorflow.org/tensorboard): TensorBoard is a popular choice for visualizing training metrics, including loss, [accuracy](https://www.ultralytics.com/glossary/accuracy), and more. You can integrate it with your YOLO11 training process.
- [Comet](https://bit.ly/yolov8-readme-comet): Comet provides an extensive toolkit for experiment tracking and comparison. It allows you to track metrics, hyperparameters, and even model weights. Integration with YOLO models is also straightforward, providing you with a complete overview of your experiment cycle.
- [Ultralytics HUB](https://hub.ultralytics.com/): Ultralytics HUB offers a specialized environment for tracking YOLO models, giving you a one-stop platform to manage metrics, datasets, and even collaborate with your team. Given its tailored focus on YOLO, it offers more customized tracking options.

Each of these tools offers its own set of advantages, so you may want to consider the specific needs of your project when making a choice.

#### How to Check if Training is Happening on the GPU

**Issue**: The 'device' value in the training logs is 'null,' and you're unsure if training is happening on the GPU.

**Solution**: The 'device' value being 'null' typically means that the training process is set to automatically use an available GPU, which is the default behavior. To ensure training occurs on a specific GPU, you can manually set the 'device' value to the GPU index (e.g., '0' for the first GPU) in your .yaml configuration file:

```yaml
device: 0
```

This will explicitly assign the training process to the specified GPU.
If you wish to train on the CPU, set 'device' to 'cpu'. Keep an eye on the 'runs' folder for logs and metrics to monitor training progress effectively. #### Key Considerations for Effective Model Training Here are some things to keep in mind, if you are facing issues related to model training. **Dataset Format and Labels** - Importance: The foundation of any [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) model lies in the quality and format of the data it is trained on. - Recommendation: Ensure that your custom dataset and its associated labels adhere to the expected format. It's crucial to verify that annotations are accurate and of high quality. Incorrect or subpar annotations can derail the model's learning process, leading to unpredictable outcomes. **Model Convergence** - Importance: Achieving model convergence ensures that the model has sufficiently learned from the [training data](https://www.ultralytics.com/glossary/training-data). - Recommendation: When training a model 'from scratch', it's vital to ensure that the model reaches a satisfactory level of convergence. This might necessitate a longer training duration, with more [epochs](https://www.ultralytics.com/glossary/epoch), compared to when you're fine-tuning an existing model. **[Learning Rate](https://www.ultralytics.com/glossary/learning-rate) and Batch Size** - Importance: These hyperparameters play a pivotal role in determining how the model updates its weights during training. - Recommendation: Regularly evaluate if the chosen learning rate and batch size are optimal for your specific dataset. Parameters that are not in harmony with the dataset's characteristics can hinder the model's performance. **Class Distribution** - Importance: The distribution of classes in your dataset can influence the model's prediction tendencies. - Recommendation: Regularly assess the distribution of classes within your dataset. If there's a class imbalance, there's a risk that the model will develop a bias towards the more prevalent class. This bias can be evident in the confusion matrix, where the model might predominantly predict the majority class. **Cross-Check with Pretrained Weights** - Importance: Leveraging pretrained weights can provide a solid starting point for model training, especially when data is limited. - Recommendation: As a diagnostic step, consider training your model using the same data but initializing it with pretrained weights. If this approach yields a well-formed confusion matrix, it could suggest that the 'from scratch' model might require further training or adjustments.
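As a concrete version of the pretrained-weights cross-check described above, you can train the same architecture twice on identical data and compare the resulting metrics and confusion matrices (a minimal sketch; the dataset path and epoch count are placeholders):

```python
from ultralytics import YOLO

# Fine-tune starting from pretrained weights (diagnostic baseline)
pretrained = YOLO("yolo11n.pt")
pretrained.train(data="path/to/data.yaml", epochs=50)

# Train the same architecture from scratch on the same data for comparison
from_scratch = YOLO("yolo11n.yaml")
from_scratch.train(data="path/to/data.yaml", epochs=50)
```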
### Issues Related to Model Predictions

This section will address common issues faced during model prediction.

#### Getting Bounding Box Predictions With Your YOLO11 Custom Model

**Issue**: When running predictions with a custom YOLO11 model, there are challenges with the format and visualization of the bounding box coordinates.

**Solution**:

- Coordinate Format: YOLO11 provides bounding box coordinates in absolute pixel values. To convert these to relative coordinates (ranging from 0 to 1), you need to divide by the image dimensions. For example, let's say your image size is 640x640. Then you would do the following:

    ```python
    # Convert absolute coordinates to relative coordinates
    x1 = x1 / 640  # Divide x-coordinates by image width
    x2 = x2 / 640
    y1 = y1 / 640  # Divide y-coordinates by image height
    y2 = y2 / 640
    ```

- File Name: To obtain the file name of the image you're predicting on, access the image file path directly from the result object within your prediction loop.

#### Filtering Objects in YOLO11 Predictions

**Issue**: Facing issues with how to filter and display only specific objects in the prediction results when running YOLO11 using the Ultralytics library.

**Solution**: To detect specific classes, use the `classes` argument to specify which classes you want to include in the output. For instance, to detect only cars (assuming 'car' has class index 2):

```shell
yolo predict model=yolo11n-seg.pt source='path/to/car.mp4' show=True classes=2
```

#### Understanding Precision Metrics in YOLO11

**Issue**: Confusion regarding the difference between box precision, mask precision, and [confusion matrix](https://www.ultralytics.com/glossary/confusion-matrix) precision in YOLO11.

**Solution**: Box precision measures the accuracy of predicted bounding boxes compared to the actual ground truth boxes using IoU (Intersection over Union) as the metric. Mask precision assesses the agreement between predicted segmentation masks and ground truth masks in pixel-wise object classification. Confusion matrix precision, on the other hand, focuses on overall classification accuracy across all classes and does not consider the geometric accuracy of predictions. It's important to note that a [bounding box](https://www.ultralytics.com/glossary/bounding-box) can be geometrically accurate (true positive) even if the class prediction is wrong, leading to differences between box precision and confusion matrix precision. These metrics evaluate distinct aspects of a model's performance, reflecting the need for different evaluation metrics in various tasks.

#### Extracting Object Dimensions in YOLO11

**Issue**: Difficulty in retrieving the length and height of detected objects in YOLO11, especially when multiple objects are detected in an image.

**Solution**: To retrieve the bounding box dimensions, first use the Ultralytics YOLO11 model to predict objects in an image. Then, extract the width and height information of bounding boxes from the prediction results.
```python from ultralytics import YOLO # Load a pre-trained YOLO11 model model = YOLO("yolo11n.pt") # Specify the source image source = "https://ultralytics.com/images/bus.jpg" # Make predictions results = model.predict(source, save=True, imgsz=320, conf=0.5) # Extract bounding box dimensions boxes = results[0].boxes.xywh.cpu() for box in boxes: x, y, w, h = box print(f"Width of Box: {w}, Height of Box: {h}") ``` ### Deployment Challenges #### GPU Deployment Issues **Issue:** Deploying models in a multi-GPU environment can sometimes lead to unexpected behaviors like unexpected memory usage, inconsistent results across GPUs, etc. **Solution:** Check for default GPU initialization. Some frameworks, like PyTorch, might initialize CUDA operations on a default GPU before transitioning to the designated GPUs. To bypass unexpected default initializations, specify the GPU directly during deployment and prediction. Then, use tools to monitor GPU utilization and memory usage to identify any anomalies in real-time. Also, ensure you're using the latest version of the framework or library. #### Model Conversion/Exporting Issues **Issue:** During the process of converting or exporting machine learning models to different formats or platforms, users might encounter errors or unexpected behaviors. **Solution:** - Compatibility Check: Ensure that you are using versions of libraries and frameworks that are compatible with each other. Mismatched versions can lead to unexpected errors during conversion. - Environment Reset: If you're using an interactive environment like Jupyter or Colab, consider restarting your environment after making significant changes or installations. A fresh start can sometimes resolve underlying issues. - Official Documentation: Always refer to the official documentation of the tool or library you are using for conversion. It often contains specific guidelines and best practices for model exporting. - Community Support: Check the library or framework's official repository for similar issues reported by other users. The maintainers or community might have provided solutions or workarounds in discussion threads. - Update Regularly: Ensure that you are using the latest version of the tool or library. Developers frequently release updates that fix known bugs or improve functionality. - Test Incrementally: Before performing a full conversion, test the process with a smaller model or dataset to identify potential issues early on. ## Community and Support Engaging with a community of like-minded individuals can significantly enhance your experience and success in working with YOLO11. Below are some channels and resources you may find helpful. ### Forums and Channels for Getting Help **GitHub Issues:** The YOLO11 repository on GitHub has an [Issues tab](https://github.com/ultralytics/ultralytics/issues) where you can ask questions, report bugs, and suggest new features. The community and maintainers are active here, and it's a great place to get help with specific problems. **Ultralytics Discord Server:** Ultralytics has a [Discord server](https://discord.com/invite/ultralytics) where you can interact with other users and the developers. ### Official Documentation and Resources **Ultralytics YOLO11 Docs**: The [official documentation](../index.md) provides a comprehensive overview of YOLO11, along with guides on installation, usage, and troubleshooting. 
These resources should provide a solid foundation for troubleshooting and improving your YOLO11 projects, as well as connecting with others in the YOLO11 community. ## Conclusion Troubleshooting is an integral part of any development process, and being equipped with the right knowledge can significantly reduce the time and effort spent in resolving issues. This guide aimed to address the most common challenges faced by users of the YOLO11 model within the Ultralytics ecosystem. By understanding and addressing these common issues, you can ensure smoother project progress and achieve better results with your [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) tasks. Remember, the Ultralytics community is a valuable resource. Engaging with fellow developers and experts can provide additional insights and solutions that might not be covered in standard documentation. Always keep learning, experimenting, and sharing your experiences to contribute to the collective knowledge of the community. Happy troubleshooting!
--- comments: true description: Explore effective methods for testing computer vision models to make sure they are reliable, perform well, and are ready to be deployed. keywords: Overfitting and Underfitting in Machine Learning, Model Testing, Data Leakage Machine Learning, Testing a Model, Testing Machine Learning Models, How to Test AI Models --- # A Guide on Model Testing ## Introduction After [training](./model-training-tips.md) and [evaluating](./model-evaluation-insights.md) your model, it's time to test it. Model testing involves assessing how well it performs in real-world scenarios. Testing considers factors like accuracy, reliability, fairness, and how easy it is to understand the model's decisions. The goal is to make sure the model performs as intended, delivers the expected results, and fits into the [overall objective of your application](./defining-project-goals.md) or project. Model testing is quite similar to model evaluation, but they are two distinct [steps in a computer vision project](./steps-of-a-cv-project.md). Model evaluation involves metrics and plots to assess the model's accuracy. On the other hand, model testing checks if the model's learned behavior is the same as expectations. In this guide, we'll explore strategies for testing your [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models. ## Model Testing Vs. Model Evaluation First, let's understand the difference between model evaluation and testing with an example. Suppose you have trained a computer vision model to recognize cats and dogs, and you want to deploy this model at a pet store to monitor the animals. During the model evaluation phase, you use a labeled dataset to calculate metrics like accuracy, [precision](https://www.ultralytics.com/glossary/precision), [recall](https://www.ultralytics.com/glossary/recall), and F1 score. For instance, the model might have an accuracy of 98% in distinguishing between cats and dogs in a given dataset. After evaluation, you test the model using images from a pet store to see how well it identifies cats and dogs in more varied and realistic conditions. You check if it can correctly label cats and dogs when they are moving, in different lighting conditions, or partially obscured by objects like toys or furniture. Model testing checks that the model behaves as expected outside the controlled evaluation environment. ## Preparing for Model Testing Computer vision models learn from datasets by detecting patterns, making predictions, and evaluating their performance. These [datasets](./preprocessing_annotated_data.md) are usually divided into training and testing sets to simulate real-world conditions. [Training data](https://www.ultralytics.com/glossary/training-data) teaches the model while testing data verifies its accuracy. Here are two points to keep in mind before testing your model: - **Realistic Representation:** The previously unseen testing data should be similar to the data that the model will have to handle when deployed. This helps get a realistic understanding of the model's capabilities. - **Sufficient Size:** The size of the testing dataset needs to be large enough to provide reliable insights into how well the model performs. ## Testing Your Computer Vision Model Here are the key steps to take to test your computer vision model and understand its performance. - **Run Predictions:** Use the model to make predictions on the test dataset. 
- **Compare Predictions:** Check how well the model's predictions match the actual labels (ground truth). - **Calculate Performance Metrics:** [Compute metrics](./yolo-performance-metrics.md) like accuracy, precision, recall, and F1 score to understand the model's strengths and weaknesses. Testing focuses on how these metrics reflect real-world performance. - **Visualize Results:** Create visual aids like confusion matrices and ROC curves. These help you spot specific areas where the model might not be performing well in practical applications. Next, the testing results can be analyzed: - **Misclassified Images:** Identify and review images that the model misclassified to understand where it is going wrong. - **Error Analysis:** Perform a thorough error analysis to understand the types of errors (e.g., false positives vs. false negatives) and their potential causes. - **Bias and Fairness:** Check for any biases in the model's predictions. Ensure that the model performs equally well across different subsets of the data, especially if it includes sensitive attributes like race, gender, or age. ## Testing Your YOLO11 Model To test your YOLO11 model, you can use the validation mode. It's a straightforward way to understand the model's strengths and areas that need improvement. Also, you'll need to format your test dataset correctly for YOLO11. For more details on how to use the validation mode, check out the [Model Validation](../modes/val.md) docs page. ## Using YOLO11 to Predict on Multiple Test Images If you want to test your trained YOLO11 model on multiple images stored in a folder, you can easily do so in one go. Instead of using the validation mode, which is typically used to evaluate model performance on a validation set and provide detailed metrics, you might just want to see predictions on all images in your test set. For this, you can use the [prediction mode](../modes/predict.md). ### Difference Between Validation and Prediction Modes - **[Validation Mode](../modes/val.md):** Used to evaluate the model's performance by comparing predictions against known labels (ground truth). It provides detailed metrics such as accuracy, precision, recall, and F1 score. - **[Prediction Mode](../modes/predict.md):** Used to run the model on new, unseen data to generate predictions. It does not provide detailed performance metrics but allows you to see how the model performs on real-world images. ## Running YOLO11 Predictions Without Custom Training If you are interested in testing the basic YOLO11 model to understand whether it can be used for your application without custom training, you can use the prediction mode. While the model is pre-trained on datasets like COCO, running predictions on your own dataset can give you a quick sense of how well it might perform in your specific context. ## Overfitting and [Underfitting](https://www.ultralytics.com/glossary/underfitting) in [Machine Learning](https://www.ultralytics.com/glossary/machine-learning-ml) When testing a machine learning model, especially in computer vision, it's important to watch out for overfitting and underfitting. These issues can significantly affect how well your model works with new data. ### Overfitting Overfitting happens when your model learns the training data too well, including the noise and details that don't generalize to new data. In computer vision, this means your model might do great with training images but struggle with new ones. 
#### Signs of Overfitting - **High Training Accuracy, Low Validation Accuracy:** If your model performs very well on training data but poorly on validation or [test data](https://www.ultralytics.com/glossary/test-data), it's likely overfitting. - **Visual Inspection:** Sometimes, you can see overfitting if your model is too sensitive to minor changes or irrelevant details in images. ### Underfitting Underfitting occurs when your model can't capture the underlying patterns in the data. In computer vision, an underfitted model might not even recognize objects correctly in the training images. #### Signs of Underfitting - **Low Training Accuracy:** If your model can't achieve high accuracy on the training set, it might be underfitting. - **Visual Misclassification:** Consistent failure to recognize obvious features or objects suggests underfitting. ### Balancing Overfitting and Underfitting The key is to find a balance between overfitting and underfitting. Ideally, a model should perform well on both training and validation datasets. Regularly monitoring your model's performance through metrics and visual inspections, along with applying the right strategies, can help you achieve the best results. <p align="center"> <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/overfitting-underfitting-appropriate-fitting.avif" alt="Overfitting and Underfitting Overview"> </p>
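To put the validation and prediction workflows from the earlier sections into practice, here is a minimal sketch. The file paths are placeholders, and `split="test"` assumes your dataset YAML defines a test split; adapt both to your project.

```python
from ultralytics import YOLO

# Load a trained model (placeholder path; use your own weights)
model = YOLO("path/to/best.pt")

# Validation mode: computes metrics (precision, recall, mAP) against the
# labels defined in your dataset YAML
metrics = model.val(data="path/to/data.yaml", split="test")
print(metrics.box.map)  # mAP50-95 for a detection model

# Prediction mode: runs the model on a folder of test images and saves the
# annotated results, without computing metrics
results = model.predict(source="path/to/test/images", save=True, conf=0.25)
```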
271169
--- comments: true description: Explore Ultralytics HUB for easy training, analysis, preview, deployment and sharing of custom vision AI models using YOLOv8. Start training today!. keywords: Ultralytics HUB, YOLOv8, custom AI models, model training, model deployment, model analysis, vision AI --- # Ultralytics HUB Models [Ultralytics HUB](https://www.ultralytics.com/hub) models provide a streamlined solution for training vision AI models on custom datasets. The process is user-friendly and efficient, involving a simple three-step creation and accelerated training powered by Ultralytics YOLOv8. During training, real-time updates on model metrics are available so that you can monitor each step of the progress. Once training is completed, you can preview your model and easily deploy it to real-world applications. Therefore, [Ultralytics HUB](https://www.ultralytics.com/hub) offers a comprehensive yet straightforward system for model creation, training, evaluation, and deployment. <p align="center"> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/YVlkq5H2tAQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe> <br> <strong>Watch:</strong> Ultralytics HUB Training and Validation Overview </p> ## Train Model Navigate to the [Models](https://hub.ultralytics.com/models) page by clicking on the **Models** button in the sidebar and click on the **Train Model** button on the top right of the page. ![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Models button in the sidebar and one to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-page.avif) ??? tip You can train a model directly from the [Home](https://hub.ultralytics.com/home) page. ![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Train Model card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-card.avif) This action will trigger the **Train Model** dialog which has three simple steps: ### 1. Dataset In this step, you have to select the dataset you want to train your model on. After you selected a dataset, click **Continue**. ![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to a dataset and one to the Continue button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-dialog-dataset-step.avif) ??? tip You can skip this step if you train a model directly from the Dataset page. ![Ultralytics HUB screenshot of the Dataset page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-dataset-page-train-model-button.avif) ### 2. Model In this step, you have to choose the project in which you want to create your model, the name of your model and your model's architecture. ![Ultralytics HUB screenshot of the Train Model dialog with arrows pointing to the project dropdown, model name and Continue button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-dialog.avif) ??? note Ultralytics HUB will try to pre-select the project. If you opened the **Train Model** dialog as described above, [Ultralytics HUB](https://www.ultralytics.com/hub) will pre-select the last project you used. 
If you opened the **Train Model** dialog from the Project page, [Ultralytics HUB](https://www.ultralytics.com/hub) will pre-select the project you were in.

![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-button.avif)

In case you don't have a project created yet, you can set the name of your project in this step and it will be created together with your model.

!!! info

    You can read more about the available [YOLOv8](https://docs.ultralytics.com/models/yolov8/) (and [YOLOv5](https://docs.ultralytics.com/models/yolov5/)) architectures in our documentation.

By default, your model will use a pre-trained model (trained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset) to reduce training time. You can change this behavior and tweak your model's configuration by opening the **Advanced Model Configuration** accordion.

![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Advanced Model Configuration accordion](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-2.avif)

!!! note

    You can easily change the most common model configuration options (such as the number of epochs), but you can also use the **Custom** option to access all [Train Settings](https://docs.ultralytics.com/modes/train/#train-settings) relevant to [Ultralytics HUB](https://www.ultralytics.com/hub).

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Unt4Lfid7aY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> How to Configure Ultralytics YOLOv8 Training Parameters in Ultralytics HUB
</p>

Alternatively, you can start training from one of your previously trained models by clicking on the **Custom** tab.

![Ultralytics HUB screenshot of the Train Model dialog with an arrow pointing to the Custom tab](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-dialog-3.avif)

When you're happy with your model configuration, click **Continue**.
271174
### Segmentation !!! example "Segmentation Model" === "`ultralytics`" ```python from ultralytics import YOLO # Load model model = YOLO("yolov8n-seg.pt") # Run inference results = model("image.jpg") # Print image.jpg results in JSON format print(results[0].tojson()) ``` === "cURL" ```bash curl -X POST "https://predict.ultralytics.com" \ -H "x-api-key: API_KEY" \ -F "model=https://hub.ultralytics.com/models/MODEL_ID" \ -F "file=@/path/to/image.jpg" \ -F "imgsz=640" \ -F "conf=0.25" \ -F "iou=0.45" ``` === "Python" ```python import requests # API URL url = "https://predict.ultralytics.com" # Headers, use actual API_KEY headers = {"x-api-key": "API_KEY"} # Inference arguments (use actual MODEL_ID) data = {"model": "https://hub.ultralytics.com/models/MODEL_ID", "imgsz": 640, "conf": 0.25, "iou": 0.45} # Load image and send request with open("path/to/image.jpg", "rb") as image_file: files = {"file": image_file} response = requests.post(url, headers=headers, files=files, data=data) print(response.json()) ``` === "Response" ```json { "images": [ { "results": [ { "class": 0, "name": "person", "confidence": 0.92, "box": { "x1": 118, "x2": 416, "y1": 112, "y2": 660 }, "segments": { "x": [ 266.015625, 266.015625, 258.984375, ... ], "y": [ 110.15625, 113.67188262939453, 120.70311737060547, ... ] } } ], "shape": [ 750, 600 ], "speed": { "inference": 200.8, "postprocess": 0.8, "preprocess": 2.8 } } ], "metadata": ... } ``` ### Pose !!! example "Pose Model" === "`ultralytics`" ```python from ultralytics import YOLO # Load model model = YOLO("yolov8n-pose.pt") # Run inference results = model("image.jpg") # Print image.jpg results in JSON format print(results[0].tojson()) ``` === "cURL" ```bash curl -X POST "https://predict.ultralytics.com" \ -H "x-api-key: API_KEY" \ -F "model=https://hub.ultralytics.com/models/MODEL_ID" \ -F "file=@/path/to/image.jpg" \ -F "imgsz=640" \ -F "conf=0.25" \ -F "iou=0.45" ``` === "Python" ```python import requests # API URL url = "https://predict.ultralytics.com" # Headers, use actual API_KEY headers = {"x-api-key": "API_KEY"} # Inference arguments (use actual MODEL_ID) data = {"model": "https://hub.ultralytics.com/models/MODEL_ID", "imgsz": 640, "conf": 0.25, "iou": 0.45} # Load image and send request with open("path/to/image.jpg", "rb") as image_file: files = {"file": image_file} response = requests.post(url, headers=headers, files=files, data=data) print(response.json()) ``` === "Response" ```json { "images": [ { "results": [ { "class": 0, "name": "person", "confidence": 0.92, "box": { "x1": 118, "x2": 416, "y1": 112, "y2": 660 }, "keypoints": { "visible": [ 0.9909399747848511, 0.8162999749183655, 0.9872099757194519, ... ], "x": [ 316.3871765136719, 315.9374694824219, 304.878173828125, ... ], "y": [ 156.4207763671875, 148.05775451660156, 144.93240356445312, ... ] } } ], "shape": [ 750, 600 ], "speed": { "inference": 200.8, "postprocess": 0.8, "preprocess": 2.8 } } ], "metadata": ... } ```
271289
--- description: Explore Ultralytics image augmentation techniques like MixUp, Mosaic, and Random Perspective for enhancing model training. Improve your deep learning models now. keywords: Ultralytics, image augmentation, MixUp, Mosaic, Random Perspective, deep learning, model training, YOLO --- # Reference for `ultralytics/data/augment.py` !!! note This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/augment.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/augment.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/data/augment.py) 🛠️. Thank you 🙏! <br> ## ::: ultralytics.data.augment.BaseTransform <br><br><hr><br> ## ::: ultralytics.data.augment.Compose <br><br><hr><br> ## ::: ultralytics.data.augment.BaseMixTransform <br><br><hr><br> ## ::: ultralytics.data.augment.Mosaic <br><br><hr><br> ## ::: ultralytics.data.augment.MixUp <br><br><hr><br> ## ::: ultralytics.data.augment.RandomPerspective <br><br><hr><br> ## ::: ultralytics.data.augment.RandomHSV <br><br><hr><br> ## ::: ultralytics.data.augment.RandomFlip <br><br><hr><br> ## ::: ultralytics.data.augment.LetterBox <br><br><hr><br> ## ::: ultralytics.data.augment.CopyPaste <br><br><hr><br> ## ::: ultralytics.data.augment.Albumentations <br><br><hr><br> ## ::: ultralytics.data.augment.Format <br><br><hr><br> ## ::: ultralytics.data.augment.RandomLoadText <br><br><hr><br> ## ::: ultralytics.data.augment.ClassifyLetterBox <br><br><hr><br> ## ::: ultralytics.data.augment.CenterCrop <br><br><hr><br> ## ::: ultralytics.data.augment.ToTensor <br><br><hr><br> ## ::: ultralytics.data.augment.v8_transforms <br><br><hr><br> ## ::: ultralytics.data.augment.classify_transforms <br><br><hr><br> ## ::: ultralytics.data.augment.classify_augmentations <br><br>
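In practice, these transforms are usually not instantiated directly; the pipeline built by `v8_transforms` is configured through training-time augmentation hyperparameters. The sketch below is a minimal, illustrative example (the values shown are placeholders, not recommendations):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# The augmentation pipeline (Mosaic, MixUp, RandomPerspective, RandomHSV,
# RandomFlip, ...) is controlled via training hyperparameters
model.train(
    data="coco8.yaml",
    epochs=10,
    mosaic=1.0,  # probability of Mosaic augmentation
    mixup=0.1,  # probability of MixUp augmentation
    degrees=10.0,  # rotation range used by RandomPerspective
    fliplr=0.5,  # probability of horizontal RandomFlip
    hsv_h=0.015,  # hue gain used by RandomHSV
)
```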
271307
# Ultralytics YOLO Frequently Asked Questions (FAQ)

This FAQ section addresses common questions and issues users might encounter while working with [Ultralytics](https://www.ultralytics.com/) YOLO repositories.

## FAQ

### What is Ultralytics and what does it offer?

Ultralytics is a [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) AI company specializing in state-of-the-art object detection and [image segmentation](https://www.ultralytics.com/glossary/image-segmentation) models, with a focus on the YOLO (You Only Look Once) family. Their offerings include:

- Open-source implementations of [YOLOv8](https://docs.ultralytics.com/models/yolov8/) and [YOLO11](https://docs.ultralytics.com/models/yolo11/)
- A wide range of [pre-trained models](https://docs.ultralytics.com/models/) for various computer vision tasks
- A comprehensive [Python package](https://docs.ultralytics.com/usage/python/) for seamless integration of YOLO models into projects
- Versatile [tools](https://docs.ultralytics.com/modes/) for training, testing, and deploying models
- [Extensive documentation](https://docs.ultralytics.com/) and a supportive community

### How do I install the Ultralytics package?

Installing the Ultralytics package is straightforward using pip:

```
pip install ultralytics
```

For the latest development version, install directly from the GitHub repository:

```
pip install git+https://github.com/ultralytics/ultralytics.git
```

Detailed installation instructions can be found in the [quickstart guide](https://docs.ultralytics.com/quickstart/).

### What are the system requirements for running Ultralytics models?

Minimum requirements:

- Python 3.7+
- [PyTorch](https://www.ultralytics.com/glossary/pytorch) 1.7+
- CUDA-compatible GPU (for GPU acceleration)

Recommended setup:

- Python 3.8+
- PyTorch 1.10+
- NVIDIA GPU with CUDA 11.2+
- 8GB+ RAM
- 50GB+ free disk space (for dataset storage and model training)

For troubleshooting common issues, visit the [YOLO Common Issues](https://docs.ultralytics.com/guides/yolo-common-issues/) page.

### How can I train a custom YOLO11 model on my own dataset?

To train a custom YOLO11 model:

1. Prepare your dataset in YOLO format (images and corresponding label txt files).
2. Create a YAML file describing your dataset structure and classes.
3. Use the following Python code to start training:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.yaml")  # build a new model from scratch
model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="path/to/your/data.yaml", epochs=100, imgsz=640)
```

For a more in-depth guide, including data preparation and advanced training options, refer to the comprehensive [training guide](https://docs.ultralytics.com/modes/train/).

### What pretrained models are available in Ultralytics?

Ultralytics offers a diverse range of pretrained YOLO11 models for various tasks:

- Object Detection: YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x
- [Instance Segmentation](https://www.ultralytics.com/glossary/instance-segmentation): YOLO11n-seg, YOLO11s-seg, YOLO11m-seg, YOLO11l-seg, YOLO11x-seg
- Classification: YOLO11n-cls, YOLO11s-cls, YOLO11m-cls, YOLO11l-cls, YOLO11x-cls

These models vary in size and complexity, offering different trade-offs between speed and [accuracy](https://www.ultralytics.com/glossary/accuracy). Explore the full range of [pretrained models](https://docs.ultralytics.com/models/yolov8/) to find the best fit for your project.
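Any of these checkpoints can be loaded with the same one-line pattern; the task is inferred from the checkpoint name. The snippet below is a minimal sketch, and the image path is a placeholder:

```python
from ultralytics import YOLO

detector = YOLO("yolo11n.pt")  # object detection
segmenter = YOLO("yolo11n-seg.pt")  # instance segmentation
classifier = YOLO("yolo11n-cls.pt")  # classification

results = segmenter("path/to/image.jpg")  # placeholder image path
```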
### How do I perform inference using a trained Ultralytics model? To perform inference with a trained model: ```python from ultralytics import YOLO # Load a model model = YOLO("path/to/your/model.pt") # Perform inference results = model("path/to/image.jpg") # Process results for r in results: print(r.boxes) # print bbox predictions print(r.masks) # print mask predictions print(r.probs) # print class probabilities ``` For advanced inference options, including batch processing and video inference, check out the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/). ### Can Ultralytics models be deployed on edge devices or in production environments? Absolutely! Ultralytics models are designed for versatile deployment across various platforms: - Edge devices: Optimize inference on devices like NVIDIA Jetson or Intel Neural Compute Stick using TensorRT, ONNX, or OpenVINO. - Mobile: Deploy on Android or iOS devices by converting models to TFLite or Core ML. - Cloud: Leverage frameworks like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) Serving or PyTorch Serve for scalable cloud deployments. - Web: Implement in-browser inference using ONNX.js or TensorFlow.js. Ultralytics provides export functions to convert models to various formats for deployment. Explore the wide range of [deployment options](https://docs.ultralytics.com/guides/model-deployment-options/) to find the best solution for your use case. ### What's the difference between YOLOv8 and YOLO11? Key distinctions include: - Architecture: YOLO11 features an improved backbone and head design for enhanced performance. - Performance: YOLO11 generally offers superior accuracy and speed compared to YOLOv8. - Tasks: YOLO11 natively supports [object detection](https://www.ultralytics.com/glossary/object-detection), instance segmentation, and classification in a unified framework. - Codebase: YOLO11 is implemented with a more modular and extensible architecture, facilitating easier customization and extension. - Training: YOLO11 incorporates advanced training techniques like multi-dataset training and hyperparameter evolution for improved results. For an in-depth comparison of features and performance metrics, visit the [YOLO](https://www.ultralytics.com/yolo) comparison page. ### How can I contribute to the Ultralytics open-source project? Contributing to Ultralytics is a great way to improve the project and expand your skills. Here's how you can get involved: 1. Fork the Ultralytics repository on GitHub. 2. Create a new branch for your feature or bug fix. 3. Make your changes and ensure all tests pass. 4. Submit a pull request with a clear description of your changes. 5. Participate in the code review process. You can also contribute by reporting bugs, suggesting features, or improving documentation. For detailed guidelines and best practices, refer to the [contributing guide](https://docs.ultralytics.com/help/contributing/). ### How do I install the Ultralytics package in Python? Installing the Ultralytics package in Python is simple. Use pip by running the following command in your terminal or command prompt: ```bash pip install ultralytics ``` For the cutting-edge development version, install directly from the GitHub repository: ```bash pip install git+https://github.com/ultralytics/ultralytics.git ``` For environment-specific installation instructions and troubleshooting tips, consult the comprehensive [quickstart guide](https://docs.ultralytics.com/quickstart/). 
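After installation, you can quickly confirm that the package and your environment are working. This is a minimal sketch using a standard sample image URL:

```python
import ultralytics

# Print a summary of the environment (version, Python, torch, CUDA, memory)
ultralytics.checks()

# Confirm that a model loads and runs end-to-end
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # downloads the checkpoint on first use
model("https://ultralytics.com/images/bus.jpg")  # sample image
```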
### What are the main features of Ultralytics YOLO? Ultralytics YOLO boasts a rich set of features for advanced object detection and image segmentation: - Real-Time Detection: Efficiently detect and classify objects in real-time scenarios. - Pre-Trained Models: Access a variety of [pretrained models](https://docs.ultralytics.com/models/yolov8/) that balance speed and accuracy for different use cases. - Custom Training: Easily fine-tune models on custom datasets with the flexible [training pipeline](https://docs.ultralytics.com/modes/train/). - Wide [Deployment Options](https://docs.ultralytics.com/guides/model-deployment-options/): Export models to various formats like TensorRT, ONNX, and CoreML for deployment across different platforms. - Extensive Documentation: Benefit from comprehensive [documentation](https://docs.ultralytics.com/) and a supportive community to guide you through your computer vision journey. Explore the [YOLO models page](https://docs.ultralytics.com/models/yolov8/) for an in-depth look at the capabilities and architectures of different YOLO versions.
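As one concrete illustration of the export-based deployment options listed above, a trained model can be exported and reloaded with a single call each. This is a minimal sketch; ONNX is just one of the supported targets, and the image path is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export to ONNX; other format values include "engine" (TensorRT),
# "coreml", "tflite" and "openvino"
onnx_path = model.export(format="onnx", imgsz=640)

# The exported file can be loaded back for inference with the same API
onnx_model = YOLO(onnx_path)
results = onnx_model("path/to/image.jpg")  # placeholder image path
```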
271308
### How can I improve the performance of my YOLO model? Enhancing your YOLO model's performance can be achieved through several techniques: 1. [Hyperparameter Tuning](https://www.ultralytics.com/glossary/hyperparameter-tuning): Experiment with different hyperparameters using the [Hyperparameter Tuning Guide](https://docs.ultralytics.com/guides/hyperparameter-tuning/) to optimize model performance. 2. [Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation): Implement techniques like flip, scale, rotate, and color adjustments to enhance your training dataset and improve model generalization. 3. [Transfer Learning](https://www.ultralytics.com/glossary/transfer-learning): Leverage pre-trained models and fine-tune them on your specific dataset using the [Train YOLO11](https://docs.ultralytics.com/modes/train/) guide. 4. Export to Efficient Formats: Convert your model to optimized formats like TensorRT or ONNX for faster inference using the [Export guide](../modes/export.md). 5. Benchmarking: Utilize the [Benchmark Mode](https://docs.ultralytics.com/modes/benchmark/) to measure and improve inference speed and accuracy systematically. ### Can I deploy Ultralytics YOLO models on mobile and edge devices? Yes, Ultralytics YOLO models are designed for versatile deployment, including mobile and edge devices: - Mobile: Convert models to TFLite or CoreML for seamless integration into Android or iOS apps. Refer to the [TFLite Integration Guide](https://docs.ultralytics.com/integrations/tflite/) and [CoreML Integration Guide](https://docs.ultralytics.com/integrations/coreml/) for platform-specific instructions. - Edge Devices: Optimize inference on devices like NVIDIA Jetson or other edge hardware using TensorRT or ONNX. The [Edge TPU Integration Guide](https://docs.ultralytics.com/integrations/edge-tpu/) provides detailed steps for edge deployment. For a comprehensive overview of deployment strategies across various platforms, consult the [deployment options guide](https://docs.ultralytics.com/guides/model-deployment-options/). ### How can I perform inference using a trained Ultralytics YOLO model? Performing inference with a trained Ultralytics YOLO model is straightforward: 1. Load the Model: ```python from ultralytics import YOLO model = YOLO("path/to/your/model.pt") ``` 2. Run Inference: ```python results = model("path/to/image.jpg") for r in results: print(r.boxes) # print bounding box predictions print(r.masks) # print mask predictions print(r.probs) # print class probabilities ``` For advanced inference techniques, including batch processing, video inference, and custom preprocessing, refer to the detailed [prediction guide](https://docs.ultralytics.com/modes/predict/). ### Where can I find examples and tutorials for using Ultralytics? Ultralytics provides a wealth of resources to help you get started and master their tools: - 📚 [Official documentation](https://docs.ultralytics.com/): Comprehensive guides, API references, and best practices. - 💻 [GitHub repository](https://github.com/ultralytics/ultralytics): Source code, example scripts, and community contributions. - ✍️ [Ultralytics blog](https://www.ultralytics.com/blog): In-depth articles, use cases, and technical insights. - 💬 [Community forums](https://community.ultralytics.com/): Connect with other users, ask questions, and share your experiences. - 🎥 [YouTube channel](https://www.youtube.com/ultralytics?sub_confirmation=1): Video tutorials, demos, and webinars on various Ultralytics topics. 
These resources provide code examples, real-world use cases, and step-by-step guides for various tasks using Ultralytics models. If you need further assistance, don't hesitate to consult the Ultralytics documentation or reach out to the community through [GitHub Issues](https://github.com/ultralytics/ultralytics/issues) or the official [discussion forum](https://github.com/orgs/ultralytics/discussions).
271323
"line_points = [(20, 400), (1080, 400)] # Line coordinates\n", "\n", "# Initialize the video writer to save the output video\n", "video_writer = cv2.VideoWriter(\"object_counting_output.avi\", cv2.VideoWriter_fourcc(*\"mp4v\"), fps, (w, h))\n", "\n", "# Initialize the Object Counter with visualization options and other parameters\n", "counter = solutions.ObjectCounter(\n", " show=True, # Display the image during processing\n", " region=line_points, # Region of interest points\n", " model=yolo11n.pt, # Ultralytics YOLO11 model file\n", " line_width=2, # Thickness of the lines and bounding boxes\n", ")\n", "\n", "# Process video frames in a loop\n", "while cap.isOpened():\n", " success, im0 = cap.read()\n", " if not success:\n", " print(\"Video frame is empty or video processing has been successfully completed.\")\n", " break\n", "\n", " # Use the Object Counter to count objects in the frame and get the annotated image\n", " im0 = counter.count(im0)\n", "\n", " # Write the annotated frame to the output video\n", " video_writer.write(im0)\n", "\n", "# Release the video capture and writer objects\n", "cap.release()\n", "video_writer.release()\n", "\n", "# Close all OpenCV windows\n", "cv2.destroyAllWindows()" ] }, { "cell_type": "markdown", "metadata": { "id": "QrlKg-y3fEyD" }, "source": [ "# Additional Resources\n", "\n", "## Community Support\n", "\n", "For more information on counting objects with Ultralytics, you can explore the comprehensive [Ultralytics Object Counting Docs](https://docs.ultralytics.com/guides/object-counting/). This guide covers everything from basic concepts to advanced techniques, ensuring you get the most out of counting and visualization.\n", "\n", "## Ultralytics ⚡ Resources\n", "\n", "At Ultralytics, we are committed to providing cutting-edge AI solutions. Here are some key resources to learn more about our company and get involved with our community:\n", "\n", "- [Ultralytics HUB](https://ultralytics.com/hub): Simplify your AI projects with Ultralytics HUB, our no-code tool for effortless YOLO training and deployment.\n", "- [Ultralytics Licensing](https://ultralytics.com/license): Review our licensing terms to understand how you can use our software in your projects.\n", "- [About Us](https://ultralytics.com/about): Discover our mission, vision, and the story behind Ultralytics.\n", "- [Join Our Team](https://ultralytics.com/work): Explore career opportunities and join our team of talented professionals.\n", "\n", "## YOLO11 🚀 Resources\n", "\n", "YOLO11 is the latest evolution in the YOLO series, offering state-of-the-art performance in object detection and image segmentation. Here are some essential resources to help you get started with YOLO11:\n", "\n", "- [GitHub](https://github.com/ultralytics/ultralytics): Access the YOLO11 repository on GitHub, where you can find the source code, contribute to the project, and report issues.\n", "- [Docs](https://docs.ultralytics.com/): Explore the official documentation for YOLO11, including installation guides, tutorials, and detailed API references.\n", "- [Discord](https://ultralytics.com/discord): Join our Discord community to connect with other users, share your projects, and get help from the Ultralytics team.\n", "\n", "These resources are designed to help you leverage the full potential of Ultralytics' offerings and YOLO11. Whether you're a beginner or an experienced developer, you'll find the information and support you need to succeed." 
] } ], "metadata": { "accelerator": "GPU", "colab": { "gpuType": "T4", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }
271325
"# Dictionary to store tracking history with default empty lists\n", "track_history = defaultdict(lambda: [])\n", "\n", "# Load the YOLO model with segmentation capabilities\n", "model = YOLO(\"yolo11n-seg.pt\")\n", "\n", "# Open the video file\n", "cap = cv2.VideoCapture(\"path/to/video/file.mp4\")\n", "\n", "# Retrieve video properties: width, height, and frames per second\n", "w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))\n", "\n", "# Initialize video writer to save the output video with the specified properties\n", "out = cv2.VideoWriter(\"instance-segmentation-object-tracking.avi\", cv2.VideoWriter_fourcc(*\"MJPG\"), fps, (w, h))\n", "\n", "while True:\n", " # Read a frame from the video\n", " ret, im0 = cap.read()\n", " if not ret:\n", " print(\"Video frame is empty or video processing has been successfully completed.\")\n", " break\n", "\n", " # Create an annotator object to draw on the frame\n", " annotator = Annotator(im0, line_width=2)\n", "\n", " # Perform object tracking on the current frame\n", " results = model.track(im0, persist=True)\n", "\n", " # Check if tracking IDs and masks are present in the results\n", " if results[0].boxes.id is not None and results[0].masks is not None:\n", " # Extract masks and tracking IDs\n", " masks = results[0].masks.xy\n", " track_ids = results[0].boxes.id.int().cpu().tolist()\n", "\n", " # Annotate each mask with its corresponding tracking ID and color\n", " for mask, track_id in zip(masks, track_ids):\n", " annotator.seg_bbox(mask=mask, mask_color=colors(track_id, True), track_label=str(track_id))\n", "\n", " # Write the annotated frame to the output video\n", " out.write(im0)\n", " # Display the annotated frame\n", " cv2.imshow(\"instance-segmentation-object-tracking\", im0)\n", "\n", " # Exit the loop if 'q' is pressed\n", " if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n", " break\n", "\n", "# Release the video writer and capture objects, and close all OpenCV windows\n", "out.release()\n", "cap.release()\n", "cv2.destroyAllWindows()" ] }, { "cell_type": "markdown", "metadata": { "id": "QrlKg-y3fEyD" }, "source": [ "# Additional Resources\n", "\n", "## Community Support\n", "\n", "For more information on using tracking with Ultralytics, you can explore the comprehensive [Ultralytics Tracking Docs](https://docs.ultralytics.com/modes/track/). This guide covers everything from basic concepts to advanced techniques, ensuring you get the most out of tracking and visualization.\n", "\n", "## Ultralytics ⚡ Resources\n", "\n", "At Ultralytics, we are committed to providing cutting-edge AI solutions. Here are some key resources to learn more about our company and get involved with our community:\n", "\n", "- [Ultralytics HUB](https://ultralytics.com/hub): Simplify your AI projects with Ultralytics HUB, our no-code tool for effortless YOLO training and deployment.\n", "- [Ultralytics Licensing](https://ultralytics.com/license): Review our licensing terms to understand how you can use our software in your projects.\n", "- [About Us](https://ultralytics.com/about): Discover our mission, vision, and the story behind Ultralytics.\n", "- [Join Our Team](https://ultralytics.com/work): Explore career opportunities and join our team of talented professionals.\n", "\n", "## YOLO11 🚀 Resources\n", "\n", "YOLO11 is the latest evolution in the YOLO series, offering state-of-the-art performance in object detection and image segmentation. 
Here are some essential resources to help you get started with YOLO11:\n", "\n", "- [GitHub](https://github.com/ultralytics/ultralytics): Access the YOLO11 repository on GitHub, where you can find the source code, contribute to the project, and report issues.\n", "- [Docs](https://docs.ultralytics.com/): Explore the official documentation for YOLO11, including installation guides, tutorials, and detailed API references.\n", "- [Discord](https://ultralytics.com/discord): Join our Discord community to connect with other users, share your projects, and get help from the Ultralytics team.\n", "\n", "These resources are designed to help you leverage the full potential of Ultralytics' offerings and YOLO11. Whether you're a beginner or an experienced developer, you'll find the information and support you need to succeed." ] } ], "metadata": { "accelerator": "GPU", "colab": { "gpuType": "T4", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }
271331
"\u001b[34m\u001b[1moptimizer:\u001b[0m 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... \n", "\u001b[34m\u001b[1moptimizer:\u001b[0m AdamW(lr=0.000119, momentum=0.9) with parameter groups 81 weight(decay=0.0), 88 weight(decay=0.0005), 87 bias(decay=0.0)\n", "\u001b[34m\u001b[1mTensorBoard: \u001b[0mmodel graph visualization added ✅\n", "Image sizes 640 train, 640 val\n", "Using 2 dataloader workers\n", "Logging results to \u001b[1mruns/detect/train\u001b[0m\n", "Starting training for 3 epochs...\n", "\n", " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n", " 1/3 0.719G 1.004 3.249 1.367 30 640: 100% 1/1 [00:00<00:00, 1.16it/s]\n", " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 5.07it/s]\n", " all 4 17 0.58 0.85 0.849 0.631\n", "\n", " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n", " 2/3 0.715G 1.31 4.043 1.603 35 640: 100% 1/1 [00:00<00:00, 6.88it/s]\n", " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 9.08it/s]\n", " all 4 17 0.581 0.85 0.851 0.63\n", "\n", " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n", " 3/3 0.692G 1.134 3.174 1.599 18 640: 100% 1/1 [00:00<00:00, 6.75it/s]\n", " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 11.60it/s]\n", " all 4 17 0.582 0.85 0.855 0.632\n", "\n", "3 epochs completed in 0.003 hours.\n", "Optimizer stripped from runs/detect/train/weights/last.pt, 5.5MB\n", "Optimizer stripped from runs/detect/train/weights/best.pt, 5.5MB\n", "\n", "Validating runs/detect/train/weights/best.pt...\n", "Ultralytics 8.3.2 🚀 Python-3.10.12 torch-2.4.1+cu121 CUDA:0 (Tesla T4, 15102MiB)\n", "YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs\n", " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 23.42it/s]\n", " all 4 17 0.579 0.85 0.855 0.615\n", " person 3 10 0.579 0.6 0.623 0.268\n", " dog 1 1 0.549 1 0.995 0.697\n", " horse 1 2 0.553 1 0.995 0.675\n", " elephant 1 2 0.364 0.5 0.528 0.261\n", " umbrella 1 1 0.571 1 0.995 0.895\n", " potted plant 1 1 0.857 1 0.995 0.895\n", "Speed: 0.2ms preprocess, 4.3ms inference, 0.0ms loss, 1.2ms postprocess per image\n", "Results saved to \u001b[1mruns/detect/train\u001b[0m\n", "💡 Learn more at https://docs.ultralytics.com/modes/train\n" ] } ] }, { "cell_type": "markdown", "source": [ "# 4. Export\n", "\n", "Export a YOLO11 model to any supported format below with the `format` argument, i.e. `format=onnx`. See [YOLO11 Export Docs](https://docs.ultralytics.com/modes/export/) for more information.\n", "\n", "- 💡 ProTip: Export to [ONNX](https://docs.ultralytics.com/integrations/onnx/) or [OpenVINO](https://docs.ultralytics.com/integrations/openvino/) for up to 3x CPU speedup. 
\n", "- 💡 ProTip: Export to [TensorRT](https://docs.ultralytics.com/integrations/tensorrt/) for up to 5x GPU speedup.\n", "\n", "| Format | `format` Argument | Model | Metadata | Arguments |\n", "|--------------------------------------------------------------------------|-------------------|---------------------------|----------|----------------------------------------------------------------------|\n", "| [PyTorch](https://pytorch.org/) | - | `yolo11n.pt` | ✅ | - |\n", "| [TorchScript](https://docs.ultralytics.com/integrations/torchscript) | `torchscript` | `yolo11n.torchscript` | ✅ | `imgsz`, `optimize`, `batch` |\n", "| [ONNX](https://docs.ultralytics.com/integrations/onnx) | `onnx` | `yolo11n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch` |\n", "| [OpenVINO](https://docs.ultralytics.com/integrations/openvino) | `openvino` | `yolo11n_openvino_model/` | ✅ | `imgsz`, `half`, `int8`, `batch` |\n", "| [TensorRT](https://docs.ultralytics.com/integrations/tensorrt) | `engine` | `yolo11n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |\n", "| [CoreML](https://docs.ultralytics.com/integrations/coreml) | `coreml` | `yolo11n.mlpackage` | ✅ | `imgsz`, `half`, `int8`, `nms`, `batch` |\n", "| [TF SavedModel](https://docs.ultralytics.com/integrations/tf-savedmodel) | `saved_model` | `yolo11n_saved_model/` | ✅ | `imgsz`, `keras`, `int8`, `batch` |\n", "| [TF GraphDef](https://docs.ultralytics.com/integrations/tf-graphdef) | `pb` | `yolo11n.pb` | ❌ | `imgsz`, `batch` |\n", "| [TF Lite](https://docs.ultralytics.com/integrations/tflite) | `tflite` | `yolo11n.tflite` | ✅ | `imgsz`, `half`, `int8`, `batch` |\n",
271344
# Ultralytics YOLO 🚀, AGPL-3.0 license import argparse import cv2.dnn import numpy as np from ultralytics.utils import ASSETS, yaml_load from ultralytics.utils.checks import check_yaml CLASSES = yaml_load(check_yaml("coco8.yaml"))["names"] colors = np.random.uniform(0, 255, size=(len(CLASSES), 3)) def draw_bounding_box(img, class_id, confidence, x, y, x_plus_w, y_plus_h): """ Draws bounding boxes on the input image based on the provided arguments. Args: img (numpy.ndarray): The input image to draw the bounding box on. class_id (int): Class ID of the detected object. confidence (float): Confidence score of the detected object. x (int): X-coordinate of the top-left corner of the bounding box. y (int): Y-coordinate of the top-left corner of the bounding box. x_plus_w (int): X-coordinate of the bottom-right corner of the bounding box. y_plus_h (int): Y-coordinate of the bottom-right corner of the bounding box. """ label = f"{CLASSES[class_id]} ({confidence:.2f})" color = colors[class_id] cv2.rectangle(img, (x, y), (x_plus_w, y_plus_h), color, 2) cv2.putText(img, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2) def main(onnx_model, input_image): """ Main function to load ONNX model, perform inference, draw bounding boxes, and display the output image. Args: onnx_model (str): Path to the ONNX model. input_image (str): Path to the input image. Returns: list: List of dictionaries containing detection information such as class_id, class_name, confidence, etc. """ # Load the ONNX model model: cv2.dnn.Net = cv2.dnn.readNetFromONNX(onnx_model) # Read the input image original_image: np.ndarray = cv2.imread(input_image) [height, width, _] = original_image.shape # Prepare a square image for inference length = max((height, width)) image = np.zeros((length, length, 3), np.uint8) image[0:height, 0:width] = original_image # Calculate scale factor scale = length / 640 # Preprocess the image and prepare blob for model blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255, size=(640, 640), swapRB=True) model.setInput(blob) # Perform inference outputs = model.forward() # Prepare output array outputs = np.array([cv2.transpose(outputs[0])]) rows = outputs.shape[1] boxes = [] scores = [] class_ids = [] # Iterate through output to collect bounding boxes, confidence scores, and class IDs for i in range(rows): classes_scores = outputs[0][i][4:] (minScore, maxScore, minClassLoc, (x, maxClassIndex)) = cv2.minMaxLoc(classes_scores) if maxScore >= 0.25: box = [ outputs[0][i][0] - (0.5 * outputs[0][i][2]), outputs[0][i][1] - (0.5 * outputs[0][i][3]), outputs[0][i][2], outputs[0][i][3], ] boxes.append(box) scores.append(maxScore) class_ids.append(maxClassIndex) # Apply NMS (Non-maximum suppression) result_boxes = cv2.dnn.NMSBoxes(boxes, scores, 0.25, 0.45, 0.5) detections = [] # Iterate through NMS results to draw bounding boxes and labels for i in range(len(result_boxes)): index = result_boxes[i] box = boxes[index] detection = { "class_id": class_ids[index], "class_name": CLASSES[class_ids[index]], "confidence": scores[index], "box": box, "scale": scale, } detections.append(detection) draw_bounding_box( original_image, class_ids[index], scores[index], round(box[0] * scale), round(box[1] * scale), round((box[0] + box[2]) * scale), round((box[1] + box[3]) * scale), ) # Display the image with bounding boxes cv2.imshow("image", original_image) cv2.waitKey(0) cv2.destroyAllWindows() return detections if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--model", 
default="yolov8n.onnx", help="Input your ONNX model.") parser.add_argument("--img", default=str(ASSETS / "bus.jpg"), help="Path to input image.") args = parser.parse_args() main(args.model, args.img)
271345
# YOLOv8 - Int8-TFLite Runtime

Welcome to the YOLOv8 Int8 TFLite Runtime project for efficient and optimized object detection. This README provides comprehensive instructions for installing and using our YOLOv8 implementation.

## Installation

Ensure a smooth setup by following these steps to install necessary dependencies.

### Installing Required Dependencies

Install all required dependencies with this simple command:

```bash
pip install -r requirements.txt
```

### Installing `tflite-runtime`

To load TFLite models, install the `tflite-runtime` package using:

```bash
pip install tflite-runtime
```

### Installing `tensorflow-gpu` (For NVIDIA GPU Users)

Leverage GPU acceleration with NVIDIA GPUs by installing `tensorflow-gpu`:

```bash
pip install tensorflow-gpu
```

**Note:** Ensure you have compatible GPU drivers installed on your system.

### Installing `tensorflow` (CPU Version)

For CPU usage or non-NVIDIA GPUs, install TensorFlow with:

```bash
pip install tensorflow
```

## Usage

Follow these instructions to run YOLOv8 after successful installation.

Convert the YOLOv8 model to Int8 TFLite format:

```bash
yolo export model=yolov8n.pt imgsz=640 format=tflite int8=True
```

The Int8 TFLite model (for example `yolov8n_full_integer_quant.tflite`) is saved in the `yolov8n_saved_model` directory. You can inspect and verify the quantization with [Netron](https://netron.app/). Then, execute the following in your terminal:

```bash
python main.py --model yolov8n_full_integer_quant.tflite --img image.jpg --conf-thres 0.5 --iou-thres 0.5
```

Replace `yolov8n_full_integer_quant.tflite` with your model file's path, `image.jpg` with your input image, and adjust the confidence (conf-thres) and IoU thresholds (iou-thres) as necessary.

### Output

The output is displayed as annotated images, showcasing the model's detection capabilities:

![image](https://github.com/wamiqraza/Attribute-recognition-and-reidentification-Market1501-dataset/blob/main/img/bus.jpg)
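If you prefer to stay in Python, the exported Int8 TFLite model can also be loaded directly with the `ultralytics` package as an alternative to the standalone script above. This is a minimal sketch, assuming the default output path from the export command shown earlier:

```python
from ultralytics import YOLO

# Load the exported Int8 TFLite model (path assumed from the export step)
tflite_model = YOLO("yolov8n_saved_model/yolov8n_full_integer_quant.tflite")

# Run inference with the same thresholds used by the script above
results = tflite_model("image.jpg", conf=0.5, iou=0.5)
annotated = results[0].plot()  # BGR numpy array with drawn detections
```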
271347
ss Yolov8TFLite: """Class for performing object detection using YOLOv8 model converted to TensorFlow Lite format.""" def __init__(self, tflite_model, input_image, confidence_thres, iou_thres): """ Initializes an instance of the Yolov8TFLite class. Args: tflite_model: Path to the TFLite model. input_image: Path to the input image. confidence_thres: Confidence threshold for filtering detections. iou_thres: IoU (Intersection over Union) threshold for non-maximum suppression. """ self.tflite_model = tflite_model self.input_image = input_image self.confidence_thres = confidence_thres self.iou_thres = iou_thres # Load the class names from the COCO dataset self.classes = yaml_load(check_yaml("coco8.yaml"))["names"] # Generate a color palette for the classes self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3)) def draw_detections(self, img, box, score, class_id): """ Draws bounding boxes and labels on the input image based on the detected objects. Args: img: The input image to draw detections on. box: Detected bounding box. score: Corresponding detection score. class_id: Class ID for the detected object. Returns: None """ # Extract the coordinates of the bounding box x1, y1, w, h = box # Retrieve the color for the class ID color = self.color_palette[class_id] # Draw the bounding box on the image cv2.rectangle(img, (int(x1), int(y1)), (int(x1 + w), int(y1 + h)), color, 2) # Create the label text with class name and score label = f"{self.classes[class_id]}: {score:.2f}" # Calculate the dimensions of the label text (label_width, label_height), _ = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1) # Calculate the position of the label text label_x = x1 label_y = y1 - 10 if y1 - 10 > label_height else y1 + 10 # Draw a filled rectangle as the background for the label text cv2.rectangle( img, (int(label_x), int(label_y - label_height)), (int(label_x + label_width), int(label_y + label_height)), color, cv2.FILLED, ) # Draw the label text on the image cv2.putText(img, label, (int(label_x), int(label_y)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1, cv2.LINE_AA) def preprocess(self): """ Preprocesses the input image before performing inference. Returns: image_data: Preprocessed image data ready for inference. """ # Read the input image using OpenCV self.img = cv2.imread(self.input_image) print("image before", self.img) # Get the height and width of the input image self.img_height, self.img_width = self.img.shape[:2] letterbox = LetterBox(new_shape=[img_width, img_height], auto=False, stride=32) image = letterbox(image=self.img) image = [image] image = np.stack(image) image = image[..., ::-1].transpose((0, 3, 1, 2)) img = np.ascontiguousarray(image) # n, h, w, c image = img.astype(np.float32) return image / 255 def postprocess(self, input_image, output): """ Performs post-processing on the model's output to extract bounding boxes, scores, and class IDs. Args: input_image (numpy.ndarray): The input image. output (numpy.ndarray): The output of the model. Returns: numpy.ndarray: The input image with detections drawn on it. 
""" # Transpose predictions outside the loop output = [np.transpose(pred) for pred in output] boxes = [] scores = [] class_ids = [] # Vectorize extraction of bounding boxes, scores, and class IDs for pred in output: x, y, w, h = pred[:, 0], pred[:, 1], pred[:, 2], pred[:, 3] x1 = x - w / 2 y1 = y - h / 2 boxes.extend(np.column_stack([x1, y1, w, h])) # Argmax and score extraction for all predictions at once idx = np.argmax(pred[:, 4:], axis=1) scores.extend(pred[np.arange(pred.shape[0]), idx + 4]) class_ids.extend(idx) # Precompute gain and pad once img_height, img_width = input_image.shape[:2] gain = min(img_width / self.img_width, img_height / self.img_height) pad = ( round((img_width - self.img_width * gain) / 2 - 0.1), round((img_height - self.img_height * gain) / 2 - 0.1), ) # Non-Maximum Suppression (NMS) in one go indices = cv2.dnn.NMSBoxes(boxes, scores, self.confidence_thres, self.iou_thres) # Process selected indices for i in indices.flatten(): box = boxes[i] box[0] = (box[0] - pad[0]) / gain box[1] = (box[1] - pad[1]) / gain box[2] = box[2] / gain box[3] = box[3] / gain score = scores[i] class_id = class_ids[i] if score > 0.25: # Draw the detection on the input image self.draw_detections(input_image, box, score, class_id) return input_image def main(self): """ Performs inference using a TFLite model and returns the output image with drawn detections. Returns: output_img: The output image with drawn detections. """ # Create an interpreter for the TFLite model interpreter = tflite.Interpreter(model_path=self.tflite_model) self.model = interpreter interpreter.allocate_tensors() # Get the model inputs input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Store the shape of the input for later use input_shape = input_details[0]["shape"] self.input_width = input_shape[1] self.input_height = input_shape[2] # Preprocess the image data img_data = self.preprocess() img_data = img_data # img_data = img_data.cpu().numpy() # Set the input tensor to the interpreter print(input_details[0]["index"]) print(img_data.shape) img_data = img_data.transpose((0, 2, 3, 1)) scale, zero_point = input_details[0]["quantization"] img_data_int8 = (img_data / scale + zero_point).astype(np.int8) interpreter.set_tensor(input_details[0]["index"], img_data_int8) # Run inference interpreter.invoke() # Get the output tensor from the interpreter output = interpreter.get_tensor(output_details[0]["index"]) scale, zero_point = output_details[0]["quantization"] output = (output.astype(np.float32) - zero_point) * scale output[:, [0, 2]] *= img_width output[:, [1, 3]] *= img_height print(output) # Perform post-processing on the outputs to obtain output image. return self.postprocess(self.img, output)
271350
# Ultralytics YOLO 🚀, AGPL-3.0 license import argparse import cv2 import numpy as np import onnxruntime as ort import torch from ultralytics.utils import ASSETS, yaml_load from ultralytics.utils.checks import check_requirements, check_yaml class YOLOv8: """YOLOv8 object detection model class for handling inference and visualization.""" def __init__(self, onnx_model, input_image, confidence_thres, iou_thres): """ Initializes an instance of the YOLOv8 class. Args: onnx_model: Path to the ONNX model. input_image: Path to the input image. confidence_thres: Confidence threshold for filtering detections. iou_thres: IoU (Intersection over Union) threshold for non-maximum suppression. """ self.onnx_model = onnx_model self.input_image = input_image self.confidence_thres = confidence_thres self.iou_thres = iou_thres # Load the class names from the COCO dataset self.classes = yaml_load(check_yaml("coco8.yaml"))["names"] # Generate a color palette for the classes self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3)) def draw_detections(self, img, box, score, class_id): """ Draws bounding boxes and labels on the input image based on the detected objects. Args: img: The input image to draw detections on. box: Detected bounding box. score: Corresponding detection score. class_id: Class ID for the detected object. Returns: None """ # Extract the coordinates of the bounding box x1, y1, w, h = box # Retrieve the color for the class ID color = self.color_palette[class_id] # Draw the bounding box on the image cv2.rectangle(img, (int(x1), int(y1)), (int(x1 + w), int(y1 + h)), color, 2) # Create the label text with class name and score label = f"{self.classes[class_id]}: {score:.2f}" # Calculate the dimensions of the label text (label_width, label_height), _ = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1) # Calculate the position of the label text label_x = x1 label_y = y1 - 10 if y1 - 10 > label_height else y1 + 10 # Draw a filled rectangle as the background for the label text cv2.rectangle( img, (label_x, label_y - label_height), (label_x + label_width, label_y + label_height), color, cv2.FILLED ) # Draw the label text on the image cv2.putText(img, label, (label_x, label_y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1, cv2.LINE_AA) def preprocess(self): """ Preprocesses the input image before performing inference. Returns: image_data: Preprocessed image data ready for inference. """ # Read the input image using OpenCV self.img = cv2.imread(self.input_image) # Get the height and width of the input image self.img_height, self.img_width = self.img.shape[:2] # Convert the image color space from BGR to RGB img = cv2.cvtColor(self.img, cv2.COLOR_BGR2RGB) # Resize the image to match the input shape img = cv2.resize(img, (self.input_width, self.input_height)) # Normalize the image data by dividing it by 255.0 image_data = np.array(img) / 255.0 # Transpose the image to have the channel dimension as the first dimension image_data = np.transpose(image_data, (2, 0, 1)) # Channel first # Expand the dimensions of the image data to match the expected input shape image_data = np.expand_dims(image_data, axis=0).astype(np.float32) # Return the preprocessed image data return image_data def postprocess(self, input_image, output): """ Performs post-processing on the model's output to extract bounding boxes, scores, and class IDs. Args: input_image (numpy.ndarray): The input image. output (numpy.ndarray): The output of the model. Returns: numpy.ndarray: The input image with detections drawn on it. 
""" # Transpose and squeeze the output to match the expected shape outputs = np.transpose(np.squeeze(output[0])) # Get the number of rows in the outputs array rows = outputs.shape[0] # Lists to store the bounding boxes, scores, and class IDs of the detections boxes = [] scores = [] class_ids = [] # Calculate the scaling factors for the bounding box coordinates x_factor = self.img_width / self.input_width y_factor = self.img_height / self.input_height # Iterate over each row in the outputs array for i in range(rows): # Extract the class scores from the current row classes_scores = outputs[i][4:] # Find the maximum score among the class scores max_score = np.amax(classes_scores) # If the maximum score is above the confidence threshold if max_score >= self.confidence_thres: # Get the class ID with the highest score class_id = np.argmax(classes_scores) # Extract the bounding box coordinates from the current row x, y, w, h = outputs[i][0], outputs[i][1], outputs[i][2], outputs[i][3] # Calculate the scaled coordinates of the bounding box left = int((x - w / 2) * x_factor) top = int((y - h / 2) * y_factor) width = int(w * x_factor) height = int(h * y_factor) # Add the class ID, score, and box coordinates to the respective lists class_ids.append(class_id) scores.append(max_score) boxes.append([left, top, width, height]) # Apply non-maximum suppression to filter out overlapping bounding boxes indices = cv2.dnn.NMSBoxes(boxes, scores, self.confidence_thres, self.iou_thres) # Iterate over the selected indices after non-maximum suppression for i in indices: # Get the box, score, and class ID corresponding to the index box = boxes[i] score = scores[i] class_id = class_ids[i] # Draw the detection on the input image self.draw_detections(input_image, box, score, class_id) # Return the modified input image return input_image def main(self): """ Performs inference using an ONNX model and returns the output image with drawn detections. Returns: output_img: The output image with drawn detections. """ # Create an inference session using the ONNX model and specify execution providers session = ort.InferenceSession(self.onnx_model, providers=["CUDAExecutionProvider", "CPUExecutionProvider"]) # Get the model inputs model_inputs = session.get_inputs() # Store the shape of the input for later use input_shape = model_inputs[0].shape self.input_width = input_shape[2] self.input_height = input_shape[3] # Preprocess the image data img_data = self.preprocess() # Run inference using the preprocessed image data outputs = session.run(None, {model_inputs[0].name: img_data}) # Perform post-processing on the outputs to obtain output image. 
return self.postprocess(self.img, outputs) # output image if __name__ == "__main__": # Create an argument parser to handle command-line arguments parser = argparse.ArgumentParser() parser.add_argument("--model", type=str, default="yolov8n.onnx", help="Input your ONNX model.") parser.add_argument("--img", type=str, default=str(ASSETS / "bus.jpg"), help="Path to input image.") parser.add_argument("--conf-thres", type=float, default=0.5, help="Confidence threshold") parser.add_argument("--iou-thres", type=float, default=0.5, help="NMS IoU threshold") args = parser.parse_args() # Check the requirements and select the appropriate backend (CPU or GPU) check_requirements("onnxruntime-gpu" if torch.cuda.is_available() else "onnxruntime") # Create an instance of the YOLOv8 class with the specified arguments detection = YOLOv8(args.model, args.img, args.conf_thres, args.iou_thres) # Perform object detection and obtain the output image output_image = detection.main() # Display the output image in a window cv2.namedWindow("Output", cv2.WINDOW_NORMAL) cv2.imshow("Output", output_image) # Wait for a key press to exit cv2.waitKey(0)
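The detector script above assumes an ONNX export of the checkpoint already exists. A minimal sketch of producing one with the Ultralytics export API and then invoking the script (the script filename `main.py` is an assumption):

```python
# Hedged sketch: create the ONNX file the detector script above expects.
from ultralytics import YOLO

YOLO("yolov8n.pt").export(format="onnx")  # writes yolov8n.onnx next to the checkpoint
# Then run the script, e.g.:
#   python main.py --model yolov8n.onnx --img bus.jpg --conf-thres 0.5 --iou-thres 0.5
```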
# Ultralytics YOLO 🚀, AGPL-3.0 license import argparse import cv2 import numpy as np import onnxruntime as ort from ultralytics.utils import ASSETS, yaml_load from ultralytics.utils.checks import check_yaml from ultralytics.utils.plotting import Colors class YOLOv8Seg: """YOLOv8 segmentation model.""" def __init__(self, onnx_model): """ Initialization. Args: onnx_model (str): Path to the ONNX model. """ # Build Ort session self.session = ort.InferenceSession( onnx_model, providers=["CUDAExecutionProvider", "CPUExecutionProvider"] if ort.get_device() == "GPU" else ["CPUExecutionProvider"], ) # Numpy dtype: support both FP32 and FP16 onnx model self.ndtype = np.half if self.session.get_inputs()[0].type == "tensor(float16)" else np.single # Get model width and height(YOLOv8-seg only has one input) self.model_height, self.model_width = [x.shape for x in self.session.get_inputs()][0][-2:] # Load COCO class names self.classes = yaml_load(check_yaml("coco8.yaml"))["names"] # Create color palette self.color_palette = Colors() def __call__(self, im0, conf_threshold=0.4, iou_threshold=0.45, nm=32): """ The whole pipeline: pre-process -> inference -> post-process. Args: im0 (Numpy.ndarray): original input image. conf_threshold (float): confidence threshold for filtering predictions. iou_threshold (float): iou threshold for NMS. nm (int): the number of masks. Returns: boxes (List): list of bounding boxes. segments (List): list of segments. masks (np.ndarray): [N, H, W], output masks. """ # Pre-process im, ratio, (pad_w, pad_h) = self.preprocess(im0) # Ort inference preds = self.session.run(None, {self.session.get_inputs()[0].name: im}) # Post-process boxes, segments, masks = self.postprocess( preds, im0=im0, ratio=ratio, pad_w=pad_w, pad_h=pad_h, conf_threshold=conf_threshold, iou_threshold=iou_threshold, nm=nm, ) return boxes, segments, masks def preprocess(self, img): """ Pre-processes the input image. Args: img (Numpy.ndarray): image about to be processed. Returns: img_process (Numpy.ndarray): image preprocessed for inference. ratio (tuple): width, height ratios in letterbox. pad_w (float): width padding in letterbox. pad_h (float): height padding in letterbox. """ # Resize and pad input image using letterbox() (Borrowed from Ultralytics) shape = img.shape[:2] # original image shape new_shape = (self.model_height, self.model_width) r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) ratio = r, r new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) pad_w, pad_h = (new_shape[1] - new_unpad[0]) / 2, (new_shape[0] - new_unpad[1]) / 2 # wh padding if shape[::-1] != new_unpad: # resize img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) top, bottom = int(round(pad_h - 0.1)), int(round(pad_h + 0.1)) left, right = int(round(pad_w - 0.1)), int(round(pad_w + 0.1)) img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=(114, 114, 114)) # Transforms: HWC to CHW -> BGR to RGB -> div(255) -> contiguous -> add axis(optional) img = np.ascontiguousarray(np.einsum("HWC->CHW", img)[::-1], dtype=self.ndtype) / 255.0 img_process = img[None] if len(img.shape) == 3 else img return img_process, ratio, (pad_w, pad_h) def postprocess(self, preds, im0, ratio, pad_w, pad_h, conf_threshold, iou_threshold, nm=32): """ Post-process the prediction. Args: preds (Numpy.ndarray): predictions come from ort.session.run(). im0 (Numpy.ndarray): [h, w, c] original input image. ratio (tuple): width, height ratios in letterbox. pad_w (float): width padding in letterbox. 
pad_h (float): height padding in letterbox. conf_threshold (float): conf threshold. iou_threshold (float): iou threshold. nm (int): the number of masks. Returns: boxes (List): list of bounding boxes. segments (List): list of segments. masks (np.ndarray): [N, H, W], output masks. """ x, protos = preds[0], preds[1] # Two outputs: predictions and protos # Transpose dim 1: (Batch_size, xywh_conf_cls_nm, Num_anchors) -> (Batch_size, Num_anchors, xywh_conf_cls_nm) x = np.einsum("bcn->bnc", x) # Predictions filtering by conf-threshold x = x[np.amax(x[..., 4:-nm], axis=-1) > conf_threshold] # Create a new matrix which merge these(box, score, cls, nm) into one # For more details about `numpy.c_()`: https://numpy.org/doc/1.26/reference/generated/numpy.c_.html x = np.c_[x[..., :4], np.amax(x[..., 4:-nm], axis=-1), np.argmax(x[..., 4:-nm], axis=-1), x[..., -nm:]] # NMS filtering x = x[cv2.dnn.NMSBoxes(x[:, :4], x[:, 4], conf_threshold, iou_threshold)] # Decode and return if len(x) > 0: # Bounding boxes format change: cxcywh -> xyxy x[..., [0, 1]] -= x[..., [2, 3]] / 2 x[..., [2, 3]] += x[..., [0, 1]] # Rescales bounding boxes from model shape(model_height, model_width) to the shape of original image x[..., :4] -= [pad_w, pad_h, pad_w, pad_h] x[..., :4] /= min(ratio) # Bounding boxes boundary clamp x[..., [0, 2]] = x[:, [0, 2]].clip(0, im0.shape[1]) x[..., [1, 3]] = x[:, [1, 3]].clip(0, im0.shape[0]) # Process masks masks = self.process_mask(protos[0], x[:, 6:], x[:, :4], im0.shape) # Masks -> Segments(contours) segments = self.masks2segments(masks) return x[..., :6], segments, masks # boxes, segments, masks else: return [], [], [] @staticmethod def masks2segments(masks): """ Takes a list of masks(n,h,w) and returns a list of segments(n,xy), from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/ops.py. Args: masks (numpy.ndarray): the output of the model, which is a tensor of shape (batch_size, 160, 160). Returns: segments (List): list of segment masks. """ segments = [] for x in masks.astype("uint8"): c = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0] # CHAIN_APPROX_SIMPLE if c: c = np.array(c[np.array([len(x) for x in c]).argmax()]).reshape(-1, 2) else: c = np.zeros((0, 2)) # no segments found segments.append(c.astype("float32")) return segments
aticmethod def crop_mask(masks, boxes): """ Takes a mask and a bounding box, and returns a mask that is cropped to the bounding box, from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/ops.py. Args: masks (Numpy.ndarray): [n, h, w] tensor of masks. boxes (Numpy.ndarray): [n, 4] tensor of bbox coordinates in relative point form. Returns: (Numpy.ndarray): The masks are being cropped to the bounding box. """ n, h, w = masks.shape x1, y1, x2, y2 = np.split(boxes[:, :, None], 4, 1) r = np.arange(w, dtype=x1.dtype)[None, None, :] c = np.arange(h, dtype=x1.dtype)[None, :, None] return masks * ((r >= x1) * (r < x2) * (c >= y1) * (c < y2)) def process_mask(self, protos, masks_in, bboxes, im0_shape): """ Takes the output of the mask head, and applies the mask to the bounding boxes. This produces masks of higher quality but is slower, from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/ops.py. Args: protos (numpy.ndarray): [mask_dim, mask_h, mask_w]. masks_in (numpy.ndarray): [n, mask_dim], n is number of masks after nms. bboxes (numpy.ndarray): bboxes re-scaled to original image shape. im0_shape (tuple): the size of the input image (h,w,c). Returns: (numpy.ndarray): The upsampled masks. """ c, mh, mw = protos.shape masks = np.matmul(masks_in, protos.reshape((c, -1))).reshape((-1, mh, mw)).transpose(1, 2, 0) # HWN masks = np.ascontiguousarray(masks) masks = self.scale_mask(masks, im0_shape) # re-scale mask from P3 shape to original input image shape masks = np.einsum("HWN -> NHW", masks) # HWN -> NHW masks = self.crop_mask(masks, bboxes) return np.greater(masks, 0.5) @staticmethod def scale_mask(masks, im0_shape, ratio_pad=None): """ Takes a mask, and resizes it to the original image size, from https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/ops.py. Args: masks (np.ndarray): resized and padded masks/images, [h, w, num]/[h, w, 3]. im0_shape (tuple): the original image shape. ratio_pad (tuple): the ratio of the padding to the original image. Returns: masks (np.ndarray): The masks that are being returned. """ im1_shape = masks.shape[:2] if ratio_pad is None: # calculate from im0_shape gain = min(im1_shape[0] / im0_shape[0], im1_shape[1] / im0_shape[1]) # gain = old / new pad = (im1_shape[1] - im0_shape[1] * gain) / 2, (im1_shape[0] - im0_shape[0] * gain) / 2 # wh padding else: pad = ratio_pad[1] # Calculate tlbr of mask top, left = int(round(pad[1] - 0.1)), int(round(pad[0] - 0.1)) # y, x bottom, right = int(round(im1_shape[0] - pad[1] + 0.1)), int(round(im1_shape[1] - pad[0] + 0.1)) if len(masks.shape) < 2: raise ValueError(f'"len of masks shape" should be 2 or 3, but got {len(masks.shape)}') masks = masks[top:bottom, left:right] masks = cv2.resize( masks, (im0_shape[1], im0_shape[0]), interpolation=cv2.INTER_LINEAR ) # INTER_CUBIC would be better if len(masks.shape) == 2: masks = masks[:, :, None] return masks def draw_and_visualize(self, im, bboxes, segments, vis=False, save=True): """ Draw and visualize results. Args: im (np.ndarray): original image, shape [h, w, c]. bboxes (numpy.ndarray): [n, 4], n is number of bboxes. segments (List): list of segment masks. vis (bool): imshow using OpenCV. save (bool): save image annotated. 
Returns: None """ # Draw rectangles and polygons im_canvas = im.copy() for (*box, conf, cls_), segment in zip(bboxes, segments): # draw contour and fill mask cv2.polylines(im, np.int32([segment]), True, (255, 255, 255), 2) # white borderline cv2.fillPoly(im_canvas, np.int32([segment]), self.color_palette(int(cls_), bgr=True)) # draw bbox rectangle cv2.rectangle( im, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), self.color_palette(int(cls_), bgr=True), 1, cv2.LINE_AA, ) cv2.putText( im, f"{self.classes[cls_]}: {conf:.3f}", (int(box[0]), int(box[1] - 9)), cv2.FONT_HERSHEY_SIMPLEX, 0.7, self.color_palette(int(cls_), bgr=True), 2, cv2.LINE_AA, ) # Mix image im = cv2.addWeighted(im_canvas, 0.3, im, 0.7, 0) # Show image if vis: cv2.imshow("demo", im) cv2.waitKey(0) cv2.destroyAllWindows() # Save image if save: cv2.imwrite("demo.jpg", im) if __name__ == "__main__": # Create an argument parser to handle command-line arguments parser = argparse.ArgumentParser() parser.add_argument("--model", type=str, required=True, help="Path to ONNX model") parser.add_argument("--source", type=str, default=str(ASSETS / "bus.jpg"), help="Path to input image") parser.add_argument("--conf", type=float, default=0.25, help="Confidence threshold") parser.add_argument("--iou", type=float, default=0.45, help="NMS IoU threshold") args = parser.parse_args() # Build model model = YOLOv8Seg(args.model) # Read image by OpenCV img = cv2.imread(args.source) # Inference boxes, segments, _ = model(img, conf_threshold=args.conf, iou_threshold=args.iou) # Draw bboxes and polygons if len(boxes) > 0: model.draw_and_visualize(img, boxes, segments, vis=False, save=True)
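As with the detection example, the segmentation script expects an ONNX export; a sketch assuming the `yolov8n-seg.pt` checkpoint and a script saved locally as `yolov8_seg_onnx.py` (both names are assumptions):

```python
# Hedged sketch: export a segmentation checkpoint for the YOLOv8Seg class above.
from ultralytics import YOLO

YOLO("yolov8n-seg.pt").export(format="onnx")  # writes yolov8n-seg.onnx
# Then run, e.g.:
#   python yolov8_seg_onnx.py --model yolov8n-seg.onnx --source bus.jpg --conf 0.25 --iou 0.45
```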
use ndarray::{Array, Axis, IxDyn}; #[derive(Clone, PartialEq, Default)] pub struct YOLOResult { // YOLO tasks results of an image pub probs: Option<Embedding>, pub bboxes: Option<Vec<Bbox>>, pub keypoints: Option<Vec<Vec<Point2>>>, pub masks: Option<Vec<Vec<u8>>>, } impl std::fmt::Debug for YOLOResult { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("YOLOResult") .field( "Probs(top5)", &format_args!("{:?}", self.probs().map(|probs| probs.topk(5))), ) .field("Bboxes", &self.bboxes) .field("Keypoints", &self.keypoints) .field( "Masks", &format_args!("{:?}", self.masks().map(|masks| masks.len())), ) .finish() } } impl YOLOResult { pub fn new( probs: Option<Embedding>, bboxes: Option<Vec<Bbox>>, keypoints: Option<Vec<Vec<Point2>>>, masks: Option<Vec<Vec<u8>>>, ) -> Self { Self { probs, bboxes, keypoints, masks, } } pub fn probs(&self) -> Option<&Embedding> { self.probs.as_ref() } pub fn keypoints(&self) -> Option<&Vec<Vec<Point2>>> { self.keypoints.as_ref() } pub fn masks(&self) -> Option<&Vec<Vec<u8>>> { self.masks.as_ref() } pub fn bboxes(&self) -> Option<&Vec<Bbox>> { self.bboxes.as_ref() } pub fn bboxes_mut(&mut self) -> Option<&mut Vec<Bbox>> { self.bboxes.as_mut() } } #[derive(Debug, PartialEq, Clone, Default)] pub struct Point2 { // A point2d with x, y, conf x: f32, y: f32, confidence: f32, } impl Point2 { pub fn new_with_conf(x: f32, y: f32, confidence: f32) -> Self { Self { x, y, confidence } } pub fn new(x: f32, y: f32) -> Self { Self { x, y, ..Default::default() } } pub fn x(&self) -> f32 { self.x } pub fn y(&self) -> f32 { self.y } pub fn confidence(&self) -> f32 { self.confidence } } #[derive(Debug, Clone, PartialEq, Default)] pub struct Embedding { // An float32 n-dims tensor data: Array<f32, IxDyn>, } impl Embedding { pub fn new(data: Array<f32, IxDyn>) -> Self { Self { data } } pub fn data(&self) -> &Array<f32, IxDyn> { &self.data } pub fn topk(&self, k: usize) -> Vec<(usize, f32)> { let mut probs = self .data .iter() .enumerate() .map(|(a, b)| (a, *b)) .collect::<Vec<_>>(); probs.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap()); let mut topk = Vec::new(); for &(id, confidence) in probs.iter().take(k) { topk.push((id, confidence)); } topk } pub fn norm(&self) -> Array<f32, IxDyn> { let std_ = self.data.mapv(|x| x * x).sum_axis(Axis(0)).mapv(f32::sqrt); self.data.clone() / std_ } pub fn top1(&self) -> (usize, f32) { self.topk(1)[0] } } #[derive(Debug, Clone, PartialEq, Default)] pub struct Bbox { // a bounding box around an object xmin: f32, ymin: f32, width: f32, height: f32, id: usize, confidence: f32, } impl Bbox { pub fn new_from_xywh(xmin: f32, ymin: f32, width: f32, height: f32) -> Self { Self { xmin, ymin, width, height, ..Default::default() } } pub fn new(xmin: f32, ymin: f32, width: f32, height: f32, id: usize, confidence: f32) -> Self { Self { xmin, ymin, width, height, id, confidence, } } pub fn width(&self) -> f32 { self.width } pub fn height(&self) -> f32 { self.height } pub fn xmin(&self) -> f32 { self.xmin } pub fn ymin(&self) -> f32 { self.ymin } pub fn xmax(&self) -> f32 { self.xmin + self.width } pub fn ymax(&self) -> f32 { self.ymin + self.height } pub fn tl(&self) -> Point2 { Point2::new(self.xmin, self.ymin) } pub fn br(&self) -> Point2 { Point2::new(self.xmax(), self.ymax()) } pub fn cxcy(&self) -> Point2 { Point2::new(self.xmin + self.width / 2., self.ymin + self.height / 2.) 
} pub fn id(&self) -> usize { self.id } pub fn confidence(&self) -> f32 { self.confidence } pub fn area(&self) -> f32 { self.width * self.height } pub fn intersection_area(&self, another: &Bbox) -> f32 { let l = self.xmin.max(another.xmin); let r = (self.xmin + self.width).min(another.xmin + another.width); let t = self.ymin.max(another.ymin); let b = (self.ymin + self.height).min(another.ymin + another.height); (r - l + 1.).max(0.) * (b - t + 1.).max(0.) } pub fn union(&self, another: &Bbox) -> f32 { self.area() + another.area() - self.intersection_area(another) } pub fn iou(&self, another: &Bbox) -> f32 { self.intersection_area(another) / self.union(another) } }
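For a quick numerical cross-check of the `Bbox::iou` arithmetic above, here is an illustrative Python port that keeps the `+1` pixel-inclusive convention used in `intersection_area`; the function name and sample boxes are made up.

```python
# Illustrative port of the Rust Bbox IoU arithmetic above (keeps the +1 pixel-inclusive convention).
def iou_xywh(a, b):
    """a and b are (xmin, ymin, width, height) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    left, right = max(ax, bx), min(ax + aw, bx + bw)
    top, bottom = max(ay, by), min(ay + ah, by + bh)
    inter = max(right - left + 1.0, 0.0) * max(bottom - top + 1.0, 0.0)
    union = aw * ah + bw * bh - inter
    return inter / union


print(iou_xywh((0, 0, 10, 10), (5, 5, 10, 10)))  # ~0.22 for this partial overlap
```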
# Ultralytics YOLO 🚀, AGPL-3.0 license __version__ = "8.3.23" import os # Set ENV variables (place before imports) if not os.environ.get("OMP_NUM_THREADS"): os.environ["OMP_NUM_THREADS"] = "1" # default for reduced CPU utilization during training from ultralytics.models import NAS, RTDETR, SAM, YOLO, FastSAM, YOLOWorld from ultralytics.utils import ASSETS, SETTINGS from ultralytics.utils.checks import check_yolo as checks from ultralytics.utils.downloads import download settings = SETTINGS __all__ = ( "__version__", "ASSETS", "YOLO", "YOLOWorld", "NAS", "SAM", "FastSAM", "RTDETR", "checks", "download", "settings", )
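A minimal sketch of the public API re-exported above; `bus.jpg` ships with the package under `ASSETS`, and the checkpoint is downloaded on first use.

```python
# Minimal usage sketch of the names exported by the package __init__ above.
from ultralytics import ASSETS, YOLO, checks, settings

checks()                             # environment and dependency report
print(settings)                      # persisted Ultralytics settings
model = YOLO("yolo11n.pt")           # checkpoint is downloaded on first use
results = model(ASSETS / "bus.jpg")  # run inference on the bundled sample image
```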
f pb: # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt LOGGER.info(f"Loading {w} for TensorFlow GraphDef inference...") import tensorflow as tf from ultralytics.engine.exporter import gd_outputs def wrap_frozen_graph(gd, inputs, outputs): """Wrap frozen graphs for deployment.""" x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped ge = x.graph.as_graph_element return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) gd = tf.Graph().as_graph_def() # TF GraphDef with open(w, "rb") as f: gd.ParseFromString(f.read()) frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs=gd_outputs(gd)) try: # find metadata in SavedModel alongside GraphDef metadata = next(Path(w).resolve().parent.rglob(f"{Path(w).stem}_saved_model*/metadata.yaml")) except StopIteration: pass # TFLite or TFLite Edge TPU elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu from tflite_runtime.interpreter import Interpreter, load_delegate except ImportError: import tensorflow as tf Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime device = device[3:] if str(device).startswith("tpu") else ":0" LOGGER.info(f"Loading {w} on device {device[1:]} for TensorFlow Lite Edge TPU inference...") delegate = {"Linux": "libedgetpu.so.1", "Darwin": "libedgetpu.1.dylib", "Windows": "edgetpu.dll"}[ platform.system() ] interpreter = Interpreter( model_path=w, experimental_delegates=[load_delegate(delegate, options={"device": device})], ) else: # TFLite LOGGER.info(f"Loading {w} for TensorFlow Lite inference...") interpreter = Interpreter(model_path=w) # load TFLite model interpreter.allocate_tensors() # allocate input_details = interpreter.get_input_details() # inputs output_details = interpreter.get_output_details() # outputs # Load metadata try: with zipfile.ZipFile(w, "r") as model: meta_file = model.namelist()[0] metadata = ast.literal_eval(model.read(meta_file).decode("utf-8")) except zipfile.BadZipFile: pass # TF.js elif tfjs: raise NotImplementedError("YOLOv8 TF.js inference is not currently supported.") # PaddlePaddle elif paddle: LOGGER.info(f"Loading {w} for PaddlePaddle inference...") check_requirements("paddlepaddle-gpu" if cuda else "paddlepaddle") import paddle.inference as pdi # noqa w = Path(w) if not w.is_file(): # if not *.pdmodel w = next(w.rglob("*.pdmodel")) # get *.pdmodel file from *_paddle_model dir config = pdi.Config(str(w), str(w.with_suffix(".pdiparams"))) if cuda: config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0) predictor = pdi.create_predictor(config) input_handle = predictor.get_input_handle(predictor.get_input_names()[0]) output_names = predictor.get_output_names() metadata = w.parents[1] / "metadata.yaml" # NCNN elif ncnn: LOGGER.info(f"Loading {w} for NCNN inference...") check_requirements("git+https://github.com/Tencent/ncnn.git" if ARM64 else "ncnn") # requires NCNN import ncnn as pyncnn net = pyncnn.Net() net.opt.use_vulkan_compute = cuda w = Path(w) if not w.is_file(): # if not *.param w = next(w.glob("*.param")) # get *.param file from *_ncnn_model dir net.load_param(str(w)) net.load_model(str(w.with_suffix(".bin"))) metadata = w.parent / "metadata.yaml" # NVIDIA Triton Inference Server elif triton: check_requirements("tritonclient[all]") from 
ultralytics.utils.triton import TritonRemoteModel model = TritonRemoteModel(w) # Any other format (unsupported) else: from ultralytics.engine.exporter import export_formats raise TypeError( f"model='{w}' is not a supported model format. Ultralytics supports: {export_formats()['Format']}\n" f"See https://docs.ultralytics.com/modes/predict for help." ) # Load external metadata YAML if isinstance(metadata, (str, Path)) and Path(metadata).exists(): metadata = yaml_load(metadata) if metadata and isinstance(metadata, dict): for k, v in metadata.items(): if k in {"stride", "batch"}: metadata[k] = int(v) elif k in {"imgsz", "names", "kpt_shape"} and isinstance(v, str): metadata[k] = eval(v) stride = metadata["stride"] task = metadata["task"] batch = metadata["batch"] imgsz = metadata["imgsz"] names = metadata["names"] kpt_shape = metadata.get("kpt_shape") elif not (pt or triton or nn_module): LOGGER.warning(f"WARNING ⚠️ Metadata not found for 'model={weights}'") # Check names if "names" not in locals(): # names missing names = default_class_names(data) names = check_class_names(names) # Disable gradients if pt: for p in model.parameters(): p.requires_grad = False self.__dict__.update(locals()) # assign all variables to self def forward(self, im, augment=False, visualize=False, embed=None): """ Runs inference on the YOLOv8 MultiBackend model. Args: im (torch.Tensor): The image tensor to perform inference on. augment (bool): whether to perform data augmentation during inference, defaults to False visualize (bool): whether to visualize the output predictions, defaults to False embed (list, optional): A list of feature vectors/embeddings to return. Returns: (tuple): Tuple containing the raw output tensor, and processed output for visualization (if visualize=True) """ b, ch, h, w = im.shape # batch, channel, height, width if self.fp16 and im.dtype != torch.float16: im = im.half() # to FP16 if self.nhwc: im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3) # PyTorch
ss Detect(nn.Module): """YOLO Detect head for detection models.""" dynamic = False # force grid reconstruction export = False # export mode end2end = False # end2end max_det = 300 # max_det shape = None anchors = torch.empty(0) # init strides = torch.empty(0) # init legacy = False # backward compatibility for v3/v5/v8/v9 models def __init__(self, nc=80, ch=()): """Initializes the YOLO detection layer with specified number of classes and channels.""" super().__init__() self.nc = nc # number of classes self.nl = len(ch) # number of detection layers self.reg_max = 16 # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x) self.no = nc + self.reg_max * 4 # number of outputs per anchor self.stride = torch.zeros(self.nl) # strides computed during build c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100)) # channels self.cv2 = nn.ModuleList( nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch ) self.cv3 = ( nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch) if self.legacy else nn.ModuleList( nn.Sequential( nn.Sequential(DWConv(x, x, 3), Conv(x, c3, 1)), nn.Sequential(DWConv(c3, c3, 3), Conv(c3, c3, 1)), nn.Conv2d(c3, self.nc, 1), ) for x in ch ) ) self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity() if self.end2end: self.one2one_cv2 = copy.deepcopy(self.cv2) self.one2one_cv3 = copy.deepcopy(self.cv3) def forward(self, x): """Concatenates and returns predicted bounding boxes and class probabilities.""" if self.end2end: return self.forward_end2end(x) for i in range(self.nl): x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1) if self.training: # Training path return x y = self._inference(x) return y if self.export else (y, x) def forward_end2end(self, x): """ Performs forward pass of the v10Detect module. Args: x (tensor): Input tensor. Returns: (dict, tensor): If not in training mode, returns a dictionary containing the outputs of both one2many and one2one detections. If in training mode, returns a dictionary containing the outputs of one2many and one2one detections separately. 
""" x_detach = [xi.detach() for xi in x] one2one = [ torch.cat((self.one2one_cv2[i](x_detach[i]), self.one2one_cv3[i](x_detach[i])), 1) for i in range(self.nl) ] for i in range(self.nl): x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1) if self.training: # Training path return {"one2many": x, "one2one": one2one} y = self._inference(one2one) y = self.postprocess(y.permute(0, 2, 1), self.max_det, self.nc) return y if self.export else (y, {"one2many": x, "one2one": one2one}) def _inference(self, x): """Decode predicted bounding boxes and class probabilities based on multiple-level feature maps.""" # Inference path shape = x[0].shape # BCHW x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2) if self.dynamic or self.shape != shape: self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5)) self.shape = shape if self.export and self.format in {"saved_model", "pb", "tflite", "edgetpu", "tfjs"}: # avoid TF FlexSplitV ops box = x_cat[:, : self.reg_max * 4] cls = x_cat[:, self.reg_max * 4 :] else: box, cls = x_cat.split((self.reg_max * 4, self.nc), 1) if self.export and self.format in {"tflite", "edgetpu"}: # Precompute normalization factor to increase numerical stability # See https://github.com/ultralytics/ultralytics/issues/7371 grid_h = shape[2] grid_w = shape[3] grid_size = torch.tensor([grid_w, grid_h, grid_w, grid_h], device=box.device).reshape(1, 4, 1) norm = self.strides / (self.stride[0] * grid_size) dbox = self.decode_bboxes(self.dfl(box) * norm, self.anchors.unsqueeze(0) * norm[:, :2]) else: dbox = self.decode_bboxes(self.dfl(box), self.anchors.unsqueeze(0)) * self.strides return torch.cat((dbox, cls.sigmoid()), 1) def bias_init(self): """Initialize Detect() biases, WARNING: requires stride availability.""" m = self # self.model[-1] # Detect() module # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1 # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # nominal class frequency for a, b, s in zip(m.cv2, m.cv3, m.stride): # from a[-1].bias.data[:] = 1.0 # box b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2) # cls (.01 objects, 80 classes, 640 img) if self.end2end: for a, b, s in zip(m.one2one_cv2, m.one2one_cv3, m.stride): # from a[-1].bias.data[:] = 1.0 # box b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2) # cls (.01 objects, 80 classes, 640 img) def decode_bboxes(self, bboxes, anchors): """Decode bounding boxes.""" return dist2bbox(bboxes, anchors, xywh=not self.end2end, dim=1) @staticmethod def postprocess(preds: torch.Tensor, max_det: int, nc: int = 80): """ Post-processes YOLO model predictions. Args: preds (torch.Tensor): Raw predictions with shape (batch_size, num_anchors, 4 + nc) with last dimension format [x, y, w, h, class_probs]. max_det (int): Maximum detections per image. nc (int, optional): Number of classes. Default: 80. Returns: (torch.Tensor): Processed predictions with shape (batch_size, min(max_det, num_anchors), 6) and last dimension format [x, y, w, h, max_class_prob, class_index]. """ batch_size, anchors, _ = preds.shape # i.e. 
shape(16,8400,84) boxes, scores = preds.split([4, nc], dim=-1) index = scores.amax(dim=-1).topk(min(max_det, anchors))[1].unsqueeze(-1) boxes = boxes.gather(dim=1, index=index.repeat(1, 1, 4)) scores = scores.gather(dim=1, index=index.repeat(1, 1, nc)) scores, index = scores.flatten(1).topk(min(max_det, anchors)) i = torch.arange(batch_size)[..., None] # batch indices return torch.cat([boxes[i, index // nc], scores[..., None], (index % nc)[..., None].float()], dim=-1)
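A standalone sketch of the top-k anchor selection that `Detect.postprocess` performs, run on random tensors so the shapes are easy to follow (8400 anchors and 80 classes are illustrative values, not fixed):

```python
# Standalone sketch of the top-k anchor selection used in Detect.postprocess above.
import torch

preds = torch.rand(2, 8400, 4 + 80)                     # (batch, anchors, xywh + class scores)
boxes, scores = preds.split([4, 80], dim=-1)
idx = scores.amax(dim=-1).topk(300)[1].unsqueeze(-1)    # indices of the 300 highest-scoring anchors
boxes = boxes.gather(dim=1, index=idx.repeat(1, 1, 4))  # gather their boxes
print(boxes.shape)                                       # torch.Size([2, 300, 4])
```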
# Ultralytics YOLO 🚀, AGPL-3.0 license import io import time import cv2 import torch from ultralytics.utils.checks import check_requirements from ultralytics.utils.downloads import GITHUB_ASSETS_STEMS def inference(model=None): """Performs real-time object detection on video input using YOLO in a Streamlit web application.""" check_requirements("streamlit>=1.29.0") # scope imports for faster ultralytics package load speeds import streamlit as st from ultralytics import YOLO # Hide main menu style menu_style_cfg = """<style>MainMenu {visibility: hidden;}</style>""" # Main title of streamlit application main_title_cfg = """<div><h1 style="color:#FF64DA; text-align:center; font-size:40px; font-family: 'Archivo', sans-serif; margin-top:-50px;margin-bottom:20px;"> Ultralytics YOLO Streamlit Application </h1></div>""" # Subtitle of streamlit application sub_title_cfg = """<div><h4 style="color:#042AFF; text-align:center; font-family: 'Archivo', sans-serif; margin-top:-15px; margin-bottom:50px;"> Experience real-time object detection on your webcam with the power of Ultralytics YOLO! 🚀</h4> </div>""" # Set html page configuration st.set_page_config(page_title="Ultralytics Streamlit App", layout="wide", initial_sidebar_state="auto") # Append the custom HTML st.markdown(menu_style_cfg, unsafe_allow_html=True) st.markdown(main_title_cfg, unsafe_allow_html=True) st.markdown(sub_title_cfg, unsafe_allow_html=True) # Add ultralytics logo in sidebar with st.sidebar: logo = "https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" st.image(logo, width=250) # Add elements to vertical setting menu st.sidebar.title("User Configuration") # Add video source selection dropdown source = st.sidebar.selectbox( "Video", ("webcam", "video"), ) vid_file_name = "" if source == "video": vid_file = st.sidebar.file_uploader("Upload Video File", type=["mp4", "mov", "avi", "mkv"]) if vid_file is not None: g = io.BytesIO(vid_file.read()) # BytesIO Object vid_location = "ultralytics.mp4" with open(vid_location, "wb") as out: # Open temporary file as bytes out.write(g.read()) # Read bytes into file vid_file_name = "ultralytics.mp4" elif source == "webcam": vid_file_name = 0 # Add dropdown menu for model selection available_models = [x.replace("yolo", "YOLO") for x in GITHUB_ASSETS_STEMS if x.startswith("yolo11")] if model: available_models.insert(0, model.split(".pt")[0]) # insert model without suffix as *.pt is added later selected_model = st.sidebar.selectbox("Model", available_models) with st.spinner("Model is downloading..."): model = YOLO(f"{selected_model.lower()}.pt") # Load the YOLO model class_names = list(model.names.values()) # Convert dictionary to list of class names st.success("Model loaded successfully!") # Multiselect box with class names and get indices of selected classes selected_classes = st.sidebar.multiselect("Classes", class_names, default=class_names[:3]) selected_ind = [class_names.index(option) for option in selected_classes] if not isinstance(selected_ind, list): # Ensure selected_options is a list selected_ind = list(selected_ind) enable_trk = st.sidebar.radio("Enable Tracking", ("Yes", "No")) conf = float(st.sidebar.slider("Confidence Threshold", 0.0, 1.0, 0.25, 0.01)) iou = float(st.sidebar.slider("IoU Threshold", 0.0, 1.0, 0.45, 0.01)) col1, col2 = st.columns(2) org_frame = col1.empty() ann_frame = col2.empty() fps_display = st.sidebar.empty() # Placeholder for FPS display if st.sidebar.button("Start"): videocapture = cv2.VideoCapture(vid_file_name) # 
Capture the video if not videocapture.isOpened(): st.error("Could not open webcam.") stop_button = st.button("Stop") # Button to stop the inference while videocapture.isOpened(): success, frame = videocapture.read() if not success: st.warning("Failed to read frame from webcam. Please make sure the webcam is connected properly.") break prev_time = time.time() # Store initial time for FPS calculation # Store model predictions if enable_trk == "Yes": results = model.track(frame, conf=conf, iou=iou, classes=selected_ind, persist=True) else: results = model(frame, conf=conf, iou=iou, classes=selected_ind) annotated_frame = results[0].plot() # Add annotations on frame # Calculate model FPS curr_time = time.time() fps = 1 / (curr_time - prev_time) # display frame org_frame.image(frame, channels="BGR") ann_frame.image(annotated_frame, channels="BGR") if stop_button: videocapture.release() # Release the capture torch.cuda.empty_cache() # Clear CUDA memory st.stop() # Stop streamlit app # Display FPS in sidebar fps_display.metric("FPS", f"{fps:.2f}") # Release the capture videocapture.release() # Clear CUDA memory torch.cuda.empty_cache() # Destroy window cv2.destroyAllWindows() # Main function call if __name__ == "__main__": inference()
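One way to launch the app above, assuming the code is saved locally as `streamlit_inference.py` (the filename is an assumption):

```python
# Launch sketch for the Streamlit app above; the local filename is an assumption.
import subprocess

subprocess.run(["streamlit", "run", "streamlit_inference.py"], check=True)
```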
# Ultralytics YOLO 🚀, AGPL-3.0 license from collections import defaultdict import cv2 from ultralytics import YOLO from ultralytics.utils import DEFAULT_CFG_DICT, DEFAULT_SOL_DICT, LOGGER from ultralytics.utils.checks import check_imshow, check_requirements class BaseSolution: """ A base class for managing Ultralytics Solutions. This class provides core functionality for various Ultralytics Solutions, including model loading, object tracking, and region initialization. Attributes: LineString (shapely.geometry.LineString): Class for creating line string geometries. Polygon (shapely.geometry.Polygon): Class for creating polygon geometries. Point (shapely.geometry.Point): Class for creating point geometries. CFG (Dict): Configuration dictionary loaded from a YAML file and updated with kwargs. region (List[Tuple[int, int]]): List of coordinate tuples defining a region of interest. line_width (int): Width of lines used in visualizations. model (ultralytics.YOLO): Loaded YOLO model instance. names (Dict[int, str]): Dictionary mapping class indices to class names. env_check (bool): Flag indicating whether the environment supports image display. track_history (collections.defaultdict): Dictionary to store tracking history for each object. Methods: extract_tracks: Apply object tracking and extract tracks from an input image. store_tracking_history: Store object tracking history for a given track ID and bounding box. initialize_region: Initialize the counting region and line segment based on configuration. display_output: Display the results of processing, including showing frames or saving results. Examples: >>> solution = BaseSolution(model="yolov8n.pt", region=[(0, 0), (100, 0), (100, 100), (0, 100)]) >>> solution.initialize_region() >>> image = cv2.imread("image.jpg") >>> solution.extract_tracks(image) >>> solution.display_output(image) """ def __init__(self, **kwargs): """Initializes the BaseSolution class with configuration settings and YOLO model for Ultralytics solutions.""" check_requirements("shapely>=2.0.0") from shapely.geometry import LineString, Point, Polygon self.LineString = LineString self.Polygon = Polygon self.Point = Point # Load config and update with args DEFAULT_SOL_DICT.update(kwargs) DEFAULT_CFG_DICT.update(kwargs) self.CFG = {**DEFAULT_SOL_DICT, **DEFAULT_CFG_DICT} LOGGER.info(f"Ultralytics Solutions: ✅ {DEFAULT_SOL_DICT}") self.region = self.CFG["region"] # Store region data for other classes usage self.line_width = ( self.CFG["line_width"] if self.CFG["line_width"] is not None else 2 ) # Store line_width for usage # Load Model and store classes names self.model = YOLO(self.CFG["model"] if self.CFG["model"] else "yolov8n.pt") self.names = self.model.names # Initialize environment and region setup self.env_check = check_imshow(warn=True) self.track_history = defaultdict(list) def extract_tracks(self, im0): """ Applies object tracking and extracts tracks from an input image or frame. Args: im0 (ndarray): The input image or frame. 
Examples: >>> solution = BaseSolution() >>> frame = cv2.imread("path/to/image.jpg") >>> solution.extract_tracks(frame) """ self.tracks = self.model.track(source=im0, persist=True, classes=self.CFG["classes"]) # Extract tracks for OBB or object detection self.track_data = self.tracks[0].obb or self.tracks[0].boxes if self.track_data and self.track_data.id is not None: self.boxes = self.track_data.xyxy.cpu() self.clss = self.track_data.cls.cpu().tolist() self.track_ids = self.track_data.id.int().cpu().tolist() else: LOGGER.warning("WARNING ⚠️ no tracks found!") self.boxes, self.clss, self.track_ids = [], [], [] def store_tracking_history(self, track_id, box): """ Stores the tracking history of an object. This method updates the tracking history for a given object by appending the center point of its bounding box to the track line. It maintains a maximum of 30 points in the tracking history. Args: track_id (int): The unique identifier for the tracked object. box (List[float]): The bounding box coordinates of the object in the format [x1, y1, x2, y2]. Examples: >>> solution = BaseSolution() >>> solution.store_tracking_history(1, [100, 200, 300, 400]) """ # Store tracking history self.track_line = self.track_history[track_id] self.track_line.append(((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)) if len(self.track_line) > 30: self.track_line.pop(0) def initialize_region(self): """Initialize the counting region and line segment based on configuration settings.""" if self.region is None: self.region = [(20, 400), (1080, 404), (1080, 360), (20, 360)] self.r_s = ( self.Polygon(self.region) if len(self.region) >= 3 else self.LineString(self.region) ) # region or line def display_output(self, im0): """ Display the results of the processing, which could involve showing frames, printing counts, or saving results. This method is responsible for visualizing the output of the object detection and tracking process. It displays the processed frame with annotations, and allows for user interaction to close the display. Args: im0 (numpy.ndarray): The input image or frame that has been processed and annotated. Examples: >>> solution = BaseSolution() >>> frame = cv2.imread("path/to/image.jpg") >>> solution.display_output(frame) Notes: - This method will only display output if the 'show' configuration is set to True and the environment supports image display. - The display can be closed by pressing the 'q' key. """ if self.CFG.get("show") and self.env_check: cv2.imshow("Ultralytics Solutions", im0) if cv2.waitKey(1) & 0xFF == ord("q"): return
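A minimal usage sketch that follows the `BaseSolution` docstring above; the import path and weights file are assumptions.

```python
# Minimal usage sketch following the BaseSolution docstring; import path and weights are assumptions.
import cv2
from ultralytics.solutions.solutions import BaseSolution

solution = BaseSolution(model="yolo11n.pt", region=[(0, 0), (100, 0), (100, 100), (0, 100)])
solution.initialize_region()
frame = cv2.imread("image.jpg")
solution.extract_tracks(frame)
for track_id, box in zip(solution.track_ids, solution.boxes):
    solution.store_tracking_history(track_id, box)
solution.display_output(frame)
```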
# Ultralytics YOLO 🚀, AGPL-3.0 license from collections import abc from itertools import repeat from numbers import Number from typing import List import numpy as np from .ops import ltwh2xywh, ltwh2xyxy, xywh2ltwh, xywh2xyxy, xyxy2ltwh, xyxy2xywh def _ntuple(n): """From PyTorch internals.""" def parse(x): """Parse bounding boxes format between XYWH and LTWH.""" return x if isinstance(x, abc.Iterable) else tuple(repeat(x, n)) return parse to_2tuple = _ntuple(2) to_4tuple = _ntuple(4) # `xyxy` means left top and right bottom # `xywh` means center x, center y and width, height(YOLO format) # `ltwh` means left top and width, height(COCO format) _formats = ["xyxy", "xywh", "ltwh"] __all__ = ("Bboxes",) # tuple or list class Bboxes: """ A class for handling bounding boxes. The class supports various bounding box formats like 'xyxy', 'xywh', and 'ltwh'. Bounding box data should be provided in numpy arrays. Attributes: bboxes (numpy.ndarray): The bounding boxes stored in a 2D numpy array. format (str): The format of the bounding boxes ('xyxy', 'xywh', or 'ltwh'). Note: This class does not handle normalization or denormalization of bounding boxes. """ def __init__(self, bboxes, format="xyxy") -> None: """Initializes the Bboxes class with bounding box data in a specified format.""" assert format in _formats, f"Invalid bounding box format: {format}, format must be one of {_formats}" bboxes = bboxes[None, :] if bboxes.ndim == 1 else bboxes assert bboxes.ndim == 2 assert bboxes.shape[1] == 4 self.bboxes = bboxes self.format = format # self.normalized = normalized def convert(self, format): """Converts bounding box format from one type to another.""" assert format in _formats, f"Invalid bounding box format: {format}, format must be one of {_formats}" if self.format == format: return elif self.format == "xyxy": func = xyxy2xywh if format == "xywh" else xyxy2ltwh elif self.format == "xywh": func = xywh2xyxy if format == "xyxy" else xywh2ltwh else: func = ltwh2xyxy if format == "xyxy" else ltwh2xywh self.bboxes = func(self.bboxes) self.format = format def areas(self): """Return box areas.""" return ( (self.bboxes[:, 2] - self.bboxes[:, 0]) * (self.bboxes[:, 3] - self.bboxes[:, 1]) # format xyxy if self.format == "xyxy" else self.bboxes[:, 3] * self.bboxes[:, 2] # format xywh or ltwh ) # def denormalize(self, w, h): # if not self.normalized: # return # assert (self.bboxes <= 1.0).all() # self.bboxes[:, 0::2] *= w # self.bboxes[:, 1::2] *= h # self.normalized = False # # def normalize(self, w, h): # if self.normalized: # return # assert (self.bboxes > 1.0).any() # self.bboxes[:, 0::2] /= w # self.bboxes[:, 1::2] /= h # self.normalized = True def mul(self, scale): """ Multiply bounding box coordinates by scale factor(s). Args: scale (int | tuple | list): Scale factor(s) for four coordinates. If int, the same scale is applied to all coordinates. """ if isinstance(scale, Number): scale = to_4tuple(scale) assert isinstance(scale, (tuple, list)) assert len(scale) == 4 self.bboxes[:, 0] *= scale[0] self.bboxes[:, 1] *= scale[1] self.bboxes[:, 2] *= scale[2] self.bboxes[:, 3] *= scale[3] def add(self, offset): """ Add offset to bounding box coordinates. Args: offset (int | tuple | list): Offset(s) for four coordinates. If int, the same offset is applied to all coordinates. 
""" if isinstance(offset, Number): offset = to_4tuple(offset) assert isinstance(offset, (tuple, list)) assert len(offset) == 4 self.bboxes[:, 0] += offset[0] self.bboxes[:, 1] += offset[1] self.bboxes[:, 2] += offset[2] self.bboxes[:, 3] += offset[3] def __len__(self): """Return the number of boxes.""" return len(self.bboxes) @classmethod def concatenate(cls, boxes_list: List["Bboxes"], axis=0) -> "Bboxes": """ Concatenate a list of Bboxes objects into a single Bboxes object. Args: boxes_list (List[Bboxes]): A list of Bboxes objects to concatenate. axis (int, optional): The axis along which to concatenate the bounding boxes. Defaults to 0. Returns: Bboxes: A new Bboxes object containing the concatenated bounding boxes. Note: The input should be a list or tuple of Bboxes objects. """ assert isinstance(boxes_list, (list, tuple)) if not boxes_list: return cls(np.empty(0)) assert all(isinstance(box, Bboxes) for box in boxes_list) if len(boxes_list) == 1: return boxes_list[0] return cls(np.concatenate([b.bboxes for b in boxes_list], axis=axis)) def __getitem__(self, index) -> "Bboxes": """ Retrieve a specific bounding box or a set of bounding boxes using indexing. Args: index (int, slice, or np.ndarray): The index, slice, or boolean array to select the desired bounding boxes. Returns: Bboxes: A new Bboxes object containing the selected bounding boxes. Raises: AssertionError: If the indexed bounding boxes do not form a 2-dimensional matrix. Note: When using boolean indexing, make sure to provide a boolean array with the same length as the number of bounding boxes. """ if isinstance(index, int): return Bboxes(self.bboxes[index].view(1, -1)) b = self.bboxes[index] assert b.ndim == 2, f"Indexing on Bboxes with {index} failed to return a matrix!" return Bboxes(b)
ss Instances: """ Container for bounding boxes, segments, and keypoints of detected objects in an image. Attributes: _bboxes (Bboxes): Internal object for handling bounding box operations. keypoints (ndarray): keypoints(x, y, visible) with shape [N, 17, 3]. Default is None. normalized (bool): Flag indicating whether the bounding box coordinates are normalized. segments (ndarray): Segments array with shape [N, 1000, 2] after resampling. Args: bboxes (ndarray): An array of bounding boxes with shape [N, 4]. segments (list | ndarray, optional): A list or array of object segments. Default is None. keypoints (ndarray, optional): An array of keypoints with shape [N, 17, 3]. Default is None. bbox_format (str, optional): The format of bounding boxes ('xywh' or 'xyxy'). Default is 'xywh'. normalized (bool, optional): Whether the bounding box coordinates are normalized. Default is True. Examples: ```python # Create an Instances object instances = Instances( bboxes=np.array([[10, 10, 30, 30], [20, 20, 40, 40]]), segments=[np.array([[5, 5], [10, 10]]), np.array([[15, 15], [20, 20]])], keypoints=np.array([[[5, 5, 1], [10, 10, 1]], [[15, 15, 1], [20, 20, 1]]]), ) ``` Note: The bounding box format is either 'xywh' or 'xyxy', and is determined by the `bbox_format` argument. This class does not perform input validation, and it assumes the inputs are well-formed. """ def __init__(self, bboxes, segments=None, keypoints=None, bbox_format="xywh", normalized=True) -> None: """ Initialize the object with bounding boxes, segments, and keypoints. Args: bboxes (np.ndarray): Bounding boxes, shape [N, 4]. segments (list | np.ndarray, optional): Segmentation masks. Defaults to None. keypoints (np.ndarray, optional): Keypoints, shape [N, 17, 3] and format (x, y, visible). Defaults to None. bbox_format (str, optional): Format of bboxes. Defaults to "xywh". normalized (bool, optional): Whether the coordinates are normalized. Defaults to True. """ self._bboxes = Bboxes(bboxes=bboxes, format=bbox_format) self.keypoints = keypoints self.normalized = normalized self.segments = segments def convert_bbox(self, format): """Convert bounding box format.""" self._bboxes.convert(format=format) @property def bbox_areas(self): """Calculate the area of bounding boxes.""" return self._bboxes.areas() def scale(self, scale_w, scale_h, bbox_only=False): """Similar to denormalize func but without normalized sign.""" self._bboxes.mul(scale=(scale_w, scale_h, scale_w, scale_h)) if bbox_only: return self.segments[..., 0] *= scale_w self.segments[..., 1] *= scale_h if self.keypoints is not None: self.keypoints[..., 0] *= scale_w self.keypoints[..., 1] *= scale_h def denormalize(self, w, h): """Denormalizes boxes, segments, and keypoints from normalized coordinates.""" if not self.normalized: return self._bboxes.mul(scale=(w, h, w, h)) self.segments[..., 0] *= w self.segments[..., 1] *= h if self.keypoints is not None: self.keypoints[..., 0] *= w self.keypoints[..., 1] *= h self.normalized = False def normalize(self, w, h): """Normalize bounding boxes, segments, and keypoints to image dimensions.""" if self.normalized: return self._bboxes.mul(scale=(1 / w, 1 / h, 1 / w, 1 / h)) self.segments[..., 0] /= w self.segments[..., 1] /= h if self.keypoints is not None: self.keypoints[..., 0] /= w self.keypoints[..., 1] /= h self.normalized = True def add_padding(self, padw, padh): """Handle rect and mosaic situation.""" assert not self.normalized, "you should add padding with absolute coordinates." 
self._bboxes.add(offset=(padw, padh, padw, padh)) self.segments[..., 0] += padw self.segments[..., 1] += padh if self.keypoints is not None: self.keypoints[..., 0] += padw self.keypoints[..., 1] += padh def __getitem__(self, index) -> "Instances": """ Retrieve a specific instance or a set of instances using indexing. Args: index (int, slice, or np.ndarray): The index, slice, or boolean array to select the desired instances. Returns: Instances: A new Instances object containing the selected bounding boxes, segments, and keypoints if present. Note: When using boolean indexing, make sure to provide a boolean array with the same length as the number of instances. """ segments = self.segments[index] if len(self.segments) else self.segments keypoints = self.keypoints[index] if self.keypoints is not None else None bboxes = self.bboxes[index] bbox_format = self._bboxes.format return Instances( bboxes=bboxes, segments=segments, keypoints=keypoints, bbox_format=bbox_format, normalized=self.normalized, ) def flipud(self, h): """Flips the coordinates of bounding boxes, segments, and keypoints vertically.""" if self._bboxes.format == "xyxy": y1 = self.bboxes[:, 1].copy() y2 = self.bboxes[:, 3].copy() self.bboxes[:, 1] = h - y2 self.bboxes[:, 3] = h - y1 else: self.bboxes[:, 1] = h - self.bboxes[:, 1] self.segments[..., 1] = h - self.segments[..., 1] if self.keypoints is not None: self.keypoints[..., 1] = h - self.keypoints[..., 1] def fliplr(self, w): """Reverses the order of the bounding boxes and segments horizontally.""" if self._bboxes.format == "xyxy": x1 = self.bboxes[:, 0].copy() x2 = self.bboxes[:, 2].copy() self.bboxes[:, 0] = w - x2 self.bboxes[:, 2] = w - x1 else: self.bboxes[:, 0] = w - self.bboxes[:, 0] self.segments[..., 0] = w - self.segments[..., 0] if self.keypoints is not None: self.keypoints[..., 0] = w - self.keypoints[..., 0] def clip(self, w, h): """Clips bounding boxes, segments, and keypoints values to stay within image boundaries.""" ori_format = self._bboxes.format self.convert_bbox(format="xyxy") self.bboxes[:, [0, 2]] = self.bboxes[:, [0, 2]].clip(0, w) self.bboxes[:, [1, 3]] = self.bboxes[:, [1, 3]].clip(0, h) if ori_format != "xyxy": self.convert_bbox(format=ori_format) self.segments[..., 0] = self.segments[..., 0].clip(0, w) self.segments[..., 1] = self.segments[..., 1].clip(0, h) if self.keypoints is not None: self.keypoints[..., 0] = self.keypoints[..., 0].clip(0, w) self.keypoints[..., 1] = self.keypoints[..., 1].clip(0, h) def remove_zero_area_boxes(self): """Remove zero-area boxes, i.e. after clipping some boxes may have zero width or height.""" good = self.bbox_areas > 0 if not all(good): self._bboxes = self._bboxes[good] if len(self.segments): self.segments = self.segments[good] if self.keypoints is not None: self.keypoints = self.keypoints[good] return good
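A minimal sketch of `Instances` coordinate handling; the module path and the segment array shape are assumptions.

```python
# Minimal sketch of Instances coordinate handling; module path and segment shape are assumptions.
import numpy as np
from ultralytics.utils.instance import Instances

inst = Instances(
    bboxes=np.array([[0.5, 0.5, 0.2, 0.4]]),            # one normalized xywh box
    segments=np.zeros((1, 1000, 2), dtype=np.float32),  # resampled segment points
    bbox_format="xywh",
    normalized=True,
)
inst.denormalize(w=640, h=480)  # scale boxes and segments to pixel coordinates
inst.convert_bbox("xyxy")
inst.clip(w=640, h=480)
print(inst.bboxes, inst.bbox_areas)
```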
etric(SimpleClass): """ Class for computing evaluation metrics for YOLOv8 model. Attributes: p (list): Precision for each class. Shape: (nc,). r (list): Recall for each class. Shape: (nc,). f1 (list): F1 score for each class. Shape: (nc,). all_ap (list): AP scores for all classes and all IoU thresholds. Shape: (nc, 10). ap_class_index (list): Index of class for each AP score. Shape: (nc,). nc (int): Number of classes. Methods: ap50(): AP at IoU threshold of 0.5 for all classes. Returns: List of AP scores. Shape: (nc,) or []. ap(): AP at IoU thresholds from 0.5 to 0.95 for all classes. Returns: List of AP scores. Shape: (nc,) or []. mp(): Mean precision of all classes. Returns: Float. mr(): Mean recall of all classes. Returns: Float. map50(): Mean AP at IoU threshold of 0.5 for all classes. Returns: Float. map75(): Mean AP at IoU threshold of 0.75 for all classes. Returns: Float. map(): Mean AP at IoU thresholds from 0.5 to 0.95 for all classes. Returns: Float. mean_results(): Mean of results, returns mp, mr, map50, map. class_result(i): Class-aware result, returns p[i], r[i], ap50[i], ap[i]. maps(): mAP of each class. Returns: Array of mAP scores, shape: (nc,). fitness(): Model fitness as a weighted combination of metrics. Returns: Float. update(results): Update metric attributes with new evaluation results. """ def __init__(self) -> None: """Initializes a Metric instance for computing evaluation metrics for the YOLOv8 model.""" self.p = [] # (nc, ) self.r = [] # (nc, ) self.f1 = [] # (nc, ) self.all_ap = [] # (nc, 10) self.ap_class_index = [] # (nc, ) self.nc = 0 @property def ap50(self): """ Returns the Average Precision (AP) at an IoU threshold of 0.5 for all classes. Returns: (np.ndarray, list): Array of shape (nc,) with AP50 values per class, or an empty list if not available. """ return self.all_ap[:, 0] if len(self.all_ap) else [] @property def ap(self): """ Returns the Average Precision (AP) at an IoU threshold of 0.5-0.95 for all classes. Returns: (np.ndarray, list): Array of shape (nc,) with AP50-95 values per class, or an empty list if not available. """ return self.all_ap.mean(1) if len(self.all_ap) else [] @property def mp(self): """ Returns the Mean Precision of all classes. Returns: (float): The mean precision of all classes. """ return self.p.mean() if len(self.p) else 0.0 @property def mr(self): """ Returns the Mean Recall of all classes. Returns: (float): The mean recall of all classes. """ return self.r.mean() if len(self.r) else 0.0 @property def map50(self): """ Returns the mean Average Precision (mAP) at an IoU threshold of 0.5. Returns: (float): The mAP at an IoU threshold of 0.5. """ return self.all_ap[:, 0].mean() if len(self.all_ap) else 0.0 @property def map75(self): """ Returns the mean Average Precision (mAP) at an IoU threshold of 0.75. Returns: (float): The mAP at an IoU threshold of 0.75. """ return self.all_ap[:, 5].mean() if len(self.all_ap) else 0.0 @property def map(self): """ Returns the mean Average Precision (mAP) over IoU thresholds of 0.5 - 0.95 in steps of 0.05. Returns: (float): The mAP over IoU thresholds of 0.5 - 0.95 in steps of 0.05. 
""" return self.all_ap.mean() if len(self.all_ap) else 0.0 def mean_results(self): """Mean of results, return mp, mr, map50, map.""" return [self.mp, self.mr, self.map50, self.map] def class_result(self, i): """Class-aware result, return p[i], r[i], ap50[i], ap[i].""" return self.p[i], self.r[i], self.ap50[i], self.ap[i] @property def maps(self): """MAP of each class.""" maps = np.zeros(self.nc) + self.map for i, c in enumerate(self.ap_class_index): maps[c] = self.ap[i] return maps def fitness(self): """Model fitness as a weighted combination of metrics.""" w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] return (np.array(self.mean_results()) * w).sum() def update(self, results): """ Updates the evaluation metrics of the model with a new set of results. Args: results (tuple): A tuple containing the following evaluation metrics: - p (list): Precision for each class. Shape: (nc,). - r (list): Recall for each class. Shape: (nc,). - f1 (list): F1 score for each class. Shape: (nc,). - all_ap (list): AP scores for all classes and all IoU thresholds. Shape: (nc, 10). - ap_class_index (list): Index of class for each AP score. Shape: (nc,). Side Effects: Updates the class attributes `self.p`, `self.r`, `self.f1`, `self.all_ap`, and `self.ap_class_index` based on the values provided in the `results` tuple. """ ( self.p, self.r, self.f1, self.all_ap, self.ap_class_index, self.p_curve, self.r_curve, self.f1_curve, self.px, self.prec_values, ) = results @property def curves(self): """Returns a list of curves for accessing specific metrics curves.""" return [] @property def curves_results(self): """Returns a list of curves for accessing specific metrics curves.""" return [ [self.px, self.prec_values, "Recall", "Precision"], [self.px, self.f1_curve, "Confidence", "F1"], [self.px, self.p_curve, "Confidence", "Precision"], [self.px, self.r_curve, "Confidence", "Recall"], ] clas
# Ultralytics YOLO 🚀, AGPL-3.0 license """Functions for estimating the best YOLO batch size to use a fraction of the available CUDA memory in PyTorch.""" import os from copy import deepcopy import numpy as np import torch from ultralytics.utils import DEFAULT_CFG, LOGGER, colorstr from ultralytics.utils.torch_utils import autocast, profile def check_train_batch_size(model, imgsz=640, amp=True, batch=-1): """ Compute optimal YOLO training batch size using the autobatch() function. Args: model (torch.nn.Module): YOLO model to check batch size for. imgsz (int, optional): Image size used for training. amp (bool, optional): Use automatic mixed precision if True. batch (float, optional): Fraction of GPU memory to use. If -1, use default. Returns: (int): Optimal batch size computed using the autobatch() function. Note: If 0.0 < batch < 1.0, it's used as the fraction of GPU memory to use. Otherwise, a default fraction of 0.6 is used. """ with autocast(enabled=amp): return autobatch(deepcopy(model).train(), imgsz, fraction=batch if 0.0 < batch < 1.0 else 0.6) def autobatch(model, imgsz=640, fraction=0.60, batch_size=DEFAULT_CFG.batch): """ Automatically estimate the best YOLO batch size to use a fraction of the available CUDA memory. Args: model (torch.nn.module): YOLO model to compute batch size for. imgsz (int, optional): The image size used as input for the YOLO model. Defaults to 640. fraction (float, optional): The fraction of available CUDA memory to use. Defaults to 0.60. batch_size (int, optional): The default batch size to use if an error is detected. Defaults to 16. Returns: (int): The optimal batch size. """ # Check device prefix = colorstr("AutoBatch: ") LOGGER.info(f"{prefix}Computing optimal batch size for imgsz={imgsz} at {fraction * 100}% CUDA memory utilization.") device = next(model.parameters()).device # get model device if device.type in {"cpu", "mps"}: LOGGER.info(f"{prefix} ⚠️ intended for CUDA devices, using default batch-size {batch_size}") return batch_size if torch.backends.cudnn.benchmark: LOGGER.info(f"{prefix} ⚠️ Requires torch.backends.cudnn.benchmark=False, using default batch-size {batch_size}") return batch_size # Inspect CUDA memory gb = 1 << 30 # bytes to GiB (1024 ** 3) d = f"CUDA:{os.getenv('CUDA_VISIBLE_DEVICES', '0').strip()[0]}" # 'CUDA:0' properties = torch.cuda.get_device_properties(device) # device properties t = properties.total_memory / gb # GiB total r = torch.cuda.memory_reserved(device) / gb # GiB reserved a = torch.cuda.memory_allocated(device) / gb # GiB allocated f = t - (r + a) # GiB free LOGGER.info(f"{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free") # Profile batch sizes batch_sizes = [1, 2, 4, 8, 16] if t < 16 else [1, 2, 4, 8, 16, 32, 64] try: img = [torch.empty(b, 3, imgsz, imgsz) for b in batch_sizes] results = profile(img, model, n=1, device=device) # Fit a solution y = [x[2] for x in results if x] # memory [2] p = np.polyfit(batch_sizes[: len(y)], y, deg=1) # first degree polynomial fit b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) if None in results: # some sizes failed i = results.index(None) # first fail index if b >= batch_sizes[i]: # y intercept above failure point b = batch_sizes[max(i - 1, 0)] # select prior safe point if b < 1 or b > 1024: # b outside of safe range b = batch_size LOGGER.info(f"{prefix}WARNING ⚠️ CUDA anomaly detected, using default batch-size {batch_size}.") fraction = (np.polyval(p, b) + r + a) / t # actual fraction predicted 
LOGGER.info(f"{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%) ✅") return b except Exception as e: LOGGER.warning(f"{prefix}WARNING ⚠️ error detected: {e}, using default batch-size {batch_size}.") return batch_size finally: torch.cuda.empty_cache()
271446
ments(requirements=ROOT.parent / "requirements.txt", exclude=(), install=True, cmds=""): """ Check if installed dependencies meet YOLOv8 requirements and attempt to auto-update if needed. Args: requirements (Union[Path, str, List[str]]): Path to a requirements.txt file, a single package requirement as a string, or a list of package requirements as strings. exclude (Tuple[str]): Tuple of package names to exclude from checking. install (bool): If True, attempt to auto-update packages that don't meet requirements. cmds (str): Additional commands to pass to the pip install command when auto-updating. Example: ```python from ultralytics.utils.checks import check_requirements # Check a requirements.txt file check_requirements("path/to/requirements.txt") # Check a single package check_requirements("ultralytics>=8.0.0") # Check multiple packages check_requirements(["numpy", "ultralytics>=8.0.0"]) ``` """ prefix = colorstr("red", "bold", "requirements:") if isinstance(requirements, Path): # requirements.txt file file = requirements.resolve() assert file.exists(), f"{prefix} {file} not found, check failed." requirements = [f"{x.name}{x.specifier}" for x in parse_requirements(file) if x.name not in exclude] elif isinstance(requirements, str): requirements = [requirements] pkgs = [] for r in requirements: r_stripped = r.split("/")[-1].replace(".git", "") # replace git+https://org/repo.git -> 'repo' match = re.match(r"([a-zA-Z0-9-_]+)([<>!=~]+.*)?", r_stripped) name, required = match[1], match[2].strip() if match[2] else "" try: assert check_version(metadata.version(name), required) # exception if requirements not met except (AssertionError, metadata.PackageNotFoundError): pkgs.append(r) @Retry(times=2, delay=1) def attempt_install(packages, commands): """Attempt pip install command with retries on failure.""" return subprocess.check_output(f"pip install --no-cache-dir {packages} {commands}", shell=True).decode() s = " ".join(f'"{x}"' for x in pkgs) # console string if s: if install and AUTOINSTALL: # check environment variable n = len(pkgs) # number of packages updates LOGGER.info(f"{prefix} Ultralytics requirement{'s' * (n > 1)} {pkgs} not found, attempting AutoUpdate...") try: t = time.time() assert ONLINE, "AutoUpdate skipped (offline)" LOGGER.info(attempt_install(s, cmds)) dt = time.time() - t LOGGER.info( f"{prefix} AutoUpdate success ✅ {dt:.1f}s, installed {n} package{'s' * (n > 1)}: {pkgs}\n" f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" ) except Exception as e: LOGGER.warning(f"{prefix} ❌ {e}") return False else: return False return True def check_torchvision(): """ Checks the installed versions of PyTorch and Torchvision to ensure they're compatible. This function checks the installed versions of PyTorch and Torchvision, and warns if they're incompatible according to the provided compatibility table based on: https://github.com/pytorch/vision#installation. The compatibility table is a dictionary where the keys are PyTorch versions and the values are lists of compatible Torchvision versions. 
""" # Compatibility table compatibility_table = { "2.4": ["0.19"], "2.3": ["0.18"], "2.2": ["0.17"], "2.1": ["0.16"], "2.0": ["0.15"], "1.13": ["0.14"], "1.12": ["0.13"], } # Extract only the major and minor versions v_torch = ".".join(torch.__version__.split("+")[0].split(".")[:2]) if v_torch in compatibility_table: compatible_versions = compatibility_table[v_torch] v_torchvision = ".".join(TORCHVISION_VERSION.split("+")[0].split(".")[:2]) if all(v_torchvision != v for v in compatible_versions): print( f"WARNING ⚠️ torchvision=={v_torchvision} is incompatible with torch=={v_torch}.\n" f"Run 'pip install torchvision=={compatible_versions[0]}' to fix torchvision or " "'pip install -U torch torchvision' to update both.\n" "For a full compatibility table see https://github.com/pytorch/vision#installation" ) def check_suffix(file="yolo11n.pt", suffix=".pt", msg=""): """Check file(s) for acceptable suffix.""" if file and suffix: if isinstance(suffix, str): suffix = (suffix,) for f in file if isinstance(file, (list, tuple)) else [file]: s = Path(f).suffix.lower().strip() # file suffix if len(s): assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}, not {s}" def check_yolov5u_filename(file: str, verbose: bool = True): """Replace legacy YOLOv5 filenames with updated YOLOv5u filenames.""" if "yolov3" in file or "yolov5" in file: if "u.yaml" in file: file = file.replace("u.yaml", ".yaml") # i.e. yolov5nu.yaml -> yolov5n.yaml elif ".pt" in file and "u" not in file: original_file = file file = re.sub(r"(.*yolov5([nsmlx]))\.pt", "\\1u.pt", file) # i.e. yolov5n.pt -> yolov5nu.pt file = re.sub(r"(.*yolov5([nsmlx])6)\.pt", "\\1u.pt", file) # i.e. yolov5n6.pt -> yolov5n6u.pt file = re.sub(r"(.*yolov3(|-tiny|-spp))\.pt", "\\1u.pt", file) # i.e. yolov3-spp.pt -> yolov3-sppu.pt if file != original_file and verbose: LOGGER.info( f"PRO TIP 💡 Replace 'model={original_file}' with new 'model={file}'.\nYOLOv5 'u' models are " f"trained with https://github.com/ultralytics/ultralytics and feature improved performance vs " f"standard YOLOv5 models trained with https://github.com/ultralytics/yolov5.\n" ) return file def check_model_file_from_stem(model="yolov8n"): """Return a model filename from a valid model stem.""" if model and not Path(model).suffix and Path(model).stem in downloads.GITHUB_ASSETS_STEMS: return Path(model).with_suffix(".pt") # add suffix, i.e. yolov8n -> yolov8n.pt else: return model def check_file(file, suffix="", download=T
271454
kpts(self, kpts, shape=(640, 640), radius=None, kpt_line=True, conf_thres=0.25, kpt_color=None): """ Plot keypoints on the image. Args: kpts (torch.Tensor): Keypoints, shape [17, 3] (x, y, confidence). shape (tuple, optional): Image shape (h, w). Defaults to (640, 640). radius (int, optional): Keypoint radius. Defaults to 5. kpt_line (bool, optional): Draw lines between keypoints. Defaults to True. conf_thres (float, optional): Confidence threshold. Defaults to 0.25. kpt_color (tuple, optional): Keypoint color (B, G, R). Defaults to None. Note: - `kpt_line=True` currently only supports human pose plotting. - Modifies self.im in-place. - If self.pil is True, converts image to numpy array and back to PIL. """ radius = radius if radius is not None else self.lw if self.pil: # Convert to numpy first self.im = np.asarray(self.im).copy() nkpt, ndim = kpts.shape is_pose = nkpt == 17 and ndim in {2, 3} kpt_line &= is_pose # `kpt_line=True` for now only supports human pose plotting for i, k in enumerate(kpts): color_k = kpt_color or (self.kpt_color[i].tolist() if is_pose else colors(i)) x_coord, y_coord = k[0], k[1] if x_coord % shape[1] != 0 and y_coord % shape[0] != 0: if len(k) == 3: conf = k[2] if conf < conf_thres: continue cv2.circle(self.im, (int(x_coord), int(y_coord)), radius, color_k, -1, lineType=cv2.LINE_AA) if kpt_line: ndim = kpts.shape[-1] for i, sk in enumerate(self.skeleton): pos1 = (int(kpts[(sk[0] - 1), 0]), int(kpts[(sk[0] - 1), 1])) pos2 = (int(kpts[(sk[1] - 1), 0]), int(kpts[(sk[1] - 1), 1])) if ndim == 3: conf1 = kpts[(sk[0] - 1), 2] conf2 = kpts[(sk[1] - 1), 2] if conf1 < conf_thres or conf2 < conf_thres: continue if pos1[0] % shape[1] == 0 or pos1[1] % shape[0] == 0 or pos1[0] < 0 or pos1[1] < 0: continue if pos2[0] % shape[1] == 0 or pos2[1] % shape[0] == 0 or pos2[0] < 0 or pos2[1] < 0: continue cv2.line( self.im, pos1, pos2, kpt_color or self.limb_color[i].tolist(), thickness=int(np.ceil(self.lw / 2)), lineType=cv2.LINE_AA, ) if self.pil: # Convert im back to PIL and update draw self.fromarray(self.im) def rectangle(self, xy, fill=None, outline=None, width=1): """Add rectangle to image (PIL-only).""" self.draw.rectangle(xy, fill, outline, width) def text(self, xy, text, txt_color=(255, 255, 255), anchor="top", box_style=False): """Adds text to an image using PIL or cv2.""" if anchor == "bottom": # start y from font bottom w, h = self.font.getsize(text) # text width, height xy[1] += 1 - h if self.pil: if box_style: w, h = self.font.getsize(text) self.draw.rectangle((xy[0], xy[1], xy[0] + w + 1, xy[1] + h + 1), fill=txt_color) # Using `txt_color` for background and draw fg with white color txt_color = (255, 255, 255) if "\n" in text: lines = text.split("\n") _, h = self.font.getsize(text) for line in lines: self.draw.text(xy, line, fill=txt_color, font=self.font) xy[1] += h else: self.draw.text(xy, text, fill=txt_color, font=self.font) else: if box_style: w, h = cv2.getTextSize(text, 0, fontScale=self.sf, thickness=self.tf)[0] # text width, height h += 3 # add pixels to pad text outside = xy[1] >= h # label fits outside box p2 = xy[0] + w, xy[1] - h if outside else xy[1] + h cv2.rectangle(self.im, xy, p2, txt_color, -1, cv2.LINE_AA) # filled # Using `txt_color` for background and draw fg with white color txt_color = (255, 255, 255) cv2.putText(self.im, text, xy, 0, self.sf, txt_color, thickness=self.tf, lineType=cv2.LINE_AA) def fromarray(self, im): """Update self.im from a numpy array.""" self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) self.draw = 
ImageDraw.Draw(self.im) def result(self): """Return annotated image as array.""" return np.asarray(self.im) def show(self, title=None): """Show the annotated image.""" im = Image.fromarray(np.asarray(self.im)[..., ::-1]) # Convert numpy array to PIL Image with RGB to BGR if IS_COLAB or IS_KAGGLE: # can not use IS_JUPYTER as will run for all ipython environments try: display(im) # noqa - display() function only available in ipython environments except ImportError as e: LOGGER.warning(f"Unable to display image in Jupyter notebooks: {e}") else: im.show(title=title) def save(self, filename="image.jpg"): """Save the annotated image to 'filename'.""" cv2.imwrite(filename, np.asarray(self.im)) def get_bbox_dimension(self, bbox=None): """ Calculate the area of a bounding box. Args: bbox (tuple): Bounding box coordinates in the format (x_min, y_min, x_max, y_max). Returns: angle (degree): Degree value of angle between three points """ x_min, y_min, x_max, y_max = bbox width = x_max - x_min height = y_max - y_min return width, height, width * height def draw_region(self, reg_pts=None, color=(0, 255, 0), thickness=5): """ Draw region line. Args: reg_pts (list): Region Points (for line 2 points, for region 4 points) color (tuple): Region Color value thickness (int): Region area thickness value """ cv2.polylines(self.im, [np.array(reg_pts, dtype=np.int32)], isClosed=True, color=color, thickness=thickness) # Draw small circles at the corner points for point in reg_pts: cv2.circle(self.im, (point[0], point[1]), thickness * 2, color, -1) # -1 fills the circle def draw_centroid_and_tracks(self, track, color=(255, 0, 255), track_thickness=2): """ Draw centroid point and track trails. Args: track (list): object tracking points for trails display color (tuple): tracks line color track_thickness (int): track line thickness value """ points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2)) cv2.polylines(self.im, [points], isClosed=False, color=color, thickness=track_thickness) cv2.circle(self.im, (int(track[-1][0]), int(track[-1][1])), track_thickness * 2, color, -1)
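The region and track helpers above can be exercised on a blank frame; a minimal sketch (all coordinates are illustrative):

```python
import numpy as np

from ultralytics.utils.plotting import Annotator

# Draw a counting region and a short centroid trail on a blank frame
frame = np.zeros((480, 640, 3), dtype=np.uint8)
annotator = Annotator(frame, line_width=2)

annotator.draw_region(reg_pts=[(100, 100), (540, 100), (540, 380), (100, 380)], color=(0, 255, 0), thickness=5)
annotator.draw_centroid_and_tracks(track=[(150, 200), (200, 210), (260, 230)], color=(255, 0, 255), track_thickness=2)

w, h, area = annotator.get_bbox_dimension((100, 100, 540, 380))  # width, height and area of a box
annotated = annotator.result()  # annotated image as a numpy array
```

Note that `get_bbox_dimension` returns the width, height, and area of the box, not an angle.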
271457
save_one_box(xyxy, im, file=Path("im.jpg"), gain=1.02, pad=10, square=False, BGR=False, save=True): """ Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop. This function takes a bounding box and an image, and then saves a cropped portion of the image according to the bounding box. Optionally, the crop can be squared, and the function allows for gain and padding adjustments to the bounding box. Args: xyxy (torch.Tensor or list): A tensor or list representing the bounding box in xyxy format. im (numpy.ndarray): The input image. file (Path, optional): The path where the cropped image will be saved. Defaults to 'im.jpg'. gain (float, optional): A multiplicative factor to increase the size of the bounding box. Defaults to 1.02. pad (int, optional): The number of pixels to add to the width and height of the bounding box. Defaults to 10. square (bool, optional): If True, the bounding box will be transformed into a square. Defaults to False. BGR (bool, optional): If True, the image will be saved in BGR format, otherwise in RGB. Defaults to False. save (bool, optional): If True, the cropped image will be saved to disk. Defaults to True. Returns: (numpy.ndarray): The cropped image. Example: ```python from ultralytics.utils.plotting import save_one_box xyxy = [50, 50, 150, 150] im = cv2.imread("image.jpg") cropped_im = save_one_box(xyxy, im, file="cropped.jpg", square=True) ``` """ if not isinstance(xyxy, torch.Tensor): # may be list xyxy = torch.stack(xyxy) b = ops.xyxy2xywh(xyxy.view(-1, 4)) # boxes if square: b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad xyxy = ops.xywh2xyxy(b).long() xyxy = ops.clip_boxes(xyxy, im.shape) crop = im[int(xyxy[0, 1]) : int(xyxy[0, 3]), int(xyxy[0, 0]) : int(xyxy[0, 2]), :: (1 if BGR else -1)] if save: file.parent.mkdir(parents=True, exist_ok=True) # make directory f = str(increment_path(file).with_suffix(".jpg")) # cv2.imwrite(f, crop) # save BGR, https://github.com/ultralytics/yolov5/issues/7007 chroma subsampling issue Image.fromarray(crop[..., ::-1]).save(f, quality=95, subsampling=0) # save RGB return crop
271479
y2xywhn(x, w=640, h=640, clip=False, eps=0.0): """ Convert bounding box coordinates from (x1, y1, x2, y2) format to (x, y, width, height, normalized) format. x, y, width and height are normalized to image dimensions. Args: x (np.ndarray | torch.Tensor): The input bounding box coordinates in (x1, y1, x2, y2) format. w (int): The width of the image. Defaults to 640 h (int): The height of the image. Defaults to 640 clip (bool): If True, the boxes will be clipped to the image boundaries. Defaults to False eps (float): The minimum value of the box's width and height. Defaults to 0.0 Returns: y (np.ndarray | torch.Tensor): The bounding box coordinates in (x, y, width, height, normalized) format """ if clip: x = clip_boxes(x, (h - eps, w - eps)) assert x.shape[-1] == 4, f"input shape last dimension expected 4 but input shape is {x.shape}" y = torch.empty_like(x) if isinstance(x, torch.Tensor) else np.empty_like(x) # faster than clone/copy y[..., 0] = ((x[..., 0] + x[..., 2]) / 2) / w # x center y[..., 1] = ((x[..., 1] + x[..., 3]) / 2) / h # y center y[..., 2] = (x[..., 2] - x[..., 0]) / w # width y[..., 3] = (x[..., 3] - x[..., 1]) / h # height return y def xywh2ltwh(x): """ Convert the bounding box format from [x, y, w, h] to [x1, y1, w, h], where x1, y1 are the top-left coordinates. Args: x (np.ndarray | torch.Tensor): The input tensor with the bounding box coordinates in the xywh format Returns: y (np.ndarray | torch.Tensor): The bounding box coordinates in the xyltwh format """ y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) y[..., 0] = x[..., 0] - x[..., 2] / 2 # top left x y[..., 1] = x[..., 1] - x[..., 3] / 2 # top left y return y def xyxy2ltwh(x): """ Convert nx4 bounding boxes from [x1, y1, x2, y2] to [x1, y1, w, h], where xy1=top-left, xy2=bottom-right. Args: x (np.ndarray | torch.Tensor): The input tensor with the bounding boxes coordinates in the xyxy format Returns: y (np.ndarray | torch.Tensor): The bounding box coordinates in the xyltwh format. """ y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) y[..., 2] = x[..., 2] - x[..., 0] # width y[..., 3] = x[..., 3] - x[..., 1] # height return y def ltwh2xywh(x): """ Convert nx4 boxes from [x1, y1, w, h] to [x, y, w, h] where xy1=top-left, xy=center. Args: x (torch.Tensor): the input tensor Returns: y (np.ndarray | torch.Tensor): The bounding box coordinates in the xywh format. """ y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) y[..., 0] = x[..., 0] + x[..., 2] / 2 # center x y[..., 1] = x[..., 1] + x[..., 3] / 2 # center y return y def xyxyxyxy2xywhr(x): """ Convert batched Oriented Bounding Boxes (OBB) from [xy1, xy2, xy3, xy4] to [xywh, rotation]. Rotation values are returned in radians from 0 to pi/2. Args: x (numpy.ndarray | torch.Tensor): Input box corners [xy1, xy2, xy3, xy4] of shape (n, 8). Returns: (numpy.ndarray | torch.Tensor): Converted data in [cx, cy, w, h, rotation] format of shape (n, 5). """ is_torch = isinstance(x, torch.Tensor) points = x.cpu().numpy() if is_torch else x points = points.reshape(len(x), -1, 2) rboxes = [] for pts in points: # NOTE: Use cv2.minAreaRect to get accurate xywhr, # especially some objects are cut off by augmentations in dataloader. (cx, cy), (w, h), angle = cv2.minAreaRect(pts) rboxes.append([cx, cy, w, h, angle / 180 * np.pi]) return torch.tensor(rboxes, device=x.device, dtype=x.dtype) if is_torch else np.asarray(rboxes) def xywhr2xyxyxyxy(x): """ Convert batched Oriented Bounding Boxes (OBB) from [xywh, rotation] to [xy1, xy2, xy3, xy4]. 
Rotation values should be in radians from 0 to pi/2. Args: x (numpy.ndarray | torch.Tensor): Boxes in [cx, cy, w, h, rotation] format of shape (n, 5) or (b, n, 5). Returns: (numpy.ndarray | torch.Tensor): Converted corner points of shape (n, 4, 2) or (b, n, 4, 2). """ cos, sin, cat, stack = ( (torch.cos, torch.sin, torch.cat, torch.stack) if isinstance(x, torch.Tensor) else (np.cos, np.sin, np.concatenate, np.stack) ) ctr = x[..., :2] w, h, angle = (x[..., i : i + 1] for i in range(2, 5)) cos_value, sin_value = cos(angle), sin(angle) vec1 = [w / 2 * cos_value, w / 2 * sin_value] vec2 = [-h / 2 * sin_value, h / 2 * cos_value] vec1 = cat(vec1, -1) vec2 = cat(vec2, -1) pt1 = ctr + vec1 + vec2 pt2 = ctr + vec1 - vec2 pt3 = ctr - vec1 - vec2 pt4 = ctr - vec1 + vec2 return stack([pt1, pt2, pt3, pt4], -2) def ltwh2xyxy(x): """ It converts the bounding box from [x1, y1, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right. Args: x (np.ndarray | torch.Tensor): the input image Returns: y (np.ndarray | torch.Tensor): the xyxy coordinates of the bounding boxes. """ y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) y[..., 2] = x[..., 2] + x[..., 0] # width y[..., 3] = x[..., 3] + x[..., 1] # height return y def segments2boxes(segments): """ It converts segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh). Args: segments (list): list of segments, each segment is a list of points, each point is a list of x, y coordinates Returns: (np.ndarray): the xywh coordinates of the bounding boxes. """ boxes = [] for s in segments: x, y = s.T # segment xy boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy return xyxy2xywh(np.array(boxes)) # cls, xywh def
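A quick sketch of the conversion helpers above, applied to a single illustrative box:

```python
import torch

from ultralytics.utils import ops

# One box in (x1, y1, x2, y2) pixel coordinates
boxes = torch.tensor([[100.0, 50.0, 300.0, 250.0]])

# Normalized (x_center, y_center, width, height) for a 640x640 image
print(ops.xyxy2xywhn(boxes, w=640, h=640))  # ~[[0.3125, 0.2344, 0.3125, 0.3125]]

# Top-left based (x1, y1, w, h) and back again
ltwh = ops.xyxy2ltwh(boxes)
print(ops.ltwh2xyxy(ltwh))  # recovers [[100., 50., 300., 250.]]
```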
271499
info(model, detailed=False, verbose=True, imgsz=640): """ Model information. imgsz may be int or list, i.e. imgsz=640 or imgsz=[640, 320]. """ if not verbose: return n_p = get_num_params(model) # number of parameters n_g = get_num_gradients(model) # number of gradients n_l = len(list(model.modules())) # number of layers if detailed: LOGGER.info( f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}" ) for i, (name, p) in enumerate(model.named_parameters()): name = name.replace("module_list.", "") LOGGER.info( "%5g %40s %9s %12g %20s %10.3g %10.3g %10s" % (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std(), p.dtype) ) flops = get_flops(model, imgsz) fused = " (fused)" if getattr(model, "is_fused", lambda: False)() else "" fs = f", {flops:.1f} GFLOPs" if flops else "" yaml_file = getattr(model, "yaml_file", "") or getattr(model, "yaml", {}).get("yaml_file", "") model_name = Path(yaml_file).stem.replace("yolo", "YOLO") or "Model" LOGGER.info(f"{model_name} summary{fused}: {n_l:,} layers, {n_p:,} parameters, {n_g:,} gradients{fs}") return n_l, n_p, n_g, flops def get_num_params(model): """Return the total number of parameters in a YOLO model.""" return sum(x.numel() for x in model.parameters()) def get_num_gradients(model): """Return the total number of parameters with gradients in a YOLO model.""" return sum(x.numel() for x in model.parameters() if x.requires_grad) def model_info_for_loggers(trainer): """ Return model info dict with useful model information. Example: YOLOv8n info for loggers ```python results = { "model/parameters": 3151904, "model/GFLOPs": 8.746, "model/speed_ONNX(ms)": 41.244, "model/speed_TensorRT(ms)": 3.211, "model/speed_PyTorch(ms)": 18.755, } ``` """ if trainer.args.profile: # profile ONNX and TensorRT times from ultralytics.utils.benchmarks import ProfileModels results = ProfileModels([trainer.last], device=trainer.device).profile()[0] results.pop("model/name") else: # only return PyTorch times from most recent validation results = { "model/parameters": get_num_params(trainer.model), "model/GFLOPs": round(get_flops(trainer.model), 3), } results["model/speed_PyTorch(ms)"] = round(trainer.validator.speed["inference"], 3) return results def get_flops(model, imgsz=640): """Return a YOLO model's FLOPs.""" if not thop: return 0.0 # if not installed return 0.0 GFLOPs try: model = de_parallel(model) p = next(model.parameters()) if not isinstance(imgsz, list): imgsz = [imgsz, imgsz] # expand if int/float try: # Use stride size for input tensor stride = max(int(model.stride.max()), 32) if hasattr(model, "stride") else 32 # max stride im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1e9 * 2 # stride GFLOPs return flops * imgsz[0] / stride * imgsz[1] / stride # imgsz GFLOPs except Exception: # Use actual image size for input tensor (i.e. 
required for RTDETR models) im = torch.empty((1, p.shape[1], *imgsz), device=p.device) # input image in BCHW format return thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1e9 * 2 # imgsz GFLOPs except Exception: return 0.0 def get_flops_with_torch_profiler(model, imgsz=640): """Compute model FLOPs (thop package alternative, but 2-10x slower unfortunately).""" if not TORCH_2_0: # torch profiler implemented in torch>=2.0 return 0.0 model = de_parallel(model) p = next(model.parameters()) if not isinstance(imgsz, list): imgsz = [imgsz, imgsz] # expand if int/float try: # Use stride size for input tensor stride = (max(int(model.stride.max()), 32) if hasattr(model, "stride") else 32) * 2 # max stride im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format with torch.profiler.profile(with_flops=True) as prof: model(im) flops = sum(x.flops for x in prof.key_averages()) / 1e9 flops = flops * imgsz[0] / stride * imgsz[1] / stride # 640x640 GFLOPs except Exception: # Use actual image size for input tensor (i.e. required for RTDETR models) im = torch.empty((1, p.shape[1], *imgsz), device=p.device) # input image in BCHW format with torch.profiler.profile(with_flops=True) as prof: model(im) flops = sum(x.flops for x in prof.key_averages()) / 1e9 return flops def initialize_weights(model): """Initialize model weights to random values.""" for m in model.modules(): t = type(m) if t is nn.Conv2d: pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif t is nn.BatchNorm2d: m.eps = 1e-3 m.momentum = 0.03 elif t in {nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU}: m.inplace = True def scale_img(img, ratio=1.0, same_shape=False, gs=32): """Scales and pads an image tensor, optionally maintaining aspect ratio and padding to gs multiple.""" if ratio == 1.0: return img h, w = img.shape[2:] s = (int(h * ratio), int(w * ratio)) # new size img = F.interpolate(img, size=s, mode="bilinear", align_corners=False) # resize if not same_shape: # pad/crop img h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean def copy_attr(a, b, include=(), exclude=()): """Copies attributes from object 'b' to object 'a', with options to include/exclude certain attributes.""" for k, v in b.__dict__.items(): if (len(include) and k not in include) or k.startswith("_") or k in exclude: continue else: setattr(a, k, v) def get
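The parameter and FLOP helpers above can also be used on their own; a small sketch (GFLOPs falls back to 0.0 when `thop` is not installed):

```python
from ultralytics import YOLO
from ultralytics.utils.torch_utils import get_flops, get_num_gradients, get_num_params

model = YOLO("yolo11n.pt").model  # underlying torch.nn.Module

print(f"parameters: {get_num_params(model):,}")
print(f"gradients:  {get_num_gradients(model):,}")
print(f"GFLOPs:     {get_flops(model, imgsz=640):.1f}")
```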
271500
test_opset(): """Return the second-most recent ONNX opset version supported by this version of PyTorch, adjusted for maturity.""" if TORCH_1_13: # If the PyTorch>=1.13, dynamically compute the latest opset minus one using 'symbolic_opset' return max(int(k[14:]) for k in vars(torch.onnx) if "symbolic_opset" in k) - 1 # Otherwise for PyTorch<=1.12 return the corresponding predefined opset version = torch.onnx.producer_version.rsplit(".", 1)[0] # i.e. '2.3' return {"1.12": 15, "1.11": 14, "1.10": 13, "1.9": 12, "1.8": 12}.get(version, 12) def intersect_dicts(da, db, exclude=()): """Returns a dictionary of intersecting keys with matching shapes, excluding 'exclude' keys, using da values.""" return {k: v for k, v in da.items() if k in db and all(x not in k for x in exclude) and v.shape == db[k].shape} def is_parallel(model): """Returns True if model is of type DP or DDP.""" return isinstance(model, (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)) def de_parallel(model): """De-parallelize a model: returns single-GPU model if model is of type DP or DDP.""" return model.module if is_parallel(model) else model def one_cycle(y1=0.0, y2=1.0, steps=100): """Returns a lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf.""" return lambda x: max((1 - math.cos(x * math.pi / steps)) / 2, 0) * (y2 - y1) + y1 def init_seeds(seed=0, deterministic=False): """Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html.""" random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) # for Multi-GPU, exception safe # torch.backends.cudnn.benchmark = True # AutoBatch problem https://github.com/ultralytics/yolov5/issues/9287 if deterministic: if TORCH_2_0: torch.use_deterministic_algorithms(True, warn_only=True) # warn if deterministic is not possible torch.backends.cudnn.deterministic = True os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8" os.environ["PYTHONHASHSEED"] = str(seed) else: LOGGER.warning("WARNING ⚠️ Upgrade to torch>=2.0.0 for deterministic training.") else: torch.use_deterministic_algorithms(False) torch.backends.cudnn.deterministic = False class ModelEMA: """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models. Keeps a moving average of everything in the model state_dict (parameters and buffers). For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage To disable EMA set the `enabled` attribute to `False`. 
""" def __init__(self, model, decay=0.9999, tau=2000, updates=0): """Initialize EMA for 'model' with given arguments.""" self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA self.updates = updates # number of EMA updates self.decay = lambda x: decay * (1 - math.exp(-x / tau)) # decay exponential ramp (to help early epochs) for p in self.ema.parameters(): p.requires_grad_(False) self.enabled = True def update(self, model): """Update EMA parameters.""" if self.enabled: self.updates += 1 d = self.decay(self.updates) msd = de_parallel(model).state_dict() # model state_dict for k, v in self.ema.state_dict().items(): if v.dtype.is_floating_point: # true for FP16 and FP32 v *= d v += (1 - d) * msd[k].detach() # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype}, model {msd[k].dtype}' def update_attr(self, model, include=(), exclude=("process_group", "reducer")): """Updates attributes and saves stripped model with optimizer removed.""" if self.enabled: copy_attr(self.ema, model, include, exclude) def strip_optimizer(f: Union[str, Path] = "best.pt", s: str = "", updates: dict = None) -> dict: """ Strip optimizer from 'f' to finalize training, optionally save as 's'. Args: f (str): file path to model to strip the optimizer from. Default is 'best.pt'. s (str): file path to save the model with stripped optimizer to. If not provided, 'f' will be overwritten. updates (dict): a dictionary of updates to overlay onto the checkpoint before saving. Returns: (dict): The combined checkpoint dictionary. Example: ```python from pathlib import Path from ultralytics.utils.torch_utils import strip_optimizer for f in Path("path/to/model/checkpoints").rglob("*.pt"): strip_optimizer(f) ``` Note: Use `ultralytics.nn.torch_safe_load` for missing modules with `x = torch_safe_load(f)[0]` """ try: x = torch.load(f, map_location=torch.device("cpu")) assert isinstance(x, dict), "checkpoint is not a Python dictionary" assert "model" in x, "'model' missing from checkpoint" except Exception as e: LOGGER.warning(f"WARNING ⚠️ Skipping {f}, not a valid Ultralytics model: {e}") return {} metadata = { "date": datetime.now().isoformat(), "version": __version__, "license": "AGPL-3.0 License (https://ultralytics.com/license)", "docs": "https://docs.ultralytics.com", } # Update model if x.get("ema"): x["model"] = x["ema"] # replace model with EMA if hasattr(x["model"], "args"): x["model"].args = dict(x["model"].args) # convert from IterableSimpleNamespace to dict if hasattr(x["model"], "criterion"): x["model"].criterion = None # strip loss criterion x["model"].half() # to FP16 for p in x["model"].parameters(): p.requires_grad = False # Update other keys args = {**DEFAULT_CFG_DICT, **x.get("train_args", {})} # combine args for k in "optimizer", "best_fitness", "ema", "updates": # keys x[k] = None x["epoch"] = -1 x["train_args"] = {k: v for k, v in args.items() if k in DEFAULT_CFG_KEYS} # strip non-default keys # x['model'].args = x['train_args'] # Save combined = {**metadata, **x, **(updates or {})} torch.save(combined, s or f) # combine dicts (prefer to the right) mb = os.path.getsize(s or f) / 1e6 # file size LOGGER.info(f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB") return combined def convert_optimizer_state_dict_to_fp16(state_dict): """ Converts the state_dict of a given optimizer to FP16, focusing on the 'state' key for tensor conversions. This method aims to reduce storage size without altering 'param_groups' as they contain non-tensor data. 
""" for state in state_dict["state"].values(): for k, v in state.items(): if k != "step" and isinstance(v, torch.Tensor) and v.dtype is torch.float32: state[k] = v.half() return state_dict def profile(inp
271523
def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor:
    """Calculates bounding boxes in XYXY format around binary masks, handling empty masks and various input shapes."""
    # torch.max below raises an error on empty inputs, just skip in this case
    if torch.numel(masks) == 0:
        return torch.zeros(*masks.shape[:-2], 4, device=masks.device)

    # Normalize shape to CxHxW
    shape = masks.shape
    h, w = shape[-2:]
    masks = masks.flatten(0, -3) if len(shape) > 2 else masks.unsqueeze(0)

    # Get top and bottom edges
    in_height, _ = torch.max(masks, dim=-1)
    in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :]
    bottom_edges, _ = torch.max(in_height_coords, dim=-1)
    in_height_coords = in_height_coords + h * (~in_height)
    top_edges, _ = torch.min(in_height_coords, dim=-1)

    # Get left and right edges
    in_width, _ = torch.max(masks, dim=-2)
    in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :]
    right_edges, _ = torch.max(in_width_coords, dim=-1)
    in_width_coords = in_width_coords + w * (~in_width)
    left_edges, _ = torch.min(in_width_coords, dim=-1)

    # If the mask is empty the right edge will be to the left of the left edge.
    # Replace these boxes with [0, 0, 0, 0]
    empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges)
    out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1)
    out = out * (~empty_filter).unsqueeze(-1)

    # Return to original shape
    return out.reshape(*shape[:-2], 4) if len(shape) > 2 else out[0]
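A small sketch of the function above; the import path is assumed to be the SAM `amg` module, where this helper lives in the Ultralytics package:

```python
import torch

from ultralytics.models.sam.amg import batched_mask_to_box  # assumed location of this helper

# Two 8x8 binary masks: one containing a filled rectangle, one empty
masks = torch.zeros(2, 8, 8, dtype=torch.bool)
masks[0, 2:6, 3:7] = True  # rows 2-5, columns 3-6

print(batched_mask_to_box(masks))
# tensor([[3, 2, 6, 5],    XYXY box around the filled region
#         [0, 0, 0, 0]])   empty mask collapses to all zeros
```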
271581
# Ultralytics YOLO 🚀, AGPL-3.0 license

from ultralytics.models.yolo import classify, detect, obb, pose, segment, world

from .model import YOLO, YOLOWorld

__all__ = "classify", "segment", "detect", "pose", "obb", "world", "YOLO", "YOLOWorld"
271582
# Ultralytics YOLO 🚀, AGPL-3.0 license from pathlib import Path from ultralytics.engine.model import Model from ultralytics.models import yolo from ultralytics.nn.tasks import ClassificationModel, DetectionModel, OBBModel, PoseModel, SegmentationModel, WorldModel from ultralytics.utils import ROOT, yaml_load class YOLO(Model): """YOLO (You Only Look Once) object detection model.""" def __init__(self, model="yolo11n.pt", task=None, verbose=False): """Initialize YOLO model, switching to YOLOWorld if model filename contains '-world'.""" path = Path(model) if "-world" in path.stem and path.suffix in {".pt", ".yaml", ".yml"}: # if YOLOWorld PyTorch model new_instance = YOLOWorld(path, verbose=verbose) self.__class__ = type(new_instance) self.__dict__ = new_instance.__dict__ else: # Continue with default YOLO initialization super().__init__(model=model, task=task, verbose=verbose) @property def task_map(self): """Map head to model, trainer, validator, and predictor classes.""" return { "classify": { "model": ClassificationModel, "trainer": yolo.classify.ClassificationTrainer, "validator": yolo.classify.ClassificationValidator, "predictor": yolo.classify.ClassificationPredictor, }, "detect": { "model": DetectionModel, "trainer": yolo.detect.DetectionTrainer, "validator": yolo.detect.DetectionValidator, "predictor": yolo.detect.DetectionPredictor, }, "segment": { "model": SegmentationModel, "trainer": yolo.segment.SegmentationTrainer, "validator": yolo.segment.SegmentationValidator, "predictor": yolo.segment.SegmentationPredictor, }, "pose": { "model": PoseModel, "trainer": yolo.pose.PoseTrainer, "validator": yolo.pose.PoseValidator, "predictor": yolo.pose.PosePredictor, }, "obb": { "model": OBBModel, "trainer": yolo.obb.OBBTrainer, "validator": yolo.obb.OBBValidator, "predictor": yolo.obb.OBBPredictor, }, } class YOLOWorld(Model): """YOLO-World object detection model.""" def __init__(self, model="yolov8s-world.pt", verbose=False) -> None: """ Initialize YOLOv8-World model with a pre-trained model file. Loads a YOLOv8-World model for object detection. If no custom class names are provided, it assigns default COCO class names. Args: model (str | Path): Path to the pre-trained model file. Supports *.pt and *.yaml formats. verbose (bool): If True, prints additional information during initialization. """ super().__init__(model=model, task="detect", verbose=verbose) # Assign default COCO class names when there are no custom names if not hasattr(self.model, "names"): self.model.names = yaml_load(ROOT / "cfg/datasets/coco8.yaml").get("names") @property def task_map(self): """Map head to model, validator, and predictor classes.""" return { "detect": { "model": WorldModel, "validator": yolo.detect.DetectionValidator, "predictor": yolo.detect.DetectionPredictor, "trainer": yolo.world.WorldTrainer, } } def set_classes(self, classes): """ Set classes. Args: classes (List(str)): A list of categories i.e. ["person"]. """ self.model.set_classes(classes) # Remove background if it's given background = " " if background in classes: classes.remove(background) self.model.names = classes # Reset method class names # self.predictor = None # reset predictor otherwise old names remain if self.predictor: self.predictor.model.names = classes
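A short usage sketch of the `YOLOWorld` wrapper above (the image path is a placeholder):

```python
from ultralytics import YOLOWorld

# Load an open-vocabulary model and restrict it to a custom prompt vocabulary
model = YOLOWorld("yolov8s-world.pt")
model.set_classes(["person", "bicycle", "backpack"])

# Subsequent predictions only report the classes set above
results = model.predict("path/to/image.jpg", conf=0.25)
results[0].show()
```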
271589
esults(self): """Prints training/validation set metrics per class.""" pf = "%22s" + "%11i" * 2 + "%11.3g" * len(self.metrics.keys) # print format LOGGER.info(pf % ("all", self.seen, self.nt_per_class.sum(), *self.metrics.mean_results())) if self.nt_per_class.sum() == 0: LOGGER.warning(f"WARNING ⚠️ no labels found in {self.args.task} set, can not compute metrics without labels") # Print results per class if self.args.verbose and not self.training and self.nc > 1 and len(self.stats): for i, c in enumerate(self.metrics.ap_class_index): LOGGER.info( pf % (self.names[c], self.nt_per_image[c], self.nt_per_class[c], *self.metrics.class_result(i)) ) if self.args.plots: for normalize in True, False: self.confusion_matrix.plot( save_dir=self.save_dir, names=self.names.values(), normalize=normalize, on_plot=self.on_plot ) def _process_batch(self, detections, gt_bboxes, gt_cls): """ Return correct prediction matrix. Args: detections (torch.Tensor): Tensor of shape (N, 6) representing detections where each detection is (x1, y1, x2, y2, conf, class). gt_bboxes (torch.Tensor): Tensor of shape (M, 4) representing ground-truth bounding box coordinates. Each bounding box is of the format: (x1, y1, x2, y2). gt_cls (torch.Tensor): Tensor of shape (M,) representing target class indices. Returns: (torch.Tensor): Correct prediction matrix of shape (N, 10) for 10 IoU levels. Note: The function does not return any value directly usable for metrics calculation. Instead, it provides an intermediate representation used for evaluating predictions against ground truth. """ iou = box_iou(gt_bboxes, detections[:, :4]) return self.match_predictions(detections[:, 5], gt_cls, iou) def build_dataset(self, img_path, mode="val", batch=None): """ Build YOLO Dataset. Args: img_path (str): Path to the folder containing images. mode (str): `train` mode or `val` mode, users are able to customize different augmentations for each mode. batch (int, optional): Size of batches, this is for `rect`. Defaults to None. 
""" return build_yolo_dataset(self.args, img_path, batch, self.data, mode=mode, stride=self.stride) def get_dataloader(self, dataset_path, batch_size): """Construct and return dataloader.""" dataset = self.build_dataset(dataset_path, batch=batch_size, mode="val") return build_dataloader(dataset, batch_size, self.args.workers, shuffle=False, rank=-1) # return dataloader def plot_val_samples(self, batch, ni): """Plot validation image samples.""" plot_images( batch["img"], batch["batch_idx"], batch["cls"].squeeze(-1), batch["bboxes"], paths=batch["im_file"], fname=self.save_dir / f"val_batch{ni}_labels.jpg", names=self.names, on_plot=self.on_plot, ) def plot_predictions(self, batch, preds, ni): """Plots predicted bounding boxes on input images and saves the result.""" plot_images( batch["img"], *output_to_target(preds, max_det=self.args.max_det), paths=batch["im_file"], fname=self.save_dir / f"val_batch{ni}_pred.jpg", names=self.names, on_plot=self.on_plot, ) # pred def save_one_txt(self, predn, save_conf, shape, file): """Save YOLO detections to a txt file in normalized coordinates in a specific format.""" from ultralytics.engine.results import Results Results( np.zeros((shape[0], shape[1]), dtype=np.uint8), path=None, names=self.names, boxes=predn[:, :6], ).save_txt(file, save_conf=save_conf) def pred_to_json(self, predn, filename): """Serialize YOLO predictions to COCO json format.""" stem = Path(filename).stem image_id = int(stem) if stem.isnumeric() else stem box = ops.xyxy2xywh(predn[:, :4]) # xywh box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner for p, b in zip(predn.tolist(), box.tolist()): self.jdict.append( { "image_id": image_id, "category_id": self.class_map[int(p[5])] + (1 if self.is_lvis else 0), # index starts from 1 if it's lvis "bbox": [round(x, 3) for x in b], "score": round(p[4], 5), } ) def eval_json(self, stats): """Evaluates YOLO output in JSON format and returns performance statistics.""" if self.args.save_json and (self.is_coco or self.is_lvis) and len(self.jdict): pred_json = self.save_dir / "predictions.json" # predictions anno_json = ( self.data["path"] / "annotations" / ("instances_val2017.json" if self.is_coco else f"lvis_v1_{self.args.split}.json") ) # annotations pkg = "pycocotools" if self.is_coco else "lvis" LOGGER.info(f"\nEvaluating {pkg} mAP using {pred_json} and {anno_json}...") try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb for x in pred_json, anno_json: assert x.is_file(), f"{x} file not found" check_requirements("pycocotools>=2.0.6" if self.is_coco else "lvis>=0.5.3") if self.is_coco: from pycocotools.coco import COCO # noqa from pycocotools.cocoeval import COCOeval # noqa anno = COCO(str(anno_json)) # init annotations api pred = anno.loadRes(str(pred_json)) # init predictions api (must pass string, not Path) val = COCOeval(anno, pred, "bbox") else: from lvis import LVIS, LVISEval anno = LVIS(str(anno_json)) # init annotations api pred = anno._load_json(str(pred_json)) # init predictions api (must pass string, not Path) val = LVISEval(anno, pred, "bbox") val.params.imgIds = [int(Path(x).stem) for x in self.dataloader.dataset.im_files] # images to eval val.evaluate() val.accumulate() val.summarize() if self.is_lvis: val.print_results() # explicitly call print_results # update mAP50-95 and mAP50 stats[self.metrics.keys[-1]], stats[self.metrics.keys[-2]] = ( val.stats[:2] if self.is_coco else [val.results["AP50"], val.results["AP"]] ) except Exception as e: LOGGER.warning(f"{pkg} unable to run: {e}") 
return stats
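In practice this validator is driven through the model API; a brief sketch using the small `coco8` sample dataset shipped with the package:

```python
from ultralytics import YOLO

# Validate a detection model; a DetectionValidator is constructed under the hood
model = YOLO("yolo11n.pt")
metrics = model.val(data="coco8.yaml", save_json=True)  # save_json writes COCO-format predictions via pred_to_json

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```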
271590
# Ultralytics YOLO 🚀, AGPL-3.0 license from ultralytics.engine.predictor import BasePredictor from ultralytics.engine.results import Results from ultralytics.utils import ops class DetectionPredictor(BasePredictor): """ A class extending the BasePredictor class for prediction based on a detection model. Example: ```python from ultralytics.utils import ASSETS from ultralytics.models.yolo.detect import DetectionPredictor args = dict(model="yolo11n.pt", source=ASSETS) predictor = DetectionPredictor(overrides=args) predictor.predict_cli() ``` """ def postprocess(self, preds, img, orig_imgs): """Post-processes predictions and returns a list of Results objects.""" preds = ops.non_max_suppression( preds, self.args.conf, self.args.iou, agnostic=self.args.agnostic_nms, max_det=self.args.max_det, classes=self.args.classes, ) if not isinstance(orig_imgs, list): # input images are a torch.Tensor, not a list orig_imgs = ops.convert_torch2numpy_batch(orig_imgs) results = [] for pred, orig_img, img_path in zip(preds, orig_imgs, self.batch[0]): pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape) results.append(Results(orig_img, path=img_path, names=self.model.names, boxes=pred)) return results
271596
# Ultralytics YOLO 🚀, AGPL-3.0 license from copy import copy import torch from ultralytics.data import ClassificationDataset, build_dataloader from ultralytics.engine.trainer import BaseTrainer from ultralytics.models import yolo from ultralytics.nn.tasks import ClassificationModel from ultralytics.utils import DEFAULT_CFG, LOGGER, RANK from ultralytics.utils.plotting import plot_images, plot_results from ultralytics.utils.torch_utils import is_parallel, strip_optimizer, torch_distributed_zero_first class ClassificationTrainer(BaseTrainer): """ A class extending the BaseTrainer class for training based on a classification model. Notes: - Torchvision classification models can also be passed to the 'model' argument, i.e. model='resnet18'. Example: ```python from ultralytics.models.yolo.classify import ClassificationTrainer args = dict(model="yolov8n-cls.pt", data="imagenet10", epochs=3) trainer = ClassificationTrainer(overrides=args) trainer.train() ``` """ def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): """Initialize a ClassificationTrainer object with optional configuration overrides and callbacks.""" if overrides is None: overrides = {} overrides["task"] = "classify" if overrides.get("imgsz") is None: overrides["imgsz"] = 224 super().__init__(cfg, overrides, _callbacks) def set_model_attributes(self): """Set the YOLO model's class names from the loaded dataset.""" self.model.names = self.data["names"] def get_model(self, cfg=None, weights=None, verbose=True): """Returns a modified PyTorch model configured for training YOLO.""" model = ClassificationModel(cfg, nc=self.data["nc"], verbose=verbose and RANK == -1) if weights: model.load(weights) for m in model.modules(): if not self.args.pretrained and hasattr(m, "reset_parameters"): m.reset_parameters() if isinstance(m, torch.nn.Dropout) and self.args.dropout: m.p = self.args.dropout # set dropout for p in model.parameters(): p.requires_grad = True # for training return model def setup_model(self): """Load, create or download model for any task.""" import torchvision # scope for faster 'import ultralytics' if str(self.model) in torchvision.models.__dict__: self.model = torchvision.models.__dict__[self.model]( weights="IMAGENET1K_V1" if self.args.pretrained else None ) ckpt = None else: ckpt = super().setup_model() ClassificationModel.reshape_outputs(self.model, self.data["nc"]) return ckpt def build_dataset(self, img_path, mode="train", batch=None): """Creates a ClassificationDataset instance given an image path, and mode (train/test etc.).""" return ClassificationDataset(root=img_path, args=self.args, augment=mode == "train", prefix=mode) def get_dataloader(self, dataset_path, batch_size=16, rank=0, mode="train"): """Returns PyTorch DataLoader with transforms to preprocess images for inference.""" with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP dataset = self.build_dataset(dataset_path, mode) loader = build_dataloader(dataset, batch_size, self.args.workers, rank=rank) # Attach inference transforms if mode != "train": if is_parallel(self.model): self.model.module.transforms = loader.dataset.torch_transforms else: self.model.transforms = loader.dataset.torch_transforms return loader def preprocess_batch(self, batch): """Preprocesses a batch of images and classes.""" batch["img"] = batch["img"].to(self.device) batch["cls"] = batch["cls"].to(self.device) return batch def progress_string(self): """Returns a formatted string showing training progress.""" return ("\n" + "%11s" * (4 + 
len(self.loss_names))) % ( "Epoch", "GPU_mem", *self.loss_names, "Instances", "Size", ) def get_validator(self): """Returns an instance of ClassificationValidator for validation.""" self.loss_names = ["loss"] return yolo.classify.ClassificationValidator( self.test_loader, self.save_dir, args=copy(self.args), _callbacks=self.callbacks ) def label_loss_items(self, loss_items=None, prefix="train"): """ Returns a loss dict with labelled training loss items tensor. Not needed for classification but necessary for segmentation & detection """ keys = [f"{prefix}/{x}" for x in self.loss_names] if loss_items is None: return keys loss_items = [round(float(loss_items), 5)] return dict(zip(keys, loss_items)) def plot_metrics(self): """Plots metrics from a CSV file.""" plot_results(file=self.csv, classify=True, on_plot=self.on_plot) # save results.png def final_eval(self): """Evaluate trained model and save validation results.""" for f in self.last, self.best: if f.exists(): strip_optimizer(f) # strip optimizers if f is self.best: LOGGER.info(f"\nValidating {f}...") self.validator.args.data = self.args.data self.validator.args.plots = self.args.plots self.metrics = self.validator(model=f) self.metrics.pop("fitness", None) self.run_callbacks("on_fit_epoch_end") def plot_training_samples(self, batch, ni): """Plots training samples with their annotations.""" plot_images( images=batch["img"], batch_idx=torch.arange(len(batch["img"])), cls=batch["cls"].view(-1), # warning: use .view(), not .squeeze() for Classify models fname=self.save_dir / f"train_batch{ni}.jpg", on_plot=self.on_plot, )
271599
# Ultralytics YOLO 🚀, AGPL-3.0 license from ultralytics.engine.results import Results from ultralytics.models.yolo.detect.predict import DetectionPredictor from ultralytics.utils import DEFAULT_CFG, ops class SegmentationPredictor(DetectionPredictor): """ A class extending the DetectionPredictor class for prediction based on a segmentation model. Example: ```python from ultralytics.utils import ASSETS from ultralytics.models.yolo.segment import SegmentationPredictor args = dict(model="yolov8n-seg.pt", source=ASSETS) predictor = SegmentationPredictor(overrides=args) predictor.predict_cli() ``` """ def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): """Initializes the SegmentationPredictor with the provided configuration, overrides, and callbacks.""" super().__init__(cfg, overrides, _callbacks) self.args.task = "segment" def postprocess(self, preds, img, orig_imgs): """Applies non-max suppression and processes detections for each image in an input batch.""" p = ops.non_max_suppression( preds[0], self.args.conf, self.args.iou, agnostic=self.args.agnostic_nms, max_det=self.args.max_det, nc=len(self.model.names), classes=self.args.classes, ) if not isinstance(orig_imgs, list): # input images are a torch.Tensor, not a list orig_imgs = ops.convert_torch2numpy_batch(orig_imgs) results = [] proto = preds[1][-1] if isinstance(preds[1], tuple) else preds[1] # tuple if PyTorch model or array if exported for i, (pred, orig_img, img_path) in enumerate(zip(p, orig_imgs, self.batch[0])): if not len(pred): # save empty boxes masks = None elif self.args.retina_masks: pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape) masks = ops.process_mask_native(proto[i], pred[:, 6:], pred[:, :4], orig_img.shape[:2]) # HWC else: masks = ops.process_mask(proto[i], pred[:, 6:], pred[:, :4], img.shape[2:], upsample=True) # HWC pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape) results.append(Results(orig_img, path=img_path, names=self.model.names, boxes=pred[:, :6], masks=masks)) return results
271606
ss PoseValidator(DetectionValidator): """ A class extending the DetectionValidator class for validation based on a pose model. Example: ```python from ultralytics.models.yolo.pose import PoseValidator args = dict(model="yolov8n-pose.pt", data="coco8-pose.yaml") validator = PoseValidator(args=args) validator() ``` """ def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None): """Initialize a 'PoseValidator' object with custom parameters and assigned attributes.""" super().__init__(dataloader, save_dir, pbar, args, _callbacks) self.sigma = None self.kpt_shape = None self.args.task = "pose" self.metrics = PoseMetrics(save_dir=self.save_dir, on_plot=self.on_plot) if isinstance(self.args.device, str) and self.args.device.lower() == "mps": LOGGER.warning( "WARNING ⚠️ Apple MPS known Pose bug. Recommend 'device=cpu' for Pose models. " "See https://github.com/ultralytics/ultralytics/issues/4031." ) def preprocess(self, batch): """Preprocesses the batch by converting the 'keypoints' data into a float and moving it to the device.""" batch = super().preprocess(batch) batch["keypoints"] = batch["keypoints"].to(self.device).float() return batch def get_desc(self): """Returns description of evaluation metrics in string format.""" return ("%22s" + "%11s" * 10) % ( "Class", "Images", "Instances", "Box(P", "R", "mAP50", "mAP50-95)", "Pose(P", "R", "mAP50", "mAP50-95)", ) def postprocess(self, preds): """Apply non-maximum suppression and return detections with high confidence scores.""" return ops.non_max_suppression( preds, self.args.conf, self.args.iou, labels=self.lb, multi_label=True, agnostic=self.args.single_cls or self.args.agnostic_nms, max_det=self.args.max_det, nc=self.nc, ) def init_metrics(self, model): """Initiate pose estimation metrics for YOLO model.""" super().init_metrics(model) self.kpt_shape = self.data["kpt_shape"] is_pose = self.kpt_shape == [17, 3] nkpt = self.kpt_shape[0] self.sigma = OKS_SIGMA if is_pose else np.ones(nkpt) / nkpt self.stats = dict(tp_p=[], tp=[], conf=[], pred_cls=[], target_cls=[], target_img=[]) def _prepare_batch(self, si, batch): """Prepares a batch for processing by converting keypoints to float and moving to device.""" pbatch = super()._prepare_batch(si, batch) kpts = batch["keypoints"][batch["batch_idx"] == si] h, w = pbatch["imgsz"] kpts = kpts.clone() kpts[..., 0] *= w kpts[..., 1] *= h kpts = ops.scale_coords(pbatch["imgsz"], kpts, pbatch["ori_shape"], ratio_pad=pbatch["ratio_pad"]) pbatch["kpts"] = kpts return pbatch def _prepare_pred(self, pred, pbatch): """Prepares and scales keypoints in a batch for pose processing.""" predn = super()._prepare_pred(pred, pbatch) nk = pbatch["kpts"].shape[1] pred_kpts = predn[:, 6:].view(len(predn), nk, -1) ops.scale_coords(pbatch["imgsz"], pred_kpts, pbatch["ori_shape"], ratio_pad=pbatch["ratio_pad"]) return predn, pred_kpts def update_metrics(self, preds, batch): """Metrics.""" for si, pred in enumerate(preds): self.seen += 1 npr = len(pred) stat = dict( conf=torch.zeros(0, device=self.device), pred_cls=torch.zeros(0, device=self.device), tp=torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device), tp_p=torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device), ) pbatch = self._prepare_batch(si, batch) cls, bbox = pbatch.pop("cls"), pbatch.pop("bbox") nl = len(cls) stat["target_cls"] = cls stat["target_img"] = cls.unique() if npr == 0: if nl: for k in self.stats.keys(): self.stats[k].append(stat[k]) if self.args.plots: 
self.confusion_matrix.process_batch(detections=None, gt_bboxes=bbox, gt_cls=cls) continue # Predictions if self.args.single_cls: pred[:, 5] = 0 predn, pred_kpts = self._prepare_pred(pred, pbatch) stat["conf"] = predn[:, 4] stat["pred_cls"] = predn[:, 5] # Evaluate if nl: stat["tp"] = self._process_batch(predn, bbox, cls) stat["tp_p"] = self._process_batch(predn, bbox, cls, pred_kpts, pbatch["kpts"]) if self.args.plots: self.confusion_matrix.process_batch(predn, bbox, cls) for k in self.stats.keys(): self.stats[k].append(stat[k]) # Save if self.args.save_json: self.pred_to_json(predn, batch["im_file"][si]) if self.args.save_txt: self.save_one_txt( predn, pred_kpts, self.args.save_conf, pbatch["ori_shape"], self.save_dir / "labels" / f'{Path(batch["im_file"][si]).stem}.txt', ) def _process_batch(self, detections, gt_bboxes, gt_cls, pred_kpts=None, gt_kpts=None): """ Return correct prediction matrix by computing Intersection over Union (IoU) between detections and ground truth. Args: detections (torch.Tensor): Tensor with shape (N, 6) representing detection boxes and scores, where each detection is of the format (x1, y1, x2, y2, conf, class). gt_bboxes (torch.Tensor): Tensor with shape (M, 4) representing ground truth bounding boxes, where each box is of the format (x1, y1, x2, y2). gt_cls (torch.Tensor): Tensor with shape (M,) representing ground truth class indices. pred_kpts (torch.Tensor | None): Optional tensor with shape (N, 51) representing predicted keypoints, where 51 corresponds to 17 keypoints each having 3 values. gt_kpts (torch.Tensor | None): Optional tensor with shape (N, 51) representing ground truth keypoints. Returns: torch.Tensor: A tensor with shape (N, 10) representing the correct prediction matrix for 10 IoU levels, where N is the number of detections. Example: ```python detections = torch.rand(100, 6) # 100 predictions: (x1, y1, x2, y2, conf, class) gt_bboxes = torch.rand(50, 4) # 50 ground truth boxes: (x1, y1, x2, y2) gt_cls = torch.randint(0, 2, (50,)) # 50 ground truth class indices pred_kpts = torch.rand(100, 51) # 100 predicted keypoints gt_kpts = torch.rand(50, 51) # 50 ground truth keypoints correct_preds = _process_batch(detections, gt_bboxes, gt_cls, pred_kpts, gt_kpts) ``` Note: `0.53` scale factor used in area computation is referenced from https://github.com/jin-s13/xtcocoapi/blob/master/xtcocotools/cocoeval.py#L384. """ if pred_kpts is not None and gt_kpts is not None: # `0.53` is from https://github.com/jin-s13/xtcocoapi/blob/master/xtcocotools/cocoeval.py#L384 area = ops.xyxy2xywh(gt_bboxes)[:, 2:].prod(1) * 0.53 iou = kpt_iou(gt_kpts, pred_kpts, sigma=self.sigma, area=area) else: # boxes iou = box_iou(gt_bboxes, detections[:, :4]) return self.match_predictions(detections[:, 5], gt_cls, iou) d
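As with the detection validator, the pose validator is usually reached through the model API; a minimal sketch:

```python
from ultralytics import YOLO

# Validate a pose model; PoseValidator reports box metrics and OKS-based pose metrics
model = YOLO("yolo11n-pose.pt")
metrics = model.val(data="coco8-pose.yaml")

print(metrics.box.map)   # box mAP50-95
print(metrics.pose.map)  # pose mAP50-95
```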
271608
# Ultralytics YOLO 🚀, AGPL-3.0 license from ultralytics.engine.results import Results from ultralytics.models.yolo.detect.predict import DetectionPredictor from ultralytics.utils import DEFAULT_CFG, LOGGER, ops class PosePredictor(DetectionPredictor): """ A class extending the DetectionPredictor class for prediction based on a pose model. Example: ```python from ultralytics.utils import ASSETS from ultralytics.models.yolo.pose import PosePredictor args = dict(model="yolov8n-pose.pt", source=ASSETS) predictor = PosePredictor(overrides=args) predictor.predict_cli() ``` """ def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): """Initializes PosePredictor, sets task to 'pose' and logs a warning for using 'mps' as device.""" super().__init__(cfg, overrides, _callbacks) self.args.task = "pose" if isinstance(self.args.device, str) and self.args.device.lower() == "mps": LOGGER.warning( "WARNING ⚠️ Apple MPS known Pose bug. Recommend 'device=cpu' for Pose models. " "See https://github.com/ultralytics/ultralytics/issues/4031." ) def postprocess(self, preds, img, orig_imgs): """Return detection results for a given input image or list of images.""" preds = ops.non_max_suppression( preds, self.args.conf, self.args.iou, agnostic=self.args.agnostic_nms, max_det=self.args.max_det, classes=self.args.classes, nc=len(self.model.names), ) if not isinstance(orig_imgs, list): # input images are a torch.Tensor, not a list orig_imgs = ops.convert_torch2numpy_batch(orig_imgs) results = [] for pred, orig_img, img_path in zip(preds, orig_imgs, self.batch[0]): pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], orig_img.shape).round() pred_kpts = pred[:, 6:].view(len(pred), *self.model.kpt_shape) if len(pred) else pred[:, 6:] pred_kpts = ops.scale_coords(img.shape[2:], pred_kpts, orig_img.shape) results.append( Results(orig_img, path=img_path, names=self.model.names, boxes=pred[:, :6], keypoints=pred_kpts) ) return results
271621
## Python Examples

### Persisting Tracks Loop

Here is a Python script using OpenCV (`cv2`) and YOLO11 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current one.

#### Python

```python
import cv2

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLO11 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLO11 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```

Please note the change from `model(frame)` to `model.track(frame)`, which enables object tracking instead of simple detection. This modified script runs the tracker on each frame of the video, visualizes the results, and displays them in a window. The loop can be exited by pressing 'q'.

### Plotting Tracks Over Time

Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLO11, plotting these tracks is a seamless and efficient process.

In the following example, we demonstrate how to utilize YOLO11's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
#### Python

```python
from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLO11 model
model = YOLO("yolo11n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(list)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLO11 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Get the boxes and track IDs (guard against frames with no tracked objects)
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist() if results[0].boxes.id is not None else []

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # retain the last 30 frames of track history
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(
                annotated_frame,
                [points],
                isClosed=False,
                color=(230, 230, 230),
                thickness=10,
            )

        # Display the annotated frame
        cv2.imshow("YOLO11 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```

### Multithreaded Tracking

Multithreaded tracking provides the capability to run object tracking on multiple video streams simultaneously. This is particularly useful when handling multiple video inputs, such as feeds from several surveillance cameras, where concurrent processing can greatly enhance efficiency and performance.

In the provided Python script, we make use of Python's `threading` module to run multiple instances of the tracker concurrently. Each thread is responsible for running the tracker on one video file, and all the threads run simultaneously in the background.

To ensure that each thread receives the correct parameters (the video file and the model to use), we define a function `run_tracker_in_thread` that accepts these parameters and contains the main tracking loop. This function reads the video frame by frame, runs the tracker, and displays the results.

Two different models are used in this example: `yolo11n.pt` and `yolo11n-seg.pt`, each tracking objects in a different video file. The video files are specified in `video_file1` and `video_file2`.

The `daemon=True` parameter in `threading.Thread` means that these threads will be closed as soon as the main program finishes. We then start the threads with `start()` and use `join()` to make the main thread wait until both tracker threads have finished.

Finally, after all threads have completed their task, the windows displaying the results are closed using `cv2.destroyAllWindows()`.
#### Python

```python
import threading

import cv2

from ultralytics import YOLO


def run_tracker_in_thread(filename, model):
    """Runs tracking on the video at `filename` using `model` and displays the results frame by frame."""
    video = cv2.VideoCapture(filename)
    frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    for _ in range(frames):
        ret, frame = video.read()
        if not ret:
            break
        results = model.track(source=frame, persist=True)
        res_plotted = results[0].plot()
        cv2.imshow(f"Tracking_{filename}", res_plotted)  # one display window per video file
        if cv2.waitKey(1) == ord("q"):
            break
    video.release()


# Load the models
model1 = YOLO("yolo11n.pt")
model2 = YOLO("yolo11n-seg.pt")

# Define the video files for the trackers
video_file1 = "path/to/video1.mp4"
video_file2 = "path/to/video2.mp4"

# Create the tracker threads
tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2), daemon=True)

# Start the tracker threads
tracker_thread1.start()
tracker_thread2.start()

# Wait for the tracker threads to finish
tracker_thread1.join()
tracker_thread2.join()

# Clean up and close windows
cv2.destroyAllWindows()
```

This example can easily be extended to handle more video files and models by creating more threads and applying the same methodology; a sketch of that generalization is included at the end of this page.

## Contribute New Trackers

Are you proficient in multi-object tracking and have successfully implemented or adapted a tracking algorithm with Ultralytics YOLO? We invite you to contribute to our Trackers section in [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers)! Your real-world applications and solutions could be invaluable for users working on tracking tasks.

By contributing to this section, you help expand the scope of tracking solutions available within the Ultralytics YOLO framework, adding another layer of functionality and utility for the community.

To initiate your contribution, please refer to our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) for comprehensive instructions on submitting a Pull Request (PR) 🛠️. We are excited to see what you bring to the table!

Together, let's enhance the tracking capabilities of the Ultralytics YOLO ecosystem 🙏!
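As referenced above, the two-thread example generalizes to any number of streams by building the threads in a loop. The snippet below is a minimal sketch, not part of the official docs code: it reuses the `run_tracker_in_thread` function defined earlier on this page, and the file paths and weights are placeholders you would replace with your own sources.

```python
import threading

import cv2

from ultralytics import YOLO

# NOTE: assumes run_tracker_in_thread() from the example above is defined in this module.

# Hypothetical list of (video path, model weights) pairs — adjust to your own sources
sources = [
    ("path/to/video1.mp4", "yolo11n.pt"),
    ("path/to/video2.mp4", "yolo11n-seg.pt"),
    ("path/to/video3.mp4", "yolo11n.pt"),
]

# One daemon thread per source, each with its own model instance
threads = [
    threading.Thread(target=run_tracker_in_thread, args=(path, YOLO(weights)), daemon=True)
    for path, weights in sources
]

for t in threads:
    t.start()
for t in threads:
    t.join()

# Close all display windows once every tracker thread has finished
cv2.destroyAllWindows()
```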
271637
get_cfg(cfg: Union[str, Path, Dict, SimpleNamespace] = DEFAULT_CFG_DICT, overrides: Dict = None): """ Load and merge configuration data from a file or dictionary, with optional overrides. Args: cfg (str | Path | Dict | SimpleNamespace): Configuration data source. Can be a file path, dictionary, or SimpleNamespace object. overrides (Dict | None): Dictionary containing key-value pairs to override the base configuration. Returns: (SimpleNamespace): Namespace containing the merged configuration arguments. Examples: >>> from ultralytics.cfg import get_cfg >>> config = get_cfg() # Load default configuration >>> config = get_cfg("path/to/config.yaml", overrides={"epochs": 50, "batch_size": 16}) Notes: - If both `cfg` and `overrides` are provided, the values in `overrides` will take precedence. - Special handling ensures alignment and correctness of the configuration, such as converting numeric `project` and `name` to strings and validating configuration keys and values. - The function performs type and value checks on the configuration data. """ cfg = cfg2dict(cfg) # Merge overrides if overrides: overrides = cfg2dict(overrides) if "save_dir" not in cfg: overrides.pop("save_dir", None) # special override keys to ignore check_dict_alignment(cfg, overrides) cfg = {**cfg, **overrides} # merge cfg and overrides dicts (prefer overrides) # Special handling for numeric project/name for k in "project", "name": if k in cfg and isinstance(cfg[k], (int, float)): cfg[k] = str(cfg[k]) if cfg.get("name") == "model": # assign model to 'name' arg cfg["name"] = cfg.get("model", "").split(".")[0] LOGGER.warning(f"WARNING ⚠️ 'name=model' automatically updated to 'name={cfg['name']}'.") # Type and Value checks check_cfg(cfg) # Return instance return IterableSimpleNamespace(**cfg) def check_cfg(cfg, hard=True): """ Checks configuration argument types and values for the Ultralytics library. This function validates the types and values of configuration arguments, ensuring correctness and converting them if necessary. It checks for specific key types defined in global variables such as CFG_FLOAT_KEYS, CFG_FRACTION_KEYS, CFG_INT_KEYS, and CFG_BOOL_KEYS. Args: cfg (Dict): Configuration dictionary to validate. hard (bool): If True, raises exceptions for invalid types and values; if False, attempts to convert them. Examples: >>> config = { ... "epochs": 50, # valid integer ... "lr0": 0.01, # valid float ... "momentum": 1.2, # invalid float (out of 0.0-1.0 range) ... "save": "true", # invalid bool ... } >>> check_cfg(config, hard=False) >>> print(config) {'epochs': 50, 'lr0': 0.01, 'momentum': 1.2, 'save': False} # corrected 'save' key Notes: - The function modifies the input dictionary in-place. - None values are ignored as they may be from optional arguments. - Fraction keys are checked to be within the range [0.0, 1.0]. """ for k, v in cfg.items(): if v is not None: # None values may be from optional args if k in CFG_FLOAT_KEYS and not isinstance(v, (int, float)): if hard: raise TypeError( f"'{k}={v}' is of invalid type {type(v).__name__}. " f"Valid '{k}' types are int (i.e. '{k}=0') or float (i.e. '{k}=0.5')" ) cfg[k] = float(v) elif k in CFG_FRACTION_KEYS: if not isinstance(v, (int, float)): if hard: raise TypeError( f"'{k}={v}' is of invalid type {type(v).__name__}. " f"Valid '{k}' types are int (i.e. '{k}=0') or float (i.e. '{k}=0.5')" ) cfg[k] = v = float(v) if not (0.0 <= v <= 1.0): raise ValueError(f"'{k}={v}' is an invalid value. 
" f"Valid '{k}' values are between 0.0 and 1.0.") elif k in CFG_INT_KEYS and not isinstance(v, int): if hard: raise TypeError( f"'{k}={v}' is of invalid type {type(v).__name__}. " f"'{k}' must be an int (i.e. '{k}=8')" ) cfg[k] = int(v) elif k in CFG_BOOL_KEYS and not isinstance(v, bool): if hard: raise TypeError( f"'{k}={v}' is of invalid type {type(v).__name__}. " f"'{k}' must be a bool (i.e. '{k}=True' or '{k}=False')" ) cfg[k] = bool(v) def get_save_dir(args, name=None): """ Returns the directory path for saving outputs, derived from arguments or default settings. Args: args (SimpleNamespace): Namespace object containing configurations such as 'project', 'name', 'task', 'mode', and 'save_dir'. name (str | None): Optional name for the output directory. If not provided, it defaults to 'args.name' or the 'args.mode'. Returns: (Path): Directory path where outputs should be saved. Examples: >>> from types import SimpleNamespace >>> args = SimpleNamespace(project="my_project", task="detect", mode="train", exist_ok=True) >>> save_dir = get_save_dir(args) >>> print(save_dir) my_project/detect/train """ if getattr(args, "save_dir", None): save_dir = args.save_dir else: from ultralytics.utils.files import increment_path project = args.project or (ROOT.parent / "tests/tmp/runs" if TESTS_RUNNING else RUNS_DIR) / args.task name = name or args.name or f"{args.mode}" save_dir = increment_path(Path(project) / name, exist_ok=args.exist_ok if RANK in {-1, 0} else True) return Path(save_dir) def _handle_deprecation(custom): """ Handles deprecated configuration keys by mapping them to current equivalents with deprecation warnings. Args: custom (Dict): Configuration dictionary potentially containing deprecated keys. Examples: >>> custom_config = {"boxes": True, "hide_labels": "False", "line_thickness": 2} >>> _handle_deprecation(custom_config) >>> print(custom_config) {'show_boxes': True, 'show_labels': True, 'line_width': 2} Notes: This function modifies the input dictionary in-place, replacing deprecated keys with their current equivalents. It also handles value conversions where necessary, such as inverting boolean values for 'hide_labels' and 'hide_conf'. """ for key in custom.copy().keys(): if key == "boxes": deprecation_warn(key, "show_boxes") custom["show_boxes"] = custom.pop("boxes") if key == "hide_labels": deprecation_warn(key, "show_labels") custom["show_labels"] = custom.pop("hide_labels") == "False" if key == "hide_conf": deprecation_warn(key, "show_conf") custom["show_conf"] = custom.pop("hide_conf") == "False" if key == "line_thickness": deprecation_warn(key, "line_width") custom["line_width"] = custom.pop("line_thickness") return custom def
271643
task: detect # (str) YOLO task, i.e. detect, segment, classify, pose, obb
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
data: # (str, optional) path to data file, i.e. coco8.yaml
epochs: 100 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 100 # (int) epochs to wait for no observable improvement for early stopping of training
batch: 16 # (int) number of images per batch (-1 for AutoBatch)
imgsz: 640 # (int | list) input images size as int for train and val modes, or list[h,w] for predict and export modes
save: True # (bool) save train checkpoints and predict results
save_period: -1 # (int) Save checkpoint every x epochs (disabled if < 1)
cache: False # (bool) True/ram, disk or False. Use cache for data loading
device: # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 8 # (int) number of worker threads for data loading (per RANK if DDP)
project: # (str, optional) project name
name: # (str, optional) experiment name, results saved to 'project/name' directory
exist_ok: False # (bool) whether to overwrite existing experiment
pretrained: True # (bool | str) whether to use a pretrained model (bool) or a model to load weights from (str)
optimizer: auto # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose: True # (bool) whether to print verbose output
seed: 0 # (int) random seed for reproducibility
deterministic: True # (bool) whether to enable deterministic mode
single_cls: False # (bool) train multi-class data as single-class
rect: False # (bool) rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False # (bool) use cosine learning rate scheduler
close_mosaic: 10 # (int) disable mosaic augmentation for final epochs (0 to disable)
resume: False # (bool) resume training from last checkpoint
amp: True # (bool) Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
fraction: 1.0 # (float) dataset fraction to train on (default is 1.0, all images in train set)
profile: False # (bool) profile ONNX and TensorRT speeds during training for loggers
freeze: None # (int | list, optional) freeze first n layers, or freeze list of layer indices during training
multi_scale: False # (bool) Whether to use multiscale during training
# Segmentation
overlap_mask: True # (bool) masks should overlap during training (segment train only)
mask_ratio: 4 # (int) mask downsample ratio (segment train only)
# Classification
dropout: 0.0 # (float) use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True # (bool) validate/test during training
split: val # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False # (bool) save results to JSON file
save_hybrid: False # (bool) save hybrid version of labels (labels + additional predictions)
conf: # (float, optional) object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.7 # (float) intersection over union (IoU) threshold for NMS
max_det: 300 # (int) maximum number of detections per image
half: False # (bool) use half precision (FP16)
dnn: False # (bool) use OpenCV DNN for ONNX inference
plots: True # (bool) save plots and images during train/val

# Predict settings -----------------------------------------------------------------------------------------------------
source: # (str, optional) source directory for images or videos
vid_stride: 1 # (int) video frame-rate stride
stream_buffer: False # (bool) buffer all streaming frames (True) or return the most recent frame (False)
visualize: False # (bool) visualize model features
augment: False # (bool) apply image augmentation to prediction sources
agnostic_nms: False # (bool) class-agnostic NMS
classes: # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]
retina_masks: False # (bool) use high-resolution segmentation masks
embed: # (list[int], optional) return feature vectors/embeddings from given layers

# Visualize settings ---------------------------------------------------------------------------------------------------
show: False # (bool) show predicted images and videos if environment allows
save_frames: False # (bool) save predicted individual video frames
save_txt: False # (bool) save results as .txt file
save_conf: False # (bool) save results with confidence scores
save_crop: False # (bool) save cropped images with results
show_labels: True # (bool) show prediction labels, i.e. 'person'
show_conf: True # (bool) show prediction confidence, i.e. '0.99'
show_boxes: True # (bool) show prediction boxes
line_width: # (int, optional) line width of the bounding boxes. Scaled to image size if None.

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
keras: False # (bool) use Keras
optimize: False # (bool) TorchScript: optimize for mobile
int8: False # (bool) CoreML/TF INT8 quantization
dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
simplify: True # (bool) ONNX: simplify model using `onnxslim`
opset: # (int, optional) ONNX: opset version
workspace: 4 # (int) TensorRT: workspace size (GB)
nms: False # (bool) CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 3.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
box: 7.5 # (float) box loss gain
cls: 0.5 # (float) cls loss gain (scale with pixels)
dfl: 1.5 # (float) dfl loss gain
pose: 12.0 # (float) pose loss gain
kobj: 1.0 # (float) keypoint obj loss gain
label_smoothing: 0.0 # (float) label smoothing (fraction)
nbs: 64 # (int) nominal batch size
hsv_h: 0.015 # (float) image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # (float) image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # (float) image HSV-Value augmentation (fraction)
degrees: 0.0 # (float) image rotation (+/- deg)
translate: 0.1 # (float) image translation (+/- fraction)
scale: 0.5 # (float) image scale (+/- gain)
shear: 0.0 # (float) image shear (+/- deg)
perspective: 0.0 # (float) image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # (float) image flip up-down (probability)
fliplr: 0.5 # (float) image flip left-right (probability)
bgr: 0.0 # (float) image channel BGR (probability)
mosaic: 1.0 # (float) image mosaic (probability)
mixup: 0.0 # (float) image mixup (probability)
copy_paste: 0.0 # (float) segment copy-paste (probability)
copy_paste_mode: "flip" #
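Every key in this configuration can be overridden per run, either from the CLI or from Python, without editing the file itself. The snippet below is a brief sketch; the dataset, epoch count, and hyperparameter values are illustrative.

```python
from ultralytics import YOLO

# Load a model and override selected defaults for this training run
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640, lr0=0.005, mosaic=0.5)

# Equivalent CLI call:
#   yolo train model=yolo11n.pt data=coco8.yaml epochs=3 imgsz=640 lr0=0.005 mosaic=0.5
```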
271647
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO8 dataset (first 8 images from COCO train2017) by Ultralytics # Documentation: https://docs.ultralytics.com/datasets/detect/coco8/ # Example usage: yolo train data=coco8.yaml # parent # ├── ultralytics # └── datasets # └── coco8 ← downloads here (1 MB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco8 # dataset root dir train: images/train # train images (relative to 'path') 4 images val: images/val # val images (relative to 'path') 4 images test: # test images (optional) # Classes names: 0: person 1: bicycle 2: car 3: motorcycle 4: airplane 5: bus 6: train 7: truck 8: boat 9: traffic light 10: fire hydrant 11: stop sign 12: parking meter 13: bench 14: bird 15: cat 16: dog 17: horse 18: sheep 19: cow 20: elephant 21: bear 22: zebra 23: giraffe 24: backpack 25: umbrella 26: handbag 27: tie 28: suitcase 29: frisbee 30: skis 31: snowboard 32: sports ball 33: kite 34: baseball bat 35: baseball glove 36: skateboard 37: surfboard 38: tennis racket 39: bottle 40: wine glass 41: cup 42: fork 43: knife 44: spoon 45: bowl 46: banana 47: apple 48: sandwich 49: orange 50: broccoli 51: carrot 52: hot dog 53: pizza 54: donut 55: cake 56: chair 57: couch 58: potted plant 59: bed 60: dining table 61: toilet 62: tv 63: laptop 64: mouse 65: remote 66: keyboard 67: cell phone 68: microwave 69: oven 70: toaster 71: sink 72: refrigerator 73: book 74: clock 75: vase 76: scissors 77: teddy bear 78: hair drier 79: toothbrush # Download script/URL (optional) download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco8.zip
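Because COCO8 contains only 8 images, it is handy for smoke-testing a training or validation pipeline end to end. A short sketch is shown below; paths resolve relative to the configured datasets directory, and the dataset is fetched from the `download` URL above on first use.

```python
from ultralytics import YOLO

# Validate a pretrained model on the 4 COCO8 val images (dataset auto-downloads if missing)
model = YOLO("yolo11n.pt")
metrics = model.val(data="coco8.yaml", imgsz=640)
print(metrics.box.map)  # mAP50-95 on the tiny val split
```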
271648
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO128 dataset https://www.kaggle.com/datasets/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics # Documentation: https://docs.ultralytics.com/datasets/detect/coco/ # Example usage: yolo train data=coco128.yaml # parent # ├── ultralytics # └── datasets # └── coco128 ← downloads here (7 MB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco128 # dataset root dir train: images/train2017 # train images (relative to 'path') 128 images val: images/train2017 # val images (relative to 'path') 128 images test: # test images (optional) # Classes names: 0: person 1: bicycle 2: car 3: motorcycle 4: airplane 5: bus 6: train 7: truck 8: boat 9: traffic light 10: fire hydrant 11: stop sign 12: parking meter 13: bench 14: bird 15: cat 16: dog 17: horse 18: sheep 19: cow 20: elephant 21: bear 22: zebra 23: giraffe 24: backpack 25: umbrella 26: handbag 27: tie 28: suitcase 29: frisbee 30: skis 31: snowboard 32: sports ball 33: kite 34: baseball bat 35: baseball glove 36: skateboard 37: surfboard 38: tennis racket 39: bottle 40: wine glass 41: cup 42: fork 43: knife 44: spoon 45: bowl 46: banana 47: apple 48: sandwich 49: orange 50: broccoli 51: carrot 52: hot dog 53: pizza 54: donut 55: cake 56: chair 57: couch 58: potted plant 59: bed 60: dining table 61: toilet 62: tv 63: laptop 64: mouse 65: remote 66: keyboard 67: cell phone 68: microwave 69: oven 70: toaster 71: sink 72: refrigerator 73: book 74: clock 75: vase 76: scissors 77: teddy bear 78: hair drier 79: toothbrush # Download script/URL (optional) download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco128.zip
271649
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO 2017 dataset https://cocodataset.org by Microsoft # Documentation: https://docs.ultralytics.com/datasets/detect/coco/ # Example usage: yolo train data=coco.yaml # parent # ├── ultralytics # └── datasets # └── coco ← downloads here (20.1 GB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco # dataset root dir train: train2017.txt # train images (relative to 'path') 118287 images val: val2017.txt # val images (relative to 'path') 5000 images test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794 # Classes names: 0: person 1: bicycle 2: car 3: motorcycle 4: airplane 5: bus 6: train 7: truck 8: boat 9: traffic light 10: fire hydrant 11: stop sign 12: parking meter 13: bench 14: bird 15: cat 16: dog 17: horse 18: sheep 19: cow 20: elephant 21: bear 22: zebra 23: giraffe 24: backpack 25: umbrella 26: handbag 27: tie 28: suitcase 29: frisbee 30: skis 31: snowboard 32: sports ball 33: kite 34: baseball bat 35: baseball glove 36: skateboard 37: surfboard 38: tennis racket 39: bottle 40: wine glass 41: cup 42: fork 43: knife 44: spoon 45: bowl 46: banana 47: apple 48: sandwich 49: orange 50: broccoli 51: carrot 52: hot dog 53: pizza 54: donut 55: cake 56: chair 57: couch 58: potted plant 59: bed 60: dining table 61: toilet 62: tv 63: laptop 64: mouse 65: remote 66: keyboard 67: cell phone 68: microwave 69: oven 70: toaster 71: sink 72: refrigerator 73: book 74: clock 75: vase 76: scissors 77: teddy bear 78: hair drier 79: toothbrush # Download script/URL (optional) download: | from ultralytics.utils.downloads import download from pathlib import Path # Download labels segments = True # segment or box labels dir = Path(yaml['path']) # dataset root dir url = 'https://github.com/ultralytics/assets/releases/download/v0.0.0/' urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')] # labels download(urls, dir=dir.parent) # Download data urls = ['http://images.cocodataset.org/zips/train2017.zip', # 19G, 118k images 'http://images.cocodataset.org/zips/val2017.zip', # 1G, 5k images 'http://images.cocodataset.org/zips/test2017.zip'] # 7G, 41k images (optional) download(urls, dir=dir / 'images', threads=3)
271650
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO128-seg dataset https://www.kaggle.com/datasets/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics # Documentation: https://docs.ultralytics.com/datasets/segment/coco/ # Example usage: yolo train data=coco128.yaml # parent # ├── ultralytics # └── datasets # └── coco128-seg ← downloads here (7 MB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco128-seg # dataset root dir train: images/train2017 # train images (relative to 'path') 128 images val: images/train2017 # val images (relative to 'path') 128 images test: # test images (optional) # Classes names: 0: person 1: bicycle 2: car 3: motorcycle 4: airplane 5: bus 6: train 7: truck 8: boat 9: traffic light 10: fire hydrant 11: stop sign 12: parking meter 13: bench 14: bird 15: cat 16: dog 17: horse 18: sheep 19: cow 20: elephant 21: bear 22: zebra 23: giraffe 24: backpack 25: umbrella 26: handbag 27: tie 28: suitcase 29: frisbee 30: skis 31: snowboard 32: sports ball 33: kite 34: baseball bat 35: baseball glove 36: skateboard 37: surfboard 38: tennis racket 39: bottle 40: wine glass 41: cup 42: fork 43: knife 44: spoon 45: bowl 46: banana 47: apple 48: sandwich 49: orange 50: broccoli 51: carrot 52: hot dog 53: pizza 54: donut 55: cake 56: chair 57: couch 58: potted plant 59: bed 60: dining table 61: toilet 62: tv 63: laptop 64: mouse 65: remote 66: keyboard 67: cell phone 68: microwave 69: oven 70: toaster 71: sink 72: refrigerator 73: book 74: clock 75: vase 76: scissors 77: teddy bear 78: hair drier 79: toothbrush # Download script/URL (optional) download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco128-seg.zip
271665
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO8-pose dataset (first 8 images from COCO train2017) by Ultralytics # Documentation: https://docs.ultralytics.com/datasets/pose/coco8-pose/ # Example usage: yolo train data=coco8-pose.yaml # parent # ├── ultralytics # └── datasets # └── coco8-pose ← downloads here (1 MB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco8-pose # dataset root dir train: images/train # train images (relative to 'path') 4 images val: images/val # val images (relative to 'path') 4 images test: # test images (optional) # Keypoints kpt_shape: [17, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible) flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15] # Classes names: 0: person # Download script/URL (optional) download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco8-pose.zip
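The `kpt_shape` and `flip_idx` entries are what distinguish a pose dataset file: they tell the trainer how many keypoints each instance carries (and whether visibility is included) and how left/right keypoints swap when an image is flipped horizontally. A brief training sketch follows; the epoch count is illustrative.

```python
from ultralytics import YOLO

# Fine-tune a pretrained pose model on the COCO8-pose sample dataset
model = YOLO("yolo11n-pose.pt")
model.train(data="coco8-pose.yaml", epochs=3, imgsz=640)
```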
271668
# Ultralytics YOLO 🚀, AGPL-3.0 license # Objects365 dataset https://www.objects365.org/ by Megvii # Documentation: https://docs.ultralytics.com/datasets/detect/objects365/ # Example usage: yolo train data=Objects365.yaml # parent # ├── ultralytics # └── datasets # └── Objects365 ← downloads here (712 GB = 367G data + 345G zips) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/Objects365 # dataset root dir train: images/train # train images (relative to 'path') 1742289 images val: images/val # val images (relative to 'path') 80000 images test: # test images (optional) # Classes names: 0: Person 1
271692
# Ultralytics YOLO 🚀, AGPL-3.0 license # COCO8-seg dataset (first 8 images from COCO train2017) by Ultralytics # Documentation: https://docs.ultralytics.com/datasets/segment/coco8-seg/ # Example usage: yolo train data=coco8-seg.yaml # parent # ├── ultralytics # └── datasets # └── coco8-seg ← downloads here (1 MB) # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] path: ../datasets/coco8-seg # dataset root dir train: images/train # train images (relative to 'path') 4 images val: images/val # val images (relative to 'path') 4 images test: # test images (optional) # Classes names: 0: person 1: bicycle 2: car 3: motorcycle 4: airplane 5: bus 6: train 7: truck 8: boat 9: traffic light 10: fire hydrant 11: stop sign 12: parking meter 13: bench 14: bird 15: cat 16: dog 17: horse 18: sheep 19: cow 20: elephant 21: bear 22: zebra 23: giraffe 24: backpack 25: umbrella 26: handbag 27: tie 28: suitcase 29: frisbee 30: skis 31: snowboard 32: sports ball 33: kite 34: baseball bat 35: baseball glove 36: skateboard 37: surfboard 38: tennis racket 39: bottle 40: wine glass 41: cup 42: fork 43: knife 44: spoon 45: bowl 46: banana 47: apple 48: sandwich 49: orange 50: broccoli 51: carrot 52: hot dog 53: pizza 54: donut 55: cake 56: chair 57: couch 58: potted plant 59: bed 60: dining table 61: toilet 62: tv 63: laptop 64: mouse 65: remote 66: keyboard 67: cell phone 68: microwave 69: oven 70: toaster 71: sink 72: refrigerator 73: book 74: clock 75: vase 76: scissors 77: teddy bear 78: hair drier 79: toothbrush # Download script/URL (optional) download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco8-seg.zip
271698
# Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv8 object detection model with P3-P6 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect # Parameters nc: 80 # number of classes scales: # model compound scaling constants, i.e. 'model=yolov8n-p6.yaml' will call yolov8-p6.yaml with scale 'n' # [depth, width, max_channels] n: [0.33, 0.25, 1024] # YOLOv8n-p6 summary (fused): 220 layers, 4976656 parameters, 42560 gradients, 8.7 GFLOPs s: [0.33, 0.50, 1024] # YOLOv8s-p6 summary (fused): 220 layers, 17897168 parameters, 57920 gradients, 28.5 GFLOPs m: [0.67, 0.75, 768] # YOLOv8m-p6 summary (fused): 285 layers, 44862352 parameters, 78400 gradients, 83.1 GFLOPs l: [1.00, 1.00, 512] # YOLOv8l-p6 summary (fused): 350 layers, 62351440 parameters, 98880 gradients, 167.3 GFLOPs x: [1.00, 1.25, 512] # YOLOv8x-p6 summary (fused): 350 layers, 97382352 parameters, 123456 gradients, 261.1 GFLOPs # YOLOv8.0x6 backbone backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 3, C2f, [128, True]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 6, C2f, [256, True]] - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16 - [-1, 6, C2f, [512, True]] - [-1, 1, Conv, [768, 3, 2]] # 7-P5/32 - [-1, 3, C2f, [768, True]] - [-1, 1, Conv, [1024, 3, 2]] # 9-P6/64 - [-1, 3, C2f, [1024, True]] - [-1, 1, SPPF, [1024, 5]] # 11 # YOLOv8.0x6 head head: - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 8], 1, Concat, [1]] # cat backbone P5 - [-1, 3, C2, [768, False]] # 14 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 3, C2, [512, False]] # 17 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 3, C2, [256, False]] # 20 (P3/8-small) - [-1, 1, Conv, [256, 3, 2]] - [[-1, 17], 1, Concat, [1]] # cat head P4 - [-1, 3, C2, [512, False]] # 23 (P4/16-medium) - [-1, 1, Conv, [512, 3, 2]] - [[-1, 14], 1, Concat, [1]] # cat head P5 - [-1, 3, C2, [768, False]] # 26 (P5/32-large) - [-1, 1, Conv, [768, 3, 2]] - [[-1, 11], 1, Concat, [1]] # cat head P6 - [-1, 3, C2, [1024, False]] # 29 (P6/64-xlarge) - [[20, 23, 26, 29], 1, Detect, [nc]] # Detect(P3, P4, P5, P6)
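As the `scales` comment notes, a scale letter embedded in the requested file name selects one row of the table, so asking for `yolov8s-p6.yaml` resolves to this file with the 's' constants. The sketch below builds the architecture from YAML with randomly initialized weights; loading pretrained weights would require a `.pt` checkpoint instead.

```python
from ultralytics import YOLO

# Build a P3-P6 detection model from this YAML at the 's' scale (no pretrained weights)
model = YOLO("yolov8s-p6.yaml")
model.info()  # print a layer/parameter summary for the selected scale
```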
271704
# Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv8-pose keypoints/pose estimation model. For Usage examples see https://docs.ultralytics.com/tasks/pose # Parameters nc: 1 # number of classes kpt_shape: [17, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible) scales: # model compound scaling constants, i.e. 'model=yolov8n-pose.yaml' will call yolov8-pose.yaml with scale 'n' # [depth, width, max_channels] n: [0.33, 0.25, 1024] s: [0.33, 0.50, 1024] m: [0.67, 0.75, 768] l: [1.00, 1.00, 512] x: [1.00, 1.25, 512] # YOLOv8.0n backbone backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 3, C2f, [128, True]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 6, C2f, [256, True]] - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16 - [-1, 6, C2f, [512, True]] - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32 - [-1, 3, C2f, [1024, True]] - [-1, 1, SPPF, [1024, 5]] # 9 # YOLOv8.0n head head: - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 3, C2f, [512]] # 12 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 3, C2f, [256]] # 15 (P3/8-small) - [-1, 1, Conv, [256, 3, 2]] - [[-1, 12], 1, Concat, [1]] # cat head P4 - [-1, 3, C2f, [512]] # 18 (P4/16-medium) - [-1, 1, Conv, [512, 3, 2]] - [[-1, 9], 1, Concat, [1]] # cat head P5 - [-1, 3, C2f, [1024]] # 21 (P5/32-large) - [[15, 18, 21], 1, Pose, [nc, kpt_shape]] # Pose(P3, P4, P5)
271706
# Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv8 object detection model with P2-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect # Parameters nc: 80 # number of classes scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n' # [depth, width, max_channels] n: [0.33, 0.25, 1024] s: [0.33, 0.50, 1024] m: [0.67, 0.75, 768] l: [1.00, 1.00, 512] x: [1.00, 1.25, 512] # YOLOv8.0 backbone backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 3, C2f, [128, True]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 6, C2f, [256, True]] - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16 - [-1, 6, C2f, [512, True]] - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32 - [-1, 3, C2f, [1024, True]] - [-1, 1, SPPF, [1024, 5]] # 9 # YOLOv8.0-p2 head head: - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 3, C2f, [512]] # 12 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 3, C2f, [256]] # 15 (P3/8-small) - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 2], 1, Concat, [1]] # cat backbone P2 - [-1, 3, C2f, [128]] # 18 (P2/4-xsmall) - [-1, 1, Conv, [128, 3, 2]] - [[-1, 15], 1, Concat, [1]] # cat head P3 - [-1, 3, C2f, [256]] # 21 (P3/8-small) - [-1, 1, Conv, [256, 3, 2]] - [[-1, 12], 1, Concat, [1]] # cat head P4 - [-1, 3, C2f, [512]] # 24 (P4/16-medium) - [-1, 1, Conv, [512, 3, 2]] - [[-1, 9], 1, Concat, [1]] # cat head P5 - [-1, 3, C2f, [1024]] # 27 (P5/32-large) - [[18, 21, 24, 27], 1, Detect, [nc]] # Detect(P2, P3, P4, P5)
271708
# Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect # Parameters nc: 80 # number of classes scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n' # [depth, width, max_channels] n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs # YOLOv8.0n backbone backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 3, C2f, [128, True]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 6, C2f, [256, True]] - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16 - [-1, 6, C2f, [512, True]] - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32 - [-1, 3, C2f, [1024, True]] - [-1, 1, SPPF, [1024, 5]] # 9 # YOLOv8.0n head head: - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 3, C2f, [512]] # 12 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 3, C2f, [256]] # 15 (P3/8-small) - [-1, 1, Conv, [256, 3, 2]] - [[-1, 12], 1, Concat, [1]] # cat head P4 - [-1, 3, C2f, [512]] # 18 (P4/16-medium) - [-1, 1, Conv, [512, 3, 2]] - [[-1, 9], 1, Concat, [1]] # cat head P5 - [-1, 3, C2f, [1024]] # 21 (P5/32-large) - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)
271736
# Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv10 object detection model. For Usage examples see https://docs.ultralytics.com/tasks/detect # Parameters nc: 80 # number of classes scales: # model compound scaling constants, i.e. 'model=yolov10n.yaml' will call yolov10.yaml with scale 'n' # [depth, width, max_channels] m: [0.67, 0.75, 768] backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 3, C2f, [128, True]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 6, C2f, [256, True]] - [-1, 1, SCDown, [512, 3, 2]] # 5-P4/16 - [-1, 6, C2f, [512, True]] - [-1, 1, SCDown, [1024, 3, 2]] # 7-P5/32 - [-1, 3, C2fCIB, [1024, True]] - [-1, 1, SPPF, [1024, 5]] # 9 - [-1, 1, PSA, [1024]] # 10 # YOLOv10.0n head head: - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 3, C2f, [512]] # 13 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 3, C2f, [256]] # 16 (P3/8-small) - [-1, 1, Conv, [256, 3, 2]] - [[-1, 13], 1, Concat, [1]] # cat head P4 - [-1, 3, C2fCIB, [512, True]] # 19 (P4/16-medium) - [-1, 1, SCDown, [512, 3, 2]] - [[-1, 10], 1, Concat, [1]] # cat head P5 - [-1, 3, C2fCIB, [1024, True]] # 22 (P5/32-large) - [[16, 19, 22], 1, v10Detect, [nc]] # Detect(P3, P4, P5)
271743
crop_and_save(anno, windows, window_objs, im_dir, lb_dir, allow_background_images=True): """ Crop images and save new labels. Args: anno (dict): Annotation dict, including `filepath`, `label`, `ori_size` as its keys. windows (list): A list of windows coordinates. window_objs (list): A list of labels inside each window. im_dir (str): The output directory path of images. lb_dir (str): The output directory path of labels. allow_background_images (bool): Whether to include background images without labels. Notes: The directory structure assumed for the DOTA dataset: - data_root - images - train - val - labels - train - val """ im = cv2.imread(anno["filepath"]) name = Path(anno["filepath"]).stem for i, window in enumerate(windows): x_start, y_start, x_stop, y_stop = window.tolist() new_name = f"{name}__{x_stop - x_start}__{x_start}___{y_start}" patch_im = im[y_start:y_stop, x_start:x_stop] ph, pw = patch_im.shape[:2] label = window_objs[i] if len(label) or allow_background_images: cv2.imwrite(str(Path(im_dir) / f"{new_name}.jpg"), patch_im) if len(label): label[:, 1::2] -= x_start label[:, 2::2] -= y_start label[:, 1::2] /= pw label[:, 2::2] /= ph with open(Path(lb_dir) / f"{new_name}.txt", "w") as f: for lb in label: formatted_coords = [f"{coord:.6g}" for coord in lb[1:]] f.write(f"{int(lb[0])} {' '.join(formatted_coords)}\n") def split_images_and_labels(data_root, save_dir, split="train", crop_sizes=(1024,), gaps=(200,)): """ Split both images and labels. Notes: The directory structure assumed for the DOTA dataset: - data_root - images - split - labels - split and the output directory structure is: - save_dir - images - split - labels - split """ im_dir = Path(save_dir) / "images" / split im_dir.mkdir(parents=True, exist_ok=True) lb_dir = Path(save_dir) / "labels" / split lb_dir.mkdir(parents=True, exist_ok=True) annos = load_yolo_dota(data_root, split=split) for anno in tqdm(annos, total=len(annos), desc=split): windows = get_windows(anno["ori_size"], crop_sizes, gaps) window_objs = get_window_obj(anno, windows) crop_and_save(anno, windows, window_objs, str(im_dir), str(lb_dir)) def split_trainval(data_root, save_dir, crop_size=1024, gap=200, rates=(1.0,)): """ Split train and val set of DOTA. Notes: The directory structure assumed for the DOTA dataset: - data_root - images - train - val - labels - train - val and the output directory structure is: - save_dir - images - train - val - labels - train - val """ crop_sizes, gaps = [], [] for r in rates: crop_sizes.append(int(crop_size / r)) gaps.append(int(gap / r)) for split in ["train", "val"]: split_images_and_labels(data_root, save_dir, split, crop_sizes, gaps) def split_test(data_root, save_dir, crop_size=1024, gap=200, rates=(1.0,)): """ Split test set of DOTA, labels are not included within this set. Notes: The directory structure assumed for the DOTA dataset: - data_root - images - test and the output directory structure is: - save_dir - images - test """ crop_sizes, gaps = [], [] for r in rates: crop_sizes.append(int(crop_size / r)) gaps.append(int(gap / r)) save_dir = Path(save_dir) / "images" / "test" save_dir.mkdir(parents=True, exist_ok=True) im_dir = Path(data_root) / "images" / "test" assert im_dir.exists(), f"Can't find {im_dir}, please check your data root." 
im_files = glob(str(im_dir / "*")) for im_file in tqdm(im_files, total=len(im_files), desc="test"): w, h = exif_size(Image.open(im_file)) windows = get_windows((h, w), crop_sizes=crop_sizes, gaps=gaps) im = cv2.imread(im_file) name = Path(im_file).stem for window in windows: x_start, y_start, x_stop, y_stop = window.tolist() new_name = f"{name}__{x_stop - x_start}__{x_start}___{y_start}" patch_im = im[y_start:y_stop, x_start:x_stop] cv2.imwrite(str(save_dir / f"{new_name}.jpg"), patch_im) if __name__ == "__main__": split_trainval(data_root="DOTAv2", save_dir="DOTAv2-split") split_test(data_root="DOTAv2", save_dir="DOTAv2-split")
271751
# Ultralytics YOLO 🚀, AGPL-3.0 license import json import random import shutil from collections import defaultdict from concurrent.futures import ThreadPoolExecutor, as_completed from pathlib import Path import cv2 import numpy as np from PIL import Image from ultralytics.utils import DATASETS_DIR, LOGGER, NUM_THREADS, TQDM from ultralytics.utils.downloads import download from ultralytics.utils.files import increment_path def coco91_to_coco80_class(): """ Converts 91-index COCO class IDs to 80-index COCO class IDs. Returns: (list): A list of 91 class IDs where the index represents the 80-index class ID and the value is the corresponding 91-index class ID. """ return [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, None, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, None, 24, 25, None, None, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, None, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, None, 60, None, None, 61, None, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, None, 73, 74, 75, 76, 77, 78, 79, None, ] def coco80_to_coco91_class(): r""" Converts 80-index (val2014) to 91-index (paper). For details see https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/. Example: ```python import numpy as np a = np.loadtxt("data/coco.names", dtype="str", delimiter="\n") b = np.loadtxt("data/coco_paper.names", dtype="str", delimiter="\n") x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet ``` """ return [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90, ]
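In practice, the list returned by `coco91_to_coco80_class` is indexed by the 1-based 91-class category ID minus one, and the stored value is the corresponding 0-based 80-class index (or `None` for categories absent from the 80-class set). A small sketch of remapping a single annotation:

```python
from ultralytics.data.converter import coco91_to_coco80_class

coco80 = coco91_to_coco80_class()

category_id = 13  # 1-based 91-index COCO category for "stop sign"
cls = coco80[category_id - 1]
print(cls)  # -> 11, the 0-based 80-index class used in YOLO label files
```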
271760
ss Mosaic(BaseMixTransform): """ Mosaic augmentation for image datasets. This class performs mosaic augmentation by combining multiple (4 or 9) images into a single mosaic image. The augmentation is applied to a dataset with a given probability. Attributes: dataset: The dataset on which the mosaic augmentation is applied. imgsz (int): Image size (height and width) after mosaic pipeline of a single image. p (float): Probability of applying the mosaic augmentation. Must be in the range 0-1. n (int): The grid size, either 4 (for 2x2) or 9 (for 3x3). border (Tuple[int, int]): Border size for width and height. Methods: get_indexes: Returns a list of random indexes from the dataset. _mix_transform: Applies mixup transformation to the input image and labels. _mosaic3: Creates a 1x3 image mosaic. _mosaic4: Creates a 2x2 image mosaic. _mosaic9: Creates a 3x3 image mosaic. _update_labels: Updates labels with padding. _cat_labels: Concatenates labels and clips mosaic border instances. Examples: >>> from ultralytics.data.augment import Mosaic >>> dataset = YourDataset(...) # Your image dataset >>> mosaic_aug = Mosaic(dataset, imgsz=640, p=0.5, n=4) >>> augmented_labels = mosaic_aug(original_labels) """ def __init__(self, dataset, imgsz=640, p=1.0, n=4): """ Initializes the Mosaic augmentation object. This class performs mosaic augmentation by combining multiple (4 or 9) images into a single mosaic image. The augmentation is applied to a dataset with a given probability. Args: dataset (Any): The dataset on which the mosaic augmentation is applied. imgsz (int): Image size (height and width) after mosaic pipeline of a single image. p (float): Probability of applying the mosaic augmentation. Must be in the range 0-1. n (int): The grid size, either 4 (for 2x2) or 9 (for 3x3). Examples: >>> from ultralytics.data.augment import Mosaic >>> dataset = YourDataset(...) >>> mosaic_aug = Mosaic(dataset, imgsz=640, p=0.5, n=4) """ assert 0 <= p <= 1.0, f"The probability should be in range [0, 1], but got {p}." assert n in {4, 9}, "grid must be equal to 4 or 9." super().__init__(dataset=dataset, p=p) self.imgsz = imgsz self.border = (-imgsz // 2, -imgsz // 2) # width, height self.n = n def get_indexes(self, buffer=True): """ Returns a list of random indexes from the dataset for mosaic augmentation. This method selects random image indexes either from a buffer or from the entire dataset, depending on the 'buffer' parameter. It is used to choose images for creating mosaic augmentations. Args: buffer (bool): If True, selects images from the dataset buffer. If False, selects from the entire dataset. Returns: (List[int]): A list of random image indexes. The length of the list is n-1, where n is the number of images used in the mosaic (either 3 or 8, depending on whether n is 4 or 9). Examples: >>> mosaic = Mosaic(dataset, imgsz=640, p=1.0, n=4) >>> indexes = mosaic.get_indexes() >>> print(len(indexes)) # Output: 3 """ if buffer: # select images from buffer return random.choices(list(self.dataset.buffer), k=self.n - 1) else: # select any images return [random.randint(0, len(self.dataset) - 1) for _ in range(self.n - 1)] def _mix_transform(self, labels): """ Applies mosaic augmentation to the input image and labels. This method combines multiple images (3, 4, or 9) into a single mosaic image based on the 'n' attribute. It ensures that rectangular annotations are not present and that there are other images available for mosaic augmentation. Args: labels (Dict): A dictionary containing image data and annotations. 
Expected keys include: - 'rect_shape': Should be None as rect and mosaic are mutually exclusive. - 'mix_labels': A list of dictionaries containing data for other images to be used in the mosaic. Returns: (Dict): A dictionary containing the mosaic-augmented image and updated annotations. Raises: AssertionError: If 'rect_shape' is not None or if 'mix_labels' is empty. Examples: >>> mosaic = Mosaic(dataset, imgsz=640, p=1.0, n=4) >>> augmented_data = mosaic._mix_transform(labels) """ assert labels.get("rect_shape", None) is None, "rect and mosaic are mutually exclusive." assert len(labels.get("mix_labels", [])), "There are no other images for mosaic augment." return ( self._mosaic3(labels) if self.n == 3 else self._mosaic4(labels) if self.n == 4 else self._mosaic9(labels) ) # This code is modified for mosaic3 method. def _mosaic3(self, labels): """ Creates a 1x3 image mosaic by combining three images. This method arranges three images in a horizontal layout, with the main image in the center and two additional images on either side. It's part of the Mosaic augmentation technique used in object detection. Args: labels (Dict): A dictionary containing image and label information for the main (center) image. Must include 'img' key with the image array, and 'mix_labels' key with a list of two dictionaries containing information for the side images. Returns: (Dict): A dictionary with the mosaic image and updated labels. Keys include: - 'img' (np.ndarray): The mosaic image array with shape (H, W, C). - Other keys from the input labels, updated to reflect the new image dimensions. Examples: >>> mosaic = Mosaic(dataset, imgsz=640, p=1.0, n=3) >>> labels = { ... "img": np.random.rand(480, 640, 3), ... "mix_labels": [{"img": np.random.rand(480, 640, 3)} for _ in range(2)], ... } >>> result = mosaic._mosaic3(labels) >>> print(result["img"].shape) (640, 640, 3) """ mosaic_labels = [] s = self.imgsz for i in range(3): labels_patch = labels if i == 0 else labels["mix_labels"][i - 1] # Load image img = labels_patch["img"] h, w = labels_patch.pop("resized_shape") # Place img in img3 if i == 0: # center img3 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 3 tiles h0, w0 = h, w c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates elif i == 1: # right c = s + w0, s, s + w0 + w, s + h elif i == 2: # left c = s - w, s + h0 - h, s, s + h0 padw, padh = c[:2] x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords img3[y1:y2, x1:x2] = img[y1 - padh :, x1 - padw :] # img3[ymin:ymax, xmin:xmax] # hp, wp = h, w # height, width previous for next iteration # Labels assuming imgsz*2 mosaic size labels_patch = self._update_labels(labels_patch, padw + self.border[0], padh + self.border[1]) mosaic_labels.append(labels_patch) final_labels = self._cat_labels(mosaic_labels) final_labels["img"] = img3[-self.border[0] : self.border[0], -self.border[1] : self.border[1]] return final_labels
271762
aticmethod def _update_labels(labels, padw, padh): """ Updates label coordinates with padding values. This method adjusts the bounding box coordinates of object instances in the labels by adding padding values. It also denormalizes the coordinates if they were previously normalized. Args: labels (Dict): A dictionary containing image and instance information. padw (int): Padding width to be added to the x-coordinates. padh (int): Padding height to be added to the y-coordinates. Returns: (Dict): Updated labels dictionary with adjusted instance coordinates. Examples: >>> labels = {"img": np.zeros((100, 100, 3)), "instances": Instances(...)} >>> padw, padh = 50, 50 >>> updated_labels = Mosaic._update_labels(labels, padw, padh) """ nh, nw = labels["img"].shape[:2] labels["instances"].convert_bbox(format="xyxy") labels["instances"].denormalize(nw, nh) labels["instances"].add_padding(padw, padh) return labels def _cat_labels(self, mosaic_labels): """ Concatenates and processes labels for mosaic augmentation. This method combines labels from multiple images used in mosaic augmentation, clips instances to the mosaic border, and removes zero-area boxes. Args: mosaic_labels (List[Dict]): A list of label dictionaries for each image in the mosaic. Returns: (Dict): A dictionary containing concatenated and processed labels for the mosaic image, including: - im_file (str): File path of the first image in the mosaic. - ori_shape (Tuple[int, int]): Original shape of the first image. - resized_shape (Tuple[int, int]): Shape of the mosaic image (imgsz * 2, imgsz * 2). - cls (np.ndarray): Concatenated class labels. - instances (Instances): Concatenated instance annotations. - mosaic_border (Tuple[int, int]): Mosaic border size. - texts (List[str], optional): Text labels if present in the original labels. Examples: >>> mosaic = Mosaic(dataset, imgsz=640) >>> mosaic_labels = [{"cls": np.array([0, 1]), "instances": Instances(...)} for _ in range(4)] >>> result = mosaic._cat_labels(mosaic_labels) >>> print(result.keys()) dict_keys(['im_file', 'ori_shape', 'resized_shape', 'cls', 'instances', 'mosaic_border']) """ if len(mosaic_labels) == 0: return {} cls = [] instances = [] imgsz = self.imgsz * 2 # mosaic imgsz for labels in mosaic_labels: cls.append(labels["cls"]) instances.append(labels["instances"]) # Final labels final_labels = { "im_file": mosaic_labels[0]["im_file"], "ori_shape": mosaic_labels[0]["ori_shape"], "resized_shape": (imgsz, imgsz), "cls": np.concatenate(cls, 0), "instances": Instances.concatenate(instances, axis=0), "mosaic_border": self.border, } final_labels["instances"].clip(imgsz, imgsz) good = final_labels["instances"].remove_zero_area_boxes() final_labels["cls"] = final_labels["cls"][good] if "texts" in mosaic_labels[0]: final_labels["texts"] = mosaic_labels[0]["texts"] return final_labels class MixUp(BaseMixTransform): """ Applies MixUp augmentation to image datasets. This class implements the MixUp augmentation technique as described in the paper "mixup: Beyond Empirical Risk Minimization" (https://arxiv.org/abs/1710.09412). MixUp combines two images and their labels using a random weight. Attributes: dataset (Any): The dataset to which MixUp augmentation will be applied. pre_transform (Callable | None): Optional transform to apply before MixUp. p (float): Probability of applying MixUp augmentation. Methods: get_indexes: Returns a random index from the dataset. _mix_transform: Applies MixUp augmentation to the input labels. 
Examples: >>> from ultralytics.data.augment import MixUp >>> dataset = YourDataset(...) # Your image dataset >>> mixup = MixUp(dataset, p=0.5) >>> augmented_labels = mixup(original_labels) """ def __init__(self, dataset, pre_transform=None, p=0.0) -> None: """ Initializes the MixUp augmentation object. MixUp is an image augmentation technique that combines two images by taking a weighted sum of their pixel values and labels. This implementation is designed for use with the Ultralytics YOLO framework. Args: dataset (Any): The dataset to which MixUp augmentation will be applied. pre_transform (Callable | None): Optional transform to apply to images before MixUp. p (float): Probability of applying MixUp augmentation to an image. Must be in the range [0, 1]. Examples: >>> from ultralytics.data.dataset import YOLODataset >>> dataset = YOLODataset("path/to/data.yaml") >>> mixup = MixUp(dataset, pre_transform=None, p=0.5) """ super().__init__(dataset=dataset, pre_transform=pre_transform, p=p) def get_indexes(self): """ Get a random index from the dataset. This method returns a single random index from the dataset, which is used to select an image for MixUp augmentation. Returns: (int): A random integer index within the range of the dataset length. Examples: >>> mixup = MixUp(dataset) >>> index = mixup.get_indexes() >>> print(index) 42 """ return random.randint(0, len(self.dataset) - 1) def _mix_transform(self, labels): """ Applies MixUp augmentation to the input labels. This method implements the MixUp augmentation technique as described in the paper "mixup: Beyond Empirical Risk Minimization" (https://arxiv.org/abs/1710.09412). Args: labels (Dict): A dictionary containing the original image and label information. Returns: (Dict): A dictionary containing the mixed-up image and combined label information. Examples: >>> mixer = MixUp(dataset) >>> mixed_labels = mixer._mix_transform(labels) """ r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0 labels2 = labels["mix_labels"][0] labels["img"] = (labels["img"] * r + labels2["img"] * (1 - r)).astype(np.uint8) labels["instances"] = Instances.concatenate([labels["instances"], labels2["instances"]], axis=0) labels["cls"] = np.concatenate([labels["cls"], labels2["cls"]], 0) return labels
271769
class Albumentations: """ Albumentations transformations for image augmentation. This class applies various image transformations using the Albumentations library. It includes operations such as Blur, Median Blur, conversion to grayscale, Contrast Limited Adaptive Histogram Equalization (CLAHE), random changes in brightness and contrast, RandomGamma, and image quality reduction through compression. Attributes: p (float): Probability of applying the transformations. transform (albumentations.Compose): Composed Albumentations transforms. contains_spatial (bool): Indicates if the transforms include spatial operations. Methods: __call__: Applies the Albumentations transformations to the input labels. Examples: >>> transform = Albumentations(p=0.5) >>> augmented_labels = transform(labels) Notes: - The Albumentations package must be installed to use this class. - If the package is not installed or an error occurs during initialization, the transform will be set to None. - Spatial transforms are handled differently and require special processing for bounding boxes. """ def __init__(self, p=1.0): """ Initialize the Albumentations transform object for YOLO bbox formatted parameters. This class applies various image augmentations using the Albumentations library, including Blur, Median Blur, conversion to grayscale, Contrast Limited Adaptive Histogram Equalization, random changes of brightness and contrast, RandomGamma, and image quality reduction through compression. Args: p (float): Probability of applying the augmentations. Must be between 0 and 1. Attributes: p (float): Probability of applying the augmentations. transform (albumentations.Compose): Composed Albumentations transforms. contains_spatial (bool): Indicates if the transforms include spatial transformations. Raises: ImportError: If the Albumentations package is not installed. Exception: For any other errors during initialization. Examples: >>> transform = Albumentations(p=0.5) >>> augmented = transform(image=image, bboxes=bboxes, class_labels=classes) >>> augmented_image = augmented["image"] >>> augmented_bboxes = augmented["bboxes"] Notes: - Requires Albumentations version 1.0.3 or higher. - Spatial transforms are handled differently to ensure bbox compatibility. - Some transforms are applied with very low probability (0.01) by default.
""" self.p = p self.transform = None prefix = colorstr("albumentations: ") try: import albumentations as A check_version(A.__version__, "1.0.3", hard=True) # version requirement # List of possible spatial transforms spatial_transforms = { "Affine", "BBoxSafeRandomCrop", "CenterCrop", "CoarseDropout", "Crop", "CropAndPad", "CropNonEmptyMaskIfExists", "D4", "ElasticTransform", "Flip", "GridDistortion", "GridDropout", "HorizontalFlip", "Lambda", "LongestMaxSize", "MaskDropout", "MixUp", "Morphological", "NoOp", "OpticalDistortion", "PadIfNeeded", "Perspective", "PiecewiseAffine", "PixelDropout", "RandomCrop", "RandomCropFromBorders", "RandomGridShuffle", "RandomResizedCrop", "RandomRotate90", "RandomScale", "RandomSizedBBoxSafeCrop", "RandomSizedCrop", "Resize", "Rotate", "SafeRotate", "ShiftScaleRotate", "SmallestMaxSize", "Transpose", "VerticalFlip", "XYMasking", } # from https://albumentations.ai/docs/getting_started/transforms_and_targets/#spatial-level-transforms # Transforms T = [ A.Blur(p=0.01), A.MedianBlur(p=0.01), A.ToGray(p=0.01), A.CLAHE(p=0.01), A.RandomBrightnessContrast(p=0.0), A.RandomGamma(p=0.0), A.ImageCompression(quality_lower=75, p=0.0), ] # Compose transforms self.contains_spatial = any(transform.__class__.__name__ in spatial_transforms for transform in T) self.transform = ( A.Compose(T, bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"])) if self.contains_spatial else A.Compose(T) ) LOGGER.info(prefix + ", ".join(f"{x}".replace("always_apply=False, ", "") for x in T if x.p)) except ImportError: # package not installed, skip pass except Exception as e: LOGGER.info(f"{prefix}{e}") def __call__(self, labels): """ Applies Albumentations transformations to input labels. This method applies a series of image augmentations using the Albumentations library. It can perform both spatial and non-spatial transformations on the input image and its corresponding labels. Args: labels (Dict): A dictionary containing image data and annotations. Expected keys are: - 'img': numpy.ndarray representing the image - 'cls': numpy.ndarray of class labels - 'instances': object containing bounding boxes and other instance information Returns: (Dict): The input dictionary with augmented image and updated annotations. Examples: >>> transform = Albumentations(p=0.5) >>> labels = { ... "img": np.random.rand(640, 640, 3), ... "cls": np.array([0, 1]), ... "instances": Instances(bboxes=np.array([[0, 0, 1, 1], [0.5, 0.5, 0.8, 0.8]])), ... } >>> augmented = transform(labels) >>> assert augmented["img"].shape == (640, 640, 3) Notes: - The method applies transformations with probability self.p. - Spatial transforms update bounding boxes, while non-spatial transforms only modify the image. - Requires the Albumentations library to be installed. """ if self.transform is None or random.random() > self.p: return labels if self.contains_spatial: cls = labels["cls"] if len(cls): im = labels["img"] labels["instances"].convert_bbox("xywh") labels["instances"].normalize(*im.shape[:2][::-1]) bboxes = labels["instances"].bboxes # TODO: add supports of segments and keypoints new = self.transform(image=im, bboxes=bboxes, class_labels=cls) # transformed if len(new["class_labels"]) > 0: # skip update if no bbox in new im labels["img"] = new["image"] labels["cls"] = np.array(new["class_labels"]) bboxes = np.array(new["bboxes"], dtype=np.float32) labels["instances"].update(bboxes=bboxes) else: labels["img"] = self.transform(image=labels["img"])["image"] # transformed return labels
271770
ss Format: """ A class for formatting image annotations for object detection, instance segmentation, and pose estimation tasks. This class standardizes image and instance annotations to be used by the `collate_fn` in PyTorch DataLoader. Attributes: bbox_format (str): Format for bounding boxes. Options are 'xywh' or 'xyxy'. normalize (bool): Whether to normalize bounding boxes. return_mask (bool): Whether to return instance masks for segmentation. return_keypoint (bool): Whether to return keypoints for pose estimation. return_obb (bool): Whether to return oriented bounding boxes. mask_ratio (int): Downsample ratio for masks. mask_overlap (bool): Whether to overlap masks. batch_idx (bool): Whether to keep batch indexes. bgr (float): The probability to return BGR images. Methods: __call__: Formats labels dictionary with image, classes, bounding boxes, and optionally masks and keypoints. _format_img: Converts image from Numpy array to PyTorch tensor. _format_segments: Converts polygon points to bitmap masks. Examples: >>> formatter = Format(bbox_format="xywh", normalize=True, return_mask=True) >>> formatted_labels = formatter(labels) >>> img = formatted_labels["img"] >>> bboxes = formatted_labels["bboxes"] >>> masks = formatted_labels["masks"] """ def __init__( self, bbox_format="xywh", normalize=True, return_mask=False, return_keypoint=False, return_obb=False, mask_ratio=4, mask_overlap=True, batch_idx=True, bgr=0.0, ): """ Initializes the Format class with given parameters for image and instance annotation formatting. This class standardizes image and instance annotations for object detection, instance segmentation, and pose estimation tasks, preparing them for use in PyTorch DataLoader's `collate_fn`. Args: bbox_format (str): Format for bounding boxes. Options are 'xywh', 'xyxy', etc. normalize (bool): Whether to normalize bounding boxes to [0,1]. return_mask (bool): If True, returns instance masks for segmentation tasks. return_keypoint (bool): If True, returns keypoints for pose estimation tasks. return_obb (bool): If True, returns oriented bounding boxes. mask_ratio (int): Downsample ratio for masks. mask_overlap (bool): If True, allows mask overlap. batch_idx (bool): If True, keeps batch indexes. bgr (float): Probability of returning BGR images instead of RGB. Attributes: bbox_format (str): Format for bounding boxes. normalize (bool): Whether bounding boxes are normalized. return_mask (bool): Whether to return instance masks. return_keypoint (bool): Whether to return keypoints. return_obb (bool): Whether to return oriented bounding boxes. mask_ratio (int): Downsample ratio for masks. mask_overlap (bool): Whether masks can overlap. batch_idx (bool): Whether to keep batch indexes. bgr (float): The probability to return BGR images. Examples: >>> format = Format(bbox_format="xyxy", return_mask=True, return_keypoint=False) >>> print(format.bbox_format) xyxy """ self.bbox_format = bbox_format self.normalize = normalize self.return_mask = return_mask # set False when training detection only self.return_keypoint = return_keypoint self.return_obb = return_obb self.mask_ratio = mask_ratio self.mask_overlap = mask_overlap self.batch_idx = batch_idx # keep the batch indexes self.bgr = bgr def __call__(self, labels): """ Formats image annotations for object detection, instance segmentation, and pose estimation tasks. This method standardizes the image and instance annotations to be used by the `collate_fn` in PyTorch DataLoader. 
It processes the input labels dictionary, converting annotations to the specified format and applying normalization if required. Args: labels (Dict): A dictionary containing image and annotation data with the following keys: - 'img': The input image as a numpy array. - 'cls': Class labels for instances. - 'instances': An Instances object containing bounding boxes, segments, and keypoints. Returns: (Dict): A dictionary with formatted data, including: - 'img': Formatted image tensor. - 'cls': Class labels tensor. - 'bboxes': Bounding boxes tensor in the specified format. - 'masks': Instance masks tensor (if return_mask is True). - 'keypoints': Keypoints tensor (if return_keypoint is True). - 'batch_idx': Batch index tensor (if batch_idx is True). Examples: >>> formatter = Format(bbox_format="xywh", normalize=True, return_mask=True) >>> labels = {"img": np.random.rand(640, 640, 3), "cls": np.array([0, 1]), "instances": Instances(...)} >>> formatted_labels = formatter(labels) >>> print(formatted_labels.keys()) """ img = labels.pop("img") h, w = img.shape[:2] cls = labels.pop("cls") instances = labels.pop("instances") instances.convert_bbox(format=self.bbox_format) instances.denormalize(w, h) nl = len(instances) if self.return_mask: if nl: masks, instances, cls = self._format_segments(instances, cls, w, h) masks = torch.from_numpy(masks) else: masks = torch.zeros( 1 if self.mask_overlap else nl, img.shape[0] // self.mask_ratio, img.shape[1] // self.mask_ratio ) labels["masks"] = masks labels["img"] = self._format_img(img) labels["cls"] = torch.from_numpy(cls) if nl else torch.zeros(nl) labels["bboxes"] = torch.from_numpy(instances.bboxes) if nl else torch.zeros((nl, 4)) if self.return_keypoint: labels["keypoints"] = torch.from_numpy(instances.keypoints) if self.normalize: labels["keypoints"][..., 0] /= w labels["keypoints"][..., 1] /= h if self.return_obb: labels["bboxes"] = ( xyxyxyxy2xywhr(torch.from_numpy(instances.segments)) if len(instances.segments) else torch.zeros((0, 5)) ) # NOTE: need to normalize obb in xywhr format for width-height consistency if self.normalize: labels["bboxes"][:, [0, 2]] /= w labels["bboxes"][:, [1, 3]] /= h # Then we can use collate_fn if self.batch_idx: labels["batch_idx"] = torch.zeros(nl) return labels def _format_img(self, img): """ Formats an image for YOLO from a Numpy array to a PyTorch tensor. This function performs the following operations: 1. Ensures the image has 3 dimensions (adds a channel dimension if needed). 2. Transposes the image from HWC to CHW format. 3. Optionally flips the color channels from RGB to BGR. 4. Converts the image to a contiguous array. 5. Converts the Numpy array to a PyTorch tensor. Args: img (np.ndarray): Input image as a Numpy array with shape (H, W, C) or (H, W). Returns: (torch.Tensor): Formatted image as a PyTorch tensor with shape (C, H, W). Examples: >>> import numpy as np >>> img = np.random.rand(100, 100, 3) >>> formatted_img = self._format_img(img) >>> print(formatted_img.shape) torch.Size([3, 100, 100]) """ if len(img.shape) < 3: img = np.expand_dims(img, -1) img = img.transpose(2, 0, 1) img = np.ascontiguousarray(img[::-1] if random.uniform(0, 1) > self.bgr else img) img = torch.from_numpy(img) return img
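A small sketch of the `_format_img` conversion above: an HWC NumPy image becomes a contiguous CHW `torch.Tensor`, with an optional channel reversal that mimics the `bgr` branch. The array here is a stand-in image.

```python
import numpy as np
import torch

img = np.random.rand(480, 640, 3).astype(np.float32)  # (H, W, C) stand-in image

chw = img.transpose(2, 0, 1)           # (C, H, W)
chw = np.ascontiguousarray(chw[::-1])  # channel reversal, taken when random.uniform(0, 1) > bgr above
tensor = torch.from_numpy(chw)

print(tensor.shape)  # torch.Size([3, 480, 640])
```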
271772
v8_transforms(dataset, imgsz, hyp, stretch=False): """ Applies a series of image transformations for training. This function creates a composition of image augmentation techniques to prepare images for YOLO training. It includes operations such as mosaic, copy-paste, random perspective, mixup, and various color adjustments. Args: dataset (Dataset): The dataset object containing image data and annotations. imgsz (int): The target image size for resizing. hyp (Dict): A dictionary of hyperparameters controlling various aspects of the transformations. stretch (bool): If True, applies stretching to the image. If False, uses LetterBox resizing. Returns: (Compose): A composition of image transformations to be applied to the dataset. Examples: >>> from ultralytics.data.dataset import YOLODataset >>> dataset = YOLODataset(img_path="path/to/images", imgsz=640) >>> hyp = {"mosaic": 1.0, "copy_paste": 0.5, "degrees": 10.0, "translate": 0.2, "scale": 0.9} >>> transforms = v8_transforms(dataset, imgsz=640, hyp=hyp) >>> augmented_data = transforms(dataset[0]) """ mosaic = Mosaic(dataset, imgsz=imgsz, p=hyp.mosaic) affine = RandomPerspective( degrees=hyp.degrees, translate=hyp.translate, scale=hyp.scale, shear=hyp.shear, perspective=hyp.perspective, pre_transform=None if stretch else LetterBox(new_shape=(imgsz, imgsz)), ) pre_transform = Compose([mosaic, affine]) if hyp.copy_paste_mode == "flip": pre_transform.insert(1, CopyPaste(p=hyp.copy_paste, mode=hyp.copy_paste_mode)) else: pre_transform.append( CopyPaste( dataset, pre_transform=Compose([Mosaic(dataset, imgsz=imgsz, p=hyp.mosaic), affine]), p=hyp.copy_paste, mode=hyp.copy_paste_mode, ) ) flip_idx = dataset.data.get("flip_idx", []) # for keypoints augmentation if dataset.use_keypoints: kpt_shape = dataset.data.get("kpt_shape", None) if len(flip_idx) == 0 and hyp.fliplr > 0.0: hyp.fliplr = 0.0 LOGGER.warning("WARNING ⚠️ No 'flip_idx' array defined in data.yaml, setting augmentation 'fliplr=0.0'") elif flip_idx and (len(flip_idx) != kpt_shape[0]): raise ValueError(f"data.yaml flip_idx={flip_idx} length must be equal to kpt_shape[0]={kpt_shape[0]}") return Compose( [ pre_transform, MixUp(dataset, pre_transform=pre_transform, p=hyp.mixup), Albumentations(p=1.0), RandomHSV(hgain=hyp.hsv_h, sgain=hyp.hsv_s, vgain=hyp.hsv_v), RandomFlip(direction="vertical", p=hyp.flipud), RandomFlip(direction="horizontal", p=hyp.fliplr, flip_idx=flip_idx), ] ) # transforms # Classification augmentations ----------------------------------------------------------------------------------------- def classify_transforms( size=224, mean=DEFAULT_MEAN, std=DEFAULT_STD, interpolation="BILINEAR", crop_fraction: float = DEFAULT_CROP_FRACTION, ): """ Creates a composition of image transforms for classification tasks. This function generates a sequence of torchvision transforms suitable for preprocessing images for classification models during evaluation or inference. The transforms include resizing, center cropping, conversion to tensor, and normalization. Args: size (int | tuple): The target size for the transformed image. If an int, it defines the shortest edge. If a tuple, it defines (height, width). mean (tuple): Mean values for each RGB channel used in normalization. std (tuple): Standard deviation values for each RGB channel used in normalization. interpolation (str): Interpolation method of either 'NEAREST', 'BILINEAR' or 'BICUBIC'. crop_fraction (float): Fraction of the image to be cropped. Returns: (torchvision.transforms.Compose): A composition of torchvision transforms. 
Examples: >>> transforms = classify_transforms(size=224) >>> img = Image.open("path/to/image.jpg") >>> transformed_img = transforms(img) """ import torchvision.transforms as T # scope for faster 'import ultralytics' if isinstance(size, (tuple, list)): assert len(size) == 2, f"'size' tuples must be length 2, not length {len(size)}" scale_size = tuple(math.floor(x / crop_fraction) for x in size) else: scale_size = math.floor(size / crop_fraction) scale_size = (scale_size, scale_size) # Aspect ratio is preserved, crops center within image, no borders are added, image is lost if scale_size[0] == scale_size[1]: # Simple case, use torchvision built-in Resize with the shortest edge mode (scalar size arg) tfl = [T.Resize(scale_size[0], interpolation=getattr(T.InterpolationMode, interpolation))] else: # Resize the shortest edge to matching target dim for non-square target tfl = [T.Resize(scale_size)] tfl.extend( [ T.CenterCrop(size), T.ToTensor(), T.Normalize(mean=torch.tensor(mean), std=torch.tensor(std)), ] ) return T.Compose(tfl) # Classification training augmentations -------------------------------------------------------------------------------- def cl
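A hedged usage sketch for `classify_transforms()` as defined above; it assumes `torchvision` and Pillow are installed and uses a hypothetical image path.

```python
from PIL import Image

from ultralytics.data.augment import classify_transforms

tfms = classify_transforms(size=224)                 # Resize -> CenterCrop -> ToTensor -> Normalize
im = Image.open("path/to/image.jpg").convert("RGB")  # stand-in path
x = tfms(im)

print(x.shape)  # expected: torch.Size([3, 224, 224])
```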
271777
Dataset class for loading object detection and/or segmentation labels in YOLO format. Args: data (dict, optional): A dataset YAML dictionary. Defaults to None. task (str): An explicit arg to point current task, Defaults to 'detect'. Returns: (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model. """ def __init__(self, *args, data=None, task="detect", **kwargs): """Initializes the YOLODataset with optional configurations for segments and keypoints.""" self.use_segments = task == "segment" self.use_keypoints = task == "pose" self.use_obb = task == "obb" self.data = data assert not (self.use_segments and self.use_keypoints), "Can not use both segments and keypoints." super().__init__(*args, **kwargs) def cache_labels(self, path=Path("./labels.cache")): """ Cache dataset labels, check images and read shapes. Args: path (Path): Path where to save the cache file. Default is Path('./labels.cache'). Returns: (dict): labels. """ x = {"labels": []} nm, nf, ne, nc, msgs = 0, 0, 0, 0, [] # number missing, found, empty, corrupt, messages desc = f"{self.prefix}Scanning {path.parent / path.stem}..." total = len(self.im_files) nkpt, ndim = self.data.get("kpt_shape", (0, 0)) if self.use_keypoints and (nkpt <= 0 or ndim not in {2, 3}): raise ValueError( "'kpt_shape' in data.yaml missing or incorrect. Should be a list with [number of " "keypoints, number of dims (2 for x,y or 3 for x,y,visible)], i.e. 'kpt_shape: [17, 3]'" ) with ThreadPool(NUM_THREADS) as pool: results = pool.imap( func=verify_image_label, iterable=zip( self.im_files, self.label_files, repeat(self.prefix), repeat(self.use_keypoints), repeat(len(self.data["names"])), repeat(nkpt), repeat(ndim), ), ) pbar = TQDM(results, desc=desc, total=total) for im_file, lb, shape, segments, keypoint, nm_f, nf_f, ne_f, nc_f, msg in pbar: nm += nm_f nf += nf_f ne += ne_f nc += nc_f if im_file: x["labels"].append( { "im_file": im_file, "shape": shape, "cls": lb[:, 0:1], # n, 1 "bboxes": lb[:, 1:], # n, 4 "segments": segments, "keypoints": keypoint, "normalized": True, "bbox_format": "xywh", } ) if msg: msgs.append(msg) pbar.desc = f"{desc} {nf} images, {nm + ne} backgrounds, {nc} corrupt" pbar.close() if msgs: LOGGER.info("\n".join(msgs)) if nf == 0: LOGGER.warning(f"{self.prefix}WARNING ⚠️ No labels found in {path}. {HELP_URL}") x["hash"] = get_hash(self.label_files + self.im_files) x["results"] = nf, nm, ne, nc, len(self.im_files) x["msgs"] = msgs # warnings save_dataset_cache_file(self.prefix, path, x, DATASET_CACHE_VERSION) return x def get_labels(self): """Returns dictionary of labels for YOLO training.""" self.label_files = img2label_paths(self.im_files) cache_path = Path(self.label_files[0]).parent.with_suffix(".cache") try: cache, exists = load_dataset_cache_file(cache_path), True # attempt to load a *.cache file assert cache["version"] == DATASET_CACHE_VERSION # matches current version assert cache["hash"] == get_hash(self.label_files + self.im_files) # identical hash except (FileNotFoundError, AssertionError, AttributeError): cache, exists = self.cache_labels(cache_path), False # run cache ops # Display cache nf, nm, ne, nc, n = cache.pop("results") # found, missing, empty, corrupt, total if exists and LOCAL_RANK in {-1, 0}: d = f"Scanning {cache_path}... 
{nf} images, {nm + ne} backgrounds, {nc} corrupt" TQDM(None, desc=self.prefix + d, total=n, initial=n) # display results if cache["msgs"]: LOGGER.info("\n".join(cache["msgs"])) # display warnings # Read cache [cache.pop(k) for k in ("hash", "version", "msgs")] # remove items labels = cache["labels"] if not labels: LOGGER.warning(f"WARNING ⚠️ No images found in {cache_path}, training may not work correctly. {HELP_URL}") self.im_files = [lb["im_file"] for lb in labels] # update im_files # Check if the dataset is all boxes or all segments lengths = ((len(lb["cls"]), len(lb["bboxes"]), len(lb["segments"])) for lb in labels) len_cls, len_boxes, len_segments = (sum(x) for x in zip(*lengths)) if len_segments and len_boxes != len_segments: LOGGER.warning( f"WARNING ⚠️ Box and segment counts should be equal, but got len(segments) = {len_segments}, " f"len(boxes) = {len_boxes}. To resolve this only boxes will be used and all segments will be removed. " "To avoid this please supply either a detect or segment dataset, not a detect-segment mixed dataset." ) for lb in labels: lb["segments"] = [] if len_cls == 0: LOGGER.warning(f"WARNING ⚠️ No labels found in {cache_path}, training may not work correctly. {HELP_URL}") return labels def build_transforms(self, hyp=None): """Builds and appends transforms to the list.""" if self.augment: hyp.mosaic = hyp.mosaic if self.augment and not self.rect else 0.0 hyp.mixup = hyp.mixup if self.augment and not self.rect else 0.0 transforms = v8_transforms(self, self.imgsz, hyp) else: transforms = Compose([LetterBox(new_shape=(self.imgsz, self.imgsz), scaleup=False)]) transforms.append( Format( bbox_format="xywh", normalize=True, return_mask=self.use_segments, return_keypoint=self.use_keypoints, return_obb=self.use_obb, batch_idx=True, mask_ratio=hyp.mask_ratio, mask_overlap=hyp.overlap_mask, bgr=hyp.bgr if self.augment else 0.0, # only affect training. ) ) return transforms def close_mosaic(self, hyp): """Sets mosaic, copy_paste and mixup options to 0.0 and builds transformations.""" hyp.mosaic = 0.0 # set mosaic ratio=0.0 hyp.copy_paste = 0.0 # keep the same behavior as previous v8 close-mosaic hyp.mixup = 0.0 # keep the same behavior as previous v8 close-mosaic self.transforms = self.build_transforms(hyp) def update_labels_info(self, label): """ Custom your label format here. Note: cls is not with bboxes now, classification and semantic segmentation need an independent cls label Can also support classification and semantic segmentation by adding or removing dict keys there. """ bboxes = label.pop("bboxes") segments = label.pop("segments", []) keypoints = label.pop("keypoints", None) bbox_format = label.pop("bbox_format") normalized = label.pop("normalized") # NOTE: do NOT resample oriented boxes segment_resamples = 100 if self.use_obb else 1000 if len(segments) > 0: # list[np.array(1000, 2)] * num_samples # (N, 1000, 2) segments = np.stack(resample_segments(segments, n=segment_resamples), axis=0) else: segments = np.zeros((0, segment_resamples, 2), dtype=np.float32) label["instances"] = Instances(bboxes, segments, keypoints, bbox_format=bbox_format, normalized=normalized) return label @staticmethod
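A hedged construction sketch for the `YOLODataset` shown above. `coco8.yaml` is a small stand-in dataset, and the constructor keywords follow `BaseDataset`, so they may vary slightly between releases.

```python
from ultralytics.data.dataset import YOLODataset
from ultralytics.data.utils import check_det_dataset

data = check_det_dataset("coco8.yaml")  # resolves 'train'/'val' paths, 'names' and 'nc'
dataset = YOLODataset(img_path=data["train"], data=data, task="detect", imgsz=640, augment=False)

print(len(dataset))                  # number of images
print(dataset.labels[0]["im_file"])  # labels populated by get_labels()/cache_labels()
```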
271781
image_label(args): """Verify one image-label pair.""" im_file, lb_file, prefix, keypoint, num_cls, nkpt, ndim = args # Number (missing, found, empty, corrupt), message, segments, keypoints nm, nf, ne, nc, msg, segments, keypoints = 0, 0, 0, 0, "", [], None try: # Verify images im = Image.open(im_file) im.verify() # PIL verify shape = exif_size(im) # image size shape = (shape[1], shape[0]) # hw assert (shape[0] > 9) & (shape[1] > 9), f"image size {shape} <10 pixels" assert im.format.lower() in IMG_FORMATS, f"invalid image format {im.format}. {FORMATS_HELP_MSG}" if im.format.lower() in {"jpg", "jpeg"}: with open(im_file, "rb") as f: f.seek(-2, 2) if f.read() != b"\xff\xd9": # corrupt JPEG ImageOps.exif_transpose(Image.open(im_file)).save(im_file, "JPEG", subsampling=0, quality=100) msg = f"{prefix}WARNING ⚠️ {im_file}: corrupt JPEG restored and saved" # Verify labels if os.path.isfile(lb_file): nf = 1 # label found with open(lb_file) as f: lb = [x.split() for x in f.read().strip().splitlines() if len(x)] if any(len(x) > 6 for x in lb) and (not keypoint): # is segment classes = np.array([x[0] for x in lb], dtype=np.float32) segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in lb] # (cls, xy1...) lb = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) lb = np.array(lb, dtype=np.float32) nl = len(lb) if nl: if keypoint: assert lb.shape[1] == (5 + nkpt * ndim), f"labels require {(5 + nkpt * ndim)} columns each" points = lb[:, 5:].reshape(-1, ndim)[:, :2] else: assert lb.shape[1] == 5, f"labels require 5 columns, {lb.shape[1]} columns detected" points = lb[:, 1:] assert points.max() <= 1, f"non-normalized or out of bounds coordinates {points[points > 1]}" assert lb.min() >= 0, f"negative label values {lb[lb < 0]}" # All labels max_cls = lb[:, 0].max() # max label count assert max_cls <= num_cls, ( f"Label class {int(max_cls)} exceeds dataset class count {num_cls}. " f"Possible class labels are 0-{num_cls - 1}" ) _, i = np.unique(lb, axis=0, return_index=True) if len(i) < nl: # duplicate row check lb = lb[i] # remove duplicates if segments: segments = [segments[x] for x in i] msg = f"{prefix}WARNING ⚠️ {im_file}: {nl - len(i)} duplicate labels removed" else: ne = 1 # label empty lb = np.zeros((0, (5 + nkpt * ndim) if keypoint else 5), dtype=np.float32) else: nm = 1 # label missing lb = np.zeros((0, (5 + nkpt * ndim) if keypoints else 5), dtype=np.float32) if keypoint: keypoints = lb[:, 5:].reshape(-1, nkpt, ndim) if ndim == 2: kpt_mask = np.where((keypoints[..., 0] < 0) | (keypoints[..., 1] < 0), 0.0, 1.0).astype(np.float32) keypoints = np.concatenate([keypoints, kpt_mask[..., None]], axis=-1) # (nl, nkpt, 3) lb = lb[:, :5] return im_file, lb, shape, segments, keypoints, nm, nf, ne, nc, msg except Exception as e: nc = 1 msg = f"{prefix}WARNING ⚠️ {im_file}: ignoring corrupt image/label: {e}" return [None, None, None, None, None, nm, nf, ne, nc, msg] def polygon2mask(imgsz, polygons, color=1, downsample_ratio=1): """ Convert a list of polygons to a binary mask of the specified image size. Args: imgsz (tuple): The size of the image as (height, width). polygons (list[np.ndarray]): A list of polygons. Each polygon is an array with shape [N, M], where N is the number of polygons, and M is the number of points such that M % 2 = 0. color (int, optional): The color value to fill in the polygons on the mask. Defaults to 1. downsample_ratio (int, optional): Factor by which to downsample the mask. Defaults to 1. 
Returns: (np.ndarray): A binary mask of the specified image size with the polygons filled in. """ mask = np.zeros(imgsz, dtype=np.uint8) polygons = np.asarray(polygons, dtype=np.int32) polygons = polygons.reshape((polygons.shape[0], -1, 2)) cv2.fillPoly(mask, polygons, color=color) nh, nw = (imgsz[0] // downsample_ratio, imgsz[1] // downsample_ratio) # Note: fillPoly first then resize is trying to keep the same loss calculation method when mask-ratio=1 return cv2.resize(mask, (nw, nh)) def polygons2masks(imgsz, polygons, color, downsample_ratio=1): """ Convert a list of polygons to a set of binary masks of the specified image size. Args: imgsz (tuple): The size of the image as (height, width). polygons (list[np.ndarray]): A list of polygons. Each polygon is an array with shape [N, M], where N is the number of polygons, and M is the number of points such that M % 2 = 0. color (int): The color value to fill in the polygons on the masks. downsample_ratio (int, optional): Factor by which to downsample each mask. Defaults to 1. Returns: (np.ndarray): A set of binary masks of the specified image size with the polygons filled in. """ return np.array([polygon2mask(imgsz, [x.reshape(-1)], color, downsample_ratio) for x in polygons]) def polygons2masks_overlap(imgsz, segments, downsample_ratio=1): """Return a (640, 640) overlap mask.""" masks = np.zeros( (imgsz[0] // downsample_ratio, imgsz[1] // downsample_ratio), dtype=np.int32 if len(segments) > 255 else np.uint8, ) areas = [] ms = [] for si in range(len(segments)): mask = polygon2mask(imgsz, [segments[si].reshape(-1)], downsample_ratio=downsample_ratio, color=1) ms.append(mask.astype(masks.dtype)) areas.append(mask.sum()) areas = np.asarray(areas) index = np.argsort(-areas) ms = np.array(ms)[index] for i in range(len(segments)): mask = ms[i] * (i + 1) masks = masks + mask masks = np.clip(masks, a_min=0, a_max=i + 1) return masks, index def find_dataset_yam
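A toy illustration of `polygon2mask()` defined above: one square polygon is rasterized into a 160x160 mask and downsampled by 4, mirroring the default `mask_ratio=4` used by `Format`. The polygon values are stand-ins.

```python
import numpy as np

from ultralytics.data.utils import polygon2mask

# One square polygon as a flat (x1, y1, ..., x4, y4) array, as stored in label segments.
square = np.array([10, 10, 150, 10, 150, 150, 10, 150], dtype=np.float32)

mask = polygon2mask((160, 160), [square], color=1, downsample_ratio=4)
print(mask.shape)       # (40, 40)
print(int(mask.max()))  # 1 inside the filled square
```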
271782
ath: Path) -> Path: """ Find and return the YAML file associated with a Detect, Segment or Pose dataset. This function searches for a YAML file at the root level of the provided directory first, and if not found, it performs a recursive search. It prefers YAML files that have the same stem as the provided path. An AssertionError is raised if no YAML file is found or if multiple YAML files are found. Args: path (Path): The directory path to search for the YAML file. Returns: (Path): The path of the found YAML file. """ files = list(path.glob("*.yaml")) or list(path.rglob("*.yaml")) # try root level first and then recursive assert files, f"No YAML file found in '{path.resolve()}'" if len(files) > 1: files = [f for f in files if f.stem == path.stem] # prefer *.yaml files that match assert len(files) == 1, f"Expected 1 YAML file in '{path.resolve()}', but found {len(files)}.\n{files}" return files[0] def check_det_dataset(dataset, autodownload=True): """ Download, verify, and/or unzip a dataset if not found locally. This function checks the availability of a specified dataset, and if not found, it has the option to download and unzip the dataset. It then reads and parses the accompanying YAML data, ensuring key requirements are met and also resolves paths related to the dataset. Args: dataset (str): Path to the dataset or dataset descriptor (like a YAML file). autodownload (bool, optional): Whether to automatically download the dataset if not found. Defaults to True. Returns: (dict): Parsed dataset information and paths. """ file = check_file(dataset) # Download (optional) extract_dir = "" if zipfile.is_zipfile(file) or is_tarfile(file): new_dir = safe_download(file, dir=DATASETS_DIR, unzip=True, delete=False) file = find_dataset_yaml(DATASETS_DIR / new_dir) extract_dir, autodownload = file.parent, False # Read YAML data = yaml_load(file, append_filename=True) # dictionary # Checks for k in "train", "val": if k not in data: if k != "val" or "validation" not in data: raise SyntaxError( emojis(f"{dataset} '{k}:' key missing ❌.\n'train' and 'val' are required in all data YAMLs.") ) LOGGER.info("WARNING ⚠️ renaming data YAML 'validation' key to 'val' to match YOLO format.") data["val"] = data.pop("validation") # replace 'validation' key with 'val' key if "names" not in data and "nc" not in data: raise SyntaxError(emojis(f"{dataset} key missing ❌.\n either 'names' or 'nc' are required in all data YAMLs.")) if "names" in data and "nc" in data and len(data["names"]) != data["nc"]: raise SyntaxError(emojis(f"{dataset} 'names' length {len(data['names'])} and 'nc: {data['nc']}' must match.")) if "names" not in data: data["names"] = [f"class_{i}" for i in range(data["nc"])] else: data["nc"] = len(data["names"]) data["names"] = check_class_names(data["names"]) # Resolve paths path = Path(extract_dir or data.get("path") or Path(data.get("yaml_file", "")).parent) # dataset root if not path.is_absolute(): path = (DATASETS_DIR / path).resolve() # Set paths data["path"] = path # download scripts for k in "train", "val", "test", "minival": if data.get(k): # prepend path if isinstance(data[k], str): x = (path / data[k]).resolve() if not x.exists() and data[k].startswith("../"): x = (path / data[k][3:]).resolve() data[k] = str(x) else: data[k] = [str((path / x).resolve()) for x in data[k]] # Parse YAML val, s = (data.get(x) for x in ("val", "download")) if val: val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path if not all(x.exists() for x in val): name = clean_url(dataset) # 
dataset name with URL auth stripped m = f"\nDataset '{name}' images not found ⚠️, missing path '{[x for x in val if not x.exists()][0]}'" if s and autodownload: LOGGER.warning(m) else: m += f"\nNote dataset download directory is '{DATASETS_DIR}'. You can update this in '{SETTINGS_FILE}'" raise FileNotFoundError(m) t = time.time() r = None # success if s.startswith("http") and s.endswith(".zip"): # URL safe_download(url=s, dir=DATASETS_DIR, delete=True) elif s.startswith("bash "): # bash script LOGGER.info(f"Running {s} ...") r = os.system(s) else: # python script exec(s, {"yaml": data}) dt = f"({round(time.time() - t, 1)}s)" s = f"success ✅ {dt}, saved to {colorstr('bold', DATASETS_DIR)}" if r in {0, None} else f"failure {dt} ❌" LOGGER.info(f"Dataset download {s}\n") check_font("Arial.ttf" if is_ascii(data["names"]) else "Arial.Unicode.ttf") # download fonts return data # dictionary def check_cls_dataset(dataset, split
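A hedged usage sketch for `check_det_dataset()`; `coco8.yaml` ships with the package and is fetched on first use when `autodownload=True`, so the printed paths depend on the local `DATASETS_DIR`.

```python
from ultralytics.data.utils import check_det_dataset

data = check_det_dataset("coco8.yaml")
print(data["nc"])                    # number of classes
print(data["names"][0])              # class name for id 0
print(data["path"], data["train"])   # resolved dataset root and train images path
```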
271796
# Ultralytics YOLO 🚀, AGPL-3.0 license """ Run prediction on images, videos, directories, globs, YouTube, webcam, streams, etc. Usage - sources: $ yolo mode=predict model=yolov8n.pt source=0 # webcam img.jpg # image vid.mp4 # video screen # screenshot path/ # directory list.txt # list of images list.streams # list of streams 'path/*.jpg' # glob 'https://youtu.be/LNwODJXcvt4' # YouTube 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP, TCP stream Usage - formats: $ yolo mode=predict model=yolov8n.pt # PyTorch yolov8n.torchscript # TorchScript yolov8n.onnx # ONNX Runtime or OpenCV DNN with dnn=True yolov8n_openvino_model # OpenVINO yolov8n.engine # TensorRT yolov8n.mlpackage # CoreML (macOS-only) yolov8n_saved_model # TensorFlow SavedModel yolov8n.pb # TensorFlow GraphDef yolov8n.tflite # TensorFlow Lite yolov8n_edgetpu.tflite # TensorFlow Edge TPU yolov8n_paddle_model # PaddlePaddle yolov8n_ncnn_model # NCNN """ import platform import re import threading from pathlib import Path import cv2 import numpy as np import torch from ultralytics.cfg import get_cfg, get_save_dir from ultralytics.data import load_inference_source from ultralytics.data.augment import LetterBox, classify_transforms from ultralytics.nn.autobackend import AutoBackend from ultralytics.utils import DEFAULT_CFG, LOGGER, MACOS, WINDOWS, callbacks, colorstr, ops from ultralytics.utils.checks import check_imgsz, check_imshow from ultralytics.utils.files import increment_path from ultralytics.utils.torch_utils import select_device, smart_inference_mode STREAM_WARNING = """ WARNING ⚠️ inference results will accumulate in RAM unless `stream=True` is passed, causing potential out-of-memory errors for large sources or long-running streams and videos. See https://docs.ultralytics.com/modes/predict/ for help. Example: results = model(source=..., stream=True) # generator of Results objects for r in results: boxes = r.boxes # Boxes object for bbox outputs masks = r.masks # Masks object for segment masks outputs probs = r.probs # Class probabilities for classification outputs """ class BasePredictor: """ BasePredictor. A base class for creating predictors. Attributes: args (SimpleNamespace): Configuration for the predictor. save_dir (Path): Directory to save results. done_warmup (bool): Whether the predictor has finished setup. model (nn.Module): Model used for prediction. data (dict): Data configuration. device (torch.device): Device used for prediction. dataset (Dataset): Dataset used for prediction. vid_writer (dict): Dictionary of {save_path: video_writer, ...} writer for saving video output. """ def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): """ Initializes the BasePredictor class. Args: cfg (str, optional): Path to a configuration file. Defaults to DEFAULT_CFG. overrides (dict, optional): Configuration overrides. Defaults to None. 
""" self.args = get_cfg(cfg, overrides) self.save_dir = get_save_dir(self.args) if self.args.conf is None: self.args.conf = 0.25 # default conf=0.25 self.done_warmup = False if self.args.show: self.args.show = check_imshow(warn=True) # Usable if setup is done self.model = None self.data = self.args.data # data_dict self.imgsz = None self.device = None self.dataset = None self.vid_writer = {} # dict of {save_path: video_writer, ...} self.plotted_img = None self.source_type = None self.seen = 0 self.windows = [] self.batch = None self.results = None self.transforms = None self.callbacks = _callbacks or callbacks.get_default_callbacks() self.txt_path = None self._lock = threading.Lock() # for automatic thread-safe inference callbacks.add_integration_callbacks(self) def preprocess(self, im): """ Prepares input image before inference. Args: im (torch.Tensor | List(np.ndarray)): BCHW for tensor, [(HWC) x B] for list. """ not_tensor = not isinstance(im, torch.Tensor) if not_tensor: im = np.stack(self.pre_transform(im)) im = im[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW, (n, 3, h, w) im = np.ascontiguousarray(im) # contiguous im = torch.from_numpy(im) im = im.to(self.device) im = im.half() if self.model.fp16 else im.float() # uint8 to fp16/32 if not_tensor: im /= 255 # 0 - 255 to 0.0 - 1.0 return im def inference(self, im, *args, **kwargs): """Runs inference on a given image using the specified model and arguments.""" visualize = ( increment_path(self.save_dir / Path(self.batch[0][0]).stem, mkdir=True) if self.args.visualize and (not self.source_type.tensor) else False ) return self.model(im, augment=self.args.augment, visualize=visualize, embed=self.args.embed, *args, **kwargs) def pre_transform(self, im): """ Pre-transform input image before inference. Args: im (List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list. Returns: (list): A list of transformed images. """ same_shapes = len({x.shape for x in im}) == 1 letterbox = LetterBox(self.imgsz, auto=same_shapes and self.model.pt, stride=self.model.stride) return [letterbox(image=x) for x in im] def postprocess(self, preds, img, orig_imgs): """Post-processes predictions for an image and returns them.""" return preds def __call__(self, source=None, model=None, stream=False, *args, **kwargs): """Performs inference on an image or stream.""" self.stream = stream if stream: return self.stream_inference(source, model, *args, **kwargs) else: return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one def predict_cli(self, source=None, model=None): """ Method used for Command Line Interface (CLI) prediction. This function is designed to run predictions using the CLI. It sets up the source and model, then processes the inputs in a streaming manner. This method ensures that no outputs accumulate in memory by consuming the generator without storing results. Note: Do not modify this function or remove the generator. The generator ensures that no outputs are accumulated in memory, which is critical for preventing memory issues during long-running predictions. 
""" gen = self.stream_inference(source, model) for _ in gen: # sourcery skip: remove-empty-nested-block, noqa pass def setup_source(self, source): """Sets up source and inference mode.""" self.imgsz = check_imgsz(self.args.imgsz, stride=self.model.stride, min_dim=2) # check image size self.transforms = ( getattr( self.model.model, "transforms", classify_transforms(self.imgsz[0], crop_fraction=self.args.crop_fraction), ) if self.args.task == "classify" else None ) self.dataset = load_inference_source( source=source, batch=self.args.batch, vid_stride=self.args.vid_stride, buffer=self.args.stream_buffer, ) self.source_type = self.dataset.source_type if not getattr(self, "stream", True) and ( self.source_type.stream or self.source_type.screenshot or len(self.dataset) > 1000 # many images or any(getattr(self.dataset, "video_flag", [False])) ): # videos LOGGER.warning(STREAM_WARNING) self.vid_writer = {} @
271797
inference_mode() def stream_inference(self, source=None, model=None, *args, **kwargs): """Streams real-time inference on camera feed and saves results to file.""" if self.args.verbose: LOGGER.info("") # Setup model if not self.model: self.setup_model(model) with self._lock: # for thread-safe inference # Setup source every time predict is called self.setup_source(source if source is not None else self.args.source) # Check if save_dir/ label file exists if self.args.save or self.args.save_txt: (self.save_dir / "labels" if self.args.save_txt else self.save_dir).mkdir(parents=True, exist_ok=True) # Warmup model if not self.done_warmup: self.model.warmup(imgsz=(1 if self.model.pt or self.model.triton else self.dataset.bs, 3, *self.imgsz)) self.done_warmup = True self.seen, self.windows, self.batch = 0, [], None profilers = ( ops.Profile(device=self.device), ops.Profile(device=self.device), ops.Profile(device=self.device), ) self.run_callbacks("on_predict_start") for self.batch in self.dataset: self.run_callbacks("on_predict_batch_start") paths, im0s, s = self.batch # Preprocess with profilers[0]: im = self.preprocess(im0s) # Inference with profilers[1]: preds = self.inference(im, *args, **kwargs) if self.args.embed: yield from [preds] if isinstance(preds, torch.Tensor) else preds # yield embedding tensors continue # Postprocess with profilers[2]: self.results = self.postprocess(preds, im, im0s) self.run_callbacks("on_predict_postprocess_end") # Visualize, save, write results n = len(im0s) for i in range(n): self.seen += 1 self.results[i].speed = { "preprocess": profilers[0].dt * 1e3 / n, "inference": profilers[1].dt * 1e3 / n, "postprocess": profilers[2].dt * 1e3 / n, } if self.args.verbose or self.args.save or self.args.save_txt or self.args.show: s[i] += self.write_results(i, Path(paths[i]), im, s) # Print batch results if self.args.verbose: LOGGER.info("\n".join(s)) self.run_callbacks("on_predict_batch_end") yield from self.results # Release assets for v in self.vid_writer.values(): if isinstance(v, cv2.VideoWriter): v.release() # Print final results if self.args.verbose and self.seen: t = tuple(x.t / self.seen * 1e3 for x in profilers) # speeds per image LOGGER.info( f"Speed: %.1fms preprocess, %.1fms inference, %.1fms postprocess per image at shape " f"{(min(self.args.batch, self.seen), 3, *im.shape[2:])}" % t ) if self.args.save or self.args.save_txt or self.args.save_crop: nl = len(list(self.save_dir.glob("labels/*.txt"))) # number of labels s = f"\n{nl} label{'s' * (nl > 1)} saved to {self.save_dir / 'labels'}" if self.args.save_txt else "" LOGGER.info(f"Results saved to {colorstr('bold', self.save_dir)}{s}") self.run_callbacks("on_predict_end") def setup_model(self, model, verbose=True): """Initialize YOLO model with given parameters and set it to evaluation mode.""" self.model = AutoBackend( weights=model or self.args.model, device=select_device(self.args.device, verbose=verbose), dnn=self.args.dnn, data=self.args.data, fp16=self.args.half, batch=self.args.batch, fuse=True, verbose=verbose, ) self.device = self.model.device # update device self.args.half = self.model.fp16 # update half self.model.eval() def write_results(self, i, p, im, s): """Write inference results to a file or directory.""" string = "" # print string if len(im.shape) == 3: im = im[None] # expand for batch dim if self.source_type.stream or self.source_type.from_img or self.source_type.tensor: # batch_size >= 1 string += f"{i}: " frame = self.dataset.count else: match = re.search(r"frame (\d+)/", s[i]) frame = 
int(match[1]) if match else None # 0 if frame undetermined self.txt_path = self.save_dir / "labels" / (p.stem + ("" if self.dataset.mode == "image" else f"_{frame}")) string += "{:g}x{:g} ".format(*im.shape[2:]) result = self.results[i] result.save_dir = self.save_dir.__str__() # used in other locations string += f"{result.verbose()}{result.speed['inference']:.1f}ms" # Add predictions to image if self.args.save or self.args.show: self.plotted_img = result.plot( line_width=self.args.line_width, boxes=self.args.show_boxes, conf=self.args.show_conf, labels=self.args.show_labels, im_gpu=None if self.args.retina_masks else im[i], ) # Save results if self.args.save_txt: result.save_txt(f"{self.txt_path}.txt", save_conf=self.args.save_conf) if self.args.save_crop: result.save_crop(save_dir=self.save_dir / "crops", file_name=self.txt_path.stem) if self.args.show: self.show(str(p)) if self.args.save: self.save_predicted_images(str(self.save_dir / p.name), frame) return string def save_predicted_images(self, save_path="", frame=0): """Save video predictions as mp4 at specified path.""" im = self.plotted_img # Save videos and streams if self.dataset.mode in {"stream", "video"}: fps = self.dataset.fps if self.dataset.mode == "video" else 30 frames_path = f'{save_path.split(".", 1)[0]}_frames/' if save_path not in self.vid_writer: # new video if self.args.save_frames: Path(frames_path).mkdir(parents=True, exist_ok=True) suffix, fourcc = (".mp4", "avc1") if MACOS else (".avi", "WMV2") if WINDOWS else (".avi", "MJPG") self.vid_writer[save_path] = cv2.VideoWriter( filename=str(Path(save_path).with_suffix(suffix)), fourcc=cv2.VideoWriter_fourcc(*fourcc), fps=fps, # integer required, floats produce error in MP4 codec frameSize=(im.shape[1], im.shape[0]), # (width, height) ) # Save video self.vid_writer[save_path].write(im) if self.args.save_frames: cv2.imwrite(f"{frames_path}{frame}.jpg", im) # Save images else: cv2.imwrite(str(Path(save_path).with_suffix(".jpg")), im) # save to JPG for best support def show(self, p=""): """Display an image in a window using the OpenCV imshow function.""" im = self.plotted_img if platform.system() == "Linux" and p not in self.windows: self.windows.append(p) cv2.namedWindow(p, cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) cv2.resizeWindow(p, im.shape[1], im.shape[0]) # (width, height) cv2.imshow(p, im) cv2.waitKey(300 if self.dataset.mode == "image" else 1) # 1 millisecond def run_callbacks(self, event: str): """Runs all registered callbacks for a specific event.""" for callback in self.callbacks.get(event, []): callback(self) def add_callback(self, event: str, func): """Add callback.""" self.callbacks[event].append(func)
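A hedged end-to-end sketch: `BasePredictor` is normally driven through the `YOLO` model API rather than instantiated directly, and passing `stream=True` keeps `stream_inference()` as a generator so results do not accumulate in RAM (see `STREAM_WARNING` above). The source path is a stand-in.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Generator of Results objects; frames are processed batch by batch.
for result in model("path/to/video/file.mp4", stream=True):
    print(result.speed)  # {'preprocess': ms, 'inference': ms, 'postprocess': ms}
```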
271799
ss Results(SimpleClass): """ A class for storing and manipulating inference results. This class encapsulates the functionality for handling detection, segmentation, pose estimation, and classification results from YOLO models. Attributes: orig_img (numpy.ndarray): Original image as a numpy array. orig_shape (Tuple[int, int]): Original image shape in (height, width) format. boxes (Boxes | None): Object containing detection bounding boxes. masks (Masks | None): Object containing detection masks. probs (Probs | None): Object containing class probabilities for classification tasks. keypoints (Keypoints | None): Object containing detected keypoints for each object. obb (OBB | None): Object containing oriented bounding boxes. speed (Dict[str, float | None]): Dictionary of preprocess, inference, and postprocess speeds. names (Dict[int, str]): Dictionary mapping class IDs to class names. path (str): Path to the image file. _keys (Tuple[str, ...]): Tuple of attribute names for internal use. Methods: update: Updates object attributes with new detection results. cpu: Returns a copy of the Results object with all tensors on CPU memory. numpy: Returns a copy of the Results object with all tensors as numpy arrays. cuda: Returns a copy of the Results object with all tensors on GPU memory. to: Returns a copy of the Results object with tensors on a specified device and dtype. new: Returns a new Results object with the same image, path, and names. plot: Plots detection results on an input image, returning an annotated image. show: Shows annotated results on screen. save: Saves annotated results to file. verbose: Returns a log string for each task, detailing detections and classifications. save_txt: Saves detection results to a text file. save_crop: Saves cropped detection images. tojson: Converts detection results to JSON format. Examples: >>> results = model("path/to/image.jpg") >>> for result in results: ... print(result.boxes) # Print detection boxes ... result.show() # Display the annotated image ... result.save(filename="result.jpg") # Save annotated image """ def __init__( self, orig_img, path, names, boxes=None, masks=None, probs=None, keypoints=None, obb=None, speed=None ) -> None: """ Initialize the Results class for storing and manipulating inference results. Args: orig_img (numpy.ndarray): The original image as a numpy array. path (str): The path to the image file. names (Dict): A dictionary of class names. boxes (torch.Tensor | None): A 2D tensor of bounding box coordinates for each detection. masks (torch.Tensor | None): A 3D tensor of detection masks, where each mask is a binary image. probs (torch.Tensor | None): A 1D tensor of probabilities of each class for classification task. keypoints (torch.Tensor | None): A 2D tensor of keypoint coordinates for each detection. obb (torch.Tensor | None): A 2D tensor of oriented bounding box coordinates for each detection. speed (Dict | None): A dictionary containing preprocess, inference, and postprocess speeds (ms/image). 
Examples: >>> results = model("path/to/image.jpg") >>> result = results[0] # Get the first result >>> boxes = result.boxes # Get the boxes for the first result >>> masks = result.masks # Get the masks for the first result Notes: For the default pose model, keypoint indices for human body pose estimation are: 0: Nose, 1: Left Eye, 2: Right Eye, 3: Left Ear, 4: Right Ear 5: Left Shoulder, 6: Right Shoulder, 7: Left Elbow, 8: Right Elbow 9: Left Wrist, 10: Right Wrist, 11: Left Hip, 12: Right Hip 13: Left Knee, 14: Right Knee, 15: Left Ankle, 16: Right Ankle """ self.orig_img = orig_img self.orig_shape = orig_img.shape[:2] self.boxes = Boxes(boxes, self.orig_shape) if boxes is not None else None # native size boxes self.masks = Masks(masks, self.orig_shape) if masks is not None else None # native size or imgsz masks self.probs = Probs(probs) if probs is not None else None self.keypoints = Keypoints(keypoints, self.orig_shape) if keypoints is not None else None self.obb = OBB(obb, self.orig_shape) if obb is not None else None self.speed = speed if speed is not None else {"preprocess": None, "inference": None, "postprocess": None} self.names = names self.path = path self.save_dir = None self._keys = "boxes", "masks", "probs", "keypoints", "obb" def __getitem__(self, idx): """ Return a Results object for a specific index of inference results. Args: idx (int | slice): Index or slice to retrieve from the Results object. Returns: (Results): A new Results object containing the specified subset of inference results. Examples: >>> results = model("path/to/image.jpg") # Perform inference >>> single_result = results[0] # Get the first result >>> subset_results = results[1:4] # Get a slice of results """ return self._apply("__getitem__", idx) def __len__(self): """ Return the number of detections in the Results object. Returns: (int): The number of detections, determined by the length of the first non-empty attribute (boxes, masks, probs, keypoints, or obb). Examples: >>> results = Results(orig_img, path, names, boxes=torch.rand(5, 4)) >>> len(results) 5 """ for k in self._keys: v = getattr(self, k) if v is not None: return len(v) def update(self, boxes=None, masks=None, probs=None, obb=None): """ Updates the Results object with new detection data. This method allows updating the boxes, masks, probabilities, and oriented bounding boxes (OBB) of the Results object. It ensures that boxes are clipped to the original image shape. Args: boxes (torch.Tensor | None): A tensor of shape (N, 6) containing bounding box coordinates and confidence scores. The format is (x1, y1, x2, y2, conf, class). masks (torch.Tensor | None): A tensor of shape (N, H, W) containing segmentation masks. probs (torch.Tensor | None): A tensor of shape (num_classes,) containing class probabilities. obb (torch.Tensor | None): A tensor of shape (N, 5) containing oriented bounding box coordinates. Examples: >>> results = model("image.jpg") >>> new_boxes = torch.tensor([[100, 100, 200, 200, 0.9, 0]]) >>> results[0].update(boxes=new_boxes) """ if boxes is not None: self.boxes = Boxes(ops.clip_boxes(boxes, self.orig_shape), self.orig_shape) if masks is not None: self.masks = Masks(masks, self.orig_shape) if probs is not None: self.probs = probs if obb is not None: self.obb = OBB(obb, self.orig_shape) def _apply(self, fn, *args, **kwargs): """ Applies a function to all non-empty attributes and returns a new Results object with modified attributes. This method is internally called by methods like .to(), .cuda(), .cpu(), etc. 
Args: fn (str): The name of the function to apply. *args (Any): Variable length argument list to pass to the function. **kwargs (Any): Arbitrary keyword arguments to pass to the function. Returns: (Results): A new Results object with attributes modified by the applied function. Examples: >>> results = model("path/to/image.jpg") >>> for result in results: ... result_cuda = result.cuda() ... result_cpu = result.cpu() """ r = self.new() for k in self._keys: v = getattr(self, k) if v is not None: setattr(r, k, getattr(v, fn)(*args, **kwargs)) return r
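A short sketch of the container semantics above: indexing and device or dtype moves go through `_apply()` and return new `Results` objects, leaving the original untouched. Paths are stand-ins.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
result = model("path/to/image.jpg")[0]

print(len(result))      # number of detections (length of the first non-empty field)
subset = result[:1]     # new Results holding only the first detection
on_cpu = result.cpu()   # new Results with all tensors moved to CPU
as_np = result.numpy()  # new Results with tensors converted to numpy arrays
```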
271801
show(self, *args, **kwargs): """ Display the image with annotated inference results. This method plots the detection results on the original image and displays it. It's a convenient way to visualize the model's predictions directly. Args: *args (Any): Variable length argument list to be passed to the `plot()` method. **kwargs (Any): Arbitrary keyword arguments to be passed to the `plot()` method. Examples: >>> results = model("path/to/image.jpg") >>> results[0].show() # Display the first result >>> for result in results: ... result.show() # Display all results """ self.plot(show=True, *args, **kwargs) def save(self, filename=None, *args, **kwargs): """ Saves annotated inference results image to file. This method plots the detection results on the original image and saves the annotated image to a file. It utilizes the `plot` method to generate the annotated image and then saves it to the specified filename. Args: filename (str | Path | None): The filename to save the annotated image. If None, a default filename is generated based on the original image path. *args (Any): Variable length argument list to be passed to the `plot` method. **kwargs (Any): Arbitrary keyword arguments to be passed to the `plot` method. Examples: >>> results = model("path/to/image.jpg") >>> for result in results: ... result.save("annotated_image.jpg") >>> # Or with custom plot arguments >>> for result in results: ... result.save("annotated_image.jpg", conf=False, line_width=2) """ if not filename: filename = f"results_{Path(self.path).name}" self.plot(save=True, filename=filename, *args, **kwargs) return filename def verbose(self): """ Returns a log string for each task in the results, detailing detection and classification outcomes. This method generates a human-readable string summarizing the detection and classification results. It includes the number of detections for each class and the top probabilities for classification tasks. Returns: (str): A formatted string containing a summary of the results. For detection tasks, it includes the number of detections per class. For classification tasks, it includes the top 5 class probabilities. Examples: >>> results = model("path/to/image.jpg") >>> for result in results: ... print(result.verbose()) 2 persons, 1 car, 3 traffic lights, dog 0.92, cat 0.78, horse 0.64, Notes: - If there are no detections, the method returns "(no detections), " for detection tasks. - For classification tasks, it returns the top 5 class probabilities and their corresponding class names. - The returned string is comma-separated and ends with a comma and a space. """ log_string = "" probs = self.probs boxes = self.boxes if len(self) == 0: return log_string if probs is not None else f"{log_string}(no detections), " if probs is not None: log_string += f"{', '.join(f'{self.names[j]} {probs.data[j]:.2f}' for j in probs.top5)}, " if boxes: for c in boxes.cls.unique(): n = (boxes.cls == c).sum() # detections per class log_string += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " return log_string def save_txt(self, txt_file, save_conf=False): """ Save detection results to a text file. Args: txt_file (str | Path): Path to the output text file. save_conf (bool): Whether to include confidence scores in the output. Returns: (str): Path to the saved text file. Examples: >>> from ultralytics import YOLO >>> model = YOLO("yolo11n.pt") >>> results = model("path/to/image.jpg") >>> for result in results: ... 
result.save_txt("output.txt") Notes: - The file will contain one line per detection or classification with the following structure: - For detections: `class confidence x_center y_center width height` - For classifications: `confidence class_name` - For masks and keypoints, the specific formats will vary accordingly. - The function will create the output directory if it does not exist. - If save_conf is False, the confidence scores will be excluded from the output. - Existing contents of the file will not be overwritten; new results will be appended. """ is_obb = self.obb is not None boxes = self.obb if is_obb else self.boxes masks = self.masks probs = self.probs kpts = self.keypoints texts = [] if probs is not None: # Classify [texts.append(f"{probs.data[j]:.2f} {self.names[j]}") for j in probs.top5] elif boxes: # Detect/segment/pose for j, d in enumerate(boxes): c, conf, id = int(d.cls), float(d.conf), None if d.id is None else int(d.id.item()) line = (c, *(d.xyxyxyxyn.view(-1) if is_obb else d.xywhn.view(-1))) if masks: seg = masks[j].xyn[0].copy().reshape(-1) # reversed mask.xyn, (n,2) to (n*2) line = (c, *seg) if kpts is not None: kpt = torch.cat((kpts[j].xyn, kpts[j].conf[..., None]), 2) if kpts[j].has_visible else kpts[j].xyn line += (*kpt.reshape(-1).tolist(),) line += (conf,) * save_conf + (() if id is None else (id,)) texts.append(("%g " * len(line)).rstrip() % line) if texts: Path(txt_file).parent.mkdir(parents=True, exist_ok=True) # make directory with open(txt_file, "a") as f: f.writelines(text + "\n" for text in texts) def save_crop(self, save_dir, file_name=Path("im.jpg")): """ Saves cropped detection images to specified directory. This method saves cropped images of detected objects to a specified directory. Each crop is saved in a subdirectory named after the object's class, with the filename based on the input file_name. Args: save_dir (str | Path): Directory path where cropped images will be saved. file_name (str | Path): Base filename for the saved cropped images. Default is Path("im.jpg"). Notes: - This method does not support Classify or Oriented Bounding Box (OBB) tasks. - Crops are saved as 'save_dir/class_name/file_name.jpg'. - The method will create necessary subdirectories if they don't exist. - Original image is copied before cropping to avoid modifying the original. Examples: >>> results = model("path/to/image.jpg") >>> for result in results: ... result.save_crop(save_dir="path/to/crops", file_name="detection") """ if self.probs is not None: LOGGER.warning("WARNING ⚠️ Classify task do not support `save_crop`.") return if self.obb is not None: LOGGER.warning("WARNING ⚠️ OBB task do not support `save_crop`.") return for d in self.boxes: save_one_box( d.xyxy, self.orig_img.copy(), file=Path(save_dir) / self.names[int(d.cls)] / f"{Path(file_name)}.jpg", BGR=True, ) def s
ummary
(self, normalize=False, decimals=5): """ Converts inference results to a summarized dictionary with optional normalization for box coordinates. This method creates a list of detection dictionaries, each containing information about a single detection or classification result. For classification tasks, it returns the top class and its confidence. For detection tasks, it includes class information, bounding box coordinates, and optionally mask segments and keypoints. Args: normalize (bool): Whether to normalize bounding box coordinates by image dimensions. Defaults to False. decimals (int): Number of decimal places to round the output values to. Defaults to 5. Returns: (List[Dict]): A list of dictionaries, each containing summarized information for a single detection or classification result. The structure of each dictionary varies based on the task type (classification or detection) and available information (boxes, masks, keypoints). Examples: >>> results = model("image.jpg") >>> summary = results[0].summary() >>> print(summary) """ # Create list of detection dictionaries results = [] if self.probs is not None: class_id = self.probs.top1 results.append( { "name": self.names[class_id], "class": class_id, "confidence": round(self.probs.top1conf.item(), decimals), } ) return results is_obb = self.obb is not None data = self.obb if is_obb else self.boxes h, w = self.orig_shape if normalize else (1, 1) for i, row in enumerate(data): # xyxy, track_id if tracking, conf, class_id class_id, conf = int(row.cls), round(row.conf.item(), decimals) box = (row.xyxyxyxy if is_obb else row.xyxy).squeeze().reshape(-1, 2).tolist() xy = {} for j, b in enumerate(box): xy[f"x{j + 1}"] = round(b[0] / w, decimals) xy[f"y{j + 1}"] = round(b[1] / h, decimals) result = {"name": self.names[class_id], "class": class_id, "confidence": conf, "box": xy} if data.is_track: result["track_id"] = int(row.id.item()) # track ID if self.masks: result["segments"] = { "x": (self.masks.xy[i][:, 0] / w).round(decimals).tolist(), "y": (self.masks.xy[i][:, 1] / h).round(decimals).tolist(), } if self.keypoints is not None: x, y, visible = self.keypoints[i].data[0].cpu().unbind(dim=1) # torch Tensor result["keypoints"] = { "x": (x / w).numpy().round(decimals).tolist(), # decimals named argument required "y": (y / h).numpy().round(decimals).tolist(), "visible": visible.numpy().round(decimals).tolist(), } results.append(result) return results def to_df(self, normalize=False, decimals=5): """ Converts detection results to a Pandas Dataframe. This method converts the detection results into Pandas Dataframe format. It includes information about detected objects such as bounding boxes, class names, confidence scores, and optionally segmentation masks and keypoints. Args: normalize (bool): Whether to normalize the bounding box coordinates by the image dimensions. If True, coordinates will be returned as float values between 0 and 1. Defaults to False. decimals (int): Number of decimal places to round the output values to. Defaults to 5. Returns: (DataFrame): A Pandas Dataframe containing all the information in results in an organized way. Examples: >>> results = model("path/to/image.jpg") >>> df_result = results[0].to_df() >>> print(df_result) """ import pandas as pd return pd.DataFrame(self.summary(normalize=normalize, decimals=decimals)) def to_csv(self, normalize=False, decimals=5, *args, **kwargs): """ Converts detection results to a CSV format. This method serializes the detection results into a CSV format. 
It includes information about detected objects such as bounding boxes, class names, confidence scores, and optionally segmentation masks and keypoints. Args: normalize (bool): Whether to normalize the bounding box coordinates by the image dimensions. If True, coordinates will be returned as float values between 0 and 1. Defaults to False. decimals (int): Number of decimal places to round the output values to. Defaults to 5. *args (Any): Variable length argument list to be passed to pandas.DataFrame.to_csv(). **kwargs (Any): Arbitrary keyword arguments to be passed to pandas.DataFrame.to_csv(). Returns: (str): CSV containing all the information in results in an organized way. Examples: >>> results = model("path/to/image.jpg") >>> csv_result = results[0].to_csv() >>> print(csv_result) """ return self.to_df(normalize=normalize, decimals=decimals).to_csv(*args, **kwargs) def to_xml(self, normalize=False, decimals=5, *args, **kwargs): """ Converts detection results to XML format. This method serializes the detection results into an XML format. It includes information about detected objects such as bounding boxes, class names, confidence scores, and optionally segmentation masks and keypoints. Args: normalize (bool): Whether to normalize the bounding box coordinates by the image dimensions. If True, coordinates will be returned as float values between 0 and 1. Defaults to False. decimals (int): Number of decimal places to round the output values to. Defaults to 5. *args (Any): Variable length argument list to be passed to pandas.DataFrame.to_xml(). **kwargs (Any): Arbitrary keyword arguments to be passed to pandas.DataFrame.to_xml(). Returns: (str): An XML string containing all the information in results in an organized way. Examples: >>> results = model("path/to/image.jpg") >>> xml_result = results[0].to_xml() >>> print(xml_result) """ check_requirements("lxml") df = self.to_df(normalize=normalize, decimals=decimals) return '<?xml version="1.0" encoding="utf-8"?>\n<root></root>' if df.empty else df.to_xml(*args, **kwargs) def tojson(self, normalize=False, decimals=5): """Deprecated version of to_json().""" LOGGER.warning("WARNING ⚠️ 'result.tojson()' is deprecated, replace with 'result.to_json()'.") return self.to_json(normalize, decimals) def to_json(self, normalize=False, decimals=5): """ Converts detection results to JSON format. This method serializes the detection results into a JSON-compatible format. It includes information about detected objects such as bounding boxes, class names, confidence scores, and optionally segmentation masks and keypoints. Args: normalize (bool): Whether to normalize the bounding box coordinates by the image dimensions. If True, coordinates will be returned as float values between 0 and 1. Defaults to False. decimals (int): Number of decimal places to round the output values to. Defaults to 5. Returns: (str): A JSON string containing the serialized detection results. Examples: >>> results = model("path/to/image.jpg") >>> json_result = results[0].to_json() >>> print(json_result) Notes: - For classification tasks, the JSON will contain class probabilities instead of bounding boxes. - For object detection tasks, the JSON will include bounding box coordinates, class names, and confidence scores. - If available, segmentation masks and keypoints will also be included in the JSON output. - The method uses the `summary` method internally to generate the data structure before converting it to JSON. 
""" import json return json.dumps(self.summary(normalize=normalize, decimals=decimals), indent=2) class Boxes(
Bas
eTensor): """ A class for managing and manipulating detection boxes. This class provides functionality for handling detection boxes, including their coordinates, confidence scores, class labels, and optional tracking IDs. It supports various box formats and offers methods for easy manipulation and conversion between different coordinate systems. Attributes: data (torch.Tensor | numpy.ndarray): The raw tensor containing detection boxes and associated data. orig_shape (Tuple[int, int]): The original image dimensions (height, width). is_track (bool): Indicates whether tracking IDs are included in the box data. xyxy (torch.Tensor | numpy.ndarray): Boxes in [x1, y1, x2, y2] format. conf (torch.Tensor | numpy.ndarray): Confidence scores for each box. cls (torch.Tensor | numpy.ndarray): Class labels for each box. id (torch.Tensor | numpy.ndarray): Tracking IDs for each box (if available). xywh (torch.Tensor | numpy.ndarray): Boxes in [x, y, width, height] format. xyxyn (torch.Tensor | numpy.ndarray): Normalized [x1, y1, x2, y2] boxes relative to orig_shape. xywhn (torch.Tensor | numpy.ndarray): Normalized [x, y, width, height] boxes relative to orig_shape. Methods: cpu(): Returns a copy of the object with all tensors on CPU memory. numpy(): Returns a copy of the object with all tensors as numpy arrays. cuda(): Returns a copy of the object with all tensors on GPU memory. to(*args, **kwargs): Returns a copy of the object with tensors on specified device and dtype. Examples: >>> import torch >>> boxes_data = torch.tensor([[100, 50, 150, 100, 0.9, 0], [200, 150, 300, 250, 0.8, 1]]) >>> orig_shape = (480, 640) # height, width >>> boxes = Boxes(boxes_data, orig_shape) >>> print(boxes.xyxy) >>> print(boxes.conf) >>> print(boxes.cls) >>> print(boxes.xywhn) """ def __init__(self, boxes, orig_shape) -> None: """ Initialize the Boxes class with detection box data and the original image shape. This class manages detection boxes, providing easy access and manipulation of box coordinates, confidence scores, class identifiers, and optional tracking IDs. It supports multiple formats for box coordinates, including both absolute and normalized forms. Args: boxes (torch.Tensor | np.ndarray): A tensor or numpy array with detection boxes of shape (num_boxes, 6) or (num_boxes, 7). Columns should contain [x1, y1, x2, y2, confidence, class, (optional) track_id]. orig_shape (Tuple[int, int]): The original image shape as (height, width). Used for normalization. Attributes: data (torch.Tensor): The raw tensor containing detection boxes and their associated data. orig_shape (Tuple[int, int]): The original image size, used for normalization. is_track (bool): Indicates whether tracking IDs are included in the box data. Examples: >>> import torch >>> boxes = torch.tensor([[100, 50, 150, 100, 0.9, 0]]) >>> orig_shape = (480, 640) >>> detection_boxes = Boxes(boxes, orig_shape) >>> print(detection_boxes.xyxy) tensor([[100., 50., 150., 100.]]) """ if boxes.ndim == 1: boxes = boxes[None, :] n = boxes.shape[-1] assert n in {6, 7}, f"expected 6 or 7 values but got {n}" # xyxy, track_id, conf, cls super().__init__(boxes, orig_shape) self.is_track = n == 7 self.orig_shape = orig_shape @property def xyxy(self): """ Returns bounding boxes in [x1, y1, x2, y2] format. Returns: (torch.Tensor | numpy.ndarray): A tensor or numpy array of shape (n, 4) containing bounding box coordinates in [x1, y1, x2, y2] format, where n is the number of boxes. 
Examples: >>> results = model("image.jpg") >>> boxes = results[0].boxes >>> xyxy = boxes.xyxy >>> print(xyxy) """ return self.data[:, :4] @property def conf(self): """ Returns the confidence scores for each detection box. Returns: (torch.Tensor | numpy.ndarray): A 1D tensor or array containing confidence scores for each detection, with shape (N,) where N is the number of detections. Examples: >>> boxes = Boxes(torch.tensor([[10, 20, 30, 40, 0.9, 0]]), orig_shape=(100, 100)) >>> conf_scores = boxes.conf >>> print(conf_scores) tensor([0.9000]) """ return self.data[:, -2] @property def cls(self): """ Returns the class ID tensor representing category predictions for each bounding box. Returns: (torch.Tensor | numpy.ndarray): A tensor or numpy array containing the class IDs for each detection box. The shape is (N,), where N is the number of boxes. Examples: >>> results = model("image.jpg") >>> boxes = results[0].boxes >>> class_ids = boxes.cls >>> print(class_ids) # tensor([0., 2., 1.]) """ return self.data[:, -1] @property def id(self): """ Returns the tracking IDs for each detection box if available. Returns: (torch.Tensor | None): A tensor containing tracking IDs for each box if tracking is enabled, otherwise None. Shape is (N,) where N is the number of boxes. Examples: >>> results = model.track("path/to/video.mp4") >>> for result in results: ... boxes = result.boxes ... if boxes.is_track: ... track_ids = boxes.id ... print(f"Tracking IDs: {track_ids}") ... else: ... print("Tracking is not enabled for these boxes.") Notes: - This property is only available when tracking is enabled (i.e., when `is_track` is True). - The tracking IDs are typically used to associate detections across multiple frames in video analysis. """ return self.data[:, -3] if self.is_track else None @property @lru_cache(maxsize=2) # maxsize 1 should suffice def xywh(self): """ Convert bounding boxes from [x1, y1, x2, y2] format to [x, y, width, height] format. Returns: (torch.Tensor | numpy.ndarray): Boxes in [x_center, y_center, width, height] format, where x_center, y_center are the coordinates of the center point of the bounding box, width, height are the dimensions of the bounding box and the shape of the returned tensor is (N, 4), where N is the number of boxes. Examples: >>> boxes = Boxes(torch.tensor([[100, 50, 150, 100], [200, 150, 300, 250]]), orig_shape=(480, 640)) >>> xywh = boxes.xywh >>> print(xywh) tensor([[100.0000, 50.0000, 50.0000, 50.0000], [200.0000, 150.0000, 100.0000, 100.0000]]) """ return ops.xyxy2xywh(self.xyxy) @property
@
lru_cache(maxsize=2) def xyxyn(self): """ Returns normalized bounding box coordinates relative to the original image size. This property calculates and returns the bounding box coordinates in [x1, y1, x2, y2] format, normalized to the range [0, 1] based on the original image dimensions. Returns: (torch.Tensor | numpy.ndarray): Normalized bounding box coordinates with shape (N, 4), where N is the number of boxes. Each row contains [x1, y1, x2, y2] values normalized to [0, 1]. Examples: >>> boxes = Boxes(torch.tensor([[100, 50, 300, 400, 0.9, 0]]), orig_shape=(480, 640)) >>> normalized = boxes.xyxyn >>> print(normalized) tensor([[0.1562, 0.1042, 0.4688, 0.8333]]) """ xyxy = self.xyxy.clone() if isinstance(self.xyxy, torch.Tensor) else np.copy(self.xyxy) xyxy[..., [0, 2]] /= self.orig_shape[1] xyxy[..., [1, 3]] /= self.orig_shape[0] return xyxy @property @lru_cache(maxsize=2) def xywhn(self): """ Returns normalized bounding boxes in [x, y, width, height] format. This property calculates and returns the normalized bounding box coordinates in the format [x_center, y_center, width, height], where all values are relative to the original image dimensions. Returns: (torch.Tensor | numpy.ndarray): Normalized bounding boxes with shape (N, 4), where N is the number of boxes. Each row contains [x_center, y_center, width, height] values normalized to [0, 1] based on the original image dimensions. Examples: >>> boxes = Boxes(torch.tensor([[100, 50, 150, 100, 0.9, 0]]), orig_shape=(480, 640)) >>> normalized = boxes.xywhn >>> print(normalized) tensor([[0.1953, 0.1562, 0.0781, 0.1042]]) """ xywh = ops.xyxy2xywh(self.xyxy) xywh[..., [0, 2]] /= self.orig_shape[1] xywh[..., [1, 3]] /= self.orig_shape[0] return xywh class Masks(BaseTensor): """ A class for storing and manipulating detection masks. This class extends BaseTensor and provides functionality for handling segmentation masks, including methods for converting between pixel and normalized coordinates. Attributes: data (torch.Tensor | numpy.ndarray): The raw tensor or array containing mask data. orig_shape (tuple): Original image shape in (height, width) format. xy (List[numpy.ndarray]): A list of segments in pixel coordinates. xyn (List[numpy.ndarray]): A list of normalized segments. Methods: cpu(): Returns a copy of the Masks object with the mask tensor on CPU memory. numpy(): Returns a copy of the Masks object with the mask tensor as a numpy array. cuda(): Returns a copy of the Masks object with the mask tensor on GPU memory. to(*args, **kwargs): Returns a copy of the Masks object with the mask tensor on specified device and dtype. Examples: >>> masks_data = torch.rand(1, 160, 160) >>> orig_shape = (720, 1280) >>> masks = Masks(masks_data, orig_shape) >>> pixel_coords = masks.xy >>> normalized_coords = masks.xyn """ def __init__(self, masks, orig_shape) -> None: """ Initialize the Masks class with detection mask data and the original image shape. Args: masks (torch.Tensor | np.ndarray): Detection masks with shape (num_masks, height, width). orig_shape (tuple): The original image shape as (height, width). Used for normalization. 
Examples: >>> import torch >>> from ultralytics.engine.results import Masks >>> masks = torch.rand(10, 160, 160) # 10 masks of 160x160 resolution >>> orig_shape = (720, 1280) # Original image shape >>> mask_obj = Masks(masks, orig_shape) """ if masks.ndim == 2: masks = masks[None, :] super().__init__(masks, orig_shape) @property @lru_cache(maxsize=1) def xyn(self): """ Returns normalized xy-coordinates of the segmentation masks. This property calculates and caches the normalized xy-coordinates of the segmentation masks. The coordinates are normalized relative to the original image shape. Returns: (List[numpy.ndarray]): A list of numpy arrays, where each array contains the normalized xy-coordinates of a single segmentation mask. Each array has shape (N, 2), where N is the number of points in the mask contour. Examples: >>> results = model("image.jpg") >>> masks = results[0].masks >>> normalized_coords = masks.xyn >>> print(normalized_coords[0]) # Normalized coordinates of the first mask """ return [ ops.scale_coords(self.data.shape[1:], x, self.orig_shape, normalize=True) for x in ops.masks2segments(self.data) ] @property @lru_cache(maxsize=1) def xy(self): """ Returns the [x, y] pixel coordinates for each segment in the mask tensor. This property calculates and returns a list of pixel coordinates for each segmentation mask in the Masks object. The coordinates are scaled to match the original image dimensions. Returns: (List[numpy.ndarray]): A list of numpy arrays, where each array contains the [x, y] pixel coordinates for a single segmentation mask. Each array has shape (N, 2), where N is the number of points in the segment. Examples: >>> results = model("image.jpg") >>> masks = results[0].masks >>> xy_coords = masks.xy >>> print(len(xy_coords)) # Number of masks >>> print(xy_coords[0].shape) # Shape of first mask's coordinates """ return [ ops.scale_coords(self.data.shape[1:], x, self.orig_shape, normalize=False) for x in ops.masks2segments(self.data) ] class Keypoi
nts
(BaseTensor): """ A class for storing and manipulating detection keypoints. This class encapsulates functionality for handling keypoint data, including coordinate manipulation, normalization, and confidence values. Attributes: data (torch.Tensor): The raw tensor containing keypoint data. orig_shape (Tuple[int, int]): The original image dimensions (height, width). has_visible (bool): Indicates whether visibility information is available for keypoints. xy (torch.Tensor): Keypoint coordinates in [x, y] format. xyn (torch.Tensor): Normalized keypoint coordinates in [x, y] format, relative to orig_shape. conf (torch.Tensor): Confidence values for each keypoint, if available. Methods: cpu(): Returns a copy of the keypoints tensor on CPU memory. numpy(): Returns a copy of the keypoints tensor as a numpy array. cuda(): Returns a copy of the keypoints tensor on GPU memory. to(*args, **kwargs): Returns a copy of the keypoints tensor with specified device and dtype. Examples: >>> import torch >>> from ultralytics.engine.results import Keypoints >>> keypoints_data = torch.rand(1, 17, 3) # 1 detection, 17 keypoints, (x, y, conf) >>> orig_shape = (480, 640) # Original image shape (height, width) >>> keypoints = Keypoints(keypoints_data, orig_shape) >>> print(keypoints.xy.shape) # Access xy coordinates >>> print(keypoints.conf) # Access confidence values >>> keypoints_cpu = keypoints.cpu() # Move keypoints to CPU """ @smart_inference_mode() # avoid keypoints < conf in-place error def __init__(self, keypoints, orig_shape) -> None: """ Initializes the Keypoints object with detection keypoints and original image dimensions. This method processes the input keypoints tensor, handling both 2D and 3D formats. For 3D tensors (x, y, confidence), it masks out low-confidence keypoints by setting their coordinates to zero. Args: keypoints (torch.Tensor): A tensor containing keypoint data. Shape can be either: - (num_objects, num_keypoints, 2) for x, y coordinates only - (num_objects, num_keypoints, 3) for x, y coordinates and confidence scores orig_shape (Tuple[int, int]): The original image dimensions (height, width). Examples: >>> kpts = torch.rand(1, 17, 3) # 1 object, 17 keypoints (COCO format), x,y,conf >>> orig_shape = (720, 1280) # Original image height, width >>> keypoints = Keypoints(kpts, orig_shape) """ if keypoints.ndim == 2: keypoints = keypoints[None, :] if keypoints.shape[2] == 3: # x, y, conf mask = keypoints[..., 2] < 0.5 # points with conf < 0.5 (not visible) keypoints[..., :2][mask] = 0 super().__init__(keypoints, orig_shape) self.has_visible = self.data.shape[-1] == 3 @property @lru_cache(maxsize=1) def xy(self): """ Returns x, y coordinates of keypoints. Returns: (torch.Tensor): A tensor containing the x, y coordinates of keypoints with shape (N, K, 2), where N is the number of detections and K is the number of keypoints per detection. Examples: >>> results = model("image.jpg") >>> keypoints = results[0].keypoints >>> xy = keypoints.xy >>> print(xy.shape) # (N, K, 2) >>> print(xy[0]) # x, y coordinates of keypoints for first detection Notes: - The returned coordinates are in pixel units relative to the original image dimensions. - If keypoints were initialized with confidence values, only keypoints with confidence >= 0.5 are returned. - This property uses LRU caching to improve performance on repeated access. """ return self.data[..., :2] @property @lru_cache(maxsize=1) def xyn(self): """ Returns normalized coordinates (x, y) of keypoints relative to the original image size. 
Returns: (torch.Tensor | numpy.ndarray): A tensor or array of shape (N, K, 2) containing normalized keypoint coordinates, where N is the number of instances, K is the number of keypoints, and the last dimension contains [x, y] values in the range [0, 1]. Examples: >>> keypoints = Keypoints(torch.rand(1, 17, 2), orig_shape=(480, 640)) >>> normalized_kpts = keypoints.xyn >>> print(normalized_kpts.shape) torch.Size([1, 17, 2]) """ xy = self.xy.clone() if isinstance(self.xy, torch.Tensor) else np.copy(self.xy) xy[..., 0] /= self.orig_shape[1] xy[..., 1] /= self.orig_shape[0] return xy @property @lru_cache(maxsize=1) def conf(self): """ Returns confidence values for each keypoint. Returns: (torch.Tensor | None): A tensor containing confidence scores for each keypoint if available, otherwise None. Shape is (num_detections, num_keypoints) for batched data or (num_keypoints,) for single detection. Examples: >>> keypoints = Keypoints(torch.rand(1, 17, 3), orig_shape=(640, 640)) # 1 detection, 17 keypoints >>> conf = keypoints.conf >>> print(conf.shape) # torch.Size([1, 17]) """ return self.data[..., 2] if self.has_visible else None class Probs(
# Ultralytics YOLO 🚀, AGPL-3.0 license """ Export a YOLO PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit. Format | `format=argument` | Model --- | --- | --- PyTorch | - | yolo11n.pt TorchScript | `torchscript` | yolo11n.torchscript ONNX | `onnx` | yolo11n.onnx OpenVINO | `openvino` | yolo11n_openvino_model/ TensorRT | `engine` | yolo11n.engine CoreML | `coreml` | yolo11n.mlpackage TensorFlow SavedModel | `saved_model` | yolo11n_saved_model/ TensorFlow GraphDef | `pb` | yolo11n.pb TensorFlow Lite | `tflite` | yolo11n.tflite TensorFlow Edge TPU | `edgetpu` | yolo11n_edgetpu.tflite TensorFlow.js | `tfjs` | yolo11n_web_model/ PaddlePaddle | `paddle` | yolo11n_paddle_model/ NCNN | `ncnn` | yolo11n_ncnn_model/ Requirements: $ pip install "ultralytics[export]" Python: from ultralytics import YOLO model = YOLO('yolo11n.pt') results = model.export(format='onnx') CLI: $ yolo mode=export model=yolo11n.pt format=onnx Inference: $ yolo predict model=yolo11n.pt # PyTorch yolo11n.torchscript # TorchScript yolo11n.onnx # ONNX Runtime or OpenCV DNN with dnn=True yolo11n_openvino_model # OpenVINO yolo11n.engine # TensorRT yolo11n.mlpackage # CoreML (macOS-only) yolo11n_saved_model # TensorFlow SavedModel yolo11n.pb # TensorFlow GraphDef yolo11n.tflite # TensorFlow Lite yolo11n_edgetpu.tflite # TensorFlow Edge TPU yolo11n_paddle_model # PaddlePaddle yolo11n_ncnn_model # NCNN TensorFlow.js: $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example $ npm install $ ln -s ../../yolo11n_web_model public/yolo11n_web_model $ npm start """ import gc import json import os import shutil import subprocess import time import warnings from copy import deepcopy from datetime import datetime from pathlib import Path import numpy as np import torch from ultralytics.cfg import TASK2DATA, get_cfg from ultralytics.data import build_dataloader from ultralytics.data.dataset import YOLODataset from ultralytics.data.utils import check_cls_dataset, check_det_dataset from ultralytics.nn.autobackend import check_class_names, default_class_names from ultralytics.nn.modules import C2f, Detect, RTDETRDecoder from ultralytics.nn.tasks import DetectionModel, SegmentationModel, WorldModel from ultralytics.utils import ( ARM64, DEFAULT_CFG, IS_JETSON, LINUX, LOGGER, MACOS, PYTHON_VERSION, ROOT, WINDOWS, __version__, callbacks, colorstr, get_default_args, yaml_save, ) from ultralytics.utils.checks import check_imgsz, check_is_path_safe, check_requirements, check_version from ultralytics.utils.downloads import attempt_download_asset, get_github_assets, safe_download from ultralytics.utils.files import file_size, spaces_in_path from ultralytics.utils.ops import Profile from ultralytics.utils.torch_utils import TORCH_1_13, get_latest_opset, select_device, smart_inference_mode def export_formats(): """Ultralytics YOLO export formats.""" x = [ ["PyTorch", "-", ".pt", True, True], ["TorchScript", "torchscript", ".torchscript", True, True], ["ONNX", "onnx", ".onnx", True, True], ["OpenVINO", "openvino", "_openvino_model", True, False], ["TensorRT", "engine", ".engine", False, True], ["CoreML", "coreml", ".mlpackage", True, False], ["TensorFlow SavedModel", "saved_model", "_saved_model", True, True], ["TensorFlow GraphDef", "pb", ".pb", True, True], ["TensorFlow Lite", "tflite", ".tflite", True, False], ["TensorFlow Edge TPU", "edgetpu", "_edgetpu.tflite", True, False], ["TensorFlow.js", "tfjs", "_web_model", True, False], ["PaddlePaddle", "paddle", "_paddle_model", 
True, True], ["NCNN", "ncnn", "_ncnn_model", True, True], ] return dict(zip(["Format", "Argument", "Suffix", "CPU", "GPU"], zip(*x))) def gd_outputs(gd): """TensorFlow GraphDef model output node names.""" name_list, input_list = [], [] for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef name_list.append(node.name) input_list.extend(node.input) return sorted(f"{x}:0" for x in list(set(name_list) - set(input_list)) if not x.startswith("NoOp")) def try_export(inner_func): """YOLO export decorator, i.e. @try_export.""" inner_args = get_default_args(inner_func) def outer_func(*args, **kwargs): """Export a model.""" prefix = inner_args["prefix"] try: with Profile() as dt: f, model = inner_func(*args, **kwargs) LOGGER.info(f"{prefix} export success ✅ {dt.t:.1f}s, saved as '{f}' ({file_size(f):.1f} MB)") return f, model except Exception as e: LOGGER.error(f"{prefix} export failure ❌ {dt.t:.1f}s: {e}") raise e return outer_func class Exporter: """ A class for exporting a model. Attributes: args (SimpleNamespace): Configuration for the exporter. callbacks (list, optional): List of callback functions. Defaults to None. """ def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): """ Initializes the Exporter class. Args: cfg (str, optional): Path to a configuration file. Defaults to DEFAULT_CFG. overrides (dict, optional): Configuration overrides. Defaults to None. _callbacks (dict, optional): Dictionary of callback functions. Defaults to None. """ self.args = get_cfg(cfg, overrides) if self.args.format.lower() in {"coreml", "mlmodel"}: # fix attempt for protobuf<3.20.x errors os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python" # must run before TensorBoard callback self.callbacks = _callbacks or callbacks.get_default_callbacks() callbacks.add_integration_callbacks(self) @
("CoreML:")): """YOLO CoreML export.""" mlmodel = self.args.format.lower() == "mlmodel" # legacy *.mlmodel export format requested check_requirements("coremltools>=6.0,<=6.2" if mlmodel else "coremltools>=7.0") import coremltools as ct # noqa LOGGER.info(f"\n{prefix} starting export with coremltools {ct.__version__}...") assert not WINDOWS, "CoreML export is not supported on Windows, please run on macOS or Linux." assert self.args.batch == 1, "CoreML batch sizes > 1 are not supported. Please retry at 'batch=1'." f = self.file.with_suffix(".mlmodel" if mlmodel else ".mlpackage") if f.is_dir(): shutil.rmtree(f) if self.args.nms and getattr(self.model, "end2end", False): LOGGER.warning(f"{prefix} WARNING ⚠️ 'nms=True' is not available for end2end models. Forcing 'nms=False'.") self.args.nms = False bias = [0.0, 0.0, 0.0] scale = 1 / 255 classifier_config = None if self.model.task == "classify": classifier_config = ct.ClassifierConfig(list(self.model.names.values())) if self.args.nms else None model = self.model elif self.model.task == "detect": model = IOSDetectModel(self.model, self.im) if self.args.nms else self.model else: if self.args.nms: LOGGER.warning(f"{prefix} WARNING ⚠️ 'nms=True' is only available for Detect models like 'yolov8n.pt'.") # TODO CoreML Segment and Pose model pipelining model = self.model ts = torch.jit.trace(model.eval(), self.im, strict=False) # TorchScript model ct_model = ct.convert( ts, inputs=[ct.ImageType("image", shape=self.im.shape, scale=scale, bias=bias)], classifier_config=classifier_config, convert_to="neuralnetwork" if mlmodel else "mlprogram", ) bits, mode = (8, "kmeans") if self.args.int8 else (16, "linear") if self.args.half else (32, None) if bits < 32: if "kmeans" in mode: check_requirements("scikit-learn") # scikit-learn package required for k-means quantization if mlmodel: ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode) elif bits == 8: # mlprogram already quantized to FP16 import coremltools.optimize.coreml as cto op_config = cto.OpPalettizerConfig(mode="kmeans", nbits=bits, weight_threshold=512) config = cto.OptimizationConfig(global_config=op_config) ct_model = cto.palettize_weights(ct_model, config=config) if self.args.nms and self.model.task == "detect": if mlmodel: # coremltools<=6.2 NMS export requires Python<3.11 check_version(PYTHON_VERSION, "<3.11", name="Python ", hard=True) weights_dir = None else: ct_model.save(str(f)) # save otherwise weights_dir does not exist weights_dir = str(f / "Data/com.apple.CoreML/weights") ct_model = self._pipeline_coreml(ct_model, weights_dir=weights_dir) m = self.metadata # metadata dict ct_model.short_description = m.pop("description") ct_model.author = m.pop("author") ct_model.license = m.pop("license") ct_model.version = m.pop("version") ct_model.user_defined_metadata.update({k: str(v) for k, v in m.items()}) try: ct_model.save(str(f)) # save *.mlpackage except Exception as e: LOGGER.warning( f"{prefix} WARNING ⚠️ CoreML export to *.mlpackage failed ({e}), reverting to *.mlmodel export. " f"Known coremltools Python 3.11 and Windows bugs https://github.com/apple/coremltools/issues/1928." ) f = f.with_suffix(".mlmodel") ct_model.save(str(f)) return f, ct_model @try_export def export_engine(self, dla=None, prefix=colo
w SavedModel:")): """YOLO TensorFlow SavedModel export.""" cuda = torch.cuda.is_available() try: import tensorflow as tf # noqa except ImportError: suffix = "-macos" if MACOS else "-aarch64" if ARM64 else "" if cuda else "-cpu" version = ">=2.0.0" check_requirements(f"tensorflow{suffix}{version}") import tensorflow as tf # noqa check_requirements( ( "keras", # required by 'onnx2tf' package "tf_keras", # required by 'onnx2tf' package "sng4onnx>=1.0.1", # required by 'onnx2tf' package "onnx_graphsurgeon>=0.3.26", # required by 'onnx2tf' package "onnx>=1.12.0", "onnx2tf>1.17.5,<=1.22.3", "onnxslim>=0.1.31", "tflite_support<=0.4.3" if IS_JETSON else "tflite_support", # fix ImportError 'GLIBCXX_3.4.29' "flatbuffers>=23.5.26,<100", # update old 'flatbuffers' included inside tensorflow package "onnxruntime-gpu" if cuda else "onnxruntime", ), cmds="--extra-index-url https://pypi.ngc.nvidia.com", # onnx_graphsurgeon only on NVIDIA ) LOGGER.info(f"\n{prefix} starting export with tensorflow {tf.__version__}...") check_version( tf.__version__, ">=2.0.0", name="tensorflow", verbose=True, msg="https://github.com/ultralytics/ultralytics/issues/5161", ) import onnx2tf f = Path(str(self.file).replace(self.file.suffix, "_saved_model")) if f.is_dir(): shutil.rmtree(f) # delete output folder # Pre-download calibration file to fix https://github.com/PINTO0309/onnx2tf/issues/545 onnx2tf_file = Path("calibration_image_sample_data_20x128x128x3_float32.npy") if not onnx2tf_file.exists(): attempt_download_asset(f"{onnx2tf_file}.zip", unzip=True, delete=True) # Export to ONNX self.args.simplify = True f_onnx, _ = self.export_onnx() # Export to TF np_data = None if self.args.int8: tmp_file = f / "tmp_tflite_int8_calibration_images.npy" # int8 calibration images file if self.args.data: f.mkdir() images = [batch["img"].permute(0, 2, 3, 1) for batch in self.get_int8_calibration_dataloader(prefix)] images = torch.cat(images, 0).float() np.save(str(tmp_file), images.numpy().astype(np.float32)) # BHWC np_data = [["images", tmp_file, [[[[0, 0, 0]]]], [[[[255, 255, 255]]]]]] LOGGER.info(f"{prefix} starting TFLite export with onnx2tf {onnx2tf.__version__}...") keras_model = onnx2tf.convert( input_onnx_file_path=f_onnx, output_folder_path=str(f), not_use_onnxsim=True, verbosity="error", # note INT8-FP16 activation bug https://github.com/ultralytics/ultralytics/issues/15873 output_integer_quantized_tflite=self.args.int8, quant_type="per-tensor", # "per-tensor" (faster) or "per-channel" (slower but more accurate) custom_input_op_name_np_data_path=np_data, disable_group_convolution=True, # for end-to-end model compatibility enable_batchmatmul_unfold=True, # for end-to-end model compatibility ) yaml_save(f / "metadata.yaml", self.metadata) # add metadata.yaml # Remove/rename TFLite models if self.args.int8: tmp_file.unlink(missing_ok=True) for file in f.rglob("*_dynamic_range_quant.tflite"): file.rename(file.with_name(file.stem.replace("_dynamic_range_quant", "_int8") + file.suffix)) for file in f.rglob("*_integer_quant_with_int16_act.tflite"): file.unlink() # delete extra fp16 activation TFLite files # Add TFLite metadata for file in f.rglob("*.tflite"): f.unlink() if "quant_with_int16_act.tflite" in str(f) else self._add_tflite_metadata(file) return str(f), keras_model # or keras_model = tf.saved_model.load(f, tags=None, options=None) @try_export def export_pb(self, keras_model, prefix=colorstr("TensorFlow GraphDef:")): """YOLO TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow.""" import 
tensorflow as tf # noqa from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 # noqa LOGGER.info(f"\n{prefix} starting export with tensorflow {tf.__version__}...") f = self.file.with_suffix(".pb") m = tf.function(lambda x: keras_model(x)) # full model m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)) frozen_func = convert_variables_to_constants_v2(m) frozen_func.graph.as_graph_def() tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False) return f, None @try_export def export_tflite(self, keras_model, nms, agnostic_nms, prefix=colorstr("TensorFlow Lite:")): """YOLO TensorFlow Lite export.""" # BUG https://github.com/ultralytics/ultralytics/issues/13436 import tensorflow as tf # noqa LOGGER.info(f"\n{prefix} starting export with tensorflow {tf.__version__}...") saved_model = Path(str(self.file).replace(self.file.suffix, "_saved_model")) if self.args.int8: f = saved_model / f"{self.file.stem}_int8.tflite" # fp32 in/out elif self.args.half: f = saved_model / f"{self.file.stem}_float16.tflite" # fp32 in/out else: f = saved_model / f"{self.file.stem}_float32.tflite" return str(f), None @try_export def export_edgetpu(self, tflite_model="", prefix=
colors
tr("Edge TPU:")): """YOLO Edge TPU export https://coral.ai/docs/edgetpu/models-intro/.""" LOGGER.warning(f"{prefix} WARNING ⚠️ Edge TPU known bug https://github.com/ultralytics/ultralytics/issues/1185") cmd = "edgetpu_compiler --version" help_url = "https://coral.ai/docs/edgetpu/compiler/" assert LINUX, f"export only supported on Linux. See {help_url}" if subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True).returncode != 0: LOGGER.info(f"\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}") sudo = subprocess.run("sudo --version >/dev/null", shell=True).returncode == 0 # sudo installed on system for c in ( "curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -", 'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | ' "sudo tee /etc/apt/sources.list.d/coral-edgetpu.list", "sudo apt-get update", "sudo apt-get install edgetpu-compiler", ): subprocess.run(c if sudo else c.replace("sudo ", ""), shell=True, check=True) ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1] LOGGER.info(f"\n{prefix} starting export with Edge TPU compiler {ver}...") f = str(tflite_model).replace(".tflite", "_edgetpu.tflite") # Edge TPU model cmd = ( "edgetpu_compiler " f'--out_dir "{Path(f).parent}" ' "--show_operations " "--search_delegate " "--delegate_search_step 30 " "--timeout_sec 180 " f'"{tflite_model}"' ) LOGGER.info(f"{prefix} running '{cmd}'") subprocess.run(cmd, shell=True) self._add_tflite_metadata(f) return f, None @try_export def export_tfjs(self, prefix=colorstr("TensorFlow.js:")): """YOLO TensorFlow.js export.""" check_requirements("tensorflowjs") if ARM64: # Fix error: `np.object` was a deprecated alias for the builtin `object` when exporting to TF.js on ARM64 check_requirements("numpy==1.23.5") import tensorflow as tf import tensorflowjs as tfjs # noqa LOGGER.info(f"\n{prefix} starting export with tensorflowjs {tfjs.__version__}...") f = str(self.file).replace(self.file.suffix, "_web_model") # js dir f_pb = str(self.file.with_suffix(".pb")) # *.pb path gd = tf.Graph().as_graph_def() # TF GraphDef with open(f_pb, "rb") as file: gd.ParseFromString(file.read()) outputs = ",".join(gd_outputs(gd)) LOGGER.info(f"\n{prefix} output node names: {outputs}") quantization = "--quantize_float16" if self.args.half else "--quantize_uint8" if self.args.int8 else "" with spaces_in_path(f_pb) as fpb_, spaces_in_path(f) as f_: # exporter can not handle spaces in path cmd = ( "tensorflowjs_converter " f'--input_format=tf_frozen_model {quantization} --output_node_names={outputs} "{fpb_}" "{f_}"' ) LOGGER.info(f"{prefix} running '{cmd}'") subprocess.run(cmd, shell=True) if " " in f: LOGGER.warning(f"{prefix} WARNING ⚠️ your model may not work correctly with spaces in path '{f}'.") # Add metadata yaml_save(Path(f) / "metadata.yaml", self.metadata) # add metadata.yaml return f, None def _add_tflite_metadata(self, file): """Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata.""" import flatbuffers try: # TFLite Support bug https://github.com/tensorflow/tflite-support/issues/954#issuecomment-2108570845 from tensorflow_lite_support.metadata import metadata_schema_py_generated as schema # noqa from tensorflow_lite_support.metadata.python import metadata # noqa except ImportError: # ARM64 systems may not have the 'tensorflow_lite_support' package available from tflite_support import metadata # noqa from tflite_support import 
metadata_schema_py_generated as schema # noqa # Create model info model_meta = schema.ModelMetadataT() model_meta.name = self.metadata["description"] model_meta.version = self.metadata["version"] model_meta.author = self.metadata["author"] model_meta.license = self.metadata["license"] # Label file tmp_file = Path(file).parent / "temp_meta.txt" with open(tmp_file, "w") as f: f.write(str(self.metadata)) label_file = schema.AssociatedFileT() label_file.name = tmp_file.name label_file.type = schema.AssociatedFileType.TENSOR_AXIS_LABELS # Create input info input_meta = schema.TensorMetadataT() input_meta.name = "image" input_meta.description = "Input image to be detected." input_meta.content = schema.ContentT() input_meta.content.contentProperties = schema.ImagePropertiesT() input_meta.content.contentProperties.colorSpace = schema.ColorSpaceType.RGB input_meta.content.contentPropertiesType = schema.ContentProperties.ImageProperties # Create output info output1 = schema.TensorMetadataT() output1.name = "output" output1.description = "Coordinates of detected objects, class labels, and confidence score" output1.associatedFiles = [label_file] if self.model.task == "segment": output2 = schema.TensorMetadataT() output2.name = "output" output2.description = "Mask protos" output2.associatedFiles = [label_file] # Create subgraph info subgraph = schema.SubGraphMetadataT() subgraph.inputTensorMetadata = [input_meta] subgraph.outputTensorMetadata = [output1, output2] if self.model.task == "segment" else [output1] model_meta.subgraphMetadata = [subgraph] b = flatbuffers.Builder(0) b.Finish(model_meta.Pack(b), metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER) metadata_buf = b.Output() populator = metadata.MetadataPopulator.with_model_file(str(file)) populator.load_metadata_buffer(metadata_buf) populator.load_associated_files([str(tmp_file)]) populator.populate() tmp_file.unlink() def _pipeline_coreml(self, model, weights_dir=None, prefix=colorstr("Core
ML Pip
eline:")): """YOLO CoreML pipeline.""" import coremltools as ct # noqa LOGGER.info(f"{prefix} starting pipeline with coremltools {ct.__version__}...") _, _, h, w = list(self.im.shape) # BCHW # Output shapes spec = model.get_spec() out0, out1 = iter(spec.description.output) if MACOS: from PIL import Image img = Image.new("RGB", (w, h)) # w=192, h=320 out = model.predict({"image": img}) out0_shape = out[out0.name].shape # (3780, 80) out1_shape = out[out1.name].shape # (3780, 4) else: # linux and windows can not run model.predict(), get sizes from PyTorch model output y out0_shape = self.output_shape[2], self.output_shape[1] - 4 # (3780, 80) out1_shape = self.output_shape[2], 4 # (3780, 4) # Checks names = self.metadata["names"] nx, ny = spec.description.input[0].type.imageType.width, spec.description.input[0].type.imageType.height _, nc = out0_shape # number of anchors, number of classes assert len(names) == nc, f"{len(names)} names found for nc={nc}" # check # Define output shapes (missing) out0.type.multiArrayType.shape[:] = out0_shape # (3780, 80) out1.type.multiArrayType.shape[:] = out1_shape # (3780, 4) # Model from spec model = ct.models.MLModel(spec, weights_dir=weights_dir) # 3. Create NMS protobuf nms_spec = ct.proto.Model_pb2.Model() nms_spec.specificationVersion = 5 for i in range(2): decoder_output = model._spec.description.output[i].SerializeToString() nms_spec.description.input.add() nms_spec.description.input[i].ParseFromString(decoder_output) nms_spec.description.output.add() nms_spec.description.output[i].ParseFromString(decoder_output) nms_spec.description.output[0].name = "confidence" nms_spec.description.output[1].name = "coordinates" output_sizes = [nc, 4] for i in range(2): ma_type = nms_spec.description.output[i].type.multiArrayType ma_type.shapeRange.sizeRanges.add() ma_type.shapeRange.sizeRanges[0].lowerBound = 0 ma_type.shapeRange.sizeRanges[0].upperBound = -1 ma_type.shapeRange.sizeRanges.add() ma_type.shapeRange.sizeRanges[1].lowerBound = output_sizes[i] ma_type.shapeRange.sizeRanges[1].upperBound = output_sizes[i] del ma_type.shape[:] nms = nms_spec.nonMaximumSuppression nms.confidenceInputFeatureName = out0.name # 1x507x80 nms.coordinatesInputFeatureName = out1.name # 1x507x4 nms.confidenceOutputFeatureName = "confidence" nms.coordinatesOutputFeatureName = "coordinates" nms.iouThresholdInputFeatureName = "iouThreshold" nms.confidenceThresholdInputFeatureName = "confidenceThreshold" nms.iouThreshold = 0.45 nms.confidenceThreshold = 0.25 nms.pickTop.perClass = True nms.stringClassLabels.vector.extend(names.values()) nms_model = ct.models.MLModel(nms_spec) # 4. 
Pipeline models together pipeline = ct.models.pipeline.Pipeline( input_features=[ ("image", ct.models.datatypes.Array(3, ny, nx)), ("iouThreshold", ct.models.datatypes.Double()), ("confidenceThreshold", ct.models.datatypes.Double()), ], output_features=["confidence", "coordinates"], ) pipeline.add_model(model) pipeline.add_model(nms_model) # Correct datatypes pipeline.spec.description.input[0].ParseFromString(model._spec.description.input[0].SerializeToString()) pipeline.spec.description.output[0].ParseFromString(nms_model._spec.description.output[0].SerializeToString()) pipeline.spec.description.output[1].ParseFromString(nms_model._spec.description.output[1].SerializeToString()) # Update metadata pipeline.spec.specificationVersion = 5 pipeline.spec.description.metadata.userDefined.update( {"IoU threshold": str(nms.iouThreshold), "Confidence threshold": str(nms.confidenceThreshold)} ) # Save the model model = ct.models.MLModel(pipeline.spec, weights_dir=weights_dir) model.input_description["image"] = "Input image" model.input_description["iouThreshold"] = f"(optional) IoU threshold override (default: {nms.iouThreshold})" model.input_description["confidenceThreshold"] = ( f"(optional) Confidence threshold override (default: {nms.confidenceThreshold})" ) model.output_description["confidence"] = 'Boxes × Class confidence (see user-defined metadata "classes")' model.output_description["coordinates"] = "Boxes × [x, y, width, height] (relative to image size)" LOGGER.info(f"{prefix} pipeline success") return model def add_callback(self, event: str, callback): """Appends the given callback.""" self.callbacks[event].append(callback) def run_callbacks(self, event: str): """Execute all callbacks for a given event.""" for callback in self.callbacks.get(event, []): callback(self) class IOSDetectModel(torch.nn.Module): """Wrap an Ultralytics YOLO model for Apple iOS CoreML export.""" def __init__(self, model, im): """Initialize the IOSDetectModel class with a YOLO model and example image.""" super().__init__() _, _, h, w = im.shape # batch, channel, height, width self.model = model self.nc = len(model.names) # number of classes if w == h: self.normalize = 1.0 / w # scalar else: self.normalize = torch.tensor([1.0 / w, 1.0 / h, 1.0 / w, 1.0 / h]) # broadcast (slower, smaller) def forward(self, x): """Normalize predictions of object detection model with input size-dependent factors.""" xywh, cls = self.model(x)[0].transpose(0, 1).split((4, self.nc), 1) return cls, xywh * self.normalize # confidence (3780, 80), coordinates (3780, 4)
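# Illustrative usage sketch (hypothetical helper, not part of the Exporter API). The Exporter
# above is normally driven through YOLO.export(), as shown in the module docstring. A minimal
# sketch, assuming a local "yolo11n.pt" checkpoint and the ONNX format purely for illustration:
def _example_export_to_onnx():
    """Export a PyTorch checkpoint to ONNX and run inference with the exported file."""
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")
    onnx_file = model.export(format="onnx")  # path of the exported *.onnx model
    exported = YOLO(onnx_file)  # reload through the same API (ONNX Runtime backend)
    return exported("path/to/image.jpg")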
ss Model(nn.Module): """ A base class for implementing YOLO models, unifying APIs across different model types. This class provides a common interface for various operations related to YOLO models, such as training, validation, prediction, exporting, and benchmarking. It handles different types of models, including those loaded from local files, Ultralytics HUB, or Triton Server. Attributes: callbacks (Dict): A dictionary of callback functions for various events during model operations. predictor (BasePredictor): The predictor object used for making predictions. model (nn.Module): The underlying PyTorch model. trainer (BaseTrainer): The trainer object used for training the model. ckpt (Dict): The checkpoint data if the model is loaded from a *.pt file. cfg (str): The configuration of the model if loaded from a *.yaml file. ckpt_path (str): The path to the checkpoint file. overrides (Dict): A dictionary of overrides for model configuration. metrics (Dict): The latest training/validation metrics. session (HUBTrainingSession): The Ultralytics HUB session, if applicable. task (str): The type of task the model is intended for. model_name (str): The name of the model. Methods: __call__: Alias for the predict method, enabling the model instance to be callable. _new: Initializes a new model based on a configuration file. _load: Loads a model from a checkpoint file. _check_is_pytorch_model: Ensures that the model is a PyTorch model. reset_weights: Resets the model's weights to their initial state. load: Loads model weights from a specified file. save: Saves the current state of the model to a file. info: Logs or returns information about the model. fuse: Fuses Conv2d and BatchNorm2d layers for optimized inference. predict: Performs object detection predictions. track: Performs object tracking. val: Validates the model on a dataset. benchmark: Benchmarks the model on various export formats. export: Exports the model to different formats. train: Trains the model on a dataset. tune: Performs hyperparameter tuning. _apply: Applies a function to the model's tensors. add_callback: Adds a callback function for an event. clear_callback: Clears all callbacks for an event. reset_callbacks: Resets all callbacks to their default functions. Examples: >>> from ultralytics import YOLO >>> model = YOLO("yolo11n.pt") >>> results = model.predict("image.jpg") >>> model.train(data="coco8.yaml", epochs=3) >>> metrics = model.val() >>> model.export(format="onnx") """ def __init__( self, model: Union[str, Path] = "yolo11n.pt", task: str = None, verbose: bool = False, ) -> None: """ Initializes a new instance of the YOLO model class. This constructor sets up the model based on the provided model path or name. It handles various types of model sources, including local files, Ultralytics HUB models, and Triton Server models. The method initializes several important attributes of the model and prepares it for operations like training, prediction, or export. Args: model (Union[str, Path]): Path or name of the model to load or create. Can be a local file path, a model name from Ultralytics HUB, or a Triton Server model. task (str | None): The task type associated with the YOLO model, specifying its application domain. verbose (bool): If True, enables verbose output during the model's initialization and subsequent operations. Raises: FileNotFoundError: If the specified model file does not exist or is inaccessible. ValueError: If the model file or configuration is invalid or unsupported. 
ImportError: If required dependencies for specific model types (like HUB SDK) are not installed. Examples: >>> model = Model("yolo11n.pt") >>> model = Model("path/to/model.yaml", task="detect") >>> model = Model("hub_model", verbose=True) """ super().__init__() self.callbacks = callbacks.get_default_callbacks() self.predictor = None # reuse predictor self.model = None # model object self.trainer = None # trainer object self.ckpt = None # if loaded from *.pt self.cfg = None # if loaded from *.yaml self.ckpt_path = None self.overrides = {} # overrides for trainer object self.metrics = None # validation/training metrics self.session = None # HUB session self.task = task # task type model = str(model).strip() # Check if Ultralytics HUB model from https://hub.ultralytics.com if self.is_hub_model(model): # Fetch model from HUB checks.check_requirements("hub-sdk>=0.0.12") session = HUBTrainingSession.create_session(model) model = session.model_file if session.train_args: # training sent from HUB self.session = session # Check if Triton Server model elif self.is_triton_model(model): self.model_name = self.model = model return # Load or create new YOLO model if Path(model).suffix in {".yaml", ".yml"}: self._new(model, task=task, verbose=verbose) else: self._load(model, task=task) def __call__( self, source: Union[str, Path, int, Image.Image, list, tuple, np.ndarray, torch.Tensor] = None, stream: bool = False, **kwargs, ) -> list: """ Alias for the predict method, enabling the model instance to be callable for predictions. This method simplifies the process of making predictions by allowing the model instance to be called directly with the required arguments. Args: source (str | Path | int | PIL.Image | np.ndarray | torch.Tensor | List | Tuple): The source of the image(s) to make predictions on. Can be a file path, URL, PIL image, numpy array, PyTorch tensor, or a list/tuple of these. stream (bool): If True, treat the input source as a continuous stream for predictions. **kwargs (Any): Additional keyword arguments to configure the prediction process. Returns: (List[ultralytics.engine.results.Results]): A list of prediction results, each encapsulated in a Results object. Examples: >>> model = YOLO("yolo11n.pt") >>> results = model("https://ultralytics.com/images/bus.jpg") >>> for r in results: ... print(f"Detected {len(r)} objects in image") """ return self.predict(source, stream, **kwargs) @staticmethod def is_triton_model(model: str) -> bool: """ Checks if the given model string is a Triton Server URL. This static method determines whether the provided model string represents a valid Triton Server URL by parsing its components using urllib.parse.urlsplit(). Args: model (str): The model string to be checked. Returns: (bool): True if the model string is a valid Triton Server URL, False otherwise. Examples: >>> Model.is_triton_model("http://localhost:8000/v2/models/yolov8n") True >>> Model.is_triton_model("yolo11n.pt") False """ from urllib.parse import urlsplit url = urlsplit(model) return url.netloc and url.path and url.scheme in {"http", "grpc"} @staticmethod def is_hub_model(model: str) -> bool: """ Check if the provided model is an Ultralytics HUB model. This static method determines whether the given model string represents a valid Ultralytics HUB model identifier. Args: model (str): The model string to check. Returns: (bool): True if the model is a valid Ultralytics HUB model, False otherwise. 

    @staticmethod
    def is_hub_model(model: str) -> bool:
        """
        Check if the provided model is an Ultralytics HUB model.

        This static method determines whether the given model string represents a valid Ultralytics HUB model
        identifier.

        Args:
            model (str): The model string to check.

        Returns:
            (bool): True if the model is a valid Ultralytics HUB model, False otherwise.

        Examples:
            >>> Model.is_hub_model("https://hub.ultralytics.com/models/MODEL")
            True
            >>> Model.is_hub_model("yolo11n.pt")
            False
        """
        return model.startswith(f"{HUB_WEB_ROOT}/models/")

    def info(self, detailed: bool = False, verbose: bool = True):
        """
        Logs or returns model information.

        This method provides an overview or detailed information about the model, depending on the arguments
        passed. It can control the verbosity of the output and return the information as a list.

        Args:
            detailed (bool): If True, shows detailed information about the model layers and parameters.
            verbose (bool): If True, prints the information. If False, returns the information as a list.

        Returns:
            (List[str]): A list of strings containing various types of information about the model, including
                model summary, layer details, and parameter counts. Empty if verbose is True.

        Raises:
            TypeError: If the model is not a PyTorch model.

        Examples:
            >>> model = Model("yolo11n.pt")
            >>> model.info()  # Prints model summary
            >>> info_list = model.info(detailed=True, verbose=False)  # Returns detailed info as a list
        """
        self._check_is_pytorch_model()
        return self.model.info(detailed=detailed, verbose=verbose)

    def fuse(self):
        """
        Fuses Conv2d and BatchNorm2d layers in the model for optimized inference.

        This method iterates through the model's modules and fuses consecutive Conv2d and BatchNorm2d layers
        into a single layer. This fusion can significantly improve inference speed by reducing the number of
        operations and memory accesses required during forward passes.

        The fusion process typically involves folding the BatchNorm2d parameters (mean, variance, weight, and
        bias) into the preceding Conv2d layer's weights and biases. This results in a single Conv2d layer that
        performs both convolution and normalization in one step.

        Raises:
            TypeError: If the model is not a PyTorch nn.Module.

        Examples:
            >>> model = Model("yolo11n.pt")
            >>> model.fuse()
            >>> # Model is now fused and ready for optimized inference
        """
        self._check_is_pytorch_model()
        self.model.fuse()

    def embed(
        self,
        source: Union[str, Path, int, list, tuple, np.ndarray, torch.Tensor] = None,
        stream: bool = False,
        **kwargs,
    ) -> list:
        """
        Generates image embeddings based on the provided source.

        This method is a wrapper around the 'predict()' method, focusing on generating embeddings from an image
        source. It allows customization of the embedding process through various keyword arguments.

        Args:
            source (str | Path | int | List | Tuple | np.ndarray | torch.Tensor): The source of the image for
                generating embeddings. Can be a file path, URL, PIL image, numpy array, etc.
            stream (bool): If True, predictions are streamed.
            **kwargs (Any): Additional keyword arguments for configuring the embedding process.

        Returns:
            (List[torch.Tensor]): A list containing the image embeddings.

        Raises:
            AssertionError: If the model is not a PyTorch model.

        Examples:
            >>> model = YOLO("yolo11n.pt")
            >>> image = "https://ultralytics.com/images/bus.jpg"
            >>> embeddings = model.embed(image)
            >>> print(embeddings[0].shape)
        """
        if not kwargs.get("embed"):
            kwargs["embed"] = [len(self.model.model) - 2]  # embed second-to-last layer if no indices passed
        return self.predict(source, stream, **kwargs)
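
    # Illustrative sketch, not part of the original class definition: embed() returns feature
    # tensors taken from the second-to-last layer by default, so two images can be compared
    # with, e.g., cosine similarity. The image paths are placeholders and the embedding shape
    # depends on the model.
    #
    #     import torch
    #     from ultralytics import YOLO
    #
    #     model = YOLO("yolo11n.pt")
    #     a = model.embed("image_a.jpg")[0]
    #     b = model.embed("image_b.jpg")[0]
    #     similarity = torch.nn.functional.cosine_similarity(a, b, dim=-1)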

    def predict(
        self,
        source: Union[str, Path, int, Image.Image, list, tuple, np.ndarray, torch.Tensor] = None,
        stream: bool = False,
        predictor=None,
        **kwargs,
    ) -> List[Results]:
        """
        Performs predictions on the given image source using the YOLO model.

        This method facilitates the prediction process, allowing various configurations through keyword
        arguments. It supports predictions with custom predictors or the default predictor method. The method
        handles different types of image sources and can operate in a streaming mode.

        Args:
            source (str | Path | int | PIL.Image | np.ndarray | torch.Tensor | List | Tuple): The source of
                the image(s) to make predictions on. Accepts various types including file paths, URLs, PIL
                images, numpy arrays, and torch tensors.
            stream (bool): If True, treats the input source as a continuous stream for predictions.
            predictor (BasePredictor | None): An instance of a custom predictor class for making predictions.
                If None, the method uses a default predictor.
            **kwargs (Any): Additional keyword arguments for configuring the prediction process.

        Returns:
            (List[ultralytics.engine.results.Results]): A list of prediction results, each encapsulated in a
                Results object.

        Examples:
            >>> model = YOLO("yolo11n.pt")
            >>> results = model.predict(source="path/to/image.jpg", conf=0.25)
            >>> for r in results:
            ...     print(r.boxes.data)  # print detection bounding boxes

        Notes:
            - If 'source' is not provided, it defaults to the ASSETS constant with a warning.
            - The method sets up a new predictor if not already present and updates its arguments with each call.
            - For SAM-type models, 'prompts' can be passed as a keyword argument.
        """
        if source is None:
            source = ASSETS
            LOGGER.warning(f"WARNING ⚠️ 'source' is missing. Using 'source={source}'.")

        is_cli = (ARGV[0].endswith("yolo") or ARGV[0].endswith("ultralytics")) and any(
            x in ARGV for x in ("predict", "track", "mode=predict", "mode=track")
        )

        custom = {"conf": 0.25, "batch": 1, "save": is_cli, "mode": "predict"}  # method defaults
        args = {**self.overrides, **custom, **kwargs}  # highest priority args on the right
        prompts = args.pop("prompts", None)  # for SAM-type models

        if not self.predictor:
            self.predictor = (predictor or self._smart_load("predictor"))(overrides=args, _callbacks=self.callbacks)
            self.predictor.setup_model(model=self.model, verbose=is_cli)
        else:  # only update args if predictor is already setup
            self.predictor.args = get_cfg(self.predictor.args, args)
            if "project" in args or "name" in args:
                self.predictor.save_dir = get_save_dir(self.predictor.args)
        if prompts and hasattr(self.predictor, "set_prompts"):  # for SAM-type models
            self.predictor.set_prompts(prompts)
        return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
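
    # Illustrative sketch, not part of the original class definition: predict() merges
    # arguments with increasing priority (self.overrides, then the method defaults above,
    # then per-call kwargs), and with stream=True results are yielded lazily rather than
    # collected in a list. The video path is a placeholder.
    #
    #     from ultralytics import YOLO
    #
    #     model = YOLO("yolo11n.pt")
    #     for r in model.predict("path/to/video.mp4", stream=True, conf=0.5):
    #         print(r.boxes.xyxy)  # per-frame detections; conf=0.5 overrides the 0.25 default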

    def track(
        self,
        source: Union[str, Path, int, list, tuple, np.ndarray, torch.Tensor] = None,
        stream: bool = False,
        persist: bool = False,
        **kwargs,
    ) -> List[Results]:
        """
        Conducts object tracking on the specified input source using the registered trackers.

        This method performs object tracking using the model's predictors and optionally registered trackers.
        It handles various input sources such as file paths or video streams, and supports customization through
        keyword arguments. The method registers trackers if not already present and can persist them between
        calls.

        Args:
            source (Union[str, Path, int, List, Tuple, np.ndarray, torch.Tensor], optional): Input source for
                object tracking. Can be a file path, URL, or video stream.
            stream (bool): If True, treats the input source as a continuous video stream. Defaults to False.
            persist (bool): If True, persists trackers between different calls to this method. Defaults to False.
            **kwargs (Any): Additional keyword arguments for configuring the tracking process.

        Returns:
            (List[ultralytics.engine.results.Results]): A list of tracking results, each a Results object.

        Raises:
            AttributeError: If the predictor does not have registered trackers.

        Examples:
            >>> model = YOLO("yolo11n.pt")
            >>> results = model.track(source="path/to/video.mp4", show=True)
            >>> for r in results:
            ...     print(r.boxes.id)  # print tracking IDs

        Notes:
            - This method sets a default confidence threshold of 0.1 for ByteTrack-based tracking.
            - The tracking mode is explicitly set in the keyword arguments.
            - Batch size is set to 1 for tracking in videos.
        """
        if not hasattr(self.predictor, "trackers"):
            from ultralytics.trackers import register_tracker

            register_tracker(self, persist)
        kwargs["conf"] = kwargs.get("conf") or 0.1  # ByteTrack-based method needs low confidence predictions as input
        kwargs["batch"] = kwargs.get("batch") or 1  # batch-size 1 for tracking in videos
        kwargs["mode"] = "track"
        return self.predict(source=source, stream=stream, **kwargs)

    d