tasks/obb/ #7974
Replies: 50 comments 143 replies
-
Hi, when I am using the yolov8_obb model, the results I get are always empty even when the model makes a prediction. Has anyone faced this problem, and if so, what was the solution? Thank you. 🙂
-
Hi, how can I access the results from this new model so I can extract the bounding box information?
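Not an official answer, but for OBB models the detections live under `results[0].obb` (the thread's drawing example below uses `r.obb.xyxyxyxy` the same way). A minimal sketch of unpacking the prediction arrays into plain Python structures — the helper name and the commented `model("image.jpg")` call are my own assumptions:

```python
import numpy as np

def unpack_obb(corners, classes, confs):
    """Convert OBB prediction arrays into a list of plain dicts.

    corners: (N, 4, 2) array of polygon corner points (pixels)
    classes: (N,) array of class indices
    confs:   (N,) array of confidence scores
    """
    detections = []
    for poly, cls, conf in zip(corners, classes, confs):
        detections.append({
            "class_id": int(cls),
            "confidence": float(conf),
            "polygon": poly.reshape(4, 2).tolist(),
        })
    return detections

# With Ultralytics this would typically be driven like this (assumption):
# results = model("image.jpg")
# obb = results[0].obb
# dets = unpack_obb(obb.xyxyxyxy.cpu().numpy(),
#                   obb.cls.cpu().numpy(),
#                   obb.conf.cpu().numpy())
```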
-
Hi, could the heatmap functionality be used with YOLOv8-obb?
-
Hi, I have some questions about using DOTAv1.0 for the OBB task.
-
How do I do model ensembling with yolov8-obb models?
-
Hi, the YOLOv8n-obb test mAP50 result on DOTAv1.0 is 78.0%. I split the train-set and val-set images into 1024×1024 tiles, and I wonder whether the 78% is the val-set result given by model.val() or the test-set result given by the DOTA server. And if the 78% figure is the test result from the DOTA server, how do I merge the results for submission to the server?
-
How can I test on the DOTAv1.0 dataset using YOLOv8-obb? Do I submit online for results? Can somebody help me? It's urgent.
-
Hi, I saw yolov8 obb with tracking from yolov8-v2.
-
Subject: Cropping Images from YOLOv8 Detections (Python)
Hi everyone, I'm working on a Python project that uses a YOLOv8 model for object detection. I'd like to achieve the following: perform object detection on input images using my trained YOLOv8 model, then crop each detection out of the image. Could you share an example code snippet demonstrating how to achieve this cropping using the bounding box data?
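Not a maintainer answer, but the usual recipe is to take the axis-aligned xyxy boxes from the result and slice the image array. A minimal sketch — the commented Ultralytics calls and file names are assumptions; the cropping itself is plain NumPy slicing:

```python
import numpy as np

def crop_boxes(image, xyxy):
    """Crop each (x1, y1, x2, y2) box out of an HxWxC image array."""
    crops = []
    h, w = image.shape[:2]
    for x1, y1, x2, y2 in xyxy:
        # clamp to the image bounds and round to integer pixel indices
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        crops.append(image[y1:y2, x1:x2].copy())
    return crops

# Typical use with Ultralytics (assumption):
# results = model("image.jpg")
# boxes = results[0].boxes.xyxy.cpu().numpy()
# for i, crop in enumerate(crop_boxes(cv2.imread("image.jpg"), boxes)):
#     cv2.imwrite(f"crop_{i}.jpg", crop)
```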
-
Hello, I'm currently working with YOLO OBB (You Only Look Once, Oriented Bounding Boxes), and I've annotated my data in the YOLO OBB format. My concern now is how to apply augmentations for oriented bounding boxes, particularly transformations like rotation, horizontal flipping, and vertical flipping; each augmented image needs its annotation file adjusted accordingly. I'm using the albumentations library in Python for data augmentation. Here's the transformation pipeline I've defined:

import albumentations as A
from albumentations.pytorch import ToTensorV2

transform_pipeline = A.Compose([
    A.Rotate(limit=(-90, 90)),
    A.VerticalFlip(p=1),
    A.HorizontalFlip(p=0.5),
    ToTensorV2(),
], bbox_params=A.BboxParams(
    format="yolo",
    label_fields=["class_labels"],
))

However, the YOLO OBB format is different from the default YOLO format. How should I process the annotations in this case to ensure they match the augmented images? Any insights or suggestions on how to handle this scenario would be greatly appreciated! Thank you.
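Since bbox_params with format="yolo" only understands axis-aligned "x y w h" boxes, it will mangle OBB labels. Two common workarounds: pass the four corners through albumentations as keypoints (via A.KeypointParams), or apply the geometric transform to the corner coordinates yourself and keep image and labels in sync. A minimal NumPy sketch of the second approach for flips on normalized corners — this is my own helper, not an Ultralytics or albumentations API:

```python
import numpy as np

def flip_obb(corners, horizontal=False, vertical=False):
    """Flip normalized OBB corners, shape (N, 4, 2), values in [0, 1].

    A horizontal flip maps x -> 1 - x, a vertical flip maps y -> 1 - y;
    the polygon stays valid because all four corners move together,
    mirroring what the flip does to the image itself.
    """
    out = np.asarray(corners, dtype=float).copy()
    if horizontal:
        out[..., 0] = 1.0 - out[..., 0]
    if vertical:
        out[..., 1] = 1.0 - out[..., 1]
    return out
```

Rotation by an arbitrary angle would similarly be a 2x2 rotation matrix applied to the (denormalized) corner points about the image center.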
-
Hi, when I am training yolov8_obb models on multiple GPUs I hit an error at File "/tmp/pycharm_project_538/ultralytics-main/ultralytics/utils/loss.py", line 626, in __call__. It seems the error is related to the calculation of the OBB loss: it expects two elements but got three. The error occurs when training on multiple GPUs, but everything works fine on a single GPU. How can I solve this problem? Thank you.
-
I am curious: does it use the same approach as yolov5-obb, which uses CSL for its OBB function?
-
For yolov8n-obb, results[0].boxes is returning None. Previously it was working. Did a recent Ultralytics update remove it?
-
Hey, I'm getting this during inference:
0: 704x1024 191.9ms
AttributeError: 'NoneType' object has no attribute '_jit_internal'
This is my code:
cap = cv2.VideoCapture("video.mp4")
while True:
-
@glenn-jocher Hi! The data augmentations in v8-obb are very effective; however, the sample distribution of my data is unbalanced. I found that YOLOv7 and v9 include labels_to_class_weights and labels_to_image_weights in general.py, and these two strategies are effective for dealing with sample imbalance. However, I didn't find these two strategies in v8 or v10. I suspect they might improve v8's performance when the samples are unbalanced, and I want to integrate them into the project, but I don't know which files to modify. Can you give me a little advice? Thank you very much!
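Not an official answer, but the YOLOv5-style labels_to_class_weights boils down to inverse class-frequency weighting of the classification loss. A small standalone sketch of that idea — my own reimplementation for illustration, not the Ultralytics source:

```python
import numpy as np

def class_weights_from_labels(class_ids, num_classes):
    """Inverse-frequency class weights, normalized to sum to num_classes.

    Rare classes receive weights > 1, common classes weights < 1, so the
    classification loss pays more attention to under-represented classes.
    """
    counts = np.bincount(np.asarray(class_ids), minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0   # avoid division by zero for absent classes
    weights = 1.0 / counts
    return weights / weights.sum() * num_classes
```

Image-level weights (labels_to_image_weights) follow the same idea one level up: each image is scored by the class weights of the labels it contains, and the sampler draws images proportionally to that score.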
-
Hello, thank you very much for your work. I ran into something confusing when training my dataset with the yolov8-obb model. My data annotation follows the format described in the documentation, but when I debugged the code I found that each ground-truth box in the labels' bboxes has only four values, and I am not sure where these four values come from. The segments entry is a 4×2 array whose values are the ones from my label file. I am performing a detection task, not segmentation, and my label files contain no segmentation labels. In the end I was able to train normally and achieve good results, but I hope you can answer my questions.
-
Hello, thank you for your work. I want to use yolov8-obb on my dataset, which is about industrial materials. The materials can be divided into several categories, but each part has a head and a tail that are usually difficult to distinguish. In other words, I need not only to detect the angle of each part but also to know where its head is, i.e. the angle relative to the head. How should I approach this, in terms of both dataset preparation and the model? Can someone help me? Thank you very much.
-
Can object detection be used?
-
Dear concern, is there any way I can keep the model architecture and parameters constant before and after training, so that only the weights and biases are trained, just like conventional ML models? For example, consider the following code:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-obb.yaml")  # build a new model from YAML

# Train the model
results = model.train(data="dota8.yaml", epochs=100, imgsz=640)

Now, is there any parameter here which can help me keep the architecture and model parameters constant across training and validation? I know a few ways, but I still want to hear from experts how to initialize a YOLO("yolov8n-obb.yaml") with weights that might not have the same architecture, nodes, or layers.
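Not an authoritative answer, but the general mechanism behind "initialize a new architecture from a checkpoint that doesn't fully match" is to copy only the tensors whose name and shape agree, leaving mismatched layers freshly initialized. A small PyTorch sketch of that idea on toy modules — my own helper, not an Ultralytics API:

```python
import torch
import torch.nn as nn

def transfer_matching_weights(src, dst):
    """Copy parameters from src into dst wherever name and shape agree.

    Mismatched layers (different width, extra heads, etc.) simply keep
    their fresh initialization. Returns how many tensors were copied.
    """
    src_sd = src.state_dict()
    dst_sd = dst.state_dict()
    matched = {k: v for k, v in src_sd.items()
               if k in dst_sd and dst_sd[k].shape == v.shape}
    dst_sd.update(matched)
    dst.load_state_dict(dst_sd)
    return len(matched)

# Toy demo: two models that share a first layer but differ in the head
src = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
dst = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 3))
n = transfer_matching_weights(src, dst)   # copies the first layer only
```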
-
If you want to draw the prediction results on an image and save it, you can refer to this simple code:

# your predict code here, producing `results`, then:
import cv2
image = cv2.imread(test_img_path)
for r in results:
    xyxyxyxy = r.obb.xyxyxyxy.cpu().numpy()
    for box in xyxyxyxy:
        points = box.reshape(4, 2).astype(int)
        cv2.polylines(image, [points], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("detected_image.jpg", image)
-
I want to understand the range of the output rotation. According to the documentation for the function xyxyxyxy2xywhr, "Rotation values are expected in degrees from 0 to 90," and cv2.minAreaRect, which prepares the data, also produces that range of rotation values. Therefore I would expect the network to output angles only between 0 and 90 degrees (converted to radians). However, I'm observing output values greater than 270 degrees. How does this discrepancy make sense?
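One thing worth noting (my own reasoning, not an official explanation): the head regresses the angle as a continuous value, so raw outputs are not guaranteed to stay in [0, 90°), and a rotated rectangle is ambiguous anyway — it is unchanged by adding 180° or by swapping width/height while shifting 90°. Any predicted angle can therefore be folded back into [0, 90°). A small sketch of that canonicalization — my own normalization convention, not the Ultralytics internal one:

```python
import math

def canonicalize(w, h, theta):
    """Return an equivalent (w, h, theta) with theta in [0, pi/2).

    A rectangle is unchanged by theta -> theta + pi, and by swapping
    w and h while subtracting pi/2, so any angle folds into [0, pi/2).
    """
    theta = theta % math.pi        # rectangles have pi-periodic orientation
    if theta >= math.pi / 2:
        w, h = h, w                # swap sides to subtract 90 degrees
        theta -= math.pi / 2
    return w, h, theta
```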
-
Hello, I have a problem. I want to convert my labels to OBB format for YOLOv8. While processing in Google Colab, even though my dataset is fine, I want to convert my labels from YOLO format (for example → 0 0.738636 0.124589 0.008264 0.048520) to OBB format. I'm using the following code:

from ultralytics.data.converter import convert_dota_to_yolo_obb
dota_labels_directory = "/content/drive/MyDrive/data"
convert_dota_to_yolo_obb(dota_labels_directory)

After that, the names of my label files for train and val come out correctly, but they don't contain any coordinates in OBB format: the txt files are completely empty inside. What could be the reason for this, and what is the solution? Thank you. (What I'm trying to do is convert the labels to OBB format for YOLOv8 and train with the yolov8n-obb.pt weights to see whether it has a positive impact on my project.)
-
from ultralytics.data.converter import convert_dota_to_yolo_obb
convert_dota_to_yolo_obb("path/to/DOTA")

This code only converts a DOTA-format dataset to YOLO-OBB format. What I am looking for is to convert my YOLO-format labels to YOLO-OBB format. Your documentation shows converting labels from YOLO format to YOLO OBB format, but this code does not help at all. For example, can't I convert my YOLO-format label from class_index x1 y1 x2 y2 to → class_index x1 y1 x2 y2 x3 y3 x4 y4? Isn't the process we call oriented bounding box conversion exactly converting from the format I wrote above to OBB format?
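Not a maintainer answer, but this may explain the empty files: convert_dota_to_yolo_obb expects DOTA-style labels as input, so feeding it YOLO-format files produces nothing. Converting an axis-aligned YOLO label "cls cx cy w h" (normalized) to the 8-point OBB form is just writing out the four corners; the result is an axis-aligned box in OBB notation (angle 0), since the original label carries no angle information. A minimal sketch — my own helper, not an Ultralytics function:

```python
def yolo_to_obb_line(line):
    """'cls cx cy w h' -> 'cls x1 y1 x2 y2 x3 y3 x4 y4' (all normalized).

    Corners are emitted clockwise from the top-left; the box stays
    axis-aligned because a plain YOLO label has no rotation.
    """
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = map(float, (cx, cy, w, h))
    x1, y1 = cx - w / 2, cy - h / 2   # top-left
    x2, y2 = cx + w / 2, cy - h / 2   # top-right
    x3, y3 = cx + w / 2, cy + h / 2   # bottom-right
    x4, y4 = cx - w / 2, cy + h / 2   # bottom-left
    coords = (x1, y1, x2, y2, x3, y3, x4, y4)
    return cls + " " + " ".join(f"{c:.6f}" for c in coords)
```

Applied line by line over each .txt label file, this gives files the OBB loader can read, though the boxes will of course carry no orientation beyond 0 degrees.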
-
Hi, I have trained a YOLO OBB model. Testing it on images works, but when I convert the model to other formats (for example, TFLite) I am unable to run inference on an image. It always gives the same error:
But I have only two classes, and I don't know what is going wrong.
-
Why do I need to download yolo11n.pt when I am training an OBB model?
-
For YOLO-OBB, is there any heatmap drawing method? Can you provide reference code to draw a heatmap of a feature layer?
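Not an official tool, but a common way to visualize a feature layer is a PyTorch forward hook: capture the layer's activation, average over channels, normalize to 0–255, and colour it. A toy sketch on a small conv net standing in for the backbone — for a real YOLO model you would register the hook on the submodule you care about (e.g. a layer inside model.model.model; that indexing is an assumption), and the cv2 overlay line is likewise an assumption:

```python
import numpy as np
import torch
import torch.nn as nn

def channel_mean_heatmap(activation):
    """(1, C, H, W) activation -> uint8 HxW heatmap in [0, 255]."""
    fmap = activation[0].mean(dim=0)        # average over the channels
    fmap = fmap - fmap.min()
    fmap = fmap / (fmap.max() + 1e-8)       # normalize to [0, 1]
    return (fmap * 255).byte().cpu().numpy()

# Toy model standing in for a detection backbone
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
captured = {}
net[0].register_forward_hook(lambda m, i, o: captured.update(feat=o))

with torch.no_grad():
    net(torch.randn(1, 3, 32, 32))
heat = channel_mean_heatmap(captured["feat"])

# Overlay with OpenCV if desired (assumption):
# color = cv2.applyColorMap(cv2.resize(heat, (W, H)), cv2.COLORMAP_JET)
# blended = cv2.addWeighted(image, 0.6, color, 0.4, 0)
```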
-
Hi, I am currently a student learning the YOLOv11 algorithm. From what I can see, the difference between the standard detect YOLOv11 and YOLOv11-obb is located in the head. What changes were made to the head that allow YOLOv11-obb to detect object angles? Does it use a different loss function? Thank you.
-
Yes, please help me do that. I am a beginner and it would be great if you could help me explain it.

On Wed, 4 Dec 2024, 11:47 AM, Glenn Jocher ***@***.***> wrote:
… The box_loss, cls_loss, and dfl_loss correspond to bounding box
regression loss, classification loss, and distribution focal loss,
respectively. To examine the exact loss function details, you may review
the source code in the YOLO11 repository under training modules or refer to
the loss function implementation in the model head. Let me know if you need
guidance on locating this in the codebase!
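To make the dfl_loss part of the quoted reply concrete: distribution focal loss treats a box offset as a discrete distribution over integer bins and trains the two bins bracketing the continuous target, weighted by proximity. A self-contained sketch of that formula — my own reading of the DFL idea, not the Ultralytics implementation:

```python
import torch
import torch.nn.functional as F

def dfl_loss(pred_logits, target):
    """Distribution focal loss.

    pred_logits: (N, bins) raw scores over integer offsets 0..bins-1
    target:      (N,) continuous targets inside [0, bins-1)
    Cross-entropy is applied to the two integer bins around each target,
    weighted by how close the target sits to each bin.
    """
    tl = target.floor().long()            # left integer bin
    tr = tl + 1                           # right integer bin
    wl = tr.float() - target              # weight for the left bin
    wr = target - tl.float()              # weight for the right bin
    return (F.cross_entropy(pred_logits, tl, reduction="none") * wl
            + F.cross_entropy(pred_logits, tr, reduction="none") * wr).mean()
```

With uniform logits every bin has probability 1/bins, so the loss reduces to log(bins) regardless of the target, which is a handy sanity check.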
-
Hello dear developers. I don't get correct prediction results for OBB models, unlike BOX. I get predictions from the YOLO model and pass them through NMS. For the standard yolov8x.pt box model everything works fine; however, for yolov8x-obb.pt all the predictions are incorrect. Sample predictions below.
-
tasks/obb/
Learn how to use oriented object detection models with Ultralytics YOLO. Instructions on training, validation, image prediction, and model export.
https://docs.ultralytics.com/tasks/obb/