modes/train/ #8075
197 comments · 542 replies
-
How do I print IoU and F-score with the training results?
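In case it's useful, a minimal sketch of pulling precision/recall after validation and computing F1 from them (assuming the standard Ultralytics Python API; attribute names come from `ultralytics.utils.metrics` and may change between versions). Note there is no standalone mean-IoU metric in the default report; mAP@0.5 is precision averaged at an IoU threshold of 0.5.

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical path to your weights
metrics = model.val()  # runs validation and returns a metrics object

p, r = metrics.box.mp, metrics.box.mr  # mean precision / mean recall
f1 = 2 * p * r / (p + r + 1e-16)       # F1 computed from P and R
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f} mAP50={metrics.box.map50:.3f}")
```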
-
How are we able to save sample labels and predictions on the validation set during training? I remember it being easy in YOLOv5, but I have not been able to figure it out with YOLOv8.
-
If I am not mistaken, the logs shown during training also contain box (P, R, mAP@0.5 and mAP@0.5:0.95) and mask (P, R, mAP@0.5 and mAP@0.5:0.95) metrics for the validation set at each epoch. Then why is it that when I run model.val() with best.pt, I get worse metrics? From the training and validation curves it is clear that the model is overfitting on the segmentation task, but that is a separate issue. Can you please help me out with this?
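One thing worth checking, in case it helps: that the standalone validation runs with the same dataset yaml, split, and image size as the per-epoch logs, otherwise the numbers won't be comparable. A sketch (paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # hypothetical path
# Match the settings used during training so the metrics line up
# with the per-epoch validation logs.
metrics = model.val(data="data.yaml", split="val", imgsz=640)
print(metrics.box.map50, metrics.seg.map50)
```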
-
So, imgsz works differently when training than when predicting? For train: if it's an … Is this right?
-
Hi all, I have a segmentation model trained on custom data with a single class, but recent training runs show a trend toward overfitting. I tried adding more data to the training set, which reduced box_loss and cls_loss on val, but dfl_loss is increasing. Are there any suggestions for tuning the model? Thanks a lot.
-
I have a question about training the segmentation model. I have objects in my dataset that occlude each other, such that the top object separates the segmentation mask of the bottom object into two independent parts. As far as I can see, the coordinates of each point are listed sequentially in the label file. If I append the points of the two mask parts one after the other on the same object's line, will that solve the problem?
-
Hello there!
-
Hello, I am working on a project for Android devices. The GPU and CPU of my device are weak. Will it speed things up if I set imgsz to 320 for training? Or what are your recommendations? What happens if imgsz is 640 for training and 320 for prediction? Or what changes if imgsz is 320 for both training and prediction? Sorry for my English. Note: I converted the model to TFLite. Thanks, you are amazing.
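For what it's worth, a sketch of training and exporting at a fixed 320 input (standard Ultralytics API; the dataset yaml is a placeholder). Matching the export imgsz to the size the app will feed at inference is generally the safest choice for TFLite:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=320)  # train at 320

# Export with the same input size the app will use at inference time.
model.export(format="tflite", imgsz=320)
```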
-
I've come to rely on YOLOv8 in my daily work; it's remarkably user-friendly. Thank you to the Ultralytics team for your excellent work on these models! I'm currently tackling a project focused on detecting minor defects on automobile engine parts. As the defects will be small objects in a given frame, could you offer guidance on training arguments or techniques that might improve performance for this type of data? I'm also interested in exploring attention mechanisms to enhance model performance, and I'd appreciate help understanding how to implement this. Special appreciation to the Ultralytics team.
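Not official guidance, but common starting points for small objects are a larger training resolution and moderate scale-related augmentation; a sketch (the dataset yaml and argument values are illustrative, not tuned):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
model.train(
    data="defects.yaml",  # hypothetical dataset yaml
    imgsz=1280,           # higher resolution helps small defects survive downsampling
    mosaic=1.0,           # mosaic exposes the model to more small-object contexts
    scale=0.5,            # random scale jitter
    epochs=300,
)
```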
-
Running the provided example led me to this Stack Overflow question: https://stackoverflow.com/q/75111196/815507. There are solutions on Stack Overflow; I wonder if you could help and update the guide to provide the best resolution?
-
We need to disable blur augmentation. I filed an issue, and Glenn suggested using blur=0, but it is not a valid argument. #8824
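Not an official API, but since the blur comes from the optional Albumentations wrapper (which is only applied when the albumentations package is installed), one workaround is simply uninstalling that package. Another is stripping the transform with a training callback; a rough sketch of that route, relying on Ultralytics internals that may change between versions:

```python
from ultralytics import YOLO

def strip_albumentations(trainer):
    """Remove the optional Albumentations transforms (Blur, MedianBlur, ...)
    from the training dataset after the dataloader has been built."""
    compose = getattr(trainer.train_loader.dataset, "transforms", None)
    if compose is not None:
        compose.transforms = [
            t for t in compose.transforms if t.__class__.__name__ != "Albumentations"
        ]

model = YOLO("yolov8n.pt")
model.add_callback("on_pretrain_routine_end", strip_albumentations)
model.train(data="coco8.yaml", epochs=100)
```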
-
How can I train YOLOv8 with my custom dataset?
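The standard recipe from the docs, with a placeholder path for your dataset yaml:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained weights
results = model.train(data="path/to/data.yaml", epochs=100, imgsz=640)
```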
-
Hey, I was trying out training a custom object detection model using a pretrained YOLOv8 model.
0% 0/250 [00:00<?, ?it/s]
-
Hi! I'm working on a project where I plan to use YOLOv8 as the backbone for object detection, but I need a more hands-on approach during the training phase. How do I train the model manually: looping through epochs, performing forward propagation, calculating loss functions, backpropagating, and updating weights? At the moment model.train() seems to handle all of this automatically in the background. The end goal is knowledge distillation, but for a start I need access to these pieces. I haven't been able to find any examples of YOLOv8 being used in this way; some code and tips would be helpful.
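Not an official workflow, but a rough sketch of a manual loop built on Ultralytics internals (which may change between versions): the underlying torch module is model.model, and calling it with a batch dict computes and returns the loss. The dataloader here is a placeholder; it must yield the Ultralytics batch dict format ('img', 'cls', 'bboxes', 'batch_idx'):

```python
import torch
from ultralytics import YOLO

yolo = YOLO("yolov8n.pt")
net = yolo.model.train()  # the underlying torch.nn.Module (DetectionModel)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

train_loader = ...  # placeholder: a loader yielding Ultralytics-format batch dicts

for epoch in range(100):
    for batch in train_loader:
        batch["img"] = batch["img"].float() / 255  # normalize as the trainer does
        loss, loss_items = net(batch)  # forward on a dict runs forward + criterion
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

From there, a distillation term is a matter of running a teacher's forward pass on the same batch["img"] and adding your distillation loss to loss before the backward call.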
-
I'm trying to understand the concept of training. I would like to extend the default classes with helmet, gloves, etc.
Thanks in advance
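In case it helps: classes can't simply be appended to a pretrained checkpoint. The usual approach is to build a dataset whose data.yaml lists every class you want to detect (the existing ones you care about plus helmet, gloves, etc.), with labels for all of them, and then fine-tune the pretrained weights on that dataset using the standard model.train() call.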
-
What is the best ratio of small-object size to imgsz? Does increasing imgsz also reduce inference speed? And what is the use of mask_ratio?
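On the last part, in case it helps: mask_ratio is the downsampling ratio applied to segmentation masks during training (default 4), i.e. masks are handled at imgsz/4 resolution to save memory, at some cost in mask fidelity.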
-
How do I include suitable context when making labels? For example, if I want to detect "cat on table", should I label the whole cat and table, or just the cat and half of the table?
-
Hi, I noticed that a results.csv file is created after training a model, in a run directory along with its weights, etc. However, it's not clear whether the column metrics/accuracy_top1 refers to the training or validation dataset. Additionally, how can I export more metrics such as recall, precision, etc.? Thank you!
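In case it helps: the per-epoch metrics columns in results.csv are computed on the validation split. A small pandas sketch for pulling them out (the path is hypothetical; older versions pad the CSV column names with spaces, hence the strip):

```python
import pandas as pd

df = pd.read_csv("runs/classify/train/results.csv")  # hypothetical run directory
df.columns = df.columns.str.strip()  # some versions pad column names with spaces
print(df[["epoch", "metrics/accuracy_top1"]].tail())
```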
-
How do I turn data augmentation off? It looks like YOLO11 uses augmentation by default, and I want to compare its metrics against training on raw data.
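There's no single master switch for train-time augmentation as far as I know, but zeroing the individual hyperparameters gets you close; a sketch (these are real train arguments, with values chosen to disable each transform):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(
    data="data.yaml", epochs=100,
    hsv_h=0.0, hsv_s=0.0, hsv_v=0.0,        # color jitter off
    degrees=0.0, translate=0.0, scale=0.0,  # geometric jitter off
    shear=0.0, perspective=0.0,
    flipud=0.0, fliplr=0.0,                 # flips off
    mosaic=0.0, mixup=0.0, copy_paste=0.0,  # multi-image augmentations off
    erasing=0.0,
)
```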
-
Hello! I trained my dataset with YOLOv11 to detect objects. The command I ran was: "yolo detect train model=yolo11n.yaml data=data.yaml epochs=100 batch=32 imgsz=640". Now I have a problem: in the output images from training, such as val_batch0_pred, the prediction boxes show the class name and confidence, but the confidence is printed with only one decimal place. How can I make it show more decimal places?
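Not a supported option as far as I know, but the one-decimal label appears to come from the plotting utilities (a format string along the lines of f"{conf:.1f}" in plot_images in ultralytics/utils/plotting.py); editing that format string in a local install is one way to get more decimal places, though it's a hack rather than a configuration option.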
-
Do the images' sizes matter? Should I make sure all the images in my dataset are 640×640, or does it really not matter?
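In case it helps: the data loader resizes and letterboxes images to imgsz on the fly, so mixed original sizes are fine and pre-resizing everything to 640×640 shouldn't be necessary.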
-
I used your pretrained model yolo11x.pt for detection, but there are double detections in the output. I have attached a sample picture. How can I fix this? Is there a solution?
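Not a guaranteed fix, but the usual knobs for duplicate boxes are the NMS-related predict arguments (values below are illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolo11x.pt")
results = model.predict(
    "sample.jpg",
    conf=0.5,           # raise to drop low-confidence duplicates
    iou=0.5,            # lower to make NMS merge overlapping boxes more aggressively
    agnostic_nms=True,  # suppress overlaps even across different classes
)
```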
-
I have a doubt. When I used the yolo11x.pt model and tested it with 100 images for detection and classification, the detection accuracy was 99% and the classification accuracy was 82%. But when I pretrained the model myself starting from yolo11x.pt and the COCO 2017 dataset, and then tested with my own model, I got 97% detection accuracy and 72% classification accuracy. Why is there a drop in accuracy? And if I want the same output as in the first case, how should I train?
-
To train a model, is it better to use a pretrained model downloaded from the official website, or is it okay to just train from a created .json file?
-
Hi there, I've been reading through the documentation and the posts and am still a little unclear on the imgsz parameter for both training and inference. I'm specifically interested in how larger images (e.g. 1280x1280) are handled by the v8 and v11 object detection models. My assumption was that in both cases (training and inference) images were simply resized to 640 if they were larger. Is this actually the case? If I train on 1280 and perform inference on 1280, is the full resolution used in some or all of the inference process? Thanks!
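For what it's worth, my understanding (worth verifying on your version) is that nothing is forced to 640 unless imgsz says so: training letterboxes to whatever imgsz you pass, while predict defaults to imgsz=640 unless you override it. A sketch:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="data.yaml", imgsz=1280)    # images letterboxed to 1280 for training
model.predict("image.jpg", imgsz=1280)       # pass imgsz again; predict defaults to 640
```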
-
When training the YOLOv11 OBB model, is data augmentation applied by default?
-
May I ask which metric is compared to select the best model (best.pt) that gets saved?
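For what it's worth, the trainer selects best.pt by a fitness score that, by default, weights mAP50-95 at 0.9 and mAP50 at 0.1 (see the fitness definition in ultralytics.utils.metrics), so mAP50-95 is effectively the deciding metric.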
-
Why is everything different when we train the model in Colab versus on our own device? When I use Colab it applies Albumentations automatically, but when I use my own PC even the training process goes wrong. Am I missing something?
-
Hello, I am currently working on an object detection task with YOLOv11. I have one question about augmentation, specifically copy_paste. The documentation mentions that we need segmentation masks to use the implemented copy-paste method; however, my current YOLO annotation files for object detection only include bbox coordinates along with class labels. So my question is essentially: how should my label files be structured to use copy-paste augmentation? Should I use the YOLO segmentation label format rather than the bbox format? Also, great thanks to the Ultralytics community for open-sourcing the finest models for everyone.
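In case it's useful, the YOLO segmentation label format looks like this: one object per line, class id followed by normalized polygon x/y pairs (the numbers below are made up):

```
0 0.681 0.485 0.670 0.487 0.676 0.497 0.690 0.501
1 0.120 0.330 0.140 0.330 0.140 0.360 0.120 0.360
```

So yes: to use the built-in copy_paste, the labels need polygons rather than plain bboxes; one route is converting bbox-only annotations to polygons with an auto-annotation tool (e.g. a SAM-based one) before training.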
-
I'm facing a double detection issue when using the YOLO pretrained model, particularly with motorbikes and people. To address this, I adjusted the confidence threshold and IoU threshold values. This reduced the double detections for motorbikes, but not entirely: what used to be 10 errors is now 2. However, the adjustment has hurt detection accuracy for other classes. Is there a way to set thresholds specifically for motorbikes to reduce double detection without impacting the other classes? Or is there another method to solve this issue? Do you know of any related solutions?
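Per-class thresholds aren't a built-in predict argument as far as I know, but you can post-filter the results yourself; a sketch, assuming COCO class indexing where 3 = motorcycle:

```python
from ultralytics import YOLO

model = YOLO("yolo11x.pt")
results = model.predict("image.jpg", conf=0.25, iou=0.5)

PER_CLASS_CONF = {3: 0.6}  # assumption: COCO indexing, class 3 = motorcycle
for r in results:
    keep = [
        i for i, (c, conf) in enumerate(zip(r.boxes.cls.tolist(), r.boxes.conf.tolist()))
        if conf >= PER_CLASS_CONF.get(int(c), 0.25)  # stricter threshold for motorcycles only
    ]
    boxes = r.boxes[keep]  # filtered boxes; other classes keep the global threshold
```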
-
modes/train/
Step-by-step guide to train YOLOv8 models with Ultralytics YOLO, including examples of single-GPU and multi-GPU training
https://docs.ultralytics.com/modes/train/