zh/modes/track/ #8563
Replies: 21 comments 51 replies
-
Can I use YOLOv8 to track the same object across different surveillance screens, and if so, how?
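The Ultralytics trackers assign IDs independently per stream, so linking the same object across cameras requires a re-identification step on top of per-camera tracking, typically by comparing appearance embeddings from a separate re-ID model (YOLO itself does not produce these). A minimal sketch, assuming such embeddings are available per track ID, greedily matching by cosine similarity:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def match_across_cameras(feats_a, feats_b, threshold=0.8):
    """Greedily pair track IDs from camera A with track IDs from camera B
    whose appearance embeddings are most similar.

    feats_a / feats_b: dict mapping per-camera track ID -> feature vector
    (hypothetical output of a re-ID model, not something YOLO provides).
    Returns a dict {id_in_camera_a: id_in_camera_b}.
    """
    pairs = {}
    used_b = set()
    for ida, fa in feats_a.items():
        best_id, best_sim = None, threshold
        for idb, fb in feats_b.items():
            if idb in used_b:
                continue
            sim = cosine(fa, fb)
            if sim > best_sim:
                best_id, best_sim = idb, sim
        if best_id is not None:
            pairs[ida] = best_id
            used_b.add(best_id)
    return pairs
```

Greedy matching is the simplest choice; a Hungarian assignment over the full similarity matrix would be more robust when several objects look alike.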
-
How can I use YOLO to show the center point of each detection in the window?
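In Ultralytics results the `results[0].boxes.xywh` tensor already stores box centers as its first two columns; equivalently, the center can be computed from the corner (xyxy) coordinates. A minimal sketch:

```python
def box_center(x1, y1, x2, y2):
    """Center point of an axis-aligned box given corner coordinates
    (the xyxy format used by YOLO detections)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


# With Ultralytics results one would draw the centers roughly like this
# (assumes `results` from model.track() and a `frame` read via cv2):
#
# for x1, y1, x2, y2 in results[0].boxes.xyxy.tolist():
#     cx, cy = box_center(x1, y1, x2, y2)
#     cv2.circle(frame, (int(cx), int(cy)), 4, (0, 0, 255), -1)
```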
-
After accessing the tracking ID of each object when using Ultralytics YOLO for object tracking: if I use several images I have collected myself, say three, how can I match the features detected across those three images? Some features may appear repeatedly.
-
How can I determine my position in this scene, based on three images collected at a fixed location, after accessing the tracking ID of each object when using Ultralytics YOLO for object tracking?
-
I want to compute performance metrics for trackers, such as MOTA and IDF1. How should I do that?
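MOTA is defined from the per-frame false negatives, false positives, and identity switches relative to the total number of ground-truth objects; in practice, libraries such as `motmetrics` or TrackEval compute MOTA, IDF1, and related metrics from frame-by-frame matches between tracker output and ground-truth annotations. A minimal sketch of the MOTA formula itself:

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy:

        MOTA = 1 - (FN + FP + IDSW) / GT

    where the counts are summed over all frames and GT is the total number
    of ground-truth objects. The value can be negative when the tracker
    makes more errors than there are ground-truth objects.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / float(num_gt)
```

IDF1, in contrast, is computed from identity-level true/false positives after a global track-to-ground-truth assignment, so it is best obtained from one of the libraries above rather than hand-rolled.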
-
Hello! I am glad to be using YOLOv8, but I have a question: which tracking method does the code below use, ByteTrack or BoT-SORT?

```python
from collections import defaultdict

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
```
-
Can YOLOv8 do object tracking on images and distinguish between different IDs?
-
```python
from collections import defaultdict

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
# NOTE: this line is the likely source of the error -- YOLO() does not take
# a `tracker` argument; the tracker is selected in the track() call instead,
# e.g. model.track(source, tracker="bytetrack.yaml")
model = YOLO("yolov8n.pt", tracker="bytetrack.yaml")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    ...

# Release the video capture object and close the display window
cap.release()
```

This code raises an error.
-
How can I deal with object ID discontinuity when using YOLOv8 for multi-object tracking? For example, ID 6 is missing from the range of IDs 1 to 10.
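Gaps are expected: trackers retire the IDs of lost tracks and never reuse them, so the sequence of IDs in a video is not contiguous. If contiguous display numbers are wanted anyway, one option is to remap the raw IDs to 1..N in order of first appearance. A minimal sketch:

```python
def contiguous_ids(raw_ids):
    """Remap tracker IDs (possibly with gaps, e.g. 1, 2, 4, 7) to
    contiguous display numbers 1..N in order of first appearance.
    The raw IDs should still be kept for any cross-frame logic,
    since the remapped numbers are purely cosmetic.
    """
    mapping = {}
    out = []
    for rid in raw_ids:
        if rid not in mapping:
            mapping[rid] = len(mapping) + 1
        out.append(mapping[rid])
    return out
```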
-
Hello, I have a question. If I need to modify the network structure of ByteTrack, do I only need to modify some of the code in YOLOv8, or do I need to copy the ByteTrack code from GitHub, combine YOLOv8 with ByteTrack, and then modify the ByteTrack code? Please help me solve this question.
-
Hello, I have a question. My detection model includes multiple categories. How can I specify that only a certain category should be tracked?
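To my knowledge the Ultralytics `predict`/`track` calls accept a `classes` argument that restricts detections to the given class indices up front; alternatively, the results can be filtered after the fact. A minimal post-hoc sketch over plain tuples:

```python
def keep_class(detections, wanted_cls):
    """Post-hoc filter: keep only detections of one class.

    `detections` is a list of (class_id, track_id, box) tuples -- the kind
    of data one can pull out of results[0].boxes after a track() call.
    """
    return [d for d in detections if d[0] == wanted_cls]


# With the Ultralytics API the same restriction can usually be requested
# up front, e.g. model.track(source, classes=[0]) to track only class 0.
```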
-
How do I use the ByteTrack tracking algorithm?
-
I am thinking of the MaxID; what do I need to do to get it?
-
I want to get the maximum ID value displayed on the detection boxes during tracking in a video. Is that value the total count of targets? I just want to get that value!
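Since ByteTrack and BoT-SORT assign incrementing IDs, the maximum ID seen approximates how many tracks were ever created over the video; this can exceed the number of physical objects, because an object that is lost and re-acquired gets a fresh ID. Counting distinct IDs gives the same kind of overestimate. A minimal sketch that collects both values from all IDs seen:

```python
def track_stats(all_ids):
    """Given every track ID observed over a video, return
    (max_id, unique_tracks).

    With incrementing tracker IDs, max_id approximates the number of
    tracks ever created; ID switches can make both numbers larger than
    the count of physical objects.
    """
    ids = set(all_ids)
    return (max(ids), len(ids))
```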
-
[ WARN:[email protected]] global cap_gstreamer.cpp:2617 cv::CvVideoWriter_GStreamer::open OpenCV | GStreamer warning: cannot link elements
-
I want to know how to implement the ByteTrack code for YOLOv11 in C++.
-
I want to know how to implement the ByteTrack code for YOLOv11 in C++, and also how to implement it in Java.
-
How can the ByteTrack code of YOLOv11 be executed in Android Studio?
-
```python
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    # Run YOLO11 tracking on the frame, persisting tracks between frames
    results = model.track(frame, persist=True, conf=0.75, show_conf=False, line_width=1, device=device)

    # Visualize the results on the frame
    annotated_frame = results[0].plot()

    # Display the annotated frame
    cv2.imshow("YOLO11 Tracking", annotated_frame)
```
-
When executing
-
zh/modes/track/
Learn how to use Ultralytics YOLO for object tracking in video streams. A guide to using the different trackers and customizing tracker configurations.
https://docs.ultralytics.com/zh/modes/track/