Ultralytics YOLO11 Launch 🚀 #16603
Replies: 3 comments · 8 replies
-
👋 Hello @glenn-jocher, thank you for sharing the exciting news about Ultralytics YOLO11 🚀! We're thrilled to see the advancements in real-time object detection, segmentation, and more. If you have any 🐛 Bug Reports regarding YOLO11, please provide a minimum reproducible example to help us address it efficiently. For custom training ❓ Questions or if you're starting fresh, the Docs are a great resource. You'll find detailed guidance on Tasks like detection, segmentation, pose estimation, and more to help you get the best results. Join our community for real-time discussions on Discord 🎧, deeper conversations on Discourse, or check out our Subreddit.

Upgrade: Ensure you have the latest version with `pip install -U ultralytics`.

Environments: YOLO11 can be explored in diverse environments. Try it in Colab or on Docker for seamless integration.

Status: Check our Ultralytics CI for the latest operational updates.

This is an automated response, and an Ultralytics engineer will assist further soon. Enjoy exploring YOLO11! 🎉
-
Cannot export the YOLO11 model to TFLite; error message below:

Ultralytics 8.3.2 🚀 Python-3.8.0 torch-2.0.1+cu117 CPU (12th Gen Intel Core(TM) i7-12700H)
PyTorch: starting from 'tk11s_150eps_lrf0.3_mxp_640x384.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (18.3 MB)
TensorFlow SavedModel: starting export with tensorflow 2.13.0...
ONNX: starting export with onnx 1.16.2 opset 17...
ONNX: slimming with onnxslim 0.1.34...
Dimensions must be equal, but are 64 and 32 for '{{node tf.math.add_5/Add}} = AddV2[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,80,160,64], [1,80,160,32]. Call arguments received by layer "tf.math.add_5" (type TFOpLambda):
ERROR: input_onnx_file_path: tk11s_150eps_lrf0.3_mxp_640x384.onnx
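For context, the log shows the TFLite pipeline failing while converting the intermediate ONNX model to a TensorFlow SavedModel. A minimal sketch of an export invocation that exercises this same path is shown below; the checkpoint name is taken from the log, but the `imgsz` argument is an assumption (the log indicates the failing run actually used a square 640x640 input).

```python
from ultralytics import YOLO

# Checkpoint name taken from the error log above
model = YOLO("tk11s_150eps_lrf0.3_mxp_640x384.pt")

# TFLite export internally goes PyTorch -> ONNX -> TensorFlow SavedModel -> TFLite.
# imgsz=(640, 384) is an assumption based on the checkpoint name; the log shows the
# failing run used the default square 640x640 input.
model.export(format="tflite", imgsz=(640, 384))
```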
-
Hello! I am trying to load a tri-modal dataset with the YOLOv8 framework, consisting of visible and infrared image datasets plus a text dataset. I don't know how to load the data; can you give me some advice?
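The Ultralytics framework does not ship a tri-modal (visible + infrared + text) loader out of the box, so the right approach depends on how the model is meant to consume the extra modalities. Purely as a generic PyTorch sketch, and not an Ultralytics API, the class below pairs the three sources by a shared file stem; all directory layouts, file extensions, and the `caption` field are hypothetical.

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class TriModalDataset(Dataset):
    """Pairs visible, infrared, and text samples sharing a file stem (e.g. 0001.jpg / 0001.png / 0001.json)."""

    def __init__(self, rgb_dir, ir_dir, text_dir, img_size=640):
        self.rgb_dir, self.ir_dir, self.text_dir = Path(rgb_dir), Path(ir_dir), Path(text_dir)
        self.stems = sorted(p.stem for p in self.rgb_dir.glob("*.jpg"))
        self.tf = transforms.Compose([transforms.Resize((img_size, img_size)), transforms.ToTensor()])

    def __len__(self):
        return len(self.stems)

    def __getitem__(self, idx):
        stem = self.stems[idx]
        rgb = self.tf(Image.open(self.rgb_dir / f"{stem}.jpg").convert("RGB"))      # visible image
        ir = self.tf(Image.open(self.ir_dir / f"{stem}.png").convert("L"))          # infrared image
        text = json.loads((self.text_dir / f"{stem}.json").read_text())["caption"]  # text annotation
        return {"rgb": rgb, "ir": ir, "text": text}


# Hypothetical directory layout; adjust paths and extensions to the actual dataset
loader = DataLoader(TriModalDataset("images/rgb", "images/ir", "annotations/text"), batch_size=8, shuffle=True)
```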
-
Ultralytics YOLO11
We are thrilled to announce the official launch of YOLO11, the latest iteration of the Ultralytics YOLO series, bringing unparalleled advancements in real-time object detection, segmentation, pose estimation, and classification. Building upon the success of YOLOv8, YOLO11 delivers state-of-the-art performance across the board with significant improvements in both speed and accuracy.
🚀 Key Performance Improvements:
📊 Quantitative Performance Comparison with YOLOv8:
Each variant of YOLO11 (n, s, m, l, x) is designed to offer the optimal balance of speed and accuracy, catering to diverse application needs.
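As a quick illustration of the variant family (a sketch, assuming the `yolo11{n,s,m,l,x}.pt` weight names from the release), the snippet below loads each scale and prints its layer, parameter, and GFLOPs summary:

```python
from ultralytics import YOLO

# Print a summary (layers, parameters, GFLOPs) for each YOLO11 scale
for scale in ("n", "s", "m", "l", "x"):
    model = YOLO(f"yolo11{scale}.pt")  # weights download automatically on first use
    model.info()
```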
🚀 Versatile Task Support
YOLO11 builds on the versatility of the YOLO series, handling diverse computer vision tasks seamlessly (see the sketch after this list):
- Object Detection
- Instance Segmentation
- Image Classification
- Pose Estimation
- Oriented Object Detection (OBB)
- Object Tracking
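Switching between these tasks is mostly a matter of loading the matching task-specific weights; the sketch below assumes the standard `-seg`, `-cls`, `-pose`, and `-obb` checkpoint suffixes from the Ultralytics model zoo.

```python
from ultralytics import YOLO

# Each task has its own pre-trained checkpoint; the inference API is the same for all of them
detect = YOLO("yolo11n.pt")        # object detection
segment = YOLO("yolo11n-seg.pt")   # instance segmentation
classify = YOLO("yolo11n-cls.pt")  # image classification
pose = YOLO("yolo11n-pose.pt")     # pose estimation
obb = YOLO("yolo11n-obb.pt")       # oriented bounding boxes

results = segment("path/to/image.jpg")  # image path is a placeholder
```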
📦 Quick Start Example
To get started with YOLO11, install the latest version of the Ultralytics package:
`pip install "ultralytics>=8.3.0"`
Then, load the pre-trained YOLO11 model and run inference on an image:
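A minimal sketch of that step using the standard Ultralytics Python API (the `yolo11n.pt` weights and the image path are illustrative placeholders):

```python
from ultralytics import YOLO

# Load a pre-trained YOLO11 detection model
model = YOLO("yolo11n.pt")

# Run inference on an image
results = model("path/to/image.jpg")

# Visualize and inspect the predictions
results[0].show()
print(results[0].boxes)
```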
With just a few lines of code, you can harness the power of YOLO11 for real-time object detection and other computer vision tasks.
🌐 Seamless Integration & Deployment
YOLO11 is designed for easy integration into existing workflows and is optimized for deployment across a variety of environments, from edge devices to cloud platforms, offering unmatched flexibility for diverse applications.
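Deployment targets are typically produced with the built-in exporter; as a hedged sketch, the calls below use format strings documented for the Ultralytics `export` API.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Each call writes a converted model alongside the original weights
model.export(format="onnx")    # ONNX for broad runtime support
model.export(format="engine")  # TensorRT engine for NVIDIA GPUs (requires TensorRT)
```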
You can get started with YOLO11 today through the Ultralytics HUB and the Ultralytics Python package. Dive into the future of computer vision and experience how YOLO11 can power your AI projects! 🚀