How to crop image using DetectedObject rect values captured from camera? #48
@imhafeez hello, thank you for reaching out! It sounds like you're encountering issues with cropping detected objects using your custom-trained model. Let's see if we can help you resolve this.

First, ensure that you are using the latest version of the Ultralytics YOLOv8 package. Updates often include bug fixes and improvements that might resolve your issue.

Here's a concise example of how you can crop detected objects from an image using YOLOv8:

```python
import cv2
from ultralytics import YOLO

# Load your custom-trained model
model = YOLO("path/to/your/custom_model.pt")

# Load an image captured from your mobile camera
image_path = "path/to/your/captured_image.jpg"
image = cv2.imread(image_path)

# Perform detection
results = model.predict(image)

# Extract bounding boxes and class labels
boxes = results[0].boxes.xyxy.cpu().tolist()
clss = results[0].boxes.cls.cpu().tolist()

# Iterate over detected objects and crop them
for idx, (box, cls) in enumerate(zip(boxes, clss)):
    x1, y1, x2, y2 = map(int, box)
    cropped_image = image[y1:y2, x1:x2]
    cv2.imwrite(f"cropped_object_{idx}.png", cropped_image)

print("Cropping completed successfully!")
```

Make sure to replace the placeholder paths with your actual model and image paths. If the issue persists, please double-check the coordinates returned by the model to ensure they are correct. Sometimes issues can arise from incorrect scaling or coordinate transformations. For more detailed guidance, you can refer to our Object Cropping Guide.

Feel free to share any additional details or error messages if you need further assistance. We're here to help!
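As a quick sanity check on that scaling point: YOLOv8 also exposes normalized boxes (`boxes.xyxyn`), and mixing those up with pixel boxes is a common source of wrong crops. A minimal sketch of converting normalized coordinates back to pixels (the image dimensions below are made-up example values):

```python
def normalized_to_pixels(box_n, img_w, img_h):
    """Convert a normalized [x1, y1, x2, y2] box to integer pixel coordinates."""
    x1, y1, x2, y2 = box_n
    return (int(x1 * img_w), int(y1 * img_h), int(x2 * img_w), int(y2 * img_h))

# A box covering the central region of a 640x480 image
print(normalized_to_pixels([0.25, 0.25, 0.75, 0.75], 640, 480))
# (160, 120, 480, 360)
```

If your crops land in the wrong place, printing both the raw and converted boxes like this usually reveals which coordinate space you are actually in.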
Thanks @pderrenger. But this is in Python. I need to implement it in Flutter.
Hello @imhafeez, thank you for your follow-up! Implementing object detection and cropping in Flutter involves a few additional steps, especially when it comes to normalizing the coordinate space. While my expertise is primarily in Python, I can certainly provide some guidance on how to approach this in Flutter.
Here's a simplified example of how you might scale the normalized coordinates and crop the image in Flutter:

```dart
import 'dart:io';
import 'package:image/image.dart' as img;

void cropImage(String imagePath, List<double> rect) {
  // Load and decode the image (decodeImage returns null on failure)
  final imageFile = File(imagePath);
  final image = img.decodeImage(imageFile.readAsBytesSync());
  if (image == null) {
    throw Exception('Could not decode $imagePath');
  }
  // Convert the normalized coordinates to pixel values
  final x1 = rect[0] * image.width;
  final y1 = rect[1] * image.height;
  final x2 = rect[2] * image.width;
  final y2 = rect[3] * image.height;
  // Crop the image (image package v4 API; v3 used positional arguments)
  final croppedImage = img.copyCrop(
    image,
    x: x1.toInt(),
    y: y1.toInt(),
    width: (x2 - x1).toInt(),
    height: (y2 - y1).toInt(),
  );
  // Save the cropped image
  File('cropped_image.png').writeAsBytesSync(img.encodePng(croppedImage));
}

void main() {
  // Example usage
  final imagePath = 'path/to/your/captured_image.jpg';
  final rect = [0.1, 0.2, 0.5, 0.6]; // Example normalized coordinates
  cropImage(imagePath, rect);
}
```

In this example, the rect values are assumed to be normalized coordinates between 0 and 1, which are scaled up to pixel values before cropping. If you encounter any issues or have further questions, please ensure you are using the latest versions of the relevant packages. If the problem persists, feel free to provide more details, and we'll do our best to assist you. Best of luck with your implementation! 😊
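One frequent pitfall with the approach above is feeding pixel coordinates where normalized ones are expected (or vice versa), which produces crops that are wildly off. A small heuristic check, sketched here in Python since the rest of the thread's examples use it (the function name is mine, not part of any library):

```python
def looks_normalized(rect):
    """Heuristic: a rect is probably normalized if every value lies in [0, 1]."""
    return all(0.0 <= v <= 1.0 for v in rect)

print(looks_normalized([0.1, 0.2, 0.5, 0.6]))  # True: normalized coordinates
print(looks_normalized([34, 58, 212, 300]))    # False: pixel coordinates
```

Running a check like this on the rect values you receive from the detector tells you whether you should multiply by the image dimensions before cropping or use the values directly.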
Why have the comments been removed?
Thank you for your question. Comments may be removed for various reasons, such as maintaining clarity or relevance. If you have specific concerns or need assistance, please let us know how we can help.
Currently the return values recognized from the image are automatically scaled to the screen size. You can take a look at my pull request #56, which I just submitted; it directly returns the raw data.
Thank you for your contribution! We'll review your pull request and provide feedback soon.
I am facing the same issue. Is there any solution to this? The predicted class and number of detections are correct, but the bounding boxes are always at an offset.
@fahadismail95 Yes, I did. Somehow I managed to get it working. I think the issue is in training: you need to provide the image size while training. Using the wrong size will cause issues with detection and cropping.
@imhafeez how do you provide the image size while training?
@mnawazshah you can do it while training and setting up the data in your Python script. The image size should be set there. After that, follow the guidelines in the https://pub.dev/packages/ultralytics_yolo package for exporting the model. Then you can easily detect the object and crop it from the image with the detected points. It will give you correct results. Please let me know if you need further guidance.
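For reference, in the Ultralytics Python API the training image size is set with the `imgsz` argument to `model.train`. A hedged sketch; the dataset path, base weights, and size value below are placeholders you would replace with your own:

```python
from ultralytics import YOLO

# Load a pretrained base model and fine-tune on a custom dataset.
# "path/to/data.yaml" and imgsz=320 are placeholder values; pick the
# image size you intend to run inference with on-device.
model = YOLO("yolov8n.pt")
model.train(data="path/to/data.yaml", imgsz=320, epochs=100)
```

Keeping the training `imgsz` consistent with the size used at export and inference time is what avoids the coordinate mismatches discussed above.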
@imhafeez JazakAllah (thank you)!
Nope. Stick to that image size while training, even if your training images are large. No worries; I did the training with these params and it works perfectly. Don't worry about the size of the images while training or while using the model.
Thank you for your clarification, @imhafeez. However, to address the concern of bounding box scaling, ensure that during inference you rescale the bounding box coordinates back to the original image dimensions if they are normalized or based on the resized input image. Let us know if further clarification is needed!
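As a concrete illustration of that rescaling step, here is a minimal sketch that maps boxes predicted on a square model input back to the original image by the ratio of the two sizes. It assumes plain resizing with no letterbox padding (letterboxing would additionally require subtracting the padding offsets, which is a common cause of the constant offset described above); the function name and sizes are illustrative:

```python
def rescale_box(box, input_size, orig_w, orig_h):
    """Map an [x1, y1, x2, y2] box from a square model input
    (input_size x input_size) back to the original image size.
    Assumes plain resizing with no letterbox padding."""
    sx = orig_w / input_size
    sy = orig_h / input_size
    x1, y1, x2, y2 = box
    return [x1 * sx, y1 * sy, x2 * sx, y2 * sy]

# A box from a 320x320 model input mapped back to a 1280x720 photo
print(rescale_box([80, 80, 240, 240], 320, 1280, 720))
# [320.0, 180.0, 960.0, 540.0]
```

If your boxes are consistently shifted rather than just scaled, check whether the preprocessing pads the image to a square before resizing.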
I am facing an issue with cropping.
I have captured an image from the mobile camera, and I want to detect the card using my own custom-trained model. I cannot crop the exact detected region, as DetectedObject always gives me wrong rect values.
Please help. Any example of how to do it would be appreciated.
Thank you