
COCO dataset validation #2041

Open
Jozefov opened this issue Aug 7, 2024 · 1 comment

Comments


Jozefov commented Aug 7, 2024

💡 Your Question

Hi, I need help evaluating a model on the COCO dataset. Since I am using a model pre-trained on COCO, I would expect results similar to the published ones; however, I get much lower results and suspect I am using the wrong output format. I would appreciate help finding the correct approach. I use YOLO_NAS_S.

import os
import json

import pybboxes as pbx  # used below for the voc (xyxy) -> coco (xywh) bbox conversion

# model, device, and image_paths are defined earlier: YOLO_NAS_S pre-trained on COCO,
# and image_paths are the COCO val2017 images
model_predictions = model.to(device).predict(image_paths, conf=0.50)

coco_predictions = []

for i, image_path in enumerate(image_paths):
    predictions = model_predictions[i].prediction
    bboxes = predictions.bboxes_xyxy  # assuming this is structured as [[xmin, ymin, xmax, ymax]]
    # the image array is (height, width, channels)
    image_height = model_predictions[i].image.shape[0]
    image_width = model_predictions[i].image.shape[1]

    labels = [int(label) + 1 for label in predictions.labels]  # shift 0-based labels by one (intended as COCO category ids)
    confidences = [float(conf) for conf in predictions.confidence]

    image_id = int(os.path.splitext(os.path.basename(image_path))[0])  # e.g. 000000397133.jpg -> 397133

    for bbox, label, confidence in zip(bboxes, labels, confidences):
        coco_prediction = {
            "image_id": image_id,
            "category_id": label,
            # "bbox": [float(bbox[0]), float(bbox[1]), float(bbox[2] - bbox[0]), float(bbox[3] - bbox[1])],
            "bbox": pbx.convert_bbox(bbox, from_type="voc", to_type="coco",\
                             image_size=(image_size_x, image_size_y)),
            "score": confidence
        }
        coco_predictions.append(coco_prediction)

with open('/data/prediction/val/coco_predictions.json', 'w') as f:
    json.dump(coco_predictions, f)
   
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

annotations_file = '/data/coco/annotations/instances_val2017.json'
predictions_file = '/data/prediction/val/coco_predictions.json'

coco_gt = COCO(annotations_file)
coco_dt = coco_gt.loadRes(predictions_file)

coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')

coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

However, I only get around 5-10% mAP. Thanks for any suggestions!

Versions

No response


haoliang-in-Ge commented Sep 29, 2024

Modify the 'category_id': its value should correspond to the category ids in instances_val2017.json, which range from 1 to 90 (with gaps). The predicted ids, however, only cover 1 to 80, so it will work if you generate a map from the predicted id to the COCO category id.
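A minimal sketch of such a mapping, assuming the model's 80 classes follow the standard COCO ordering (ascending category id) and the ground truth is the usual instances_val2017.json; the dictionary name contiguous_to_coco is just illustrative:

from pycocotools.coco import COCO

coco_gt = COCO('/data/coco/annotations/instances_val2017.json')

# getCatIds() returns the 80 category ids actually present in the annotations,
# which run from 1 to 90 with gaps.
coco_cat_ids = sorted(coco_gt.getCatIds())

# Map the model's contiguous class index (0..79) to the real COCO category id.
contiguous_to_coco = {idx: cat_id for idx, cat_id in enumerate(coco_cat_ids)}

# In the prediction loop, use the mapped id instead of label + 1:
#     "category_id": contiguous_to_coco[int(label)]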
