Prerequisite
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmdetection3d
Environment
sys.platform: darwin
Python: 3.9.20 (main, Oct 3 2024, 02:27:54) [Clang 14.0.6 ]
CUDA available: False
MUSA available: False
numpy_random_seed: 2147483648
GCC: Apple clang version 15.0.0 (clang-1500.3.9.4)
PyTorch: 1.11.0
PyTorch compiling details: PyTorch built with:
  - GCC 4.2
  - C++ Version: 201402
  - clang 12.0.0
  - Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
TorchVision: 0.12.0
OpenCV: 4.10.0
MMEngine: 0.10.4
MMDetection: 3.2.0
MMDetection3D: 1.4.0+fe25f7a
spconv2.0: False
Reproduces the problem - code sample
pcd_demo.py
Reproduces the problem - command or script
I encountered the same errors when I tried to visualize the KITTI demo and my own config (along with a custom KITTI-format dataset) on my iMac.
I ran the command:
python demo/pcd_demo.py output_kitti.pcd configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth --show --device cpu
Reproduces the problem - error message
python demo/pcd_demo.py output_kitti.pcd configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-car.py checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth --show --device cpu
12/09 21:47:51 - mmengine - WARNING - Display device not found. --show is forced to False
/Users/dongming/Documents/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py:94: UserWarning: dir_offset and dir_limit_offset will be depressed and be incorporated into box coder in the future
warnings.warn(
Loads checkpoint by local backend from path: checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth
The model and loaded state dict do not match exactly
size mismatch for bbox_head.conv_cls.weight: copying a param with shape torch.Size([18, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 384, 1, 1]).
size mismatch for bbox_head.conv_cls.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for bbox_head.conv_reg.weight: copying a param with shape torch.Size([42, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 384, 1, 1]).
size mismatch for bbox_head.conv_reg.bias: copying a param with shape torch.Size([42]) from checkpoint, the shape in current model is torch.Size([14]).
size mismatch for bbox_head.conv_dir_cls.weight: copying a param with shape torch.Size([12, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([4, 384, 1, 1]).
size mismatch for bbox_head.conv_dir_cls.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([4]).
12/09 21:47:51 - mmengine - WARNING - Failed to search registry with scope "mmdet3d" in the "function" registry tree. As a workaround, the current "function" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet3d" is a correct scope, or whether the registry is initialized.
/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the save_dir argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
Inference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Traceback (most recent call last):
File "/Users/dongming/Documents/mmdetection3d/demo/pcd_demo.py", line 93, in
main()
File "/Users/dongming/Documents/mmdetection3d/demo/pcd_demo.py", line 83, in main
inferencer(**call_args)
File "/Users/dongming/Documents/mmdetection3d/mmdet3d/apis/inferencers/base_3d_inferencer.py", line 210, in call
for data in (track(inputs, description='Inference')
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/rich/progress.py", line 168, in track
yield from progress.track(
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/rich/progress.py", line 1210, in track
for value in sequence:
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmengine/infer/infer.py", line 291, in preprocess
yield from map(self.collate_fn, chunked_data)
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmengine/infer/infer.py", line 588, in _get_chunk_data
processed_data = next(inputs_iter)
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmengine/dataset/base_dataset.py", line 60, in call
data = t(data)
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmcv/transforms/base.py", line 12, in call
return self.transform(results)
File "/Users/dongming/Documents/mmdetection3d/mmdet3d/datasets/transforms/loading.py", line 1145, in transform
return self.from_file(inputs)
File "/opt/anaconda3/envs/openmmlab/lib/python3.9/site-packages/mmcv/transforms/base.py", line 12, in call
return self.transform(results)
File "/Users/dongming/Documents/mmdetection3d/mmdet3d/datasets/transforms/loading.py", line 646, in transform
points = self._load_points(pts_file_path)
File "/Users/dongming/Documents/mmdetection3d/mmdet3d/datasets/transforms/loading.py", line 623, in _load_points
points = np.frombuffer(pts_bytes, dtype=np.float32)
ValueError: buffer size must be a multiple of element size
Additional information
I followed the instructions on https://mmdetection3d.readthedocs.io/en/latest/user_guides/inference.html.
The docs say the input should be a 'PCD_FILE', but the example below it used a .bin file. I tried both formats. The .bin one would crash, so I wrote code to convert .bin to .pcd:
import numpy as np
import open3d as o3d

file_path = 'data/TJ4DRadSet_4DRadar/training/velodyne_reduced/000004.bin'
points = np.fromfile(file_path, dtype=np.float32).reshape(-1, 8)  # the original point cloud has 8 dims

# keep only x, y, z for the PCD file
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[:, :3])

output_pcd_path = 'output_full_dimensions.pcd'
o3d.io.write_point_cloud(output_pcd_path, pcd)  # write the converted cloud to disk
Then I encountered the error stated above, whether using the KITTI demo or my own config file.
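For reference, here is a minimal sketch of what the traceback suggests the loader is doing at loading.py line 623: it reinterprets the raw file bytes as float32 and then reshapes by the config's load_dim, so a .pcd with a text header (or a .bin whose per-point dimension does not match load_dim) breaks that assumption. This is only an illustration; load_like_mmdet3d and output_kitti_4dim.bin are hypothetical names, not part of mmdet3d, and load_dim=4 is what the KITTI PointPillars configs appear to use.

import numpy as np

# Hypothetical helper (not part of mmdet3d): mimics what the traceback shows
# LoadPointsFromFile doing, so files can be checked locally before running the demo.
def load_like_mmdet3d(path, load_dim=4):
    with open(path, 'rb') as f:
        pts_bytes = f.read()
    # raises "buffer size must be a multiple of element size" when the file
    # contains anything other than raw float32 data, e.g. a .pcd text header
    points = np.frombuffer(pts_bytes, dtype=np.float32)
    return points.reshape(-1, load_dim)

# Workaround I would expect to satisfy the loader: keep only the first 4 of the
# 8 radar dims (x, y, z plus one extra channel) and dump them as a raw float32 .bin.
points = np.fromfile(
    'data/TJ4DRadSet_4DRadar/training/velodyne_reduced/000004.bin',
    dtype=np.float32).reshape(-1, 8)
points[:, :4].tofile('output_kitti_4dim.bin')
print(load_like_mmdet3d('output_kitti_4dim.bin').shape)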