Since the author did not show how to export the PyTorch model to ONNX, I wrote a script to do it.
You can launch the script by running:
python tools/export_onnx.py --cfg experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml TEST.MODEL_FILE models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth
I have not tried to run inference with this ONNX file. I would appreciate it a lot if someone could provide a script for inference with the ONNX file. Thanks!
Here is the code to export to ONNX:
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import pprint

import torch
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms

import _init_paths
from config import cfg
from config import update_config
from core.loss import JointsMSELoss
from core.function import validate
from utils.utils import create_logger

import models

# Usage:
# python tools/export_onnx.py --cfg experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml TEST.MODEL_FILE models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth


def parse_args():
    parser = argparse.ArgumentParser(description='Train keypoints network')
    # general
    parser.add_argument('--cfg',
                        help='experiment configure file name',
                        required=True,
                        type=str)
    parser.add_argument('opts',
                        help="Modify config options using the command-line",
                        default=None,
                        nargs=argparse.REMAINDER)
    parser.add_argument('--modelDir',
                        help='model directory',
                        type=str,
                        default='')
    parser.add_argument('--logDir',
                        help='log directory',
                        type=str,
                        default='')
    parser.add_argument('--dataDir',
                        help='data directory',
                        type=str,
                        default='')
    parser.add_argument('--prevModelDir',
                        help='prev Model directory',
                        type=str,
                        default='')

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    update_config(cfg, args)

    logger, final_output_dir, tb_log_dir = create_logger(
        cfg, args.cfg, 'valid')

    logger.info(pprint.pformat(args))
    logger.info(cfg)

    # cudnn related setting
    cudnn.benchmark = cfg.CUDNN.BENCHMARK
    torch.backends.cudnn.deterministic = cfg.CUDNN.DETERMINISTIC
    torch.backends.cudnn.enabled = cfg.CUDNN.ENABLED

    model = eval('models.' + cfg.MODEL.NAME + '.get_pose_net')(
        cfg, is_train=False
    )

    if cfg.TEST.MODEL_FILE:
        logger.info('=> loading model from {}'.format(cfg.TEST.MODEL_FILE))
        model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
    else:
        model_state_file = os.path.join(
            final_output_dir, 'final_state.pth'
        )
        logger.info('=> loading model from {}'.format(model_state_file))
        model.load_state_dict(torch.load(model_state_file))

    # Switch to eval mode so BatchNorm/Dropout behave correctly in the export
    model.eval()

    # Dummy input matching the config: (batch, channels, height, width)
    dummy_input = torch.randn(1, 3, 256, 192)

    # Export the model to an ONNX file
    print('exporting model to ONNX...')
    torch.onnx.export(model, dummy_input, 'pose_hrnet_w32_256x192.onnx')


if __name__ == '__main__':
    main()
```
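In the meantime, here is a minimal sketch of what inference with the exported file might look like using onnxruntime. This is an assumption on my part, not verified against this model: real use would need proper image preprocessing (resize, mean/std normalization) and the repo's full heatmap post-processing instead of the naive argmax decoding below.

```python
import numpy as np


def decode_heatmaps(heatmaps):
    """Naive argmax decoding: one (x, y) peak per joint heatmap.

    heatmaps: array of shape (batch, num_joints, height, width);
    returns [(x, y), ...] for the first image in the batch.
    """
    coords = []
    for j in range(heatmaps.shape[1]):
        hm = heatmaps[0, j]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((int(x), int(y)))
    return coords


def run_onnx(model_path='pose_hrnet_w32_256x192.onnx'):
    """One forward pass through the exported ONNX model."""
    import onnxruntime as ort  # pip install onnxruntime

    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    # The input shape must match the dummy input used at export time:
    # (batch, channels, height, width) = (1, 3, 256, 192)
    image = np.random.randn(1, 3, 256, 192).astype(np.float32)
    # HRNet outputs joint heatmaps, e.g. (1, 17, 64, 48) for COCO
    heatmaps = session.run(None, {input_name: image})[0]
    return decode_heatmaps(heatmaps)
```

`run_onnx()` just feeds a random tensor through the network to check that the exported graph runs; replace the random input with a preprocessed crop of a real image for actual pose estimation.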