
Dealing with nan in ..._filt_<filter>.trc causing condition 'convertOk' not met during kinematics #143

Open
jtoerber opened this issue Nov 5, 2024 · 27 comments
Labels: help wanted (Extra attention is needed)

Comments

@jtoerber

jtoerber commented Nov 5, 2024

Hi,
I have 2 cameras. One of the cameras could see my right small toe all of the time, the other could not.
So after filtering I have NaN values in the .trc file for the corresponding landmark/keypoint.
When I do inverse kinematics to get the angles, I get an error message:

Pose2Sim.kinematics()
  File "C:\Users\toerberj\.conda\envs\Pose2Sim\lib\site-packages\Pose2Sim\Pose2Sim.py", line 505, in kinematics
    kinematics(config_dict)
  File "C:\Users\toerberj\.conda\envs\Pose2Sim\lib\site-packages\Pose2Sim\kinematics.py", line 627, in kinematics
    perform_scaling(trc_file, kinematics_dir, osim_setup_dir, model_name, right_left_symmetry=right_left_symmetry, subject_height=subject_height[p], subject_mass=subject_mass[p], remove_scaling_setup=remove_scaling_setup)
  File "C:\Users\toerberj\.conda\envs\Pose2Sim\lib\site-packages\Pose2Sim\kinematics.py", line 476, in perform_scaling
    opensim.ScaleTool(scaling_path_temp).run()
  File "C:\Users\toerberj\.conda\envs\Pose2Sim\lib\opensim\tools.py", line 2220, in __init__
    _tools.ScaleTool_swiginit(self, _tools.new_ScaleTool(*args))
RuntimeError: std::exception in 'OpenSim::ScaleTool::ScaleTool(std::string const &)': SimTK Exception thrown at String.h:466:
  Error detected by Simbody method String::convertTo(): Couldn't interpret string 'inf inf inf' as type T=SimTK::Array_<double,unsigned>.
  (Required condition 'convertOK' was not met.)

Lowering the thresholds did not work.
I am not able to use more than two cameras, as I need a high FPS (>=100) in my case.
In this specific case I am able to adjust the positions of the cameras.
But in another case I want to track the arm and fingers, and there will be gaps that are even larger than a (large) interp_if_gap_smaller_than parameter, especially for landmarks/keypoints that are not of interest in that specific case.
How can I get inverse kinematics running?
I could replace the NaN values with 0.0?
I could (at least try to) remove the uninteresting landmark(s)?
Any recommendations?
Interestingly, the filtering has a problem with this, not the triangulation, which may decide to take the landmark/keypoint from the camera that is able to see it.

@jtoerber
Author

jtoerber commented Nov 5, 2024

Issue #114: fill_large_gaps_with = 'last_value' # 'last_value', 'nan', or 'zeros'
Could the (only) value from the other camera be taken? Or is the 3D information then missing in the case of 2 cameras?

@davidpagnon
Collaborator

Hi,

You need at least 2 cameras to triangulate the coordinates. When there is a hole for a few frames you can interpolate it, but if a point is not seen for the whole sequence, it is not going to work.

Here are a few ideas, by order of hackiness:

  • Capture the sequence again and make sure the foot is seen by both cameras
  • Capture with more cameras
  • Edit Pose2Sim/OpenSim_Setup/Scaling_Setup_Pose2Sim*.xml, and set `<apply>` to false for the problematic markers
    Do it for your IK_Setup file as well
    Then set fill_large_gaps_with = 'zeros' (if it does not work, replace the columns with zeros in pandas or a spreadsheet editor)
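The pandas route mentioned in the last bullet could look like this. This is only a sketch: the function, its name, and the assumption that markers start at column 2 (after Frame# and Time) are mine, so check them against your own .trc header.

```python
import pandas as pd

def zero_marker_columns(df, first_col):
    """Set one marker's X/Y/Z columns to 0.0 in a .trc data frame.

    first_col is the index of the marker's X column; in a standard .trc
    layout the first two columns are Frame# and Time, so the first marker
    starts at column 2 (an assumption to verify against your header).
    """
    out = df.copy()
    out.iloc[:, first_col:first_col + 3] = 0.0
    return out
```

For example, if RSmallToe is the first marker, `zero_marker_columns(df, 2)` zeros its three coordinate columns while leaving the rest untouched.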

Please tell me how it goes!

@davidpagnon added the help wanted label Nov 5, 2024
@jtoerber
Author

jtoerber commented Nov 6, 2024

Hi,
I guess you mean:

<IKMarkerTask name="RSmallToe">
    <!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
    <!-- <apply>true</apply> -->
    <apply>false</apply>
    <!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
    <weight>1</weight>
</IKMarkerTask>

in
IK_Setup_Pose2Sim_.xml
Scaling_Setup_Pose2Sim_.xml

With
fill_large_gaps_with = 'zeros' # 'last_value', 'nan', or 'zeros'
I get a lot of 0.0 values, not only for the NaN entries.
With
fill_large_gaps_with = 'last_value' # 'last_value', 'nan', or 'zeros'
and manually replacing the NaN values with 0.0, "kinematics" becomes possible.

In OpenSim there are still some issues: maybe the number of cameras is not enough, or the alignment of the two cameras is not proper. The person is neither really standing nor doing what it did in the video. In OpenCap there were no problems with two cameras.
(if interested the last_values, zeros and manually are in attached zip-file)
Demo_SinglePerson.zip

@davidpagnon
Collaborator

There are definitely issues in your triangulated trc files. How did you calibrate?

@jtoerber
Author

jtoerber commented Nov 6, 2024

Hi,
I did
Pose2Sim.calibration()
with a 3x5 (45mm) checkerboard.

You did not specify which issues there are in the trc files.
And by the way: the synchronization seems not to be deterministic, as I usually get a 3-frame difference in this case, but once got 166 frames, which is about 10 seconds here. I guess setting a frame range may help.

@davidpagnon
Collaborator

davidpagnon commented Nov 6, 2024

Check this thread starting from the linked comment, maybe. It seems like if you download synchronized videos from OpenCap, they are exported at half the resolution, which leads to a faulty calibration:
#142 (comment)

EDIT: Sorry I had not seen your last message, let me read it

@davidpagnon
Collaborator

The issue is that the trc files don't look like a human being doing anything 😅

Synchronization is likely to be another issue indeed. Check the appropriate doc section to make sure it is correct: https://github.com/perfanalytics/pose2sim/blob/main/README.md#synchronization

If results are not satisfying, edit your Config.toml file:
  • All keypoints can be taken into account, or a subset of them.
  • The whole capture can be used for synchronization, or you can choose a time range when the participant is roughly horizontally static but with a clear vertical motion (set approx_time_maxspeed and time_range_around_maxspeed accordingly).

N.B.: Works best when:

  • the participant does not move towards or away from the cameras
  • they perform a clear vertical movement
  • the capture lasts at least 5 seconds, so that there is enough data to synchronize on
  • the capture lasts a few minutes maximum, so that cameras are less likely to drift with time

@jtoerber
Author

jtoerber commented Nov 6, 2024

I think calibration.py should check the resolution and frames per second of all the videos.
Your advantage/disadvantage compared to OpenCap is that you allow all kinds of cameras.
This may even lead to inhomogeneous cameras or camera settings within one recording.

Got new camera. I'll try my final setting tomorrow.

Synchronization:
I did calibration and poseEstimation.
Then I did all but calibration and poseEstimation.
Hence the same videos and pose estimations, and yet I once got a 166-frame difference instead of 3 frames, with a clear right-wrist raise in the first seconds and videos captured in one thread to be (relatively) safe/close.

@davidpagnon
Collaborator

Calibration does check the resolution of videos; I just tried to make a guess based on the information you provided, but I went in the wrong direction. To be clearer: if you capture with OpenCap, convert its calibration files, and then export the wrong videos, you will have a discrepancy between the resolutions of the calibration file and the videos. But that's not what you did.

The framerate is also detected automatically, see Config.toml:
frame_rate = 'auto' # fps # int or 'auto'. If 'auto', finds from video (or defaults to 60 fps if you work with images)

Synchronization should be deterministic if you run on the same videos with the same parameters. If the right wrist clearly raises only at one time, it should not give you such different results, unless there is a problem in the parameters of your Config.toml file.

If you want to send some of your data to [email protected], I can look at it and search for the bug.

@jtoerber
Author

jtoerber commented Nov 8, 2024

What does Pose2Sim.personAssociation() do?
I get:

\personAssociation.py:588: RuntimeWarning: Mean of empty slice
  mean_error_px = np.around(np.nanmean(error), decimals=1)

--> Mean reprojection error for Neck point on all frames is nan px, which roughly corresponds to nan mm.
--> In average, 0.01 cameras had to be excluded to reach the demanded 20 px error threshold after excluding points with likelihood below 0.3.

Tracked json files are stored in ...

Associating persons took 00h00m02s.

Then when running triangulation I get:

...
  raise Exception('No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.')
Exception: No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.

@davidpagnon
Collaborator

personAssociation sorts out people to ensure that you triangulate the same person across cameras rather than trying to triangulate two different persons together (in single-person mode, it removes people in the background). If you only have one person in the scene, you do not need to run it.

This error still means that either your calibration or your synchronization is not good. Did you try trimming the video by hand to make sure the problem was not about synchronization?

@jtoerber
Author

jtoerber commented Nov 8, 2024

Before, I had one webcam vertical and one smartphone horizontal because of a DroidCam setting.
Perhaps this was the reason for the synchronization problem.

Now:
Got new Camera. Now complete new setting with two different cameras. New video recording.
Played a little bit with approx_time_maxspeed.

It seems that personAssociation (and perhaps triangulation) also need some minimal video length to work properly (in addition to synchronization). Perhaps you should display a warning or even an error. I temporarily switched to lightweight mode for my local computer.
With 7 seconds of video it did not work. Now I have 11 seconds.
This worked except kinematics. Skeleton is standing at the beginning, then it is creeping on the ground.
Found: "The OpenSim uses relative coordinates in its formulation and a skeleton is
generally modeled as a set of rigid bodies interconnected by kinematic joints, which
define how a child body can move with respect to its parent body."
Perhaps
fill_large_gaps_with = 'zeros'
"nails" it to at least some point on the ground?
(https://repositorium.sdum.uminho.pt/bitstream/1822/24564/3/Part%20III.pdf)

What does 0.03 excluded cameras mean? 3% of something?

@davidpagnon
Collaborator

Hi again,
It should work with 7-second videos, and in any case, kinematics should also work.
It's really hard to help you figure it out; could you maybe send me a project folder (with short videos, I'm running out of space on my laptop), so that I can see what's wrong?

@medpar

medpar commented Nov 11, 2024

I am getting the same error as you when trying to process the demo videos from the OpenCap page. I have downloaded the raw videos (.mov) and all the steps run correctly, but when I try to calculate the inverse kinematics, it returns this error:

RuntimeError: std::exception in 'OpenSim::ScaleTool::ScaleTool(std::string const &)': SimTK Exception thrown at String.h:466:
  Error detected by Simbody method String::convertTo(): Couldn't interpret string 'inf inf inf' as type T=SimTK::Array_<double,unsigned>.
  (Required condition 'convertOK' was not met.)

The only video from which I have been able to calculate the inverse kinematics is the one for the activity '1legjumpdroplanding'. All the others have given me the same error.

@jtoerber
Author

Hi,

in my video cam01.mp4, with the script below I can see that if I sample every 30th frame 9 times, I get different rvecs values than if I do it only one, two, or three times.

If you use

cv2.drawChessboardCorners(img, CHECKERBOARD, corners2, ret)

usually the chessboard corners are drawn from red to ... in colors. At some point you can see that the chessboard is drawn from ... to red, i.e. rotated by 180 degrees or something like that.

Perhaps this helps finding the problem.
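One way to probe that 180-degree ambiguity is to canonicalize the corner ordering before passing it to calibrateCamera, so that repeated detections of the same board agree. This is only a sketch of the idea (the function and its convention are mine, not part of Pose2Sim); it assumes `corners` has the (N, 1, 2) shape returned by cv2.findChessboardCorners.

```python
import numpy as np

def canonicalize_corners(corners):
    """Flip the corner ordering when it starts from the 'wrong' end.

    cv2.findChessboardCorners may enumerate the grid from either end,
    which rotates the estimated board pose (rvecs) by ~180 degrees between
    frames. Here the ordering is reversed whenever the first corner lies
    below (or, at equal height, to the right of) the last one, so every
    detection uses the same convention.
    """
    first = corners[0, 0]
    last = corners[-1, 0]
    if (float(first[1]), float(first[0])) > (float(last[1]), float(last[0])):
        return corners[::-1].copy()
    return corners
```

This image-space heuristic is fragile for strongly rotated boards, but it should be enough to check whether the alternating rvecs come from the ordering flip.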

Btw.: Could not execute "caliscope" after pip installing it.

How large must the Charuco board be for Freemocap?

my_calibration.zip

@davidpagnon
Collaborator

Hi all, sorry my job is keeping me quite busy, but I'll try to answer by the end of the week!

@davidpagnon
Collaborator

davidpagnon commented Nov 15, 2024

Hi Jens and Mario,

@medpar, it seems like there is something shady with the OpenCap data that I can't figure out. Would you mind sending me your full project folders, including the calibration files and/or videos? I suppose there must be some NaN in the TRC file, which would be caused by a bad triangulation. Are the results for "1legjumpdroplanding" good?

@jtoerber I just tried the data you sent me.

  • First thing, the toes are not well detected at all. The video is not of great quality but I've never seen anything like that.
    Be aware that COCO_133 models don't perform as well on body points as Halpe_26. So you could either ditch the hand keypoints, or use it in "performance" mode.
  • Synchronization worked when I set approx_time_maxspeed = [1.0,1.0] # the left wrist motion roughly happens at 1 second on each camera
  • Considering the poor results of pose estimation, I increased reproj_error_threshold_triangulation = 30 instead of 15, and interp_if_gap_smaller_than = 50 instead of 10. Of course, triangulation is not good on toes.
  • It seems like the person is lying on their side, which is probably due to the different OpenCap calibration. You can reorient your TRC file prior to kinematics with this script: https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Utilities/trc_Zup_to_Yup.py
  • Since the feet are not well detected, I removed them from kinematics. You have to find your Pose2Sim\OpenSim_Setup\Scaling_Setup_Pose2Sim_Coco133.xml and D:\softs\github_david\pose2sim\Pose2Sim\OpenSim_Setup\IK_Setup_Pose2Sim_Coco133.xml files (type pip show pose2sim to find the Pose2Sim location). In scaling, set `<apply>` to false. In scaling and IK, you should set IKMarkerTask to false for all feet points. I would do the same for the hands, because since they are not well detected, they make the shoulders lock and disturb the whole-body kinematics.
    With so few cameras and poor quality, I would actually tinker with the parameters directly within the OpenSim API: https://github.com/perfanalytics/pose2sim?tab=readme-ov-file#within-opensim-gui

@jtoerber
Author

Hi,

there is a misunderstanding: I do not use OpenCap. Hence calibration is done by Pose2Sim.
I need COCO_133 because of the fingers.
I currently do not have much time, so it will take a while to test it.

And: would you please be so kind as to remove the image in your last post? Thanks in advance.

@davidpagnon
Collaborator

Sure, sorry for the image!

Okay, I may have misunderstood concerning OpenCap then. In this case, you just need to flip Y and Z coordinates and make Z=-Z in your configuration file and you should have the person upright!
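That coordinate flip can be applied directly to the triangulated positions. A minimal numpy sketch, with the caveat that the function name and the (frames, markers, 3) layout are my assumptions (Pose2Sim also ships trc_Zup_to_Yup.py for this):

```python
import numpy as np

def zup_to_yup(points):
    """Rotate Z-up coordinates into a Y-up frame: (x, y, z) -> (x, z, -y).

    This is the 'flip Y and Z, then make Z = -Z' suggested above, applied
    to an array of shape (n_frames, n_markers, 3).
    """
    out = np.empty_like(points)
    out[..., 0] = points[..., 0]   # X unchanged
    out[..., 1] = points[..., 2]   # new Y is old Z (the up axis)
    out[..., 2] = -points[..., 1]  # new Z is the negated old Y
    return out
```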

So by order of impact, you should:

  • have better lighting
  • use more cameras
  • use the performance mode

@davidpagnon
Collaborator

This issue has been marked as stale. Do not hesitate to reopen it if needed!

@jtoerber
Author

jtoerber commented Dec 8, 2024

Hi,
with the settings

reproj_error_threshold_triangulation = 30
interp_if_gap_smaller_than = 50

(and some transformation of coordinates) I can see my capture in OpenSim, but:
the upper body is correct at first and then rotates 180 degrees, so that the knees point to the front and the head faces backwards.

When using OpenCap we had no problem with 2 cameras. So does the Pose2Sim triangulation need at least 3 cameras to work properly?
For me this is additionally important, because I only have 2 cameras with a high FPS rate and am not able to add more of those.
(From other videos I have now generated a trc file myself with Mediapipe, matching your Pose2Sim model, which works in OpenSim.)

@jtoerber
Author

jtoerber commented Dec 8, 2024

@davidpagnon There is no "Reopen issue" button, so it seems I do not have the right to do so.

@davidpagnon reopened this Dec 9, 2024
@davidpagnon
Collaborator

Hi,
2 cameras are of course not ideal, but we have made it work even in multi-person mode and in challenging conditions (such as handstands, large occlusions, etc.), so it should definitely work.
Did you try performance mode and better lighting? From what I remember there was a lot of flickering, where the feet were detected on the head, and probably some right-left switches.

@jtoerber
Author

Hi,
yes, the person and the chair were doing some kind of fusion due to lack of contrast.
But in my understanding, the underlying human skeleton model should not allow the feet to point to the front while the shoulders/hips point to the back.

Regarding different cameras:
As far as I can see in the code, we are not able to use ChArUco markers for the calibration calculation?
Because then I could place a camera in the back and one in the front.
In OpenCap it is described that if you lay the checkerboard down on the floor (so that multiple cameras can see it), the resulting OpenSim animation will have rotation problems. Even if the checkerboard is rotated/tilted by, say, 20 degrees, the OpenSim skeleton would no longer be vertical.
With the limitation of only accepting horizontal checkerboard patterns, it would get even worse with a checkerboard lying on the ground.

So how am I able to calibrate multiple (>3) cameras, say, pointing/looking at the centre of a circle?

@jtoerber
Author

Hi,
I was recording a new video at 120 FPS.

During Pose-Estimation the following error occurs:

[mpeg4 @ 0000018e88722b80] timebase 1000/119193 not supported by MPEG 4 standard, the maximum admitted value for the timebase denominator is 65535
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3194 open Could not open codec mpeg4, error: Unspecified error (-22)
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:3211 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter

In [project] I can set frame_rate to an int instead of 'auto'.
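For reference, the pinned frame rate would look like this in Config.toml (a sketch; 120 matches the capture described above, and may avoid the odd auto-detected 119.193 fps timebase):

```toml
[project]
frame_rate = 120  # int instead of 'auto'
```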

@davidpagnon
Collaborator

Hi, sorry for the late answer, I'm pretty swamped with work this week... I'll do my best to spend some time on issues by the end of the week!

@jtoerber
Author

jtoerber commented Dec 17, 2024

Hi,

I tried raw RTMPose (COCO_133) with lightweight and performance. In my CSV file I now have the (original) keypoint column names via:

def getInfos(keypoints, openpose_skeleton):
    # https://github.com/Tau-J/rtmlib/blob/main/rtmlib/visualization/draw.py
    # Note: the skeleton dictionaries (coco17, coco133, halpe26, hand21,
    # openpose18, openpose134) must be in scope, e.g. imported from rtmlib's
    # skeleton definition modules.
    num_keypoints = keypoints.shape[1]
    if openpose_skeleton:
        if num_keypoints == 18:
            skeleton = 'openpose18'
        elif num_keypoints == 134:
            skeleton = 'openpose134'
        elif num_keypoints == 26:
            skeleton = 'halpe26'
        else:
            raise NotImplementedError
    else:
        if num_keypoints == 17:
            skeleton = 'coco17'
        elif num_keypoints == 133:
            skeleton = 'coco133'
        elif num_keypoints == 21:
            skeleton = 'hand21'
        elif num_keypoints == 26:
            skeleton = 'halpe26'
        else:
            raise NotImplementedError

    skeleton_dict = eval(skeleton)
    keypoint_info = skeleton_dict['keypoint_info']
    skeleton_info = skeleton_dict['skeleton_info']
    
    return keypoint_info, skeleton_info

def getKeypointNamesAndIds(keypoints, Coord2DPlusScore=False, openpose_skeleton=True):
    keypoint_info, skeleton_info = getInfos(keypoints, openpose_skeleton)
    keypointNames = []
    keypointIds = []
    for i, kpt_info in keypoint_info.items():
        if Coord2DPlusScore:
            keypointNames.append(kpt_info['name'] + "_x")
            keypointNames.append(kpt_info['name'] + "_y")
            keypointNames.append(kpt_info['name'] + "_score")
            
            keypointIds.append(str(kpt_info['id']) + "_x")
            keypointIds.append(str(kpt_info['id']) + "_y")
            keypointIds.append(str(kpt_info['id']) + "_score")
        else:
            keypointNames.append(kpt_info['name'])
            keypointIds.append(kpt_info['id'])
    
    return keypointNames, keypointIds

min(score) for lightweight was about 0.24 (which, btw., is not bad given that only one camera was used, from behind left)
min(score) for performance was about 2.4

If no person is in the frame, this is detected (window resized via namedWindow):
[attached screenshot: "No person detected"]

If a person is detected, the sports devices (see above) are not detected.

So:

  • scores are not in [0,1], at least for performance (<-> confidence, visibility?)
  • the region-of-interest detection and/or skeleton detection has some problems if no person is in the frame

Sounds a little strange.
There is only sparse documentation in
https://github.com/Tau-J/rtmlib/blob/main/rtmlib/tools/pose_estimation/rtmpose.py
