
Can you use python threading for cv2.imshow during detection #10

Open
picameratk opened this issue Apr 21, 2017 · 4 comments


@picameratk

If I recall correctly, you had to remove cv2.imshow while detection was in progress to speed up the program. After looking at your program and watching the sample monitoring, it's clear you had to work quite hard to get the frame rate this program needs to collect enough samples for speed stabilization.

Since my Python skills are non-existent: could you have moved cv2.imshow("Speed Camera", image) to another Python thread to offload that processing to another CPU core, assuming a quad-core Pi? In other words, once you enter the frame loop, could all of the cv2.imshow calls be offloaded so that the image of the car is viewable while it passes through the monitoring area?

@gregtinkers
Owner

See http://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/ for information on how this might be done. A good way to learn Python is to enhance an existing program. Good luck and please share your results.
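The idea in that article is to decouple display from the capture/detect loop with a worker thread and a small queue. A minimal sketch of that pattern is below; it is hypothetical, not code from this project. A counter stands in for cv2.imshow/cv2.waitKey so the sketch runs without a camera or a GUI, and the size-1 queue lets the detection loop drop frames rather than wait on the display:

```python
import queue
import threading

# Frames "shown" by the worker; a stand-in for an actual display window.
shown = []

def show_frame(frame):
    # In the real program this would be:
    #   cv2.imshow("Speed Camera", frame); cv2.waitKey(1)
    shown.append(frame)

def display_worker(frame_queue):
    # Consume frames until the None sentinel arrives.
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        show_frame(frame)

frame_queue = queue.Queue(maxsize=1)   # hold at most the latest frame
worker = threading.Thread(target=display_worker, args=(frame_queue,))
worker.start()

for i in range(100):                   # stands in for the capture/detect loop
    frame = i                          # stands in for a camera image
    try:
        frame_queue.put_nowait(frame)  # drop the frame if display is busy,
    except queue.Full:
        pass                           # so detection never waits on display

frame_queue.put(None)                  # sentinel: shut the worker down
worker.join()
```

The key design choice is `put_nowait` plus `maxsize=1`: the detection loop is never blocked by a slow display, at the cost of skipping frames on screen.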

@picameratk
Author

I took your suggestion and created both a threaded approach and a multiprocessing approach. My objective was to run the Pi camera at approximately 1600x900 at 30 fps while showing images during tracking. My experimentation led me to conclude that a higher fps is required for measuring higher speeds more accurately, so that became an objective as well.

I came to the following conclusions:

  1. The multiprocessing approach, using queues, was too slow. While I successfully moved image retrieval to another process, I discovered, using cProfile, that each queue get took 0.045 seconds. Note that at 30 fps you have only 0.033 seconds per frame, so multiprocessing using queues could not operate quickly enough.
  2. With the threaded model I was able to achieve 22 fps at 1024x768 with SHOW_IMAGE = False. This model proved to be sensitive to the FPS setting in the camera. If I set FPS = 30, there was a good chance my actual rate would be about 10 fps, either because the main loop could not retrieve the next image fast enough, or because Python's GIL-based thread scheduling would not time-slice the threads well enough. As you already know, cv2.waitKey is VERY expensive, and with SHOW_IMAGE = True the rate dropped to around 12 fps. I concluded that waitKey does not block in a way that allows thread switching, so switching would not automatically occur during the waitKey call. Again using cProfile, I see that waitKey takes 0.034 seconds, which by itself rules out 30 fps (i.e., 0.033 seconds per frame).

If my Raspberry Pi 3 had faster single-core performance, or if waitKey were a true wait event, then showing the images would be more practical.

@gregtinkers
Owner

@picameratk - that is a great analysis. Perhaps someday the Raspberry Pi 4 or 5 will come along and have the necessary processing power.

@RawLiquid

RawLiquid commented Apr 29, 2017 via email
