Can you use python threading for cv2.imshow during detection #10
See http://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/ for information on how this might be done. A good way to learn Python is to enhance an existing program. Good luck and please share your results.
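For anyone picking this up later, below is a minimal sketch of the threaded-capture idea that article describes: a background thread keeps grabbing frames so the main loop never blocks on camera I/O. This is not the speed-camera code itself; the `ThreadedStream` class, camera index `0`, and window name are illustrative.

```python
import threading

import cv2


class ThreadedStream:
    """Read frames on a background thread so the main loop never blocks on camera I/O."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.cap.read()
        self.stopped = False
        self.lock = threading.Lock()

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        # Keep grabbing the latest frame until stop() is called.
        while not self.stopped:
            grabbed, frame = self.cap.read()
            with self.lock:
                self.grabbed, self.frame = grabbed, frame
        self.cap.release()

    def read(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.stopped = True


if __name__ == "__main__":
    stream = ThreadedStream(0).start()
    while True:
        frame = stream.read()
        if frame is not None:
            cv2.imshow("Speed Camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    stream.stop()
    cv2.destroyAllWindows()
```

The point of the design is that the (slow) camera read and the (fast) processing loop stop waiting on each other; the main loop always works on the most recent frame and simply drops the ones it never asked for.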
I took your suggestion and created both a threaded approach and a multiprocessing approach. My objective was to run the Pi camera at around 1600x900 at 30 fps while showing images during tracking. My experimentation convinced me that a higher fps is required for measuring higher speeds accurately, so that became an objective as well. I came to the following conclusions:
If my Raspberry Pi 3 had faster single-core speed, or if waitKey were a wait event rather than a blocking delay, then showing the images would be more practical.
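A rough sketch of that kind of multiprocessing split is below (illustrative only, not the exact code used in this experiment; `display_worker` and the queue size are made-up names/values): the tracking loop pushes frames into a queue and a separate process owns cv2.imshow and cv2.waitKey, so the blocking waitKey call runs in another process instead of inside the detection loop.

```python
import multiprocessing as mp

import cv2


def display_worker(frame_queue):
    """The only process that touches the HighGUI window."""
    while True:
        frame = frame_queue.get()
        if frame is None:              # sentinel: time to shut down
            break
        cv2.imshow("Speed Camera", frame)
        cv2.waitKey(1)                 # the blocking call now lives in this process
    cv2.destroyAllWindows()


if __name__ == "__main__":
    frame_queue = mp.Queue(maxsize=2)  # tiny queue: drop frames rather than add latency
    proc = mp.Process(target=display_worker, args=(frame_queue,), daemon=True)
    proc.start()

    cap = cv2.VideoCapture(0)
    for _ in range(300):               # stand-in for the real tracking loop
        grabbed, frame = cap.read()
        if not grabbed:
            break
        # ... motion detection / speed calculation would happen here ...
        if not frame_queue.full():     # never let the display stall the tracking loop
            frame_queue.put(frame)

    frame_queue.put(None)
    proc.join()
    cap.release()
```

The cost of this approach is that every displayed frame is pickled and copied through the queue, which is itself noticeable at 1600x900, so it mainly pays off when the display process can sit on a core the tracking loop is not using.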
@picameratk - that is a great analysis. Perhaps someday the Raspberry Pi 4 or 5 will come along and have the necessary processing power.
Still won't put those corporate thieves out of business I'm afraid....
If I recall correctly, you had to remove cv2.imshow while detection was in progress to speed up the program. After looking at your program and watching the sample monitoring, it seems you had to work quite hard to get the speed this program needs to collect enough samples for speed stabilization.
Since my Python skills are non-existent: could you have shoved cv2.imshow("Speed Camera", image) into another Python thread to offload that processing to another CPU core, assuming a quad-core Pi? In other words, maybe all of the cv2.imshow calls, once you enter the frame loop, could be offloaded, so the image of the car would still be viewable while it passes through the monitoring area?
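What the question describes might look roughly like the sketch below: a dedicated display thread owns cv2.imshow("Speed Camera", ...) and cv2.waitKey, fed by a small queue, while the main loop keeps tracking. Two caveats, stated as assumptions: CPython's GIL means a thread mostly hides the waitKey delay rather than truly moving work to another core, and some OpenCV GUI backends only tolerate imshow on the main thread (it usually works with GTK on the Pi, but that is not guaranteed). The function and variable names are illustrative, not from the speed-camera code.

```python
import queue
import threading

import cv2


def show_frames(frame_queue):
    """Dedicated display thread: the only place imshow/waitKey are called."""
    while True:
        image = frame_queue.get()
        if image is None:               # sentinel: stop displaying
            break
        cv2.imshow("Speed Camera", image)
        cv2.waitKey(1)
    cv2.destroyAllWindows()


if __name__ == "__main__":
    frames = queue.Queue(maxsize=2)
    display = threading.Thread(target=show_frames, args=(frames,), daemon=True)
    display.start()

    cap = cv2.VideoCapture(0)
    while True:
        grabbed, image = cap.read()
        if not grabbed:
            break
        # ... detection / speed calculation on `image` would go here ...
        try:
            frames.put_nowait(image)    # drop the frame if the display falls behind
        except queue.Full:
            pass

    frames.put(None)
    display.join()
    cap.release()
```

For genuinely using another core, the multiprocessing variant sketched earlier in this thread is the closer fit; the threaded version above mainly keeps the frame loop from stalling on the display.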