First of all, thanks for this great project. I want to know: if we used GL_EXT_disjoint_timer_query instead, what additional effect would it have?
From what I have seen, the glBeginQuery/glEndQuery path is blocking: waiting for the query result stalls the pipeline. You used those to measure the rendering time, whereas with GL_EXT_disjoint_timer_query we can get the time without stalling the rendering pipeline. Does that mean the measured rendering time here can be >= the actual rendering time?
If you have any idea how to use GL_EXT_disjoint_timer_query, please let me know.
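In case it helps the discussion, here is the non-blocking pattern I have pieced together from the extension spec (just a sketch, assuming a GLES2 context that exposes GL_EXT_disjoint_timer_query; drawScene() is a placeholder, not part of this project):

```c
/* Sketch: non-blocking GPU timing with GL_EXT_disjoint_timer_query. */
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

extern void drawScene(void);   /* placeholder for the real draw calls */

static GLuint query;
static int queryInFlight = 0;

void frame(void) {
    if (!queryInFlight) {
        glGenQueriesEXT(1, &query);
        glBeginQueryEXT(GL_TIME_ELAPSED_EXT, query);
        drawScene();
        glEndQueryEXT(GL_TIME_ELAPSED_EXT);
        queryInFlight = 1;
    } else {
        GLuint available = 0;
        /* Poll instead of blocking; the result usually lands a few frames later. */
        glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_AVAILABLE_EXT, &available);
        if (available) {
            GLint disjoint = 0;
            /* A disjoint event (e.g. GPU clock change) invalidates the measurement. */
            glGetIntegerv(GL_GPU_DISJOINT_EXT, &disjoint);
            if (!disjoint) {
                GLuint64 elapsedNs = 0;
                glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT_EXT, &elapsedNs);
                /* elapsedNs = GPU time in nanoseconds for the bracketed draws. */
            }
            glDeleteQueriesEXT(1, &query);
            queryInFlight = 0;
        }
    }
}
```

Since the result is fetched only once GL_QUERY_RESULT_AVAILABLE_EXT reports it is ready, the CPU never waits on the GPU, which is the whole point of the extension.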
Another question: this elapsed time comes from the GPU timer, which is not the same as CPU time. Is there any way to get the CPU time elapsed during rendering? In particular, is there a way to include the time the CPU takes to submit the draw calls before glBeginQuery()?
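Something along these lines is what I mean by CPU-side timing (a rough sketch; submitDrawCalls() is a made-up placeholder, and clock_gettime is POSIX):

```c
#include <time.h>

extern void submitDrawCalls(void);   /* placeholder for draw-call submission */

/* Sketch: wall-clock time the CPU spends recording/submitting commands.
   This measures CPU submission cost, not GPU execution time. */
double cpu_submit_ms(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    submitDrawCalls();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) * 1e-6;
}
```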
Why did you choose the initial buffer size to be 4?
From my understanding, you calculate the time taken by each query object while the buffer is non-empty. But each rendered frame can produce numerous query objects (one per draw call). Just to clarify: are you accumulating the elapsed time until the command-buffer size drops to zero? A sketch of what I imagine follows below.
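To make the question concrete, this is roughly what I picture the accumulation looking like (my own sketch with made-up names such as pending/pendingCount, not this project's code):

```c
#include <string.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Sketch: sum GPU time across all queries issued for a frame's draw calls. */
static GLuint pending[64];   /* in-flight queries, oldest first */
static int pendingCount = 0;

GLuint64 drainFinishedQueries(void) {
    GLuint64 totalNs = 0;
    int i = 0;
    for (; i < pendingCount; ++i) {
        GLuint available = 0;
        glGetQueryObjectuivEXT(pending[i], GL_QUERY_RESULT_AVAILABLE_EXT, &available);
        if (!available)
            break;                      /* stop at the first unfinished query */
        GLuint64 ns = 0;
        glGetQueryObjectui64vEXT(pending[i], GL_QUERY_RESULT_EXT, &ns);
        totalNs += ns;                  /* cumulative elapsed time over draw calls */
        glDeleteQueriesEXT(1, &pending[i]);
    }
    /* drop the consumed queries, keep the unfinished tail */
    memmove(pending, pending + i, (size_t)(pendingCount - i) * sizeof(GLuint));
    pendingCount -= i;
    return totalNs;
}
```

Is that cumulative-until-empty behaviour what the buffer is doing here?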
Thanks again; this project is really helpful.