Using run.py with standard parameters, it takes roughly 20-30 min before my GPU starts working; before that, only 4 CPU cores show constant spikes. Judging by the timing of the logs, it seems a lot of processing is done before the actual training starts... Could you provide any insight into what is happening, and whether there is something that can be done to optimize it? I run this on a 12C/24T CPU @ 4.7 GHz and an RTX 4090 (CUDA enabled, and it starts using the GPU once training begins).
Hmm, 20-30 minutes? That seems excessively long. When I ran the script, training started within about 2 minutes, and after that the GPU was fully utilized at 100%. There isn't any preprocessing involved here; the data loader simply reads the CSV file, and that's it. Could you set a breakpoint, or add some timing logs, to determine exactly where the script stalls for those 20-30 minutes? That would be instrumental in pinpointing the root of the issue.
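A minimal sketch of the kind of timing instrumentation meant here (the stage names and the run.py internals in the comments are assumptions for illustration, not the actual script):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(stage):
    """Print wall-clock time spent in a named startup stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"[timing] {stage}: {time.perf_counter() - start:.1f}s", flush=True)

# Hypothetical placement inside run.py -- wrap each suspected startup stage:
#
#   with timed("read CSV"):
#       df = pd.read_csv(args.data_path)
#   with timed("build dataset"):
#       dataset = build_dataset(df)        # assumed helper name
#   with timed("model init / move to GPU"):
#       model = build_model(cfg).to("cuda")
```

Alternatively, without modifying the script at all, `py-spy dump --pid <PID>` against the running process prints the current Python stack, which should show immediately which call it is sitting in during those 20-30 minutes.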