ACCESS TOKEN not regenerating multiple times in a single execution of script #113
Comments
Run the command. Also, run it and then send the log file contents, or just the file itself.
I apologize, my problem was with the absolute path I was using. When I moved the folder to the / root directory, for example, uploading it worked, but now I get errors uploading all the files and several of them are missing. I ran the command I was given and will post the log output here as soon as it finishes, including the part where the errors start. The directory I was trying to upload when I had the earlier upload error was like:
Here is the log file of the attempt to upload a folder containing several subfolders and files to my Google Drive, but without success. I await an answer; thanks for the support. log.gz ( link removed due to sensitive information ) Status: 602 Uploaded | 3772 Failed
gupload --info: REPO: "labbots/google-drive-upload"
So, I looked at the log today. The problem is that if the access token expires before all the files are uploaded, the script doesn't renew it automatically until it is executed again. I just have to add the required code for that. Will try to do it ASAP.
OK, thank you very much. Besides this problem, I'm also having trouble uploading files of 200 GB each; I have to upload a backup to the drive, but with big files it is not working :/
Wow, those are quite big files, I have really never tested with such big files. Can you give more info on what error occurs? 🤔 Also, create a new issue for it so I can track it properly. For grabbing logs, use the commands below ( they will hide sensitive information from the logs ).
Change filename to your file name.
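The commands themselves were not preserved in this thread. As a rough sketch of the same idea (capture the full output, then redact every secret the config file holds before sharing it), assuming the default config location ~/.googledrive.conf with its KEY="value" lines; log.txt is just a placeholder name:

```sh
# Capture the full output of the upload run (replace "filename" with your file or folder).
gupload "filename" > log.txt 2>&1

# Redact every value stored in the config ( CLIENT_ID, CLIENT_SECRET, tokens, ... )
# before sharing the log anywhere. GNU sed is assumed for -i.
while IFS='=' read -r key value; do
    value="${value%\"}" value="${value#\"}"          # strip surrounding quotes, if any
    [ -n "${value}" ] && sed -i "s|${value}|<${key}>|g" log.txt
done < ~/.googledrive.conf

gzip log.txt    # share the resulting log.txt.gz
```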
launch a background service to check the access token and update it: checks ACCESS_TOKEN_EXPIRY and updates the token 3 seconds before expiry; the process is killed when the script exits. Fix labbots#113
So, I have pushed some fixes. Run the command below to install the gupload command with the fixes. The command name will be
A different command name is used so the existing installation doesn't need to be touched.
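The actual install command and the temporary command name were stripped from this thread. A sketch of what such an install generally looks like, assuming install.sh accepts -c (command name) and -B (branch); guploadtest and branch-with-fixes are placeholders:

```sh
# Install a second copy of the script under a different command name, from the branch
# carrying the fixes, so the existing gupload installation is left untouched.
curl --compressed -Ls https://github.com/labbots/google-drive-upload/raw/master/install.sh |
    sh -s -- -c guploadtest -B branch-with-fixes
```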
All right, thanks for the support. I will try to upload this backup folder with files of 200 GB each to my drive; in case of errors I will send the log file, using the commands mentioned above to capture the logs.
Unfortunately I can't upload all my files; it managed to upload only one of them. After the first error I tried to upload again using the -d option to skip existing files, but it says that all the files are already there, while there is only one 243 GB file there. I will leave the logs of the two attempts below for analysis. logs ( link removed for sensitive info )
launch a background service to check the access token and update it: checks ACCESS_TOKEN_EXPIRY and tries to update the token 5 minutes before expiry (a fresh token lasts 60 minutes, i.e. 3600 seconds); the process is killed when the script exits. Fix labbots#113
I have pushed some new fixes. Run the commands below to update the test gupload command.
For grabbing logs, use the commands below ( they will hide sensitive information from the logs ).
Change filename to your file name. I am writing this again because you didn't run all the commands, and because of that your sensitive information was visible in the logs again.
launch a background service to check the access token and update it: checks ACCESS_TOKEN_EXPIRY and tries to update the token 5 minutes before expiry (a fresh token lasts 60 minutes, i.e. 3600 seconds); the process is killed when the script exits. Create a temp file where the updated access token is stored by the background service; every function that uses the access token sources it on every call; add a new function named _api_request for all OAuth network calls. Fix labbots#113 google-oauth2.[bash|sh]: Apply new changes
Okay, I ended up not paying attention to that detail. I'll run it again and bring back the resulting logs.
launch a background service to check the access token and update it: checks ACCESS_TOKEN_EXPIRY and tries to update the token 5 minutes before expiry (a fresh token lasts 60 minutes, i.e. 3600 seconds); the process is killed when the script exits. Create a temp file where the updated access token is stored by the background service; every function that uses the access token sources it on every call; add a new function named _api_request for all OAuth network calls; drop one network request ( instead of fetching the expiry of the access token, calculate it locally from the remaining time given in the JSON as expires_in ). Fix labbots#113 google-oauth2.[bash|sh]: Apply new changes
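A rough sketch of the approach these commits describe, not the project's actual code; the names below (_refresh_token_loop, TMPFILE, TOKEN_SERVICE_PID) are placeholders, and CLIENT_ID, CLIENT_SECRET, REFRESH_TOKEN are assumed to already be set by the script's config handling:

```sh
# Background service: keep the access token fresh and share it through a temp file.
TMPFILE="$(mktemp)"

_refresh_token_loop() {
    while :; do
        remaining="$((ACCESS_TOKEN_EXPIRY - $(date +%s)))"
        if [ "${remaining}" -lt 300 ]; then # refresh 5 minutes before expiry
            response="$(curl -s --data \
                "client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&refresh_token=${REFRESH_TOKEN}&grant_type=refresh_token" \
                "https://accounts.google.com/o/oauth2/token")"
            ACCESS_TOKEN="$(printf "%s\n" "${response}" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p')"
            expires_in="$(printf "%s\n" "${response}" | sed -n 's/.*"expires_in" *: *\([0-9]*\).*/\1/p')"
            [ -n "${expires_in}" ] || expires_in=3600 # fallback if parsing fails
            # Calculate expiry locally from expires_in instead of making another API call.
            ACCESS_TOKEN_EXPIRY="$(($(date +%s) + expires_in))"
            # Store the fresh values where the main process can source them.
            printf 'ACCESS_TOKEN="%s"\nACCESS_TOKEN_EXPIRY="%s"\n' \
                "${ACCESS_TOKEN}" "${ACCESS_TOKEN_EXPIRY}" > "${TMPFILE}"
        fi
        sleep 5
    done
}

_refresh_token_loop &
TOKEN_SERVICE_PID="${!}"
# Kill the background service and clean up when the script exits.
trap 'kill "${TOKEN_SERVICE_PID}" 2> /dev/null; rm -f "${TMPFILE}"' EXIT

# Single entry point for OAuth network calls: always re-read the latest token first.
_api_request() {
    . "${TMPFILE}" 2> /dev/null
    curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" "${@}"
}
```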
I was successful in uploading only one 243 GB file; the rest failed. I don't know what else to do, I've tried several scripts besides yours but without success. I believe there is some limitation on Google's side.
Have you tried uploading those files one by one? About the limitation by Google: it is 750 GB per day.
I'm doing that at the moment as a last alternative. I started uploading the next 243 GB file to the drive, inside the folder that already existed there, using the following parameters: script /folder_name/file_name folder_name, where folder_name is the folder on Google Drive.
I couldn't help noticing this error in the logs, apparently something related to credentials and authorization. "error": {
I realized that sending the file to an existing folder didn't work; it is being sent to the root of my drive. How can I send a file to an existing folder on the drive? Could you tell me the syntax?
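The question isn't answered directly in the thread. As a hedged pointer, the project's README describes a -C / --create-dir option for uploading into a named drive folder (reused if it already exists); a sketch with placeholder names:

```sh
# Upload a local file into a specific folder on Google Drive instead of the drive root.
gupload "/backup/folder_name/file_name" -C "folder_name"
```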
Sorry, it was not clear to me. So the error that appeared in the last logs I sent is due to the Google limitation, right?
@renatowow14 Yes, you can increase the daily upload limit of your account. You will have to use service accounts. I will try to integrate them with the script when I get some time. Reference: https://cloud.google.com/iam/docs/service-accounts I don't understand, what data are you referring to? 🤔
I am wondering if the only problem was really the 750 GB limit. I will send the error part of the log that I sent. {
So I read about this problem, and it says my quota was exceeded, but I'm not sure that's correct; I didn't upload that many subfolders to have crossed the limit, so I believe it could be something in the script. https://developers.google.com/drive/api/v3/handle-errors#quota
If you think it's because of the script, then try uploading again; it should upload if you have not exceeded the quota.
I just started uploading a file that was missing from the folder and it is uploading, so now I don't know what to say.
In short, it uploaded all the files in the folder except for a 15 GB one, which gave the error that appears at the end of the last log I sent you.
So, I think I misunderstood your problem; it's more of an API error rather than a storage quota error. Basically, a solution would be to use the -R flag, so it retries itself in case of error.
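A minimal usage sketch of the retry flag mentioned above; the path is a placeholder and the retry count of 3 is arbitrary:

```sh
# Retry each failed upload up to 3 times before reporting it as failed.
gupload "/backup/folder_name" -R 3
```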
A long time has passed, so I will try to send again and, in case there are errors, come back here. Apparently the problem of uploading large files has been solved; now it seems to me to be a Google limit problem.
Yeah, it seems like you hit the hard limit; you cannot upload anything more today. The only solution is to use service accounts.
I'll try to increase the limit and send it again tomorrow; thanks for the support.
@renatowow14 I will try to add the service account feature soon. I have created an issue to track it.
I have added some code to print a message to the user if the upload limit is reached. Reference: rclone/rclone#3857 (comment)
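The exact check isn't shown in the thread; roughly, it amounts to inspecting the API's 403 error response for a rate-limit reason and printing a clear message. A sketch, where the reason strings are assumptions based on the Drive API error docs and the linked rclone thread:

```sh
# After an upload attempt, detect a quota / rate-limit style 403 in the JSON response
# and tell the user plainly instead of failing with a generic error.
if printf "%s\n" "${response}" |
    grep -qE '"reason" *: *"(userRateLimitExceeded|rateLimitExceeded|quotaExceeded)"'; then
    printf "Upload limit reached, try again after 24 hours or use a different account.\n" >&2
    exit 1
fi
```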
Will you update the master repository with the fixes?
Just wait for a bit; I have almost finished updating the script to use service accounts. Just need to finish it up.
The fixes have been merged and a new release has been created: https://github.com/labbots/google-drive-upload/releases/tag/v3.4.3 For further discussion of the rate limit error, move on to #122
Right, so I can already use the master upload script, which will already have the corrections, right? Thank you again.
@renatowow14 Yes |
Command: gupload /home/user/folder
This command reproduces the error; I have 826 sub-folders and 7593 files.
/root/.google-drive-upload/bin/gupload: line 642: : No such file or directory
/root/.google-drive-upload/bin/gupload: line 647: /usr/bin/file: Argument list too long
/root/.google-drive-upload/bin/gupload: line 647: /usr/bin/mimetype: Argument list too long
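For context on the "Argument list too long" errors above: that message appears when a single invocation of /usr/bin/file (or mimetype) receives more arguments than the kernel's ARG_MAX allows, which can happen when thousands of filenames are expanded into one command. A generic illustration, not the script's actual code:

```sh
# Expanding every file into one command can exceed ARG_MAX with thousands of files:
#   file --mime-type /home/user/folder/*/*        -> "Argument list too long"
# Feeding the names in batches via xargs keeps each invocation under the limit:
find /home/user/folder -type f -print0 | xargs -0 file --mime-type
```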