[Bug]: Creating exports on Laravel Vapor: Could not find zip member zip #4177
Comments
I'm not certain why, but I just recently started to get the same error. Also using an S3 bucket for the temp folder. |
I don't use Vapor myself, so I have no clue what exactly goes wrong. Feel free to PR a fix |
I am on Vapor and I can confirm this happens too. And it's been quite a problem lately. |
@fransjooste1 @chatisk We need to figure this out. What are the commonalities between our projects besides Vapor? Are you using Octane? |
@devinfd yes we are also using Octane |
Ok, Octane is my first guess to the source of the problem. I suspect this is not a Laravel Excel problem but I'm not certain yet. I'm going to spend some time on this bug over the next few days. |
@devinfd I am not using Octane. We use S3, and we have filled in both settings for temporary files. Take a look:

```php
'local_path' => env('APP_ENV') === 'local'
    ? storage_path('framework/cache/laravel-excel')
    : sys_get_temp_dir(),
'remote_disk' => 's3-tenant', // Multi-tenancy
'force_resync_remote' => true,
```
|
Same project as @chatisk here.
No better luck. The exception we get is:
Our temp storages are configured as follows (vapor.yml):
As @fransjooste1 mentioned in the first post, it's as if the system is trying to access the export while it isn't there yet. It's like the workers are running in parallel, not on the same container (so not sharing the same temp storage). |
Same here, also not on Octane but on plain Laravel & Vapor. Our config looks about the same as mentioned above |
It's possible this PR #4034 has something to do with it. I don't use Vapor (or a multi-server setup) and have no time to set one up to test this, so if someone wants to figure this out, feel free to PR a fix. |
Just as a follow-up: commenting out L169 to L172 of |
Hi all, I no longer believe that Octane has anything to do with this bug. I agree with @jbajou's and @fransjooste1's assessment that the system is trying to access the export file while it isn't there yet. |
I just PR'ed something that addresses the issue. I've deployed it in our sandbox env and it's working so far. Are you guys able to test it on your end too? |
If people using Vapor can let me know if the PR works, I'll release a fix with it |
The soonest I can try is this weekend. I’ll let you know soon after |
@jbajou I'm not on a serverless environment and I get the same error. We are running in k8s and have multiple queue workers running. Is a more general check possible? |
The PR has been merged and released. Our issue was resolved that way. Perhaps you need to add some extra checks to the new function it introduced. |
The PR's check only runs on AWS, as our issue was on Lambda functions. |
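For those asking about a more general check: AWS-only detection presumably keys off the Lambda runtime environment, while the existing force_resync_remote config option is the package's general-purpose knob for multi-worker setups. A rough sketch of what a broader check could look like (the helper function is hypothetical, not package code; AWS_LAMBDA_FUNCTION_NAME is a real Lambda runtime variable):

```php
// Hypothetical sketch of a more general "should we resync the remote
// temp file?" check that would also cover k8s and other multi-worker
// deployments, not only AWS Lambda.
function shouldResyncRemote(): bool
{
    // The AWS Lambda runtime (and therefore Vapor) sets this variable.
    $isLambda = getenv('AWS_LAMBDA_FUNCTION_NAME') !== false;

    // force_resync_remote is an existing laravel-excel config option;
    // multi-worker setups outside AWS could opt in through it.
    return $isLambda || (bool) config('excel.temporary_files.force_resync_remote');
}
```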
Can confirm the same issue. I've been struggling with this since the export size got bigger than one request could handle. After upgrading 3.1.55 => 3.1.58 it feels like the problem has actually gotten worse: even less chance of a successful export. Serverless Vapor, without Octane.

EDIT: I actually tried to work around this with:

```php
if ($temporaryFile instanceof RemoteTemporaryFile && !$temporaryFile->existsLocally() && !$this->isRunningServerless()) {
    $temporaryFile = resolve(TemporaryFileFactory::class)
        ->makeLocal(Arr::last(explode('/', $temporaryFile->getLocalPath())));
}
```

But it fails on this:

```php
$writer->save(
    $temporaryFile->getLocalPath()
);
```

Relevant stacktrace: |
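For anyone experimenting with workarounds along these lines: the recurring idea in this thread is to re-download the remote temporary file before a job reads it, rather than trusting the local /tmp copy to survive between jobs. A minimal sketch of that idea using the plain Storage facade (the helper function and the 's3' disk name are illustrative, not package internals):

```php
use Illuminate\Support\Facades\Storage;

// Illustrative helper: make sure this worker has a local copy of the
// temporary spreadsheet before PhpSpreadsheet tries to open it as a zip.
// On Vapor each job can run in a fresh container, so a file written by a
// previous job is not necessarily in this container's /tmp.
function ensureLocalCopy(string $remotePath, string $localPath): void
{
    if (! file_exists($localPath)) {
        file_put_contents($localPath, Storage::disk('s3')->get($remotePath));
    }
}
```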
I have this error in an export:
No Octane, no Vapor. I only get the error with a large amount of data; with a small amount everything is OK. I use fromQuery + queue. On my local machine and on the staging project (the same server as production) there is no problem with 9-10k records, but the production project hits this error with 6k-10k records. |
I also added a separate "exports" queue and started running exports on it instead of "default" as before, and I reduced the chunk size from 1,000 to 300 rows per chunk (see the sketch below). But I think the main problem was still the memory limit. |
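For reference, a minimal sketch of those two changes, assuming a queued FromQuery export (UsersExport and the queue name are placeholders; chunk_size lives under the exports key in config/excel.php):

```php
// config/excel.php (excerpt): reduce the rows written per chunk job.
'exports' => [
    'chunk_size' => 300,
],
```

```php
use Maatwebsite\Excel\Facades\Excel;

// Queue the export on a dedicated "exports" queue instead of "default",
// so export jobs don't compete with the rest of the application's jobs.
Excel::queue(new UsersExport, 'users.xlsx')->allOnQueue('exports');
```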
Is the bug applicable and reproducible on the latest version of the package, and hasn't it been reported before?
What version of Laravel Excel are you using?
3.1.55
What version of Laravel are you using?
11.19.0
What version of PHP are you using?
8.3
Describe your issue
When doing exports within a Vapor environment, it seems like the AppendQueryToSheet job is spawned before the file is fully created on S3.
Keep running into:

```
PhpOffice\PhpSpreadsheet\Reader\Exception
Could not find zip member zip:///tmp/laravel-excel-BSReujKfyopUuGIeaQy50PP4DyuxvXee.xlsx#_rels/.rels
```

(An .xlsx file is a zip archive, and _rels/.rels is one of the first members the reader opens, so this error generally means the file being read is empty or truncated.)
When viewing S3, the file does exist.
Additionally, the export sometimes works, but most of the time it fails.
Vapor (AWS) doesn't allow you to create a queue of size 1, so the other chunks of jobs are usually picked up by another worker.
Config:
FileSystem:
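For context, the temporary-file settings in play here live in config/excel.php; a Vapor-style setup typically looks something like the sketch below (illustrative values, not the reporter's exact config; the 's3' disk name is an assumption):

```php
// config/excel.php (excerpt)
'temporary_files' => [
    // Local scratch space; on Vapor/Lambda only /tmp is writable.
    'local_path' => sys_get_temp_dir(),

    // A remote disk lets parallel workers on different containers
    // share the partially built spreadsheet.
    'remote_disk' => 's3',

    // Re-download the remote copy before appending, at the cost of
    // extra S3 round trips.
    'force_resync_remote' => true,
],
```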
How can the issue be reproduced?
This isn't easy to recreate, unfortunately.
You need to be on Laravel Vapor.
Create an environment with queued workers.
Create an export with a chained mail or reporting job on a specific queue.
You need to export items that have a large number of rows, forcing chunking of the query results.
The jobs being spawned will grab the file before it is ready, causing the error.
What should be the expected behaviour?
I think a solution could be to wrap AppendQueryToSheet, or anywhere spawned jobs access the file, in a quick retry function that tries to fetch the file again after a small delay (see the sketch at the end of this issue). I believe another job is fired off the queue too fast and accesses the file while it is still being written to S3 in an incomplete state.
This is how I solved the same issue (the error was the same) in another piece of code, where we download the file directly if it is small enough.
I could be entirely off the mark too; any feedback or suggestions would be greatly appreciated.
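A minimal sketch of the retry idea suggested above, built on Laravel's retry() helper (the wrapper function is illustrative, not the package's actual code):

```php
use Maatwebsite\Excel\Files\TemporaryFile;
use PhpOffice\PhpSpreadsheet\IOFactory;
use PhpOffice\PhpSpreadsheet\Reader\Exception as ReaderException;
use PhpOffice\PhpSpreadsheet\Spreadsheet;

// Illustrative wrapper: re-attempt opening the temporary spreadsheet a few
// times, sleeping between attempts, to ride out the window in which S3
// still serves an incomplete file.
function openWithRetry(TemporaryFile $temporaryFile): Spreadsheet
{
    return retry(
        5,                                          // attempts
        fn () => IOFactory::load($temporaryFile->getLocalPath()),
        250,                                        // ms between attempts
        fn ($e) => $e instanceof ReaderException    // only retry the zip-member error
    );
}
```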