Add support for downloading input images from S3 bucket. #181
How I'm going to implement this: I'll add a function similar to the one at Line 77 in 6acd372
Modifications will have to be made in places like these: Lines 201 to 284 in 6acd372
I'd like to know your thoughts on this, and any direction on implementing it is much appreciated :)
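For illustration, a download helper along those lines might look roughly like the sketch below. The function name, bucket, and prefix are hypothetical, and the boto3 dependency is an assumption rather than something ODM currently ships with:

```python
# Hypothetical sketch: download every image under an S3 prefix into the
# project's images directory before ODM starts processing.
import os
import boto3  # assumed to be available in the ODM Python environment

def download_s3_images(bucket, prefix, dest_dir):
    """Copy all objects under s3://bucket/prefix into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):
                continue  # skip folder placeholder objects
            local_path = os.path.join(dest_dir, os.path.basename(key))
            s3.download_file(bucket, key, local_path)

# Example usage (bucket and prefix are placeholders):
# download_s3_images("my-drone-data", "flights/2023-10-01/", "/datasets/project/images")
```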
Also interested in doing something similar. My thought was to mount a GCP bucket onto the local filesystem with the gcloud API.
Did you make any progress on this? I'm trying to do the same thing and would love to know if you got it working.
No, I'm working on a different feature in the ODM engine. You're welcome to add this enhancement :)
Hello everyone. I would like to update this issue. I have done some research and realized that there are two ways to transfer images from S3 to ODM. The first is to add an S3 photo download feature to the ODM Python Docker image itself. The second is to add an S3 photo download feature to NodeODM and then pass the images on to the Python side. I have some draft code. May I open a pull request?
Always feel free to open pull requests.
This sounds fascinating! I look forward to seeing what you have, but Piero and Stephen likely need to weigh in first.
Hi. My ODM container is configured to download images from an S3 bucket on each startup. I'm planning to modify NodeODM to pass S3 bucket parameters for each task. However, I need to confirm how NodeODM manages ODM Docker containers: does it start a new ODM container for each task, or does it reuse an existing one? Understanding this will help me make sure my modifications work as intended. Thanks
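For what it's worth, a container-startup download of that shape could look something like the sketch below. The S3_BUCKET / S3_PREFIX environment variable names and the paths are made up for illustration, and the final run.py invocation just mirrors how the ODM container is normally started:

```python
# Hypothetical entrypoint wrapper: read bucket details from environment
# variables and populate the images directory before invoking ODM.
import os
import subprocess
import boto3  # assumed available in the container

bucket = os.environ["S3_BUCKET"]          # hypothetical variable names
prefix = os.environ.get("S3_PREFIX", "")
dest = "/datasets/project/images"         # placeholder project layout

os.makedirs(dest, exist_ok=True)
s3 = boto3.client("s3")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if not obj["Key"].endswith("/"):
            s3.download_file(bucket, obj["Key"], os.path.join(dest, os.path.basename(obj["Key"])))

# Hand off to the normal ODM run once the images are in place.
subprocess.run(["python3", "/code/run.py", "--project-path", "/datasets", "project"], check=True)
```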
Hello, I think similar results can be achieved using the zipUrl param of this route: https://github.com/OpenDroneMap/NodeODM/blob/master/docs/index.adoc#post-tasknew Just zip the images before uploading them to S3.
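A minimal sketch of that approach, assuming a pre-zipped image set already sits in S3 and a NodeODM instance on localhost:3000; the bucket name, object key, and exact parameter casing should be checked against the docs linked above:

```python
# Hedged sketch: create a NodeODM task that pulls a pre-zipped image set
# from S3 via a presigned URL, instead of uploading the images directly.
import json
import boto3
import requests  # assumed available

s3 = boto3.client("s3")
# "my-bucket" and the object key below are placeholders.
presigned_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "flights/2023-10-01/images.zip"},
    ExpiresIn=3600,  # link valid for one hour
)

# Send the fields as multipart/form-data text parts, which is what
# the NodeODM upload endpoint expects.
resp = requests.post(
    "http://localhost:3000/task/new",
    files={
        "name": (None, "s3-zip-test"),
        "zipurl": (None, presigned_url),
        "options": (None, json.dumps([{"name": "fast-orthophoto", "value": True}])),
    },
)
print(resp.json())  # should contain the new task's uuid
```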
This issue can be bypassed with MinIO by downloading a zipped directory directly from its API (see below). I'm not sure if AWS S3 supports this, or whether it would require a Lambda function to achieve the same. Then, as mentioned above, the zipUrl param can be used to hand that archive to NodeODM.

Further Context

Hopefully the info below is useful to someone.

Our Existing Workflow
Obviously this is extremely inefficient. It uses unnecessary resources on the Python server, including possibly locking a thread for hours while NodeODM is polled for completion. It would be much nicer to have NodeODM download the files it needs for processing instead.

Proposed Solution
Efficiency gain 1: downloading the files from S3 once, on the NodeODM service, instead of the download/re-upload combo on an intermediary service (e.g. a Python/PyODM layer).
Efficiency gain 2: TBC, a way to avoid polling NodeODM for job completion? I feel this problem is probably already solved by ClusterODM, so I will look into how it's solved there.
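On the polling point: as far as I can tell, NodeODM tasks can be given a webhook URL that is called when processing ends, which would remove the need to keep polling for status. A rough sketch of that flow is below; the webhook parameter and especially the payload shape should be verified against the NodeODM docs/source, and the callback path and URLs are made up for illustration:

```python
# Hedged sketch: fire-and-forget task creation plus a tiny receiver that is
# notified when NodeODM finishes, instead of a thread polling for completion.
import json
import requests
from flask import Flask, request  # assumed available

# 1) Create the task, pointing NodeODM back at our callback URL.
requests.post(
    "http://nodeodm:3000/task/new",
    files={
        "name": (None, "s3-task"),
        "zipurl": (None, "https://example.com/presigned-images.zip"),      # placeholder
        "webhook": (None, "http://my-api:8000/webhooks/odm-complete"),      # assumed NodeODM option
        "options": (None, json.dumps([{"name": "dsm", "value": True}])),
    },
)

# 2) Minimal callback receiver; the payload shape is an assumption and
#    should be checked against what NodeODM actually posts.
app = Flask(__name__)

@app.route("/webhooks/odm-complete", methods=["POST"])
def odm_complete():
    task_info = request.get_json(force=True)
    print("Task finished:", task_info)
    # e.g. trigger a download of the task's results archive here
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```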
Check the
What is the problem?
There should be a way for NodeODM to directly get input images from an S3 bucket/folder and use them for processing. Currently, one has to upload them from their machine to a remote instance.
What should be the expected behavior?
There should be a field where one can simply enter the S3 path to the images folder and have it processed, saving the results in the folder specified by the user.
How can we reproduce this? (What steps did you do to trigger the problem? Be detailed)
It's a feature request.