
Help Harmonizing my own image? #2

Open
AlonDan opened this issue Jul 26, 2022 · 7 comments

AlonDan commented Jul 26, 2022

Hi There,
I installed everything on Windows 10 + Anaconda,
I've followed the DEMO instructions and it seems like it works perfectly fine with the 3 images.
So far so good. 🙂

Unfortunately, I got lost at the part that explains how to try my own images:

> If you want to test your own samples, please refer to the folder ./demo/image_harmonization/example to prepare the composite image and the corresponding foreground mask image.

By "refer", I guess it means to put my image in that directory, so I did...
Or is there another command or file I need to run to prepare the composite and mask? As you can see, I got lost.

I would like to try my own image, but I don't understand what I need to do next.
I tried putting my image.jpg in the demo path:
Harmonizer\demo\image_harmonization\example\

But when running the demo:
python -m demo.image_harmonization.run --example-path ./demo/image_harmonization/example
it ignores my image and just processes the 3 provided example images.

🤔 Can someone please explain how to make it work with my own images?

I'm not a programmer so please be kind and explain it step-by-step if possible, Thanks ahead for any help!


UPDATE:
When I tried the Enhanced demo, it also works with my image when I put it in the original folder.

So, I guess Harmonizer won't produce the mask for me like the website demo does?

Do I need to make it with other software somehow? I'm not sure I understand. I'm still exploring the whole thing; I want it to work like the online demo, where I just upload an image and it does the job automatically. I'm missing something, but not sure what.
I hope that somebody can explain it so I can follow :)

ZHKKKe (Owner) commented Jul 26, 2022

Hi, thanks for your attention.
To test your own image, you also need to provide the foreground mask. Note that the foreground mask image will not be generated automatically, so you have to prepare it. Specifically:

  1. Put your RGB composite image in the folder ./demo/image_harmonization/example/composite/
  2. Put the foreground mask image that has the same name as the RGB composite image in the folder ./demo/image_harmonization/example/mask/
  3. Run the demo code
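The steps above can be sketched as a small check script, run from the Harmonizer repo root. This is a minimal sketch assuming the folder names from the steps; `check_pairs` is a hypothetical helper, not part of the repo:

```python
import os

# Folder layout assumed from the steps above (relative to the Harmonizer repo root).
COMPOSITE_DIR = "./demo/image_harmonization/example/composite"
MASK_DIR = "./demo/image_harmonization/example/mask"

def check_pairs(composite_dir, mask_dir):
    """Return base names of composite images that have no mask with the same name.

    The demo expects a mask named like its composite; this compares base names
    only (an assumption -- the extensions may also need to match).
    """
    composites = {os.path.splitext(f)[0] for f in os.listdir(composite_dir)}
    masks = {os.path.splitext(f)[0] for f in os.listdir(mask_dir)}
    return sorted(composites - masks)

# Once check_pairs(COMPOSITE_DIR, MASK_DIR) returns an empty list, run:
#   python -m demo.image_harmonization.run --example-path ./demo/image_harmonization/example
```

If your image shows up in the returned list, it is missing its mask and the demo will not harmonize it.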

AlonDan (Author) commented Jul 26, 2022

> Hi, thanks for your attention. To test your own image, you also need to provide the foreground mask. Note that the foreground mask image will not be generated automatically, so you have to prepare it. Specifically:
>
>   1. Put your RGB composite image in the folder ./demo/image_harmonization/example/composite/
>   2. Put the foreground mask image that has the same name as the RGB composite image in the folder ./demo/image_harmonization/example/mask/
>   3. Run the demo code

Thank you for the friendly reply, I appreciate it and your amazing work!

Is there a branch or version where I can use BOTH, so it will also automatically generate the alpha matte like in the online demo?

Alternatively, what do you recommend for the most accurate / highest quality matte production for both images and videos?

I really like the online demo, which is easy to use and does everything automatically. I wonder if there is a way to install something like that locally on Windows 10 with Anaconda? (not online)

Thanks ahead once again!

ZHKKKe (Owner) commented Jul 28, 2022

> Q1: Is there a branch or version where I can use BOTH, so it will also automatically generate the alpha matte like in the online demo?

The model we used to automatically generate the alpha matte is not released. You may have to do some work to combine a matting model and Harmonizer.

> Q2: Alternatively, what do you recommend for the most accurate / highest quality matte production for both images and videos?

You may try some open-source projects:

  1. For general object matting/segmentation: https://github.com/xuebinqin/U-2-Net
  2. For portrait video matting: https://github.com/PeterL1n/RobustVideoMatting or https://github.com/ZHKKKe/MODNet
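Combining one of these matting models with Harmonizer mostly comes down to saving the predicted alpha matte as a foreground mask image with the same name as the composite. Below is a minimal sketch with Pillow, assuming the matting model has already written its matte as a grayscale image; the file names and the threshold of 128 are illustrative, not from the repo:

```python
from PIL import Image

def matte_to_mask(matte_path, mask_path, threshold=128):
    """Binarize a grayscale alpha matte into a black/white foreground mask.

    Pixels at or above `threshold` become white (255, foreground);
    everything else becomes black (0, background).
    """
    matte = Image.open(matte_path).convert("L")  # force single-channel grayscale
    mask = matte.point(lambda p: 255 if p >= threshold else 0)
    mask.save(mask_path)

# e.g. matte_to_mask("myphoto_matte.png",
#                    "./demo/image_harmonization/example/mask/myphoto.png")
```

Whether a hard binary mask or the soft matte itself works better as Harmonizer's input is worth experimenting with; the demo's own example masks are the reference for what the model expects.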

> Q3: I wonder if there is a way to install something like that locally on Windows 10 with Anaconda?

Nope. I just developed the online demo for fun.

AlonDan (Author) commented Jul 28, 2022

Thank you so much for your detailed reply, I appreciate it!

> The model we used to automatically generate the alpha matte is not released.

Will you please consider releasing the model you're using as an additional one?

It would be very interesting to test and compare locally rather than online. I would also love to help with some reports and different results as a simple user (since I'm not a programmer); it may help and give you some indication and ideas for how to improve, if needed of course.


> You may try some open-source projects: For general object matting/segmentation: https://github.com/xuebinqin/U-2-Net For portrait video matting: https://github.com/PeterL1n/RobustVideoMatting or https://github.com/ZHKKKe/MODNet

Thank you for your suggestions, I've tried them all.
The most interesting was RVM (Robust Video Matting) for video, but the results are not as good as in your online demo. I did some comparisons at different resolutions and with less/more complex cases (mostly simplistic, for less background confusion).
U-2-Net is a bit tricky, but I could also test MODNet, which shows great potential; the problem is that the pre-trained models are mostly not as good as the video demonstrations.

That's why I'm really interested in the online DEMO on your website that uses your own trained model, which is more accurate. I also compared it to other projects, and in most cases YOURS was more accurate. I can share the other online demos if you like; they probably use another technique, or maybe the same as yours but a different model.


> Q3: I wonder if there is a way to install something like that locally on Windows 10 with Anaconda?
> Nope. I just developed the online demo for fun.

I found it very interesting to compare how accurately the same source images are handled, but with the current pre-trained model I can't really get good results, only with your online demo.

I really hope that you can either release the same model you used, or, even just for the sake of research, send it to me privately so I can experiment and share some reports and comparisons.

Thank you once again for your hard work, please keep up the good work!
And don't hesitate to contact me if you consider sharing your demo's trained model.

ZHKKKe (Owner) commented Jul 29, 2022

@AlonDan
I've been thinking about how to open-source these powerful pretrained models for the research community.
However, I don't want these models to be used commercially without telling me. An open-source license does not effectively prevent others from using the model for commercial purposes (the early license of MODNet was a non-commercial license, but I have found some commercial products that ignore it).
So I need more time to think about this. It will take quite a while, at least until next year.

AlonDan (Author) commented Jul 29, 2022

@ZHKKKe I understand and respect that; licensing can be complex, for sure.

I may have misunderstood, but regarding MODNet (and maybe even Harmonizer):
Can you please take the time to read my other post related to training?

I believe it would help if you, or anyone who's an expert in training in general, could answer most of the confusing questions, which I took the time to post for the sake of "training newbies" who will appear in the future.

I'm very curious to learn about it through experimenting, because using pre-trained models is one thing; making your own dedicated models is a different story.

ZHKKKe (Owner) commented Jul 30, 2022

The other post you released covers a huge topic. I will try to check it and comment soon.
