Hi, the tool works great, thanks a lot! Do you know of a good workflow for texturing my 3D model, though? I already projected the initial texture from the front, but now I need to texture the rest of my monster's body, at least from the side and back.
I thought maybe I could use the stencil projection mode in texture painting. Which camera settings would I need to set in Blender or Substance Painter to exactly match the generated images from the different angles?
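To make that concrete, here is roughly what I have in mind for the Blender side: a script that parks a camera on a sphere around the object at one of zero123++'s viewpoints, so a stencil or render from that camera lines up with the generated image. The interleaved 30°/−20° elevations and 30°–330° azimuths follow my reading of the zero123++ README; the camera distance and focal length are pure guesses that would need tuning until the silhouettes match.

```python
import math

import bpy
from mathutils import Vector

def place_camera(elev_deg, azim_deg, distance=3.0, name="ProjCam"):
    """Put a camera on a sphere around the world origin, aimed at the origin."""
    elev = math.radians(elev_deg)
    azim = math.radians(azim_deg)
    loc = Vector((
        distance * math.cos(elev) * math.sin(azim),
        -distance * math.cos(elev) * math.cos(azim),
        distance * math.sin(elev),
    ))
    cam_data = bpy.data.cameras.new(name)
    cam_data.lens = 50  # guess: tune until the render matches the generated view
    cam = bpy.data.objects.new(name, cam_data)
    bpy.context.collection.objects.link(cam)
    cam.location = loc
    # A camera looks down its local -Z axis, with +Y as up.
    cam.rotation_euler = (-loc).to_track_quat('-Z', 'Y').to_euler()
    return cam

# Assumed zero123++ viewpoints (elevation, azimuth) relative to the front view:
ZERO123PP_VIEWS = [(30, 30), (-20, 90), (30, 150),
                   (-20, 210), (30, 270), (-20, 330)]

for i, (elev, azim) in enumerate(ZERO123PP_VIEWS):
    place_camera(elev, azim, name=f"ProjCam_{i}")
```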
If you know of any Python code to do this automatically, that would be nice too. zero12345 wouldn't work, as it generates its own mesh; I already have my mesh, from which I made the first original texture (the front projection).
Maybe this could be partly scripted in Blender into a genuinely useful, standout texture-pipeline tool.
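As a starting point for that script, something like the loop below: for each generated view, make the matching camera (place_camera from the sketch above) the scene camera and project the image onto the mesh with Blender's "Apply Camera Image" operator. I am not sure the minimal context setup here is enough in every Blender version, so treat it as a sketch rather than a finished tool.

```python
import bpy

def project_views(obj, view_images):
    """view_images: list of (elevation, azimuth, image_name) tuples,
    where each image was loaded beforehand via bpy.data.images.load()."""
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='TEXTURE_PAINT')
    for i, (elev, azim, image_name) in enumerate(view_images):
        cam = place_camera(elev, azim, name=f"ProjCam_{i}")
        bpy.context.scene.camera = cam
        # Project the generated image through the camera onto the paint slot.
        bpy.ops.paint.project_image(image=image_name)
    bpy.ops.object.mode_set(mode='OBJECT')

# Hypothetical usage (object and file names are placeholders):
# imgs = [(30, 30, bpy.data.images.load("//views/view_0.png").name), ...]
# project_views(bpy.data.objects["Monster"], imgs)
```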
EDIT: Now that I think about it, would it be possible to send an arbitrary (or fixed) angle to zero123+?
Like this: we move around the object in Blender, and when we find a good angle (which may differ from object to object), we send the camera position and angle (and maybe even an image for ControlNet) to zero123+, generate the next image, and receive it back. Then we move around the object again, send to zero123+ again, and receive the next image, and so on.
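Sketched as a tiny client, that round trip could look like the snippet below. To be clear, zero123++ itself only emits its fixed grid of views, so this presumes a server wrapping a model that accepts an arbitrary relative pose (the original Zero-1-to-3 is conditioned on delta elevation/azimuth/radius, for example); the URL and JSON payload are made up for illustration.

```python
import base64
import json
import math
import urllib.request

import bpy

def request_view(cam, url="http://localhost:8000/generate"):  # hypothetical endpoint
    """Read the camera's spherical pose and ask the server for a matching view."""
    x, y, z = cam.location
    radius = math.sqrt(x * x + y * y + z * z)
    elev = math.degrees(math.asin(z / radius))
    azim = math.degrees(math.atan2(x, -y))  # matches place_camera's convention
    payload = {"elevation": elev, "azimuth": azim, "radius": radius}
    req = urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=600) as resp:
        reply = json.loads(resp.read())
    # Assumption: the server returns the image as a base64-encoded PNG.
    path = bpy.path.abspath(f"//generated_{int(azim)}.png")
    with open(path, "wb") as f:
        f.write(base64.b64decode(reply["image"]))
    return bpy.data.images.load(path)
```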
The intermediate diffusion steps could be cached in the meantime, so we always get the consistent style and shading of the first/initial picture (similar to how ComfyUI only re-evaluates the changed parts of the diffusion pipeline), just as it does now.
Maybe ComfyUI would be the tool to make all of this a lot easier, as it can already communicate with other programs such as Blender, has a really good codebase, and makes integrating new modules "easy" without having to write everything from scratch.
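ComfyUI does already expose the HTTP API this would need: you can export any workflow with "Save (API Format)" and queue it by POSTing the JSON to its /prompt endpoint. Below is a minimal sketch of driving that from Blender's Python (stdlib only); the node id and input names being patched depend entirely on your graph and are placeholders here.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # ComfyUI's default address

def queue_workflow(workflow_path, elevation, azimuth):
    """Patch pose inputs into an API-format workflow and queue it."""
    with open(workflow_path) as f:
        workflow = json.load(f)
    # Placeholder: node "12" stands in for whatever node consumes the pose.
    workflow["12"]["inputs"]["elevation"] = elevation
    workflow["12"]["inputs"]["azimuth"] = azimuth
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```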
Ulf3000 changed the title from "What to do with the generated images ?" to "Discussion/question: What to do with the generated images ? (texturing 3d models)" on Nov 25, 2023.
Hi @Ulf3000, as far as I understand this paper and codebase, it will not let you pass an arbitrary view to project the texture from. It has a fixed set of angles, and only those can be used (all at once). If you want to generate one view at a time, you will need to look elsewhere for a model that generates images consistent with your geometry.
By the way, if you find such a model, please share it with me too.