WebGLRenderer: Allow for binding, rendering into mipmap storage of textures #29779
Comments
Perhaps it's best to accept a number for pure storage rather than overloading an array of textures (see three.js/examples/webgl_materials_cubemap_render_to_mipmaps.html, lines 117 to 118 at 841ca14).
Implementation-wise, it would be nice to support …
The "mipmaps" field isn't documented all that well and I don't fully understand how to use it currently. But as far as I know it's used for storing and uploading mipmap data when it's already generated and stored in a file format. The cubemap case looks like an odd workaround / hack to get mipmaps generated. In terms of specifying a number of levels, are there common use cases for not just generating mipmaps down to a 1x1 pixel when they're needed?
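For reference on the "down to 1x1" question, the number of levels in a full chain follows directly from the base dimensions. The helper names below are illustrative, not three.js API; they just restate how WebGL computes the chain:

```javascript
// Number of mip levels needed to reduce a texture down to 1x1,
// matching WebGL's definition: floor(log2(max(w, h))) + 1.
function mipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// Each successive level halves each dimension (rounded down, min 1).
function mipLevelSize(base, level) {
  return Math.max(1, Math.floor(base / 2 ** level));
}

console.log(mipLevelCount(1024, 1024)); // 11 levels: 1024 down to 1
console.log(mipLevelSize(1024, 10));    // 1
```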
I expected this could be set by the user using …
That should work if all the MRT attachments are attached to the framebuffer with the appropriate mipmap levels, I believe.
Yes, Hi-Z as one example, which does min/max reduction (depending on use of reverse depth), and the rest of the pipeline is very particular about the actual size and number of levels, e.g. NPOT. For lower-spec devices, just a few coarse levels are enough. Many other techniques use hierarchical structures, which don't simply blur or carry over data but merge or interpolate. Fanciest I suppose would be Radiance Cascades, which is worth a read itself. I'm not sure if PMREM would count, but maybe that's a decent place to try it.
I expect this needs API changes in core, but the actual implementation is supported by both WebGL and WebGPU. Here's the WebGPU sample I was thinking of, where these are parameters of both the attachment and texture view. In WebGL, these are binding calls at a level with the framebuffer and to the texture at another level. Maybe we can first assume a reasonable level for sampling based on the level we are rendering to rather than leave it to configuration. gpuweb/gpuweb#386 (comment)
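On the WebGL side, "binding calls at a level" amounts to a framebuffer attachment parameter plus a matching viewport. The following sketch assumes a `WebGL2RenderingContext` and uses hypothetical helper names, not three.js API:

```javascript
// Size of mip `level` given the base dimension (halved per level, min 1).
function levelSize(base, level) {
  return Math.max(1, Math.floor(base / 2 ** level));
}

// Attach a specific mip level of `texture` as the draw target.
// The last argument of framebufferTexture2D selects the mip level,
// and the viewport must match that level's dimensions.
function attachMipLevel(gl, framebuffer, texture, level, baseWidth, baseHeight) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(
    gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, level
  );
  gl.viewport(0, 0, levelSize(baseWidth, level), levelSize(baseHeight, level));
}
```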
I think it's reasonable to expect the configurations of all attachments and textures to be the same for anything MRT. I've implemented this before externally by hacking …
I guess the question is more: is it a problem if memory is just allocated down to 1x1 mipmaps in these cases? Depending on the need, you would be able to generate only the first few mip levels. Of course it's ideal not to allocate memory that's unused, but it might be a trade-off for a more ergonomic, easier-to-integrate change.
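For a rough sense of that trade-off: a full chain down to 1x1 adds about one third on top of the base level, so the worst-case unused allocation is bounded. A back-of-the-envelope check (the helper is illustrative, not three.js API):

```javascript
// Total texel count of a mip chain, optionally capped at `levels` levels.
function mipChainTexels(width, height, levels = Infinity) {
  let total = 0;
  let w = width, h = height, level = 0;
  while (level < levels) {
    total += w * h;
    if (w === 1 && h === 1) break; // reached the 1x1 level
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
    level++;
  }
  return total;
}

const base = mipChainTexels(1024, 1024, 1); // base level only
const full = mipChainTexels(1024, 1024);    // full chain down to 1x1
console.log((full / base).toFixed(3));      // "1.333" — ~33% overhead
```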
An example would be nice, but it looks like it would amount to calling …
Description
Rendering custom mipmaps can be valuable for a number of use cases, such as post processing and stylization, but it's not something three.js supports currently.
I think there are a few concept disconnects currently. One is that `generateMipmaps` indicates both that mipmap storage should be allocated and that the mip chain contents should be generated. When generating custom mipmaps, these concepts should be separate, i.e. you may want options like `generateMipmapStorage` and `generateMipmapContents`, or a setting that enumerates the three options. Another is that you cannot currently render into just any texture's storage.
cc @CodyJasonBennett
Solution
These are some solutions that come to mind - there are no doubt others. I can't say these are optimal or align with what's possible in WebGPU but I'll list them here to start the discussion:
Generating Mipmap Storage w/o Contents
The `generateMipmaps` setting could be changed to take three options so that allocating storage does not implicitly mean generating content: `NO_MIPMAPS` (current `false`), `MIPMAP_STORAGE`, or `MIPMAP_CONTENTS` (current `true`).

Rendering to Mipmaps (#29844)
Currently `setRenderTarget` supports taking an `activeMipmapLevel`, but as far as I can tell this will only work if the user has specified textures in the `texture.mipmaps` array, or if the target is a 3D texture or cube map. The active mipmap level could also apply to the automatically-generated mipmap storage using `framebufferTexture2D`.

Writing to Regular Texture Mipmaps
The above solutions only really apply to render targets, but generating custom mipmaps for regular textures (normal maps, data textures, etc.) is also relevant. A simple solution would be to enable setting a regular, non-render-target texture as a renderable target without a depth buffer.
Alternatives
Generating Mipmap Storage w/o Contents
To do this currently, you can create a render target, initialize it with `generateMipmaps = true`, and then disable the flag once the storage has been allocated. This, however, still incurs the overhead of generating mipmaps on creation.

Rendering to Mipmaps / Writing to Regular Texture Mipmaps
Using `copyTextureToTexture`, custom mipmaps can be generated with render targets and then copied into the appropriate mipmap level. The additions in #29769 allow for copying any existing mipmap data, as well. This solution, however, incurs unneeded copying overhead and an additional render target.
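The copy-based workaround might look roughly like the sketch below. It assumes a `THREE.WebGLRenderer` and a destination texture with mip storage allocated; note that the `copyTextureToTexture` signature has changed between three.js releases, so treat this as illustrative rather than exact:

```javascript
// Sketch: render a custom downsample pass into `renderTarget`, then copy
// the result into mip `level` of `dstTexture`. Hypothetical wrapper, not
// three.js API; the copy signature shown follows recent releases.
function copyIntoMipLevel(renderer, renderTarget, dstTexture, level) {
  renderer.copyTextureToTexture(
    renderTarget.texture, // source (the freshly rendered pass)
    dstTexture,           // destination texture with allocated mip storage
    null,                 // full source region
    null,                 // destination origin
    level                 // destination mip level
  );
}
```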
Additional Context
WebGPU does not support automatic generation of mipmaps: gpuweb/gpuweb#386
The answer to this Stack Overflow question shows that it's possible to render into a texture's mipmap storage while sampling from the immediate parent mip by setting `TEXTURE_MAX_LEVEL`, `TEXTURE_BASE_LEVEL`, and `TEXTURE_MAX_LOD`. Setting these can probably be left to the user.