Environment Map Lighting with separate reflections. #218
-
Hi, I have limited experience with OptiX, and I am currently working in parallel with the OWL project and the path tracer in the NVIDIA samples. I would prefer to use only OWL, but I am uncertain whether the project will continue and be updated for future OptiX versions.

I have successfully managed to load an environment map into both the path tracer and the Simple Triangles example in OWL. In both examples I used the MissData to pass either a cudaTextureObject_t or an OWLTexture hdrTexture, loaded the texture into the necessary CUDA/OWL types, and sampled it from my miss program. Everything worked fine, but when I tried to use my HDR map in the rtow-mixed-geometry example, things did not go as expected (using a miss program I passed the texture, but I cannot work out how to handle the scatterEvent == rayDidntHitAnything case within the tracePath function...). I tried both options: first I attempted to update the miss-color function, as OWL does not require a miss program for this particular example, but I also tried to implement a miss program.

Considering that my ultimate goal is to illuminate my scene with two environment maps - one for lighting my scene and setting the background, and a second one just for reflections (which is necessary for solar glare assessment) - what would be your recommendation in terms of organizing my pipeline? Any suggestions/advice are welcome.

P.S. I have been working with raytracing for daylight using Radiance for 20 years, and it's just amazing to load 2,000,000 spheres into OWL and achieve this level of performance in real time! Many thanks...
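(For reference, a minimal sketch of the MissData approach described above, assuming a MissProgData struct holding a float4 HDR texture - set host-side via owlMissProgSetTexture() - and a vec3f color as per-ray data; all names and the lat-long mapping are illustrative, not taken from the samples:)

```cpp
// Minimal sketch of the miss-program env-map lookup. The host side would
// set the texture with owlMissProgSetTexture(missProg, "hdrTexture", tex);
// names here are illustrative.
#include <owl/owl_device.h>

struct MissProgData {
  cudaTextureObject_t hdrTexture;  // assumed created as a float4 texture
};

OPTIX_MISS_PROGRAM(miss)()
{
  const MissProgData &self = owl::getProgramData<MissProgData>();
  owl::vec3f &prd = owl::getPRD<owl::vec3f>();

  // map the (normalized) ray direction to lat-long texture coordinates
  const float3 d  = optixGetWorldRayDirection();
  const float  PI = 3.14159265358979f;
  const float  u  = 0.5f + atan2f(d.z, d.x) / (2.f * PI);
  const float  v  = acosf(fmaxf(-1.f, fminf(1.f, d.y))) / PI;

  const float4 texel = tex2D<float4>(self.hdrTexture, u, v);
  prd = owl::vec3f(texel.x, texel.y, texel.z);
}
```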
-
Hi, @ingowald might chime in here, but what I usually do with environment maps when using OWL is evaluate them in the ray generation program directly. In the rtow-mixed-geometry sample the `scatterEvent` variable is returned inside `prd` and, if I read the example correctly, should be set to `rayDidntHitAnything` if the scattering loop exits without intersecting anything.

Note that in other projects we do a lot of processing in `raygen`, including things like BRDF evaluation that in the samples are implemented in `closest_hit`. I don't think this causes big performance hits, though I'm not sure. But it definitely simplifies development a lot. In that case `closest_hit` becomes more or less a pass-through. Cheers
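(A hedged sketch of that raygen-side approach, loosely following the tracePath() loop of the rtow samples. The PRD field and enum names mirror that sample; `self.envMap` and sampleEnvMap() are illustrative additions:)

```cpp
// Lat-long lookup helper; same illustrative mapping as in the sketch above.
inline __device__
owl::vec3f sampleEnvMap(cudaTextureObject_t envMap, owl::vec3f d)
{
  const float PI = 3.14159265358979f;
  d = normalize(d);
  const float u = 0.5f + atan2f(d.z, d.x) / (2.f * PI);
  const float v = acosf(fmaxf(-1.f, fminf(1.f, d.y))) / PI;
  const float4 t = tex2D<float4>(envMap, u, v);
  return owl::vec3f(t.x, t.y, t.z);
}

inline __device__
owl::vec3f tracePath(const RayGenData &self, owl::Ray &ray, PerRayData &prd)
{
  owl::vec3f attenuation = 1.f;
  for (int depth = 0; depth < 50; depth++) {
    // make an escaped ray recognizable even without a miss program
    prd.out.scatterEvent = rayDidntHitAnything;
    owl::traceRay(self.world, ray, prd);

    if (prd.out.scatterEvent == rayDidntHitAnything)
      // the ray left the scene: look up the env map with the *current*
      // ray direction (see the follow-up posts below)
      return attenuation * sampleEnvMap(self.envMap, ray.direction);

    if (prd.out.scatterEvent == rayGotCancelled)
      return owl::vec3f(0.f);

    // rayGotBounced: attenuate and continue from the scattered ray
    attenuation *= prd.out.attenuation;
    ray = owl::Ray(prd.out.scattered_origin,
                   prd.out.scattered_direction,
                   1e-3f, 1e10f);
  }
  return owl::vec3f(0.f); // path got too long
}
```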
-
Hi, I passed my texture through RayGenData to my raygen program and, after calling traceRay, in the rayDidntHitAnything branch I picked up prd.out.scattered_direction to compute my UVs and look up the color from the texture. Finally, I multiply the attenuation with that color. The image I am getting shows all the reflections of the sky correctly, but the sky itself is not mapped properly. I also attached an image...
-
Hey, George, sorry for the late answer, but it took a while to get to it - you're actually asking several questions at once, and I wanted to reply properly rather than just dashing something off quickly.

Re "will this survive": I cannot guarantee that it does; however, if you look at the "traffic" in this project, it is very much alive right now. I can also tell you that OWL is "the" workhorse for all the different OptiX-related projects that I and the other people involved in this project have done in the four or so years since it started - it's just so much easier to use OptiX through OWL, why would I ever go back to not using it? So can I guarantee that it will live forever? No - it's not an official "supported project" with a guaranteed support team - but as long as I'm going to do anything with OptiX, I'm going to keep using and maintaining OWL.

Re "OWL vs NVIDIA sample path tracer": I didn't write this path tracer, so I can only say limited things. However, I'd point out that the two are not "opposites" or even "competitors", but actually two sides of the same coin, and complementary. The path tracer sample's main goal is to teach/show users how to write a path tracer - i.e., how to generate the rays, how to trace them, how to handle bounces, shading, lighting, etc. It also has to set up a pipeline, create SBTs, and so on, but only to whatever extent is required to get the actual contribution of that project (i.e., the device-side path tracer) to do something. OWL, on the other hand, focuses exclusively on making it easier to do things like create pipelines, create different geometry types, build accel structs, etc. - but doesn't tell you, or focus on, what exactly you'll do on the device, or what rays to trace for whatever rendering algorithm you want to implement. In the ideal case, you'd probably want to "use" both: look at the sample path tracer as a guide for how to write the device code for your renderer; then use OWL to create all the "crud" around it to get this running in a larger project, where your SBTs and scene structure are likely to get more complicated than whatever the sample path tracer supports.

Finally, re "how to use the env lighting": that is a much harder question, because every renderer is different, and everybody's idea of how to write a renderer is different, too. Case in point: the RTOW sample you seem to be using wasn't originally written by me (or Stefan), but by Pete Shirley (in his RTOW book); I just ported it over to OptiX/OWL (my own renderers look very different, though not necessarily better). How to get these features into his renderer design is something you'd best ask him; the goal of OWL is not to tell you how to write a renderer, or how to best handle a given effect - it will only make it (much!) easier to actually implement the pipeline you want to have. That said, what I would usually do for "truly global" data like an env map is put it into the optixLaunchParams (OWL makes it quite easy to create those, and to put two different textures into them). In the launch params they are visible to all programs, including miss, CH, and raygen.
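(A sketch of what that could look like with two separate environment textures, one for lighting/background and one for reflections only; all names like envLight/envReflect are illustrative, and the OWL handles - context, rayGen, the two OWLTextures, fbSize - are assumed to be created elsewhere:)

```cpp
#include <owl/owl.h>

// shared between host and device code:
struct LaunchParams {
  cudaTextureObject_t envLight;    // lights the scene / fills the background
  cudaTextureObject_t envReflect;  // sampled only for specular reflections
};

// host side:
OWLVarDecl lpVars[] = {
  { "envLight",   OWL_TEXTURE, OWL_OFFSETOF(LaunchParams, envLight)   },
  { "envReflect", OWL_TEXTURE, OWL_OFFSETOF(LaunchParams, envReflect) },
  { /* sentinel */ nullptr }
};
OWLParams lp = owlParamsCreate(context, sizeof(LaunchParams), lpVars, -1);
owlParamsSetTexture(lp, "envLight",   lightTex);
owlParamsSetTexture(lp, "envReflect", reflTex);
owlLaunch2D(rayGen, fbSize.x, fbSize.y, lp);

// device side - visible in raygen, CH, and miss programs alike:
extern "C" __constant__ LaunchParams optixLaunchParams;
```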
My personal choice is to have CH programs only "scatter" the rays but not actually trace the scattered rays, and leave the trace to the raygen program; in this case you'd probably also do the env-map lookup in the raygen program. But again - everybody's renderers are different, mine may not be the best (there are better renderer writers than me!), and the purpose of OWL is not to tell you or guide you into a certain way of writing a renderer - it'll just make it easier to set up that pipeline. Hope that helps
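(A rough sketch of such a scatter-only closest-hit program, modeled loosely on the Lambertian-spheres program in the rtow-mixed-geometry sample; the geometry-record layout, the prd.out field names, and randomPointInUnitSphere() follow that sample but may differ in detail:)

```cpp
OPTIX_CLOSEST_HIT_PROGRAM(LambertianSpheres)()
{
  const int primID = optixGetPrimitiveIndex();
  const auto &self
    = owl::getProgramData<LambertianSpheresGeom>().prims[primID];
  PerRayData &prd = owl::getPRD<PerRayData>();

  // reconstruct the hit point and shading normal
  const owl::vec3f org   = optixGetWorldRayOrigin();
  const owl::vec3f dir   = optixGetWorldRayDirection();
  const float      hit_t = optixGetRayTmax();
  const owl::vec3f hit_P = org + hit_t * dir;
  const owl::vec3f N     = normalize(hit_P - self.sphere.center);

  // no recursive optixTrace() here - just record how the path continues,
  // and let the loop in the raygen program trace the next segment
  prd.out.scattered_origin    = hit_P;
  prd.out.scattered_direction = N + randomPointInUnitSphere(prd.random);
  prd.out.attenuation         = self.material.albedo;
  prd.out.scatterEvent        = rayGotBounced;
}
```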
-
Dear Ingo, Many thanks for your time and your detailed reply. George
-
Dear Ingo, Those are my first efforts, so I am sure it is me doing something wrong, not OWL. This is how I am computing my coordinates within the tracePath function (this is also how I make it work in the SimpleTriangles example, but through MissData and miss()):

George
-
Dear @szellmann, Your advice was spot on. Looking at https://github.com/szellmann/anari-visionaray/blob/main/sampleCDF.h I saw that my mistake was that when depth is zero and we have no hit, I should be using the ray direction and not the scattered one. Everything works, image attached! George
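(In terms of the earlier tracePath() sketch, with the same illustrative names, the fix looks roughly like this; note that if the loop rebuilds `ray` from the scattered direction on every bounce, as in that sketch, then `ray.direction` alone covers both cases:)

```cpp
if (prd.out.scatterEvent == rayDidntHitAnything) {
  // primary rays (depth == 0) never scattered: use the camera ray's own
  // direction for the env-map lookup, not the scattered one
  const owl::vec3f envDir = (depth == 0)
    ? ray.direction
    : prd.out.scattered_direction;
  return attenuation * sampleEnvMap(self.envMap, envDir);
}
```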