Does anyone have suggestions on a good tutorial/codebase to help render nice-looking point clouds for visualization in papers? Path tracing, shadows, lighting control, etc. would be nice. @tolga_birdal has a nice one in Mitsuba inspired by PointFlow: https://github.com/tolgabirdal/Mitsuba2PointCloudRenderer
-
Andrea Tagliasacchi: As long as you tweak the internals to render a point set (e.g. convert the point cloud to a sphere cloud, or to oriented splats), you can get most of the other stuff from Kubric: https://github.com/google-research/kubric
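A minimal sketch of what that "point cloud to sphere cloud" conversion can look like, using Open3D purely as an example mesh library (this is not Kubric's API; the radius and resolution values are arbitrary assumptions):

```python
import numpy as np
import open3d as o3d

def point_cloud_to_sphere_cloud(points, radius=0.01, resolution=8):
    # Instance a small sphere at every point and merge everything into one mesh
    # that any renderer (Kubric, Blender, Mitsuba, ...) can ingest.
    merged = o3d.geometry.TriangleMesh()
    for p in points:
        sphere = o3d.geometry.TriangleMesh.create_sphere(radius=radius,
                                                         resolution=resolution)
        sphere.translate(p)
        merged += sphere
    merged.compute_vertex_normals()
    return merged

points = np.random.rand(500, 3)            # stand-in for a real point cloud
mesh = point_cloud_to_sphere_cloud(points)
o3d.io.write_triangle_mesh("sphere_cloud.obj", mesh)
```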
-
Tolga Birdal: A billion-dollar question :) Maybe not as 'aesthetic', but here are others I happen to know of: Potree (JS-based but efficient): https://github.com/potree/potree; Panda3D (Python, C++): https://github.com/ikalevatykh/panda3d_viewer; PPTK (Python): https://github.com/heremaps/pptk; for Blender: https://github.com/uhlik/bpy
-
Chandradeep Pokhariya: You can install the PCV add-on in Blender using https://github.com/uhlik/bpy/blob/master/space_view3d_point_cloud_visualizer.py. You can import the point cloud and convert it to a Particle System in Blender. I found it the most aesthetic.
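A hedged sketch of that Blender particle-system trick in the Python API (bpy); the object names are placeholders and the property names follow Blender 2.8+, so they may differ by version:

```python
import bpy

cloud_obj = bpy.data.objects["PointCloud"]   # mesh whose vertices are the points
sphere_obj = bpy.data.objects["Sphere"]      # small sphere instanced at each point

mod = cloud_obj.modifiers.new(name="PointInstancer", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.count = len(cloud_obj.data.vertices)
settings.emit_from = 'VERT'          # one particle per vertex
settings.use_emit_random = False
settings.physics_type = 'NO'         # keep particles pinned to the vertices
settings.frame_start = 1
settings.frame_end = 1
settings.lifetime = 10000
settings.render_type = 'OBJECT'
settings.instance_object = sphere_obj
settings.particle_size = 0.01
```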
reference: https://twitter.com/drsrinathsridha/status/1454873466998595592?s=20
-
Today, as part of a larger tactile-sensing ecosystem, we’re announcing two major advances: DIGIT, commercially available touch-sensing hardware produced in partnership with GelSight, and ReSkin, a replaceable, low-cost tactile skin.
In addition to hardware such as DIGIT, developing touch sensing as a modality requires an ecosystem of touch processors, simulators, benchmarks & data sets. Read more about our ongoing work in developing & open-sourcing this ecosystem: https://ai.facebook.com/blog/teaching-robots-to-perceive-understand-and-interact-through-touch
We’re also open-sourcing ReSkin, an affordable tactile sensor that autocalibrates through self-supervision. ReSkin will help to advance AI’s ability to obtain & use tactile data, getting us closer to deploying reliable touch sensing in research. More: https://ai.facebook.com/blog/reskin-a-versatile-replaceable-low-cost-skin-for-ai-research-on-tactile-perception
These innovations will advance AI and help us make progress toward developing gentler, safer robots capable of understanding the world through touch. Accessible touch-sensing hardware could also open up the field of tactile sensing to more researchers.
reference: https://twitter.com/facebookai/status/1455144066698596357?s=20
-
Giant new v1.2 release of Polyscope! Adds support for tet & hex meshes, transparency, ground shadows, slice planes, variable-size points, setting camera views, and more. Try Polyscope for easy 3D visualization in C++ & Python: https://polyscope.run/
Volume meshes have been added as a new structure, supporting tetrahedral and hexahedral elements, including mixed meshes with both tets and hexes. As always, you can also add scalar/color/vector data associated with the vertices or cells. http://polyscope.run/structures/volume_mesh/basics/
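A minimal sketch of registering a volume mesh from the Python bindings (following the linked docs; the data here is random placeholder geometry):

```python
import numpy as np
import polyscope as ps

ps.init()
verts = np.random.rand(100, 3)
tets = np.random.randint(0, 100, size=(40, 4))       # placeholder connectivity
vol = ps.register_volume_mesh("my tet mesh", verts, tets=tets)
vol.add_scalar_quantity("vertex values", np.random.rand(100))
vol.add_scalar_quantity("cell values", np.random.rand(40), defined_on='cells')
ps.show()
```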
Slice planes let you see inside your volume meshes, and any other surfaces/point clouds/etc. in Polyscope too! Add slice planes to the scene, and position them manually or with a few lines of code. You can also apply them selectively to only some data. http://polyscope.run/features/slice_planes/
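Adding a slice plane from code might look like this (a rough sketch based on the linked docs; the pose values are arbitrary):

```python
import numpy as np
import polyscope as ps

ps.init()
ps.register_point_cloud("cloud", np.random.rand(1000, 3))
plane = ps.add_scene_slice_plane()
plane.set_pose((0.5, 0.5, 0.5), (0.0, 0.0, 1.0))   # point on the plane, normal
plane.set_draw_plane(True)                          # draw the translucent plane itself
plane.set_draw_widget(False)                        # hide the interactive gizmo
ps.show()
```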
Transparency is great for visualizations and figures where hidden content needs to be seen. Set an alpha value per-structure and you're good to go! Polyscope renders transparency with either weighted averaging (fast) or depth peeling (pretty). http://polyscope.run/features/transparency/
The ground plane now supports soft transparent shadows, for those figure-ready renderings 😎 Also, enable built-in antialiasing (SSAA) to bump up the visual quality. http://polyscope.run/features/ground_and_shadows/, http://polyscope.run/basics/program_options/#ssaa-anti-aliasing-factor
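Putting the transparency, shadow, and antialiasing options together from the Python bindings might look like this (a sketch; the option names are as I recall from the linked docs and may differ slightly by version):

```python
import numpy as np
import polyscope as ps

ps.init()
ps.set_ground_plane_mode("shadow_only")   # soft shadows on an otherwise invisible ground
ps.set_SSAA_factor(2)                     # built-in antialiasing
cloud = ps.register_point_cloud("cloud", np.random.rand(1000, 3))
cloud.set_transparency(0.5)               # per-structure alpha
ps.show()
```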
Point clouds can now use a scalar quantity to set a radius per-point, and this works in conjunction with all the other features point clouds support! https://polyscope.run/structures/point_cloud/variable_radius/
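Variable point radii are driven by a scalar quantity, roughly like this (names follow the linked variable_radius docs; the data is random):

```python
import numpy as np
import polyscope as ps

ps.init()
pts = np.random.rand(1000, 3)
cloud = ps.register_point_cloud("cloud", pts)
cloud.add_scalar_quantity("importance", np.random.rand(1000))
cloud.set_point_radius_quantity("importance")   # per-point radius from the scalar
ps.show()
```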
Surface meshes expose a back face policy, so reversed faces can be styled according to your application. As always, you can set this policy ahead of time in code, or change it at runtime in the GUI.
Sometimes it's the little things: Polyscope now has a lookAt() function to easily position the camera. Great for creating your own renderings in automated scripts! Also, you can always copy & restore camera poses from the GUI with ctrl-C/ctrl-V. http://polyscope.run/basics/camera_controls/
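For scripted figures, the camera call is a one-liner (sketch per the linked camera_controls docs):

```python
import polyscope as ps

ps.init()
ps.look_at((2.0, 2.0, 2.0), (0.0, 0.0, 0.0))   # camera position, target
ps.screenshot("view.png")                       # grab a frame for an automated figure
```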
And more! New font, scalar isolines, a more flexible shader builder backend, new Apple Silicon Python builds, and many bugfixes. Thanks to others who tested and contributed!
Check it out: C++: https://polyscope.run, Python: https://polyscope.run/py/ (on pip & conda)
- Rana Hanocka: I love polyscope. We’ve used it to render figures for a few papers, and I think it’s even improved since then, b/c there is now support for shadows on transparent backgrounds with the newest release (kudos to @nmwsharp !)
reference: https://twitter.com/nmwsharp/status/1421091312141574151?s=20
-
Fun weekend project accepted as a paper to the #NeurIPS2021 Creativity and Design workshop! Make pretty(?!) pictures by learning to generate 3D objects and angles to view them at, with the process guided by ImageNet / CLIP. https://web.media.mit.edu/~echu/assets/projects/evolving-views/paper.pdf
Specifically, we have (1) a generator (the superformula) that creates 3D surfaces, (2) a scorer (ImageNet/CLIP) of a 2D view of that surface, and (3) a genetic-algorithm-based optimization loop that learns superformula parameters and viewing angles.
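For reference, the generator step is just the Gielis superformula plus the usual spherical-product construction of a 3D supershape; a minimal sketch (the parameter values here are arbitrary, not the paper's):

```python
import numpy as np

def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
    # r(phi) = (|cos(m*phi/4)/a|^n2 + |sin(m*phi/4)/b|^n3)^(-1/n1)
    t = m * phi / 4.0
    return (np.abs(np.cos(t) / a) ** n2 + np.abs(np.sin(t) / b) ** n3) ** (-1.0 / n1)

def supershape(params1, params2, res=128):
    theta = np.linspace(-np.pi, np.pi, res)         # longitude
    phi = np.linspace(-np.pi / 2, np.pi / 2, res)   # latitude
    theta, phi = np.meshgrid(theta, phi)
    r1 = superformula(theta, *params1)              # params = (m, n1, n2, n3)
    r2 = superformula(phi, *params2)
    x = r1 * np.cos(theta) * r2 * np.cos(phi)
    y = r1 * np.sin(theta) * r2 * np.cos(phi)
    z = r2 * np.sin(phi)
    return x, y, z

x, y, z = supershape((7, 0.2, 1.7, 1.7), (7, 0.2, 1.7, 1.7))
```
A genetic algorithm then mutates the (m, n1, n2, n3) parameters and the viewing angles, keeping candidates whose rendered 2D view scores highly under ImageNet/CLIP.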
With the current generator, you end up with a lot of “anamorphic” art that’s only recognizable at highly specific angles, e.g. the turkey. Also, I like this dude on the left, discovered through novelty search. A Morris Louis-esque turkey, deconstructed.
Some related thoughts that are more research-y:
Thought 1: How well do purely image-generative models like VQGAN+CLIP work on 3D objects, materials, and actions taken on them? Inspired by Richard Serra’s “verb” + “material” sculptures.
Thought 2: Using the superformula to generate 3D objects was partly a matter of convenience. Could we replace it with point cloud / voxel / mesh 3D generative models? I’m guessing not quite yet without more parameter-efficient models or larger 3D object datasets.
A personal connection to this project: I only know the superformula because the superformula generalizes the superellipsoid -> a special case of the superellipsoid is the superegg -> the superegg was popularized by Piet Hein -> I worked with Piet’s son Jotun briefly (he's amazing).
reference: https://twitter.com/its_ericchu/status/1455583209425555462?s=20
-
Learning Co-segmentation by Segment Swapping for Retrieval and Discovery
http://imagine.enpc.fr/~shenx/SegSwap/
One simple trick — generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
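The trick itself is easy to sketch (this is just the copy-paste idea, not the paper's exact augmentation pipeline):

```python
import numpy as np

def paste_segment(src_img, src_mask, dst_img, offset=(0, 0)):
    """Copy the masked segment from src_img into dst_img at a pixel offset.
    src_img/dst_img: (H, W, 3) uint8 arrays; src_mask: (H, W) bool array."""
    out = dst_img.copy()
    ys, xs = np.nonzero(src_mask)
    dy, dx = offset
    H, W = dst_img.shape[:2]
    keep = (ys + dy >= 0) & (ys + dy < H) & (xs + dx >= 0) & (xs + dx < W)
    ys, xs = ys[keep], xs[keep]
    out[ys + dy, xs + dx] = src_img[ys, xs]
    return out
```
The source image and the pasted-into image then form a synthetic pair that shares a common "object".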
reference: https://twitter.com/quantombone/status/1456080626017263618?s=20
-
Testing fluid interaction with lots of objects. Real-time 3D liquid simulations can finally be used as a gameplay feature. Can't wait to see how this can be used in games.
-
Erwin Coumans: How is the fluid rendering done?
- Mykhailo Moroz: The fluid rendering is a bit complicated. First, the particle centers are rasterized to a buffer using atomic min operations; then, using a jump flood algorithm, we distribute the data about the closest sphere intersection to neighboring pixels. Finally, we blur the normals.
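A CPU sketch of that jump flood step (the real version runs in a GPU shader and propagates the closest sphere intersection; here it is simplified to the nearest particle center on a 2D grid, and the edge wrap-around from np.roll is ignored for brevity):

```python
import numpy as np

def jump_flood(seed_yx, shape):
    """seed_yx: (N, 2) integer pixel coordinates of rasterized particle centers."""
    H, W = shape
    nearest = np.full((H, W, 2), -1, dtype=np.int32)   # closest seed known at each pixel
    best = np.full((H, W), np.inf)
    ys, xs = np.mgrid[0:H, 0:W]
    for y, x in seed_yx:                               # the "rasterized" centers
        nearest[y, x] = (y, x)
        best[y, x] = 0.0
    step = 1 << max(H, W).bit_length()
    while step >= 1:                                   # passes with shrinking jump distance
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cand = np.roll(nearest, shift=(dy, dx), axis=(0, 1))
                valid = cand[..., 0] >= 0
                d = (ys - cand[..., 0]) ** 2 + (xs - cand[..., 1]) ** 2
                better = valid & (d < best)
                nearest[better] = cand[better]
                best[better] = d[better]
        step //= 2
    return nearest
```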
-
Lubos Brieda: Related to this, I need to implement interacting solids in one of my codes. How is this handled here? I'm thinking of starting with tri-tri intersection tests, with triangles stored in an octree, but I'm not sure how to get the rotation right. Do you define a moment of inertia and centroid for each solid?
- Mykhailo Moroz: Sadly that is not a question for me, since the rigid body solver was already implemented in Unity. But mesh-mesh collision detection is pretty complicated in general. The fluid-solid interaction, on the other hand, is handled using signed distance fields created for each object.
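A tiny sketch of the SDF-based coupling idea (not the actual Unity/solver code): sample the object's signed distance field at a particle position and, if the particle is inside, push it back out along the SDF gradient:

```python
import numpy as np

def resolve_particle(p, sdf, eps=1e-3):
    """p: (3,) particle position; sdf: callable returning signed distance (negative inside)."""
    d = sdf(p)
    if d >= 0.0:
        return p                       # outside the solid, nothing to do
    grad = np.array([                  # finite-difference gradient ~ outward normal
        sdf(p + (eps, 0, 0)) - sdf(p - (eps, 0, 0)),
        sdf(p + (0, eps, 0)) - sdf(p - (0, eps, 0)),
        sdf(p + (0, 0, eps)) - sdf(p - (0, 0, eps)),
    ]) / (2 * eps)
    n = grad / (np.linalg.norm(grad) + 1e-12)
    return p - d * n                   # d is negative inside, so this moves outward

sphere_sdf = lambda q: np.linalg.norm(q) - 1.0          # unit sphere as a test object
print(resolve_particle(np.array([0.0, 0.0, 0.5]), sphere_sdf))
```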
-
Benjamin Sawyer: How hard would it be to program smaller bubbles or some kind of foam? Without it it appears a bit more like alcohol or gelatin.
- Mykhailo Moroz: That will take some time, but it's on the roadmap. Basically, adding another particle type specifically for bubbles and foam.
reference: https://twitter.com/Michael_Moroz_/status/1455920076406861825?s=20
-
Let me gladly (re)share that our CWNs got accepted at #NeurIPS2021 ! The camera ready version is now on arXiv: https://arxiv.org/abs/2106.12575 – it includes new fancy figures :)
reference: https://twitter.com/ffabffrasca/status/1458503837514354693?s=20
-
My favorite example of late is “sphere tracing”, which is a big part of the NeRF/neural implicit craze: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=2CF1EE32A25835E406AEFEF6ADC5AC5F?doi=10.1.1.48.3825&rep=rep1&type=pdf.
This algorithm was invented by my undergrad advisor John Hart; we used to poke fun at him for working on “useless” stuff like rendering fractals …
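For anyone who hasn't met it, the core loop of sphere tracing is short enough to sketch (a generic illustration of the algorithm, not tied to any particular neural-implicit codebase):

```python
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    # March along the ray; the SDF value is a safe step size because it bounds
    # the distance to the nearest surface.
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:          # close enough to the surface: report a hit
            return t
        t += d
        if t > t_max:
            break
    return None              # missed

unit_sphere = lambda p: np.linalg.norm(p) - 1.0
print(sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), unit_sphere))  # ~2.0
```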
- Vinícius da Silva: Really good example. I have been using the technique myself in several projects recently. I'm a big fan of Hart's work! Neural nets are a good example too: they have existed since the 1960s, but GPGPU enabled their actual super-saiyan mode.
- Keenan Crane: Yep—if not for a bunch of graphics folks working really hard on making video games faster, there would have never been economies of scale needed for fast, cheap GPUs. No deep learning craze, no Turing Award, no renaissance in computer vision and robotics … History is strange.
reference: https://twitter.com/keenanisalive/status/1459529998105255945?s=20