20220404-20220417.md


Opinion Sharing

Tool Sharing

Course and Report Sharing

Research Recommendations and Discussion

  • AK

  • Andrew Davison

    • iSDF uses the incremental neural-field training approach of iMAP, but interprets the MLP output as a signed distance field rather than occupancy. It achieves similar reconstruction quality with automatic hole-filling, and directly building an SDF could be useful for some robot planning cases.
    • paper: https://arxiv.org/abs/2204.02296
    • project page: https://joeaortiz.github.io/iSDF/
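The occupancy-vs-SDF distinction above can be illustrated with two toy per-point losses (a minimal numpy sketch; the function names are mine, not iSDF's API):

```python
import numpy as np

# Occupancy supervision: the MLP output is read as a logit for
# "inside" and trained with binary cross-entropy (iMAP-style).
def occupancy_loss(pred_logit, inside):
    p = 1.0 / (1.0 + np.exp(-pred_logit))
    return -(inside * np.log(p) + (1.0 - inside) * np.log(1.0 - p))

# SDF supervision: the same scalar output is instead read as a signed
# distance and regressed against a distance estimate (L1 here).
def sdf_loss(pred_sdf, approx_sdf):
    return np.abs(pred_sdf - approx_sdf)
```

The appeal for planning is that the SDF reading directly gives distance-to-obstacle, which occupancy does not.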
  • Michael Black

    • IMavatar (#CVPR2022) learns implicit head avatars from monocular videos. It represents the expression and pose deformations via learned blendshapes and skinning fields. These are used to morph the canonical geometry and texture fields given novel expression and pose parameters.
    • IMavatar is learned end-to-end from video and provides fine-grained animation control via FLAME parameters, producing implicit head avatars with more realistic shape (see the detailed normal maps) and appearance than the recent SOTA.
    • paper: https://arxiv.org/abs/2112.07471
    • project page: https://ait.ethz.ch/projects/2022/IMavatar/
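The skinning-field part of the deformation can be illustrated with plain linear blend skinning (a hedged numpy sketch; IMavatar learns the skinning weights as a continuous field, which is not shown here):

```python
import numpy as np

def lbs(points, weights, bone_transforms):
    """Linear blend skinning: move each point by a weighted sum of
    per-bone 4x4 transforms.
    points: (N,3), weights: (N,B), bone_transforms: (B,4,4)."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)  # (N,4,4)
    return np.einsum('nij,nj->ni', blended, homog)[:, :3]
```

Given novel pose parameters, the canonical geometry is morphed by evaluating such a blend at each query point.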
  • Matthias Niessner

    • Check out AutoRF: Learning 3D Object Radiance Fields from Single View Observations #CVPR2022 by @Normanisation.
    • Key idea: we learn shape & appearance priors from single-view training samples by exploiting machine-annotated labels!
    • project page: https://sirwyver.github.io/AutoRF/
    • video: https://www.youtube.com/watch?v=mDcsSK3GaF4
    • comments:
      • question1: How does it figure out the homography, angle, and rotation of the camera? What about scenes where the floor is at different heights?
      • reply1: We leverage off-the-shelf monocular 3D detections to calculate the relative pose of the camera. This way we can also support different floor levels.
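The reply can be made concrete: a monocular 3D detector typically outputs an object translation and a heading angle in the camera frame, from which a relative pose matrix follows (a hypothetical sketch under that assumption; names and conventions are mine, not AutoRF's):

```python
import numpy as np

def pose_from_detection(t, yaw):
    """Build a 4x4 object-to-camera pose from a detected 3D box:
    translation t (3,) in the camera frame and heading angle yaw
    about the vertical (y) axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Because each detection carries its own translation, objects resting on floors at different heights pose no special problem.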
  • Hao Li

    • Our @fxguidenews article on @PinscreenInc's latest AI VFX pipeline for translating entire movies from any language to English using neural facial retargeting. In partnership with @Adapt_Films and @mikeseymour, we present The Champion of Auschwitz in English.
    • Our latest work featured in @TheWrap! In partnership with @Adapt_Films and @mikeseymour, @PinscreenInc demonstrates the first complete AI VFX pipeline that can translate an entire foreign-language feature film into English using neural rendering tech:
    • project page: https://www.fxguide.com/fxfeatured/the-neural-rendering-of-the-champion/
  • AK

  • Zhiqin Chen

    • Pleased to announce our @NVIDIAAI #CVPR2022 paper AUV-Net! AUV-Net learns aligned UV maps for a set of 3D shapes, allowing us to easily transfer textures between 3D shapes by simply swapping their texture images.
    • project page: https://nv-tlabs.github.io/AUV-NET/
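Because the learned UV maps are aligned across shapes, texture transfer reduces to sampling one shape's texture image at another shape's UV coordinates (a minimal nearest-neighbor sketch; helper names are assumed, not AUV-Net's code):

```python
import numpy as np

def sample_texture(tex, uv):
    """Nearest-neighbor texture lookup. tex: (H,W,3), uv: (N,2) in [0,1]."""
    h, w = tex.shape[:2]
    x = np.clip(np.round(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip(np.round(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return tex[y, x]

# With aligned UVs, re-texturing shape A with shape B's appearance is
# just sampling B's texture image at A's per-vertex UV coordinates.
```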
  • Marko Mihajlovic

    • COAP: Compositional Articulated Occupancy of People
    • Our generalizable neural implicit body leverages a localized encoder-decoder to model volumetric humans #CVPR2022
    • COAP is useful for resolving self-intersections and collisions with other objects
    • paper: https://arxiv.org/abs/2204.06184
    • project page: https://neuralbodies.github.io/COAP/
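The collision use case can be sketched as querying the body's occupancy at points sampled from another object (or from the body itself, for self-intersections) and penalizing occupied queries (a hedged sketch; the function names are assumptions, not COAP's API):

```python
import numpy as np

def collision_penalty(occupancy_fn, points, threshold=0.5):
    """Sum of how far each query point's predicted occupancy exceeds
    the inside/outside threshold; zero when all points are free."""
    occ = occupancy_fn(points)               # (N,) occupancies in [0,1]
    return float(np.maximum(occ - threshold, 0.0).sum())
```

Minimizing such a penalty over pose parameters pushes the body out of collision.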

Hiring

  • Drew Jaegle (DeepMind)
    • I’m hiring a Research Scientist to work on anymodal architectures and representation learning! If you’re interested in building systems for all the world’s data 📷⚗️🎙️📚🔬, please apply to @DeepMind's RS London role and mention you’d like to work with me.