
Opinion Sharing

  • On iterating on and revising papers

    Matthias Niessner

    Research can be quite brutal when a paper gets rejected.

    Successful groups also have many rejected submissions, but they don't give up: they take the reviewer feedback and keep improving their work until it's ready for publication. The key is to recognize what was missing.

    Abdullah Hamdi: How do you deal with deteriorating motivation to keep working on rejected papers, and ignore the exciting new topics that are emerging all the time?

    Alessandro Saccon: Even unsuccessful groups face the same issue; it's just that the key idea is not good or new enough, so the paper never gets accepted, or ends up less cited. So the real question for me is: how do you get a great idea?

    reference: https://twitter.com/MattNiessner/status/1429856784420900868?s=20

  • On journal submissions

    Keenan Crane

    PSA: If you're waiting many months for a review to come back from ACM Transactions on Graphics (and, I suspect, many other journals), you can and should politely nudge the Editor in Chief. TOG reviews are supposed to take only 1 month, and the EiC already knows the author names.

    reference: https://twitter.com/keenanisalive/status/1433422779311869957?s=20

  • On reviewing

Kosta Derpanis

As a reviewer, it is insufficient to simply state a paper lacks novelty. This is a lazy response and disrespectful to the authors’ efforts. The onus is on you, the reviewer, to point to specific papers and draw connections to substantiate your novelty claim.

Arash Vahdat: When I get a review like that I'm dying to say: "please let us know which prior work is threatening our novelty." Everything becomes obvious after reading a well-written paper. But that doesn't mean that the paper is less novel.

Victor Escorcia: Agree and good tips. Now, 1) Why aren't ACs & reviewers affected by tolerating / doing that kind of stuff? 2) What about rating ACs and reviewers? Scores could be used for other conferences and editions.

reference: https://twitter.com/CSProfKGD/status/1433872171441479684

  • How to prepare a CV

Jia-Bin Huang

How to prepare your Curriculum Vitae?

Your curriculum vitae (Latin for "course of life") is the most important document for all sorts of career opportunities. But not all students have access to resources for learning to write a good CV.

Check out the thread below!

Include a photo of yourself?

Be aware of the cultural differences when it comes to including a headshot photo in your CV. It may be standard in your region (e.g., Europe, Asia), but it is considered odd if you are applying for positions in the US and UK (for anti-discrimination reasons).

Add hyperlinks

It's 2021. Don't treat your CV as a plain text document. Add hyperlinks throughout your CV and make it easy for the readers to learn more about you and your work, e.g., personal website, email, project page, paper link, videos, and code on GitHub.

Make good tables

Tables are great for presenting structured data (e.g., your education/work history). But spend some time learning to make a good one.

Provide context

Include background information to help the readers (e.g., an admission committee) understand your achievements. Some examples: rank X (out of how many?), GPA X (equivalent on a 4.0 scale?), accuracy of 95% (on what task? and how well does the baseline do?).

Describe your work

For each of your projects, talk about (1) what the problem is, (2) what you did, and (3) what the impact/results are. Most of the time (3) is missing. It's hard for people to judge the significance of your work without (3).

Pick a good LaTeX template

Pick a visually pleasing template to work with. There is usually no page limit for a CV, so don't sacrifice readability by squeezing content into 1-2 pages.

Self-rate skills?

Listing your skills is fine, but I don't really know what "4.2 stars Python" or "3.5 stars C++" on your CV mean.

Seek feedback

Many student-led organizations offer to review and provide feedback on your CV and other application materials (particularly for graduate school applications). Check them out before you submit your application!

reference: https://twitter.com/jbhuang0604/status/1433651161882669078

Courses and Talks

Symposium on Computer Animation 2021

original twitter link (from Michiel van de Panne):

https://twitter.com/Mvandepanne/status/1429182659859820548

webpage:

http://computeranimation.org/program.html#Session1

location & time:

September 7-10, 2021. Online.

registration web: http://computeranimation.org/#Registration

A symposium of paper presentations on computer animation.

Geometric Deep Learning - Algorithms

original twitter link (from Isaac Newton Institute):

https://twitter.com/NewtonInstitute/status/1430084967858589715?s=20

webpage:

https://www.newton.ac.uk/seminar/20210824100011001/

location & time:

24 August 2021 – 10:00 to 11:00

video page:

https://www.youtube.com/watch?v=9mOX_JbVTNY&ab_channel=INISeminarRoom1

A talk on geometric deep learning algorithms.

Advances in Neural Rendering is now freely available on YouTube

original twitter link (from Justus Thies):

https://twitter.com/JustusThies/status/1431291841404719109

video page

https://www.youtube.com/watch?v=otly9jcZ0Jg https://www.youtube.com/watch?v=aboFl5ozImM

The lecture videos introducing advances in neural rendering are now public.

Creating yarn art using Metropolis-Hastings sampling
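
The yarn-art result above is driven by Metropolis-Hastings sampling. The artwork's actual proposal and scoring function are not given here, so the following is only a minimal, generic sketch of the accept/reject loop (the Gaussian target and `step_size` are illustrative assumptions):

```python
import numpy as np

def metropolis_hastings(log_prob, x0, n_steps=10_000, step_size=0.1, rng=None):
    """Generic random-walk Metropolis-Hastings sampler.

    log_prob : callable returning the (unnormalized) log density of a sample.
    x0       : initial sample as a NumPy array.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    samples = []
    for _ in range(n_steps):
        # Symmetric Gaussian proposal around the current state.
        proposal = x + step_size * rng.standard_normal(x.shape)
        lp_new = log_prob(proposal)
        # Accept with probability min(1, p(proposal) / p(current)).
        if np.log(rng.random()) < lp_new - lp:
            x, lp = proposal, lp_new
        samples.append(x.copy())
    return np.stack(samples)

# Example: sample a 2D Gaussian. A rendering application would instead score
# candidate strands/stitches by how well they reproduce a target image.
samples = metropolis_hastings(lambda x: -0.5 * np.sum(x ** 2), np.zeros(2))
print(samples.mean(axis=0), samples.std(axis=0))
```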

Research Highlights and Discussion

  • Happy to announce our "Complementary Dynamics" source code is now publicly available.

    Cover

    Matlab repo: https://github.com/ErisZhang/complementary-dynamics

    C++ repo: https://github.com/seungbaebang/complementary-dynamics-cpp

    reference: https://twitter.com/Seungbae13/status/1429905759270277131?s=20

  • 3D Reconstruction from public webcams

    Hey @AmirRubin, check out this 3D #computervision project which uses SuperGlue, a deep feature-based matcher—extremely useful for creating a robust #digitaltwin of outdoor spaces. Thanks @ducha_aiki for sharing!

    Cover

    abs: https://arxiv.org/abs/2108.09476

    • Amir Rubin: I love this! SuperGlue and deep front ends are game changers for 3D reconstruction. The robustness of learned features+homography is bringing digital twin creation into the hands of non-expert users. I mean, I’m not talking metaverse scale just yet but who knows?

    • Noé: Wow indeed, this is wild. It's the difference between an idea not working at all, and working quite well. There have to be ideas from the past which did not pan out that have to be re-investigated! Do you have insights on use-cases where it made a large difference?

    Cover
    reference: https://twitter.com/quantombone/status/1430146858773630994?s=20
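
The thread above credits the robustness to learned matching (SuperGlue) feeding a robust geometric fit. As a rough illustration of that downstream stage only, here is a classical OpenCV sketch with ORB features standing in for the learned matcher; the file names are hypothetical and this is not the project's actual pipeline:

```python
import cv2
import numpy as np

# Classical stand-in for the matching stage: ORB features + ratio test.
# The project above replaces this step with SuperGlue's learned matcher;
# the downstream robust estimation is the same idea.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches and returns a robust 3x3 homography.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print(f"{int(inlier_mask.sum())} / {len(good)} matches kept as inliers")
```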
  • imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose

    pdf: https://arxiv.org/pdf/2108.10842.pdf

    code: https://github.com/google-research/google-research/tree/master/imghum

    imGHUM shares its parameterization with the explicit body model GHUM (https://github.com/google-research/google-research/tree/master/ghum). But imGHUM is an SDF, so fitting it to point clouds is straightforward and fully differentiable. Here we use imGHUM to recover pose and shape parameters of (partial) scans. (A minimal sketch of such a differentiable fit follows this item.)

    Image

    The implicit semantics returned by imGHUM allow, e.g., surface coloring or texturing. Together with the signed distance, they form a 4D descriptor of points in space.

    Cover

    imGHUM generalizes well to novel shapes and poses. We provide gender-neutral, male and female imGHUM models of the full body, head, left and right hand.

    Cover
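
As mentioned above, fitting an SDF body model to a partial scan reduces to differentiable optimisation. The sketch below is a minimal illustration of that idea in PyTorch; `sdf_model` is a stand-in callable, not imGHUM's real interface, and the simple |SDF| loss omits the priors and robust terms a real fit would use:

```python
import torch

def fit_sdf_to_scan(sdf_model, scan_points, n_params, n_iters=500, lr=1e-2):
    """Recover latent parameters by driving the SDF to zero on observed points.

    sdf_model   : callable (points [N,3], params [P]) -> signed distances [N].
                  Stands in for imGHUM; the real model's interface may differ.
    scan_points : [N,3] tensor of (partial) scan points.
    """
    params = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        # Observed surface points should have (near-)zero signed distance.
        loss = sdf_model(scan_points, params).abs().mean()
        loss.backward()
        opt.step()
    return params.detach()

# Toy usage with a dummy "model": a sphere whose radius is the only parameter.
dummy_sdf = lambda pts, p: pts.norm(dim=-1) - p[0]
scan = torch.randn(1000, 3)
scan = 0.7 * scan / scan.norm(dim=-1, keepdim=True)   # points on a radius-0.7 sphere
print(fit_sdf_to_scan(dummy_sdf, scan, n_params=1))    # should approach 0.7
```
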
  • DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras

    pdf: https://arxiv.org/abs/2108.10869

The method uses recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. It runs in real time with two 3090 GPUs.

    Image

  • Semantic NeRF: We add semantic outputs to NeRF models of 3D occupancy/colour. The joint representation allows very sparse or noisy in-place supervision to generate high-quality dense predictions. (A schematic sketch of the extra semantics head follows this item.)

    project page: https://shuaifengzhi.com/Semantic-NeRF/

    • Andrew Davison: Of course this is just the beginning of potential more general use of neural implicit models to represent scene properties, such as surface properties or affordances which could be very useful in robotics.

    reference: https://twitter.com/AjdDavison/status/1430884726705901568
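
A rough schematic of the idea described above: a NeRF-style MLP gains one extra head that emits per-point semantic logits, which are volume-rendered like colour and supervised with sparse or noisy 2D labels. The layer sizes, the missing positional encoding, and the missing view-direction input are simplifications, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TinySemanticNeRF(nn.Module):
    """Schematic only: a NeRF-style MLP with an extra semantics head."""

    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)              # volume density
        self.rgb = nn.Linear(hidden, 3)                # colour
        self.semantics = nn.Linear(hidden, n_classes)  # per-point class logits

    def forward(self, xyz):
        h = self.trunk(xyz)
        return self.sigma(h), torch.sigmoid(self.rgb(h)), self.semantics(h)

# The semantic logits are volume-rendered along each ray exactly like colour,
# and the rendered label maps are trained with cross-entropy against the
# sparse or noisy 2D annotations.
model = TinySemanticNeRF(n_classes=13)
sigma, rgb, logits = model(torch.rand(1024, 3))
```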

  • MoveNet is now on mobile!

    The state-of-the-art pose estimation model finally comes to mobile via TensorFlow Lite with CPU acceleration enabled. Now you can run pose estimation in real time on modern smartphones. (A minimal Python sketch of loading the TFLite model follows this item.)

    Learn more

    • Glooface: Does it work on cats? Mine has always fantasised about being a saber-toothed tiger.

      • Khanh: This model can only detect human poses at this moment, but I took note of the feature request.
    • Khanh: This MoveNet model is provided as a standalone TFLite model with sample implementation written in Kotlin, Python and Swift (coming soon) while MediaPipe requires you to be familiar with Bazel and C++. I think MoveNet is easier to use for mobile developers.

    • hichem le s: This one seems to be better. I used MediaPipe years ago and its model was not as performant as this one; maybe MediaPipe has improved since then, I don't know.

    reference: https://twitter.com/TensorFlow/status/1427300403494928386?s=20
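
A minimal desktop-Python sketch of driving a MoveNet TFLite model with the standard `tf.lite.Interpreter` API, as mentioned above. The model file name is a placeholder for whichever variant you download, and the input dtype/shape are read from the model rather than assumed:

```python
import numpy as np
import tensorflow as tf

# "movenet_lightning.tflite" is a placeholder for whichever MoveNet TFLite
# variant you download; input dtype/shape are read from the model itself.
interpreter = tf.lite.Interpreter(model_path="movenet_lightning.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

# For the single-pose variants the output is [1, 1, 17, 3]:
# (y, x, confidence) for 17 COCO-style keypoints.
keypoints = interpreter.get_tensor(out["index"])
print(keypoints.shape)
```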

  • We are excited to share the Isaac Gym tech report: https://arxiv.org/abs/2108.10470. Physics simulation data is passed directly to PyTorch without ever going through any CPU bottlenecks, allowing blazing-fast training on many challenging environments. (A toy sketch of the GPU-resident idea follows this item.)

    We can train shadow hand cube rotation with all sorts of randomisations (as detailed in https://arxiv.org/abs/1808.00177) in ~ 1 hour using feed forward networks and ~6 hours with LSTMs.

    Cover

    Many other interesting results are available here: https://sites.google.com/view/isaacgym-nvidia

    reference: https://twitter.com/ankurhandos/status/1431130561804865538?s=20
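
The toy loop below illustrates only the idea highlighted above, namely keeping simulation state and the policy on the GPU so observations never round-trip through the CPU. It is plain PyTorch with a fake "physics step", not the Isaac Gym API:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 16, 4

# Simulation state lives on the GPU as an ordinary torch tensor, so the policy
# consumes observations without any host (CPU) copies.
state = torch.zeros(num_envs, obs_dim, device=device)
policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 64), torch.nn.ELU(), torch.nn.Linear(64, act_dim)
).to(device)

with torch.no_grad():
    for _ in range(100):
        actions = policy(state)  # GPU -> GPU, no round-trip through host memory
        # Fake "physics step"; in Isaac Gym the simulator performs this update
        # and exposes the result to PyTorch as a wrapped GPU tensor.
        state = 0.99 * state + 0.01 * torch.tanh(actions).repeat(1, obs_dim // act_dim)
```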

  • Guess the dance! Produced by the amazing notebook developed by @nikoskolot @geopavlakos based on our #ICCV2021 ProHMR. (So easy to run that even I, an old professor, can run it. The video is cut to hide the name of the dance.)

    reference: https://twitter.com/KostasPenn/status/1432351412885676033?s=20

  • "Controlling Neural CA with noise" -- new tutorial where I'm trying to document the exploration precess (idea -> experiment -> early results). I'd really appreciate your feedback!

    Watch the video

    final colab code

    • MΞMO AKTEN: AFAIU the noise amount is essentially a conditioning signal. In fact, have you already tried any arbitrary (real-valued?) signal? i.e. 0 => bubbles, 1 => tiles, & then vary this signal spatially & temporally? In this case it's still one (albeit parametrized) neural controller shared by all agents. Of course the broader question is :) have you had a chance to try Neural CA with two populations of agents (cooperative and/or competitive), with one controller per population (but trained together)?

    • Scott Condron: I don't think there are enough of these types of videos where you're brought through the process of experimentation. I personally would enjoy a distill-style post about this, but just the video and colab get the ideas across without it. As for future ideas, I thought the way the bubble stayed intact over the noise boundary was surprising and could be interesting to study further. Are there certain patterns that are more robust than others? Can you encourage that robustness through training?

      • Alex Mordvintsev: I began with training a noise-robust CA, then switched to noise-controlled. I think there are so many curious aspects of these models that deserve deeper study that I decided to make tutorials showing how to do that, rather than have all the fun myself and never release most of it. I'm thinking about making a few more small tutorials and then combining them into a more in-depth article.

    reference: https://twitter.com/zzznah/status/1432322113856229379?s=20
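
A heavily simplified sketch of the kind of noise-conditioned neural CA update step the tutorial explores: each cell sees its neighbourhood plus a spatially varying conditioning channel and applies a small shared update network. The channel counts, learned perception filters, and update details are illustrative choices, not the tutorial's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseConditionedCA(nn.Module):
    """Simplified neural CA step: each cell perceives its 3x3 neighbourhood plus
    a scalar conditioning channel (the "noise amount") and applies a small
    shared update network."""

    def __init__(self, channels=16, hidden=64):
        super().__init__()
        # +1 input channel carries the spatially varying conditioning signal.
        self.perceive = nn.Conv2d(channels + 1, hidden, 3, padding=1)
        self.update = nn.Conv2d(hidden, channels, 1)
        nn.init.zeros_(self.update.weight)  # start from the "do nothing" rule
        nn.init.zeros_(self.update.bias)

    def forward(self, state, control):
        x = torch.cat([state, control], dim=1)        # [B, C+1, H, W]
        dx = self.update(F.relu(self.perceive(x)))
        # Stochastic per-cell update, as in standard neural CA training.
        mask = (torch.rand_like(state[:, :1]) < 0.5).float()
        return state + mask * dx

ca = NoiseConditionedCA()
state = torch.zeros(1, 16, 64, 64)
control = torch.rand(1, 1, 64, 64)   # e.g. 0 => one texture, 1 => another
state = ca(state, control)
```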

  • 3DStyleNet

    Excited to share #3DStyleNet, a Neural Style Transfer method for 3D Shapes! The work is accepted to #ICCV2021 as an oral presentation.

    arXiv: https://arxiv.org/abs/2108.12958

    Project page: https://nv-tlabs.github.io/3DStyleNet/

Cover
reference: https://twitter.com/kangxue_yin/status/1432833663453040643

Neural Marching Cubes will be presented at SIGGRAPH Asia

Code is available below.

Abs: https://arxiv.org/abs/2106.11272

Code: https://github.com/czq142857/NMC

Paper: https://www2.cs.sfu.ca/~haoz/pubs/chen_2021_nmc.pdf

Cover

NMC is a data-driven approach for extracting a triangle mesh from a discretized implicit field. It recovers sharp edges, a long-standing issue with MC and its variants, and reconstructs local mesh topologies more accurately.

reference: https://twitter.com/richardzhangsfu/status/1433934224743043073
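
For contrast with NMC, here is the classic Marching Cubes baseline it improves on, run with scikit-image on a discretized SDF of a sphere; NMC instead predicts the inner-cell vertex positions and topology with a learned model, which is what recovers sharp edges:

```python
import numpy as np
from skimage import measure

# Discretized signed distance field of a sphere (radius 0.5) on a 64^3 grid.
res = 64
grid = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x ** 2 + y ** 2 + z ** 2) - 0.5

# Classic Marching Cubes extracts the zero level set; sharp features get
# rounded off, which is exactly the failure mode NMC's learned meshing targets.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```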

  • We present Self-Calibrating NeRF! NeRF learns the geometry of 3D space end-to-end. However, we show that we can learn not just the geometry, but also the camera intrinsics, poses, and a non-linear complex camera model, without any calibration patterns.

    github: https://t.co/nIqEWJpgI4?amp=1
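
An illustrative sketch of the core self-calibration idea: camera intrinsics and per-image pose corrections are ordinary learnable parameters appended to the NeRF optimiser, so the photometric loss also updates them. The pinhole-only model and the 6-DoF pose vector below are simplifications; the paper additionally learns a non-linear distortion:

```python
import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    """Focal length and per-image pose corrections as trainable parameters."""

    def __init__(self, n_images, init_focal=500.0):
        super().__init__()
        self.focal = nn.Parameter(torch.tensor(init_focal))
        # 6-DoF pose correction per image (3 rotation, 3 translation components).
        self.pose_delta = nn.Parameter(torch.zeros(n_images, 6))

    def ray_directions(self, u, v, width, height):
        # Pinhole rays in camera space; applying the (learned) pose would then
        # map them to world space before volume rendering (omitted here).
        return torch.stack([
            (u - width / 2) / self.focal,
            -(v - height / 2) / self.focal,
            -torch.ones_like(u),
        ], dim=-1)

cameras = LearnableCameras(n_images=100)
u, v = torch.meshgrid(torch.arange(640.0), torch.arange(480.0), indexing="xy")
rays = cameras.ray_directions(u, v, width=640, height=480)
# cameras.parameters() are simply appended to the NeRF optimiser, so the
# photometric loss back-propagates into the focal length and poses as well.
```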

  • NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo

    arxiv:https://t.co/sfQsB3OXtY?amp=1

    Cover