Paper List for Robotics & Embodied AI - Tianxing Chen

0. Table of contents

  1. Manipulation

    • Imitation Learning
      • Diffusion Policy & Diffuser
    • Humanoid
    • Dexterous Manipulation
  2. LLM for Embodied AI

    • LLM Agent
  3. Foundation Models for Embodied AI

    • Affordance
    • Correspondence
    • Tracking & Estimation
    • Generative Models
  4. Reinforcement Learning

  5. Motion Generation

  6. Robot Hardware

  7. Dataset & Benchmark

  8. Diffusion Model for Planning, Policy, and RL

  9. 3D-based Manipulation

  10. 2D-based Manipulation

  11. LLM for Robotics

  12. LLM Agent (Planning)

  13. Generative Models for Embodied AI

  14. Visual Feature: Correspondence, Affordance

  15. Detection & Segmentation

  16. Pose Estimation and Tracking

  17. Humanoid

  18. Dataset & Benchmark

  19. Hardware

  20. 2D to 3D Generation

  21. Gaussian Splatting

  22. Robotics for Medical

  23. Companies

1. Diffusion Model for Planning, Policy, and RL

  • [arXiv] Diffusion Models for Reinforcement Learning: A Survey, arXiv

  • [ICLR 23 (Top 5% Notable)] Is Conditional Generative Modeling all you need for Decision-Making?, website

  • [RSS 23] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion, website

  • [ICML 22 (Long Talk)] Planning with Diffusion for Flexible Behavior Synthesis, website

  • [ICML 23 Oral] Adaptdiffuser: Diffusion models as adaptive self-evolving planners, website

  • [CVPR 24] SkillDiffuser: Interpretable Hierarchical Planning via Skill Abstractions in Diffusion-Based Task Execution, website

  • [arXiv] Learning a Diffusion Model Policy From Reward via Q-Score Matching, arXiv

  • [CoRL 23] ChainedDiffuser: Unifying Trajectory Diffusion and Keypose Prediction for Robotic Manipulation, website

  • [CVPR 23] Affordance Diffusion: Synthesizing Hand-Object Interactions, website

  • [arXiv] DiffuserLite: Towards Real-time Diffusion Planning, arXiv

  • [arXiv] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations, website

  • [arXiv] 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations, website

  • [arXiv] SafeDiffuser: Safe Planning with Diffusion Probabilistic Models, arXiv

  • [CVPR 24] Hierarchical Diffusion Policy for Kinematics-Aware Multi-Task Robotic Manipulation, arXiv

  • [arXiv 24] Render and Diffuse: Aligning Image and Action Spaces for Diffusion-based Behaviour Cloning, arXiv

  • [arXiv 24] Surgical Robot Transformer: Imitation Learning for Surgical Tasks, website

  • [CoRL 24] GenDP: 3D Semantic Fields for Category-Level Generalizable Diffusion Policy, website
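
A common thread through the diffusion-policy entries above (Diffusion Policy, Diffuser, 3D Diffusion Policy, 3D Diffuser Actor): an action or trajectory is produced by iteratively denoising Gaussian noise with a learned noise-prediction network conditioned on the observation. Below is a minimal NumPy sketch of that DDPM-style reverse process; `eps_model`, the linear beta schedule, and the step count are illustrative placeholders, not any specific paper's implementation.

```python
import numpy as np

def ddpm_action_sampling(eps_model, obs, action_dim, T=100, seed=0):
    """Minimal DDPM-style reverse process for sampling an action.

    eps_model(obs, a_t, t) -> predicted noise, same shape as a_t.
    Illustrative sketch of the common diffusion-policy recipe, not the
    implementation from any specific paper above.
    """
    rng = np.random.default_rng(seed)
    # Linear beta schedule (placeholder; papers often use a cosine schedule).
    betas = np.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    a_t = rng.standard_normal(action_dim)          # start from pure noise
    for t in reversed(range(T)):
        eps_hat = eps_model(obs, a_t, t)           # predict the injected noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (a_t - coef * eps_hat) / np.sqrt(alphas[t])   # DDPM posterior mean
        noise = rng.standard_normal(action_dim) if t > 0 else 0.0
        a_t = mean + np.sqrt(betas[t]) * noise     # add noise except at t = 0
    return a_t                                     # denoised action

if __name__ == "__main__":
    # Dummy noise model standing in for a trained U-Net / transformer.
    dummy_eps = lambda obs, a, t: 0.1 * a
    action = ddpm_action_sampling(dummy_eps, obs=None, action_dim=7)
    print(action.shape)  # (7,)
```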

2. 3D-based Manipulation

  • [RSS 24] RVT-2: Learning Precise Manipulation from Few Examples, website

  • [arXiv 23] D3 Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation, website

  • [arXiv 24] UniDoorManip: Learning Universal Door Manipulation Policy Over Large-scale and Diverse Door Manipulation Environments, website

  • [CoRL 23 (Oral)] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields, website

  • [ECCV 24] ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation, website

  • [IROS 24] RISE: 3D Perception Makes Real-World Robot Imitation Simple and Effective, website

Grasping

  • GraspNet website
    • [TRO 23] AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains, arXiv
  • [arXiv 24] ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter, website
  • [arXiv 24] GaussianGrasper: 3D Language Gaussian Splatting for Open-vocabulary Robotic Grasping, website

Articulated Object

  • [CVPR 22 Oral] Ditto: Building Digital Twins of Articulated Objects from Interaction, website
  • [ICRA 24] RGBManip: Monocular Image-based Robotic Manipulation through Active Object Pose Estimation, website

3. 2D-based Manipulation

  • [NeurIPS 23] MoVie: Visual Model-Based Policy Adaptation for View Generalization, website

4. LLM for Robotics (LLM Agent)

  • [arXiv 24] OK-Robot: What Really Matters in Integrating Open-Knowledge Models for Robotics, website

  • [CoRL 23] VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models, website

  • [arXiv 23] ChatGPT for Robotics: Design Principles and Model Abilities, arXiv

  • [arXiv 24] Language-Guided Object-Centric Diffusion Policy for Collision-Aware Robotic Manipulation, arXiv

  • [PMLR 23] RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control, website

5. LLM Agent (Planning)

  • [NeurIPS 23] Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning, website

6. Generative Models for Embodied AI

  • [arXiv 24] Generative Image as Action Models, website

  • [arXiv 24] Genie: Generative Interactive Environments, website

7. Visual Feature

7.1 Correspondence

  • [arXiv 23] D3 Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Robotic Manipulation, website

  • [CoRL 20] Transporter Networks: Rearranging the Visual World for Robotic Manipulation, website

  • [ICLR 24] SparseDFF: Sparse-View Feature Distillation for One-Shot Dexterous Manipulation, website

  • [ICRA 24] UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence, website

  • [CoRL 2018] Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation, PDF

  • [arXiv 24] Theia: Distilling Diverse Vision Foundation Models for Robot Learning, website, Github repo
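
Many of the correspondence papers above (e.g., Dense Object Nets, D3 Fields) share one primitive: map every pixel (or 3D point) to a descriptor vector, then match a query descriptor against the target scene by nearest neighbour in descriptor space. A minimal sketch of that matching step, with random arrays standing in for a trained descriptor network:

```python
import numpy as np

def find_correspondence(desc_src, desc_tgt, query_uv):
    """Nearest-neighbour matching in descriptor space.

    desc_src, desc_tgt : (H, W, D) per-pixel descriptor maps
                         (in practice produced by a trained dense network).
    query_uv           : (row, col) pixel in the source image.
    Returns the best-matching (row, col) in the target image.
    """
    H, W, D = desc_tgt.shape
    query = desc_src[query_uv[0], query_uv[1]]                # (D,)
    dists = np.linalg.norm(desc_tgt.reshape(-1, D) - query, axis=1)
    best = int(np.argmin(dists))
    return divmod(best, W)                                    # (row, col)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.standard_normal((64, 64, 16))   # toy descriptors, not real features
    tgt = rng.standard_normal((64, 64, 16))
    print(find_correspondence(src, tgt, (10, 20)))
```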

7.2 Affordance

  • [CoRL 22] Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation, website

  • [arXiv 24] Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipulation, arXiv

  • [arXiv 24] PreAfford: Universal Affordance-Based Pre-Grasping for Diverse Objects and Environments, arXiv

  • [ICLR 22] VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects, website

  • [ICLR 23] DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Object Manipulation, arXiv

  • [CVPR 22] Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos, website

  • [ICCV 23] AffordPose: A Large-scale Dataset of Hand-Object Interactions with Affordance-driven Hand Pose, website
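
A recurring pattern in the affordance papers above is that perception ultimately produces a per-pixel (or per-point) actionability score from which an interaction point is selected. The sketch below illustrates only that selection step, with a random heatmap standing in for a learned affordance model; the top-k sampling strategy is an illustrative choice, not taken from any specific paper.

```python
import numpy as np

def pick_interaction_point(affordance_map, top_k=5, seed=0):
    """Select a contact pixel from a per-pixel affordance heatmap.

    affordance_map : (H, W) scores in [0, 1] (here random; in the papers
                     above it would come from a learned affordance network).
    Samples among the top-k pixels rather than taking a hard argmax.
    """
    rng = np.random.default_rng(seed)
    H, W = affordance_map.shape
    flat = affordance_map.ravel()
    top = np.argpartition(flat, -top_k)[-top_k:]     # indices of the k best pixels
    probs = flat[top] / flat[top].sum()              # renormalise their scores
    chosen = rng.choice(top, p=probs)
    return divmod(int(chosen), W)                    # (row, col)

if __name__ == "__main__":
    heat = np.random.default_rng(1).uniform(0.0, 1.0, (48, 64))
    print(pick_interaction_point(heat))
```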

8. Detection & Segmentation

  • [ECCV 24] Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection, Github repo

  • [arXiv 24] Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything, Github repo

  • [ICCV 23] DEVA: Tracking Anything with Decoupled Video Segmentation, website

  • [ECCV 22] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model, website

  • [ICCV 23] VLPart: Going Denser with Open-Vocabulary Part Segmentation, website

  • LangSAM Github repo, combining Grounding DINO and SAM
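
LangSAM-style pipelines listed above compose two stages: an open-vocabulary detector turns a text prompt into boxes, and a promptable segmenter turns each box into a mask. The sketch below shows only that control flow; `detect_boxes` and `segment_box` are hypothetical stand-ins, not the actual Grounding DINO or SAM APIs.

```python
import numpy as np
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def detect_then_segment(
    image: np.ndarray,
    prompt: str,
    detect_boxes: Callable[[np.ndarray, str], List[Box]],
    segment_box: Callable[[np.ndarray, Box], np.ndarray],
) -> List[np.ndarray]:
    """Text prompt -> boxes -> one binary mask per box (LangSAM-style flow)."""
    boxes = detect_boxes(image, prompt)              # open-vocabulary detection stage
    return [segment_box(image, b) for b in boxes]    # promptable segmentation stage

# Dummy stand-ins so the sketch runs without either real model.
def _dummy_detect(image, prompt):
    return [(8, 8, 32, 32)]

def _dummy_segment(image, box):
    x0, y0, x1, y1 = box
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

if __name__ == "__main__":
    img = np.zeros((64, 64, 3), dtype=np.uint8)
    masks = detect_then_segment(img, "mug handle", _dummy_detect, _dummy_segment)
    print(len(masks), masks[0].sum())
```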

9. Pose Estimation and Tracking

  • [CVPR 24 (Highlight)] FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects, website

  • [CVPR 23 (Highlight)] GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts, website

  • [arXiv 23] GAMMA: Generalizable Articulation Modeling and Manipulation for Articulated Objects, website

  • [arXiv 24] ManiPose: A Comprehensive Benchmark for Pose-aware Object Manipulation in Robotics, website

  • [ICCV 23] AffordPose: A Large-scale Dataset of Hand-Object Interactions with Affordance-driven Hand Pose, website

  • [CVPR 23] BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects, website

  • [arXiv 24] WiLoR: End-to-end 3D hand localization and reconstruction in-the-wild, website

10. Humanoid

  • [arXiv 24] HumanPlus: Humanoid Shadowing and Imitation from Humans, website

11. Dataset & Benchmark

  • [arXiv 24] Empowering Embodied Manipulation: A Bimanual-Mobile Robot Manipulation Dataset for Household Tasks, website, zhihu
  • [arXiv 24] GRUtopia: Dream General Robots in a City at Scale, Github Repo
  • [ICLR 24] AgentBoard: An Analytical Evaluation Board of Multi-Turn LLM Agents, website
  • [arXiv 24] RoboCAS: A Benchmark for Robotic Manipulation in Complex Object Arrangement Scenarios, Github repo
  • [arXiv 24] BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark, website
  • [arXiv 24] Evaluating Real-World Robot Manipulation Policies in Simulation, website
  • [arXiv 23] Objaverse-XL: A Universe of 10M+ 3D Objects, website

12. Hardware

  • [arXiv 24] DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation, website

13. 2D to 3D Generation

  • [arXiv 24] Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image, website

14. Gaussian Splatting

  • [SIGGRAPH 24] 2DGS: 2D Gaussian Splatting for Geometrically Accurate Radiance Fields, website

15. Robotics for Medical

  • [arXiv 24] Surgical Robot Transformer: Imitation Learning for Surgical Tasks, website

TO READ

  1. Stabilizing Transformers for Reinforcement Learning

    • Summary: Proposes Gated Transformer-XL (GTrXL), a modified Transformer architecture that addresses the optimization difficulties of standard Transformers in reinforcement learning. By reordering the layer normalization and introducing a gating mechanism, GTrXL outperforms LSTMs in partially observable environments (a rough sketch of the gating idea follows this list).
    • Link
  2. CoBERL: Contrastive BERT for Reinforcement Learning

    • Summary: Introduces CoBERL, which combines a contrastive loss with a Transformer architecture, using bidirectional masked prediction and contrastive learning to improve data efficiency and performance in reinforcement learning.
    • Link
  3. Adaptive Transformers in RL

    • Summary: Explores Transformer models with adaptive attention spans for reinforcement learning, finding that this approach improves performance in environments that require long-term dependencies.
    • Link
  4. Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation

    • Summary: Proposes Actor-Learner Distillation (ALD), which distills knowledge from a large learner model into a small actor model to improve the sample efficiency of Transformers in reinforcement learning.
    • Link
  5. Deep Transformer Q-Networks for Partially Observable Reinforcement Learning

    • Summary: Introduces Deep Transformer Q-Networks (DTQN), a reinforcement learning architecture that uses the Transformer's self-attention mechanism to handle partially observable tasks, demonstrating its effectiveness across several challenging environments.
    • Link
  6. CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer

    • Summary: CtrlFormer is a Transformer architecture that improves the sample efficiency of visual control tasks by learning transferable state representations, with particular emphasis on cross-task transfer learning.
    • Link
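
As a rough illustration of the gating idea mentioned in item 1 above: GTrXL replaces the residual additions in a Transformer block with GRU-style gates so that each block can start out close to an identity map. The snippet below is a minimal NumPy sketch of one such gating unit under that reading of the paper; the weight initialisation, shapes, and bias value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUGate:
    """GRU-style gate used in place of a residual connection (GTrXL-style sketch).

    y = (1 - z) * x + z * h_tilde, where x is the skip input and h_tilde mixes
    x with the sublayer output y_sub. With the bias b_g positive, the gate z
    starts near 0, so the block starts near an identity map.
    """
    def __init__(self, d, b_g=2.0, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d)
        self.W_r, self.U_r = rng.uniform(-s, s, (d, d)), rng.uniform(-s, s, (d, d))
        self.W_z, self.U_z = rng.uniform(-s, s, (d, d)), rng.uniform(-s, s, (d, d))
        self.W_h, self.U_h = rng.uniform(-s, s, (d, d)), rng.uniform(-s, s, (d, d))
        self.b_g = b_g  # bias that keeps the gate mostly closed early in training

    def __call__(self, x, y_sub):
        r = sigmoid(x @ self.W_r + y_sub @ self.U_r)          # reset gate
        z = sigmoid(x @ self.W_z + y_sub @ self.U_z - self.b_g)  # update gate
        h_tilde = np.tanh((r * x) @ self.W_h + y_sub @ self.U_h)
        return (1.0 - z) * x + z * h_tilde

if __name__ == "__main__":
    gate = GRUGate(d=8)
    x, y_sub = np.ones((1, 8)), np.zeros((1, 8))
    print(gate(x, y_sub).shape)  # (1, 8)
```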

  • Sapiens: Foundation for Human Vision Models, https://about.meta.com/realitylabs/codecavatars/sapiens
  • General Flow as Foundation Affordance for Scalable Robot Learning, https://general-flow.github.io/