@tml-epfl

Theory of Machine Learning, EPFL

Popular repositories

  1. llm-adaptive-attacks (Public)

     Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

     Shell · 142 stars · 11 forks

  2. understanding-fast-adv-training (Public)

     Understanding and Improving Fast Adversarial Training [NeurIPS 2020]

     Python · 92 stars · 12 forks

  3. sharpness-vs-generalization (Public)

     A modern look at the relationship between sharpness and generalization [ICML 2023]

     Jupyter Notebook · 42 stars · 3 forks

  4. understanding-sam (Public)

     Towards Understanding Sharpness-Aware Minimization [ICML 2022]

     Jupyter Notebook · 35 stars · 3 forks

  5. why-weight-decay (Public)

     Why Do We Need Weight Decay in Modern Deep Learning? [arXiv, Oct 2023]

     Python · 35 stars

  6. sgd-sparse-features (Public)

     SGD with large step sizes learns sparse features [ICML 2023]

     Jupyter Notebook · 31 stars · 5 forks

Repositories

Showing 10 of 11 repositories
  • llm-adaptive-attacks (Public)

    Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

    Shell · 142 stars · 11 forks · MIT license · updated Jul 2, 2024
  • icl-alignment (Public)

    Is In-Context Learning Sufficient for Instruction Following in LLMs?

    Python · 16 stars · 3 forks · Apache-2.0 license · updated May 31, 2024
  • long-is-more-for-alignment (Public)

    Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]

    Python · 9 stars · 0 forks · updated May 2, 2024
  • why-weight-decay (Public)

    Why Do We Need Weight Decay in Modern Deep Learning? [arXiv, Oct 2023]

    Python · 35 stars · 0 forks · updated Oct 9, 2023
  • sam-low-rank-features (Public)

    Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]

    Jupyter Notebook · 24 stars · 1 fork · updated Sep 22, 2023
  • sharpness-vs-generalization (Public)

    A modern look at the relationship between sharpness and generalization [ICML 2023]

    Jupyter Notebook · 42 stars · 3 forks · updated Sep 11, 2023
  • sgd-sparse-features (Public)

    SGD with large step sizes learns sparse features [ICML 2023]

    Jupyter Notebook · 31 stars · 5 forks · updated Apr 24, 2023
  • tml-epfl.github.io (Public)

    A repository storing information related to the weekly TML group meetings.

    HTML · 0 stars · 0 forks · MIT license · updated Nov 16, 2022
  • understanding-sam (Public)

    Towards Understanding Sharpness-Aware Minimization [ICML 2022]

    Jupyter Notebook · 35 stars · 3 forks · updated Jun 14, 2022
  • adv-training-corruptions (Public)

    On the effectiveness of adversarial training against common corruptions [UAI 2022]

    Python · 30 stars · 1 fork · updated May 16, 2022
