---
title: Home
---

# Games Optimization Algorithms Learning

The Games, Optimization, Algorithms, and Learning Lab (GoalLab) studies the theory of machine learning and its interface with learning in games and algorithmic game theory, optimization, dynamical systems, and probability and statistics.

## Research Highlights

{% capture text %} Our recent focus has been on finding Nash equilibria in Markov games. Two representative papers compute Nash equilibria in Adversarial Team Markov Games and in Markov Potential Games. Other works include the analysis of natural policy gradient methods in multi-agent settings.
{:.center} {% endcapture %} {% include feature.html image="images/papers/advteamgames.png" title="Multi-agent Reinforcement Learning" text=text %}
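For a feel of why gradient dynamics behave well in potential games, here is a toy sketch (assumptions: a single state, two players, a hand-picked coordination payoff, and an illustrative stepsize and seed; this is not the Markov setting of the papers above). Independent softmax policy gradient ascends the shared payoff and settles on a pure Nash equilibrium:

```python
# Toy sketch: independent softmax policy gradient in a one-state
# coordination game, the simplest instance of a potential game.
# The payoff matrix, stepsize, and seed are assumptions for the demo.
import numpy as np

A = np.array([[1.0, 0.0],   # shared payoff: both players are rewarded
              [0.0, 1.0]])  # for choosing the same action

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

rng = np.random.default_rng(1)
th1, th2 = rng.normal(size=2), rng.normal(size=2)  # each player's logits
eta = 0.5

for _ in range(500):
    p1, p2 = softmax(th1), softmax(th2)
    u1 = A @ p2              # player 1's expected payoff per action
    u2 = A.T @ p1            # player 2's expected payoff per action
    # exact softmax policy gradient of the shared payoff p1^T A p2
    th1 += eta * p1 * (u1 - p1 @ u1)
    th2 += eta * p2 * (u2 - p2 @ u2)

print(softmax(th1), softmax(th2))  # both concentrate on the same action
```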

{% capture text %} The PI and the group are actively working on learning in games. Papers include results on last-iterate convergence using optimism in zero-sum games and beyond (e.g., potential games). Other works prove cycling or even chaotic behavior of learning dynamics, analyze the average performance of learning in games, and study stability in evolutionary dynamics. {:.center} {% endcapture %} {% include feature.html image="images/lastiterate.png" title="Learning in Games" flip=true text=text %}
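For intuition, here is a minimal sketch of last-iterate convergence under optimism (an illustration with an assumed toy game and stepsize, not the exact setting of the papers above). On the bilinear zero-sum game f(x, y) = xy, plain gradient descent-ascent cycles around the equilibrium at the origin, while optimistic gradient descent-ascent (OGDA) converges in its last iterate:

```python
# Illustrative sketch: OGDA on the bilinear zero-sum game f(x, y) = x * y,
# whose unique equilibrium is (0, 0). The stepsize and iteration count
# below are assumptions for the demo.

eta = 0.1                    # constant stepsize
x, y = 1.0, 1.0              # initial strategies
gx_prev, gy_prev = 0.0, 0.0  # previous gradients, used by the optimism term

for _ in range(2000):
    gx, gy = y, x            # grad_x f = y, grad_y f = x
    # OGDA: a gradient step plus an extrapolation that "anticipates"
    # the opponent by reusing the previous gradient.
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy

print(x, y)  # the last iterate approaches the equilibrium (0, 0)
```

Dropping the previous-gradient correction recovers plain gradient descent-ascent, whose iterates spiral away from the origin under the same constant stepsize.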

{% capture text %} Inspired by the success of Stochastic Gradient Descent in training neural networks, the group has worked on non-convex optimization. Using techniques from dynamical systems, we show that Gradient Descent and other first-order methods with constant stepsize avoid strict saddle points. We extend these results to the multiplicative weights update (polytope constraints) and to time-varying stepsizes. Inspired by the success of Generative Adversarial Networks, the group has also worked on min-max optimization for non-convex non-concave landscapes, characterizing the limit points of first-order methods. {:.center} {% endcapture %} {% include feature.html image="images/nonconvex.png" title="Non-convex and min-max optimization" text=text %}

{% capture text %} The group has works on proper learning in graphical models, with applications to learning from dependent data (see also this paper for a setting using hypergraphs), structure learning from truncated data, and learning mixtures from truncated data. Other works bound the mixing time of Markov chains (see also a follow-up paper). {:.center} {% endcapture %} {% include feature.html image="images/trajectories.png" title="Probability and Statistics" flip=true text=text %}
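As a small illustration of the saddle-avoidance result in the non-convex optimization block above (a toy sketch; the objective, stepsize, and seed are assumptions for the demo, not the papers' general statement), gradient descent with a constant stepsize started from a random point escapes the strict saddle of f(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2 at the origin and converges to one of the minima at (±1, 0):

```python
# Illustrative sketch: constant-stepsize gradient descent escapes a strict
# saddle. f(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2 has a strict saddle at (0, 0)
# (Hessian eigenvalues -1 and +1) and minima at (+1, 0) and (-1, 0).
import numpy as np

rng = np.random.default_rng(0)
eta = 0.05                         # constant stepsize (assumed small enough)
p = rng.normal(scale=0.1, size=2)  # random start near the saddle

for _ in range(1000):
    x, y = p
    p = p - eta * np.array([x * (x**2 - 1), y])  # gradient of f

print(p)  # ends near (+1, 0) or (-1, 0), not at the saddle (0, 0)
```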

{% capture text %} Our group has focused on the expressivity of neural networks. Using techniques from dynamical systems, we prove tradeoffs between depth and width in feedforward neural networks. Here is also a follow-up paper that strengthens these results. {:.center} {% endcapture %} {% include feature.html image="images/neuralmap.png" title="Deep Learning Theory" text=text %}

## Our TEAM

{% capture text %} Our team includes 3 PhD students, multiple undergraduate students, and external collaborators. {% include link.html link="team" icon="fas fa-arrow-right" text="Meet our team" flip=true %} {:.center} {% endcapture %} {% include feature.html image="images/team.png" link="team" text=text %}

## Latest NEWS

- 9/2024: One paper accepted at NeurIPS 2024.
- 9/2024: Two papers accepted at WINE 2024.
- 5/2024: One paper accepted at ICML 2024.
- 4/2024: One paper accepted at UAI 2024.
- 1/2024: Two papers accepted at ICLR 2024.
- 12/2023: Two papers accepted at AAAI 2024.
- 9/2023: Four papers accepted at NeurIPS 2023.
- 5/2023: One paper accepted at EC 2023.
- 4/2023: One paper accepted at ICML 2023 as an oral.
- 2/2023: New paper on time-varying games.
- 1/2023: Two papers accepted at ICLR 2023, one as an oral.