<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Hannah Lawrence</title>
<meta name="author" content="Hannah Lawrence">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/MITLogo.png">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Hannah Lawrence</name>
</p>
<!--
<p>
Note: this page is under construction.
</p> -->
<p>I am a PhD student in machine learning at MIT, where I am fortunate to be advised by <a href="https://people.csail.mit.edu/moitra/">Ankur Moitra</a>. I am also a member of the wonderful <a href="https://atomicarchitects.github.io/">Atomic Architects</a>, led by Tess Smidt. Previously, I was a summer research intern at the Open Catalyst Team at Meta FAIR, studying equivariance for chemistry applications.
Before graduate school, I was a research analyst at the <a href="https://www.simonsfoundation.org/flatiron/center-for-computational-mathematics/">Center for Computational Mathematics</a> of the <a href="https://www.simonsfoundation.org/flatiron/">Flatiron Institute</a> in New York, where I worked on developing algorithms at the interface of equivariant deep learning and signal processing for <a href="https://en.wikipedia.org/wiki/Cryogenic_electron_microscopy">cryoEM</a>. Broadly, I enjoy developing theoretically principled tools for deep learning (often in scientific or image domains), with a focus on learning with symmetries.
</p>
<p>
I spent summer 2019 at <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-new-england/">Microsoft Research</a>, where I was lucky to be mentored by <a href=
"https://people.cs.umass.edu/~cmusco/">Cameron Musco</a>. I've also spent productive summers at <a href="https://www.reservoir.com/">Reservoir Labs</a> and the <a href="https://www.simonsfoundation.org/flatiron/center-for-computational-biology/">Center for Computational Biology</a>. I was an undergrad at Yale in applied math and computer science, where I had the good fortune of being advised by <a href="https://seas.yale.edu/faculty-research/faculty-directory/amin-karbasi">Amin Karbasi</a> and <a href="http://www.cs.yale.edu/homes/spielman/">Dan Spielman</a>.
</p>
<p>
Finally, I co-founded the <a href="https://bostonsymmetry.github.io/">Boston Symmetry Group</a>, which hosts a recurring workshop for researchers interested in symmetries in machine learning. Follow us on <a href="https://twitter.com/bostonsymmetry">Twitter</a>, shoot us an <a href="mailto:[email protected]">email</a>, or join our <a href="https://groups.google.com/g/boston-symmetry">mailing list</a> if you're interested in attending!
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a>  / 
<a href="https://github.com/hannahlawrence/">Github</a>  / 
<a href="https://www.linkedin.com/in/hannah-lawrence-417b5a130/"> LinkedIn </a>  / 
<!-- <a href="data/CVFall2020_forwebsite.pdf"> CV </a>  /  -->
<a href="https://twitter.com/HLawrenceCS"> Twitter </a>  / 
<a href="https://scholar.google.com/citations?user=tYE9bLoAAAAJ&hl=en"> Google Scholar </a>   
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/HannahLawrence.jpg"><img style="width:100%;max-width:100%" alt="profile photo" src="images/HannahLawrence_circle.jpg" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Research</heading>
<p>
My primary research interests include symmetry-aware (equivariant) machine learning and scientific applications. <!-- sparse recovery, and scientific imaging/inverse problems, and especially applications in which these paradigms intersect. -->In addition, I enjoy developing theoretically principled tools for deep learning, for applications from vision to interpretability to PDEs.
<br />
<br />
Here are a few of the high-level questions I've been thinking about recently (or at least, as of the last time I updated this website):
<ul>
<li> <i> What kinds of approximate symmetries arise in practice, e.g. in scientific applications? How should this structure inform our choice of architecture, and when is approximate symmetry still a powerful enough inductive bias to benefit learning? </i> </li>
<li> <i> What is the role of equivariance, e.g. to permutations, in large language models (LLMs)? To what extent is equivariance learned? To what extent should it be enforced? </i> </li>
<li> <i> How can we harness equivariance to learn useful representations, especially in applications with complicated symmetries (such as PDEs)? </i> </li>
<!-- <li> <i> How can we enforce equivariance to non-compact groups, like the special linear group? </i> </li> -->
<li> <i> How can we make canonicalization work, in theory and in practice, as an approach for enforcing symmetries? </i> </li>
<li> <i> Does equivariance have a role to play in NLP? In fairness? </i> </li>
<!--
<li> When might the minimax-optimal mini-batching algorithm for switching-constrained online convex optimization be useful in practical applications, and what real-world factors beyond switching might we care about? </li>
<li>How much hot chocolate can I consume at a single research institution? </li> -->
</ul>
<!-- And here are some older questions:
<ul>
<li> <i> Are there efficient methods for (1) enforcing equivariance w.r.t large groups (given access to a uniform sampler) on linear/kernel learning? (2) enforcing approximate equivariance? </i> </li>
<li> <i> Are there efficient universal approximation for certain subclasses of equivariant functions? What is the right way of measuring the "smoothness" of an equivariant function? </i> </li>
<li> <i> Is there a clear theoretical justification for the empirical success of equivariance as an inductive prior in neural architectures? What's the right framework for formulating this? </i> </li>
<li> <i> Given data, can one learn an underlying dictionary if the corresponding coefficients are not sparse, but rather come from a known generative model? </i> </li>
<li> <i> When does equivariance with respect to the wreath product group arise in deep learning applications? </i> </li>
<li> <i> Is it possible to characterize the class of generative models under which Fourier phase retrieval is well-conditioned? </i> </li>
<li> <i> What's the fastest way to rotationally align two spherical functions? </i> </li>
<li> <i> What generalizations of (1) the restricted isometry property and (2) leverage score sampling might be useful for off-grid sparse recovery? </i></li>
<li> When might the minimax-optimal mini-batching algorithm for switching-constrained online convex optimization be useful in practical applications, and what real-world factors beyond switching might we care about? </li>
<li>How much hot chocolate can I consume at a single research institution? </li>
</ul> -->
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="canon_stop()" onmouseover="canon_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='canon_image'><img src='images/lexsort_after.jpg'></div>
<img src='images/lexsort_before.jpg'>
</div>
<script type="text/javascript">
function canon_start() {
document.getElementById('canon_image').style.opacity = "1";
}
function canon_stop() {
document.getElementById('canon_image').style.opacity = "0";
}
canon_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2402.16077.pdf">
<papertitle>Equivariant Frames and the Impossibility of Continuous Canonicalization</papertitle>
</a>
<br>
<a href="https://nadavdym.github.io/">Nadav Dym<sup>*</sup></a>,
<strong>Hannah Lawrence<sup>*</sup></strong>,
<a href="https://jwsiegel2510.github.io/">Jonathan Siegel<sup>*</sup></a>
<br>
<em>Under review</em>, 2023.
<br>
<p></p>
<p>We demonstrate that, perhaps surprisingly, there is no continuous canonicalization (or even efficiently implementable frame) for many symmetry groups. We introduce a notion of weighted frames to circumvent this issue.</p>
</td>
</tr>
</tbody></table>
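For the symmetric group, canonicalization is straightforward; the paper's point is that for continuous groups such as rotations, no continuous canonicalization exists. A toy sketch (my own illustration, not from the paper) of canonicalization for permutation symmetry, where sorting picks a canonical orbit representative:

```python
import numpy as np

def canonicalize(x):
    """Map x to a canonical representative of its permutation orbit by sorting."""
    return np.sort(x)

def invariant_f(x):
    """Composing any function with canonicalization yields a permutation-invariant function."""
    z = canonicalize(x)
    return float(np.tanh(z).sum())

x = np.array([3.0, -1.0, 2.0])
shuffled = x[[2, 0, 1]]
print(np.isclose(invariant_f(x), invariant_f(shuffled)))  # True: invariant under permutation
```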
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="poly_stop()" onmouseover="poly_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='poly_image'><img src='images/poly_after.jpg'></div>
<img src='images/poly_before.jpg'>
</div>
<script type="text/javascript">
function poly_start() {
document.getElementById('poly_image').style.opacity = "1";
}
function poly_stop() {
document.getElementById('poly_image').style.opacity = "0";
}
poly_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2312.02146.pdf">
<papertitle>Learning Polynomial Problems with SL(2,R) Equivariance</papertitle>
</a>
<br>
<strong>Hannah Lawrence<sup>*</sup></strong>,
<a href="https://harris-mit.github.io/">Mitchell Harris<sup>*</sup></a>
<br>
<em>ICLR</em> 2024, to appear.
<br>
<p></p>
<p>We propose machine learning approaches, which are equivariant with respect to the non-compact group of area-preserving transformations SL(2,R), for learning to solve polynomial optimization problems.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="hardness_stop()" onmouseover="hardness_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='hardness_image'><img src='images/hardness.jpg'></div>
<img src='images/hardness.jpg'>
</div>
<script type="text/javascript">
function hardness_start() {
document.getElementById('hardness_image').style.opacity = "1";
}
function hardness_stop() {
document.getElementById('hardness_image').style.opacity = "0";
}
hardness_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2401.01869.pdf">
<papertitle>On the hardness of learning under symmetries</papertitle>
</a>
<br>
<a href="https://bkiani.github.io/">Bobak T. Kiani<sup>*</sup></a>,
<a href="https://scholar.google.com/citations?user=WhFGh74AAAAJ&hl=en">Thien Le<sup>*</sup></a>,
<strong>Hannah Lawrence<sup>*</sup></strong>,
<a href="https://people.csail.mit.edu/stefje/">Stefanie Jegelka</a>,
<a href="http://melanie-weber.com/">Melanie Weber</a>
<br>
<em>ICLR</em> 2024, to appear.
<br>
<p></p>
<p>We give statistical query lower bounds for learning symmetry-preserving neural networks and other invariant functions.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="ssl_stop()" onmouseover="ssl_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='ssl_image'><img src='images/ssl_after.jpg'></div>
<img src='images/ssl_before.jpg'>
</div>
<script type="text/javascript">
function ssl_start() {
document.getElementById('ssl_image').style.opacity = "1";
}
function ssl_stop() {
document.getElementById('ssl_image').style.opacity = "0";
}
ssl_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2307.05432.pdf">
<papertitle>Self-Supervised Learning with Lie Symmetries for Partial Differential Equations</papertitle>
</a>
<br>
<a href="https://gregoiremialon.github.io/">Grégoire Mialon<sup>*</sup></a>,
<a href="https://garridoq.com/">Quentin Garrido<sup>*</sup></a>,
<strong>Hannah Lawrence</strong>,
<a href="https://scholar.google.ca/citations?user=XdyK1qoAAAAJ&hl=en">Danyal Rehman</a>,
<a href="https://bkiani.github.io/">Bobak Kiani</a>
<br>
<em>ICLR</em>, 2023.
<br>
<p></p>
<p>We apply self-supervised learning to partial differential equations, using the equations' Lie point symmetries as augmentations.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="atomistic_stop()" onmouseover="atomistic_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='atomistic_image'><img src='images/atomistic.jpg'></div>
<img src='images/atomistic.jpg'>
</div>
<script type="text/javascript">
function atomistic_start() {
document.getElementById('atomistic_image').style.opacity = "1";
}
function atomistic_stop() {
document.getElementById('atomistic_image').style.opacity = "0";
}
atomistic_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2307.08423.pdf">
<papertitle>Artificial Intelligence for Science in
Quantum, Atomistic, and Continuum
Systems</papertitle>
</a>
<br>
<a href="https://scholar.google.com/citations?user=DrsDZg4AAAAJ&hl=en">Xuan Zhang<sup>*</sup></a>,
<a href="https://people.tamu.edu/~limei/">Limei Wang<sup>*</sup></a>,
<a href="https://scholar.google.com/citations?user=NtqpyUAAAAAJ&hl=en">Jacob Helwig<sup>*</sup></a>,
<a href="https://people.tamu.edu/~yzluo/">Youzhi Luo<sup>*</sup></a>,
<a href="https://congfu.github.io/">Cong Fu<sup>*</sup></a>,
<a href="https://www.linkedin.com/in/yaochen-xie-44602994/?locale=en_US">Yaochen Xie<sup>*</sup></a>,
...,
<strong>Hannah Lawrence</strong>,
...,
<a href="https://people.tamu.edu/~sji/">Shuiwang Ji</a>
<br>
<em>Under review</em>, 2023.
<br>
<p></p>
<p>A survey of machine learning for physics.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="distill_stop()" onmouseover="distill_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='distill_image'><img src='images/with_svm.jpg'></div>
<img src='images/without_svm.jpg'>
</div>
<script type="text/javascript">
function distill_start() {
document.getElementById('distill_image').style.opacity = "1";
}
function distill_stop() {
document.getElementById('distill_image').style.opacity = "0";
}
distill_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2206.14754.pdf">
<papertitle>Distilling Model Failures as Directions in Latent Space</papertitle>
</a>
<br>
<a href="http://people.csail.mit.edu/saachij/">Saachi Jain<sup>*</sup></a>,
<strong>Hannah Lawrence<sup>*</sup></strong>,
<a href="https://people.csail.mit.edu/moitra/">Ankur Moitra</a>,
<a href="https://madry.mit.edu/">Aleksander Madry</a>
<br>
<em>ICLR (spotlight presentation)</em>, 2023. See also the <a href="https://gradientscience.org/failure-directions/">blog post</a>.
<br>
<p></p>
<p>We present a framework for automatically identifying and captioning coherent patterns of errors made by any trained model. The key? Keeping it simple: linear classifiers in a shared vision-language embedding space.</p>
</td>
</tr>
</tbody></table>
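The flavor of the approach can be sketched in a few lines of numpy (my own synthetic illustration, not the paper's implementation): fit a linear classifier in an embedding space to separate a model's correct and incorrect examples; the classifier's weight vector is then a candidate "failure direction."

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 32
emb = rng.normal(size=(n, d))          # stand-in for shared vision-language embeddings
# Hypothetical setup: the model errs when a hidden attribute (coordinate 0) is large
correct = (emb[:, 0] + 0.1 * rng.normal(size=n)) < 0.5

# Linear classifier (ridge on +/-1 labels) separating correct vs. incorrect examples
y = np.where(correct, 1.0, -1.0)
lam = 1e-2
w = np.linalg.solve(emb.T @ emb + lam * np.eye(d), emb.T @ y)
failure_direction = -w / np.linalg.norm(w)  # unit vector pointing toward errors

print(failure_direction[0])  # close to 1: recovers the hidden error-driving attribute
```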
<!-- positional encodings -->
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="gulp_stop()" onmouseover="gulp_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='cauli_image'><img src='images/gulp_width.jpg'></div>
<img src='images/gulp_depth.jpg'>
</div>
<script type="text/javascript">
function gulp_start() {
document.getElementById('cauli_image').style.opacity = "1";
}
function gulp_stop() {
document.getElementById('cauli_image').style.opacity = "0";
}
gulp_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2210.06545">
<papertitle>GULP: a prediction-based metric between representations</papertitle>
</a>
<br>
<a href="http://web.mit.edu/eboix/www/">Enric Boix-Adsera</a>,
<strong>Hannah Lawrence</strong>,
<a href="https://scholar.google.com/citations?user=CKYZLxYAAAAJ&hl=en">George Stepaniants</a>,
<a href="https://math.mit.edu/~rigollet/">Philippe Rigollet</a>
<br>
<em>NeurIPS (Oral Presentation)</em>, 2022
<br>
<p></p>
<p>We define a family of distance pseudometrics for comparing learned data representations, directly inspired by transfer learning. In particular, we define a distance between two representations based on how differently (worst-case over all downstream, bounded linear predictive tasks) they perform under ridge regression.</p>
</td>
</tr>
</tbody></table>
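The spirit of the metric can be sketched by comparing the in-sample ridge-regression prediction operators induced by two representations (a simplified proxy of my own, not the paper's exact estimator). One sanity check this reproduces: ridge predictions, and hence the distance, are unchanged under an orthogonal transform of the representation.

```python
import numpy as np

def ridge_hat(X, lam):
    """In-sample ridge prediction operator H, so that predictions are H @ y."""
    n, d = X.shape
    return X @ np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T)

def prediction_distance(X, Y, lam=1e-1):
    """Proxy distance: worst-case (over unit-norm targets) gap between
    the ridge predictions produced from the two representations."""
    D = ridge_hat(X, lam) - ridge_hat(Y, lam)
    return np.linalg.norm(D, 2)  # spectral norm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
print(prediction_distance(X, X @ R))          # ~0: invariant to orthogonal transforms
```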
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="barron_stop()" onmouseover="barron_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='barron_image'><img src='images/sphere_two.jpg'></div>
<img src='images/sphere_one.jpg'>
</div>
<script type="text/javascript">
function barron_start() {
document.getElementById('barron_image').style.opacity = "1";
}
function barron_stop() {
document.getElementById('barron_image').style.opacity = "0";
}
barron_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2210.06545">
<papertitle>Barron's Theorem for Equivariant Networks</papertitle>
</a>
<br>
<strong>Hannah Lawrence</strong>
<br>
<em>NeurIPS Workshop: <a href="https://www.neurreps.org/">Symmetry and Geometry in Neural Representations</a> (Poster, to appear)</em>, 2022
<br>
<p></p>
<p>We extend Barron’s Theorem for efficient approximation to invariant neural networks, in the cases of invariance to a permutation subgroup or the rotation group.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="sublinear_stop()" onmouseover="sublinear_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='sublinear_image'><img src='images/toep_noisy.jpg'></div>
<img src='images/toep_clean.jpg'>
</div>
<script type="text/javascript">
function sublinear_start() {
document.getElementById('sublinear_image').style.opacity = "1";
}
function sublinear_stop() {
document.getElementById('sublinear_image').style.opacity = "0";
}
sublinear_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2211.11328.pdf">
<papertitle>Toeplitz Low-Rank Approximation with Sublinear Query Complexity</papertitle>
</a>
<br>
<a href="https://theory.epfl.ch/kapralov/">Michael Kapralov</a>,
<strong>Hannah Lawrence</strong>,
<a href="https://people.epfl.ch/mikhail.makarov?lang=en">Mikhail Makarov</a>,
<a href="https://people.cs.umass.edu/~cmusco/">Cameron Musco</a>,
<a href="https://ksheth96.github.io/">Kshiteej Sheth</a>
<br>
<em>Symposium on Discrete Algorithms (SODA), to appear</em>, 2023
<br>
<p></p>
<p>We prove that any nearly low-rank Toeplitz positive semidefinite matrix has a low-rank approximation that is itself Toeplitz, and give a sublinear query complexity algorithm for finding it. </p>
</td>
</tr>
</tbody></table>
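For intuition about why such structure is plausible (a toy check of my own, not the paper's algorithm): a Toeplitz covariance built from a few sinusoids is both exactly Toeplitz and exactly low rank, with rank twice the number of frequencies.

```python
import numpy as np

n = 64
freqs = [0.05, 0.21, 0.37]  # three well-separated sinusoidal components
i = np.arange(n)
# Toeplitz PSD matrix T[j,k] = sum_f cos(2*pi*f*(j-k)); each frequency
# contributes cos*cos^T + sin*sin^T, so rank(T) <= 2 * len(freqs)
T = sum(np.cos(2 * np.pi * f * (i[:, None] - i[None, :])) for f in freqs)

eigs = np.linalg.eigvalsh(T)
num_large = int((eigs > 1e-8 * eigs.max()).sum())
print(num_large)  # 6 = 2 * number of frequencies
```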
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="bias_stop()" onmouseover="bias_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='bias_image'><img src='images/web_bias_lower.jpg'></div>
<img src='images/web_bias_upper.jpg'>
</div>
<script type="text/javascript">
function bias_start() {
document.getElementById('bias_image').style.opacity = "1";
}
function bias_stop() {
document.getElementById('bias_image').style.opacity = "0";
}
bias_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2110.06084.pdf">
<papertitle>Implicit Bias of Linear Equivariant Networks</papertitle>
</a>
<br>
<strong>Hannah Lawrence</strong>,
<a href="https://kristian-georgiev.github.io/">Kristian Georgiev</a>,
<a href="https://www.linkedin.com/in/andrew-dienes-83981914a">Andrew Dienes</a>,
<a href="https://bkiani.github.io/">Bobak T. Kiani</a>
<br>
<em>Appearing at ICML</em>, 2022
<br>
<p></p>
<p>We characterize the implicit bias of linear group-convolutional networks trained by gradient descent. In particular, we show that the learned linear function is biased towards low-rank matrices in Fourier space.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="phase_stop()" onmouseover="phase_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='phase_image'><img src='images/phase_after.png'></div>
<img src='images/phase_before.png'>
</div>
<script type="text/javascript">
function phase_start() {
document.getElementById('phase_image').style.opacity = "1";
}
function phase_stop() {
document.getElementById('phase_image').style.opacity = "0";
}
phase_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2012.07386">
<papertitle>Phase Retrieval with Holography and Untrained Priors: Tackling the Challenges of Low-Photon Nanoscale Imaging</papertitle>
</a>
<br>
<strong>Hannah Lawrence <sup>*</sup> </strong>,
<a href="https://davidbar.org/">David A. Barmherzig <sup>*</sup></a>,
<a href="https://math.yale.edu/people/henry-li">Henry Li</a>,
<a href="https://eickenberg.github.io/">Michael Eickenberg</a>,
<a href="https://marylou-gabrie.github.io/">Marylou Gabrié</a>
<br>
<em>Appeared at MSML</em>, 2021
<br>
<p></p>
<p>By using a maximum-likelihood objective coupled with a deep decoder prior for images, we achieve superior image reconstruction for holographic phase retrieval, including under several challenging realistic conditions. To our knowledge, this is the first dataset-free machine learning approach for holographic phase retrieval. </p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="online_stop()" onmouseover="online_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='online_image'><img src='images/online_after.jpg'></div>
<img src='images/online_before.jpg'>
</div>
<script type="text/javascript">
function online_start() {
document.getElementById('online_image').style.opacity = "1";
}
function online_stop() {
document.getElementById('online_image').style.opacity = "0";
}
online_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/1910.10873">
<papertitle>Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition</papertitle>
</a>
<br>
<a href="http://campuspress.yale.edu/lchen/">Lin Chen</a>,
<a href="https://sites.google.com/usc.edu/qyu/home">Qian Yu</a>,
<strong>Hannah Lawrence</strong>,
<a href="https://seas.yale.edu/faculty-research/faculty-directory/amin-karbasi">Amin Karbasi</a>
<br>
<em>Appeared at NeurIPS</em>, 2020
<br>
<p></p>
<p>We establish the minimax regret of switching-constrained online convex optimization, a realistic optimization framework where algorithms must act in real-time to minimize cumulative loss, but are penalized if they are too erratic.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="toeplitz_stop()" onmouseover="toeplitz_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<div class="two" id='toeplitz_image'><img src='images/toeplitz_after.jpg'></div>
<img src='images/toeplitz_before.jpg'>
</div>
<script type="text/javascript">
function toeplitz_start() {
document.getElementById('toeplitz_image').style.opacity = "1";
}
function toeplitz_stop() {
document.getElementById('toeplitz_image').style.opacity = "0";
}
toeplitz_stop()
</script>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/1911.08015">
<papertitle>Low-Rank Toeplitz Matrix Estimation via Random Ultra-Sparse Rulers</papertitle>
</a>
<br>
<strong>Hannah Lawrence</strong>,
<a href="https://jerryzli.github.io/">Jerry Li</a>,
<a href="https://people.cs.umass.edu/~cmusco/">Cameron Musco</a>,
<a href="https://www.chrismusco.com/">Christopher Musco</a>
<br>
<em>Appeared at ICASSP</em>, 2020
<br>
<p></p>
<p>By building new, randomized "ruler" sampling constructions, we show how to use sublinear sparse Fourier transform algorithms for sample efficient, low-rank, Toeplitz covariance estimation.</p>
</td>
</tr>
</tbody></table>
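The "ruler" idea can be illustrated with a classical deterministic sparse ruler (the paper's constructions are randomized and ultra-sparse): a small set of sample positions whose pairwise differences cover every lag, so that every distinct entry of a Toeplitz covariance can still be estimated.

```python
from itertools import combinations

def covered_lags(ruler):
    """All lags (differences) measurable from pairs of marks on the ruler."""
    return {abs(a - b) for a, b in combinations(ruler, 2)} | {0}

# Classical sparse ruler: 5 marks suffice to measure every lag 0..8,
# versus the 9 marks of dense sampling
ruler = [0, 1, 2, 5, 8]
print(covered_lags(ruler) == set(range(9)))  # True
```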
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Service</heading>
</td>
</tr>
</tbody></table>
<table width="100%" align="center" border="0" cellpadding="20"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="images/YaleUniversityLogo.jpg"></td>
<td width="75%" valign="middle">
Organizer, Boston Symmetry Day, Fall and Spring 2023
<br>
<br>
Teaching Assistant, 6.S966 Symmetry and its Applications to Machine Learning, Spring 2023
<br>
<br>
<a href="https://www.hertzfoundation.org/news/volunteers-drive-community-building-connections-through-the-summer-workshop/">Hertz Foundation Summer Workshop Committee</a>, Fall 2021 and Spring 2022
<br>
<br>
Women in Learning Theory Mentor, Spring 2020
<br>
<br>
Applied Math Departmental Student Advisory Committee, Spring 2019
<br>
<br>
Dean's Committee on Science and Quantitative Reasoning, Fall 2018
<br>
<br>
Undergraduate Learning Assistant, CS 365 (Design and Analysis of Algorithms), Spring 2018
<br>
<br>
Undergraduate Learning Assistant, CS 201 (Introduction to Computer Science), Fall 2017
<br>
<br>
Undergraduate Learning Assistant, CS 223 (Data Structures and Algorithms), Spring 2017
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;font-size:small;">
<a href="https://jonbarron.info/">Website template credits.</a>
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>