Publications

(*) denotes equal contribution

2024

  1. arXiv
    Tensor Decomposition Meets RKHS: Efficient Algorithms for Smooth and Misaligned Data
    Brett W. Larsen, Tamara G. Kolda, Anru R. Zhang, and Alex H. Williams
    arXiv preprint arXiv:2408.05677, 2024
  2. COLM
    Does Your Data Spark Joy? Performance Gains from Domain Upsampling at the End of Training
    Cody Blakeney*, Mansheej Paul*, Brett W. Larsen*, Sean Owen, and Jonathan Frankle
    Conference on Language Modeling (COLM), 2024
  3. ICLR
    Estimating Shape Distances on Neural Representations with Limited Samples
    Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, and Alex H. Williams
    International Conference on Learning Representations (ICLR), 2024
    Poster presentation at COSYNE 2024.

2023

  1. UniReps
    Duality of Bures and Shape Distances with Implications for Comparing Neural Representations
    Sarah E. Harvey, Brett W. Larsen, and Alex H. Williams
    Proceedings of the 1st Workshop on Unifying Representations in Neural Models (UniReps), 2023
    Best Proceedings Paper Honorable Mention
  2. ICLR
    Unmasking the Lottery Ticket Hypothesis: What’s Encoded in a Winning Ticket’s Mask?
    Mansheej Paul*, Feng Chen*, Brett W. Larsen*, Jonathan Frankle, Surya Ganguli, and Gintare Karolina Dziugaite
    International Conference on Learning Representations (ICLR), 2023
    Notable Top 25% (Spotlight).

2022

  1. NeurIPS
    Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks
    Mansheej Paul*, Brett W. Larsen*, Surya Ganguli, Jonathan Frankle, and Gintare Karolina Dziugaite
    36th Conference on Neural Information Processing Systems (NeurIPS), 2022
    Spotlight presentation at the Sparsity in Neural Networks (SNN) Workshop 2022.
  2. SIMAX
    Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition
    Brett W. Larsen and Tamara G. Kolda
    SIAM Journal on Matrix Analysis and Applications (SIMAX), 2022
  3. PLOS Comp Bio
    Towards a More General Understanding of the Algorithmic Utility of Recurrent Connections
    Brett W. Larsen and Shaul Druckmann
    PLOS Computational Biology, 2022
    Poster presentation at COSYNE 2019.
  4. ICLR
    How Many Degrees of Freedom Do We Need to Train Deep Networks: A Loss Landscape Perspective
    Brett W. Larsen, Stanislav Fort, Nic Becker, and Surya Ganguli
    International Conference on Learning Representations (ICLR), 2022