Data Augmentations in Deep Weight Spaces

Nov 17, 2023
Aviv Shamsian* (Bar Ilan University)
David W. Zhang* (University of Amsterdam)
Aviv Navon (Bar Ilan University)
Yan Zhang (Samsung - SAIT AI Lab, Montreal)
Miltiadis Kofinas (University of Amsterdam)
Idan Achituve (Bar Ilan University)
Riccardo Valperga (University of Amsterdam)
Gertjan J. Burghouts (TNO)
Efstratios Gavves (University of Amsterdam)
Cees G. M. Snoek (University of Amsterdam)
Ethan Fetaya (Bar Ilan University)
Gal Chechik (Bar Ilan University, NVIDIA)
Haggai Maron (Technion, NVIDIA)
Abstract
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization. Recent works have designed architectures for effective learning in that space that take into account its unique permutation-equivariant structure. Unfortunately, these architectures so far suffer from severe overfitting and have been shown to benefit from large datasets. This poses a significant challenge because generating data for this learning setup is laborious and time-consuming, since each data sample is a full set of network weights that has to be trained. In this paper, we address this difficulty by investigating data augmentations for weight spaces: a set of techniques that enable generating new data examples on the fly, without having to train additional input weight space elements. We first review several recently proposed data augmentation schemes and divide them into categories. We then introduce a novel augmentation scheme based on the Mixup method. We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate, which can be valuable for future studies.
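To make the Mixup idea concrete, below is a minimal sketch of how a "direct" Mixup-style augmentation could be applied in weight space, where each data sample is the flattened parameter vector of a trained network. This is an illustration under assumptions, not the paper's exact method: the paper also considers variants that account for neuron-permutation symmetry (e.g., aligning the two networks before interpolating), which are omitted here, and the function and parameter names (`mixup_weights`, `alpha`) are hypothetical.

```python
# Illustrative sketch of Mixup applied to weight-space data samples.
# Each sample is a flattened weight vector of a trained network plus a label.
import numpy as np

def mixup_weights(w1, w2, y1, y2, alpha=0.2, rng=None):
    """Interpolate two weight-space samples and their labels.

    w1, w2: 1-D arrays holding the flattened weights of two trained networks.
    y1, y2: one-hot (or soft) label vectors for the downstream task.
    alpha:  Beta-distribution parameter controlling interpolation strength.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    w_mix = lam * w1 + (1.0 - lam) * w2    # convex combination of the weights
    y_mix = lam * y1 + (1.0 - lam) * y2    # matching combination of the labels
    return w_mix, y_mix

# Example: generate an augmented sample on the fly during training.
if __name__ == "__main__":
    d, c = 10_000, 10                      # weight dimension, number of classes
    w_a, w_b = np.random.randn(d), np.random.randn(d)
    y_a, y_b = np.eye(c)[3], np.eye(c)[7]
    w_new, y_new = mixup_weights(w_a, w_b, y_a, y_b)
```

Because such samples are produced on the fly from already-trained weights, they add training examples for the weight-space model without requiring any additional networks to be trained.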
Type
Publication
In Workshop on Symmetry and Geometry in Neural Representations (NeurReps), NeurIPS 2023