5 files changed: +0 −28 lines changed
Barlow Twins
Link: https://arxiv.org/abs/2103.03230
Implementation: https://github.com/facebookresearch/barlowtwins
-
- + does not require a large batch size
- + does not require asymmetry between the network twins, such as a predictor network
- + does not require gradient stopping
- + does not require a moving average on the weight updates
- - benefits from high-dimensional embeddings (projection_dim)
- + drives the cross-correlation matrix computed from the twin embeddings as close to the identity matrix as possible (see the loss sketch after this hunk)
"""

import torch
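For reference, a minimal sketch of the objective the removed notes describe: standardize the twin embeddings over the batch and drive their cross-correlation matrix towards the identity. The function name is hypothetical and the default lambda is the paper's reported setting, not a value taken from this repository.

import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    # z_a, z_b: (batch_size, projection_dim) embeddings of two augmented views.
    n, d = z_a.shape
    # Standardize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(dim=0)) / z_a.std(dim=0)
    z_b = (z_b - z_b.mean(dim=0)) / z_b.std(dim=0)
    # Cross-correlation matrix between the twin embeddings, shape (d, d).
    c = (z_a.T @ z_b) / n
    # Invariance term: pull the diagonal towards 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push the off-diagonal entries towards 0.
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambd * off_diag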
BYOL: Bootstrap your own latent: A new approach to self-supervised Learning
Link: https://arxiv.org/abs/2006.07733
Implementation: https://github.com/deepmind/deepmind-research/tree/master/byol
-
- TODO
- - Cosine schedule for momentum update in EMA (see the sketch after this hunk)
"""

import torch
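The removed TODO asks for the cosine momentum schedule from the BYOL paper. A minimal sketch, assuming an online/target network pair; the function names are hypothetical:

import math
import torch

def cosine_tau(step: int, total_steps: int, tau_base: float = 0.996) -> float:
    # BYOL's schedule: tau grows from tau_base at step 0 to 1 at the end of training.
    return 1.0 - (1.0 - tau_base) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0

@torch.no_grad()
def ema_update(online_net: torch.nn.Module, target_net: torch.nn.Module, tau: float) -> None:
    # target <- tau * target + (1 - tau) * online, parameter by parameter.
    for p_online, p_target in zip(online_net.parameters(), target_net.parameters()):
        p_target.mul_(tau).add_(p_online.detach(), alpha=1.0 - tau)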
MoCo v2: Momentum Contrast v2
Link: https://arxiv.org/abs/2003.04297
Implementation: https://github.com/facebookresearch/moco
-
- + larger batch size (like SimCLR)
- + use an MLP projection head with 2 layers (like SimCLR; see the sketch after this hunk)
- + stronger data augmentation (like SimCLR)
"""

import torch
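A minimal sketch of the 2-layer MLP projection head MoCo v2 borrows from SimCLR; the 2048 -> 2048 -> 128 dimensions are the paper's ResNet-50 setting and are assumptions here:

import torch.nn as nn

# Replaces the single fc projection of MoCo v1 on top of the backbone features.
projection_head = nn.Sequential(
    nn.Linear(2048, 2048),  # hidden layer on the 2048-d backbone features (assumed)
    nn.ReLU(inplace=True),
    nn.Linear(2048, 128),   # 128-d embedding fed to the contrastive loss (assumed)
)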
MoCo v3: Momentum Contrast v3
Link: https://arxiv.org/abs/2104.02057
Implementation: https://github.com/facebookresearch/moco-v3
-
- + use Vision Transformers
- + use keys from the mini-batch (see the loss sketch after this hunk)
- + large batch size ~4096
- - remove the queue
"""
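The removed notes describe MoCo v3's switch from a queue to in-batch keys. A minimal sketch of that contrastive term, assuming already-computed query/key embeddings; the names and the 0.2 temperature follow the paper's pseudocode:

import torch
import torch.nn.functional as F

def ctr(q: torch.Tensor, k: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    # q, k: (N, dim). The positive key for query i is key i; the other
    # N - 1 keys in the mini-batch serve as negatives, so no queue is needed.
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.T / temperature
    labels = torch.arange(q.size(0), device=q.device)
    # The paper scales the loss by 2 * temperature.
    return 2 * temperature * F.cross_entropy(logits, labels)

# Symmetrized over the two augmented views: loss = ctr(q1, k2) + ctr(q2, k1)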
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Link: https://arxiv.org/abs/2002.05709
Implementation: https://github.com/google-research/simclr
-
- + no specific architecture
- + no memory bank
- - large batch size
- - strong data augmentation
- - nonlinear transformation between the representation and the contrastive loss
- - normalized embeddings
- - adjusted temperature parameter (see the NT-Xent sketch after this hunk)
- - longer training
"""

import torch
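A minimal sketch of SimCLR's NT-Xent loss, covering the normalized embeddings and temperature scaling the removed notes list; the function name and the 0.5 default temperature are assumptions:

import torch
import torch.nn.functional as F

def nt_xent_loss(z_i: torch.Tensor, z_j: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # z_i, z_j: (N, dim) projections of the two augmented views of a batch.
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)  # (2N, dim), unit norm
    sim = z @ z.T / temperature                           # scaled cosine similarities
    # A view must not be treated as its own positive or negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is row i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)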