Commit 5448b01

Add files via upload
1 parent c572eb4 commit 5448b01

5 files changed: 0 additions & 28 deletions

builders/barlow_twins.py
0 additions & 7 deletions
@@ -8,13 +8,6 @@
 Barlow Twins
 Link: https://arxiv.org/abs/2103.03230
 Implementation: https://github.com/facebookresearch/barlowtwins
-
-+ does not require large batch size
-+ does not require asymmetry between the network twins such as a predictor network
-+ does not require gradient stopping
-+ does not require moving average on the weight updates
-- benefits from high-dimensional embeddings (projection_dim)
-+ makes the cross-correlation matrix computed from the twin embeddings as close to the identity matrix as possible
 """
 
 import torch
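The objective the deleted bullets describe is compact enough to show inline. Below is a minimal sketch of the Barlow Twins loss, not the code removed from builders/barlow_twins.py; the name barlow_twins_loss, the eps guard, and the default lambd=5e-3 (the paper's off-diagonal weight) are illustrative assumptions.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)
    n = z1.size(0)
    # Cross-correlation matrix of the twin embeddings (projection_dim x projection_dim).
    c = (z1.T @ z2) / n
    # Drive the diagonal toward 1 (invariance) and the off-diagonal toward 0
    # (redundancy reduction), i.e. push c toward the identity matrix.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```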

builders/byol.py
0 additions & 3 deletions
@@ -8,9 +8,6 @@
 BYOL: Bootstrap your own latent: A new approach to self-supervised Learning
 Link: https://arxiv.org/abs/2006.07733
 Implementation: https://github.com/deepmind/deepmind-research/tree/master/byol
-
-TODO
-- Cosine schedule for momentum update in EMA
 """
 
 import torch
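The removed TODO names exactly one missing piece: a cosine schedule for the EMA momentum. Here is a minimal sketch of how that schedule is usually written, assuming the helper names cosine_momentum and ema_update and the BYOL paper's base momentum tau_base = 0.996; this is not code from builders/byol.py.

```python
import math
import torch

def cosine_momentum(tau_base: float, step: int, max_steps: int) -> float:
    # BYOL's schedule: tau = 1 - (1 - tau_base) * (cos(pi * step / max_steps) + 1) / 2,
    # so tau rises from tau_base toward 1 over training.
    return 1.0 - (1.0 - tau_base) * (math.cos(math.pi * step / max_steps) + 1.0) / 2.0

@torch.no_grad()
def ema_update(online: torch.nn.Module, target: torch.nn.Module, tau: float) -> None:
    # target <- tau * target + (1 - tau) * online
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_(p_online, alpha=1.0 - tau)
```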

builders/mocov2.py
0 additions & 4 deletions
@@ -8,10 +8,6 @@
 MoCo v2: Momentum Contrast v2
 Link: https://arxiv.org/abs/2003.04297
 Implementation: https://github.com/facebookresearch/moco
-
-+ larger batch size (like SimCLR)
-+ use MLP projection head with 2 layers (like SimCLR)
-+ stronger data augmentation (like SimCLR)
 """
 
 import torch
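Of the removed bullets, the 2-layer MLP projection head is the one that maps directly to code. A minimal sketch, assuming the hidden/output sizes (2048/128) reported in the MoCo v2 paper; the name projection_mlp is illustrative, not the repo's API.

```python
import torch.nn as nn

def projection_mlp(in_dim: int, hidden_dim: int = 2048, out_dim: int = 128) -> nn.Sequential:
    # MoCo v2 replaces MoCo v1's single fc projection layer
    # with a 2-layer MLP, as in SimCLR.
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )
```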

builders/mocov3.py
0 additions & 5 deletions
@@ -8,11 +8,6 @@
 MoCo v3: Momentum Contrast v3
 Link: https://arxiv.org/abs/2104.02057
 Implementation: https://github.com/facebookresearch/moco-v3
-
-+ use Vision Transformers
-+ use keys from the same mini-batch
-+ large batch size (~4096)
-- remove the queue
 """
 
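The removed bullets describe MoCo v3 taking its negatives from the current mini-batch instead of a memory queue. A minimal sketch of that in-batch InfoNCE loss, assuming tau = 0.2 (the paper's ViT default) and the name contrastive_loss; not the code from builders/mocov3.py.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, k: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    # Keys come from the same mini-batch: the positive for q[i] is k[i],
    # and every other key in the batch is a negative, so no queue is needed.
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.T / tau
    labels = torch.arange(q.size(0), device=q.device)
    # The MoCo v3 reference additionally scales the loss by 2 * tau.
    return 2 * tau * F.cross_entropy(logits, labels)
```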

builders/simclr.py
0 additions & 9 deletions
@@ -8,15 +8,6 @@
 SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
 Link: https://arxiv.org/abs/2002.05709
 Implementation: https://github.com/google-research/simclr
-
-+ no specific architecture
-+ no memory bank
-- large batch size
-- strong data augmentation
-- nonlinear transformation between the representation and the contrastive loss
-- normalized embeddings
-- adjusted temperature parameter
-- longer training
 """
 
 import torch
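Several of the removed bullets (normalized embeddings, an adjusted temperature parameter, the contrastive loss itself) meet in SimCLR's NT-Xent objective. A minimal sketch, assuming temperature = 0.5 and the name nt_xent; illustrative only, not the removed docstring's code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # Normalized embeddings; cosine similarities scaled by a temperature.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    n = z1.size(0)
    sim = z @ z.T / temperature
    # A sample must never match itself.
    sim.fill_diagonal_(float("-inf"))
    # The positive for each row is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```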
