Fixing Unsupervised Hyperbolic Contrastive Loss [D]
Hello all,
I am trying to implement an unsupervised hyperbolic contrastive loss on ImageNet-1k. My results show that a simple Euclidean (cosine) unsupervised contrastive loss performs much better than the hyperbolic version, and I'd like help understanding why. I am using expmap() and projx() to ensure the embeddings lie on the Lorentz manifold. Below is my code:
import torch
import torch.nn.functional as F

def hb_contrastive_loss(z, z1, model, temp=0.07):
    # Pairwise geodesic distances between the two augmented views:
    # broadcasting gives an (N, N) distance matrix
    dists = model.manifold.dist(z.unsqueeze(1), z1.unsqueeze(0))
    # Positive pairs sit on the diagonal
    labels = torch.arange(z.size(0), device=z.device)
    # Smaller distance -> larger logit
    logits = -dists / temp
    loss = F.cross_entropy(logits, labels)
    return loss
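For anyone wanting to sanity-check the geometry behind model.manifold.dist, here is a minimal NumPy sketch of the Lorentz-model operations (exponential map at the origin and geodesic distance), assuming curvature -1. The function names lorentz_inner, expmap0, and lorentz_dist are my own for illustration, not from any particular library:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + <x_space, y_space>
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def expmap0(v_space):
    # Exponential map at the origin (1, 0, ..., 0) of the hyperboloid,
    # for a tangent vector whose time component is implicitly 0
    norm = np.linalg.norm(v_space, axis=-1, keepdims=True)
    norm = np.clip(norm, 1e-9, None)  # avoid division by zero at the origin
    time = np.cosh(norm)
    space = np.sinh(norm) * v_space / norm
    return np.concatenate([time, space], axis=-1)

def lorentz_dist(x, y):
    # Geodesic distance d(x, y) = arccosh(-<x, y>_L);
    # clip for numerical safety since arccosh needs input >= 1
    inner = np.clip(-lorentz_inner(x, y), 1.0, None)
    return np.arccosh(inner)
```

A useful check is that points produced by expmap0 satisfy <x, x>_L = -1 (they are on the hyperboloid) and that the distance from the origin to expmap0(v) equals ||v||.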
Current results for 1-NN accuracy:
Hyperbolic = 57%
Cosine = 64%
More information (if relevant):
Batch size = 2048
LR = 1e-4
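For reference, here is a minimal NumPy sketch of the cosine InfoNCE baseline I am comparing against (my own simplified version, not the training code itself); it is the same diagonal-labels cross-entropy, just with cosine similarity as the logit instead of negative geodesic distance:

```python
import numpy as np

def cosine_contrastive_loss(z, z1, temp=0.07):
    # L2-normalize so the logits are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    logits = z @ z1.T / temp
    # Cross-entropy with labels = arange(N): positive pairs on the diagonal.
    # (A production implementation would subtract the row max before exp
    # for numerical stability.)
    log_Z = np.log(np.exp(logits).sum(axis=1))
    return np.mean(log_Z - np.diag(logits))
```

With identical, well-separated views the loss should be near zero, which makes a quick unit test possible.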