
Fixing Unsupervised Hyperbolic Contrastive Loss [D]

Hello all,

I am trying to implement Unsupervised Hyperbolic Contrastive Loss on the ImageNet-1k dataset. My results show that the simple Euclidean unsupervised contrastive loss is much better than the hyperbolic version. Please help me understand the problem. I am using expmap() and projx() to ensure the embeddings lie on the Lorentzian manifold (a rough sketch of that mapping follows the loss function below). Here is my code:

import torch
import torch.nn.functional as F

def hb_contrastive_loss(z, z1, model, temp=0.07):
    # pairwise Lorentzian distances between anchors z and augmented views z1: (B, B)
    z_to_neighbor = model.manifold.dist(z.unsqueeze(1), z1.unsqueeze(0))
    # the positive for sample i is z1[i], so the targets are the diagonal
    labels = torch.arange(z.size(0), device=z.device)
    # smaller distance means more similar, so negate before the softmax
    logits = -z_to_neighbor / temp
    loss = F.cross_entropy(logits, labels)
    return loss
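
For reference, this is roughly how the embeddings end up on the manifold before the loss is computed. It is a minimal sketch assuming geoopt's Lorentz class (so expmap0(), projx() and dist() are geoopt methods) and that the exponential map is taken at the origin; the names and dimensions are placeholders, not my exact training code:

import torch
import geoopt

manifold = geoopt.Lorentz()  # hyperboloid model with geoopt's default curvature

def to_lorentz(x):
    # x: (B, d) Euclidean encoder output, treated as a tangent vector at the
    # origin by prepending a zero time-like coordinate -> (B, d + 1)
    x = torch.cat([torch.zeros_like(x[..., :1]), x], dim=-1)
    z = manifold.expmap0(x)   # exponential map at the origin onto the hyperboloid
    return manifold.projx(z)  # re-project to correct numerical drift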

Current results for 1-NN accuracy (evaluation sketch below):

Hyperbolic = 57%
Cosine = 64%
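
For completeness, this is the kind of 1-NN evaluation I mean, with the metric matched to each embedding space (geodesic distance for the hyperbolic features, cosine for the Euclidean ones). It is a sketch with placeholder names rather than my exact evaluation code, and at ImageNet scale the test set has to be processed in chunks:

import torch
import torch.nn.functional as F

def knn1_accuracy(train_z, train_y, test_z, test_y, manifold=None):
    # use the manifold's geodesic distance if one is given, otherwise cosine distance
    if manifold is not None:
        d = manifold.dist(test_z.unsqueeze(1), train_z.unsqueeze(0))  # (N_test, N_train)
    else:
        d = 1 - F.normalize(test_z, dim=-1) @ F.normalize(train_z, dim=-1).t()
    pred = train_y[d.argmin(dim=1)]  # label of the single nearest neighbour
    return (pred == test_y).float().mean().item()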

More information (if relevant):
Batch size = 2048
LR = 1e-4

submitted by /u/arjun_r_kaushik

