Triplet loss

 

https://ceciliavision.wordpress.com/2016/03/21/caffe-hdf5-layer/

Slides

http://tce.technion.ac.il/wp-content/uploads/sites/8/2016/01/Elad-Hofer.pdf

 

In Caffe

https://github.com/wanji/caffe-sl

https://github.com/luhaofang/tripletloss/tree/master/models

https://github.com/luhaofang/tripletloss

https://github.com/hizhangp/triplet/blob/master/triplet/data_layer.py

http://crockpotveggies.com/2016/11/05/triplet-embedding-deeplearning4j-facenet.html

http://www.cnblogs.com/wangxiaocvpr/p/5452367.html

http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhuang_Fast_Training_of_CVPR_2016_paper.pdf

https://ceciliavision.wordpress.com/2016/03/21/caffe-hdf5-layer/

In Torch

https://github.com/eladhoffer/TripletNet

https://github.com/jhjin/triplet-criterion


https://github.com/Atcold/torch-TripletEmbedding

 

 

Issue

https://groups.google.com/forum/#!topic/torch7/VtG46T6jxlM

https://github.com/jhjin/triplet-criterion/pull/4

Good Links

http://slideplayer.com/slide/8088852/

http://vision.ia.ac.cn/zh/senimar/reports/Siamese-Network-Architecture-and-Applications-in-Computer-Vision.pdf

https://www.google.co.in/search?biw=1686&bih=878&tbm=isch&q=triplet+loss&spell=1&sa=X&ved=0ahUKEwj2u9id8NfRAhUCSI8KHQO9BDEQvwUIGSgA&dpr=1.1#imgrc=dvFoUj_5F5A3yM%3A

http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html

https://github.com/torch/nngraph

———————————————————

Triplet loss Code

https://github.com/Atcold/torch-TripletEmbedding/blob/master/xmp/fresh-embedding.lua

https://github.com/jhjin/triplet-criterion/blob/master/test.lua

https://github.com/eladhoffer/TripletNet

https://github.com/wanji/caffe-sl/tree/master/examples

https://groups.google.com/forum/#!topic/torch7/VtG46T6jxlM

—————————————————–

Siamese network

https://groups.google.com/forum/#!topic/torch7/Hp1r6rX3tHw

https://github.com/alykhantejani/siamese_network

https://github.com/sumehta/siamese_network_vqa

—————————————————————

Deep ranking Network paper

https://arxiv.org/pdf/1404.4661v1.pdf

http://users.eecs.northwestern.edu/~jwa368/pdfs/deep_ranking.pdf

http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/G_Learning_Local_Image_CVPR_2016_paper.pdf

 


REF : http://stackoverflow.com/questions/38260113/implementing-contrastive-loss-and-triplet-loss-in-tensorflow

You need to implement the contrastive loss or the triplet loss yourself, but once you know the pairs or triplets this is quite easy.


Contrastive Loss

Suppose you have as input pairs of data and their label (positive or negative, i.e. same class or different class). For instance, you have 28x28x1 images as input:

left = tf.placeholder(tf.float32, [None, 28, 28, 1])
right = tf.placeholder(tf.float32, [None, 28, 28, 1])
label = tf.placeholder(tf.float32, [None])  # 0 if same, 1 if different; float and 1-D so it broadcasts with d
margin = 0.2

left_output = model(left)    # shape [None, 128]
right_output = model(right)  # shape [None, 128]

d = tf.reduce_sum(tf.square(left_output - right_output), 1)  # shape [None]
d_sqrt = tf.sqrt(d)

loss = label * tf.square(tf.maximum(0., margin - d_sqrt)) + (1 - label) * d
loss = 0.5 * tf.reduce_mean(loss)
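As a quick sanity check, the same formula in plain NumPy (no graph needed); the embeddings below are random stand-ins for model outputs:

```python
import numpy as np

def contrastive_loss(left, right, label, margin=0.2):
    # label: 0 if same class, 1 if different, matching the snippet above
    d = np.sum((left - right) ** 2, axis=1)   # squared L2 distance, shape [N]
    d_sqrt = np.sqrt(d)
    per_pair = label * np.maximum(0.0, margin - d_sqrt) ** 2 + (1 - label) * d
    return 0.5 * np.mean(per_pair)

e = np.random.randn(4, 128)
# identical embeddings labelled "same" (0) give zero loss
print(contrastive_loss(e, e, np.zeros(4)))  # 0.0
# identical embeddings labelled "different" (1) pay the full margin penalty
print(contrastive_loss(e, e, np.ones(4)))   # about 0.5 * margin**2 = 0.02
```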

Triplet Loss

Same as with contrastive loss, but with triplets (anchor, positive, negative). You don’t need labels here.

anchor_output = ...  # shape [None, 128]
positive_output = ...  # shape [None, 128]
negative_output = ...  # shape [None, 128]

d_pos = tf.reduce_sum(tf.square(anchor_output - positive_output), 1)
d_neg = tf.reduce_sum(tf.square(anchor_output - negative_output), 1)

loss = tf.maximum(0., margin + d_pos - d_neg)
loss = tf.reduce_mean(loss)
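A tiny NumPy check of this hinge with hand-picked 2-D embeddings (toy numbers, not real network outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # squared anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # squared anchor-negative distance
    return np.mean(np.maximum(0.0, margin + d_pos - d_neg))

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # d_pos = 0.01
n = np.array([[1.0, 0.0]])   # d_neg = 1.0, margin satisfied
print(triplet_loss(a, p, n))   # 0.0
n2 = np.array([[0.2, 0.0]])  # d_neg = 0.04, negative too close to the anchor
print(triplet_loss(a, p, n2))  # about 0.2 + 0.01 - 0.04 = 0.17
```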

The real trouble when implementing triplet loss or contrastive loss in TensorFlow is how to sample the triplets or pairs. I will focus on generating triplets, because it is harder than generating pairs.

The easiest way is to generate them outside of the TensorFlow graph, i.e. in Python, and feed them to the network through the placeholders. Basically you select images three at a time, with the first two from the same class and the third from another class. You then perform a forward pass on these triplets and compute the triplet loss.
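The "select images three at a time" step can be sketched like this in plain Python; `labels` is a list of integer class ids, and the returned index triplets would be used to look up the actual images (all names here are made up for illustration):

```python
import random
from collections import defaultdict

def sample_triplets(labels, n):
    """Sample n (anchor, positive, negative) index triplets from class labels."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    # an anchor needs a class with at least two examples
    pos_classes = [c for c in by_class if len(by_class[c]) >= 2]
    triplets = []
    for _ in range(n):
        c_pos = random.choice(pos_classes)
        c_neg = random.choice([c for c in by_class if c != c_pos])
        anchor, pos = random.sample(by_class[c_pos], 2)  # two distinct same-class items
        neg = random.choice(by_class[c_neg])
        triplets.append((anchor, pos, neg))
    return triplets
```

The sampled triplets would then be fed batch by batch through the anchor, positive, and negative placeholders.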

The issue here is that generating good triplets is complicated. We want triplets that are still informative, i.e. triplets with a positive loss (otherwise the loss is 0 and the network doesn't learn).
But to know whether a triplet has a positive loss, you need to compute its loss, which already requires one forward pass through the network…
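One common workaround is to embed a whole batch with a single forward pass and then keep only the triplets that still violate the margin (the "hard" ones). A sketch of that filtering step in NumPy (brute force, fine for small batches; function names are my own):

```python
import numpy as np

def hard_triplets(emb, labels, margin=0.2):
    """Return index triplets (a, p, n) whose triplet loss would be positive,
    given embeddings emb already produced by one forward pass."""
    # pairwise squared L2 distances, shape [N, N]
    d = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=2)
    triplets = []
    for a in range(len(labels)):
        for p in range(len(labels)):
            if p == a or labels[p] != labels[a]:
                continue
            for n in range(len(labels)):
                if labels[n] == labels[a]:
                    continue
                if margin + d[a, p] - d[a, n] > 0:  # still violates the margin
                    triplets.append((a, p, n))
    return triplets
```

Only these surviving triplets are worth feeding back for the training pass.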

Clearly, implementing triplet loss in TensorFlow is hard. There are ways to make it more efficient than sampling in Python, but explaining them would require a whole blog post!

 

 

 

————————————————————————

https://tryolabs.com/blog/2017/01/25/building-a-chatbot-analysis–limitations-of-modern-platforms/

 

 

Simple L2/L1 Regularization in Torch 7

https://siavashk.github.io/2016/03/10/l21-regularization/

 

http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/
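What the two links above boil down to, as a small NumPy sketch (independent of Torch, names made up): L2 adds a term proportional to the weight itself, so everything shrinks smoothly, while L1 adds a constant-magnitude step toward zero, which drives small weights exactly to zero (sparsity):

```python
import numpy as np

def regularized_grad(w, grad, l1=0.0, l2=0.0):
    """Gradient of loss + l1 * ||w||_1 + (l2 / 2) * ||w||_2^2 w.r.t. w."""
    return grad + l1 * np.sign(w) + l2 * w

w = np.array([0.5, -0.3, 0.0])
g = np.zeros_like(w)                   # pretend the data gradient is zero
print(regularized_grad(w, g, l2=0.1))  # proportional: 0.05, -0.03, 0.0
print(regularized_grad(w, g, l1=0.1))  # constant step: 0.1, -0.1, 0.0
```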

Ref

Karpathy

—————————————————————-

92.45% on CIFAR-10 in Torch

http://torch.ch/blog/2015/07/30/cifar.html

Training: train.lua

That’s it, you can start training:

CUDA_VISIBLE_DEVICES=0 th train.lua


Understanding the backward pass through Batch Normalization Layer

https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html
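The linked post derives this step by step through the computational graph; below is a condensed NumPy sketch of the forward pass and the compact form of the gradient it arrives at (a sketch for intuition, not the post's exact staged code):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    xhat = (x - mu) / np.sqrt(var + eps)   # normalized input
    return gamma * xhat + beta, (xhat, gamma, var, eps)

def batchnorm_backward(dout, cache):
    xhat, gamma, var, eps = cache
    N = dout.shape[0]
    dgamma = np.sum(dout * xhat, axis=0)
    dbeta = np.sum(dout, axis=0)
    dxhat = dout * gamma
    # compact form of the gradient flowing through mean and variance
    dx = (1.0 / (N * np.sqrt(var + eps))) * (
        N * dxhat - dxhat.sum(axis=0) - xhat * np.sum(dxhat * xhat, axis=0))
    return dx, dgamma, dbeta
```

A quick property check: with gamma = 1 and beta = 0 the outputs are zero-mean and unit-variance per feature, and a constant upstream gradient yields dx of (approximately) zero, since BN is invariant to shifting the whole batch.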

tutorials/2_supervised/4_train.lua

https://github.com/torch/tutorials/blob/master/2_supervised/4_train.lua


Batch Normalization

https://groups.google.com/forum/#!topic/torch7/nPvkE9uV550
net.imageSize = 224
net.imageCrop = 224

net:evaluate()

im1 = torch.rand(3, 224, 224)

minibatch = torch.Tensor(2, 3, 224, 224)

minibatch[1] = im1
minibatch[2] = im1

out = net:forward(minibatch)
——————————————

It works perfectly with minibatch = torch.Tensor(2, 3, 224, 224) as input (or more than 2), but not with torch.Tensor(1, 3, 224, 224).

I understand that the particular case of "purely stochastic" input is not supported during training, but for testing, with only forward passes, it would have been interesting to support this particular case.

Sam,

Sergey Zagoruyko
2/5/16
It is supported. You have 2 problems with your script:

1. nn.View(n) should be nn.View(n):setNumInputDims(3) or nn.View(-1, n)
2. im1 = torch.rand(1, 3, 224, 224) (the input tensor should always be 4D for a net with BN)


You received this message because you are subscribed to the Google Groups “torch7” group.
To unsubscribe from this group and stop receiving emails from it, send an email to torch7+un…@googlegroups.com.
To post to this group, send email to tor…@googlegroups.com.
Visit this group at https://groups.google.com/group/torch7.
For more options, visit https://groups.google.com/d/optout.

Sam
2/5/16
Thanks a lot! The problem was actually nn.View()

The second point wasn't necessary, since minibatch = torch.Tensor(1, 3, 224, 224) was already a 4D tensor.

Br,

S.

Sam
2/8/16
Hi again Torchers,

I have another batch normalization related question: I was wondering if I have to use dropout in conjunction with batch normalization? In the original paper (by Google), they say that BN regularizes the model and replaces dropout. But I also found this CIFAR experiment on the Torch7 blog, which explains that combining BN and dropout works better (http://torch.ch/blog/2015/07/30/cifar.html)

What are your experience and your feedback on this?

Many thanks.

Br,

Sam.

smth chntla
2/8/16
My experience:

Dropout + BatchNorm > Dropout
Dropout + BatchNorm > BatchNorm
 