Common issues

Fix “firefox is already running” issue in Linux

  1. First, find the process ID of Firefox by running the following command from any directory:
    pidof firefox

  2. Kill the Firefox process:
    kill [firefox pid]
    

Then start Firefox again.

Or you can do the same thing in just one command. As don_crissti said:

kill $(pidof firefox)

Ref
http://unix.stackexchange.com/questions/78689/fix-firefox-is-already-running-issue-in-linux

It’s all about AI


AI Projects:  https://medium.com/udacity/ai-nanodegree-program-syllabus-term-1-in-depth-80c41297acaf#.7hzlt1spe

http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/?utm_content=buffer59670&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer

https://www.udemy.com/big-data-and-hadoop-for-beginners/?couponCode=BIGLEAP100

http://artificialbrain.xyz/artificial-intelligence-movie-of-february-2017/

https://www.springboard.com/blog/machine-learning-interview-questions/

https://hbr.org/2017/01/deep-learning-will-radically-change-the-ways-we-interact-with-technology?utm_campaign=hbr&utm_source=facebook&utm_medium=social

https://worldwritable.com/natural-language-processing-for-programmers-c21a4aff3cb9#.1i7rqtxw5

http://artificialbrain.xyz/would-like-to-be-a-part-of-artificial-brain-xyz-group-family/

https://www.engadget.com/2017/01/27/apple-joins-partnership-on-ai/

https://futureoflife.org/ai-principles/

http://artificialbrain.xyz/deep-learning-for-self-driving-cars-lecture-3/

http://www.infoworld.com/article/3162413/artificial-intelligence/tensorflow-10-unlocks-machine-learning-on-smartphones.html

http://fortune.com/2017/01/27/apple-artificial-intelligence-non-profit/

https://www.facebook.com/groups/1738168866424224/?multi_permalinks=1842617559312687%2C1842548185986291&notif_t=group_activity&notif_id=1485768683408813

http://www.theguardian.com/commentisfree/2017/jan/29/no-one-can-read-cards-artificial-intelligence-poker-libratus-sergey-brin

How to check .h5 attributes

h5dump example.h5
HDF5 "example.h5" {
GROUP "/" {
   DATASET "dset" {
      DATATYPE  H5T_STD_I32LE
      DATASPACE  SIMPLE { ( 6, 15 ) / ( 6, 15 ) }
      DATA {
      (0,0): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
      (1,0): 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
      (2,0): 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
      (3,0): 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
      (4,0): 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75,
      (5,0): 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90
      }
   }
}
}
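
h5dump prints everything in the file; to check attributes specifically, a minimal h5py sketch (assuming the example.h5 file above and an installed h5py):

import h5py

with h5py.File('example.h5', 'r') as f:
    # attributes attached to the root group
    print(dict(f['/'].attrs))
    # attributes attached to the dataset shown above
    print(dict(f['dset'].attrs))

Both print empty dicts if no attributes were ever written to the file.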
Ref
http://www2.epcc.ed.ac.uk/~amrey/ARCHER_Data_Management/
http://matlab.izmiran.ru/help/techdoc/matlab_prog/ch_imp41.html
https://cran.r-project.org/web/packages/h5/h5.pdf
--------------------------------------------------------------------------------------

fileinfo = hdf5info('example.h5');

hdf5info returns a structure that contains various information about the HDF5 file, including the name of the file and the version of the HDF5 library that MATLAB is using:

  • fileinfo = 
              Filename: 'example.h5'
            LibVersion: '1.4.2'
                Offset: 0
              FileSize: 8172
        GroupHierarchy: [1x1 struct]
    
  • toplevel = fileinfo.GroupHierarchy
    
    toplevel = 
    
          Filename: 'C:\matlab\toolbox\matlab\demos\example.h5'
              Name: '/'
            Groups: [1x2 struct]
          Datasets: []
         Datatypes: []
             Links: []
        Attributes: [1x2 struct]
    


  • data = hdf5read('example.h5','/g2/dset2.1');
    

The return value contains the values in the data set, in this case a 1-by-10 vector of single-precision values:

  • data =
    
      Columns 1 through 8 
    
        1.0000    1.1000    1.2000    1.3000    1.4000    1.5000    1.6000    1.7000
    
      Columns 9 through 10 
    
        1.8000    1.9000

=======================================================================

  • h5ls --full -r data.h5
    /                        Group
    /data                    Group
    /data/data               Dataset {11, 612}
    /data_descr              Group
    /data_descr/names        Dataset {11}
    /data_descr/ordering     Dataset {1}
    
  • $ h5dump -d /data/data data.h5
    HDF5 "data.h5" {
    DATASET "/data/data" {
       DATATYPE  H5T_IEEE_F64LE
       DATASPACE  SIMPLE { ( 11, 612 ) / ( 11, 612 ) }
       DATA {
       (0,0): -1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1,
       (0,18): -1, -1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1, -1,
       (0,36): -1, 1, -1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1,
       (0,55): -1, 1, -1, 1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1, -1,
       (0,74): 1, 1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1,
       (0,92): 1, 1, 1, -1, -1, 1, 1, -1, 2, 2, -2, 2, -2, -2, -2, 2, 2, -2, -2,
       (0,111): -2, -2, -2, 2, 2, 2, 2, -2, -2, 2, -2, -2, -2, -2, -2, -2, 2, -2,
    ...
    

Access in Matlab

info=hdf5info('data.h5');

x=hdf5read('data.h5','/data/data')

x =

  Columns 1 through 7

   -1.0000    0.2107    0.0044    0.0013    0.0001    0.0000    0.0000
   -1.0000    0.2152    0.0042    0.0014    0.0002    0.0000    0.0000
    1.0000    0.1972    0.0023    0.0015    0.0000    0.0000   -0.0000
...

Access in Python

h5py will come to your aid:

$ python
>>> import h5py
>>> f = h5py.File('data.h5','r')
>>> f.values()
[<HDF5 group "/data" (1 members)>, <HDF5 group "/data_descr" (2 members)>]
>>> f["/data/data"]
<HDF5 dataset "data": shape (11, 612), type "<f8">
>>> f["/data/data"][:,:]
array([[ -1.00000000e+00,  -1.00000000e+00,   1.00000000e+00, ...,
          3.00000000e+00,  -3.00000000e+00,   3.00000000e+00],
       [  2.10663000e-01,   2.15192000e-01,   1.97153000e-01, ...,
          3.15029000e-01,   2.96945000e-01,   4.08534000e-01],
       [  4.43414000e-03,   4.18483000e-03,   2.30872000e-03, ...,
          3.37745000e-02,   5.68704000e-02,   6.02136000e-02],
       ...,
       [  2.23000000e+00,   2.20000000e+00,   2.35000000e+00, ...,
          9.40000000e-01,   6.00000000e-01,   1.00000000e+00],
       [  1.27000000e+00,   1.28000000e+00,   1.28000000e+00, ...,
          1.24000000e+00,   1.31000000e+00,   1.30000000e+00],
       [  1.28000000e+00,   1.28000000e+00,   1.28000000e+00, ...,
          1.33000000e+00,   1.33000000e+00,   1.32000000e+00]])

Triplet loss

 

https://ceciliavision.wordpress.com/2016/03/21/caffe-hdf5-layer/

Slide

http://tce.technion.ac.il/wp-content/uploads/sites/8/2016/01/Elad-Hofer.pdf

 

In Caffe

https://github.com/wanji/caffe-sl

https://github.com/luhaofang/tripletloss/tree/master/models

https://github.com/luhaofang/tripletloss

https://github.com/hizhangp/triplet/blob/master/triplet/data_layer.py

http://crockpotveggies.com/2016/11/05/triplet-embedding-deeplearning4j-facenet.html

http://www.cnblogs.com/wangxiaocvpr/p/5452367.html

http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhuang_Fast_Training_of_CVPR_2016_paper.pdf


In Torch

https://github.com/eladhoffer/TripletNet

https://github.com/jhjin/triplet-criterion

https://github.com/Atcold/torch-TripletEmbedding

Issue

https://groups.google.com/forum/#!topic/torch7/VtG46T6jxlM

https://github.com/jhjin/triplet-criterion/pull/4

Good Links

http://slideplayer.com/slide/8088852/

http://vision.ia.ac.cn/zh/senimar/reports/Siamese-Network-Architecture-and-Applications-in-Computer-Vision.pdf

https://www.google.co.in/search?biw=1686&bih=878&tbm=isch&q=triplet+loss&spell=1&sa=X&ved=0ahUKEwj2u9id8NfRAhUCSI8KHQO9BDEQvwUIGSgA&dpr=1.1#imgrc=dvFoUj_5F5A3yM%3A

http://felixlaumon.github.io/2015/01/08/kaggle-right-whale.html

https://github.com/torch/nngraph

———————————————————

Triplet loss Code

https://github.com/Atcold/torch-TripletEmbedding/blob/master/xmp/fresh-embedding.lua

https://github.com/jhjin/triplet-criterion/blob/master/test.lua

https://github.com/eladhoffer/TripletNet

https://github.com/wanji/caffe-sl/tree/master/examples

https://groups.google.com/forum/#!topic/torch7/VtG46T6jxlM

—————————————————–

Siamese network

https://groups.google.com/forum/#!topic/torch7/Hp1r6rX3tHw

https://github.com/alykhantejani/siamese_network

https://github.com/sumehta/siamese_network_vqa

—————————————————————

Deep ranking network papers

https://arxiv.org/pdf/1404.4661v1.pdf

http://users.eecs.northwestern.edu/~jwa368/pdfs/deep_ranking.pdf

http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/G_Learning_Local_Image_CVPR_2016_paper.pdf

 


Ref: http://stackoverflow.com/questions/38260113/implementing-contrastive-loss-and-triplet-loss-in-tensorflow

You need to implement the contrastive loss or the triplet loss yourself, but once you know the pairs or triplets this is quite easy.


Contrastive Loss

Suppose you have as input pairs of data and their label (positive or negative, i.e. same class or different class). For instance, you have images of size 28x28x1 as input:

left = tf.placeholder(tf.float32, [None, 28, 28, 1])
right = tf.placeholder(tf.float32, [None, 28, 28, 1])
label = tf.placeholder(tf.float32, [None, 1])  # 0 if same, 1 if different (float, so it can be multiplied with the loss terms)
margin = 0.2

left_output = model(left)  # shape [None, 128]
right_output = model(right)  # shape [None, 128]

d = tf.reduce_sum(tf.square(left_output - right_output), 1)
d_sqrt = tf.sqrt(d)

loss = label * tf.square(tf.maximum(0., margin - d_sqrt)) + (1 - label) * d

loss = 0.5 * tf.reduce_mean(loss)
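
A quick numpy check of the same formula (my own sketch, not from the answer), with one same-class pair and one different-class pair:

import numpy as np

left = np.array([[0.0, 0.0], [0.0, 0.0]])   # two embeddings of size 2
right = np.array([[0.1, 0.0], [1.0, 0.0]])
label = np.array([0.0, 1.0])                 # 0 if same, 1 if different
margin = 0.2

d = np.sum((left - right) ** 2, axis=1)      # squared distances: [0.01, 1.0]
d_sqrt = np.sqrt(d)
loss = label * np.maximum(0.0, margin - d_sqrt) ** 2 + (1 - label) * d
print(0.5 * loss.mean())  # 0.0025: the close same-class pair is penalized by its distance, the far different-class pair costs nothing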

Triplet Loss

Same as with contrastive loss, but with triplets (anchor, positive, negative). You don’t need labels here.

anchor_output = ...  # shape [None, 128]
positive_output = ...  # shape [None, 128]
negative_output = ...  # shape [None, 128]

d_pos = tf.reduce_sum(tf.square(anchor_output - positive_output), 1)
d_neg = tf.reduce_sum(tf.square(anchor_output - negative_output), 1)

loss = tf.maximum(0., margin + d_pos - d_neg)
loss = tf.reduce_mean(loss)

The real trouble when implementing triplet loss or contrastive loss in TensorFlow is how to sample the triplets or pairs. I will focus on generating triplets because it is harder than generating pairs.

The easiest way is to generate them outside of the TensorFlow graph, i.e. in Python, and feed them to the network through the placeholders. Basically you select three images at a time: the first two from the same class and the third from another class. You then perform a forward pass on these triplets and compute the triplet loss (see the sketch below).
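
A minimal numpy sketch of that offline sampling (the function name sample_triplets is made up here, and it assumes every class has at least two images):

import numpy as np

def sample_triplets(labels, n_triplets):
    """Pick (anchor, positive, negative) index triples from an array of class labels."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    triplets = []
    for _ in range(n_triplets):
        c = np.random.choice(classes)
        pos = np.flatnonzero(labels == c)   # indices of the chosen class
        neg = np.flatnonzero(labels != c)   # indices of all other classes
        a, p = np.random.choice(pos, size=2, replace=False)
        n = np.random.choice(neg)
        triplets.append((a, p, n))
    return np.array(triplets)

Feeding is then just images[triplets[:, 0]], images[triplets[:, 1]], images[triplets[:, 2]] into the three placeholders.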

The issue here is that generating triplets is complicated. We want them to be valid triplets with a positive loss (otherwise the loss is 0 and the network doesn't learn).
To know whether a triplet is good or not, you need to compute its loss, so you already need one forward pass through the network…

Clearly, implementing triplet loss in TensorFlow is hard. There are ways to make it more efficient than sampling in Python, but explaining them would require a whole blog post!

————————————————————————

https://tryolabs.com/blog/2017/01/25/building-a-chatbot-analysis--limitations-of-modern-platforms/

Simple L2/L1 Regularization in Torch 7

https://siavashk.github.io/2016/03/10/l21-regularization/

 

http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/
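
The gist of both links as a plain numpy sketch (my own illustration, not the Torch code from the posts): add lambda * ||w||_2^2 or lambda * ||w||_1 to the loss, and the matching (sub)gradient to the weight gradient.

import numpy as np

def l2_penalty(w, lam):
    # penalty and its gradient: lam * sum(w^2), 2 * lam * w
    return lam * np.sum(w ** 2), 2.0 * lam * w

def l1_penalty(w, lam):
    # penalty and a subgradient: lam * sum(|w|), lam * sign(w)
    return lam * np.sum(np.abs(w)), lam * np.sign(w)

w = np.array([0.5, -1.0, 2.0])
print(l2_penalty(w, 0.01))  # (0.0525, [ 0.01, -0.02,  0.04])
print(l1_penalty(w, 0.01))  # (0.035,  [ 0.01, -0.01,  0.01])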

Ref

Karpathy

—————————————————————-

92.45% on CIFAR-10 in Torch

http://torch.ch/blog/2015/07/30/cifar.html

Training: train.lua

That’s it, you can start training:

CUDA_VISIBLE_DEVICES=0 th train.lua


Understanding the backward pass through Batch Normalization Layer

https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html
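
A numpy sketch of what the post derives (the standard batch-norm forward pass plus the compact form of its backward pass; my own code, not the post's):

import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (N, D) minibatch; normalize each feature over the batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    return out, (x_hat, var, gamma, eps)

def batchnorm_backward(dout, cache):
    x_hat, var, gamma, eps = cache
    N = dout.shape[0]
    dgamma = np.sum(dout * x_hat, axis=0)
    dbeta = np.sum(dout, axis=0)
    dxhat = dout * gamma
    # compact gradient through the normalization step
    dx = (1.0 / (N * np.sqrt(var + eps))) * (
        N * dxhat - dxhat.sum(axis=0) - x_hat * np.sum(dxhat * x_hat, axis=0))
    return dx, dgamma, dbeta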

tutorials/2_supervised/4_train.lua

https://github.com/torch/tutorials/blob/master/2_supervised/4_train.lua


Batch Normalization

https://groups.google.com/forum/#!topic/torch7/nPvkE9uV550
net.imageSize = 224
net.imageCrop = 224

net:evaluate()

im1 = torch.rand(3, 224, 224)

minibatch = torch.Tensor(2, 3, 224, 224)

minibatch[1] = im1
minibatch[2] = im1

out = net:forward(minibatch)
——————————————

It works perfectly with minibatch = torch.Tensor(2, 3, 224, 224) as input (or more than 2), but not with torch.Tensor(1, 3, 224, 224).

I understand that the “purely stochastic” case (batch size 1) is not supported during training, but for testing, with only forward passes, it would have been interesting to support this particular case.

Sam,

Sergey Zagoruyko
2/5/16
It is supported. You have 2 problems with your script:

1. nn.View(n) should be nn.View(n):setNumInputDims(3) or nn.View(-1,n)
2. im1 = torch.rand(1, 3, 224, 224) (input tensor should always be 4D for a net with BN)

Sam
2/5/16
Thanks a lot! The problem was actually nn.View()

The second point wasn't necessary, since minibatch = torch.Tensor(1, 3, 224, 224) was already a 4D tensor.

Br,

S.

Sam
2/8/16
Hi again Torchers,

I have another batch normalization related question: I was wondering if I have to use dropout in conjunction with batch normalization? In the original paper (by Google), they say that BN regularizes the model and replaces dropout. But I also found this CIFAR experiment on the Torch7 blog, which explains that combining BN and dropout works better (http://torch.ch/blog/2015/07/30/cifar.html)

What are your experience and your feedback on this?

Many thanks.

Br,

Sam.

smth chntla
2/8/16
My experience:

Dropout + BatchNorm > Dropout
Dropout + BatchNorm > BatchNorm

Autoencoder and VAE

Training Autoencoders on ImageNet Using Torch 7

REF

https://siavashk.github.io/2016/02/22/autoencoder-imagenet/

 

function autoencoder:initialize()
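  -- Note: the 24*14*14 reshape below implies 3x64x64 inputs
  -- (the 3x3 convs are unpadded: 64 -> 62 -> 60, pool -> 30, conv -> 28, pool -> 14)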
  local pool_layer1 = nn.SpatialMaxPooling(2, 2, 2, 2)
  local pool_layer2 = nn.SpatialMaxPooling(2, 2, 2, 2)

  self.net = nn.Sequential()
  self.net:add(nn.SpatialConvolution(3, 12, 3, 3, 1, 1, 0, 0))
  self.net:add(nn.ReLU())
  self.net:add(nn.SpatialConvolution(12, 12, 3, 3, 1, 1, 0, 0))
  self.net:add(nn.ReLU())
  self.net:add(pool_layer1)
  self.net:add(nn.SpatialConvolution(12, 24, 3, 3, 1, 1, 0, 0))
  self.net:add(nn.ReLU())
  self.net:add(pool_layer2)
  self.net:add(nn.Reshape(24 * 14 * 14))
  self.net:add(nn.Linear(24 * 14 * 14, 1568))
  self.net:add(nn.Linear(1568, 24 * 14 * 14))
  self.net:add(nn.Reshape(24, 14, 14))
  self.net:add(nn.SpatialConvolution(24, 12, 3, 3, 1, 1, 0, 0))
  self.net:add(nn.ReLU())
  self.net:add(nn.SpatialMaxUnpooling(pool_layer2))
  self.net:add(nn.SpatialConvolution(12, 12, 3, 3, 1, 1, 0, 0))
  self.net:add(nn.ReLU())
  self.net:add(nn.SpatialMaxUnpooling(pool_layer1))
  self.net:add(nn.SpatialConvolution(12, 3, 3, 3, 1, 1, 0, 0))

  self.net = self.net:cuda()
end

Markdown Editors for Linux

A list of good Markdown editors for Linux:

  1. Remarkable

  2. GitBook

  3. Typora

 

Steps to install Remarkable

  1. Download Remarkable from: http://remarkableapp.github.io/linux/download.html
  2. To install Gdebi, search for and install the package in Ubuntu Software Center.
  3. Once installed, right-click the downloaded .deb file and go to Properties -> Open With. There, highlight Gdebi Package Installer and click Set as default.
  4. Click Install Package in Gdebi.

————————————————–

Ref

https://itsfoss.com/best-markdown-editors-linux/

https://jbt.github.io/markdown-editor/

https://www.maketecheasier.com/markdown-editors-linux/

http://codeboje.de/markdown-editors/

http://railsware.com/blog/2014/04/16/creating-books-with-gitbook/

LMNN

Papers

http://www.jmlr.org/papers/volume10/weinberger09a/weinberger09a.pdf

http://john.blitzer.com/papers/nips05.pdf

 

Links

http://www.cs.cornell.edu/~kilian/code/lmnn/lmnn.html

https://github.com/cmusatyalab/openface/blob/master/training/attic/train.lua

—————————————————————–

Code

http://www.shogun-toolbox.org/static/notebook/current/LMNN.html

http://www.shogun-toolbox.org/examples/latest/examples/multiclass_classifier/large_margin_nearest_neighbours.html

https://github.com/all-umass/metric-learn

https://pypi.python.org/pypi/metric-learn
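
For a quick start with the metric-learn package linked above, a sketch (the constructor arguments have changed across versions, e.g. k vs. n_neighbors, so check the docs for your install):

import numpy as np
from sklearn.datasets import load_iris
from metric_learn import LMNN

X, y = load_iris(return_X_y=True)
lmnn = LMNN(k=3, learn_rate=1e-6)  # k neighbors, per the older documented API
lmnn.fit(X, y)
X_lmnn = lmnn.transform(X)         # data mapped into the learned metric space
print(X_lmnn.shape)                # (150, 4)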

 

————————————————————————–

Matlab Code LMNN

https://github.com/gabeos/lmnn

https://bitbucket.org/mlcircus/lmnn

http://www.jmlr.org/papers/v10/weinberger09a.html

————————————————————————

 

https://all-umass.github.io/metric-learn/metric_learn.lmnn.html

http://stackoverflow.com/questions/22672172/jama-and-matlab-lmnn-and-eigenvalues

http://stats.stackexchange.com/questions/tagged/k-nearest-neighbour

 

 

Book

https://hal.archives-ouvertes.fr/tel-01314392/document

 

https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor

Material

http://www.cs.utah.edu/~piyush/teaching/cs5350.html

DEEP LEARNING AND MACHINE LEARNING

Slide: "1.1 From AI to Deep Learning" (artificial intelligence > machine learning > deep learning; logistic regression, SVM, neural networks)
http://www.humphreysheil.com/blog/deep-learning-and-machine-learning

Deep learning vs. machine learning

 

 

Ref:

https://www.quora.com/What-is-the-difference-between-Deep-Learning-Machine-learning-and-Artificial-Intelligence-Is-Deep-learning-related-to-data-science

https://www.quora.com/Whats-the-difference-between-the-terms-machine-learning-deep-learning-and-AI

https://blogs.nvidia.com/blog/2016/08/30/eye-tracking-deep-learning/

https://oakmachine.com/machine-learning/


Add and subtract


https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-sequence-learning/

RNN figure from the post above: https://devblogs.nvidia.com/parallelforall/wp-content/uploads/2016/03/rnn.png


GAN


 

Generative Adversarial Networks (GAN)
"The coolest idea in ML in the last twenty years" (Yann LeCun)
2017.01.13 Namju Kim (burib...