
Tests #5

Open
ardila opened this issue Oct 9, 2013 · 4 comments

@ardila
Member

ardila commented Oct 9, 2013

We need tests to make sure our implementation gives the same results as NYU's version.

@ardila
Member Author

ardila commented Oct 10, 2013

I didn't have time to finish this tonight since I don't know how to call scripts and then check their standard output, but here is something you can test your convnet.py against (it gives the same results every time); a test sketch follows the expected output below:

python convnet.py --data-path=/home/ardila/archconvnets/dropnn-release/cifar-10-py-colmajor --save-path=/home/ardila/test_run_diego/ --test-range=6 --train-range=1-5 --layer-def=./cifar-layers/layers-80sec.cfg --layer-params=./cifar-layers/layer-params-80sec.cfg --data-provider=cifar-cropped-rand --test-freq=13 --epochs=1

Should give the following output:

Initialized data layer 'data', producing 1728 outputs
Initialized data layer 'labels', producing 1 outputs
Initialized convolutional layer 'conv1', producing 24x24 32-channel output
Initialized max-pooling layer 'pool1', producing 12x12 32-channel output
Initialized convolutional layer 'conv2', producing 12x12 32-channel output
Initialized avg-pooling layer 'pool2', producing 6x6 32-channel output
Initialized convolutional layer 'conv3', producing 6x6 64-channel output
Initialized avg-pooling layer 'pool3', producing 3x3 64-channel output
Initialized fully-connected layer 'fc128', producing 64 outputs
Initialized fully-connected layer 'fc10', producing 10 outputs
Initialized softmax layer 'probs', producing 10 outputs
Initialized logistic regression cost 'logprob'
Initialized neuron layer 'pool1_neuron', producing 4608 outputs
Initialized neuron layer 'conv2_neuron', producing 4608 outputs
Initialized neuron layer 'conv3_neuron', producing 2304 outputs
Initialized neuron layer 'fc128_neuron', producing 64 outputs
=========================
Importing _ConvNet C++ module
============================================
learning rate scale     :  1
Reset Momentum          :  False
Image Rotation & Scaling:  False
============================================
=========================
Training ConvNet
Adaptive Drop Training                              : False [DEFAULT]
Check gradients and quit?                           : 0     [DEFAULT]
Compress checkpoints?                               : 0     [DEFAULT]
Conserve GPU memory (slower)?                       : 0     [DEFAULT]
Convert given conv layers to unshared local         :       
Cropped DP: crop border size                        : 4     [DEFAULT]
Cropped DP: logreg layer name (for --multiview-test):       [DEFAULT]
Cropped DP: test on multiple patches?               : 0     [DEFAULT]
Data batch range: testing                           : 6-6   
Data batch range: training                          : 1-5   
Data path                                           : /home/ardila/archconvnets/dropnn-release/cifar-10-py-colmajor 
Data provider                                       : cifar-cropped-rand 
Enable image rotation and scaling transformation    : False [DEFAULT]
GPU override                                        : -1    [DEFAULT]
Image Size                                          : 0     [DEFAULT]
Layer definition file                               : ./cifar-layers/layers-80sec.cfg 
Layer parameter file                                : ./cifar-layers/layer-params-80sec.cfg 
Learning Rate Scale Factor                          : 1     [DEFAULT]
Load file                                           :       [DEFAULT]
Maximum save file size (MB)                         : 0     [DEFAULT]
Minibatch size                                      : 128   [DEFAULT]
Model File Name                                     :       [DEFAULT]
Number of GPUs                                      : 1     [DEFAULT]
Number of channels in image                         : 3     [DEFAULT]
Number of epochs                                    : 1     
Reset layer momentum                                : False [DEFAULT]
Save path                                           : /home/ardila/test_run_diego/ 
Test and quit?                                      : 0     [DEFAULT]
Test on one batch at a time?                        : 1     [DEFAULT]
Testing frequency                                   : 13    
Unshare weight matrices in given layers             :       
Whether filp training image                         : True  [DEFAULT]
=========================
Running on CUDA device(s) -2
Current time: Wed Oct  9 21:15:27 2013
Saving checkpoints to /home/ardila/test_run_diego/ConvNet__2013-10-09_21.15.25
=========================
1.1... logprob:  2.176140, 0.815800 (1.355 sec)
1.2... logprob:  1.884615, 0.698000 (1.352 sec)
1.3... logprob:  1.761647, 0.646700 (1.349 sec)
1.4... logprob:  1.726771, 0.632500 (1.352 sec)
1.5... logprob:  1.657396, 0.607800 (1.343 sec)
epoch_cost: 9.20657008266
2.1... logprob:  1.612479, 0.583400 (1.350 sec)
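
A minimal sketch of such a test, in case it helps: it shells out to convnet.py with the exact arguments above and compares the deterministic logprob lines of stdout against the expected output. The test function name and the line filtering are my own; per-batch timings, the timestamp, and the checkpoint directory name vary between runs, so those parts are stripped before comparing.

import subprocess

# Exact invocation from above.
CMD = [
    "python", "convnet.py",
    "--data-path=/home/ardila/archconvnets/dropnn-release/cifar-10-py-colmajor",
    "--save-path=/home/ardila/test_run_diego/",
    "--test-range=6",
    "--train-range=1-5",
    "--layer-def=./cifar-layers/layers-80sec.cfg",
    "--layer-params=./cifar-layers/layer-params-80sec.cfg",
    "--data-provider=cifar-cropped-rand",
    "--test-freq=13",
    "--epochs=1",
]

# The logprob lines above, minus the per-batch "(x.xxx sec)" timings,
# which are not deterministic.
EXPECTED = [
    "1.1... logprob:  2.176140, 0.815800",
    "1.2... logprob:  1.884615, 0.698000",
    "1.3... logprob:  1.761647, 0.646700",
    "1.4... logprob:  1.726771, 0.632500",
    "1.5... logprob:  1.657396, 0.607800",
]

def test_convnet_stdout():
    out = subprocess.check_output(CMD, stderr=subprocess.STDOUT)
    if isinstance(out, bytes):  # check_output returns bytes on Python 3
        out = out.decode("utf-8")
    # Keep only the per-batch logprob lines and strip the trailing timing.
    got = [line.split(" (")[0] for line in out.splitlines() if "logprob:" in line]
    assert got[:len(EXPECTED)] == EXPECTED, "convnet.py output diverged from the reference run"

if __name__ == "__main__":
    test_convnet_stdout()
    print("stdout matches the reference run")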

@yamins81
Contributor

Working on this now.

@ardila
Member Author

ardila commented Oct 11, 2013

Maybe we want to test against this model, which was trained on CIFAR:
/home/ardila/test_run_diego/ConvNet__2013-10-09_21.23.05
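
Given the "Load file" and "Test and quit?" options in the dump above, something like the following should run the saved checkpoint on the test batch without further training (the exact flags are an assumption on my part, based on the standard cuda-convnet options):

python convnet.py -f /home/ardila/test_run_diego/ConvNet__2013-10-09_21.23.05 --test-only=1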

@ardila
Member Author

ardila commented Oct 11, 2013

This model was trained for longer and was posted online by the NYU group:
/home/ardila/model_fc128-dcf-50/model_fc128-dcf-50_run12/1500.5
