train_output.txt
There is 1 NVIDIA GPU available in this machine.
MemoryDataLayer dataTrain loading data:
179.443 MB
name:mnist train images dim[4]={60000,1,28,28}
234.375 KB
name:mnist train labels dim[4]={60000,1,1,1}
MemoryDataLayer dataTest loading data:
29.9072 MB
name:mnist test images dim[4]={10000,1,28,28}
39.0625 KB
name:mnist test labels dim[4]={10000,1,1,1}
=====================================================================================================================================
Layers:                                                     Responses:
=====================================================================================================================================
dataTrain
                                                            data[4]={100,1,28,28}
                                                            label[4]={100,1,1,1}
dataTest
* conv1 weight[4]={20,1,5,5} bias[4]={1,20,1,1}
                                                            * conv1[4]={100,20,24,24}
pool1
                                                            * pool1[4]={100,20,12,12}
* ip1 weight[2]={500,2880} bias[1]={500}
                                                            * ip1[4]={100,500,1,1}
relu1
* ip2 weight[2]={10,500} bias[1]={10}
                                                            * prob[4]={100,10,1,1}
prob
loss
=====================================================================================================================================
[Net] GPU 0: Total GPU memory: 17.1944 MB
=====================================================================================================================================
[Solver] GPU 0: Total GPU memory: 28.2267 MB
All GPUs: Total GPU memory: 28.2267 MB
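
The shapes in the layer table above follow standard LeNet-style arithmetic: a 5x5 valid convolution maps 28x28 to 24x24, 2x2 pooling halves that to 12x12, and flattening 20 channels of 12x12 gives 20*12*12 = 2880 inputs to ip1, matching weight[2]={500,2880}. A minimal Python sketch of that arithmetic (function and variable names are illustrative, not taken from the log):

def conv_out(size, kernel, stride=1, pad=0):
    # Output size of a convolution/pooling window: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

conv1 = conv_out(28, 5)                # 24 -> conv1[4]={100,20,24,24}
pool1 = conv_out(conv1, 2, stride=2)   # 12 -> pool1[4]={100,20,12,12}
flattened = 20 * pool1 * pool1         # 2880 -> ip1 weight[2]={500,2880}
print(conv1, pool1, flattened)         # 24 12 2880
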
===========================================================================================================================================
Training:                                                                              Testing:
===========================================================================================================================================
                                                                                       Epoch 0 Iteration 0 loss = 2.31218 * 1 eval = 0.11
Epoch 0 Iteration 0 learning_rate = 0.01 loss = 2.33974 * 1 eval = 0.12
Epoch 0 Iteration 100 learning_rate = 0.00992565 loss = 0.782117 * 1 eval = 0.79
                                                                                       Epoch 0 Iteration 200 loss = 0.34111 * 1 eval = 0.92
Epoch 0 Iteration 200 learning_rate = 0.00985258 loss = 0.296047 * 1 eval = 0.91
Epoch 0 Iteration 300 learning_rate = 0.00978075 loss = 0.373389 * 1 eval = 0.91
                                                                                       Epoch 0 Iteration 400 loss = 0.317869 * 1 eval = 0.91
Epoch 0 Iteration 400 learning_rate = 0.00971013 loss = 0.19207 * 1 eval = 0.98
Epoch 0 Iteration 500 learning_rate = 0.00964069 loss = 0.293197 * 1 eval = 0.93
                                                                                       Epoch 1 Iteration 600 loss = 0.309794 * 1 eval = 0.88
Epoch 1 Iteration 600 learning_rate = 0.0095724 loss = 0.204257 * 1 eval = 0.94
Epoch 1 Iteration 700 learning_rate = 0.00950522 loss = 0.223026 * 1 eval = 0.94
                                                                                       Epoch 1 Iteration 800 loss = 0.28211 * 1 eval = 0.94
Epoch 1 Iteration 800 learning_rate = 0.00943913 loss = 0.236288 * 1 eval = 0.92
Epoch 1 Iteration 900 learning_rate = 0.00937411 loss = 0.419412 * 1 eval = 0.88
                                                                                       Epoch 1 Iteration 1000 loss = 0.287435 * 1 eval = 0.9
Epoch 1 Iteration 1000 learning_rate = 0.00931012 loss = 0.223635 * 1 eval = 0.95
After Train 1000 Iterations Needs Time: Time passes 22.1424 seconds
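
The learning rates logged above are consistent with an inverse-decay schedule of the form lr = base_lr * (1 + gamma * iter)^(-power) with base_lr = 0.01, gamma = 0.0001 and power = 0.75 (the "inv" policy familiar from Caffe-style solvers); this is inferred from the logged numbers, not stated in the log itself. A short Python check:

base_lr, gamma, power = 0.01, 0.0001, 0.75
for it in (100, 200, 300, 400, 500):
    # Assumed inverse-decay policy; reproduces the logged learning_rate values
    lr = base_lr * (1 + gamma * it) ** (-power)
    print(it, round(lr, 8))
# 100 0.00992565   200 0.00985258   300 0.00978075   400 0.00971013   500 0.00964069
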