make pruning error #37
Comments
I've never encountered this problem before. Maybe issue #12 will help you.
I have solved this problem. It was because the path to ILSVRC contained Chinese characters. Sorry about this.
Glad to hear that
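For anyone else who hits this IndexError: it is consistent with non-ASCII characters in the dataset path, as described above. A minimal standalone check, not part of the repo; the path below is a placeholder, not the actual config value:

```python
# Minimal sketch: list any non-ASCII characters in a dataset path,
# since a path with Chinese characters was the cause reported above.
ilsvrc_path = "/path/to/ILSVRC"   # placeholder: substitute your own dataset path

non_ascii = [c for c in ilsvrc_path if ord(c) > 127]
if non_ascii:
    print("Non-ASCII characters in path:", non_ascii)
else:
    print("Path is plain ASCII.")
```

If the check prints any characters, moving the dataset to an ASCII-only path should avoid the parser error.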
Thanks for the reply. I'd like to know how much disk space I need to run channel pruning. When I run the code, it seems to freeze images to temp/frozen500.pickle, and it is very slow. Or maybe I'm doing something wrong.
I've forgotten exactly how much; it should be no more than 100G. It's very slow because we freeze the features of each layer for 500x10 images.
- Once you have the pickle, you can specify `-frozen 1` on the next run. It will load the pickle directly instead of freezing again, which is fast.
- Or you can specify `-nBatches 100`, which freezes only 100x10 images, but the final accuracy might not be as high as reported.
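To make the two options concrete, a small usage sketch, assuming the extra flag is simply appended to the original command from this issue; the script itself is an illustration, not part of the repo:

```python
# Sketch: pick flags for the next run depending on whether the frozen-feature
# pickle from a previous run is already on disk.
import os

base_cmd = "python train.py -action c3 -caffe 0"

if os.path.exists("temp/frozen500.pickle"):
    # reuse the previously frozen features; loading the pickle is fast
    print(base_cmd + " -frozen 1")
else:
    # freeze only 100x10 images; faster, but accuracy may be lower than reported
    print(base_cmd + " -nBatches 100")
```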
Thanks, I will try this, or I'll need to add a hard disk.
Hi. I'm sorry to bother you. When I run `python train.py -action c3 -caffe 0`, I get the following error:
no lighting pack
[libprotobuf INFO google/protobuf/io/coded_stream.cc:610] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 553432081
Process Process-1:
Traceback (most recent call last):
File "/home/tang/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/tang/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/tang/channel-pruning/lib/worker.py", line 21, in job
ret = target(**kwargs)
File "train.py", line 26, in step0
net = Net(pt, model=model, noTF=1)
File "/home/tang/channel-pruning/lib/net.py", line 67, in init
self.net_param = NetBuilder(pt=pt)
File "/home/tang/channel-pruning/lib/builder.py", line 131, in init
pb2.text_format.Merge(f.read(), self.net)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 476, in Merge
descriptor_pool=descriptor_pool)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 526, in MergeLines
return parser.MergeLines(lines, message)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 559, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 574, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 675, in _MergeField
merger(tokenizer, message, field)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 764, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 675, in _MergeField
merger(tokenizer, message, field)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 764, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 675, in _MergeField
merger(tokenizer, message, field)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 809, in _MergeScalarField
value = tokenizer.ConsumeString()
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 1151, in ConsumeString
the_bytes = self.ConsumeByteString()
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 1166, in ConsumeByteString
the_list = [self._ConsumeSingleByteString()]
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_format.py", line 1191, in _ConsumeSingleByteString
result = text_encoding.CUnescape(text[1:-1])
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_encoding.py", line 103, in CUnescape
result = ''.join(_cescape_highbit_to_str[ord(c)] for c in result)
File "/home/tang/anaconda3/lib/python3.6/site-packages/protobuf-3.2.0-py3.6.egg/google/protobuf/text_encoding.py", line 103, in
result = ''.join(_cescape_highbit_to_str[ord(c)] for c in result)
IndexError: list index out of range
Can you tell me what the problem is?
Thanks!
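For reference, the IndexError at the end of this traceback matches the non-ASCII-path explanation given earlier in the thread: the failing line indexes `_cescape_highbit_to_str` by `ord(c)`, so a character with a code point beyond the table's length (for example a Chinese character in a file path) overruns it. A stand-in illustration; the table and path below are placeholders, not protobuf's actual code:

```python
# Stand-in illustration: indexing a byte-sized escape table with a character
# whose code point exceeds 255 raises the same IndexError seen in the traceback.
escape_table = [chr(i) for i in range(256)]   # hypothetical 256-entry, byte-indexed table

path = "/home/tang/数据集/ILSVRC"   # placeholder path containing Chinese characters
for c in path:
    try:
        escape_table[ord(c)]
    except IndexError:
        print("IndexError for %r (code point %d)" % (c, ord(c)))
```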