
Killed (also skip-thoughts) #29

Open
itsss opened this issue Oct 3, 2017 · 3 comments

itsss commented Oct 3, 2017

Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import generate
z = generate.load_all()
/home/OOOO/story/files/romance.npz
Loading skip-thoughts...
Killed

Loading skip-thoughts always ends with 'Killed'.
When I execute the skip-thoughts example command 'model = skipthoughts.load_model()', that command gets killed too.

Please help.


cuuupid commented Nov 25, 2017

Killed is usually the result of running out of memory or other resources. Check resource consumption during the run. What are your system specs?


quintendewilde commented Mar 4, 2018

Having this too. I'm still not sure whether I set the caffe path correctly (in config.py) in my Lasagne/Theano Docker image.

>>> import generate
>>> z = generate.load_all()
models/romance.npz
Loading skip-thoughts...
Killed

Model Name: MacBook Pro
Model Identifier: MacBookPro12,1
Processor Name: Intel Core i5
Processor Speed: 2,7 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 3 MB
Memory: 8 GB


cuuupid commented Mar 5, 2018

If you try to load the models from the interpreter like so:

>>> import numpy as np
>>> np.load('models/modelname.npz')

Are you able to load the models? If this gets killed, it is likely because you do not have the memory numpy needs to load the entire npz archive; loading it uses almost 12 GB of RAM on my 16 GB machine. Since a numpy archive is probably the best way to store the parameters, I'm not sure this can be improved much either. You may be able to load the parameters piece by piece and delete old variables to free up memory (e.g. old_var = None).

As far as the Docker image goes, I haven't used it before, but this particular issue likely comes down to loading the models into memory, since misconfiguring Pycaffe produces a more verbose error.
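The piece-by-piece idea can be sketched like this (a minimal sketch using a tiny stand-in archive, since the real romance.npz is several GB; it relies on the fact that numpy's NpzFile only decompresses an array when it is accessed):

```python
import numpy as np

# Build a small .npz archive standing in for the model file.
np.savez('demo_model.npz',
         embedding=np.zeros((10, 4)),
         weights=np.ones((4, 4)))

shapes = {}
with np.load('demo_model.npz') as archive:
    # NpzFile decompresses each array only on access, so processing one
    # parameter at a time and dropping the reference keeps peak memory
    # near the size of the largest single array, not the whole archive.
    for name in archive.files:
        arr = archive[name]
        shapes[name] = arr.shape  # do whatever per-parameter work is needed
        del arr                   # free this array before loading the next

print(shapes)
```

Whether this helps in practice depends on whether the downstream code can consume parameters one at a time instead of keeping them all resident.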
