Random nearest captions followed by an error #37
Comments
You may need to add a t.decode('utf-8'), like this:
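A minimal sketch of where the decode would go, assuming preprocess in skipthoughts.py matches the upstream loop (the traceback below points at sent_detector.tokenize(t) on line 149):

```python
# skipthoughts.py, inside preprocess(): under Python 3 the captions can
# arrive as bytes, and the Punkt tokenizer only accepts str.
for t in text:
    if isinstance(t, bytes):          # decode only when needed (assumption)
        t = t.decode('utf-8')
    sents = sent_detector.tokenize(t)
```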
Thanks @ErikOmberg, that solved that issue, but I'm now facing a new error in search.py:
Traceback (most recent call last):
On trying to print the ti values, I get the following:
I think I had to add an f2 helper that casts. I'm not entirely sure it's "correct", so don't entirely trust it.
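The f2 helper itself isn't shown above, so treat this as a guess: if search.py follows the standard Theano beam-search gen_sample bookkeeping, ranks_flat / voc_size becomes true division under Python 3, making the ti values floats that can no longer index hyp_samples. A hedged sketch of the cast:

```python
# search.py, gen_sample() -- hypothetical fix, assuming the upstream
# beam-search bookkeeping. Under Python 3, `/` is true division, so the
# beam indices (the `ti` values) come out as floats; cast them back.
f2 = lambda a: a.astype('int64')            # hypothetical cast helper

trans_indices = f2(ranks_flat / voc_size)   # beam index (ti)
word_indices = ranks_flat % voc_size        # vocabulary index (wi)

for idx, [ti, wi] in enumerate(zip(trans_indices, word_indices)):
    new_hyp_samples.append(hyp_samples[ti] + [wi])  # ti is now an int
```

Plain floor division (ranks_flat // voc_size) would achieve the same thing without a helper.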
Thanks @ErikOmberg, that solved the error. But the code now gives almost the same output irrespective of the image I feed in. For example, if I feed in image ex3 from the images folder (which I think shows french fries and ketchup), I get the output below; I also get the same nearest captions and output when I feed it a tennis image:
NEAREST-CAPTIONS:
Yes, that's what I observe too. Please let me know if you find out how to get quality captions.
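If every image really maps to the same neighbours, one hypothetical way to narrow it down is to check whether the convnet features differ at all between two images. This sketch assumes generate.py exposes load_all, load_image, and compute_features as in the upstream repo, and that z['net'] is the image network:

```python
import numpy
from numpy.linalg import norm

import generate

z = generate.load_all()

# Two images the thread mentions: ex2 (a flower) and ex3 (fries & ketchup).
_, im1 = generate.load_image('./images/ex2.jpg')
_, im2 = generate.load_image('./images/ex3.jpg')

feats1 = generate.compute_features(z['net'], im1).flatten()
feats2 = generate.compute_features(z['net'], im2).flatten()

# Cosine similarity close to 1.0 for two very different images would point
# at the image-loading/convnet step rather than the caption search.
print(numpy.dot(feats1, feats2) / (norm(feats1) * norm(feats2)))
```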
On running generate.story(z, './images/ex2.jpg'), which is an image of a flower, I get random nearest-neighbor captions followed by the error below:
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\generate.py", line 59, in story
print('')
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\skipthoughts.py", line 84, in encode
X = preprocess(X)
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\skipthoughts.py", line 149, in preprocess
sents = sent_detector.tokenize(t)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1237, in tokenize
return list(self.sentences_from_text(text, realign_boundaries))
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1285, in sentences_from_text
return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in span_tokenize
return [(sl.start, sl.stop) for sl in slices]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in
return [(sl.start, sl.stop) for sl in slices]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1316, in _realign_boundaries
for sl1, sl2 in _pair_iter(slices):
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 312, in _pair_iter
prev = next(it)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1289, in _slices_from_text
for match in self._lang_vars.period_context_re().finditer(text):
TypeError: cannot use a string pattern on a bytes-like object
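For reference, the final TypeError reproduces with any bytes input to NLTK's Punkt tokenizer, which is why the t.decode('utf-8') fix above resolves it:

```python
import nltk

# Assumes the punkt model is downloaded (nltk.download('punkt')).
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')

sent_detector.tokenize('a flower in a vase .')   # OK: str input
sent_detector.tokenize(b'a flower in a vase .')  # TypeError: cannot use a
                                                 # string pattern on a
                                                 # bytes-like object
```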