
Running error #1

Open
mhmd-shadfar opened this issue Sep 2, 2018 · 22 comments

@mhmd-shadfar

Hi,
When I run the code I get the following error:

currentUrl
currentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec
parentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master
video_files:
['data/train/lab/4p-c0.avi', 'data/train/lab/4p-c1.avi', 'data/train/lab/4p-c2.avi', 'data/train/lab/4p-c3.avi']
detection_files:
['data/train/lab/4p-c0.pickle', 'data/train/lab/4p-c1.pickle', 'data/train/lab/4p-c2.pickle', 'data/train/lab/4p-c3.pickle']
Traceback (most recent call last):
File "master.py", line 284, in
demo(save=True)
File "master.py", line 188, in demo
caps, detections, video_length = get_caps_and_pickles(video_files, detection_files)
File "master.py", line 26, in get_caps_and_pickles
detections.append(np.load(detection_file))
File "/home/mohi/.local/lib/python3.6/site-packages/numpy/lib/npyio.py", line 384, in load
fid = open(file, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'data/train/lab/4p-c0.pickle'

Please help me solve this issue. Thanks.

@Robootx
Owner

Robootx commented Sep 16, 2018

Please see the updated README.

@mhmd-shadfar
Author

Hi again, and thanks for the response.
I did that, but now I get the following error:
mohi@mohi:~/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master$ sudo python3 master.py
[sudo] password for mohi:
currentUrl
currentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec
parentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master
video_files:
['data/train/lab/4p-c0.avi', 'data/train/lab/4p-c1.avi', 'data/train/lab/4p-c2.avi', 'data/train/lab/4p-c3.avi']
detection_files:
['data/train/lab/4p-c0.pickle', 'data/train/lab/4p-c1.pickle', 'data/train/lab/4p-c2.pickle', 'data/train/lab/4p-c3.pickle']
Get feature map: block_layer3 with is_training= True reuse: False
image_size: (?, 3, 224, 224)
conv_1: (?, 64, 224, 224)
identity_1: (?, 64, 224, 224)
identity_2: (?, 64, 224, 224)
block_layer_1: (?, 64, 112, 112)
block_layer_2: (?, 128, 56, 56)
block_layer_3: (?, 256, 56, 56)
feature map shape: (?, 56, 56, 256)
feature_with_position: (?, 12549)
Initialize Done...
resnet_size: 18
feature_map_layer: block_layer3
alpha: 0.5
Load weights from box2vec/model/model-86000 for box_to_vect!

frame_id: 100

frame_id: 101

frame_id: 102

frame_id: 103

frame_id: 104

frame_id: 105

frame_id: 106

frame_id: 107

frame_id: 108

frame_id: 109
Traceback (most recent call last):
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.
[[Node: resnet/conv2d/Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](transpose, resnet/conv2d/kernel/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "master.py", line 284, in
demo(save=True)
File "master.py", line 213, in demo
embedding = get_embedding(boxes, frames, box_to_vect, sess)
File "master.py", line 83, in get_embedding
embedding = box_to_vect.inference(image_batch, bbox_batch, box_ind_batch, sess)
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 167, in inference
embeddings = sess.run(self.embeddings, feed_dict=feed_dict)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.
[[Node: resnet/conv2d/Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](transpose, resnet/conv2d/kernel/read)]]

Caused by op 'resnet/conv2d/Conv2D', defined at:
File "master.py", line 284, in
demo(save=True)
File "master.py", line 198, in demo
box_to_vect, sess = init_box_to_vect_net(model_file)
File "master.py", line 98, in init_box_to_vect_net
mode='test',
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 115, in init
self._build_net()
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 123, in _build_net
self.embeddings = self.feature_to_embedding()
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 605, in feature_to_embedding
feature_map = self.resnet_v2_maps() # [image_batch, h, w, c]
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 600, in resnet_v2_maps
params['block'], params['layers'], feature_map_layer=self.feature_map_layer, is_training=is_training, data_format=self.data_format)
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 509, in features_v2_generator
data_format=data_format)
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 337, in conv2d_fixed_padding
data_format=data_format)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 621, in conv2d
return layer.apply(inputs)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 828, in apply
return self.call(inputs, *args, **kwargs)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 717, in call
outputs = self.call(inputs, *args, **kwargs)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 168, in call
outputs = self._convolution_op(inputs, self.kernel)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 868, in call
return self.conv_op(inp, filter)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 520, in call
return self.call(inp, filter)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 204, in call
name=self.name)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 956, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/home/mohi/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

UnimplementedError (see above for traceback): Generic conv implementation only supports NHWC tensor format for now.
[[Node: resnet/conv2d/Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](transpose, resnet/conv2d/kernel/read)]]

I don't know why this is happening, but I should mention that I'm running TensorFlow on CPU because I have an AMD graphics card.
How should I solve this?

@Robootx
Owner

Robootx commented Sep 17, 2018

Maybe you need a GPU; the code doesn't support CPU.
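
As a quick sanity check, here is a minimal TF 1.x sketch (not part of the repo) to confirm whether TensorFlow can see a CUDA GPU at all; an AMD card is not visible to the stock tensorflow / tensorflow-gpu wheels, so everything runs on CPU and the NCHW convolution in the traceback above fails:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# False on a CPU-only install (plain `tensorflow` wheel, or no NVIDIA GPU).
print(tf.test.is_gpu_available())
# False unless the wheel was built against CUDA (i.e. tensorflow-gpu).
print(tf.test.is_built_with_cuda())
# Devices TF can actually use, e.g. ['/device:CPU:0'].
print([d.name for d in device_lib.list_local_devices()])
```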

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 17, 2018 via email

@Robootx
Owner

Robootx commented Sep 17, 2018

NVIDIA GTX 1070, tensorflow-gpu.

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 17, 2018 via email

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 17, 2018 via email

@Robootx
Owner

Robootx commented Sep 17, 2018

You can try feeding the input images in NHWC format and setting data_format = 'channels_last' at line 97 in master.py.
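
For context, a minimal self-contained TF 1.x sketch (not the project's code) showing why the layout matters: the same convolution that fails above with channels_first (NCHW) on CPU runs fine with channels_last (NHWC). Setting data_format = 'channels_last' at line 97 applies the same idea to the box-to-vect network.

```python
import numpy as np
import tensorflow as tf

# NHWC batch: [batch, height, width, channels]
images = np.zeros([1, 224, 224, 3], dtype=np.float32)

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# With data_format='channels_last' the generic CPU conv kernel is available;
# 'channels_first' (NCHW) raises the UnimplementedError above on CPU-only TF.
y = tf.layers.conv2d(x, filters=64, kernel_size=3, padding='same',
                     data_format='channels_last')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: images}).shape)  # (1, 224, 224, 64)
```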

@mhmd-shadfar
Author

Hi,
I changed the code as you said (data_format = 'channels_last' in master.py), but now I get the following error. How can I solve it?

mohi@mohi:~/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master$ python3 master.py
currentUrl
currentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec
parentUrl /home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master
video_files:
['data/train/lab/4p-c0.avi', 'data/train/lab/4p-c1.avi', 'data/train/lab/4p-c2.avi', 'data/train/lab/4p-c3.avi']
detection_files:
['data/train/lab/4p-c0.pickle', 'data/train/lab/4p-c1.pickle', 'data/train/lab/4p-c2.pickle', 'data/train/lab/4p-c3.pickle']
Get feature map: block_layer3 with is_training= True reuse: False
Traceback (most recent call last):
File "master.py", line 284, in
demo(save=True)
File "master.py", line 198, in demo
box_to_vect, sess = init_box_to_vect_net(model_file)
File "master.py", line 98, in init_box_to_vect_net
mode='test',
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 115, in init
self._build_net()
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 123, in _build_net
self.embeddings = self.feature_to_embedding()
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 605, in feature_to_embedding
feature_map = self.resnet_v2_maps() # [image_batch, h, w, c]
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 600, in resnet_v2_maps
params['block'], params['layers'], feature_map_layer=self.feature_map_layer, is_training=is_training, data_format=self.data_format)
File "/home/mohi/Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master/box2vec/resnet.py", line 506, in features_v2_generator
print('image_size:', inputs.shape)
UnboundLocalError: local variable 'inputs' referenced before assignment

@Robootx
Owner

Robootx commented Sep 18, 2018

Sorry for the bug; I updated lines 498~504 in resnet.py.
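
For anyone hitting this before pulling the update: the error means `inputs` was only assigned on the channels_first path, so the channels_last path reached `print('image_size:', inputs.shape)` with nothing bound. A guess at the shape of the fix (the actual lines 498~504 may differ):

```python
import tensorflow as tf

def to_model_layout(images, data_format):
    """Return the tensor in the layout the ResNet expects, for either format."""
    if data_format == 'channels_first':
        # Transpose NHWC input to NCHW, which is faster on GPU.
        inputs = tf.transpose(images, [0, 3, 1, 2])
    else:
        # The branch that was missing: keep NHWC as-is so `inputs` is defined.
        inputs = images
    print('image_size:', inputs.shape)
    return inputs
```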

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 18, 2018 via email

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 18, 2018 via email

@mhmd-shadfar
Author

mhmd-shadfar commented Sep 26, 2018 via email

@ahz97

ahz97 commented Oct 19, 2018

Hi,
When I run the code I get the following error:

currentUrl E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master
currentUrl E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\box2vec
parentUrl E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master
video_files:
['E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c0.avi', 'E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c1.avi', 'E:\Human Detections\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c2.avi', 'E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c3.avi']
detection_files:
['E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c0.pickle', 'E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c1.pickle', 'E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c2.pickle', 'E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\data\train\lab\4p-c3.pickle']
Get feature map: block_layer3 with is_training= True reuse: False
image_size: (?, 3, 224, 224)
conv_1: (?, 64, 224, 224)
identity_1: (?, 64, 224, 224)
identity_2: (?, 64, 224, 224)
block_layer_1: (?, 64, 112, 112)
block_layer_2: (?, 128, 56, 56)
block_layer_3: (?, 256, 56, 56)
feature map shape: (?, 56, 56, 256)
Traceback (most recent call last):
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\master.py", line 284, in
demo(save=True)
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\master.py", line 198, in demo
box_to_vect, sess = init_box_to_vect_net(model_file)
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\master.py", line 98, in init_box_to_vect_net
mode='test',
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\box2vec\resnet.py", line 115, in init
self._build_net()
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\box2vec\resnet.py", line 123, in _build_net
self.embeddings = self.feature_to_embedding()
File "E:\Multi-Camera-Object-Tracking-via-Transferring-Representation-to-Top-View-master\box2vec\resnet.py", line 619, in feature_to_embedding
crop_size = Config['roi_pooling_size']
TypeError: 'type' object is not subscriptable

Please help me solve this issue.
Thanks.

@BhaskarNallani

Simply set data_format=None; it will then work on either GPU or CPU.
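
If it helps, this relies on the usual convention from the official TensorFlow ResNet code, which box2vec/resnet.py appears to follow (the exact implementation here is an assumption): when data_format is None, the model picks NCHW only if TF was built with CUDA, and NHWC otherwise, so the same script runs on GPU or CPU. A minimal sketch of that convention:

```python
import tensorflow as tf

def resolve_data_format(data_format=None):
    # Auto-select the layout: NCHW ('channels_first') only when a CUDA build
    # (i.e. a usable NVIDIA GPU) is present, otherwise CPU-friendly NHWC.
    if data_format is None:
        data_format = ('channels_first' if tf.test.is_built_with_cuda()
                       else 'channels_last')
    return data_format
```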

@leijuzi

leijuzi commented Jul 10, 2019

Hi, when I run master.py I get this error. Can anyone help me?

@leijuzi

leijuzi commented Jul 10, 2019

[screenshot of the error]

@Sukhoimaster

[screenshot of the same output]

Hey @leijuzi, I had the same output when running the program.
Did you manage to resolve this issue?

@kumarshrestha009

Downgrade your numpy to 1.16.3, 1.16.2, or 1.16.1.

@kumarshrestha009

kumarshrestha009 commented Oct 16, 2019 via email

@InzamamAnwar

@windspirit95

> [screenshot of the same output]
>
> Hey @leijuzi, I had the same output when running the program.
> Did you manage to resolve this issue?

Instead of downgrading numpy, just change line 26 to np.load(detection_file, allow_pickle=True).
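
For completeness, a small sketch of what the loading loop around line 26 of master.py would look like with that change (the surrounding function body is an assumption; only the np.load call comes from the original code). NumPy 1.16.3 and later default np.load to allow_pickle=False, which is what raises the error in the screenshots above:

```python
import numpy as np

def load_detections(detection_files):
    """Load the pickled per-camera detection files."""
    detections = []
    for detection_file in detection_files:
        # allow_pickle=True is required for pickled/object arrays on
        # NumPy >= 1.16.3, where the default flipped to False.
        detections.append(np.load(detection_file, allow_pickle=True))
    return detections
```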
