Can you add inference script? #12

Open
abhiksark opened this issue Jun 2, 2020 · 8 comments

Comments

@abhiksark

Also, is there any plan to prune the model to speed up inference?

Can we increase the batch size during inference?

@ThiagoMateo

Same question. Did you manage to increase the batch size?

@ThiagoMateo

[Screenshot from 2020-06-11 11-18-12]

I saw that the input and output have None on the 0th dimension. Why doesn't the model support batching?

@peteryuX (Owner)

Hey, @ThiagoMateo. The script can be modified for batched inference in a simple way: use a for-loop to run the NMS and decoding functions per image. The network part can handle a batch of inputs, as you say. Sorry for the inconvenience.

@ThiagoMateo

Hey @peteryuX, I changed your code as shown below. Is it correct?

[Screenshot from 2020-06-11 14-41-52]

But how do I set up dynamic batching?

@peteryuX (Owner)

Did you try tf.shape(x)[0] to get the inference batch size?
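
(For reference, a minimal sketch of what this means; the 640x640 input size is only an assumption. With a None batch dimension, x.shape[0] is unknown at graph-construction time, while tf.shape(x)[0] is a tensor that carries the actual batch size at run time.)

    import tensorflow as tf

    # Minimal sketch (the 640x640 input size is only an assumption): with a
    # None batch dimension, x.shape[0] is unknown when the graph is traced,
    # while tf.shape(x)[0] is a tensor holding the real batch size at run time.
    @tf.function(input_signature=[tf.TensorSpec([None, 640, 640, 3], tf.float32)])
    def show_batch_size(x):
        tf.print("static:", str(x.shape[0]), "dynamic:", tf.shape(x)[0])
        return tf.shape(x)[0]

    show_batch_size(tf.zeros([4, 640, 640, 3]))  # prints: static: None dynamic: 4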

@ThiagoMateo

Sorry @peteryuX, could you help me?

@HoangTienDuc

Hi @peteryuX, I have gone through your code many times and it is beautiful work.
I am not familiar with TF 2 and have the same question about how to do dynamic batching.
Can you guide me on where to add tf.shape(x)[0] to get dynamic batching?

Thank you!

@peteryuX (Owner)

peteryuX commented Jun 15, 2020

Hey guys, I think the information I shared earlier was not suitable.
A simpler way to solve this problem is to move the decode/NMS step out of the Keras model, like the following:

In ./modules/model.py, line 233:

    if training:
        out = (bbox_regressions, landm_regressions, classifications)
    else:
        # Return the raw network outputs here as well (instead of decoding
        # and running NMS inside the model), so that post-processing can be
        # done per image outside the Keras model.
        out = (bbox_regressions, landm_regressions, classifications)

In ./test.py, line 72:

        # One batched forward pass; the model now returns its raw outputs.
        bbox_regressions, landm_regressions, classifications = model(img)

        results = []
        for i in range(img.shape[0]):  # loop over the images in the batch
            preds = tf.concat(  # [bboxes, landms, landms_valid, conf]
                [bbox_regressions[i], landm_regressions[i],
                 tf.ones_like(classifications[i, :, 0][..., tf.newaxis]),
                 classifications[i, :, 1][..., tf.newaxis]], 1)
            priors = prior_box_tf((img.shape[1], img.shape[2]),
                                  cfg['min_sizes'], cfg['steps'], cfg['clip'])
            decode_preds = decode_tf(preds, priors, cfg['variances'])

            selected_indices = tf.image.non_max_suppression(
                boxes=decode_preds[:, :4],
                scores=decode_preds[:, -1],
                max_output_size=tf.shape(decode_preds)[0],  # the max output size you want
                iou_threshold=0.4,
                score_threshold=0.02)

            results.append(tf.gather(decode_preds, selected_indices).numpy())
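
(As a usage sketch, here is a hypothetical way to feed the loop above with a batch; the image count, size, and dummy data are assumptions, not from the repo. Note that prior_box_tf only depends on the input resolution, so it could also be computed once before the loop rather than per image.)

    import numpy as np
    import tensorflow as tf

    # Hypothetical batch: four preprocessed, equally sized frames stacked into
    # one tensor so the network runs a single forward pass; `model` is the
    # RetinaFace model from the snippet above.
    frames = [np.random.rand(640, 640, 3).astype(np.float32) for _ in range(4)]
    img = tf.convert_to_tensor(np.stack(frames))  # shape (4, 640, 640, 3)

    # One batched forward pass, then run the per-image decode + NMS loop above.
    bbox_regressions, landm_regressions, classifications = model(img)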
