Pretrained models/newest_model.npz #314
Is this the one you are looking for: https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR
Hello! Thanks for using our library!
Hi, looking forward to the release! I tried to run the inference code using the lw openpose model but I am getting this error: ValueError: Training / inference mode not defined. Argument
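For reference, that truncated ValueError is the message TensorLayer-style models raise when neither training nor inference mode has been set before the forward pass. A minimal, hedged sketch of the usual fix, where model and x are placeholders for the hyperpose network and a preprocessed input batch (the exact hyperpose API may differ):
# Hedged sketch: `model` is assumed to be a TensorLayer-2.x-style Model and
# `x` a preprocessed input batch; the exact hyperpose inference API may differ.
model.eval()            # declare inference mode before the forward pass
outputs = model(x)      # alternatively: outputs = model(x, is_train=False)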
@sulgwyn sorry, I've been busy developing pifpaf in the past few weeks.
I have tried to run it, but it comes out:
Hey @lengyuner!
@bhumikasinghrk
@lengyuner thanks for your quick response, but it shows an error at line 15 in infer.py ----> model.load_weights(weight_path): ValueError: Cannot assign to variable block_1_1_ds_conv1/filters:0 due to variable shape (1, 1, 64, 256) and value shape (64,) are incompatible
@bhumikasinghrk
@lengyuner thanks for replying.
Try this if you have got the onnx file (this document is written in Chinese; if you have difficulties, let me know): https://www.jianshu.com/p/3a51f7d3357f and change the code for your purpose. (You may need 'Netron' while changing the code.)
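For instance, once you have the .onnx file, a quick way to check that it loads and to run it outside TensorLayer is onnxruntime. A minimal sketch follows; the model path, input size, and layout are placeholders, so check the real input/output names in Netron:
# Minimal sketch using onnxruntime; the model path, input size and layout are
# placeholders -- check the real input/output names in Netron.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("lightweight_openpose.onnx")  # placeholder path
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)                      # e.g. [1, 3, H, W]

x = np.random.rand(1, 3, 368, 432).astype(np.float32)     # placeholder NCHW batch
outputs = sess.run(None, {inp.name: x})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)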
Hi. I faced the same issue. Have you found a solution to load the weights properly?
I haven't.
Dear @Gyx-One & @ganler, |
Hello @orestis-z,
Thanks for your quick reply @Gyx-One. I've evaluated the C++ OpenPose version on COCO and achieved an accuracy of only ~30 AP. In addition, I've edited the .onnx graph to accept images with the resolution of the COCO dataset, which increased the accuracy to ~42 AP. This version, however, runs slower than the original OpenPose implementation from CMU. Given this, I wanted to check if something is wrong with my evaluation script or if the Python version with 1 scale and a small input resolution would lead to the same result. Given your provided weights, I can't reproduce the accuracy that you've reported for OpenPose, neither with the C++ nor with the Python version. I believe it would be fair to provide the .npz weights for all reported methods, so that we can reproduce them. If we can reproduce them, I'll be very thankful for your great work and the amazing results, as you'd have engineered a system that can run in real-time and still perform well.
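For reference, the input-resolution edit mentioned above can be done with the onnx Python API. A minimal sketch follows; file names and the example dimensions are placeholders, and graphs with hard-coded intermediate shapes may need more editing than this, which is where Netron helps:
# Minimal sketch of changing the declared input resolution of an .onnx graph.
# File names and dimensions are placeholders; graphs with hard-coded
# intermediate shapes may need further edits (inspect them in Netron).
import onnx

model = onnx.load("openpose.onnx")                      # placeholder path
dims = model.graph.input[0].type.tensor_type.shape.dim  # typically [N, C, H, W]
dims[2].dim_value = 368                                 # placeholder height
dims[3].dim_value = 656                                 # placeholder width
onnx.checker.check_model(model)
onnx.save(model, "openpose_368x656.onnx")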
Hello @orestis-z,
Clicked the wrong place accidentally, sorry.
@Gyx-One Thanks for elaborating. I'm going to open a PR in the next few days with some C++ Python bindings so you can also reproduce the results.
@orestis-z Sure! Thanks again for your contribution! :) |
Hello @orestis-z! I have already uploaded a series of models. The evaluation command lines and the results are as follows:
1. model_name: new_opps
2. model_name: new_lopps
3. model_name: new_lopps_resnet50
4. model_name: new_lopps_vggtiny
5. model_name: new_mbopps
In the next days, I will try to convert the original openpose weights to the npz_dict format and figure out the difference between our training procedure and openpose's original training procedure.
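As a side note, an npz_dict checkpoint is essentially a NumPy .npz archive keyed by weight name, so its contents and expected shapes can be inspected with numpy alone; a minimal sketch, using the newest_model.npz name from this issue:
# Minimal sketch: an npz_dict checkpoint is essentially a .npz archive keyed by
# weight name, so numpy alone can list the stored tensors and their shapes.
import numpy as np

with np.load("newest_model.npz") as ckpt:
    for key in ckpt.files:
        print(key, ckpt[key].shape, ckpt[key].dtype)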
@Gyx-One Thanks for the update! I didn't make an MR as I didn't have time to familiarize myself with your build system. But please find in the following the Python wrapper for hyperpose:
// Adapted from https://github.com/tensorlayer/hyperpose/blob/master/examples/cli.cpp
#include <assert.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <array>
#include <hyperpose/hyperpose.hpp>
#include <hyperpose/logging.hpp>
#include <cstdint>
#include <iostream>
#include <ostream>
#include <stdexcept>
#include <string>
#include <string_view>
#include <utility>
#include <variant>
#include <vector>
#include "opencv2/core/mat.hpp"
namespace py = pybind11;
inline constexpr auto log_hp = []() -> std::ostream& {
std::cout << "[HyperPose] ";
return std::cout;
};
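// Thin wrapper holding whichever post-processing parser (PAF, pose proposal, or PifPaf)
// was selected at runtime; process() dispatches to it via std::visit.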
class ParserVariant {
public:
using var_t = std::variant<hyperpose::parser::pose_proposal, hyperpose::parser::paf, hyperpose::parser::pifpaf>;
ParserVariant(var_t v) : parser_(std::move(v)) {}
template<typename Container>
const std::vector<hyperpose::human_t> process(Container&& featureMapContainers) {
return std::visit([&featureMapContainers](auto& arg) { return arg.process(featureMapContainers); }, parser_);
}
private:
var_t parser_;
};
class HyperPose {
const bool keepRatio_;
hyperpose::dnn::tensorrt engine_;
ParserVariant parser_;
static constexpr int MAX_BATCH_SIZE = 1;
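// Build the TensorRT engine, picking the constructor from the model file suffix
// (.onnx, .uff, or otherwise a serialized TensorRT blob).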
static hyperpose::dnn::tensorrt getEngine(const std::string& modelPath,
const cv::Size& networkResolution,
const bool keepRatio,
const bool enableLogging) {
if (enableLogging) hyperpose::enable_logging();
hyperpose::info("Model: ", modelPath, "\n");
constexpr std::string_view ONNX_SUFFIX = ".onnx";
constexpr std::string_view UFF_SUFFIX = ".uff";
if (std::equal(ONNX_SUFFIX.crbegin(), ONNX_SUFFIX.crend(), modelPath.crbegin()))
return hyperpose::dnn::tensorrt(hyperpose::dnn::onnx{modelPath}, networkResolution, MAX_BATCH_SIZE, keepRatio);
if (std::equal(UFF_SUFFIX.crbegin(), UFF_SUFFIX.crend(), modelPath.crbegin())) {
hyperpose::warning(
"For .uff model, the program only takes 'image' as input node, and "
"'outputs/conf,outputs/paf' as output nodes.\n");
return hyperpose::dnn::tensorrt(hyperpose::dnn::uff{modelPath, "image", {"outputs/conf", "outputs/paf"}},
networkResolution,
MAX_BATCH_SIZE,
keepRatio);
}
hyperpose::warning("Your model file's suffix is not [.onnx | .uff]. Your model file path: ", modelPath, "\n");
hyperpose::warning("We assume this is a serialized TensorRT model, and we'll evaluate it in this way.\n");
return hyperpose::dnn::tensorrt(hyperpose::dnn::tensorrt_serialized{modelPath}, networkResolution, MAX_BATCH_SIZE, keepRatio);
}
static ParserVariant::var_t getParser(const std::string& postProcessingMethod, const cv::Size& inputSize) {
if (postProcessingMethod == "paf") return hyperpose::parser::paf{};
if (postProcessingMethod == "ppn") return hyperpose::parser::pose_proposal(inputSize);
if (postProcessingMethod == "pifpaf") return hyperpose::parser::pifpaf(inputSize.height, inputSize.width);
throw std::invalid_argument("Unknown post-processing method '" + postProcessingMethod +
"'. Use 'paf', 'ppn' or 'pifpaf'.");
}
public:
HyperPose(const std::string& modelPath,
const cv::Size& networkResolution,
const bool keepRatio = true,
const std::string& postProcessingMethod = "paf",
const bool enableLogging = false) :
keepRatio_(keepRatio),
engine_{getEngine(modelPath, networkResolution, keepRatio, enableLogging)},
parser_{getParser(postProcessingMethod, engine_.input_size())} {}
HyperPose(const py::object& config) :
HyperPose(config.attr("model_path").cast<std::string>(),
{config.attr("network_resolution").attr("__getitem__")("width").cast<int>(),
config.attr("network_resolution").attr("__getitem__")("height").cast<int>()},
config.attr("keep_ratio").cast<bool>(),
config.attr("post_processing_method").cast<std::string>(),
config.attr("enable_logging").cast<bool>()) {}
const std::vector<hyperpose::human_t> infer(const cv::Mat& mat) {
// * TensorRT Inference.
const std::vector featureMaps = engine_.inference({mat});
assert(featureMaps.size() == 1);
// * Post-Processing.
std::vector poses = parser_.process(featureMaps[0]);
for (auto&& pose : poses) {
if (keepRatio_) hyperpose::resume_ratio(pose, mat.size(), engine_.input_size());
pose.score /= 100; // convert from percentage to fraction format
}
return poses;
}
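// Overload taking a NumPy uint8 array of shape (H, W, 3); the buffer is wrapped
// as a cv::Mat header (no copy) and forwarded to the cv::Mat overload.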
const std::vector<hyperpose::human_t> infer(const py::array_t<uint8_t>& mat) {
auto rows = mat.shape(0);
auto cols = mat.shape(1);
auto type = CV_8UC3;
const cv::Mat cvMat(rows, cols, type, const_cast<unsigned char*>(mat.data()));
return infer(cvMat);
}
};
PYBIND11_MODULE(hyperpose, module) {
py::class_<HyperPose>(module, "HyperPose")
.def(py::init<const py::object&>())
.def("infer", py::overload_cast<const py::array_t<uint8_t>&>(&HyperPose::infer));
py::class_<hyperpose::human_t>(module, "human_t")
.def_readwrite("score", &hyperpose::human_t::score)
.def_readwrite("parts", &hyperpose::human_t::parts);
py::class_<hyperpose::body_part_t>(module, "body_part_t")
.def_readwrite("has_value", &hyperpose::body_part_t::has_value)
.def_readwrite("x", &hyperpose::body_part_t::x)
.def_readwrite("y", &hyperpose::body_part_t::y)
.def_readwrite("score", &hyperpose::body_part_t::score);
}
And cmake:
Pybind11: you can install it with
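For completeness, a hedged sketch of how the compiled module above could be driven from Python (the module name comes from PYBIND11_MODULE above; the model path, network resolution, and image file are placeholders):
# Hedged usage sketch for the pybind11 module defined above; the model path,
# network resolution and image file are placeholders.
from types import SimpleNamespace

import cv2
import hyperpose  # the compiled extension built from the C++ sources above

config = SimpleNamespace(
    model_path="openpose.onnx",                        # placeholder
    network_resolution={"width": 656, "height": 368},  # placeholder
    keep_ratio=True,
    post_processing_method="paf",                      # "paf", "ppn" or "pifpaf"
    enable_logging=False,
)

engine = hyperpose.HyperPose(config)
image = cv2.imread("example.jpg")                      # uint8 HxWx3 BGR image
humans = engine.infer(image)
for human in humans:
    visible = [(p.x, p.y, p.score) for p in human.parts if p.has_value]
    print(f"score={human.score:.3f}, visible keypoints: {len(visible)}")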
Hi, I am unable to find the file newest_model.npz for the ResNet50 backbone architecture. Are the pretrained models released? If so, where can I find the .npz file? If not, when can I expect the pretrained models to be released for inference?