On Windows I built cpp_infer with CMake, and the inference model downloaded from the image above works fine with it.
I then downloaded the training model from the image above and exported it directly with the command:
python tools/export_model.py -c configs/rec/PP-OCRv3/ch_PP-OCRv3_rec_distillation.yml -o Global.pretrained_model=./ckpt/best_accuracy Global.save_inference_dir=./inference_model/ch_PP-OCRv3_rec/
The exported model runs fine for inference in Python, but in C++ the inference crashes inside the recognition interface in paddle_inference.dll.
Likewise, the exported .pdmodel file is quite small. How should I handle this?
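For reference, here is a minimal Python sketch (paths are assumptions, not taken from the issue) for sanity-checking the exported model with the Paddle Inference API before pointing cpp_infer at it. The distillation config typically writes sub-models into subdirectories such as Student/, so the directory used below is an assumption and should be adjusted to wherever inference.pdmodel was actually written.

```python
# Minimal sanity check of an exported PP-OCRv3 rec inference model
# using the Paddle Inference Python API. Paths below are assumptions.
import numpy as np
from paddle.inference import Config, create_predictor

# Assumed export location; the distillation config usually exports
# sub-models into subdirectories (e.g. Student/), so point this at the
# directory that actually contains inference.pdmodel / inference.pdiparams.
model_dir = "./inference_model/ch_PP-OCRv3_rec/Student"

config = Config(f"{model_dir}/inference.pdmodel",
                f"{model_dir}/inference.pdiparams")
config.disable_gpu()  # CPU is enough for a load/run check

predictor = create_predictor(config)
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])

# PP-OCRv3 rec expects NCHW input with height 48; width is variable.
dummy = np.random.rand(1, 3, 48, 320).astype("float32")
input_handle.reshape(dummy.shape)
input_handle.copy_from_cpu(dummy)

predictor.run()
out = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()
print("output shape:", out.shape)
```

If this loads and runs but the C++ pipeline still crashes, the problem is more likely in how the model directory is passed to cpp_infer than in the export itself.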