Releases: onnx/keras-onnx
v1.7.0
The major updates include:
- Support TensorFlow 2.x
- Support ONNX 1.7
- Support most huggingface/transformers models
- Improve RNN model conversion
- Make pytest the default unit-test framework
- Enable flake8 style checks
- ...and more
Details:
Support tf.nn.leaky_relu and fix advanced_activations (#514)
Add both tf.nn.X and tf.compat.v1.nn.X to activation_map (#513)
Handle different input cases for tf.nn.relu6 support (#510)
Support tf.nn.relu6 (#506)
Run flake8 for keras2onnx dir, add flake8 to UT (#504)
Add time_major handling for bidirectional lstms (#498)
try to enable flake8 checker. (#503)
Support Max/Min opset 12 (#500)
Support onnx 1.7 and ort 1.3 in UT and nightly build. (#496)
Fix shrink_mask for dynamic input shape (#499)
Add transformer TFXLNet to nightly build (#495)
Add support for tf.nn.depth_to_space lambda (#492)
add tf.where and logical ops supports. (#490)
Support dynamic end for tf.strided_slice conversion (#491)
in tf2.x, the tf.keras will be the default model format (#486)
Support tf.ArgMax/Min and Add UT for tf.Einsum (#488)
Support tensorflow 2.2 (#484)
Adjust input output sizes when any dim is None (#480)
Fix GPT2 UT for transformers==2.8.0 (#478)
Fix GPT2 output order mismatch (#476)
Handle conv layer spec when input0 is type SpaceToBatchND (#475)
improve the converter debugging (#466)
Add UT for PushTranspose optimizer Unsqueeze case (#472)
Unit test for TopK (#470)
Use greater/less_equal from onnxconverter_common (#469)
Increase error bar for test_efn (#468)
Enable test for efficientNet, fix coverage pytest (#464)
Update the behavior on custom op. (#459)
Handle mask-rcnn conversion for ort 1.2 (#452)
Handle time_major in lstm and remove reshape from embedding (#457)
support the random generator ops and fix the issues on tf.op (#453)
Convert tf EinSum, OneHot, LogicalAnd/Not etc (#449)
Better conversion for the subclassing model and code reformat. (#446)
Enable transformers in nightly build (#444)
Add conversion for FloorDiv, ZerosLike and Fix bug in slice (#439)
Add outputs to jupyter notebook EfficientNet (#445)
Fix some tf2.x conversion bugs. (#443)
Jupyter notebook for EfficientNet (#442)
Parametrizing RNN tests (#441)
Use layer_info.inputs as inputs for efficientNet (#438)
unittest -> pytest (#425)
Fix the depthwise conv_2d output issue. (#437)
Support tf.Cumsum conversion (#435)
Upgrade the converter to 1.7.0, along with the onnxconverter-common (#430)
Update unit test and coverage condition (#428)
Fix _create_keras_nodelist for test_rnn_state_passing (#427)
Support RNN for tf2 and tf.keras (#422)
patch for ir_version with onnx 1.7 packages. (#423)
Support masking for tf2 and tf.keras (#421)
Support initial states for Bidirectional RNN (#417)
Handle TimeDistributed layer for tf2 and tf.keras (#420)
Pin keras-segmentation==0.2.0 (#415)
Update README.md
Update README.md
Bidirectional GRU and SimpleRNN support (#413)
support tensorflow 2.2 and some fixing related to subclassed. (#414)
Fix LSTM layer conversion in tf 2.x (#412)
Add unit test for conv+batch fusion (#411)
add conv-1d keras layer spec. (#410)
Deleted the unused tf2onnx code. (#408)
Refactor RNN parameter extracting (#405)
Convert tf.add_n using keras _builtin (#406)
fix the conv/bn issue on NCHW tf.keras (#404)
Bidirectional Masking support (#400)
Relax vgg-seg error bound in nightly build (#402)
Disable vgg16 in tfv2 nightly build (#399)
fixing the conv auto-pads (#397)
Masking RNN with zeros input (#386)
Add InceptionV3 in tf2.x test. (#396)
Add vgg16 and nasnet to tfv2 application (#395)
Add DepthwiseConv2d to subclassed model and efficient-net test cases (#394)
Add tf2 to nightly build; add tf.square conversion (#393)
Fix get_attr string issue in convert_tf_depthwise (#391)
Custom Masking value (#389)
support the swish activation layer. (#390)
Enable more layer converters for the subclassing model. (#383)
Fixing pip install path in README (#388)
Add conv_add to unit test with constant input (#381)
Revert "add batchnormalization layer. (#380)"
add batchnormalization layer. (#380)
Pin onnxruntime for build in use of onnx < 1.6 (#377)
support the tf2.x variable in this converter. (#376)
test (#374)
conv-transpose layer conversion in the subclassing mode. (#372)
Add tf.cast to test_tf_slice in UT (#370)
Update README.md
Update README.md
Cast input argument of tf.slice to int32 (#369)
Contributions
Our community contributors in this release include @cjermain, @sonu1-p, @CNugteren, @buddhapuneeth, and others. Thank you for your efforts to make this converter better.
v1.6.5
The major update in this release is support for tf.keras in TensorFlow 2.0/2.1, which enables conversion of some popular models, such as huggingface/transformers.
v1.6.1
v1.6.0
v1.5.2
Major updates:
- Improve submodel and shared-model conversion to handle more challenging cases.
- Fix and validate several object detection, LSTM, and GAN models, and add them to the nightly build.
- Enable direct TensorFlow conversion and add command-line support.
- Add the Keras Masking layer; fix the TimeDistributed layer, LSTM, Dot, and more.
v1.5.1
Major updates:
- Work with tf.keras across multiple TensorFlow versions, with related bug fixes.
- Support ONNX symbolic name constraints.
- Better support for Keras layer conversion.
- Support MaskRCNN and YOLOv3, which can be run with ONNX Runtime.
- Verify model conversion for more categories, such as speech and GAN.
- Fix LSTM/BLSTM conversion bugs.
v1.5.0
keras2onnx version 1.5.0 is now available! This version features ONNX Opset 10 support, compatibility with conversion of state-of-the-art object detection models (YoloV3), and increased test coverage.
How do I use the latest keras2onnx package?
pip install keras2onnx --upgrade
python -c "import keras2onnx"
Note: keras2onnx has been tested with Python 3.5, 3.6, and 3.7. It does not currently support Python 2.x.
Highlights since the last release
- Updating package version to 1.5.0 (#113)
- Add OnnxOperatorBuilder (#112)
- Handle multiple dimensions case for BatchNormalization (#110, #106, #104)
- Improving test coverage + documentation (#109, #107, #100, #99, #79, #72, #70)
- Enable the dynamic batch size for the converted model (#93)
- Bug fixes / Conversion Updates
- CI Build Updates
- Opset 10 updates