
Fixed all accessibility colors. Again. #23055

Merged: 47 commits (Dec 11, 2024)
Commits (47)
8189132  Fixed all accessibility colors. Again. (MaanavD, Dec 9, 2024)
ed72c86  Removed ruby version req. for checklinks. (MaanavD, Dec 9, 2024)
376e895  Forced ruby version 3.3 (current latest). (MaanavD, Dec 9, 2024)
2a55fcf  fixed checklinks. (MaanavD, Dec 9, 2024)
c8f9c9b  Fixed ignore syntax. (MaanavD, Dec 9, 2024)
dbca8bd  Attempted fix checklinks. (MaanavD, Dec 9, 2024)
43c415d  Attempted fix checklinks again. (MaanavD, Dec 9, 2024)
6e2a114  checklinks fix v3. (MaanavD, Dec 9, 2024)
7ac7c06  Trying gh action for checklinks. (MaanavD, Dec 9, 2024)
1fe28d7  update ruby. (MaanavD, Dec 9, 2024)
32dbc95  Update htmlproofer. (MaanavD, Dec 9, 2024)
b71a8a4  node 22 for LTS. (MaanavD, Dec 9, 2024)
99ae397  Working! now to add the flags. (MaanavD, Dec 9, 2024)
bdd406f  trying to remove wrong flag. (MaanavD, Dec 9, 2024)
06de101  only check links. (MaanavD, Dec 9, 2024)
adef153  Updated checks syntax. (MaanavD, Dec 9, 2024)
251df49  Allow missing HREF. (MaanavD, Dec 9, 2024)
c433053  Don't check external hashes. (MaanavD, Dec 9, 2024)
79fdacc  replaced all instances of http with https (MaanavD, Dec 9, 2024)
4ee28c7  Fixed false. (MaanavD, Dec 9, 2024)
fcd0a48  Fixed false (again?)/ (MaanavD, Dec 9, 2024)
d2f3328  removed false? (MaanavD, Dec 9, 2024)
cc02e5c  Trying again. (MaanavD, Dec 9, 2024)
0006f62  Trying again.. (MaanavD, Dec 9, 2024)
95acbed  Trying again... (MaanavD, Dec 9, 2024)
e220f26  Ignore linkedin for spam. (MaanavD, Dec 9, 2024)
8b5e423  Trying different checklinks. (MaanavD, Dec 10, 2024)
cee8760  Trying different checklinks. (MaanavD, Dec 10, 2024)
8e2bbb8  Formatted using prettier. (MaanavD, Dec 10, 2024)
4d6646f  Using older htmlproofer. (MaanavD, Dec 10, 2024)
0f63d84  Using oldest htmlproofer. (MaanavD, Dec 10, 2024)
5d7f3b2  Tried older ruby. (MaanavD, Dec 10, 2024)
d9d0b86  attempting htmlproofer again. (MaanavD, Dec 10, 2024)
921db9f  Trying to fix formatting. (MaanavD, Dec 10, 2024)
cfcaf65  Trying again. (MaanavD, Dec 10, 2024)
73e2ee7  Trying again.... (MaanavD, Dec 10, 2024)
b1d4539  Fix checks. (MaanavD, Dec 10, 2024)
8a3b7f1  doublequote links. (MaanavD, Dec 10, 2024)
2adc617  remove links only check. (MaanavD, Dec 10, 2024)
3b8da3e  don't check external hash. (MaanavD, Dec 10, 2024)
827f066  don't check external hash.. (MaanavD, Dec 10, 2024)
68eca83  disable external hash check. (MaanavD, Dec 10, 2024)
650ca6a  no check external hash. (MaanavD, Dec 10, 2024)
0b109b5  block linkedin, allow_missing_href (MaanavD, Dec 10, 2024)
0ddcb8d  block linkedin properly. (MaanavD, Dec 10, 2024)
f9cbd18  fixed links. (MaanavD, Dec 10, 2024)
824c734  Fixed all links. (MaanavD, Dec 10, 2024)
31 changes: 27 additions & 4 deletions .github/workflows/check-website-links.yml
@@ -1,4 +1,5 @@
name: CheckLinks
+
on:
  push:
    branches:
@@ -12,20 +13,21 @@ jobs:
  checklinks:
    name: Check website links
    runs-on: ubuntu-latest
+
    steps:
      - uses: actions/checkout@v2
      - name: Ruby
        uses: ruby/setup-ruby@v1
        with:
-          ruby-version: 2.6
+          ruby-version: 3.3
          bundler-cache: true
      - name: Build jekyll website with drafts
        run: bundle exec jekyll build --drafts
+
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
-          node-version: 19.x
+          node-version: 22.x
+
      - name: Install dependencies
        run: npm install
@@ -37,7 +39,28 @@ jobs:
        run: |
          sudo mv ./build/* ./_site
          rm -rf ./_site/src
+
      - name: Check for broken links
        run: |
-          bundle exec htmlproofer --assume_extension --checks_to_ignore ImageCheck,ScriptCheck --only_4xx --http_status_ignore 429,403 --allow_hash_href --url_ignore "https://onnxruntime.ai/docs/reference/api/c-api.html,https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example,https://www.onnxruntime.ai/docs/resources/graph-optimizations.html,onnxruntime/capi/onnxruntime_pybind11_state.html,https://github.com/microsoft/onnx-converters-private/issues/new/choose,https://aka.ms/onnx/exportissue,https://aka.ms/onnx/board" --log-level :info ./_site
+          bundle exec htmlproofer ./_site \
+            --only_4xx \
+            --ignore-status-codes 429,403 \
+            --allow_hash_href \
+            --allow_missing_href \
+            --ignore_urls "/.*linkedin\.com.*/,https://onnxruntime.ai/docs/reference/api/c-api.html,https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example,https://www.onnxruntime.ai/docs/resources/graph-optimizations.html,onnxruntime/capi/onnxruntime_pybind11_state.html,https://github.com/microsoft/onnx-converters-private/issues/new/choose,https://aka.ms/onnx/exportissue,https://aka.ms/onnx/board" \
+            --no-check-external-hash
+      # - name: Check for broken links
+      #   uses: chabad360/htmlproofer@master
+      #   with:
+      #     directory: "./_site"
+      #     # The directory to scan
+      #     arguments: |
+      #       --no-check_external_hash
+      #       --assume_extension
+      #       --only_4xx
+      #       --ignore_status_codes 429,403,999
+      #       --allow_missing_href
+      #       --allow_hash_href
+      #       --checks 'Links'
+      #       --log-level :info
+      #       --ignore_urls "^https://linkedin.com,https://onnxruntime.ai/docs/reference/api/c-api.html,https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example,https://www.onnxruntime.ai/docs/resources/graph-optimizations.html,onnxruntime/capi/onnxruntime_pybind11_state.html,https://github.com/microsoft/onnx-converters-private/issues/new/choose,https://aka.ms/onnx/exportissue,https://aka.ms/onnx/board"
+      #       # The arguments to pass to HTMLProofer
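For context on the new invocation: the rewritten step scans the built `_site` directory, fails only on 4xx responses (except 429 and 403), tolerates `<a>` tags without an `href`, skips LinkedIn URLs (which throttle automated checkers, per the commit history), and no longer validates `#fragment` anchors on external pages. Below is a rough Python sketch of that policy, purely to illustrate what the flags mean; HTMLProofer itself is a Ruby gem, and the `requests`/`beautifulsoup4` dependencies here are assumptions, not part of this PR.

```python
import pathlib
import re

import requests
from bs4 import BeautifulSoup

IGNORE_STATUS = {429, 403}                        # --ignore-status-codes 429,403
IGNORE_URLS = [re.compile(r".*linkedin\.com.*")]  # --ignore_urls "/.*linkedin\.com.*/,..."

def check_site(site_dir: str) -> list[str]:
    """Collect external links that answer with a non-ignored 4xx, mirroring --only_4xx."""
    failures = []
    for page in pathlib.Path(site_dir).rglob("*.html"):
        soup = BeautifulSoup(page.read_text(errors="ignore"), "html.parser")
        for anchor in soup.find_all("a"):
            href = anchor.get("href")
            if not href or href.startswith("#"):  # --allow_missing_href / --allow_hash_href
                continue
            if not href.startswith(("http://", "https://")):
                continue  # internal links are also resolved by the real tool; skipped here
            if any(p.match(href) for p in IGNORE_URLS):
                continue
            url = href.split("#")[0]              # --no-check-external-hash
            try:
                status = requests.head(url, allow_redirects=True, timeout=10).status_code
            except requests.RequestException:
                failures.append(f"{page}: {href} (unreachable)")
                continue
            if 400 <= status < 500 and status not in IGNORE_STATUS:
                failures.append(f"{page}: {href} ({status})")
    return failures

if __name__ == "__main__":
    for failure in check_site("./_site"):
        print(failure)
```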
7 changes: 4 additions & 3 deletions _sass/color_schemes/onnxruntime.scss
@@ -13,6 +13,7 @@ $btn-primary-color: #226aca;
// }
// 2024 December Accessibility changes
.highlight .s { color: #3c7a3b ;}
+.highlight .py {color: #a25f00;}
// Initial Theme
.highlight .hll { background-color: #ffffcc; }
.highlight { background: #ffffff; }
@@ -22,7 +23,7 @@ $btn-primary-color: #226aca;
.highlight .o { color: #333333; }
.highlight .ch { color: #707070 ; }
.highlight .cm { color: #707070 ; }
-.highlight .cp { color: #557799; }
+.highlight .cp { color: #507191; }
.highlight .cpf { color: #707070 ; }
.highlight .c1 { color: #707070 ; }
.highlight .cs { color: #cc0000; font-weight: bold; }
@@ -52,7 +53,7 @@ $btn-primary-color: #226aca;
.highlight .ni { color: #880000; font-weight: bold; }
.highlight .ne { font-weight: bold; color: #eb0000; }
.highlight .nf { color: #0066BB; font-weight: bold; }
-.highlight .nl { font-weight: bold; color: #8f6f00; }
+.highlight .nl { font-weight: bold; color: #876900; }
.highlight .nn { font-weight: bold; color: #0d77a2 ; }
.highlight .nt { color: #007700; }
.highlight .nv { color: #996633; }
@@ -68,7 +69,7 @@ $btn-primary-color: #226aca;
.highlight .sc { color: #0044DD; }
.highlight .dl { background-color: #fff0f0; }
.highlight .sd { color: #d54220; }
-.highlight .s2 { background-color: #fff0f0; }
+.highlight .s2 { color: #3c7a3b ; background-color: #fff0f0; }
.highlight .se { color: #666666; font-weight: bold; background-color: #fff0f0; }
.highlight .sh { background-color: #fff0f0; }
.highlight .si { background-color: #eeeeee; }
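The color changes above are contrast-driven: #557799 becomes #507191 for `.cp` tokens, #8f6f00 becomes #876900 for `.nl`, a new #a25f00 is added for `.py`, and `.s2` double-quoted strings get an explicit green instead of inheriting a default foreground on the #fff0f0 background. A quick WCAG 2.x sanity check (a sketch for illustration, not part of the PR) suggests the new values clear the 4.5:1 AA threshold for normal text:

```python
# Contrast-ratio formulae from https://www.w3.org/TR/WCAG21/#dfn-contrast-ratio.

def channel(c8: int) -> float:
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# token: (new color, background it renders on in this theme)
changes = {
    ".cp": ("#507191", "#ffffff"),
    ".nl": ("#876900", "#ffffff"),
    ".py": ("#a25f00", "#ffffff"),
    ".s2": ("#3c7a3b", "#fff0f0"),
}
for token, (fg, bg) in changes.items():
    ratio = contrast(fg, bg)
    print(f"{token}: {fg} on {bg} -> {ratio:.2f}:1 ({'AA pass' if ratio >= 4.5 else 'AA fail'})")
```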
4 changes: 2 additions & 2 deletions docs/build/eps.md
@@ -271,7 +271,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
*2024.3 is the current recommended OpenVINO™ version. [OpenVINO™ 2023.3](https://docs.openvino.ai/2023.3/home.html) is minimal OpenVINO™ version requirement.*

2. Configure the target hardware with specific follow on instructions:
-* To configure Intel<sup>®</sup> Processor Graphics(GPU) please follow these instructions: [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_configurations_for_intel_gpu.html#gpu-guide-windows), [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_configurations_for_intel_gpu.html#linux)
+* To configure Intel<sup>®</sup> Processor Graphics(GPU) please follow these instructions: [Windows](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#windows), [Linux](https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-gpu.html#linux)


3. Initialize the OpenVINO™ environment by running the setupvars script as shown below. This is a required step:
@@ -306,7 +306,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
* `--use_openvino` builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building OpenVINO™ Execution Provider. This can be overriden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices.

-Refer to [Intel GPU device naming convention](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPU's co-exist.
+Refer to [Intel GPU device naming convention](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPU's co-exist.

| Hardware Option | Target Device |
| --------------- | ------------------------|
6 changes: 3 additions & 3 deletions docs/build/inferencing.md
@@ -29,7 +29,7 @@ Basic CPU build
cd onnxruntime
```

-* Install [Python 3.x](http://python.org/).
+* Install [Python 3.x](https://python.org/).

* Install [cmake-3.27](https://cmake.org/download/) or higher.

@@ -394,7 +394,7 @@ This option is very fast and allows the package to be built in minutes, but is c

TLDR; Go to https://www.linaro.org/downloads/, get "64-bit Armv8 Cortex-A, little-endian" and "Linux Targeted", not "Bare-Metal Targeted". Extract it to your build machine and add the bin folder to your $PATH env. Then skip this part.

-You can use [GCC](https://gcc.gnu.org/) or [Clang](http://clang.llvm.org/). Both work, but instructions here are based on GCC.
+You can use [GCC](https://gcc.gnu.org/) or [Clang](https://clang.llvm.org/). Both work, but instructions here are based on GCC.

In GCC terms:
* "build" describes the type of system on which GCC is being configured and compiled.
@@ -412,7 +412,7 @@ This option is very fast and allows the package to be built in minutes, but is c
COLLECT_GCC=/usr/bin/aarch64-linux-gnu-gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/aarch64-linux-gnu/9/lto-wrapper
Target: aarch64-linux-gnu
-Configured with: ../gcc-9.2.1-20190827/configure --bindir=/usr/bin --build=x86_64-redhat-linux-gnu --datadir=/usr/share --disable-decimal-float --disable-dependency-tracking --disable-gold --disable-libgcj --disable-libgomp --disable-libmpx --disable-libquadmath --disable-libssp --disable-libunwind-exceptions --disable-shared --disable-silent-rules --disable-sjlj-exceptions --disable-threads --with-ld=/usr/bin/aarch64-linux-gnu-ld --enable-__cxa_atexit --enable-checking=release --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++ --enable-linker-build-id --enable-lto --enable-nls --enable-obsolete --enable-plugin --enable-targets=all --exec-prefix=/usr --host=x86_64-redhat-linux-gnu --includedir=/usr/include --infodir=/usr/share/info --libexecdir=/usr/libexec --localstatedir=/var --mandir=/usr/share/man --prefix=/usr --program-prefix=aarch64-linux-gnu- --sbindir=/usr/sbin --sharedstatedir=/var/lib --sysconfdir=/etc --target=aarch64-linux-gnu --with-bugurl=http://bugzilla.redhat.com/bugzilla/ --with-gcc-major-version-only --with-isl --with-newlib --with-plugin-ld=/usr/bin/aarch64-linux-gnu-ld --with-sysroot=/usr/aarch64-linux-gnu/sys-root --with-system-libunwind --with-system-zlib --without-headers --enable-gnu-indirect-function --with-linker-hash-style=gnu
+Configured with: ../gcc-9.2.1-20190827/configure --bindir=/usr/bin --build=x86_64-redhat-linux-gnu --datadir=/usr/share --disable-decimal-float --disable-dependency-tracking --disable-gold --disable-libgcj --disable-libgomp --disable-libmpx --disable-libquadmath --disable-libssp --disable-libunwind-exceptions --disable-shared --disable-silent-rules --disable-sjlj-exceptions --disable-threads --with-ld=/usr/bin/aarch64-linux-gnu-ld --enable-__cxa_atexit --enable-checking=release --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++ --enable-linker-build-id --enable-lto --enable-nls --enable-obsolete --enable-plugin --enable-targets=all --exec-prefix=/usr --host=x86_64-redhat-linux-gnu --includedir=/usr/include --infodir=/usr/share/info --libexecdir=/usr/libexec --localstatedir=/var --mandir=/usr/share/man --prefix=/usr --program-prefix=aarch64-linux-gnu- --sbindir=/usr/sbin --sharedstatedir=/var/lib --sysconfdir=/etc --target=aarch64-linux-gnu --with-bugurl=https://bugzilla.redhat.com/bugzilla/ --with-gcc-major-version-only --with-isl --with-newlib --with-plugin-ld=/usr/bin/aarch64-linux-gnu-ld --with-sysroot=/usr/aarch64-linux-gnu/sys-root --with-system-libunwind --with-system-zlib --without-headers --enable-gnu-indirect-function --with-linker-hash-style=gnu
Thread model: single
gcc version 9.2.1 20190827 (Red Hat Cross 9.2.1-3) (GCC)
```
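The URL edits in this file come from the `79fdacc` commit that replaced `http://` links with `https://` across the docs. A hypothetical helper performing the same sweep is sketched below; the PR's edits may well have been made by hand, and a blanket rewrite assumes every linked host actually serves HTTPS, so the resulting diff should still be reviewed:

```python
import pathlib
import re

# Rewrites plain http:// links to https:// in markdown sources.
HTTP_LINK = re.compile(r"http://")

def upgrade_links(root: str = "docs") -> None:
    for md in pathlib.Path(root).rglob("*.md"):
        text = md.read_text(encoding="utf-8")
        updated = HTTP_LINK.sub("https://", text)
        if updated != text:
            md.write_text(updated, encoding="utf-8")
            print(f"updated {md}")

if __name__ == "__main__":
    upgrade_links()
```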
10 changes: 5 additions & 5 deletions docs/execution-providers/OpenVINO-ExecutionProvider.md
@@ -90,7 +90,7 @@ To use csharp api for openvino execution provider create a custom nuget package.

### OpenCL queue throttling for GPU devices

-Enables [OpenCL queue throttling](https://docs.openvino.ai/latest/groupov_runtime_ocl_gpu_prop_cpp_api.html?highlight=throttling) for GPU devices. Reduces CPU utilization when using GPUs with OpenVINO EP.
+Enables [OpenCL queue throttling](https://docs.openvino.ai/2024/api/c_cpp_api/group__ov__runtime__ocl__gpu__prop__cpp__api.html) for GPU devices. Reduces CPU utilization when using GPUs with OpenVINO EP.

### Model caching

@@ -118,7 +118,7 @@ Int8 models are supported on CPU, GPU and NPU.

OpenVINO™ Execution Provider now supports ONNX models that store weights in external files. It is especially useful for models larger than 2GB because of protobuf limitations.

-See the [OpenVINO™ ONNX Support documentation](https://docs.openvino.ai/latest/classov_1_1Core.html).
+See the [OpenVINO™ ONNX Support documentation](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-onnx.html).

Converting and Saving an ONNX Model to External Data:
Use the ONNX API's.[documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md#converting-and-saving-an-onnx-model-to-external-data).
@@ -177,7 +177,7 @@ Use `AUTO:<device 1><device 2>..` as the device name to delegate selection of an
From the application point of view, this is just another device that handles all accelerators in full system.

For more information on Auto-Device plugin of OpenVINO™, please refer to the
-[Intel OpenVINO™ Auto Device Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_AUTO.html).
+[Intel OpenVINO™ Auto Device Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#automatic-device-selection).

### Heterogeneous Execution for OpenVINO™ Execution Provider

@@ -186,7 +186,7 @@ The heterogeneous execution enables computing for inference on one network on se
* To utilize accelerator's power and calculate the heaviest parts of the network on the accelerator and execute unsupported layers on fallback devices like the CPU to utilize all available hardware more efficiently during one inference.

For more information on Heterogeneous plugin of OpenVINO™, please refer to the
-[Intel OpenVINO™ Heterogeneous Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Hetero_execution.html).
+[Intel OpenVINO™ Heterogeneous Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html).

### Multi-Device Execution for OpenVINO EP

@@ -196,7 +196,7 @@ Multi-Device plugin automatically assigns inference requests to available comput
* More consistent performance, since the devices can now share the inference burden (so that if one device is becoming too busy, another device can take more of the load)

For more information on Multi-Device plugin of OpenVINO™, please refer to the
-[Intel OpenVINO™ Multi Device Plugin](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Running_on_multiple_devices.html).
+[Intel OpenVINO™ Multi Device Plugin](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#multi-stream-execution).

### Export OpenVINO Compiled Blob
Export the OpenVINO compiled blob as an ONNX model. Using this ONNX model for subsequent inferences avoids model recompilation and could have a positive impact on Session creation time. This feature is currently enabled for fully supported models only. It complies with the ORT session config keys
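The AUTO, HETERO, and Multi-Device modes documented in this file are selected at session-creation time through the OpenVINO EP's `device_type` provider option. A minimal sketch, assuming an ONNX Runtime build with the OpenVINO EP enabled (`--use_openvino`) and a placeholder `model.onnx`:

```python
import onnxruntime as ort

# Delegate device selection to OpenVINO: HETERO splits one graph across devices,
# AUTO picks a device automatically, MULTI fans requests out across several.
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "HETERO:GPU,CPU"}],  # e.g. "AUTO:GPU,CPU" or "GPU"
)
```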
2 changes: 1 addition & 1 deletion docs/extensions/add-op.md
@@ -70,7 +70,7 @@ the custom operator kernel C++ code example can be found [operators](https://git
* the third libraries API docs integrated in ONNXRuntime Extensions the can be used in C++ code
- OpenCV API docs https://docs.opencv.org/4.x/
- Google SentencePiece Library docs https://github.com/google/sentencepiece/blob/master/doc/api.md
-- dlib(matrix and ML library) C++ API docs http://dlib.net/algorithms.html
+- dlib(matrix and ML library) C++ API docs https://dlib.net/algorithms.html
- BlingFire Library https://github.com/microsoft/BlingFire
- Google RE2 Library https://github.com/google/re2/wiki/CplusplusAPI
- JSON library https://json.nlohmann.me/api/basic_json/
8 changes: 4 additions & 4 deletions docs/genai/tutorials/finetune.md
@@ -65,7 +65,7 @@ Olive generates models and adapters in ONNX format. These models and adapters ca

Note: this operations requires a system with an NVIDIA GPU, with CUDA installed

-Use the `olive fine-tune` command: https://microsoft.github.io/Olive/features/cli.html#finetune
+Use the `olive fine-tune` command: https://microsoft.github.io/Olive/how-to/cli/cli-finetune.html

Here is an example usage of the command:

@@ -75,12 +75,12 @@ Olive generates models and adapters in ONNX format. These models and adapters ca

2. Optionally, quantize your model

-Use the `olive quantize` command: https://microsoft.github.io/Olive/features/cli.html#quantize
+Use the `olive quantize` command: https://microsoft.github.io/Olive/how-to/cli/cli-quantize.html


3. Generate the ONNX model and adapter using the quantized model

-Use the `olive auto-opt` command for this step: https://microsoft.github.io/Olive/features/cli.html#auto-opt
+Use the `olive auto-opt` command for this step: https://microsoft.github.io/Olive/how-to/cli/cli-auto-opt.html

The `--adapter path` can either be a HuggingFace adapter reference, or a path to the adapter you fine-tuned above.

Expand Down Expand Up @@ -162,4 +162,4 @@ python app.py -m <model folder> -a <.onnx_adapter files> -t <prompt template> -s
## References

* [Python API docs](../api/python.md#adapter-class)
-* [Olive CLI docs](https://microsoft.github.io/Olive/features/cli.html)
+* [Olive CLI docs](https://microsoft.github.io/Olive/how-to/index.html#working-with-the-cli)
2 changes: 1 addition & 1 deletion docs/get-started/with-c.md
@@ -61,7 +61,7 @@ is as follows
* Call ```Run()``` as usual
* **Share allocator(s) between sessions:**
* *Description*: This feature allows multiple sessions in the same process to use the same allocator(s).
-* *Scenario*: You've several sessions in the same process and see high memory usage. One of the reasons for this is as follows. Each session creates its own CPU allocator which is arena based by default. [ORT implements](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/framework/bfc_arena.h) a simplified version of an arena allocator that is based on [Doug Lea's best-first with coalescing algorithm](http://gee.cs.oswego.edu/dl/html/malloc.html). Each allocator lives in its own session. It allocates a large region of memory during init time and thereafter it chunks, coalesces and extends this initial region as per allocation/deallocation demands. Overtime the arena ends up with unused chunks of memory per session. Moreover, the memory allocated by the arena is never returned to the system; once allocated it always remains allocated. All these factors add up when using multiple sessions (each with its own arena) thereby increasing the overall memory consumption of the process. Hence it becomes important to share the arena allocator between sessions.
+* *Scenario*: You've several sessions in the same process and see high memory usage. One of the reasons for this is as follows. Each session creates its own CPU allocator which is arena based by default. [ORT implements](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/framework/bfc_arena.h) a simplified version of an arena allocator that is based on [Doug Lea's best-first with coalescing algorithm](https://gee.cs.oswego.edu/dl/html/malloc.html). Each allocator lives in its own session. It allocates a large region of memory during init time and thereafter it chunks, coalesces and extends this initial region as per allocation/deallocation demands. Overtime the arena ends up with unused chunks of memory per session. Moreover, the memory allocated by the arena is never returned to the system; once allocated it always remains allocated. All these factors add up when using multiple sessions (each with its own arena) thereby increasing the overall memory consumption of the process. Hence it becomes important to share the arena allocator between sessions.
* *Usage*:
* Create and register a shared allocator with the env using the ```CreateAndRegisterAllocator``` API. This allocator is then reused by all sessions that use the same env instance unless a session
chooses to override this by setting ```session_state.use_env_allocators``` to "0".
2 changes: 1 addition & 1 deletion docs/get-started/with-python.md
@@ -281,4 +281,4 @@ For Python compiler version notes, see [this page](https://github.com/microsoft/
- [Python Tutorials](../tutorials/api-basics)
* [TensorFlow with ONNX Runtime](../tutorials/tf-get-started.md)
* [PyTorch with ONNX Runtime](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html)
-* [scikit-learn with ONNX Runtime](http://onnx.ai/sklearn-onnx/index_tutorial.html)
+* [scikit-learn with ONNX Runtime](https://onnx.ai/sklearn-onnx/index_tutorial.html)