diff --git a/README.md b/README.md
index 9560d21..4ca366e 100644
--- a/README.md
+++ b/README.md
@@ -43,9 +43,7 @@ A collection of native libraries with CPU support for a several common OS/archit
 
 #### onnxruntime-gpu
 
-[![maven](https://img.shields.io/maven-central/v/com.jyuzawa/onnxruntime-gpu)](https://search.maven.org/artifact/com.jyuzawa/onnxruntime-gpu)
-
-A collection of native libraries with GPU support for a several common OS/architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like `osx-x86_64` to provide specific support.
+See https://github.com/yuzawa-san/onnxruntime-java/issues/258
 
 ### In your library
 
@@ -58,7 +56,7 @@ This puts the burden of providing a native library on your end user.
 
 There is an example application in the `onnxruntime-sample-application` directory.
 The library should use the `onnxruntime` as a implementation dependency.
 The application needs to have acccess to the native library.
-You have the option providing it via a runtime dependency using either a classifier variant from `onnxruntime-cpu` or `onnxruntime-gpu`
+You have the option of providing it via a runtime dependency using a classifier variant from `onnxruntime-cpu`.
 Otherwise, the Java library path will be used to load the native library.
 
@@ -74,7 +72,6 @@ Since this uses a native library, this will require the runtime to have the `--e
 
 ### Execution Providers
 Only those which are exposed in the C API are supported.
-The `onnxruntime-gpu` artifact supports CUDA and TensorRT, since those are built off of the GPU artifacts from the upstream project.
 If you wish to use another execution provider which is present in the C API, but not in any of the artifacts from the upstream project, you can choose to bring your own onnxruntime shared library to link against.
 
 ## Versioning
 
@@ -86,4 +83,4 @@ Upstream major version changes will typically be major version changes here.
 
 Minor version will be bumped for smaller, but compatible changes.
 Upstream minor version changes will typically be minor version changes here.
-The `onnxruntime-cpu` and `onnxruntime-gpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
+The `onnxruntime-cpu` artifacts are versioned to match the upstream versions and depend on a minimum compatible `onnxruntime` version.
diff --git a/build.gradle b/build.gradle
index 6c6d8b1..6994fb8 100644
--- a/build.gradle
+++ b/build.gradle
@@ -282,6 +282,7 @@ publishing {
             artifact tasks.named("osArchJar${it}")
         }
     }
+    /*
     onnxruntimeGpu(MavenPublication) {
         version = ORT_JAR_VERSION
         artifactId = "${rootProject.name}-gpu"
@@ -293,6 +294,7 @@ publishing {
             artifact tasks.named("osArchJar${it}")
         }
     }
+    */
     onnxruntime(MavenPublication) {
         from components.java
         pom {
diff --git a/onnxruntime-sample-application/build.gradle b/onnxruntime-sample-application/build.gradle
index d7c3b7c..08b2594 100644
--- a/onnxruntime-sample-application/build.gradle
+++ b/onnxruntime-sample-application/build.gradle
@@ -9,8 +9,6 @@ dependencies {
     // For the application to work, you will need to provide the native libraries.
 
     // Optionally, provide the CPU libraries (for various OS/Architecture combinations)
     // runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"
-    // Optionally, provide the GPU libraries (for various OS/Architecture combinations)
-    // runtimeOnly "com.jyuzawa:onnxruntime-gpu:1.X.0:osx-x86_64"
     // Alternatively, do nothing and the Java library path will be used
 }
diff --git a/src/main/java/com/jyuzawa/onnxruntime/OnnxRuntimeImpl.java b/src/main/java/com/jyuzawa/onnxruntime/OnnxRuntimeImpl.java
index 9565129..2241039 100644
--- a/src/main/java/com/jyuzawa/onnxruntime/OnnxRuntimeImpl.java
+++ b/src/main/java/com/jyuzawa/onnxruntime/OnnxRuntimeImpl.java
@@ -9,7 +9,6 @@
 import com.jyuzawa.onnxruntime_extern.OrtApiBase;
 
 import java.lang.System.Logger.Level;
-import java.lang.foreign.Arena;
 import java.lang.foreign.MemorySegment;
 
 // NOTE: this class actually is more like OrtApiBase
@@ -22,7 +21,6 @@ enum OnnxRuntimeImpl implements OnnxRuntime {
 
     private OnnxRuntimeImpl() {
         Loader.load();
-        Arena scope = Arena.global();
         MemorySegment segment = OrtGetApiBase();
         this.ortApiVersion = ORT_API_VERSION();
         MemorySegment apiAddress = OrtApiBase.GetApiFunction(segment).apply(ortApiVersion);
diff --git a/src/main/java/module-info.java b/src/main/java/module-info.java
index ecd8122..e2544b9 100644
--- a/src/main/java/module-info.java
+++ b/src/main/java/module-info.java
@@ -11,12 +11,9 @@
  *   • The {@code onnxruntime-cpu} artifact provides support for several common operating systems / CPU architecture
  * combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers like
  * {@code osx-x86_64} to provide specific support.
- *   • The {@code onnxruntime-gpu} artifact provides GPU (CUDA) support for several common operating systems / CPU
- * architecture combinations. For use as an optional runtime dependency. Include one of the OS/Architecture classifiers
- * like {@code osx-x86_64} to provide specific support.
  *   • The {@code onnxruntime} artifact contains only bindings and no libraries. This means the native library will need
  * to be provided. Use this artifact as a compile dependency if you want to allow your project's users to bring use
- * {@code onnxruntime-cpu}, {@code onnxruntime-gpu}, or their own native library as dependencies provided at runtime.
+ * {@code onnxruntime-cpu} or their own native library as dependencies provided at runtime.
  *
  *
  * @since 1.0.0
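Taken together, the options that remain documented after this change can be sketched as a Gradle dependency block. This is a minimal sketch assuming the `com.jyuzawa` coordinates shown in the sample application's build file; the `1.X.0` version placeholder is kept verbatim from the diff and must be replaced with a real release version.

```groovy
dependencies {
    // Compile against the bindings-only onnxruntime artifact
    // (it contains no native libraries).
    implementation "com.jyuzawa:onnxruntime:1.X.0"

    // Option 1: provide the native CPU library for a specific
    // OS/architecture via a classifier variant of onnxruntime-cpu.
    runtimeOnly "com.jyuzawa:onnxruntime-cpu:1.X.0:osx-x86_64"

    // Option 2: omit the runtimeOnly line entirely and supply your own
    // onnxruntime shared library on the Java library path, e.g.
    //   java -Djava.library.path=/path/to/native/libs -jar app.jar
}
```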