
[Windows] Support CPU shared memory (Client/Frontend) #551

Open
fpetrini15 wants to merge 10 commits into main from fpetrini-win-cpu-shm

Conversation

fpetrini15 (Contributor) commented Mar 27, 2024

Goal: Support CPU shared memory between the server and client for Windows

Server changes: triton-inference-server/server#7048
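
For context, here is a minimal sketch of creating a named CPU shared-memory region on each platform. The function name create_cpu_shm, the out-parameters, and the region naming are illustrative assumptions, not the PR's actual API. Note that the natural handle type differs between platforms (a HANDLE on Windows, an int file descriptor on POSIX), which is exactly the compatibility question raised in the review thread below.

#include <stddef.h>

#ifdef _WIN32
#include <windows.h>

/* Windows: create a paging-file-backed mapping, then map a view of it. */
void* create_cpu_shm(const char* name, size_t byte_size, HANDLE* out_handle) {
  HANDLE h = CreateFileMappingA(
      INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, (DWORD)byte_size, name);
  if (h == NULL) return NULL;
  void* addr = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, byte_size);
  if (addr == NULL) {
    CloseHandle(h);
    return NULL;
  }
  *out_handle = h;
  return addr;
}
#else
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* POSIX equivalent: shm_open + ftruncate + mmap. */
void* create_cpu_shm(const char* name, size_t byte_size, int* out_fd) {
  int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
  if (fd == -1) return NULL;
  if (ftruncate(fd, (off_t)byte_size) == -1) {
    close(fd);
    return NULL;
  }
  void* addr =
      mmap(NULL, byte_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (addr == MAP_FAILED) {
    close(fd);
    return NULL;
  }
  *out_fd = fd;
  return addr;
}
#endif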

src/python/CMakeLists.txt (outdated, resolved)
#ifdef __cplusplus
extern "C" {
#endif

#ifdef _WIN32
Contributor:

Using a platform-specific sub-struct holding either a HANDLE (Windows) or an int file descriptor (POSIX) would clean up this logic as well: the platform-specific logic would be encapsulated within the sub-struct definition. However, we must take care to maintain client-side backwards compatibility.

fpetrini15 (Contributor, Author) replied:

This is now handled opaquely through a void pointer, as discussed; the pointer is used to initialize the appropriate platform type when the sub-struct is created. None of the client APIs needed modification in our L0_shared_memory test, which gives me confidence that this change does not break backwards compatibility. Please let me know if you have other concerns.
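
A minimal sketch of the opaque void-pointer approach described above; the struct and function names (SharedMemoryHandle, shm_handle_init, shm_handle_get) are hypothetical, not the actual tritonclient definitions:

#include <stdint.h>

#ifdef _WIN32
#include <windows.h>
#endif

/* Public layout is a single void*, identical on every platform, so
 * existing clients keep binary compatibility. */
typedef struct SharedMemoryHandle {
  void* platform_handle; /* HANDLE on Windows, int fd (by value) on POSIX */
} SharedMemoryHandle;

#ifdef _WIN32
static void shm_handle_init(SharedMemoryHandle* h, HANDLE win_handle) {
  /* HANDLE is already a pointer-sized type on Windows. */
  h->platform_handle = (void*)win_handle;
}
static HANDLE shm_handle_get(const SharedMemoryHandle* h) {
  return (HANDLE)h->platform_handle;
}
#else
static void shm_handle_init(SharedMemoryHandle* h, int fd) {
  /* Store the descriptor by value inside the pointer-sized field. */
  h->platform_handle = (void*)(intptr_t)fd;
}
static int shm_handle_get(const SharedMemoryHandle* h) {
  return (int)(intptr_t)h->platform_handle;
}
#endif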

src/python/library/tritonclient/utils/CMakeLists.txt (outdated, resolved)
platform_package_data += ["libcshm.so"]
elif PLATFORM_FLAG == "win_amd64":
platform_package_data += ["cshm.dll"]
Contributor:

Should we add an else block catching unsupported platform flags?

fpetrini15 (Contributor, Author) replied Apr 2, 2024:

The platform flag is currently passed in from build_wheel.py, so, from our perspective, all possible platforms are handled. If the platform string does not contain the substring "linux" and is not "win_amd64", it will be "any"; in that case, we include neither shared library object.

Are you suggesting raising an error in the event the setup.py script is called independently? Like so:

if "linux" in PLATFORM_FLAG:
    platform_package_data += ["libcshm.so"]
elif PLATFORM_FLAG == "win_amd64":
    platform_package_data += ["cshm.dll"]
elif PLATFORM_FLAG != "any":
    raise Exception(f"Unsupported platform flag \"{PLATFORM_FLAG}\" specified.") 

src/python/library/build_wheel.py (outdated, resolved)
src/python/library/build_wheel.py (outdated, resolved)
src/python/library/CMakeLists.txt (outdated, resolved)
src/python/library/CMakeLists.txt (outdated, resolved)
src/python/CMakeLists.txt (outdated, resolved)
fpetrini15 force-pushed the fpetrini-win-cpu-shm branch 2 times, most recently from fcee735 to 479dccc on April 6, 2024
Comment on lines +222 to +225
# if FLAGS.include_gpu_libs:
# cpdir(
# "tritonclient/utils/cuda_shared_memory",
# os.path.join(FLAGS.whl_dir, "tritonclient/utils/cuda_shared_memory"),

Check notice (Code scanning / CodeQL): Commented-out code. This comment appears to contain commented-out code.
fpetrini15 force-pushed the fpetrini-win-cpu-shm branch from 479dccc to 43700e2 on April 11, 2024
fpetrini15 force-pushed the fpetrini-win-cpu-shm branch from 43700e2 to f854d4a on April 13, 2024