This repository has been archived by the owner on Jul 22, 2022. It is now read-only.

Unable to install the latest GFE version #7

Open
S4nfs opened this issue Dec 17, 2020 · 17 comments

Comments

@S4nfs

S4nfs commented Dec 17, 2020

The script downloads an obsolete GFE version. I changed the URL inside the /install.requirement.ps1 file, but it still downloads GeForce Experience v3.13.0.85 Beta. How can I solve this?

@S4nfs S4nfs changed the title Unable to install the latest GE version Unable to install the latest GFE version Dec 18, 2020
@acceleration3
Owner

It's supposed to install an "obsolete" version. The newer versions, while more advanced and feature-complete, implement more and better checks to detect and block hardware configurations they're not supposed to run on. I have tried disassembling those versions and patching as many checks as I could, but I still haven't gotten them to work. The version the script installs is the last one before all of those checks were implemented.

@S4nfs
Author

S4nfs commented Dec 24, 2020

Is it possible that this script works with NVIDIA vGPUs? I tried with a Tesla M60 vGPU (no passthrough) installed on my hypervisor 8.1, but the problem occurs when the vGPU tries to get a license to lift the 14 fps cap. vGPUs somehow won't support the GRID drivers you mentioned in the script. The driver is blocking the NVIDIA license manager; otherwise it works without any issues, except that it fails to get the license required to achieve full GPU performance. I also tried my previously installed vGPU drivers (XenServer vGPU driver), but they show "nvstream API failed" while streaming.

One more thing I noticed: the script installs GRID drivers from Microsoft. Why not from NVIDIA?

@acceleration3
Owner

That driver from Microsoft is the only one I found that enables this to work under Windows 10 on Azure with an M60 GPU, so I recommend it for maximum compatibility. As for running this under Citrix Hypervisor, it should work fine, but you will have to install the optimal driver yourself because the script is optimized for cloud-service virtual machines. Remember that you need an audio solution for GameStream to work, and the monitor you open GeForce Experience on must be connected to the NVIDIA GPU.

@S4nfs
Author

S4nfs commented Dec 26, 2020

This is the error on the Tesla M60 vGPU I mentioned earlier. Shield is enabled, but I saw it fail to load the complete script after restarting.
Also, as I mentioned, I can't use the GRID driver on a vGPU.

[Screenshot attached: Screenshot_20201226-210637]

@acceleration3
Owner

Can you run this command in the virtual machine's CMD:
start "" /D "C:\Program Files\NVIDIA Corporation\NvContainer" "C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe"
And paste a screenshot or the output here?

@acceleration3 acceleration3 reopened this Dec 26, 2020
@S4nfs
Author

S4nfs commented Dec 26, 2020

Nope, it just skips through the output very fast and closes that window (another CMD).

@acceleration3
Owner

My bad, the command should be: start /B "" /D "C:\Program Files\NVIDIA Corporation\NvContainer" "C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe"
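For anyone less comfortable with CMD quoting, the same idea (launch with a specific working directory, the `/D` part, and keep the output instead of letting the window close) can be sketched in Python. The NVIDIA paths are the ones quoted in this thread and may differ on your machine; the helper itself is hypothetical, not part of the project's script.

```python
import subprocess

# Paths quoted earlier in this thread; adjust them to your install.
NVCONTAINER_DIR = r"C:\Program Files\NVIDIA Corporation\NvContainer"
NVSTREAMER_EXE = r"C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe"

def run_and_capture(cmd, cwd):
    """Run a command with a given working directory (what /D does for
    `start`) and return its combined output, so it doesn't vanish when
    the console window closes."""
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    return proc.stdout + proc.stderr

# Example: log_text = run_and_capture([NVSTREAMER_EXE], NVCONTAINER_DIR)
```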

@S4nfs
Author

S4nfs commented Dec 26, 2020

C:\Users\Administrator>start /B "" /D "C:\Program Files\NVIDIA Corporation\NvContainer" "C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe"

C:\Users\Administrator>[libprotobuf WARNING ..\src\google\protobuf\descriptor_database.cc:58] File already exists in database: BusMessage.proto
[libprotobuf WARNING ..\src\google\protobuf\descriptor_database.cc:115] Symbol name "BusMessage" conflicts with the existing symbol "BusMessage".
[libprotobuf WARNING ..\src\google\protobuf\message.cc:288] File is already registered: BusMessage.proto
You can run C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe -help to see all the help options.
#1(I)[2020-12-26 16:52:54,672]=22:22:54={000022D4} Custom Ctrl+C handler successfully set.
#2(I)[2020-12-26 16:52:54,672]=22:22:54={000022D4} PID 8900
#3(I)[2020-12-26 16:52:54,678]=22:22:54={000022D4} Streaming Process ID: 0
#4(I)[2020-12-26 16:52:54,685]=22:22:54={000022D4} P4 Changelist: 23612337
#5(I)[2020-12-26 16:52:54,693]=22:22:54={000022D4} Enumerating network adapters on this system
#6(I)[2020-12-26 16:52:54,708]=22:22:54={000022D4} --- ...: ... / ...
#7(I)[2020-12-26 16:52:54,719]=22:22:54={000022D4} Building head list
#8(I)[2020-12-26 16:52:54,745]=22:22:54={000022D4} Adapter vendor id: 0x10de, device name: \\.\DISPLAY3
#9(I)[2020-12-26 16:52:54,756]=22:22:54={000022D4} Creating a Standalone SCI Thread
#0(I)[2020-12-26 16:52:54,764]=22:22:54={000022D4} Received ServerListenForIncomingStreams
#1(I)[2020-12-26 16:52:54,773]=22:22:54={000022D4} Using: OpenSSL 1.0.2n 7 Dec 2017
#2(I)[2020-12-26 16:52:54,933]=22:22:54={000022D4} RND is initialized
#3(I)[2020-12-26 16:52:54,941]=22:22:54={000024C0} Starting initialization of the main server thread
#4(I)[2020-12-26 16:52:54,951]=22:22:54={000024C0} RTSP Server: TCP instance
#5(E)[2020-12-26 16:52:54,958]=22:22:54={000024C0} Failed to start RTSP server on port 48010
#6(I)[2020-12-26 16:52:54,967]=22:22:54={000024C0} Failed to start RTSP tcp server
#7(I)[2020-12-26 16:52:54,975]=22:22:54={000024C0} Allocated 1 ConnectionInfo entries
#8(I)[2020-12-26 16:52:54,983]=22:22:54={000024C0} Completed initialization of the main server thread with state 2
#9(I)[2020-12-26 16:52:54,994]=22:22:54={000022D4} Not sending a message.
#0(I)[2020-12-26 16:52:55,001]=22:22:55={000022D4} SCI Thread creation done
#1(I)[2020-12-26 16:52:55,009]=22:22:55={000022D4} Computing SPS/PPS headers.
#2(I)[2020-12-26 16:52:55,018]=22:22:55={000022D4} Loaded library from '...'
#3(I)[2020-12-26 16:52:55,135]=22:22:55={000022D4} Initialized context for adapter 0: 1366 x 768 @ 60.0 Hz
#4(I)[2020-12-26 16:52:55,144]=22:22:55={000022D4} Driver version is 443.66, branch is r443_62.
#5(I)[2020-12-26 16:52:55,154]=22:22:55={000022D4} System is NOT co-proc.
#6(E)[2020-12-26 16:52:55,197]=22:22:55={000022D4} Failed to initialize CUDA driver API
#7(E)[2020-12-26 16:52:55,205]=22:22:55={000022D4} Failed to initialize adapter context for RTSP SPS/PPS header generation.
#8(I)[2020-12-26 16:52:55,260]=22:22:55={000022D4} Deinitialized context for adapter 0
#9(E)[2020-12-26 16:52:55,269]=22:22:55={000022D4} Error computing SPS/PPS headers.
#0(E)[2020-12-26 16:52:55,277]=22:22:55={000022D4} NvEncodeAPI Not Supported
#1(E)[2020-12-26 16:52:55,284]=22:22:55={000022D4} SCI is not enabled, NvEnc initialization failed
#2(I)[2020-12-26 16:52:55,294]=22:22:55={000022D4} Terminated the SCI thread
#3(I)[2020-12-26 16:52:55,301]=22:22:55={000022D4} Waiting on RTSP handshake to finish
#4(I)[2020-12-26 16:52:55,309]=22:22:55={000022D4} Starting shutdown of the main server thread
#5(I)[2020-12-26 16:52:55,320]=22:22:55={000022D4} Network Event Subscribe: 00007FF63FB390A0 - 00007FF63FCF8990
#6(I)[2020-12-26 16:52:55,331]=22:22:55={000022D4} NATT Initialize: STUN servers count 1. Retransmission period 500, count 5
#7(I)[2020-12-26 16:52:55,346]=22:22:55={000022D4} Network Host Lookup: blocking call
#8(I)[2020-12-26 16:52:55,360]=22:22:55={000022D4} NATT Initialize: use STUN server s1.stun.gamestream.nvidia.com:19308
#9(I)[2020-12-26 16:52:55,373]=22:22:55={000022D4} Network Event Unsubscribe: 00007FF63FCF8990
#0(I)[2020-12-26 16:52:55,384]=22:22:55={000022D4} Starting un-initialization of the main server thread
#1(I)[2020-12-26 16:52:55,395]=22:22:55={000022D4} Completed un-initialization of the main server thread
#2(I)[2020-12-26 16:52:55,407]=22:22:55={000022D4} Completed shutdown of the main server thread
#3(I)[2020-12-26 16:52:55,417]=22:22:55={000022D4} Terminated the main server thread

@acceleration3
Owner

According to this line:
#0(E)[2020-12-26 16:52:55,277]=22:22:55={000022D4} NvEncodeAPI Not Supported
The drivers you are using right now don't support NvEnc. I updated the script a while ago to use jamesstringerparsec's GPU update script if the one from Microsoft doesn't work. Have you tried checking whether that installs the correct drivers?
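A quick way to triage these nvstreamer logs is to scan them for the failure signatures that have come up in this thread. A minimal sketch, not part of the project; the matched strings are copied verbatim from the logs above, and the cause descriptions are my reading of this discussion:

```python
# Failure signatures seen in the nvstreamer logs in this thread,
# mapped to likely causes.
FAILURE_SIGNATURES = {
    "NvEncodeAPI Not Supported":
        "installed driver lacks NvEnc support; try a GRID/NvEnc-capable driver",
    "Failed to initialize CUDA driver API":
        "CUDA could not initialize; this driver/GPU combination can't encode",
    "Failed to start RTSP server on port 48010":
        "port 48010 is already in use; another nvstreamer may be running",
}

def diagnose(log_text):
    """Return the hints whose signature appears in an nvstreamer log."""
    return [hint for sig, hint in FAILURE_SIGNATURES.items() if sig in log_text]
```

Feeding it the M60 log above would flag both the NvEnc and CUDA lines; a clean log yields an empty list.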

@S4nfs
Author

S4nfs commented Dec 26, 2020

I tried the ParsecGPUcloud updater script, but not with the same GPU. I'll try that later.

@acceleration3
Owner

Alright, I will leave the issue open for now. Please report back when you've tried it.

@S4nfs
Author

S4nfs commented Jan 9, 2021

Yes, I tried the James Parsec script, but it seems it only works with Tesla GPUs running without any hypervisor. They are also more likely to brick your VM on some cloud platforms. Cloudgamestream runs fine on all hypervisor GPUs (I didn't try it with an A100).

Later, I forked the cloudgamestream project, but it isn't working somehow. How can I install a custom or the latest GFE version with the same script? I want to know what's blocking the NVIDIA server.

@albertostefanelli

albertostefanelli commented Jan 20, 2021

Same here. Here is the PowerShell message:

Step 2 - Patching GeForce Experience
Enabling NVIDIA FrameBufferCopy...
Unable to check the current status
System.Management.Automation.RuntimeException: Failed to enable NvFBC. (Error: -1)
Transcript stopped, output file is C:\Users\alberto_stefanelli_m\Downloads\cloudgamestream-master\Log.txt
Press Enter to continue...:

The machine is a GCP n1-standard-8 (8 vCPUs, 30 GB memory) with Tesla T4 prepared with the parsec script.

EDIT: here is the log:

C:\Users\alberto_stefanelli_m>[libprotobuf WARNING ..\src\google\protobuf\descriptor_database.cc:58] File already exists in database: BusMessage.proto
[libprotobuf WARNING ..\src\google\protobuf\descriptor_database.cc:115] Symbol name "BusMessage" conflicts with the existing symbol "BusMessage".
[libprotobuf WARNING ..\src\google\protobuf\message.cc:288] File is already registered: BusMessage.proto
You can run C:\Program Files\NVIDIA Corporation\NvStreamSrv\nvstreamer.exe -help to see all the help options.
#1(I)[2021-01-20 19:04:25,175]=19:04:25={000020F8} Custom Ctrl+C handler successfully set.
#2(I)[2021-01-20 19:04:25,175]=19:04:25={000020F8} PID 1676
#3(I)[2021-01-20 19:04:25,179]=19:04:25={000020F8} Streaming Process ID: 0
#4(I)[2021-01-20 19:04:25,185]=19:04:25={000020F8} P4 Changelist: 23612337
#5(I)[2021-01-20 19:04:25,190]=19:04:25={000020F8} Enumerating network adapters on this system
#6(I)[2021-01-20 19:04:25,205]=19:04:25={000020F8} --- ...: ... / ...
#7(I)[2021-01-20 19:04:25,534]=19:04:25={000020F8} Building head list
#8(I)[2021-01-20 19:04:25,639]=19:04:25={000020F8} Adapter vendor id: 0x10de, device name: \\.\DISPLAY2
#9(I)[2021-01-20 19:04:25,647]=19:04:25={000020F8} Creating a Standalone SCI Thread
#0(I)[2021-01-20 19:04:25,653]=19:04:25={000020F8} Received ServerListenForIncomingStreams
#1(I)[2021-01-20 19:04:25,660]=19:04:25={000020F8} Using: OpenSSL 1.0.2n  7 Dec 2017
#2(I)[2021-01-20 19:04:25,837]=19:04:25={000020F8} RND is initialized
#3(I)[2021-01-20 19:04:25,842]=19:04:25={00000058} Starting initialization of the main server thread
#4(I)[2021-01-20 19:04:25,879]=19:04:25={00000058} RTSP Server: TCP instance
#5(I)[2021-01-20 19:04:25,885]=19:04:25={00000058} RTSP server successfully started on port 48010 (thread count 4)
#6(I)[2021-01-20 19:04:25,893]=19:04:25={00000058} Allocated 1 ConnectionInfo entries
#7(I)[2021-01-20 19:04:25,900]=19:04:25={00000058} Completed initialization of the main server thread with state 2
#8(I)[2021-01-20 19:04:25,907]=19:04:25={000020F8} Not sending a message.
#9(I)[2021-01-20 19:04:25,912]=19:04:25={000020F8} SCI Thread creation done
#0(I)[2021-01-20 19:04:25,919]=19:04:25={000020F8} Computing SPS/PPS headers.
#1(I)[2021-01-20 19:04:26,079]=19:04:26={000020F8} Loaded library from '...'
#2(I)[2021-01-20 19:04:26,146]=19:04:26={000020F8} Initialized context for adapter 0: 1920 x 1080 @ 60.0 Hz
#3(I)[2021-01-20 19:04:26,154]=19:04:26={000020F8} Driver version is 432.44, branch is r430_00.
#4(I)[2021-01-20 19:04:26,161]=19:04:26={000020F8} System is NOT co-proc.
#5(I)[2021-01-20 19:04:27,450]=19:04:27={000020F8} Initialized CUDA for device 'Tesla T4' (SM 7.5) in compute mode 'CU_COMPUTEMODE_DEFAULT'.
#6(I)[2021-01-20 19:04:27,469]=19:04:27={000020F8} GPU's = {
#7(I)[2021-01-20 19:04:27,473]=19:04:27={000020F8}      {
#8(I)[2021-01-20 19:04:27,477]=19:04:27={000020F8}      FriendlyName=Tesla T4
#9(I)[2021-01-20 19:04:27,484]=19:04:27={000020F8}      }
#0(I)[2021-01-20 19:04:27,488]=19:04:27={000020F8} }
#1(I)[2021-01-20 19:04:27,491]=19:04:27={000020F8} Nvidia GPU count: 1, Non Nvidia GPU count: 0
#2(I)[2021-01-20 19:04:27,498]=19:04:27={000020F8} Found a match at [19]
#3(W)[2021-01-20 19:04:27,504]=19:04:27={000020F8} GPU does not support local streaming
#4(I)[2021-01-20 19:04:27,510]=19:04:27={000020F8} Initializing NvEnc7VideoEncoder.
#5(I)[2021-01-20 19:04:27,518]=19:04:27={000020F8} Using NvEnc header version 0.7.
#6(I)[2021-01-20 19:04:27,604]=19:04:27={000020F8} Found a match at [19]
#7(W)[2021-01-20 19:04:27,609]=19:04:27={000020F8} GPU does not support local streaming
#8(I)[2021-01-20 19:04:27,617]=19:04:27={000020F8} Found a match at [19]
#9(W)[2021-01-20 19:04:27,621]=19:04:27={000020F8} GPU does not support local streaming
#0(I)[2021-01-20 19:04:27,626]=19:04:27={000020F8} Generating SPS/PPS header for 1280x720@30 FPS in video format H.264.
#1(I)[2021-01-20 19:04:27,947]=19:04:27={000020F8} Encoder maxLumaPixelsPerSec 1297614208
#2(I)[2021-01-20 19:04:27,952]=19:04:27={000020F8} Encoder maxLumaPixelsPerSec 1409277696
#3(I)[2021-01-20 19:04:27,960]=19:04:27={000020F8} SPS/PPS LUT match, idx 0
#4(I)[2021-01-20 19:04:27,967]=19:04:27={000020F8} Found a match at [19]
#5(W)[2021-01-20 19:04:27,971]=19:04:27={000020F8} GPU does not support local streaming
#6(I)[2021-01-20 19:04:27,979]=19:04:27={000020F8} Generating SPS/PPS header for 1280x720@60 FPS in video format H.264.
#7(I)[2021-01-20 19:04:27,987]=19:04:27={000020F8} SPS/PPS LUT match, idx 3
#8(I)[2021-01-20 19:04:27,993]=19:04:27={000020F8} Found a match at [19]
#9(W)[2021-01-20 19:04:27,999]=19:04:27={000020F8} GPU does not support local streaming
#0(I)[2021-01-20 19:04:28,005]=19:04:28={000020F8} Generating SPS/PPS header for 1920x1080@30 FPS in video format H.264.
#1(I)[2021-01-20 19:04:28,014]=19:04:28={000020F8} SPS/PPS LUT match, idx 1
#2(I)[2021-01-20 19:04:28,020]=19:04:28={000020F8} Found a match at [19]
#3(W)[2021-01-20 19:04:28,025]=19:04:28={000020F8} GPU does not support local streaming
#4(I)[2021-01-20 19:04:28,031]=19:04:28={000020F8} Generating SPS/PPS header for 1920x1080@60 FPS in video format H.264.
#5(I)[2021-01-20 19:04:28,040]=19:04:28={000020F8} SPS/PPS LUT match, idx 4
#6(I)[2021-01-20 19:04:28,045]=19:04:28={000020F8} Found a match at [19]
#7(W)[2021-01-20 19:04:28,051]=19:04:28={000020F8} GPU does not support local streaming
#8(I)[2021-01-20 19:04:28,058]=19:04:28={000020F8} Generating SPS/PPS header for 3840x2160@30 FPS in video format H.264.
#9(I)[2021-01-20 19:04:28,067]=19:04:28={000020F8} SPS/PPS LUT match, idx 2
#0(I)[2021-01-20 19:04:28,073]=19:04:28={000020F8} Found a match at [19]
#1(W)[2021-01-20 19:04:28,078]=19:04:28={000020F8} GPU does not support local streaming
#2(I)[2021-01-20 19:04:28,085]=19:04:28={000020F8} Generating SPS/PPS header for 3840x2160@60 FPS in video format H.264.
#3(I)[2021-01-20 19:04:28,093]=19:04:28={000020F8} SPS/PPS LUT match, idx 5
#4(I)[2021-01-20 19:04:28,106]=19:04:28={000020F8} Found a match at [19]
#5(W)[2021-01-20 19:04:28,110]=19:04:28={000020F8} GPU does not support local streaming
#6(I)[2021-01-20 19:04:28,117]=19:04:28={000020F8} Generating SPS/PPS header for 1280x720@30 FPS in video format HEVC.
#7(I)[2021-01-20 19:04:28,124]=19:04:28={000020F8} SPS/PPS LUT match, idx 6
#8(I)[2021-01-20 19:04:28,131]=19:04:28={000020F8} Found a match at [19]
#9(W)[2021-01-20 19:04:28,136]=19:04:28={000020F8} GPU does not support local streaming
#0(I)[2021-01-20 19:04:28,142]=19:04:28={000020F8} Generating SPS/PPS header for 1280x720@60 FPS in video format HEVC.
#1(I)[2021-01-20 19:04:28,151]=19:04:28={000020F8} SPS/PPS LUT match, idx 9
#2(I)[2021-01-20 19:04:28,157]=19:04:28={000020F8} Found a match at [19]
#3(W)[2021-01-20 19:04:28,161]=19:04:28={000020F8} GPU does not support local streaming
#4(I)[2021-01-20 19:04:28,168]=19:04:28={000020F8} Generating SPS/PPS header for 1920x1080@30 FPS in video format HEVC.
#5(I)[2021-01-20 19:04:28,177]=19:04:28={000020F8} SPS/PPS LUT match, idx 7
#6(I)[2021-01-20 19:04:28,182]=19:04:28={000020F8} Found a match at [19]
#7(W)[2021-01-20 19:04:28,187]=19:04:28={000020F8} GPU does not support local streaming
#8(I)[2021-01-20 19:04:28,194]=19:04:28={000020F8} Generating SPS/PPS header for 1920x1080@60 FPS in video format HEVC.
#9(I)[2021-01-20 19:04:28,203]=19:04:28={000020F8} SPS/PPS LUT match, idx 10
#0(I)[2021-01-20 19:04:28,209]=19:04:28={000020F8} Found a match at [19]
#1(W)[2021-01-20 19:04:28,215]=19:04:28={000020F8} GPU does not support local streaming
#2(I)[2021-01-20 19:04:28,221]=19:04:28={000020F8} Generating SPS/PPS header for 3840x2160@30 FPS in video format HEVC.
#3(I)[2021-01-20 19:04:28,233]=19:04:28={000020F8} Encoder supports encoding 10 bit
#4(I)[2021-01-20 19:04:28,239]=19:04:28={000020F8} SPS/PPS LUT match, idx 8
#5(I)[2021-01-20 19:04:28,245]=19:04:28={000020F8} Found a match at [19]
#6(W)[2021-01-20 19:04:28,252]=19:04:28={000020F8} GPU does not support local streaming
#7(I)[2021-01-20 19:04:28,257]=19:04:28={000020F8} Generating SPS/PPS header for 3840x2160@60 FPS in video format HEVC.
#8(I)[2021-01-20 19:04:28,265]=19:04:28={000020F8} SPS/PPS LUT match, idx 11
#9(I)[2021-01-20 19:04:28,315]=19:04:28={000020F8} Deinitialized CUDA context for adapter 0.
#0(I)[2021-01-20 19:04:28,333]=19:04:28={000020F8} Deinitialized context for adapter 0
#1(W)[2021-01-20 19:04:28,340]=19:04:28={000020F8} IsEnableGFNMicSupport setting not found
#2(I)[2021-01-20 19:04:28,347]=19:04:28={000020F8} initializeAudio
#3(I)[2021-01-20 19:04:28,352]=19:04:28={000020F8} Trying to create and open audio source (new API), bIsMicEnabled: 0
#4(I)[2021-01-20 19:04:28,438]=19:04:28={000020F8} Setting channel count to 2
#5(I)[2021-01-20 19:04:28,443]=19:04:28={000020F8} Setting opus channel mapping mode to FALSE
#6(I)[2021-01-20 19:04:28,450]=19:04:28={000020F8} Setting opus channel mappingMode = FALSE
#7(I)[2021-01-20 19:04:28,489]=19:04:28={000020F8} initializing NvVAD m_bRegisterNvVADEndpoint: 0
#8(I)[2021-01-20 19:04:28,495]=19:04:28={000020F8} Surround is supported
#9(I)[2021-01-20 19:04:28,500]=19:04:28={000020F8} DLL version supports surround audio
#0(I)[2021-01-20 19:04:28,512]=19:04:28={000020F8} NvAudCapAudioSource successfully opened
#1(I)[2021-01-20 19:04:28,518]=19:04:28={000020F8} Created and opened audio source
#2(I)[2021-01-20 19:04:28,523]=19:04:28={000020F8} Updating Audio source
#3(I)[2021-01-20 19:04:28,529]=19:04:28={000020F8} Starting the SCI thread
#4(I)[2021-01-20 19:04:28,535]=19:04:28={000020F8} Not sending a message.
#5(I)[2021-01-20 19:04:28,540]=19:04:28={000020F8} Sent event StreamerInitOk:

@acceleration3
Owner

@albertostefanelli The error with NVIDIA FrameBufferCopy not enabling occurs because it only accepts GRID-capable GPUs. You have to use a GRID-capable GPU as well as install the GRID version of the driver. The GPU Update script should find the correct driver.

@S4nfs The issue isn't that NVIDIA blocks anything server-side; it's that more and better hardware checks are present within GFE's files.

@albertostefanelli

@acceleration3 Thanks for your answer. This is with the driver installed by the GPU Update script. I guess it does install a GRID driver. Any idea how I can check this?

Thanks !

@acceleration3
Owner

@albertostefanelli I'm not sure if there's a way to check it. I can't tell you exactly what's wrong with your setup since NvFBCEnable is entirely closed-source and developed by NVIDIA, sorry.
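One rough way to check, as an assumption based on NVIDIA's vGPU documentation rather than anything in this project: GRID/vGPU drivers usually expose licensing fields in `nvidia-smi -q` output, while plain GeForce/Tesla drivers don't. A sketch that scans that output (the marker strings are the assumption here):

```python
import subprocess

# Marker strings that, per NVIDIA's vGPU docs, tend to appear in
# `nvidia-smi -q` output when a GRID/vGPU driver is installed.
# These markers are an assumption, not verified against this project.
GRID_MARKERS = ("vGPU Software Licensed Product", "License Status", "GRID")

def looks_like_grid_driver(smi_output):
    """Heuristic: True if nvidia-smi -q output mentions vGPU licensing."""
    return any(marker in smi_output for marker in GRID_MARKERS)

def query_driver():
    """Run `nvidia-smi -q` and apply the heuristic (needs nvidia-smi on PATH)."""
    out = subprocess.run(["nvidia-smi", "-q"], capture_output=True, text=True)
    return looks_like_grid_driver(out.stdout)
```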

@S4nfs
Author

S4nfs commented Feb 12, 2021

According to this article (https://blog.parsec.app/rtx-cloud-gaming-with-the-new-aws-g4-instances-11d1c60c2d09/), one can use the jamesstringerparsec tool to enable RTX on Tesla GPUs. Is that true? Tesla T4s are RTX-capable, but the drivers it's supposed to install are the same as GRID, and I don't think they are using the RTX 8000 driver.
