Direct GPU acceleration #373
Kasm worked around the issue in their DRI3 implementation, but the workaround is problematic. The basic problem is that their DRI3 implementation creates pixmaps in system memory and maintains a GBM buffer object (in GPU memory) for each, so it has to synchronize the pixels between system memory and GPU memory whenever either the buffer object or the pixmap changes. (NOTE: VirtualGL's implementation of …) I spent 20-30 uncompensated hours trying to improve the implementation but was unable to do so. To the best of my understanding, it would be necessary to store pixmaps in GPU memory in order to implement DRI3 cleanly. That would require storing the whole framebuffer in GPU memory, which virtual X servers such as Xvnc cannot do. Thus, at the moment, I do not think that this solution is appropriate for TurboVNC, since it has significant performance drawbacks relative to VirtualGL. I think that the limited resources of The VirtualGL Project are better spent improving the compatibility of VirtualGL's EGL back end or looking into a TurboVNC Wayland compositor, which could cleanly use GPU memory and potentially pass through GPU acceleration to Xwayland without the need to deal with any of this mess at the X11 level.
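To make the performance drawback concrete, here is a minimal sketch of the kind of copy such an implementation has to perform whenever a buffer object changes, using the standard GBM map/unmap API. The helper name `sync_pixmap_from_bo()` and its parameters are hypothetical, not the actual KasmVNC/TurboVNC code.

```c
/* Hypothetical sketch: copy the contents of a GBM buffer object (GPU
   memory) into a system-memory pixmap buffer.  A DRI3 implementation of
   the kind described above has to perform a copy like this (and the
   reverse, with GBM_BO_TRANSFER_WRITE) whenever either side changes. */
#include <stdint.h>
#include <string.h>
#include <gbm.h>

static int
sync_pixmap_from_bo(struct gbm_bo *bo, void *pixmap_pixels,
                    uint32_t pixmap_stride, uint32_t width, uint32_t height)
{
    uint32_t bo_stride;
    void *map_data = NULL;
    char *src = gbm_bo_map(bo, 0, 0, width, height, GBM_BO_TRANSFER_READ,
                           &bo_stride, &map_data);
    if (!src) return -1;

    /* Copy row by row, since the buffer object and pixmap strides may
       differ. */
    for (uint32_t y = 0; y < height; y++)
        memcpy((char *)pixmap_pixels + y * pixmap_stride,
               src + y * bo_stride, width * 4 /* assuming 32 bpp */);

    gbm_bo_unmap(bo, map_data);
    return 0;
}
```

Having to shuttle every pixel through system memory like this, in both directions, is the source of the performance drawback relative to VirtualGL mentioned above.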
I changed my mind and implemented this, since it provides a solution for using Vulkan with the AMDGPU drivers. (Whereas nVidia's Vulkan implementation does something VirtualGL-like when running in TurboVNC, AMD's implementation doesn't work without the DRI3 extension.) Our implementation of DRI3 is based on KasmVNC's implementation, with only minor changes (mostly cosmetic, although I also used an Xorg linked list rather than a fixed array to track the DRI3 pixmaps).
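For illustration, a minimal sketch of what tracking DRI3 pixmaps with an Xorg linked list (the `xorg_list` type from the xorg-server SDK) might look like. The structure and helper names below are hypothetical, not the actual TurboVNC code.

```c
/* Hypothetical sketch: track DRI3 pixmaps in an Xorg linked list instead
   of a fixed-size array.  Names are illustrative only. */
#include <stdlib.h>
#include <gbm.h>
#include "pixmap.h"   /* PixmapPtr (xorg-server SDK) */
#include "list.h"     /* struct xorg_list and its helpers */

struct dri3_pixmap_entry {
    PixmapPtr pixmap;          /* the X pixmap */
    struct gbm_bo *bo;         /* the corresponding GBM buffer object */
    struct xorg_list entry;    /* linkage into the global list */
};

/* Initialized elsewhere with xorg_list_init(&dri3_pixmaps). */
static struct xorg_list dri3_pixmaps;

static void
track_dri3_pixmap(PixmapPtr pixmap, struct gbm_bo *bo)
{
    struct dri3_pixmap_entry *e = calloc(1, sizeof(*e));
    e->pixmap = pixmap;
    e->bo = bo;
    xorg_list_append(&e->entry, &dri3_pixmaps);
}

static struct dri3_pixmap_entry *
find_dri3_pixmap(PixmapPtr pixmap)
{
    struct dri3_pixmap_entry *e;
    xorg_list_for_each_entry(e, &dri3_pixmaps, entry)
        if (e->pixmap == pixmap) return e;
    return NULL;
}
```

Unlike a fixed array, the list imposes no arbitrary limit on the number of DRI3 pixmaps and makes removal on pixmap destruction a simple `xorg_list_del()`.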
Our DRI3 implementation has been overhauled based on the implementation in TigerVNC 1.14, which improves upon Kasm's implementation somewhat. (Most notably, it synchronizes the DRI3 pixmaps with the corresponding buffer objects in response to X drawing commands rather than on a schedule. I had originally tried to do that as well, but I missed the fact that I needed to add more X Render hooks as well as a hook for the …)
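For context, the kind of X Render hook referred to above follows the standard xorg-server wrap/unwrap pattern, sketched below for the Composite operation. The hook and helper names are hypothetical, and a real implementation hooks several operations, not just Composite.

```c
/* Hypothetical sketch: wrap the X Render Composite operation so that a
   tracked DRI3 pixmap can be synchronized with its buffer object when it
   is drawn to, rather than on a schedule. */
#include "picturestr.h"   /* PictureScreenPtr, CompositeProcPtr (xorg-server SDK) */

static CompositeProcPtr savedComposite;

static void
dri3_hooked_composite(CARD8 op, PicturePtr pSrc, PicturePtr pMask,
                      PicturePtr pDst, INT16 xSrc, INT16 ySrc,
                      INT16 xMask, INT16 yMask, INT16 xDst, INT16 yDst,
                      CARD16 width, CARD16 height)
{
    ScreenPtr pScreen = pDst->pDrawable->pScreen;
    PictureScreenPtr ps = GetPictureScreen(pScreen);

    /* Unwrap, call the original Composite, then re-wrap. */
    ps->Composite = savedComposite;
    (*ps->Composite)(op, pSrc, pMask, pDst, xSrc, ySrc, xMask, yMask,
                     xDst, yDst, width, height);
    savedComposite = ps->Composite;
    ps->Composite = dri3_hooked_composite;

    /* If pDst is a tracked DRI3 pixmap, push the freshly drawn pixels to
       its buffer object here (illustrative helper, not shown):
       sync_bo_from_pixmap(pDst->pDrawable); */
}

static void
dri3_hook_render(ScreenPtr pScreen)
{
    PictureScreenPtr ps = GetPictureScreenIfSet(pScreen);
    if (!ps) return;
    savedComposite = ps->Composite;
    ps->Composite = dri3_hooked_composite;
}
```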
Real kudos for this implementation. I was able to run a Steam game in TurboVNC with Vulkan acceleration on an AMD GPU. Interestingly, the Steam interface itself is just black when running with …
There may be a lingering issue with the implementation, or maybe it is a window manager issue. (Running Steam directly with …)
I wanted to create this issue to document my findings vis-a-vis adding GPU acceleration directly to TurboVNC, thus eliminating the need for VirtualGL. kasmtech/KasmVNC@d049821 implements DRI3 in KasmVNC, which ostensibly adds GPU acceleration when using open source GPU drivers. It was straightforward to port that code into TurboVNC (although it was necessary to build with `TVNC_SYSTEMX11=1`.) As of this writing, there are still some major bugs in the feature (kasmtech/KasmVNC#146), so I am not yet prepared to declare the problem solved, but I have high hopes that Kasm will iron out those issues. If they do, then TurboVNC will be able to provide GPU acceleration, without VirtualGL, when using open source GPU drivers.

However, I don't think it will ever be possible to do likewise with nVidia's proprietary drivers, at least not as long as they retain their current architecture. To the best of my understanding (please correct any mistaken assertions I make below):
- nVidia's X.org driver modules provide the `NV-GLX` extension to the X server.
- `NV-GLX` is proprietary, undocumented, and probably doesn't have a stable interface, and nVidia's GLX stack cannot function without it.
- A physical X server (the hw/xfree86 code path in X.org, as opposed to the "virtual" X servers implemented by Xvnc or Xvfb) is necessary in order to load X.org modules.

I certainly don't claim that my knowledge is ever complete or final, but to the best of my current understanding, implementing direct GPU acceleration in Xvnc when using nVidia's proprietary drivers will not be possible. VirtualGL will still be necessary with those drivers. I am certainly open to being proven wrong.
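One easy way to see the `NV-GLX` point on a given display is to query the server's extension list. This small standalone program is only an illustration, not part of TurboVNC; on a physical X.org server driven by nVidia's proprietary driver the extension is reported, whereas a virtual X server such as Xvnc or Xvfb does not provide it.

```c
/* Illustration only: check whether the running X server advertises the
   NV-GLX extension, which nVidia's GLX stack requires. */
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int opcode, event, error;

    if (!dpy) {
        fprintf(stderr, "Cannot open display\n");
        return 1;
    }
    if (XQueryExtension(dpy, "NV-GLX", &opcode, &event, &error))
        printf("NV-GLX is present\n");
    else
        printf("NV-GLX is absent (as in Xvnc/Xvfb)\n");
    XCloseDisplay(dpy);
    return 0;
}
```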