Gert showed that it is possible to output over 50 million 16-bit values per second on the GPIOs using the Pi's GPU. His application had each 16-bit value represent one pixel (RGB565) and used a resistor DAC to send it to a VGA LCD, along with clock signals generated by the GPU, all with timings accurate enough for video (placing each pixel correctly requires timing accurate to about 20 ns).
Unfortunately, his code, at the time of writing, is supplied only as a binary blob, and documentation of the GPU features he used is nonexistent.
But there are other efforts to demystify the Pi's GPU; see, for example, this tutorial on writing GPU code for the Pi. It may be possible to commandeer one GPU core permanently and use it to drive the GPIO pins precisely (I'm not sure how much control we have over scheduling).
The Thread Control section of this document seems to indicate that once a program is allocated to one of the GPU cores (called a "QPU"), it has full control of its own scheduling: it only relinquishes control via an explicit Thread Switch signal.
I don't know how memory writes are coordinated, but I imagine there's a way to write to arbitrary memory locations (e.g. the GPIO bank). The docs say the QPUs allow for "I/O mapped into the register space". Edit: this appears to confirm that the GPIO region IS accessible from the Videocore (0x7E000000 - 0x7EFFFFFF, I/O).
It looks like there's been some successful work getting the LLVM compiler to output code that can run on the QPUs.
There's also some source files for initializing & using the QPUs on bare metal
Useful documents:
Herman Hermitage Unofficial Videocore Docs
Official Broadcom Videocore Documentation