Does Genio Platform Support GPU Acceleration in Docker Containers?

When deploying AI models on the Genio platform, developers can run inference on the GPU without issue on both Yocto and Ubuntu. However, after installing Docker and running the same AI inference commands inside a container, GPU acceleration fails with library errors.

For example, running the following command inside Docker results in an error:


benchmark_model --graph=path_to_mobilenet_v1_1.0_224_quant.tflite --use_gpu=1 --allow_fp16=0 --gpu_precision_loss_allowed=0 --num_runs=10

Error log excerpt:


ERROR: Can not open OpenCL library on this device - libOpenCL.so: cannot open shared object file: No such file or directory
ERROR: Falling back to OpenGL
ERROR: TfLiteGpuDelegate Init: OpenGL-based API disabled
...

  • On native Genio Yocto/Ubuntu, this command runs successfully on GPU.

  • Inside a Docker container, GPU acceleration consistently fails due to the missing OpenCL (libOpenCL.so) and OpenGL libraries.
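As a quick diagnostic, one can probe from inside the container for the GPU userspace libraries the delegate tries to load. This is a generic sketch: the library names beyond libOpenCL.so and the reliance on ldconfig being in PATH are assumptions, not Genio-specific facts.

```shell
# Probe the dynamic-linker cache for GPU userspace libraries.
# Library names are illustrative; adjust to your image's layout.
for lib in libOpenCL.so libGLESv2.so libmali.so; do
  if ldconfig -p 2>/dev/null | grep -q "$lib"; then
    echo "found:   $lib"
  else
    echo "missing: $lib"
  fi
done
```

If everything reports "missing" inside the container but "found" on the native host, the failure is a packaging/visibility problem rather than a kernel or device-node problem.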

Is there an official solution or supported method to enable GPU (OpenCL/OpenGL) acceleration within Docker containers on Genio platforms? Can MediaTek provide source code, packages, or guidelines to build necessary GPU libraries for use in Docker images?

MediaTek currently does not support GPU hardware acceleration inside Docker containers on IoT Yocto for Genio platforms. Key points to consider:

  • The OpenGL/OpenCL (libmali) GPU libraries for Genio are proprietary to ARM. MediaTek is contractually unable to redistribute the source code or binaries. If your project requires them, you must obtain an ARM Mali GPU Linux DDK license directly from ARM.

  • The Ubuntu-on-Genio images use libmali packages maintained by Canonical (distributed via a PPA), which enable GPU acceleration in the Ubuntu host OS but are not guaranteed to work inside Docker containers. Proper integration requires adjusting Ubuntu’s compositor and OS framework components.

  • Under MediaTek’s current official feature set (IoT Yocto v25.0), hardware acceleration in Docker guest containers is not supported or validated. Any such functionality falls outside the scope of official technical support.

  • For customers requiring this feature, MediaTek recommends discussing business requirements with your MediaTek Customer Project Manager for potential development support.
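To confirm which Mali userspace packages the Ubuntu host actually carries, a generic probe like the one below can help. Exact package names vary by release and are not listed in the original answer, so the grep pattern is deliberately broad; run this on the Ubuntu host, not in a container.

```shell
# List installed Mali userspace packages on the Ubuntu-on-Genio host.
# Package names vary by release; this is a generic probe, not an official list.
dpkg -l 2>/dev/null | grep -i mali || echo "no libmali packages found"
```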

Summary:

  • Native Yocto/Ubuntu: GPU acceleration is available if host libraries and drivers are in place.

  • Docker containers: GPU acceleration is unsupported due to kernel/driver/device access and proprietary library limitations.

For further needs, please consult the MediaTek official documentation or reach out for additional business engagement regarding feature support.

Hello @apuovia,

How is it going?

If what you are trying to accomplish is to run accelerated inference in a Docker container, you could try converting your model to MDLA format so that you use the MDLA accelerator instead of the TFLite Stable Delegate. You can then use NNStreamer to perform inference and easily integrate it with your media application, either on the host or in Docker.
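To illustrate what that flow could look like, here is a sketch. The compiler invocation (ncc-tflite), the MDLA architecture string, the tensor_filter framework name (neuronsdk), and the file names are all assumptions drawn from typical NeuronPilot/NNStreamer setups, so verify them against your SDK documentation. The script only prints the commands; the commands themselves must be run on the Genio board.

```shell
# Sketch only: tool names, flags, and the 'neuronsdk' framework string are
# assumptions; check them against your NeuronPilot / IoT Yocto documentation.

# 1) Compile the TFLite model to a .dla binary for the MDLA (hypothetical):
echo "ncc-tflite --arch mdla3.0 mobilenet_v1_1.0_224_quant.tflite -o mobilenet.dla"

# 2) Feed video frames through NNStreamer's tensor_filter, on host or in Docker:
echo "gst-launch-1.0 videotestsrc num-buffers=10 ! videoconvert ! videoscale \
  ! video/x-raw,width=224,height=224,format=RGB ! tensor_converter \
  ! tensor_filter framework=neuronsdk model=mobilenet.dla ! tensor_sink"
```

Because the MDLA path does not depend on the Mali OpenCL/OpenGL userspace stack, it sidesteps the missing-library problem described above.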

We have done this for one of our customers and it works perfectly.

Please do not hesitate to reach out if you require further help.

Best regards,
Andres Campos
Embedded Software Engineer at ProventusNova