LiteRT on genio510

I’m developing on a MediaTek Genio 510 running Yocto Scarthgap.

Since the image ships Python 3.12, I cannot install tflite_runtime as a Python module; instead I have LiteRT (ai_edge_litert).

However, I cannot find any tutorial or example on how to use this library together with delegates to exploit the GPU. At the moment I can only run inference on the CPU.
Any help?
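For reference, CPU-only inference with ai_edge_litert looks roughly like this. The Interpreter API mirrors tflite_runtime, so existing tflite code ports over directly; the model path and helper name here are placeholders, not an official example:

```python
# Minimal CPU inference sketch with ai_edge_litert.
# The Interpreter class mirrors tflite_runtime.interpreter.Interpreter.
import numpy as np

def run_cpu_inference(model_path):
    from ai_edge_litert.interpreter import Interpreter
    interp = Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    # Feed a dummy tensor with the model's input shape and dtype.
    interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interp.invoke()
    return interp.get_tensor(out["index"])
```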

Hello @Gian_Sal,

In your default RITY image you should already have tflite_runtime v2.17.
You can benchmark your tflite models with the TFLite benchmark tool and invoke the GPU delegate by setting --use_gpu=1.
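A typical invocation of the benchmark tool on-device looks like this; the model path is a placeholder, and the binary name/location may differ on your image (it is often installed as benchmark_model):

```shell
# Benchmark a tflite model on the GPU via the benchmark tool.
# MODEL is a placeholder path; adjust for your board.
MODEL=/home/root/model.tflite
benchmark_model \
  --graph="$MODEL" \
  --use_gpu=1 \
  --num_runs=50
```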

Is this option working for you when benchmarking on the GPU?

OK, but when I use tflite_runtime and define the external delegate as /usr/lib/gpu_external_delegate.so,

it gives me this error:

RuntimeError: Can not open OpenCL library on this device - /usr/lib/gpu_external_delegate.so: undefined symbol: clGetCommandBufferInfoKHR
Falling back to OpenGL
TfLiteGpuDelegate Init: OpenGL-based API disabled
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 65 (TfLiteGpuDelegateV2) failed to prepare.
Restored original execution plan after delegate application failure.

If I use libarmnnDelegate it seems to work.

Any ideas?
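For anyone landing here, loading the ArmNN delegate through the standard external-delegate interface looks roughly like this. The library path and option values are assumptions for a Genio 510 Yocto image (ArmNN's external delegate accepts a `backends` option selecting GpuAcc/CpuAcc):

```python
# Sketch: load the ArmNN external delegate with ai_edge_litert.
# The delegate path and option names are assumptions; check the
# ArmNN delegate documentation on your image.
def make_armnn_interpreter(model_path,
                           delegate_path="/usr/lib/libarmnnDelegate.so"):
    from ai_edge_litert.interpreter import Interpreter, load_delegate
    delegate = load_delegate(
        delegate_path,
        options={"backends": "GpuAcc,CpuAcc", "logging-severity": "info"},
    )
    interp = Interpreter(model_path=model_path,
                         experimental_delegates=[delegate])
    interp.allocate_tensors()
    return interp
```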

It looks like your model has a node that is unsupported in its present form by the LiteRT GPU delegate.
You can inspect the failing node (number 65 in your log) and its parameters with Netron.
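If running Netron on the board is inconvenient, you can also dump the op list on-device to see what node 65 is. This sketch relies on `_get_ops_details()`, an internal Interpreter method present in tflite_runtime and assumed identical in ai_edge_litert, so treat it as a debugging aid only:

```python
# Sketch: print each node's index and op name for a tflite model.
# _get_ops_details() is an internal/private API (assumption that
# ai_edge_litert keeps tflite_runtime's behavior here).
def list_ops(model_path):
    from ai_edge_litert.interpreter import Interpreter
    interp = Interpreter(model_path=model_path)
    for op in interp._get_ops_details():
        print(op["index"], op["op_name"])
```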