Workaround for Unsupported Ops

Hello,

An ML model can contain many different operations (ops), but each MTK compute device has its own guidelines and restrictions. For example, if a TFLite model has ABS and CEIL ops, then according to the MTK constraint table below, ABS is not supported on MVPU 2.5 and CEIL is not supported on MDLA 5.5:


Since these operators are not supported on MVPU/MDLA, I would like to understand how they are handled when a model containing them is run on MVPU/MDLA.

I can think of 2 different approaches here:

  1. replace the unsupported ops with supported ops and then retrain the model, or
  2. partition the model and distribute the ops across various compute backends.

Can you please help me understand what is the workaround here for unsupported ops?

Regards,

Nimesh

Hi @Nimesh_Katoriwala

Let’s assume your TFLite model contains both ABS and CEIL operations. If your target device includes both MDLA5.5 and MVPU2.5, the compiled DLA file will be generated such that the ABS operation is executed by MDLA, while the CEIL operation is executed by MVPU.

However, most products in the Genio line do not have an MVPU. In that case, any unsupported operation will be delegated to the CPU at runtime.

The most widely used approach for maximizing deployment to the NPU is to replace unsupported operations with supported ones (according to the constraint table) and retrain the model. However, this method does not always yield the best performance and can carry significant engineering overhead. Alternatively, since unsupported operations can fall back to the CPU, and if a slight performance hit is acceptable, you can simply distribute the operations across various compute backends (such as CPU/XNNPACK).
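Conceptually, the partitioning described above can be sketched as a greedy assignment: each op goes to the most-preferred backend that supports it, and the CPU acts as the universal fallback. Note that the support table and backend names below are purely illustrative, not the official MTK constraint table or the actual compiler logic:

```python
# Hypothetical support table: op name -> backends that support it.
# (Illustrative values only; consult the real MTK constraint table.)
SUPPORT = {
    "CONV_2D": {"MDLA", "MVPU"},
    "ABS":     {"MDLA"},   # e.g. not supported on MVPU 2.5
    "CEIL":    {"MVPU"},   # e.g. not supported on MDLA 5.5
}

# Backends in order of preference; CPU is the universal fallback.
PREFERENCE = ["MDLA", "MVPU", "CPU"]

def assign_backend(op):
    """Pick the most-preferred backend that supports the op; default to CPU."""
    supported = SUPPORT.get(op, set())
    for backend in PREFERENCE:
        if backend == "CPU" or backend in supported:
            return backend
    return "CPU"

def partition(ops):
    """Map each op in a model graph to a compute backend."""
    return {op: assign_backend(op) for op in ops}

print(partition(["CONV_2D", "ABS", "CEIL"]))
# -> {'CONV_2D': 'MDLA', 'ABS': 'MDLA', 'CEIL': 'MVPU'}
```

On a device without an MVPU you would simply drop "MVPU" from `PREFERENCE`, and CEIL would then fall back to the CPU, mirroring the runtime delegation behavior described above.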

Let me know if this answers your question.


Here’s another topic written about dealing with unsupported ops: Link