What do the parameters in the TFLiteRunner command mean, and why does the NNAPI delegate fail while the Neuron delegate succeeds?

When running TFLiteRunner on a Genio platform, the following command:

./TFLiteRunner -a 1 -s 2 -r 1 -c 11 -m model.tflite

fails with error messages such as:

ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA...
ERROR: Node number 8 (TfLiteNnapiDelegate) failed to prepare.
ERROR: MTKEXT_DEPTHWISE_CONV_2D operator is not implemented.

But when using:

./TFLiteRunner -a 2 -s 2 -r 1 -c 11 -m model.tflite

inference completes successfully.

  1. What does each of these command-line parameters (-a, -s, -r, -c, -m) mean?

  2. Does this outcome mean that only the MediaTek Neuron delegate (-a 2) is supported, while the NNAPI delegate (-a 1) does not work for this model?

Command parameter meanings:

  • -a 1: Use the NNAPI delegate (Google's standard Android Neural Networks API); a rough C++ equivalent of both delegate paths is sketched after this list
  • -a 2: Use the Neuron delegate (MediaTek's optimized runtime for its NPUs)
  • -s 2: Use random input data for inference
  • -r 1: Allow an FP32 model to be computed in reduced precision (FP16)
  • -c 11: Repeat (loop) inference 11 times
  • -m: Path to the .tflite model file
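
These switches map onto standard TensorFlow Lite delegate setup. The sketch below is a hypothetical reconstruction, not TFLiteRunner's actual source: the NNAPI path uses TFLite's StatefulNnApiDelegate, the Neuron path uses TfLiteNeuronDelegateCreate from MediaTek's open-source tflite-neuron-delegate project, and the allow_fp16 option fields are assumptions based on those headers.

#include <memory>

#include "neuron/neuron_delegate.h"  // Neuron delegate (-a 2), from tflite-neuron-delegate
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"  // NNAPI delegate (-a 1)
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Hypothetical sketch of what -a (delegate choice) and -r (allow FP16)
// translate to in TFLite terms. Error handling is omitted. NOTE: resolving
// MTKEXT_* custom ops at interpreter-build time requires MediaTek's
// NeuroPilot TFLite build; the stock resolver does not know them.
std::unique_ptr<tflite::Interpreter> BuildInterpreter(const char* model_path,
                                                      int accelerator,   // -a
                                                      bool allow_fp16) { // -r 1
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);  // -m
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  if (accelerator == 1) {
    // NNAPI delegate: only operators NNAPI defines can be delegated, so a
    // MediaTek custom op such as MTKEXT_DEPTHWISE_CONV_2D fails to prepare.
    tflite::StatefulNnApiDelegate::Options options;
    options.allow_fp16 = allow_fp16;
    // static so the delegate outlives the interpreter (sketch only).
    static tflite::StatefulNnApiDelegate nnapi_delegate(options);
    interpreter->ModifyGraphWithDelegate(&nnapi_delegate);
  } else if (accelerator == 2) {
    // Neuron delegate: MediaTek's runtime, which implements the MTKEXT_* ops.
    // (Real code frees the delegate with TfLiteNeuronDelegateDelete.)
    NeuronDelegateOptions options = TfLiteNeuronDelegateOptionsDefault();
    options.allow_fp16 = allow_fp16;
    interpreter->ModifyGraphWithDelegate(TfLiteNeuronDelegateCreate(&options));
  }
  interpreter->AllocateTensors();
  return interpreter;
}

Framed this way, the -a 1 failure is expected behavior rather than a bug: the NNAPI delegate rejects at prepare time any node NNAPI cannot express.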

Interpretation of results:

  • The error log with -a 1 indicates the model contains MTKEXT_DEPTHWISE_CONV_2D, a MediaTek custom operator that the stock NNAPI delegate on this platform does not implement, which is why the delegated node fails to prepare.
  • The successful run with -a 2 shows that the MediaTek Neuron delegate does implement this custom operator, so inference completes.
  • Therefore, for this model on this platform, only the NeuroPilot TFLite Shim API + Neuron delegate path works; the TFLite Shim API + NNAPI delegate path cannot process models that contain MediaTek custom operators. A way to check a model for such operators is sketched after this list.
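
To check a model for MediaTek custom operators, the operator codes can be read straight out of the .tflite FlatBuffer with TFLite's generated schema header. The following is a minimal sketch (error handling is kept to a minimum; the file-reading approach is illustrative):

#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

#include "tensorflow/lite/schema/schema_generated.h"

// Print every custom operator name recorded in a .tflite file. Any
// MTKEXT_* entry means the stock NNAPI delegate cannot fully run the
// model, and the Neuron delegate (-a 2) is required.
int main(int argc, char** argv) {
  if (argc != 2) {
    std::fprintf(stderr, "usage: %s model.tflite\n", argv[0]);
    return 1;
  }
  std::ifstream file(argv[1], std::ios::binary);
  std::vector<char> buffer((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
  const tflite::Model* model = tflite::GetModel(buffer.data());
  if (model == nullptr || model->operator_codes() == nullptr) return 1;
  for (const tflite::OperatorCode* code : *model->operator_codes()) {
    if (code->custom_code() != nullptr) {
      std::printf("custom op: %s\n", code->custom_code()->c_str());
    }
  }
  return 0;
}

Running this against the model from the question would be expected to print MTKEXT_DEPTHWISE_CONV_2D, confirming why -a 1 fails and -a 2 succeeds.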