Unsupported Model Op on Yocto Stable Delegate not falling back to CPU

There are certain TFLite models which, when executed with the standard TFLite benchmark_model tool using the stable delegate, fail with the error "BAD_NEURON".
Although an unsupported op should fall back directly to the CPU, this error means that the op is supported on the NPU in general, but the specific configuration of the op in the model is unsupported.

For example, the NPU supports the 'MEAN' op up to a rank of 4.
But if we try to execute a model containing a 'MEAN' op with a rank of 5, execution errors out instead of falling back to the CPU (unlike the TFLite GPU delegate, where supported ops with unsupported constraints fall back to CPU).
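To make the failure mode concrete, here is a minimal, hypothetical sketch of the constraint being violated. The names `npu_supports_op` and `MAX_MEAN_RANK` are illustrative only and are not part of any real NPU driver or TFLite API:

```python
# Hypothetical sketch: the NPU supports MEAN only up to rank 4, so a
# rank-5 input should be rejected before delegation, not at execution time.
MAX_MEAN_RANK = 4  # assumed NPU constraint for the MEAN op

def npu_supports_op(op_name, input_shape):
    """Return True if the NPU can run this op configuration."""
    if op_name == "MEAN":
        return len(input_shape) <= MAX_MEAN_RANK
    return True  # other ops are not modeled in this sketch

print(npu_supports_op("MEAN", (1, 8, 8, 3)))     # rank 4: supported, prints True
print(npu_supports_op("MEAN", (1, 2, 8, 8, 3)))  # rank 5: unsupported, prints False
```

The bug described below is that this kind of check was not performed before the op was handed to the NPU.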

Hi Suyash,
Thank you for the question.

The failure is caused by a bug in the Neuron Adapter: the adapter did not correctly validate operator constraints (e.g., rank) before delegating the op to the NPU. As a result, an unsupported op configuration was passed to the NPU, which triggered BAD_NEURON instead of a CPU fallback.

Here is some information about the fix and the suggested actions:

Fix Availability

  • The Yocto v25.1 stable delegate will add a PreOpCheck feature.

  • PreOpCheck queries the NPU driver, so delegation decisions reflect the exact operator support and constraints of the installed driver.

  • With PreOpCheck, unsupported configurations (e.g., MEAN with rank 5) will not be delegated to the NPU; the stable delegate will then allow CPU fallback as expected, provided the TFLite CPU kernel supports the op.

  • Target availability: PR5 release around late November.

  • Track updates on the Development Status page for IoT Yocto v25.1-dev.
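The partitioning behavior that PreOpCheck enables can be sketched as follows. This is an illustrative model of the idea, not MediaTek's actual implementation or API; `partition` and the `supported` predicate (standing in for the driver query) are invented names:

```python
# Illustrative sketch of a PreOpCheck-style graph partition: ops whose
# configuration passes the driver constraint check are delegated to the
# NPU; the rest stay on the TFLite CPU kernels.
def partition(ops, supported):
    """Split graph ops into NPU-delegated and CPU-fallback lists.

    `supported` stands in for the per-op driver query that a
    PreOpCheck-style mechanism would perform.
    """
    npu, cpu = [], []
    for op in ops:
        (npu if supported(op) else cpu).append(op)
    return npu, cpu

# Each op is (name, input rank); assume MEAN is supported only up to rank 4.
ops = [("MEAN", 4), ("MEAN", 5), ("CONV_2D", 4)]
npu, cpu = partition(ops, lambda op: not (op[0] == "MEAN" and op[1] > 4))
# npu -> [("MEAN", 4), ("CONV_2D", 4)]; cpu -> [("MEAN", 5)]
```

The key point is that the rank-5 MEAN never reaches the NPU, so BAD_NEURON cannot occur and the op runs on the CPU kernel instead.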

Recommended Actions

  1. Upgrade to Yocto v25.1 (PR5 or later) to get PreOpCheck in the stable delegate.
  2. Until the upgrade is available, choose one of the following:
    • Disable NPU delegation and run CPU-only to avoid BAD_NEURON.
    • Modify the model to comply with NPU constraints (e.g., ensure MEAN rank ≤ 4).
    • Use the offline path only if all ops and constraints are supported by the NPU; note that offline inference has no CPU fallback.
  3. After upgrading, verify fallback behavior:
    • Run your model with the stable delegate enabled.
    • Check logs; unsupported op configurations should be rejected during PreOpCheck and executed on CPU.
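For the model-modification option in step 2, one common rewrite (shown here as a hedged NumPy sketch with arbitrary example shapes) is to merge adjacent non-reduced axes so a rank-5 MEAN becomes reshape, rank-4 MEAN, reshape, keeping the op within the NPU's rank constraint:

```python
import numpy as np

# Workaround sketch: fold a rank-5 MEAN into a rank-4 MEAN by merging two
# leading axes that are not reduced over. Shapes are illustrative.
x = np.random.rand(2, 3, 4, 5, 6)

direct = x.mean(axis=4)               # rank-5 MEAN (outside the rank<=4 constraint)

folded = (x.reshape(2 * 3, 4, 5, 6)   # merge two leading axes -> rank 4
            .mean(axis=3)             # rank-4 MEAN (within the constraint)
            .reshape(2, 3, 4, 5))     # restore the original leading axes

assert np.allclose(direct, folded)    # both paths average the same elements
```

Whether this rewrite applies depends on which axes the MEAN reduces over; if the reduced axes cannot be kept separate from the merged ones, the model cannot be folded this way.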