Hi,
My team and I are trying to run a .ptl (PyTorch Lite) model on a D9400 mobile phone. While going through the converters available in the NeuroPilot 8 documentation, I found a PyTorch to TFLite converter, but nothing for PyTorch Lite.
Is there any way to run a PyTorch Lite model on MTK devices?
Your input would be really helpful to our work.
Thanks,
Nimesh
Hi @Nimesh_Katoriwala,
Thanks for reaching out and for sharing the context of your use case.
Today, our NeuroPilot stack does not support running PyTorch Mobile / PyTorch Lite (.ptl) models directly on MTK devices. The converters and runtimes we provide currently focus on TensorFlow Lite as the primary deployment format.
To run your model efficiently on MTK devices, we recommend the following approach:
- Start from your original PyTorch model (before exporting to .ptl).
- Convert it to TensorFlow / ONNX (if this is not already part of your pipeline).
- From there, convert the model to TensorFlow Lite (TFLite).
- Deploy the TFLite model using NeuroPilot on the D9400 platform.
This is the path we actively optimize and support today.
In parallel, we are preparing an ONNX-based proof-of-concept (PoC) pipeline that will be released for users to experiment with. Once that PoC is available, it will provide an additional path from PyTorch → ONNX → MTK deployment.
Regards :)
Joying