Question/Issue:
How can I deploy the object tracking post processing on the Grove Vision AI V2?
Project ID:
788304
Context/Use case:
I am training a vehicle and motorcycle detection model. I thought that when I built the binary for the Seeed Grove Vision AI Module V2 (Himax WiseEye2), the object tracking post-processing would come with it in the deployment.
Steps Taken:
1. Saved my parameters for object tracking in post-processing
2. Built the binary for the board
3. Deployed the model
Expected Outcome:
Tracking of objects as prediction results
Actual Outcome:
The typical object detection results
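To make the difference concrete, here is a minimal sketch of what we currently get versus what we expected on-device. It assumes the standard Edge Impulse C++ SDK entry point run_classifier(); the tracking output is left as a placeholder comment because we are not sure which field (if any) the generated SDK exposes for it.

```cpp
// Minimal sketch, assuming the standard Edge Impulse C++ SDK API
// (run_classifier, ei_impulse_result_t). The tracking field is NOT
// shown because its name/availability in our build is the open question.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

void print_results(signal_t *signal) {
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        ei_printf("run_classifier failed (%d)\n", err);
        return;
    }

    // What we actually get: plain object detection bounding boxes
    for (uint32_t i = 0; i < result.bounding_boxes_count; i++) {
        const auto &bb = result.bounding_boxes[i];
        if (bb.value == 0) continue;
        ei_printf("%s (%.2f) [x:%u y:%u w:%u h:%u]\n",
                  bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
    }

    // What we expected in addition: per-object track IDs from the
    // object tracking post-processing block (exact field name unknown to us)
}
```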
I've checked your project, and you are using a Custom Block, which is most likely the source of your issue. However, I'm not 100% sure. Are you already in touch with our solutions and sales team?
They would be best placed to review and advise on your custom block work. I wrote the doc for Object Tracking, so I can help with any questions about calling it, but unfortunately not with implementation in a custom deployment block.
If you're using this for a product or platform, you would also be best to have a conversation with them sooner rather than later, so you get the best support for your effort, as this is what they work on with our customers.
Hi, thank you for the response. We initially exported the project with the supported binary deployment for the board. Upon checking the firmware version on GitHub, we found that a new version was released two days ago. We will update the deployed firmware and reply again.
This is how we initially exported the project. The custom block you see as our currently selected deployment was just us looking for possible ways to get the post-processing to appear, because another deployment method for this board is the SenseCraft website by Seeed Studio, where we can upload the .tflite file and see results, and the custom block deployment includes that model file. However, the post-processing results still do not show.
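Our understanding is that the tracking itself is stateful post-processing that runs after inference, not something stored in the .tflite, which may be why uploading the model file alone never shows track IDs. Below is a rough, simplified illustration of that kind of logic (greedy IoU matching; not Edge Impulse's actual implementation), just to show the per-frame state the model file does not carry.

```cpp
// Rough illustration only: a toy greedy-IoU tracker, NOT Edge Impulse's
// implementation. It keeps state across frames, which a bare .tflite cannot do.
#include <algorithm>
#include <vector>

struct Box { float x, y, w, h; };
struct Track { int id; Box box; };

// Intersection-over-union of two axis-aligned boxes
static float iou(const Box &a, const Box &b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w), y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// Greedy matching: give each detection the ID of the best-overlapping
// existing track, otherwise start a new track with a fresh ID.
void update_tracks(std::vector<Track> &tracks, const std::vector<Box> &dets,
                   int &next_id, float iou_thresh = 0.3f) {
    std::vector<Track> updated;
    std::vector<bool> used(tracks.size(), false);
    for (const Box &d : dets) {
        int best = -1; float best_iou = iou_thresh;
        for (size_t i = 0; i < tracks.size(); i++) {
            if (used[i]) continue;
            float v = iou(tracks[i].box, d);
            if (v > best_iou) { best_iou = v; best = (int)i; }
        }
        if (best >= 0) { used[best] = true; updated.push_back({tracks[best].id, d}); }
        else           { updated.push_back({next_id++, d}); }
    }
    tracks = updated;  // unmatched old tracks are simply dropped in this toy version
}
```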
The warning is gone, but it seems the model is not working because it cannot recognize anything. This is also the case when edge-impulse-run-impulse --debug is run: it only shows the live camera feed but does not recognize objects.
When uploading the model solely from the supported binary build, this is the output:
The result is the same as in the previous reply. By the way, this is the latest build from the project's deployment; the binary was built yesterday, Nov 1, 2025.
Builds from an earlier date (before the firmware updates) do show results. For example, this is the output when deploying a build of the project from October 22, 2025: