MLOps for TinyML
Training of ML models is still done in the cloud, where it is easy to iterate and continuously improve a model's accuracy. When a model performs poorly in the cloud, one can inspect the data it was served and determine the cause of the poor performance. However, when a model is deployed to thousands of TinyML devices, often with no data stream coming back to the cloud, debugging is much harder and may require new approaches to monitoring and maintaining performance in the field. The talk will cover techniques used to automate and monitor such systems, from integration and testing to release and deployment.
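One way to monitor devices that cannot stream raw data back is to keep lightweight aggregate statistics on-device and report only those. The sketch below is a hypothetical illustration (not from the talk): it tracks a running mean and standard deviation of the model's per-inference confidence using Welford's algorithm, so a fleet-wide drop in confidence can serve as a cheap drift signal in a few bytes of telemetry.

```python
import math


class ConfidenceMonitor:
    """Running mean/variance of model confidence, kept on-device.

    Hypothetical sketch: instead of streaming raw inputs to the cloud,
    the device accumulates Welford running statistics over the model's
    top-class probability and uploads only these tiny aggregates.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's algorithm)

    def update(self, confidence: float) -> None:
        self.n += 1
        delta = confidence - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (confidence - self.mean)

    def summary(self) -> dict:
        var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
        return {"count": self.n, "mean": self.mean, "std": math.sqrt(var)}


monitor = ConfidenceMonitor()
for c in [0.91, 0.88, 0.95, 0.52, 0.49]:  # per-inference confidences
    monitor.update(c)

stats = monitor.summary()
# A mean confidence well below the fleet baseline (0.8 here is an
# assumed threshold) is one cheap signal that the model may be drifting.
alert = stats["mean"] < 0.8
```

The threshold and the choice of confidence as the monitored quantity are illustrative; in practice the statistic would be chosen per task, and the aggregate would be uploaded on whatever schedule the device's connectivity allows.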