Presentation: Machine Learning on Mobile and Edge Devices With TensorFlow Lite
This presentation is now available to view on InfoQ.com.

Abstract
Machine learning enables some incredible applications, from human-centric user interfaces to generative art. But the traditional machine learning architecture is server-based: data is sent from users' devices to the cloud, and users are rightly concerned about privacy, safety, and control over their data.
In this talk, we'll learn how developers can use TensorFlow Lite to build amazing machine learning applications that run entirely on-device. We'll see how running models on-device leads to lower latency, improved privacy, and robustness against connectivity issues. And we'll get familiar with the workflows, tools, and platforms that make on-device inference possible.
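As a rough illustration of what on-device inference looks like in practice, here is a minimal sketch using the TensorFlow Lite Python interpreter. The model file name ("model.tflite") and the zero-filled input are placeholders assumed for this example; a real application would load its own converted model and feed real sensor or user data.

```python
import numpy as np
import tensorflow as tf

# Load a TensorFlow Lite model from disk ("model.tflite" is a placeholder path).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input that matches the model's expected shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run inference entirely on-device; no network round trip is involved.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

Because the interpreter and model live on the device, latency is bounded by local compute rather than network conditions, which is the robustness and privacy benefit described above.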
You'll leave this session ready to deploy machine learning models to a wide range of devices, from mobile phones to ultra-low power microcontrollers. You'll learn where to find pre-trained models that can solve a wide range of problems, and how to optimize your own models so they work well on devices.
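To give a concrete sense of the optimization step mentioned above, here is a minimal sketch of converting a Keras model to TensorFlow Lite with default post-training optimization. The tiny stand-in model is an assumption for illustration only; the same conversion path applies to your own trained models or to pre-trained models you download.

```python
import tensorflow as tf

# A tiny stand-in Keras model; in practice this would be your trained model
# or a pre-trained model you have fine-tuned.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to TensorFlow Lite with the default post-training optimization,
# which reduces model size and can speed up on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the flatbuffer so it can be bundled with a mobile or embedded app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is what gets shipped to phones or, with further size and operator constraints, to microcontroller targets.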
Similar Talks
CI/CD for Machine Learning
Sasha Rosenbaum, Program Manager on the Azure DevOps Engineering Team @Microsoft

ML's Hidden Tasks: A Checklist for Developers When Building ML Systems
Jade Abbott, Senior Machine Learning Engineer @teamretrorabbit

From POC to Production in Minimal Time - Avoiding Pain in ML Projects
Janet Bastiman, Chief Science Officer @StoryStreamAI

ML in the Browser: Interactive Experiences with Tensorflow.js
Victor Dibia, Research Engineer in Machine Learning @cloudera

Machine Learning 101
Grishma Jena, Data Scientist @IBM

ML/AI Panel
Staff Developer Relations Engineer @Google Cloud Platform