Presentation: ML in the Browser: Interactive Experiences with TensorFlow.js
This presentation is now available to view on InfoQ.com
What You’ll Learn
- Hear about machine learning in the browser - when you should use it, example use cases, and potential workflows.
- Hear about the TensorFlow.js library and its API, with examples.
- Find out how to develop and train a model offline (on GPUs, TPUs, etc.), then export it to run in the browser using TensorFlow.js.
Abstract
Machine learning (ML) holds the opportunity to build better experiences right in the browser! Using libraries such as TensorFlow.js, we can better anticipate user actions, reliably identify sentiment or topics in text, or even enable gesture-based interaction - all without sending the user’s data to any backend server. However, the process of building an ML model and converting it to a format that can be easily imported into a front-end web application can be unclear and challenging.
In this talk, I provide a friendly introduction to machine learning and cover concrete steps on how front-end developers can create their own ML models and deploy them as part of web applications. To further illustrate this process, I will discuss my experience building Handtrack.js - a library for prototyping real-time hand-tracking interactions in the browser. Handtrack.js is powered by an object detection neural network (MobileNetV2, SSD) and allows users to predict the location (bounding box) of human hands in an image, video, or canvas HTML tag.
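As an illustration of the kind of interface Handtrack.js offers, the markup below sketches loading the library from a CDN and running detection on an image element. The CDN URL follows the library's published README, and the element id is a placeholder; treat the fragment as illustrative rather than definitive.

```html
<!-- Load Handtrack.js from a CDN (URL as published in the library's README). -->
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>
<script>
  // Any <img>, <video>, or <canvas> element works as input.
  const img = document.getElementById("input-image"); // placeholder id

  // load() fetches the underlying MobileNetV2/SSD model; detect() returns
  // a promise of predictions, each with a bounding box and a score.
  handTrack.load().then((model) => {
    model.detect(img).then((predictions) => {
      predictions.forEach((p) => console.log(p.bbox, p.score));
    });
  });
</script>
```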
Audience
- Front-end engineers interested in using ML within their web applications.
- Software engineers interested in training ML models.
- Data scientists interested in deploying ML.
What you can expect
- A friendly introduction to ML in the browser using TensorFlow.js.
- When to use ML in the browser.
- How to create a machine learning model, with an example (data collection, model training, model evaluation, conversion to TensorFlow.js).
- Practical tips and pitfalls associated with ML projects (which model to use, data validation checks, which framework to use, etc.).
- A live demo of hand gesture interaction in the browser, using a neural network model.
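The conversion step in that pipeline is typically done with the `tensorflowjs_converter` command-line tool, which ships with the `tensorflowjs` pip package. A minimal config-style sketch, assuming a trained Keras model saved as `model.h5` (the file and directory names are placeholders):

```shell
# Install the converter (part of the tensorflowjs pip package).
pip install tensorflowjs

# Convert the trained Keras model into TensorFlow.js web format;
# this produces a model.json plus binary weight shard files.
tensorflowjs_converter \
  --input_format=keras \
  model.h5 \
  web_model/
```

The `web_model/` directory can then be served as static files next to the web application and loaded from JavaScript.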
What is a research engineer doing at Cloudera Fast Forward Labs?
At Fast Forward Labs, we like to see ourselves as the link between academia and industry. Research engineers have two main tasks. In the first part of our work, we research tools and technologies from academia, focusing on tech that makes sense for industry within the next six months to two years. For each selected topic, we conduct in-depth research and produce an accessible report for our clients. In addition, we build prototypes that communicate the ideas behind these technologies and provide insights on their use in practice. The second half of our job entails working with clients to build and prototype machine learning solutions to their specific problems.
Why would you want to run a machine learning model in a browser?
My research interests are at the intersection of human-computer interaction and applied machine learning, and one of the sub-topics in this area focuses on how machine learning can make user interactions more interactive and engaging. ML in the browser provides opportunities to craft rich experiences such as predicting the user’s next action, message auto-complete, or even gesture-based interaction. Beyond interaction improvements, browser deployment enables other benefits such as privacy, ease of distribution, and improved latency. By performing predictions on user data locally in the browser, you can lay claim to strong privacy, because user data is never sent to remote servers. Another really exciting benefit is that distribution of the model becomes a lot easier. For most ML developers, packaging and distributing an ML application can be a challenging process which involves the installation of drivers, libraries, and other system-specific dependencies. If you do all of this in the browser, that installation and distribution hassle goes away. With TensorFlow.js, it is as easy as including a link to the TensorFlow.js library and loading your ML model files where appropriate. In addition, there are cases where deploying a model in the browser can improve the latency of your application: it can be faster to perform computations in the browser than to make the round trip of sending user data to a remote server and waiting for results. Of course, the caveat here is that your model needs to be relatively small for this to work well in practice.
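The "include a link to the library and load your model files" step mentioned above can be sketched as the markup fragment below. The CDN URL is the standard one for the `@tensorflow/tfjs` npm package; the model path and input shape are placeholders for whatever your converted model expects.

```html
<!-- Pull in TensorFlow.js from a CDN; this exposes a global `tf` object. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script>
  // Load converted model files served alongside the page (path is a
  // placeholder), then run inference locally - no user data leaves the browser.
  tf.loadLayersModel("/models/model.json").then((model) => {
    const input = tf.tensor2d([[0.1, 0.2, 0.3]]); // placeholder input
    const output = model.predict(input);
    output.print();
  });
</script>
```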
What's the goal of the talk?
It's an intermediate-level talk, and the goal is to first get attendees excited about the possibilities of ML in the browser, and then to show an end-to-end use case of how they can develop a model and get it deployed in a web application. This includes steps such as data collection, training a model, converting it into a format we can deploy in the browser, and looking at runtime statistics. The talk will also address questions such as: Should you train a model in the browser? Do you train it offline? And if you train it offline, what are the challenges associated with converting it to a format where you can run it in the browser?
In this example that you're going through, will you be training it within the browser or offline?
This is one of the things I will cover in this talk. With TensorFlow.js there are a couple of different flows that a developer can follow. In flow number one, you collect user data and train your model from scratch in the browser. The user can even specify hyperparameters of the model such as the architecture, the number of layers, building blocks, etc. In practice, this approach is suitable for small models trained on relatively small datasets. In the second flow, the model is trained offline and loaded for inference in the browser. This offline training approach allows you to take advantage of fast compute resources (GPUs, TPUs, etc.), train on large datasets, use complex models, and then convert the trained model into a format that can be loaded by a JavaScript application. My talk and use-case example will focus on this approach. The third flow follows a similar pattern where the model is trained offline and loaded in the browser, but can now be retrained or fine-tuned in the browser using additional user data.
Two questions. Who is the core audience and what do you hope that persona to walk away with?
I am talking to an ML engineer, preferably one who is comfortable deploying machine learning models in backend applications in Python but has limited experience with browser use cases. The second target is the software engineer, perhaps even the front-end software engineer, who wants to integrate ML models into front-end applications built with tools like React.
I am hoping audience members will walk away with an understanding of use cases for ML in the browser, how to implement these use cases with the TensorFlow.js library, and best practices around deploying these models.