Presentation: Automating Netflix ML Pipelines With Meson

Track: The Practice & Frontiers of AI

Location: Seacliff ABC



Level: Intermediate

Persona: Architect, CTO/CIO/Leadership, Data Engineering, ML Engineer, Technical Engineering Manager

What You’ll Learn

  • What challenges you will face in orchestrating an ML-based system.
  • How to leverage multiple technologies to increase the effectiveness of your ML and Data Science efforts.

Abstract

In this talk, we discuss the evolution of ML automation at Netflix and how it led us to build Meson, an orchestration system used for many of the personalization/recommendation algorithms. We will talk about the challenges we faced and what we learned automating thousands of ML pipelines with Meson.

Question: 

What is your motivation for this talk?

Answer: 

We want to tell a broader story about what it means to try to automate experience tests with machine learning when you're dealing with lots of smart data scientists who want to try many things. How do you build infrastructure that lasts and provides value while you're getting a flood of new ideas and new technology that you need to support?

Question: 

Who should come to your talk?

Answer: 

People who are familiar with the space or anyone curious about how we do orchestration at Netflix.

Question: 

What can people take away from this talk?

Answer: 

How you transition from the experimentation phase into the production phase. That means dealing with the issues in the day-to-day workflow of a data engineer, and that effort doesn't necessarily have an easily measurable metric like runtime or prediction accuracy. Those are the kinds of problems we often face, and in some ways they have a bigger impact on the ability to actually leverage ML at scale than whether or not you can run a giant neural net.

Question: 

What keeps you up at night?

Answer: 

Are there fundamental abstractions in how we think about modeling a pipeline? Words like workflow and pipeline are thrown around almost interchangeably, and I stay awake at night thinking about where the real seams are.

When you're interacting with an orchestration system like this, so many things are intertwined and braided together that you can't really tease out why it works the way it does. What are the key pieces of how to actually construct these systems, especially in such a rapidly evolving space?
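
To make that workflow-versus-pipeline distinction concrete, below is a minimal, hypothetical sketch of one possible separation: a pipeline as a static DAG definition, and a workflow as a single execution of that DAG with concrete parameters. This is an illustration only; the Step and Pipeline classes and the run method are invented for this sketch and are not Meson's API.

# Hypothetical sketch of a pipeline abstraction; not Meson's actual API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    """A single unit of work, e.g. feature generation or model training."""
    name: str
    fn: Callable[[Dict], Dict]               # takes a context dict, returns its outputs
    depends_on: List[str] = field(default_factory=list)

@dataclass
class Pipeline:
    """The static definition: a DAG of steps, independent of any particular run."""
    steps: Dict[str, Step] = field(default_factory=dict)

    def add(self, step: Step) -> "Pipeline":
        self.steps[step.name] = step
        return self

    def run(self, params: Dict) -> Dict:
        """One 'workflow': a single execution of the DAG with concrete parameters."""
        done: Dict[str, Dict] = {}
        remaining = dict(self.steps)
        while remaining:
            progressed = False
            for name, step in list(remaining.items()):
                if all(dep in done for dep in step.depends_on):
                    upstream = {dep: done[dep] for dep in step.depends_on}
                    done[name] = step.fn({**params, **upstream})
                    del remaining[name]
                    progressed = True
            if not progressed:
                raise ValueError("cycle or missing dependency in pipeline DAG")
        return done

# Usage: a toy personalization pipeline with three steps.
pipeline = (
    Pipeline()
    .add(Step("features", lambda ctx: {"rows": 1000}))
    .add(Step("train",
              lambda ctx: {"model": "v1", "rows": ctx["features"]["rows"]},
              depends_on=["features"]))
    .add(Step("validate", lambda ctx: {"ok": True}, depends_on=["train"]))
)
print(pipeline.run({"region": "US"}))

The point of the sketch is the seam it draws: everything about structure lives in the Pipeline definition, while everything about a particular run (parameters, produced outputs) lives in the call to run, which is one way to keep the two concepts from braiding together.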

Speaker: Davis Shepherd

ML Management @Netflix

Davis Shepherd spent 4 years building reinforcement learning systems before joining Netflix, where he now develops their next generation of ML pipeline management software.

Speaker: Eugen Cepoi

Senior Software Engineer @Netflix

Eugen Cepoi has been working on data processing systems for general ETL-like purposes and machine learning for the past 4 years. Before that, he spent 3 years building web applications and the infrastructure behind them. He now works on the personalization infrastructure team at Netflix, where he focuses on developing the ML pipeline management software.

Similar Talks

Linux Foundation's Project EVE: A Cloud-Native Edge Computing Platform
Roman Shaposhnik, Co-founder, VP Product and Strategy @ZededaEdge & Member Board Of Directors for LF Edge @linuxfoundation

Machine Learning on Mobile and Edge Devices With TensorFlow Lite
Daniel Situnayake, Developer Advocate for TensorFlow Lite @Google and Co-Author of TinyML

Evolution of Edge @Netflix
Vasily Vlasov, Engineering Leader @Netflix

Mistakes and Discoveries While Cultivating Ownership
Aaron Blohowiak, Engineering Manager @Netflix in Cloud Infrastructure

Monitoring and Tracing @Netflix Streaming Data Infrastructure
Allen Wang, Architect & Engineer in Real Time Data Infrastructure Team @Netflix