Presentation: Scaling Patterns for Netflix's Edge
This presentation is now available to view on InfoQ.com
What You'll Learn
- Hear about Netflix's scalability issues and some of the ways they were addressed.
- Learn how splitting a service into two can help with performance and consequently with scalability.
Abstract
In 2008 Netflix had less than a million streaming members. Today we have over 150 million. That explosive growth in membership has led to a similar growth in the number of microservices, in the amount of cloud resources, and our overall architectural complexity. Eventually, that sheer number of computation resources becomes hard to manage and sacrifices our reliability. At Netflix, we’ve found a few techniques that have helped keep our computation growth manageable and reliable.
There are the obvious tasks of performance tuning, reducing features, or reducing data. Going beyond just “tightening the belt” tactics, we had to rethink how we handle every request. At our scale, we can no longer call a customer database on every request, we can no longer fan out to a cascade of mid-tier requests on every request, and we can no longer log every request, so we don’t. This session will introduce the architectural patterns we’ve adopted to accomplish skipping those steps, which would normally be considered required for a functioning system.
I will also be sharing successes we’ve had from unintuitively partitioning computation into multiple services to get better runtime characteristics. Through this session, you will be introduced to useful probabilistic data structures, innovative bi-directional data passing, and open-source projects available from Netflix that make this all possible.
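The talk itself covers the specific structures Netflix uses; as a generic illustration of the "skip the lookup" idea above, a Bloom filter is the classic probabilistic data structure for it. The sketch below is illustrative only (the class, sizing, and token names are assumptions, not Netflix's implementation): a "definitely absent" answer lets a service skip an expensive downstream call, at the cost of occasional false positives.

```java
import java.util.BitSet;

/**
 * Minimal Bloom filter sketch: set membership with no false negatives.
 * A "false" answer is definitive, so the expensive lookup can be skipped;
 * a "true" answer only means "possibly present" and still needs verification.
 */
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public BloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive k bit positions from two base hashes (double hashing).
    private int index(String item, int i) {
        int h1 = item.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String item) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(index(item, i));
        }
    }

    /** True means "possibly present"; false means "definitely absent". */
    public boolean mightContain(String item) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(index(item, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical use: pre-filter device tokens before a database call.
        BloomFilter knownTokens = new BloomFilter(1 << 16, 3);
        knownTokens.add("device-token-abc");

        // Known token: "possibly present", so fall through to verification.
        System.out.println(knownTokens.mightContain("device-token-abc"));
        // Unknown token: almost certainly "definitely absent" -> skip the call.
        System.out.println(knownTokens.mightContain("device-token-xyz"));
    }
}
```

The trade-off is tunable: a larger bit array and more hash functions lower the false-positive rate, so the downstream call is skipped for nearly all requests that would have missed anyway.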
What is the work you're doing today?
I'm currently focused on Functions as a Service (FaaS) at Netflix, and how developers at Netflix can best leverage it. Until very recently, I was working on our Edge Authentication team. This is where we authenticate users and devices right at our Edge layer. It was a team we formed to own that problem space and focus on the complexities that come with it. Ultimately, the user experience is tied to staying logged in, so having a team focused on it meant more users stayed logged in, which meant happier users.
What are your goals for the talk?
I think there are a lot of scaling patterns that people assume are inaccessible to them. They might sound fancy or seem specific to complex database servers, but we found that digging into a few of them and spending a couple of days on each made them pretty attainable. My goal here is to share some of the patterns we use that we know are successful, and that we think other people could also be using. At the same time, the kinds of problems you face when architecting are very difficult to put into a library. It's not like I can just ship something to Maven Central. These are architectural patterns, so I have to describe them to someone, and architects then have to adapt them to their platform. So I really need to have that conversation with people for them to learn these.
In the abstract, you talk about unintuitively partitioning computation into multiple services to get better runtime characteristics. Can you expand on what you mean by that?
I call this the 1+1=3 problem. Quite a few of our servers still work like monoliths: there is a lot going on in their runtime, and that inherent complexity causes issues for the JVM. By pulling apart a complex runtime into separate services, we can keep each runtime as simple as possible. The JVM can then do amazing things. In a few cases we just pulled a library out into its own service and saw garbage collection drop by 20-30%. That performance gain came just from moving code around. The resulting deployment needed one instance, plus another instance running the extracted library, to do the same work that previously took three instances. Hence 1+1=3. The simplicity lets everybody run fewer instances, at lower cost, with better P99 latency; just better across the board.
What's driving that?
Sometimes workloads get conflated: you might have your primary logic in Groovy scripts alongside some security code that is CPU-bound and accidentally leaks long-lived objects. The two scenarios call for different GC characteristics, yet you have to pick one garbage collector, and it's hard to optimize for both. By breaking them apart, you can tune each runtime individually: maybe more inlining for one scenario and a different thread pool size for the other. When they're mixed together, you lose those tuning capabilities.
What would you want people to leave the talk with?
Attendees will leave with the tools needed to implement one or more of the patterns we used at Netflix. For a few of the patterns, there is a library attendees can use to implement the pattern themselves. Some patterns serve as cost-saving mechanisms. That's not necessarily why Netflix used them, but attendees could see them as opportunities to save money by implementing one of these algorithms or patterns over the course of a couple of days. One or two engineers could probably implement most of the patterns within a week. Ideally, attendees will see the patterns as accessible and be excited to try them.