Presentation: Data Decisions With Realtime Stream Processing

Track: Stream Processing In The Modern Age

Location: Bayview AB

Level: Intermediate

Persona: Architect, Data Engineering, Developer

What You’ll Learn

  • Learn how Facebook is using stream processing at scale.
  • Hear why it is important to relieve data scientists of the burden of knowing how stream processing works, and how Facebook does it.
  • Find out how Facebook is using SQL over stream processing.

Abstract

At Facebook, we can move fast and iterate because of our ability to make data-driven decisions. Our stream processing systems provide real-time data analytics and insights; they also power various Facebook products that aggregate data from many sources. In this talk, we cover:

  1. the difficulties of stream processing at scale
  2. the solutions we've created to date
  3. three case studies on improving the time to deliver insights with data via stream processing

Our case studies include examples from search product development, accelerating daily pipelines in the Data Warehouse, and seamless integration with our machine learning platforms. Each case study shows how we can deliver value to more teams while continuing to abstract the details of stream processing from various teams at Facebook. We conclude by speaking to the future of stream processing.

Question: 

QCon: What's the focus of your work and of the team that you're on at Facebook?

Answer: 

Rajesh: My team works on stream processing, and we are part of the real-time data organization, which focuses on faster, simpler, and smarter delivery of data. We want to reduce the time to results for the people and the data-driven products that rely on that data. Our organization encompasses stream processing, real-time monitoring, OLAP systems, and our data visualization infrastructure.

Question: 

QCon: What's the goal for this talk?

Answer: 

Rajesh: The goal of this talk is to introduce what stream processing is and how we're thinking about it at Facebook. We don't want stream processing to be an entirely new ecosystem that people code against; we want it to integrate seamlessly into the rest of Facebook's existing infrastructure. People shouldn't have to know that streaming is happening. The fact that people have to know streaming is a different platform makes it difficult for them to start consuming and leveraging it. This talk addresses why stream processing is hard, then walks through three examples of how we interact with our internal customers to give them better and more seamless access. We're obviously not finished, there is a long way to go, but it gives you a snapshot of what we've done so far.

Question: 

QCon: Is this an introductory talk to stream processing?

Answer: 

Rajesh: It introduces stream processing, but it quickly jumps into challenges like scalability and distributed aggregation. Why is distributed aggregation hard? What is involved in running multiple data centers? How do you deal with multi-petabyte storage and cross-datacenter network bandwidth, and what tips and techniques can you use for that? How do you deal with late-arriving data?
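The distributed-aggregation challenge comes down to not shipping raw events across datacenters. A common pattern, sketched below as a hypothetical illustration rather than Facebook's actual implementation, is to pre-aggregate locally on each node and merge only compact partial results:

```python
from collections import defaultdict

def local_aggregate(events):
    """Each node pre-aggregates its own (key, value) events (a combiner),
    so only compact partial sums cross the network, not raw events."""
    partial = defaultdict(lambda: [0, 0])  # key -> [count, total]
    for key, value in events:
        partial[key][0] += 1
        partial[key][1] += value
    return dict(partial)

def merge(partials):
    """A central reducer (or a tree of them) merges partial aggregates.
    Counts and sums merge associatively, so any merge order works."""
    merged = defaultdict(lambda: [0, 0])
    for partial in partials:
        for key, (count, total) in partial.items():
            merged[key][0] += count
            merged[key][1] += total
    return {k: {"count": c, "mean": t / c} for k, (c, t) in merged.items()}

# Two hypothetical "datacenters" each aggregate locally, then ship partials.
dc1 = local_aggregate([("clicks", 2), ("clicks", 4)])
dc2 = local_aggregate([("clicks", 6), ("views", 1)])
print(merge([dc1, dc2]))
# {'clicks': {'count': 3, 'mean': 4.0}, 'views': {'count': 1, 'mean': 1.0}}
```

Because the partials are associative and commutative, this shape tolerates retries and arbitrary merge trees, which is what makes it workable across datacenters with constrained network bandwidth.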

Question: 

QCon: Are you going to be focused on Spark solutions?

Answer: 

Rajesh: We have our own algorithm for handling late-arriving data. Given the myriad of garbage data you see in the world, it's important to have types, to draw sound statistical conclusions, to decide which windows you are going to keep open, and to choose how you track time. I'm going to spend some time on that.
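The window and time-tracking decisions he mentions can be illustrated with a generic textbook-style sketch (this is not Facebook's actual algorithm): a tumbling-window counter holds windows open until a watermark, taken here as the maximum event time seen minus an allowed lateness, passes them; events arriving after their window is finalized are counted as dropped.

```python
class TumblingWindowAggregator:
    """Counts events in fixed event-time windows, keeping each window open
    until the watermark (max event time seen minus allowed lateness) passes it."""

    def __init__(self, window_size, allowed_lateness):
        self.window_size = window_size
        self.allowed_lateness = allowed_lateness
        self.open_windows = {}    # window_start -> count, still accepting events
        self.closed = {}          # window_start -> final count
        self.max_event_time = 0
        self.dropped_late = 0

    def add(self, event_time):
        self.max_event_time = max(self.max_event_time, event_time)
        start = event_time - event_time % self.window_size
        if start in self.closed:
            self.dropped_late += 1  # arrived after its window was finalized
            return
        self.open_windows[start] = self.open_windows.get(start, 0) + 1
        self._advance_watermark()

    def _advance_watermark(self):
        watermark = self.max_event_time - self.allowed_lateness
        for start in list(self.open_windows):
            if start + self.window_size <= watermark:
                self.closed[start] = self.open_windows.pop(start)

agg = TumblingWindowAggregator(window_size=10, allowed_lateness=5)
for t in [1, 3, 12, 2, 18, 26]:  # t=2 arrives late but within the lateness bound
    agg.add(t)
print(agg.closed)  # {0: 3, 10: 2}
```

The trade-off is visible in the two parameters: a larger `allowed_lateness` catches more stragglers but keeps more state open longer, which is exactly the kind of decision a platform should make so data scientists don't have to.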

Question: 

QCon: Any specific platform like Flink or Spark?

Answer: 

Rajesh: We don't use any open source platform. The only thing we use is HBase for storage.

Question: 

QCon: Who's the main persona that you're talking to?

Answer: 

Rajesh: I think it's more on the software engineering side. There has been a lot of talk about Flink and Spark and how to merge them, but how do we provide a layer that automatically translates what people want into stream processing? We can't just hand people stream processing and expect them to extract as much value as possible; there are still scaling challenges. Data scientists should focus on what they're trying to compute, not how to get it done, and on the accuracy and correctness of their metrics and queries; we should focus on how to get it done, and figure out the best way to do that. That's the direction we're taking with stream processing at Facebook.

Question: 

QCon: What do you want someone who comes to your talk to leave with?

Answer: 

Rajesh: I want them to understand the kinds of challenges that exist in stream processing. Stop worrying about what is streaming and what is batching; get that as automated as possible. I'll share examples of techniques with which we've made automation work, as well as examples of the real value streaming can bring. For example, in search experimentation we were able to get results from 40 hours down to one hour. With Facebook's continuous push system, we can now do multiple search iterations within a single day. I want people to understand that stream processing in conjunction with a fast release cycle can really help. To do that, it's infeasible to have people worry about streaming and batch and their interoperability; there has to be a fluid space between them.
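The "fluid space" between streaming and batch can be pictured with a toy sketch (all names here are hypothetical, not a Facebook API): the metric is declared once as a pure function over events, and separate runners, a batch one and a toy micro-batch streaming one, decide how to execute it.

```python
# A metric is defined once, declaratively: what to compute, not how.
def error_rate(events):
    total = len(events)
    errors = sum(1 for e in events if e["status"] >= 500)
    return errors / total if total else 0.0

def run_batch(metric, events):
    """Batch runner: one pass over the full, settled dataset."""
    return metric(events)

def run_streaming(metric, event_stream, window=3):
    """Streaming runner (toy): recompute the metric per micro-batch of
    `window` events. A real engine would aggregate incrementally instead."""
    results, buffer = [], []
    for event in event_stream:
        buffer.append(event)
        if len(buffer) == window:
            results.append(metric(buffer))
            buffer = []
    return results

events = [{"status": 200}, {"status": 500}, {"status": 200},
          {"status": 200}, {"status": 503}, {"status": 500}]
print(run_batch(error_rate, events))            # 0.5 over the whole dataset
print(run_streaming(error_rate, iter(events)))  # one result per window of 3
```

The point of the sketch is the separation of concerns: the author of `error_rate` never mentions windows, watermarks, or batches, so the same definition can be run either way.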

Speaker: Serhat Yilmaz

Software Engineer @Facebook

Serhat is an engineer in the Realtime Data team at Facebook. His focus is on making the power of realtime data analytics and insights more broadly accessible, simplifying the interfaces to these systems, and improving the systems' scalability and reliability. 

 

He is currently leading the effort of developing the imperative stream processing engine at Facebook. He has worked with teams to build and scale many mission critical stream processing applications. He is also one of the co-authors of "Realtime Data Processing at Facebook" published at SIGMOD 2016. Prior to coming to Facebook, Serhat got his MS degree from University of Southern California.

Similar Talks

  • Evolution of Edge @Netflix (Vasily Vlasov, Engineering Leader @Netflix)
  • Mistakes and Discoveries While Cultivating Ownership (Aaron Blohowiak, Engineering Manager @Netflix in Cloud Infrastructure)
  • Monitoring and Tracing @Netflix Streaming Data Infrastructure (Allen Wang, Architect & Engineer in Real Time Data Infrastructure Team @Netflix)
  • Future of Data Engineering (Chris Riccomini, Distinguished Engineer @WePay)
  • Coding without Complexity (Ellen Chisa, CEO/Cofounder @darklang)
  • Holistic EdTech & Diversity (Antoine Patton, Holistic Tech Coach @unlockacademy)