Track: Performance Mythbusting

Location: Bayview AB


Real-world, applied performance proofs across stacks. Hear performance considerations for .NET, Python, and Java. Learn from performance use cases at OpenJ9, Instagram, and Netflix.

Track Host: Monica Beckwith

Java Champion, First Lego League Coach, passionate about JVM Performance @Microsoft

Java Champion Monica Beckwith is a recognized subject matter expert on JVM/JIT compilation and garbage collection (GC), with several published articles and regular invitations to speak on these topics. She is also a JavaOne Rock Star.

Monica has made various performance contributions to the Java HotSpot VM: identifying the need for a NUMA-aware allocator and allocation patterns, reducing redundant instructions, reducing the Java object header, improving prefetching patterns, eliminating redundant array checks in loops, and various other optimizations for the JIT compiler, the generated code, JVM heuristics, and garbage collection and collectors.

Prior to joining Microsoft, Monica was the JVM Performance Architect at Arm. Her past also includes leading Oracle’s Garbage First Garbage Collector performance team.

Asynchronous API With CompletableFuture

Since Java 8, CompletableFuture has enabled asynchronous, future-based programming in Java and is one of the most powerful features for building asynchronous APIs. This presentation, based on real project experience, goes beyond CompletableFuture's public API: it reveals internal details and shows how you can benefit from them for better performance.
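To illustrate the future-based style the abstract refers to, here is a minimal sketch; the `fetchUserName` service is a hypothetical example, not taken from the talk:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncApiSketch {
    // Hypothetical async API: look up a user name off the caller's thread.
    static CompletableFuture<String> fetchUserName(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static void main(String[] args) {
        String greeting = fetchUserName(42)
                .thenApply(name -> "Hello, " + name) // runs when the lookup completes
                .exceptionally(t -> "Hello, guest")  // fallback if the lookup fails
                .join();                             // block only at the program's edge
        System.out.println(greeting); // prints "Hello, user-42"
    }
}
```

Note that `supplyAsync` runs on the common ForkJoinPool by default; the talk's performance discussion concerns details like these that sit behind the public API.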

Sergey Kuksenko, Java Performance Engineer @Oracle

You Can Build a World-Class Search Engine in .NET

Microsoft's online services, especially Bing, are some of the most important proving grounds for running .NET in large-scale, highly available systems. The platform that underlies Bing also runs significant online functionality for Cortana, Office, Xbox, Windows, and more.

When deciding how to build core infrastructure for the next version of Microsoft's query-serving platform, we had to make a number of hard choices. First and foremost? Whether to use .NET or stick with C++.

This talk will discuss the ramifications of choosing .NET, why it was the right choice for us, and how much we had to learn about writing high-performance, high-availability software on this platform. We'll also cover some of the myths we busted along the way, and why understanding them will help you apply these principles in your own software.

Ben Watson, Principal Software Engineer @Microsoft focused on High-Performance .NET

Understanding Python Memory at Instagram

Instagram's server is one of the biggest Python deployments in the world, supporting more than 700M active users. At Instagram, computing parallelism is based on multi-processing instead of threading. Memory utilization becomes critical in such a model: with less memory per process, we can increase parallelism and hence overall capacity. In this talk, we will start with how Python memory profiling is done at Instagram, what useful insights we gained from the profiling data, and how those insights turned into efficiency wins for Instagram's servers. We will also share lessons learned from tuning and improving Python's memory garbage collection.

Min Ni, Engineering Manager @Instagram

Performance Mythbusting Panel

Monica Beckwith, Java Champion, First Lego League Coach, passionate about JVM Performance @Microsoft
Sergey Kuksenko, Java Performance Engineer @Oracle
Ben Watson, Principal Software Engineer @Microsoft focused on High-Performance .NET
Min Ni, Engineering Manager @Instagram
Ioannis Papapanagiotou, Senior Software Engineer @Netflix
Vinay Chella, Cloud Data Architect @Netflix

NDBench: Benchmarking Microservices at Scale

Netflix runs thousands of microservices to serve more than 100M users every day. These services are backed by a large fleet of data store instances running on the public cloud. It is nearly impossible to predict the traffic patterns our architecture imposes on our data stores. We needed a framework to help us determine the behavior of our platform systems under various workloads. We wanted to be mindful of provisioning our clusters, scaling them either horizontally (by adding nodes) or vertically (by upgrading the instance types), and operating under a variety of conditions, such as node failures, network partitions, etc.

To address those complexities, we designed a benchmarking system for Netflix's cloud platform that can mimic the performance of production use cases. By integrating dynamic configuration management, middle-tier load balancing, and metrics, we can study the effect of different workload parameters. This helped us identify potential memory leaks and garbage collection issues. In addition, it allowed us to test the impact of long-running maintenance jobs such as database repairs or reconciliation. We will showcase how the deployment, management, and monitoring of multiple instances can be done from a single entry point (UI). Finally, we will show how we integrated the benchmarking tool into our release lifecycle.
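As a rough illustration of the workload-measurement idea behind such a framework (this is not NDBench itself; the stand-in operation and percentile reporting are simplified assumptions):

```java
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public class MiniBench {
    // Hypothetical stand-in for a data store call; a real benchmark
    // like NDBench drives actual client drivers instead.
    static long doOperation() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) sum += ThreadLocalRandom.current().nextInt(10);
        return sum;
    }

    // Time n operations and return their latencies, sorted, in nanoseconds.
    static long[] measure(int n) {
        long[] latencies = new long[n];
        for (int i = 0; i < n; i++) {
            long start = System.nanoTime();
            doOperation();
            latencies[i] = System.nanoTime() - start;
        }
        Arrays.sort(latencies);
        return latencies;
    }

    public static void main(String[] args) {
        long[] lat = measure(10_000);
        // Sorted latencies let us read off percentiles directly.
        System.out.printf("p50=%dns p99=%dns%n",
                lat[lat.length / 2], lat[(int) (lat.length * 0.99)]);
    }
}
```

Percentile latencies (rather than averages) are what reveal the GC pauses and tail effects the abstract mentions.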

Ioannis Papapanagiotou, Senior Software Engineer @Netflix
Vinay Chella, Cloud Data Architect @Netflix

Performance Beyond Throughput: An OpenJ9 Case Study

Curious about Java application and JVM performance and how they continue to evolve? Come to this talk to learn about exciting results and new advancements in JVM performance using the latest open source JVM technology, Eclipse OpenJ9, running with OpenJDK. We'll talk about new performance boosts across a wide variety of applications and present results using different workloads and metrics, to give you a fuller picture of what to expect from OpenJ9.

We will also explore some common low-level Java performance problems and show how to look for these issues in an application. Low-level performance bottlenecks can be more challenging to diagnose, since they can arise either from the OS kernel or from performance-critical parts of the JVM such as the garbage collector (GC) or the just-in-time (JIT) compiler.

Rather than focusing on any single monitoring tool, we will explain the data you need to gather and provide examples of how to do so using system commands and profiling tools (like Linux perf), as well as the JVM's own tracing and logging capabilities. The view from "the bottom of the stack" can help in finding and fixing some stubborn performance problems often missed by high-level performance analysis tools.
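As one small, concrete example of the JVM-side introspection the abstract alludes to, the standard `java.lang.management` API exposes per-collector counts and times. This is a minimal sketch (collector names and counts vary by JVM, including between HotSpot and OpenJ9):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Allocate some short-lived garbage so the counters are likely non-zero.
        for (int i = 0; i < 100_000; i++) {
            byte[] scratch = new byte[1024];
        }
        // One MXBean per garbage collector the running JVM uses.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Tools like Linux perf then complement this view from below, at the OS-kernel and generated-code level.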

Marius Pirvu, Advisory Software Developer @IBM
