Presentation: Continuous Optimization of Microservices Using ML
Abstract
Performance tuning of microservices in the data center is hard: there are many tunable knobs, many microservices, and widely varying workloads, which together make the problem combinatorially intractable. Maintaining optimal performance in the face of continuous upgrades to the service, as well as to the platform software and hardware, makes the problem even harder. As a result, significant performance is typically left on the table and data center resources are wasted. We share our recent experiences applying a technique from machine learning, called Bayesian optimization, to the performance tuning problem, and describe the implementation of a service that uses it to continuously optimize microservices in the data center.
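To make the technique concrete, below is a minimal one-dimensional sketch of Bayesian optimization: a Gaussian-process surrogate with an RBF kernel models an unknown objective, and an expected-improvement acquisition function picks the next knob setting to try. The `latency` objective, the single `[0, 1]` knob, and all parameter values are illustrative assumptions for this sketch, not details from the talk's actual service.

```python
import math
import random

def latency(x):
    # Hypothetical objective: p99 latency as a function of one
    # normalized tuning knob in [0, 1] (e.g. a cache-size setting).
    return (x - 0.7) ** 2 + 0.05 * math.sin(20 * x)

def kernel(a, b, length_scale=0.3):
    # Squared-exponential (RBF) covariance between two knob settings.
    return math.exp(-((a - b) ** 2) / (2 * length_scale ** 2))

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def inverse(A):
    # Invert the (symmetric) GP Gram matrix column by column;
    # symmetry lets us collect the solved columns as rows.
    n = len(A)
    return [solve(A, [1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]

def expected_improvement(mu, var, best):
    # Closed-form expected-improvement acquisition for minimization.
    sigma = math.sqrt(max(var, 1e-12))
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + sigma * pdf

random.seed(0)
xs = [random.random() for _ in range(3)]  # a few random initial trials
ys = [latency(x) for x in xs]

for _ in range(15):  # sequential Bayesian-optimization iterations
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    K_inv = inverse(K)
    alpha = [sum(K_inv[i][j] * ys[j] for j in range(n)) for i in range(n)]
    best = min(ys)

    def ei_at(xq):
        # GP posterior mean/variance at candidate xq, then its EI score.
        k_star = [kernel(x, xq) for x in xs]
        mu = sum(k_star[i] * alpha[i] for i in range(n))
        var = 1.0 - sum(k_star[i] * K_inv[i][j] * k_star[j]
                        for i in range(n) for j in range(n))
        return expected_improvement(mu, var, best)

    # Pick the next knob setting by maximizing EI over a candidate grid.
    xq = max((i / 200 for i in range(201)), key=ei_at)
    xs.append(xq)
    ys.append(latency(xq))

best_x = xs[ys.index(min(ys))]
print("best knob setting:", round(best_x, 2), "latency:", round(min(ys), 3))
```

In a production setting the surrogate and acquisition steps would come from a library and the objective would be a live measurement over many knobs, but the loop structure — fit surrogate, maximize acquisition, measure, repeat — is the same, which is what makes the method a fit for continuous re-tuning as services and platforms change.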
Similar Talks
Machine Learning on Mobile and Edge Devices With TensorFlow Lite
Developer Advocate for TensorFlow Lite @Google and Co-Author of TinyML
Daniel Situnayake
Self-Driving Cars as Edge Computing Devices
Sr. Staff Engineer @UberATG
Matt Ranney
Stateful Programming Models in Serverless Functions
Principal Engineering Manager @Microsoft, helping lead the Azure Functions Team
Chris Gillum
Coding without Complexity
CEO/Cofounder @darklang
Ellen Chisa
User & Device Identity for Microservices @ Netflix Scale
Senior Software Engineer in Product Edge Access Services Team @Netflix
Satyajit Thadeshwar
CI/CD for Machine Learning
Program Manager on the Azure DevOps Engineering Team @Microsoft
Sasha Rosenbaum
Observability in the Development Process: Not Just for Ops Anymore
Cofounder @honeycombio
Christine Yen
ML's Hidden Tasks: A Checklist for Developers When Building ML Systems
Senior Machine Learning Engineer @teamretrorabbit
Jade Abbott
From POC to Production in Minimal Time - Avoiding Pain in ML Projects
Chief Science Officer @StoryStreamAI