Machine Learning Deployments with Diego Oppenheimer

Machine learning models allow our applications to perform highly accurate inferences. A model can be used to classify a picture as a cat, or to predict what movie I might want to watch. But before a machine learning model can be used to make these inferences, the model must be trained and deployed.

In the training process, a machine learning model consumes a data set and learns from it. The training process can consume significant resources. After the training process is over, you have a trained model that you need to get into production. This is known as the “deployment” step.
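To make the two steps concrete, here is a minimal training sketch in Python. It assumes scikit-learn and joblib are available; the iris dataset and the model.joblib filename are illustrative stand-ins, not anything discussed in the episode.

```python
# Training step: fit a model on a dataset and save the trained artifact
# so it can be deployed separately. (Illustrative sketch, not from the episode.)
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

X, y = load_iris(return_X_y=True)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # in real systems, this is the resource-intensive part

joblib.dump(model, "model.joblib")  # the artifact handed off to deployment
```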

Deployment can be a hard problem. You are taking a program from a training environment to a production environment, and a lot can change between the two. In production, your model runs on a different machine, which can lead to compatibility issues. If your model serves a high volume of requests, it might need to scale out. In production, you also need caching, monitoring, and logging.
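As one simple illustration of what the deployment step can look like, the sketch below loads the artifact from the training example and serves predictions over HTTP with Flask. The endpoint name and port are arbitrary choices, and none of this reflects Algorithmia's actual system; it only shows the gap between a training script and a serving process.

```python
# Deployment step: load the trained artifact in a serving process and expose it
# over HTTP. (Illustrative sketch only; not Algorithmia's system.)
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # load once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    # In production this would sit behind a proper WSGI server, plus the
    # caching, monitoring, and logging mentioned above.
    app.run(host="0.0.0.0", port=8080)
```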

Large companies like Netflix, Uber, and Facebook have built their own internal systems to control the pipeline of getting a model from training into production. Companies that are newer to machine learning can struggle with this deployment process, and they usually don’t have the resources to build their own machine learning platform the way Netflix has.

Diego Oppenheimer is the CEO of Algorithmia, a company that has built a system for automating machine learning deployments. This is the second cool product that Algorithmia has built, the first being the algorithm marketplace that we covered in an episode a few years ago.

In today’s show, Diego describes the challenges of deploying a machine learning model into production, and how this deployment product is a natural complement to the algorithm marketplace. Full disclosure: Algorithmia is a sponsor of Software Engineering Daily.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.


Sponsors

DoiT International helps startups optimize the costs of their workloads across Google Cloud and AWS, so that they can spend more time building new software–and less time reducing cost. DoiT International helps clients optimize their costs–and if your cloud bill is over $10,000 per month, you can get a free cost-optimization assessment by going to doit-intl.com/sedaily.

An in-house team of engineers spent thousands of hours developing the Casper mattress. Casper offers free delivery and free returns with a 100-night home trial. As a special offer to Software Engineering Daily listeners, get $50 toward select mattresses by visiting casper.com/sedaily and using code SEDAILY at checkout.

A thank you to our sponsor, Datadog, a cloud monitoring platform bringing full visibility to dynamic infrastructure and applications. Start a free trial today & Datadog will send you a free T-shirt! Visit softwareengineeringdaily.com/datadog to get started.  

GoCD is a continuous delivery tool created by ThoughtWorks. It’s great to see the continued progress on GoCD with the new Kubernetes integrations–and you can check it out for yourself at gocd.org/sedaily.