Dillon Erb, CEO of Paperspace, a cloud-native machine learning platform, examines the challenges developers face in getting real value from ML programs
1. Tell us how you came to be the co-founder and CEO at Paperspace. How did developing a cloud-native machine learning platform become your career path?
I’m the co-founder and CEO of Paperspace, where we are building a cloud-based MLOps platform for machine learning teams. Prior to co-founding Paperspace, I worked in architecture and engineering, most notably looking at structural performance within large building structures. I’ve spent a significant amount of time developing genetic and evolutionary algorithms for tensile structures. My expertise is in low-level optimization problems and designing the ways we can more easily manage and interact with them. These days I spend most of my time applying that background in optimization techniques to cloud GPU pipelines and the exciting and emerging deep learning space.
2. How has the market for developer tools with AI evolved over the last decade?
We’re currently experiencing a kind of Cambrian explosion of developer tools in machine learning. The market for MLOps (machine learning + DevOps) tools today looks a lot like software engineering in the 1990s – the vast majority of teams are hacking together their own workflows, and best practices that emphasize determinism and tight feedback loops are only just coming into vogue. The limiting factor in machine learning today is the lack of tooling – and we’re trying to fix that!
3. What are the current challenges in the MLOps space?
What we want to see is a world in which operationalized models deliver business value quickly and reliably – and in which teams feel empowered to deliver great results through a consistent pipeline of machine learning output. This is why we need good MLOps tooling designed to standardize and streamline the lifecycle of ML in production.
Enterprise teams are finding it difficult to get real value from ML efforts right away. The development cycle is slow, difficult to scale, and overly reliant on manual work, and collaboration is genuinely difficult.
4. How do you think the rise of cloud-native technologies has impacted the development of ML applications?
It’s a complex problem just to get five people on an ML team rowing in the same direction. Organizations also have different compute requirements – some need to use AWS or Azure; others need to keep their sensitive data air-gapped from any public cloud. What we’re doing is building a toolset that will deterministically manage your compute no matter which cloud services you choose to use for ML.
We addressed the demand for multi-cloud with the latest iteration of Gradient, which includes options for cloud installations as well as a DIY cluster installer to be able to bring Gradient with you into your environment of choice.
5. The CI/CD approach is a huge trend in DevOps right now. How does Paperspace carve out its role in this?
We were one of the first companies talking about CI/CD in a machine learning context and it’s at the heart of everything we do. We want ML teams to have the same determinism, reproducibility, and development velocity as standard software engineering teams.
Our Gradient platform accelerates and scales the development and deployment of machine learning models by offering shared primitives to machine learning teams: notebooks, experiments, jobs, models, and pipelines.
When teams have a common framework they can accelerate the development cycle greatly and start delivering real value faster.
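The determinism and reproducibility mentioned above can be sketched in miniature. The example below is purely illustrative and not Paperspace or Gradient code: a toy "training" run where all randomness flows from a single seed in the run config, so CI can re-run the job and assert that the resulting model artifact is bit-for-bit identical — the ML analogue of a reproducible build.

```python
import hashlib
import json
import random

def train(config):
    """Toy 'training' run: fit a noisy linear model by random search.
    Illustrative only -- all randomness is derived from config['seed'],
    so the same config always yields the same artifact."""
    rng = random.Random(config["seed"])  # single source of randomness
    data = [(x, 2.0 * x + rng.gauss(0, 0.1)) for x in range(20)]
    best_w, best_loss = None, float("inf")
    for _ in range(200):
        w = rng.uniform(0.0, 4.0)
        loss = sum((y - w * x) ** 2 for x, y in data) / len(data)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return {"weight": round(best_w, 6), "loss": round(best_loss, 6)}

def fingerprint(artifact):
    """Hash the serialized artifact so a CI step can assert reproducibility."""
    payload = json.dumps(artifact, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

config = {"seed": 42}
run_a = train(config)
run_b = train(config)
# Deterministic pipeline: identical config implies identical artifact.
assert fingerprint(run_a) == fingerprint(run_b)
```

In a real pipeline the fingerprint would cover the training code version, the dataset snapshot, and the environment as well as the seed; the principle is the same — every input to a run is pinned, so the run can be replayed and verified like any other CI build step.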
6. We recently saw that Paperspace joined NVIDIA’s DGX program. Can you tell us how this came about and what it means for the ML platform industry?
We’ve worked together with NVIDIA for a long time and Gradient, our MLOps platform, is now certified under the new NVIDIA DGX-Ready Software program. What this means is that NVIDIA engineering vetted our MLOps software for compatibility with the latest advanced NVIDIA product line and they’ve certified that it will help you get more value out of your NVIDIA hardware purchase.
7. What do you think will be the next big breakthrough in the machine learning platform technology space?
A lack of common tooling is the number one hurdle holding back ML advancement. I’m confident MLOps will experience explosive growth in 2020 and pretty soon MLOps will have just as much institutional backing as DevOps. I also believe ML developer tools will continue to become more user-friendly with better front-ends, as more and more infrastructure is abstracted. AI chips and hardware will continue to de-commoditize, and more specialized chips will come to the forefront. Lastly, I believe 2020 will be remembered as the year the Chief AI Officer began to gain traction as an essential enterprise role in software companies.
8. What advice do you have for a company or an individual starting out in the cloud native machine learning platform development space?
I’d tell them to build something from the ground-up and not rely solely on the knowledge of what exists today. The field is moving extremely fast. Reading the latest research papers is a great way to self-study, but nothing beats hands-on experience.
I’d suggest joining a community and making as many connections as possible. Finding like-minded collaborators is an essential part of growing as a developer, and it helps you sharpen your communication skills. My mantra is “you don’t really understand something until you can explain it well.” In my view, it’s also the fastest way to figure out what you’re really interested in and care about. For aspiring leaders, I’d advise listening with empathy to as many different opinions within the company as possible. It’s just as important to hear what someone is saying as it is to understand why they’re saying it – empathy is critical.
As the co-founder and CEO of Paperspace, Dillon Erb uses his extensive experience in engineering and optimization to drive the company’s Kubernetes-based MLOps platform, so that data scientists and machine learning developers can build cloud-native applications effortlessly. Dillon and co-founder Daniel Kobran aim to accelerate cloud AI with frameworks accessible to all.