From Jupyter Notebooks to deploying Machine Learning: MLOps in the field of data science
Unlocking the potential of Machine Learning: moving beyond Jupyter Notebooks towards MLOps proficiency.
Know-How
reading time: 10 min
Flavia Cristian
Your first data science project. Or maybe your 15th already. Let's say it is a machine learning project for price prediction, and you want to measure the predictive performance with the R-squared metric. As usual, you start your experiment in a Jupyter Notebook, making use of popular Python libraries such as Pandas or Scikit-learn. The end result is quite satisfactory: you have reached an R-squared score of 0.9. What's more, you're finally done, and you want to deploy your ML model in a Docker container to see it in action. However, as with a lot of projects, it never makes it into production, and nobody actually ends up using the price prediction service.
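As a concrete illustration, a minimal notebook-style version of that experiment might look like the sketch below; the CSV file, column names, and model choice are hypothetical stand-ins.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical price data set: the file and column names are stand-ins.
df = pd.read_csv("prices.csv")
X, y = df.drop(columns=["price"]), df["price"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# The score the scenario above describes, e.g. roughly 0.9 on the held-out split.
print("R-squared:", r2_score(y_test, model.predict(X_test)))
```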
This scenario might sound a little gloomy, but unfortunately it is common in the AI industry. Data science as a field is still maturing, so we often don't know how to make it past the experimental phase. Once you have created a model and made some predictions, that is when the difficult part starts: deploying it into production and operating it as a continuous service.
Some challenges we often see are the lack of:
central monitoring: not having an overview of your results, including outliers in the input data
documentation and a reproduction path: not having a standardized format
request/response store: data loss and not being able to compare historic data to generate insights
standardized security and access control
versioning: no programmatic approach for integrating development iterations
infrastructure scalability and availability: system compatibility and component reusability
To tackle these hurdles, there is MLOps. Like DevOps, the practice of MLOps (Machine Learning Operations) aims to close the gap between model development and production usage. Although good for learning and experimenting, building an ML model in a Jupyter Notebook doesn't deliver any real business value, at least not until your model is integrated into an ML system and continuously operated in production. The latter is what makes machine learning truly valuable: without production, you'll never be able to leverage it in your business or make data-driven decisions.
By now, we can agree that MLOps is the key to turning your models into ML systems, and that any ML system should be operated according to DevOps principles, or in our case, MLOps principles: continuous integration (CI), continuous delivery (CD), and, since we are still talking about a data science project, continuous training (CT).
CI requires you to test and validate code and components. For machine learning, CI extends beyond code to the data itself, so you will also need data validation scripts to handle issues with the input data that might arise. CD plays a crucial role in speeding up development iterations. To support data scientists with CI/CD tasks, there are many MLOps tools available. Unfortunately, most of them either lock you into their system by requiring a vendor-specific format or require you to spend a lot of time setting them up. Some are even missing features like retraining or dashboards that visualize your model's performance.
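A data validation script of the kind mentioned above can be quite small. Here is a minimal sketch, assuming a hypothetical schema for the price data from the notebook example:

```python
import pandas as pd

# Hypothetical schema for the price data set used throughout this article.
EXPECTED_DTYPES = {"area_m2": "float64", "rooms": "int64", "price": "float64"}

def validate_batch(df: pd.DataFrame) -> list:
    """Return a list of issues found in a batch; an empty list means it passed."""
    issues = []
    for col, dtype in EXPECTED_DTYPES.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    if df.isna().any().any():
        issues.append("batch contains missing values")
    if "price" in df.columns and (df["price"] < 0).any():
        issues.append("negative prices found")
    return issues
```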
navio is an MLOps platform that supports you from the model development process all the way to running your model in production. With navio you can continue to work with your favorite Python frameworks while developing the model, because it supports model interoperability by leveraging MLflow. Since navio is a vendor-agnostic product, it doesn't restrict you in any way: you can package your model independently of the framework you use, be it Scikit-learn, TensorFlow, Apache MXNet, PyTorch, XGBoost, or others.
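Because the interoperability comes from MLflow's model format, packaging the fitted regressor from the notebook sketch above takes only a couple of lines; the target path here is arbitrary:

```python
import mlflow
import mlflow.sklearn

# Persist the fitted regressor from the notebook sketch above in MLflow's
# standard model format; the target path is arbitrary.
mlflow.sklearn.save_model(model, path="price_model")

# Any MLflow-format model can be reloaded through the framework-agnostic
# pyfunc interface, no matter which library trained it.
loaded = mlflow.pyfunc.load_model("price_model")
```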
Deployment with navio is seamless, freeing you from the hassle and complexity of setting up a robust, reliable, and scalable infrastructure, and allowing you to focus on what's important: solving real problems with innovative machine learning models. navio automatically generates secure REST endpoints, so you can integrate your model into any application, machine, or device. navio acts as a centralized model store and model management system, on-premise or in the cloud. Upload any custom Python-compatible model to navio and use its sleek UI to easily manage, deploy, and integrate your models without writing a line of code. You can also use the developer API to achieve the same result and automate it.
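Consuming such an endpoint from an application then boils down to a plain HTTP request. Note that the URL, token, and payload shape below are hypothetical placeholders; the actual request format depends on your deployment:

```python
import requests

# Hypothetical placeholders: the endpoint URL, token, and payload shape
# depend entirely on your actual deployment.
URL = "https://navio.example.com/models/price-model/predict"
HEADERS = {"Authorization": "Bearer <your-token>"}

response = requests.post(
    URL,
    json={"area_m2": 85.0, "rooms": 3},
    headers=HEADERS,
    timeout=10,
)
response.raise_for_status()
print(response.json())
```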
You also don’t have to develop any access management components yourself, allowing you to focus on the real data science tasks at hand. MLOps revolves around transforming the process of creating a model into a shared machine learning pipeline. One data scientist doesn’t need to develop the whole data science project, but rather a team can work on parts of the project such as data exploration, model creation or testing and consequently speeding up delivery. Using navio, you can leverage the included access control features to manage multiple deployments and secure access to your deployed models with the built-in security features.
You can also take advantage of the central monitoring user interface. Machine learning is experimental in nature, and there is no guarantee that your models will keep producing accurate and reliable results. In addition, "often, in production pipelines, data [is] missing, incomplete, or corrupted, causing model performance to sharply degrade" (Shankar & Garcia, 2022). In short, errors or changes in the input data occur frequently, and if they are not monitored correctly, the model's performance can degrade notably. A monitoring interface like navio's can give you the peace of mind you need. Furthermore, by pushing your models into production, your project benefits from model overview capabilities for business users: the user interface allows you to showcase SHAP values as default explanations for your models, as well as complex custom explanations you have created, driving business value and data-driven decision-making for your stakeholders.
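To make the monitoring idea tangible, here is a deliberately crude drift check one could run on incoming batches. A monitoring UI automates this kind of comparison; the z-score threshold here is an arbitrary assumption:

```python
import numpy as np
import pandas as pd

def drift_alerts(reference: pd.DataFrame, live: pd.DataFrame,
                 z_threshold: float = 3.0) -> list:
    """Flag numeric features whose live mean strays far from the training mean.

    A deliberately crude z-score check; the threshold is an arbitrary assumption.
    """
    alerts = []
    for col in reference.select_dtypes(include=np.number).columns:
        ref_mean, ref_std = reference[col].mean(), reference[col].std()
        if not ref_std:  # constant feature, nothing to compare against
            continue
        z = abs(live[col].mean() - ref_mean) / ref_std
        if z > z_threshold:
            alerts.append(f"{col}: live mean is {z:.1f} sigma from training data")
    return alerts
```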
To ensure that your models reflect current conditions accurately, you need to retrain them continuously. To make your data science life easier, you can retrain your models directly within navio whenever more data becomes available.
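Continuing the running example, a retraining step might look like this; both file names are hypothetical:

```python
import pandas as pd
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor

# Fold the newly collected batch into the training data, refit, and package
# the result as a new model version. Both file names are hypothetical.
data = pd.concat(
    [pd.read_csv("prices.csv"), pd.read_csv("new_batch.csv")],
    ignore_index=True,
)
model_v2 = RandomForestRegressor(n_estimators=200, random_state=42)
model_v2.fit(data.drop(columns=["price"]), data["price"])
mlflow.sklearn.save_model(model_v2, path="price_model_v2")
```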
As Shankar and Garcia (2022) note, there is a list of strategies ML engineers can employ during monitoring and debugging to sustain model performance post-deployment. Two of them are:
Creating a new version: frequently retrain on live data and label it accordingly
Maintaining an old version: minimize downtime by reverting to a fallback model
Replacing a currently deployed model with an existing fallback model is quick and easy. By managing all your retrained models in one use case, you can switch them out easily to compare results. You can replace your model with a retrained version without having to regenerate the endpoints or reconfigure access, which ultimately speeds up collaboration between departments and ensures an efficient workflow.
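A guarded rollout along these lines can be expressed in a few lines. This is a minimal sketch, assuming you hold out an evaluation set of your own choosing:

```python
from sklearn.metrics import r2_score

def choose_model(candidate, fallback, X_eval, y_eval):
    """Keep the fallback unless the retrained candidate actually wins.

    A minimal guarded-rollout sketch; in practice the evaluation set and
    acceptance criterion are yours to define.
    """
    candidate_r2 = r2_score(y_eval, candidate.predict(X_eval))
    fallback_r2 = r2_score(y_eval, fallback.predict(X_eval))
    return candidate if candidate_r2 >= fallback_r2 else fallback
```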
So, to sum up: for a sustainable, professional machine learning environment, you need to ensure the following:
Developing with the production environment in mind: push your code to a repository and deploy it to navio. Establish pipelines to build better systems collaboratively.
Continuous Training: integrate new input data into your project running on navio. Keep testing to set standards for your production models.
Continuous Integration: continuously upload new models to navio, where you can compare them to previous versions, preferably programmatically as part of the ML pipeline to save time and shorten the development lifecycle.
Continuous Deployment: continuously improve a deployed model by assigning new versions to existing deployments without changing any interface or infrastructure. This ensures that you're always working with the latest and best model for any use case as it improves over time. Automate it to save time and build towards a continuously improving system; a hypothetical automation sketch follows this list.
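To illustrate what such automation could look like, here is a sketch in Python. The developer-API routes, payloads, and identifiers below are hypothetical placeholders, not navio's documented API:

```python
import requests

# Purely illustrative: all routes, payloads, and identifiers below are
# hypothetical placeholders, not navio's documented developer API.
BASE = "https://navio.example.com/api"
HEADERS = {"Authorization": "Bearer <your-token>"}

# Upload the packaged model produced by the retraining step.
with open("price_model_v2.zip", "rb") as f:
    upload = requests.post(f"{BASE}/models", files={"file": f},
                           headers=HEADERS, timeout=60)
upload.raise_for_status()
model_id = upload.json()["id"]

# Point the existing deployment at the new version; the REST endpoint,
# access configuration, and consuming applications stay unchanged.
requests.put(
    f"{BASE}/deployments/price-service",
    json={"model_id": model_id},
    headers=HEADERS,
    timeout=30,
).raise_for_status()
```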
Ultimately, the goal is to minimize technical friction and shorten the span between a model being an idea and running in production. The real business value of machine learning lies in the ability to bring models to market with as little risk and information loss as possible. The decision on the best tooling for reaching your goals lies in your hands.
Shankar, S., & Garcia, R. (2022, September 16). Operationalizing Machine Learning: An Interview Study. arXiv. https://arxiv.org/abs/2209.09125