Part I of this series, published in May, discussed the definition of MLOps and outlined the requirements for implementing this practice within an organisation. It also addressed some of the roles necessary within the team to support MLOps.
*Lego Alike data assembly - Generated with Gemini*
This time, we move forward by exploring part of the technical stack that could be an option for implementing MLOps.
Before proceeding, below is a link to the first part of the article for reference.
Assembling an MLOps Practice - Part 1
ML components are key parts of the ecosystem, supporting the solutions provided to clients. As a result, DevOps and MLOps have become part of the "secret sauce" for success...
Components of your MLOps stack.
The MLOps stack optimises the machine learning life-cycle by fostering collaboration across teams and delivering continuous integration and deployment (CI/CD), modular code structures, and automated workflows. This approach accelerates time-to-market while enhancing model reliability.
Components.
- Version control.
- Enables collaboration as teams contribute to the ML code.
- Essential to the CI/CD pipeline and automation.
- E.g. GitHub, Bitbucket, and others.
- Model pipeline.
- For model environments and libraries, you can leverage tools such as TensorFlow, PyTorch, Azure Machine Learning Studio, Cohere, and others.
- For CI/CD and automation, you can leverage Jenkins, GitHub Actions, Apache Airflow, Luigi, or Prefect (a minimal orchestration sketch follows this list).
- For deploying and monitoring your model's performance in the cloud, you have options like Azure ML, AWS SageMaker, Google Vertex AI, Databricks, and MLflow.
- Containerisation for your models. The usual suspects here: Docker and Kubernetes (K8s), with AKS (Azure), EKS (AWS), or GKE (Google Cloud).
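To make the orchestration piece concrete, here is a minimal sketch of a three-step training pipeline using Prefect, one of the automation options listed above. The step names and the toy training logic are illustrative assumptions, not a prescribed design:

```python
# A minimal sketch of an orchestrated model pipeline, assuming Prefect 2.x.
from prefect import flow, task


@task
def prepare_data() -> list[float]:
    # Placeholder for your real data preparation / feature engineering step.
    return [0.1, 0.4, 0.35, 0.8]


@task
def train_model(features: list[float]) -> float:
    # Placeholder "training" that returns a fake validation score.
    return sum(features) / len(features)


@task
def deploy_if_good(score: float, threshold: float = 0.3) -> None:
    # Gate deployment on model quality, a common pipeline pattern.
    if score >= threshold:
        print(f"Deploying model (score={score:.2f})")
    else:
        print(f"Skipping deployment (score={score:.2f})")


@flow
def training_pipeline() -> None:
    features = prepare_data()
    score = train_model(features)
    deploy_if_good(score)


if __name__ == "__main__":
    training_pipeline()
```

The same shape (prepare, train, gate, deploy) maps directly onto Airflow DAGs or Luigi tasks if you prefer those orchestrators.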
Approach for monitoring the ML pipeline.
You want to watch for certain things within the lifetime of your ML pipeline:
- Model drift. When the model is no longer performing because of changes in the data, leading to inaccurate predictions.
- Data drift. A change in the model's input data over time; when it occurs, you will start to notice declining model performance (see the drift-check sketch after the tool list below).
- Performance.
- Fatigue due to overwhelming alerting/notifications.
To tackle the challenges above, leverage an ML framework and tools that allow for:
- Proactive monitoring.
- Automated remediation.
- Continuous improvement.
Here is a list of tools:
- Pytorch.
- Tensorflow.
- MLflow.
- AWS Sagemaker.
- Azure ML.
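As an illustration of the data-drift point above, here is a minimal sketch of a drift check using a two-sample Kolmogorov-Smirnov test from SciPy. The synthetic data and the 0.05 threshold are assumptions for the example; in practice you would lean on the built-in monitoring features of the platforms listed above:

```python
# A minimal data-drift check: compare the distribution of a feature at
# training time against the live data the model is scoring now.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference data: what the model saw at training time.
training_feature = rng.normal(loc=0.0, scale=1.0, size=1_000)

# Live data: deliberately shifted here to simulate drift.
live_feature = rng.normal(loc=0.5, scale=1.0, size=1_000)

statistic, p_value = ks_2samp(training_feature, live_feature)

# A small p-value means the two distributions likely differ,
# i.e. the input data has drifted since training.
if p_value < 0.05:
    print(f"Data drift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print(f"No significant drift (KS={statistic:.3f}, p={p_value:.4f})")
```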
MLflow and demo.
We are going to focus on MLflow because, at least at the time of writing this article, it is one of the best options for MLOps activities, due to its versatility and ease of learning. It includes robust features for:
- Experiment tracking.
- Model management.
- Deployment.
It is also open-source, with a great community behind it and regular updates.
An additional advantage of this tool is its flexibility, which enables seamless integration with various other tools that users may prefer. Combined, these tools create a robust MLOps toolkit and provide an effective framework for project implementation. Below you will find a couple of suggestions:
- MLflow and AWS SageMaker. MLflow focuses on tracking and deployment, while SageMaker provides scalable infrastructure for model training.
- MLflow with Prometheus and Grafana. This is one of the most popular combinations you will find on the internet these days. MLflow focuses on tracking, workflows, and deployments. Prometheus captures metrics and performs real-time monitoring around the ML models, which can be leveraged for diagnostics (performance, health, etc.). Grafana complements both by visualising the data that Prometheus provides.
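To give a feel for the experiment-tracking workflow, here is a minimal sketch that trains a small scikit-learn model and logs its parameters, metric, and model artifact with MLflow. The dataset, model choice, and experiment name are illustrative assumptions; by default MLflow writes to a local ./mlruns folder, and you can point it at your own server instead:

```python
# A minimal MLflow experiment-tracking sketch.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Toy regression data standing in for your real dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

mlflow.set_experiment("mlops-demo")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestRegressor(**params, random_state=7).fit(X_train, y_train)

    mse = mean_squared_error(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # experiment tracking
    mlflow.log_metric("mse", mse)             # model performance
    mlflow.sklearn.log_model(model, "model")  # model management
```

Running `mlflow ui` afterwards lets you browse the run, its parameters, and the logged model in the tracking UI.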
Recommended article: Democratising AI. Democratising AI is all about empowering others to use it, by making it available to them...
Learnings from setting up MLflow on AWS.
Tech used:
- MLflow,
- Docker,
- Microsoft VS Code as the IDE,
- GitHub Copilot with the Gemini 2.5 Pro model,
- Terraform,
- PostgreSQL as the DB engine,
- AWS:
  - ECS,
  - ECR,
  - Secrets Manager,
  - IAM (for user, role, and policy),
  - EC2 (load balancer),
  - Aurora and RDS,
  - CloudWatch.
Prerequisites.
Install the AWS CLI.
- Instructions: AWS CLI User Guide - getting started.
- Once you install the CLI, retrieve your `Access key ID` and `Secret access key`.
- Use this command to configure the CLI: `aws configure`
- Check your work by making sure you are running the right version and the configuration is correct. Handy commands: `aws --version` and `aws configure list`
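If you also work from Python, a quick optional way to confirm the credentials you just configured are picked up is an STS call via boto3 (a sketch assuming the boto3 package is installed; GetCallerIdentity requires no special permissions):

```python
# Verify that AWS credentials from `aws configure` are being picked up.
import boto3

identity = boto3.client("sts").get_caller_identity()
print(f"Account: {identity['Account']}, ARN: {identity['Arn']}")
```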
Install Terraform.
- How-to links and sources:
  - HashiCorp documentation - install CLI.
  - Microsoft Azure - Terraform - Windows Bash.
  - HashiCorp - developer - Terraform installation.
- Terraform offers numerous useful commands, and a wealth of documentation is available, including from your preferred LLM if that is your preferred path for searching content. A good practice is to verify your work, as well as to ensure you are using the correct version. Some commands for that: `terraform --version`, `terraform plan`, `terraform show`, `terraform apply`
At the end of our internal lab practice, we ended up with a functional MLflow instance on AWS. See the image and code repo link below.
*MLflow running on Beolle - AWS instance*
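Once the instance is up, clients reach it through the load balancer. Below is a minimal sketch of pointing an MLflow client at the server; the URL is a hypothetical placeholder for your load balancer's DNS name, not the address of our instance:

```python
# Connect an MLflow client to a remote tracking server and list experiments.
import mlflow
from mlflow.tracking import MlflowClient

# Hypothetical placeholder: substitute your load balancer's DNS name.
mlflow.set_tracking_uri("http://your-alb-dns-name:5000")

client = MlflowClient()
for exp in client.search_experiments():
    print(exp.experiment_id, exp.name)
```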
Key takeaways.
- Whether you are new to Terraform or quite experienced, when using GitHub Copilot as a coding assistant, don't simply accept its code without question. It's important to evaluate its quality and security, and to ensure the logic remains consistent. This final point is crucial because the assistant offers a range of possible code flows, which might diverge and affect the overall design of your code and the services you've chosen to implement.
- LLMs are good coding assistants, and one was helpful in getting this done. However, do not fall into the trap of letting it do everything. Take time to learn and understand what you are producing. Also keep in mind that for production readiness you need to follow your quality and security controls.
- One important lesson was that Terraform needs all the necessary privileges to run and set up the required AWS services. Keep this in mind and enjoy the process; have some fun!
Public GitHub repo.
We plan to update the repo soon. Meanwhile, feel free to ask a friend who knows AWS and Terraform for help, or use your favourite LLM to assist you with that part. Good luck!



