We are seeking a talented individual to join our team and play an essential role in designing and implementing our MLOps platform. As a Machine Learning Engineer (MLE) focusing on MLOps, you will be responsible for creating and optimizing the core modules of the MLOps platform, from the computation layer to the orchestration layer. Working alongside our data scientists, you will enhance their workflows with efficient and effective solutions developed by the MLOps team. Your role will also involve maintaining and optimizing standard tooling such as CI/CD pipelines and managing Kubernetes deployments. Additionally, you will play a vital role in monitoring and optimizing system performance and cloud resource consumption.
MoMo is the market leader in mobile payments in Vietnam. We are making Machine Learning a core component of almost every part of our product: product recommendation, personalization, risk scoring, fraud detection, ads, and promotion targeting. As our product grows, we are investing in scaling our MLOps platform to support Machine Learning applications that serve tens of millions of users.
What you will do
Collaboration with Data Scientists: You will work closely with our data scientists to understand their requirements and challenges and provide them with tailored solutions to optimize their workflows. By building a strong synergy with the data science team, you will ensure seamless integration of MLOps practices into the existing processes.
Performance Monitoring and Cloud Resource Optimization: You will be tasked with monitoring the performance of the MLOps platform and proactively identifying areas for optimization. This includes analyzing system performance, ensuring low latency, and optimizing cloud resource consumption for cost-effective operations.
MLOps Platform Development: Your primary responsibility will be to design, implement, and optimize the core modules of our MLOps platform. This includes developing solutions that enhance the scalability, efficiency, and robustness of our machine learning workflows.
Maintenance of CI/CD Pipelines and Kubernetes Deployments: As part of the MLOps team, you will maintain and optimize our Continuous Integration/Continuous Deployment (CI/CD) pipelines and manage Kubernetes deployments. This will involve streamlining the development and deployment processes to ensure efficiency and reliability.
What you will need
ML Modeling Experience: You should have hands-on ML modeling experience covering the entire process, from conceptualizing ideas to developing, evaluating, and deploying models. Expertise in monitoring and managing model performance after deployment is an added advantage.
Cloud Provider Experience: You should have hands-on experience with cloud providers such as Google Cloud Platform (GCP) or Amazon Web Services (AWS). Your familiarity with cloud services and infrastructure will be essential in architecting scalable and reliable MLOps solutions.
Educational Background: You should possess a Bachelor’s or Master’s degree in Computer Science or a related field with a strong focus on Machine Learning or Data Engineering. Your academic background will provide a solid foundation for understanding complex ML systems and their integration within the MLOps platform.
Proficiency in Microservices Architecture: Strong expertise in designing efficient and well- architected microservices is crucial for this role. Your ability to develop modular and maintainable code will contribute to the success of the MLOps platform.
Record of Scalable System Development: We seek candidates with experience building systems that serve many users or require low-latency performance. Your track record of delivering high-quality, scalable solutions will be highly valued.