Job Description
We are seeking a passionate DevOps Engineer with 1–3 years of experience to join our Data & AI Operations team. The ideal candidate has hands-on experience managing data warehouse operations, debugging ETL and orchestration workflows, and deploying machine learning models to production. This role focuses on building scalable MLOps frameworks and automated CI/CD pipelines to support our growing data and AI initiatives.
Key Responsibilities:
- Manage and optimize data warehouse operations, ensuring high availability and performance.
- Debug and maintain Informatica, SLJM, and Apache Airflow jobs and workflows.
- Deploy, monitor, and operationalize ML models, including retraining and ongoing performance evaluation.
- Design and develop scalable MLOps frameworks for model deployment, tracking, and governance.
- Build and maintain CI/CD pipelines to automate data and model workflows using Docker and Kubernetes.
- Collaborate with data scientists, data engineers, and software developers to ensure smooth integration and delivery of AI/ML solutions.
- Continuously improve automation, monitoring, and alerting across DevOps and MLOps environments.
Required Qualifications:
- Education: Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related field.
- Experience: 1–3 years of experience in DevOps, DataOps, or MLOps roles.
- Technical Skills:
  - Strong understanding of data warehouse operations.
  - Hands-on experience with Informatica, SLJM, and Apache Airflow.
  - Practical knowledge of ML model deployment and monitoring.
  - Proficiency in containerization and orchestration tools (Docker and Kubernetes).
  - Familiarity with CI/CD pipelines and version control (Git, Jenkins, etc.).
  - Good scripting and automation skills (Python, Bash, etc.).
Preferred Qualifications:
- Exposure to cloud platforms (AWS, Azure, or GCP).
- Knowledge of MLOps tools (MLflow, Kubeflow, Airflow DAGs for ML pipelines).
- Relevant certifications such as AWS DevOps Engineer or Azure DevOps Engineer.