Job Overview

Location: Lahore, Punjab
Job Type: Full Time
Date Posted: 2 days ago

Additional Details

Job ID: 1404
Work Mode: On-site

Job Description

Key Responsibilities

  • Data Architecture: Design and implement scalable, efficient, and reliable data architectures, including data warehousing, ETL/ELT, and data lake solutions.
  • Data Pipelines: Develop and maintain data pipelines using tools like mage.ai, Apache Beam, Apache Spark, or AWS Glue, ensuring data quality, integrity, and security (see the brief sketch after this list).
  • Data Engineering: Collaborate with data scientists and analysts to develop and implement data models, data mining, and data visualization solutions.
  • Data Quality: Implement data quality checks, data validation, and data cleansing processes to ensure high-quality data.
  • Scalability and Performance: Optimize data pipelines and architectures for scalability, performance, and reliability, ensuring low latency and high throughput.
  • Collaboration: Work closely with cross-functional teams, including Data Science, Product, and Engineering, to identify and prioritize data requirements and solutions.
  • Mentorship: Mentor junior data engineers and provide technical guidance and oversight.
  • Staying Up to Date: Stay current with industry trends, emerging technologies, and best practices in data engineering.
  • Team Leadership: Lead a small team of 3-4 developers, providing guidance, mentorship, and code reviews.
  • Collaboration: Collaborate with front-end and DevOps teams to ensure smooth deployment of code to AWS; architect, design, and implement internal and external APIs.
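For a flavor of the pipeline and data-quality work described above, here is a minimal, illustrative sketch in plain Python with pandas. The file names, required columns, and checks are hypothetical assumptions for the example only, not part of this role's actual stack.

```python
# Minimal ETL sketch: extract -> validate -> load.
# All paths and column names below are hypothetical.
import pandas as pd


def extract(path: str) -> pd.DataFrame:
    """Read raw records from a CSV source."""
    return pd.read_csv(path)


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic data-quality checks: de-duplicate, require key fields, range-check amounts."""
    df = df.drop_duplicates()
    df = df.dropna(subset=["order_id", "amount"])  # hypothetical required columns
    if (df["amount"] < 0).any():                   # simple sanity check
        raise ValueError("Negative amounts found; failing this pipeline run")
    return df


def load(df: pd.DataFrame, path: str) -> None:
    """Write the cleansed data to a columnar file for downstream analytics."""
    df.to_parquet(path, index=False)


if __name__ == "__main__":
    load(validate(extract("orders.csv")), "orders_clean.parquet")
```

In production, steps like these would typically run as tasks in an orchestrator such as Apache Airflow or mage.ai rather than as a single script.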

Requirements

  • Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field.
  • Experience: 6+ years of experience in data engineering, with a focus on building scalable data architectures and pipelines.


Technical Skills:

  • Programming languages: Python
  • Data processing frameworks: Apache Airflow, Apache Beam, AWS Glue, or similar tools (see the short sketch after this list).
  • Data storage solutions: relational databases such as MySQL, NoSQL databases such as MongoDB, and cloud-based data warehouses such as Amazon Redshift.
  • Data visualization tools: Tableau, Power BI, D3.js, or similar tools.
  • Cloud platforms: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), with AWS preferred.
  • Proficiency with modern, widely used Python libraries and the broader Python ecosystem of technologies and platforms.
  • Use of Large Language Models (LLMs) and Natural Language Processing (NLP) to aid in data acquisition, data quality, and data analytics.
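As a quick illustration of the data processing frameworks listed above, the following is a small Apache Beam (Python SDK) sketch that filters out invalid records; the record fields and the validity rule are assumptions made up for the example.

```python
# Minimal Apache Beam pipeline: create a few records, drop invalid ones, print the rest.
# The record fields and the "amount >= 0" rule are hypothetical.
import apache_beam as beam


def run() -> None:
    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "CreateRecords" >> beam.Create(
                [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -5.0}]
            )
            | "DropInvalid" >> beam.Filter(lambda row: row["amount"] >= 0)
            | "Format" >> beam.Map(lambda row: f"{row['id']},{row['amount']}")
            | "Print" >> beam.Map(print)
        )


if __name__ == "__main__":
    run()
```

The same pipeline code can be handed to a distributed runner (for example Dataflow or Spark) without changes.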

Soft Skills:

  • Excellent communication and collaboration skills.
  • Strong problem-solving skills, with the ability to work independently.
  • Experience with agile development methodologies.

Nice to Have:

  • Certifications: AWS Certified Data Engineer, Google Cloud Certified - Professional Data Engineer, or similar.
  • Experience with: Machine Learning, OCR.
  • Familiarity with: containerization (Docker), orchestration (Kubernetes), and serverless computing.

What We Offer:

  • Competitive salary and benefits package
  • Opportunity to lead a small team and contribute to the growth and development of our core platform
  • Collaborative and dynamic work environment
  • Professional development and growth opportunities
  • Flexible working hours and remote work options
  • Access to cutting-edge technologies and tools
  • Recognition and rewards for outstanding performance
