Job Description
NexaQuanta offers expert consulting and implementation services for the Generative AI-powered transformation of businesses and software applications, delivered responsibly, safely, and cost-effectively to drive higher revenue, profitability, and productivity.
Headquartered in London, UK, NexaQuanta is a team of highly experienced Generative AI Consultants, Architects, Researchers, Data Scientists, and Software Developers. The team has developed and delivered advanced Generative AI-powered transformation solutions to numerous large enterprises globally.
We are an IBM Silver Business Partner, bringing our customers the best and most trusted IBM watsonx Generative AI technologies and solutions.
Role Description
We are looking for a highly motivated and skilled Data Engineer with 4+ years of experience to join our data team. The ideal candidate will have hands-on experience designing and building scalable data pipelines, along with a strong command of Kafka, AWS Redshift, Snowflake, PySpark, and Apache Airflow.
You will play a key role in enabling data-driven decision-making across the organization by building efficient data workflows and ensuring reliable data infrastructure.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines.
- Integrate and process data from diverse sources using Apache Kafka and other ingestion tools.
- Build ETL workflows leveraging PySpark and manage orchestration with Apache Airflow.
- Optimize data warehousing solutions using Amazon Redshift and Snowflake.
- Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions.
- Monitor and troubleshoot data pipeline issues and ensure data integrity.
- Ensure best practices for data quality, security, and compliance are followed.
- Contribute to automation, performance tuning, and system architecture improvements.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
- 4+ years of experience in Data Engineering or a similar role.
- Strong programming skills in Python, particularly with PySpark.
- Proven experience with Apache Kafka for real-time data streaming.
- Hands-on experience with AWS Redshift and Snowflake for data warehousing.
- Solid understanding of, and practical experience with, Apache Airflow for workflow orchestration.
- Proficient in writing complex SQL queries for data transformation and reporting.
- Strong problem-solving skills and attention to detail.
Preferred Qualifications:
- Experience with cloud platforms (AWS, GCP, or Azure).
- Familiarity with containerization tools (Docker).
- Exposure to CI/CD pipelines and DevOps practices in data engineering.
What We Offer:
- Market-competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and innovative projects.
- Collaborative and inclusive work environment.
- Professional development and growth opportunities.
NexaQuanta is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Join us at NexaQuanta and be a part of the future of AI-driven innovation!