Job Title: Database Engineer
Full Time | Remote | Working Hours: CST
About Pavago:
Pavago partners with leading U.S.-based companies to deliver world-class operational, technical, and analytical support. One of our client's teams is building and maintaining high-performance systems that power data-driven decision-making across industries.
They are expanding their data engineering capabilities and seeking a Database Engineer who can design, optimize, and manage modern database systems that support large-scale data operations and analytics.
Position Overview:
The Database Engineer will be responsible for designing, optimizing, and maintaining the core data infrastructure that drives our clients’ platforms. This role involves architecting scalable database solutions, building efficient data pipelines, and ensuring data integrity, reliability, and performance.
You will work closely with software engineers, data analysts, and DevOps engineers to support mission-critical systems and streamline data operations across multiple environments.
Key Responsibilities:
Database Architecture & Design:
- Design, implement, and maintain relational and non-relational database schemas that support large, distributed data systems.
- Develop normalized, high-performance data models tailored to both transactional and analytical workloads.
- Collaborate with engineering and analytics teams to translate business requirements into optimized database structures.
Query Performance & Optimization:
- Write and tune SQL and NoSQL queries to maximize speed, scalability, and efficiency.
- Analyze slow-running queries and refactor them using indexing, partitioning, and caching strategies.
- Develop and maintain stored procedures, triggers, and functions for automation and consistency.
Data Pipelines & Integration:
- Build, maintain, and monitor ETL/ELT pipelines for ingestion and transformation of structured and unstructured data.
- Integrate data workflows between APIs, applications, and external systems.
- Automate ingestion and validation processes using Python and database scripting.
Database Administration & Reliability:
- Oversee database deployment, replication, and backup strategies to ensure uptime and data security.
- Monitor system performance, resource utilization, and data integrity.
- Implement and maintain backup, recovery, and high-availability solutions.
Data Governance & Validation:
- Apply constraints, validation checks, and auditing mechanisms to ensure data accuracy.
- Maintain version control and documentation for schema changes and database procedures.
- Enforce data access policies and security protocols.
Required Skills:
- Core Expertise: 3+ years of experience in SQL database design, performance tuning, and administration (PostgreSQL, MySQL, or Microsoft SQL Server).
- Data Modeling: Strong command of normalized schema design and relationship management for large datasets.
- Querying: Expert-level SQL skills; able to write and optimize complex queries, joins, and aggregations.
- Programming: Proficient in Python for ETL scripting, data automation, and validation (experience with SQLAlchemy or similar libraries).
- Infrastructure / DevOps (plus): Familiar with Dockerized database deployments, monitoring tools (Grafana, Prometheus), and backup automation.
- NoSQL / Graph (plus): Exposure to MongoDB, Cassandra, or Neo4j is a plus.
What You’ll Do:
You’ll be responsible for the backbone of our data operations — ensuring databases are fast, reliable, and structured for scale.
Your work will directly support analytics, integrations, and application performance across the client's ecosystem.
Key Performance Indicators (KPIs):
- Query Optimization: Improve query performance by 20–30% through indexing, refactoring, and caching strategies.
- Schema Quality: Maintain zero critical data integrity issues and complete schema updates within sprint timelines.
- Pipeline Reliability: Achieve a 99%+ success rate for data ingestion and transformation processes.
- Uptime & Recovery: Ensure 99.9% database uptime with tested backup and recovery plans.
- Data Governance: Maintain a <1% error rate in validation checks and full documentation for schema and pipeline updates.
Interview Process:
- Initial Phone Screen
- Technical Interview with Pavago Recruiter
- Practical Task (e.g., schema design or query optimization exercise)
- Final Client Interview
- Offer & Background Verification