Job Description: Data Engineer
Key Responsibilities
- Design, develop, and maintain robust, scalable, and efficient ETL/ELT pipelines using dbt and Amazon Redshift.
- Build and manage data workflows using Argo Workflows.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize and tune Redshift queries over large-scale datasets to ensure query performance and resource efficiency.
- Implement data quality checks, validation processes, and monitoring mechanisms.
- Ensure data security, governance, and compliance with organizational standards.
- Participate in code reviews, documentation, and knowledge sharing across teams.
Required Skills & Qualifications
- 6–8 years of hands-on experience in data engineering or related roles.
- Strong proficiency in Amazon Redshift, including schema design, performance tuning, and workload management.
- Solid experience with dbt (data build tool) for data transformation and modeling.
- Proficiency in SQL and scripting languages such as Python or Shell.
- Experience with AWS services: S3, Lambda, Glue, CloudWatch, IAM, etc.
- Familiarity with CI/CD pipelines, version control (Git), and infrastructure-as-code tools.
- Strong understanding of data warehousing concepts, dimensional modeling, and data lake architectures.