- Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark
- Work with large-scale data storage on AWS (S3, DynamoDB, MongoDB)
- Ensure high-quality, consistent, and reliable data flows between systems
- Optimize the performance, scalability, and cost efficiency of data solutions
- Collaborate with backend developers and DevOps engineers to integrate and deploy data components
- Implement monitoring, logging, and alerting for production data pipelines
- Participate in architecture design, propose improvements, and mentor mid-level engineers
Qualifications:
- 5 years of experience in data engineering or backend development
- Strong knowledge of Python and SQL
- Hands-on experience with AWS (S3, Glue, Lambda, DynamoDB)
- Practical knowledge of PySpark or other distributed processing frameworks
- Experience with NoSQL databases (MongoDB or DynamoDB)
- Good understanding of ETL principles, data modeling, and performance optimization
- Understanding of data security and compliance in cloud environments
- Fluent in English (Upper-Intermediate level or higher)
Additional Information:
PERSONAL PROFILE
- Strong communication and collaboration skills in cross-functional environments
- Proactive, accountable, and driven to deliver high-quality results
Remote Work:
Yes
Employment Type:
Full-time