Data Engineer
Experience: 7 Years
Location: Boston
Job Overview
We are looking for a highly skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data solutions. The ideal candidate will have strong expertise in AWS services, big data processing, and ETL development, enabling efficient handling of large-scale datasets.
Key Responsibilities
- Design, develop, and maintain robust data pipelines using modern ETL tools and frameworks
- Build and optimize data processing solutions using AWS Glue, PySpark, and Python
- Develop scalable solutions for large-scale data processing and transformation
- Integrate data from multiple sources into centralized data platforms
- Ensure data quality, integrity, and reliability across pipelines
- Collaborate with data scientists, analysts, and cross-functional teams to support business needs
- Monitor and troubleshoot data workflows to ensure optimal performance
- Implement best practices for data governance, security, and compliance
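To illustrate the kind of pipeline work described above, here is a minimal extract → validate → transform → load sketch. It is purely illustrative, not part of this role's codebase: in practice these stages would run as AWS Glue/PySpark jobs against real sources and sinks, and every function and field name below is hypothetical.

```python
# Hypothetical ETL sketch: extract raw records, enforce basic data-quality
# rules, normalize to a target schema, and load into a destination.

def extract(rows):
    """Simulate reading raw records from a source system."""
    return list(rows)

def validate(rows):
    """Drop records that fail basic data-quality checks (missing key, negative amount)."""
    return [r for r in rows if r.get("id") is not None and r.get("amount", 0) >= 0]

def transform(rows):
    """Normalize fields for the target schema (amounts stored as integer cents)."""
    return [{"id": int(r["id"]), "amount_cents": round(r["amount"] * 100)} for r in rows]

def load(rows, sink):
    """Append transformed records to the destination; a list stands in for a warehouse table."""
    sink.extend(rows)
    return len(rows)

raw = [
    {"id": 1, "amount": 9.99},
    {"id": None, "amount": 5.00},  # rejected: missing id
    {"id": 2, "amount": -3.00},    # rejected: negative amount
]
warehouse = []
loaded = load(transform(validate(extract(raw))), warehouse)
```

Keeping each stage a small, pure function like this is what makes pipelines easy to monitor, test, and troubleshoot, which is the point of the quality and reliability responsibilities listed above.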
Required Skills & Qualifications
- Strong experience in Python and PySpark
- Hands-on experience with AWS Glue and other AWS data services
- Solid understanding of ETL processes and data pipeline development
- Experience working with large-scale data processing systems
- Proficiency in handling structured and unstructured data
- Strong problem-solving and analytical skills
Preferred Qualifications
- Experience with the AWS ecosystem (S3, Redshift, Lambda, etc.)
- Familiarity with data warehousing concepts and architecture
- Knowledge of performance tuning and optimization techniques
- Experience in Agile/Scrum environments
Additional Requirements
- Willingness to work onsite at the specified location
- Ability to attend in-person interviews