Role: Data Engineer
Location: Boston, MA (100% onsite required; in-person interview required)
Duration: Long Term
Job Description: This AWS Data Engineer role involves designing, building, and maintaining scalable data pipelines, architectures, and solutions on the Amazon Web Services (AWS) cloud platform. Key responsibilities include data integration, building ETL processes using services such as AWS Glue and Redshift, data modeling, and ensuring data quality and security. The role requires proficiency in programming languages such as Python, along with skills in SQL, Apache Spark, and serverless architectures.
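
To illustrate the kind of pipeline this description refers to, below is a minimal PySpark sketch of one ETL step: read raw records from S3, apply a simple transformation, and write curated, partitioned output back to S3 for loading into Redshift. The bucket names, paths, and column names are hypothetical, and this is a sketch of the general pattern, not part of the posting's requirements.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical S3 locations; replace with real bucket/prefix names.
    RAW_PATH = "s3://example-raw-bucket/trades/"
    CURATED_PATH = "s3://example-curated-bucket/trades_daily/"

    spark = SparkSession.builder.appName("trades-daily-etl").getOrCreate()

    # Read raw trade records (assumed Parquet with trade_date, symbol, quantity, price columns).
    raw = spark.read.parquet(RAW_PATH)

    # Basic cleanup plus a daily aggregate as an example transformation.
    daily = (
        raw.dropna(subset=["trade_date", "symbol"])
           .withColumn("notional", F.col("quantity") * F.col("price"))
           .groupBy("trade_date", "symbol")
           .agg(F.sum("notional").alias("total_notional"),
                F.count("*").alias("trade_count"))
    )

    # Write partitioned Parquet that a Redshift COPY or Spectrum table could consume.
    daily.write.mode("overwrite").partitionBy("trade_date").parquet(CURATED_PATH)
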
Minimum Skills Required:
- At least 6 years of relevant experience in the design, development, and complete end-to-end architecture of enterprise-wide big data solutions.
- Experience designing and developing big data solutions using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch is a must (see the sketch after this list).
- Strong application development experience in Scala and/or Python.
- Strong database and SQL experience, preferably with Redshift.
- Experience with Snowflake is an added advantage.
- Experience with ETL/ELT processes and frameworks is a must.
- Strong background in AWS cloud services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, and Redshift.
- Expertise in SQL and experience with relational databases such as Oracle, MySQL, and PostgreSQL.
- Proficiency in Python programming for data engineering tasks and automation.
- Experience with shell scripting in Linux/Unix environments.
- Experience with big data technologies such as Hadoop and Spark.
- Financial services experience is required.
- Nice to have: knowledge of machine learning models, regression, and validation.
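
As a companion to the Lambda/SNS/SQS/CloudWatch item above, here is a minimal sketch of that serverless pattern: a Lambda handler consuming SQS messages, with its print output captured automatically in CloudWatch Logs. The message payload shape and the return value are hypothetical and shown only to indicate the expected working pattern.

    import json

    # Minimal AWS Lambda handler for an SQS-triggered function.
    # Anything printed here is captured in CloudWatch Logs.
    def lambda_handler(event, context):
        processed = 0
        for record in event.get("Records", []):
            body = json.loads(record["body"])  # SQS delivers the message body as a string
            print(f"Processing message {record.get('messageId')}: {body}")
            processed += 1
        return {"status": "ok", "processed": processed}
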