Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. We are seeking an AWS Data Architect to lead the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions with Python and PySpark.
- Work closely with data engineers, analysts, and business stakeholders to define data architecture strategy.
- Define and enforce best practices for data modeling, metadata, security, and governance.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize the performance, cost, and scalability of data processing.

Professional & Technical Skills:
- Must-have: proficiency in AWS architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform.
- Languages: Python, PySpark, SQL.
- Big data: Apache Spark, Hive, Delta Lake.
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines.
- Security & governance: AWS Lake Formation, Glue Data Catalog, encryption, RBAC.
- Visualization: exposure to BI tools such as QuickSight, Tableau, or Power BI is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
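To illustrate the extract-transform-load pattern this role centers on, here is a minimal sketch using only the Python standard library. All names and data are hypothetical; a production pipeline would express the same steps as a PySpark job running on AWS Glue or EMR, reading from and writing to S3 or Redshift.

```python
import csv
import io

# Toy ETL step: extract raw rows, transform (coerce types, drop bad
# records), and load into a keyed target. Illustrative only -- a real
# pipeline would run this logic as PySpark on AWS Glue or EMR.

RAW = """order_id,amount,region
1001,250.00,EMEA
1002,,APAC
1003,99.50,AMER
"""

def extract(text):
    # Parse raw CSV text into a list of dicts (stand-in for an S3 read).
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Enforce data quality: skip records with missing amounts,
    # and coerce string fields to proper types.
    out = []
    for r in rows:
        if not r["amount"]:
            continue
        out.append({"order_id": int(r["order_id"]),
                    "amount": float(r["amount"]),
                    "region": r["region"]})
    return out

def load(rows):
    # Stand-in for a write to a warehouse table keyed by order_id.
    return {r["order_id"]: r for r in rows}

warehouse = load(transform(extract(RAW)))
print(sorted(warehouse))  # record 1002 is dropped by the quality check
```

The same three-stage shape carries over directly to PySpark, where `extract` becomes a DataFrame read, `transform` a chain of `filter`/`withColumn` calls, and `load` a write to the target store.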