Job Role: Data Engineer
Experience: 5 years
Location: Vadodara (Gujarat)
Work Mode: 5 Days WFO
Mandatory Skills: Databricks, SQL, Data Modelling, Azure/AWS
Key Roles & Responsibilities
Design, implement, and maintain scalable data solutions and pipelines on the AWS Cloud Platform.
Architect, optimize, and maintain data models, data structures, and databases (Redshift, S3, etc.) to ensure performance and efficiency.
Develop and manage ETL processes using AWS Glue, AppFlow, SQL, Python, and PySpark.
Collaborate with business stakeholders to translate requirements into robust data solutions.
Build and optimize data models to extract and integrate information from diverse sources (SAP HRMS, Salesforce, Google Sheets, etc.).
Write and optimize SQL queries for analytics, dashboards, and reporting use cases.
Monitor, troubleshoot, and optimize data pipelines to ensure reliability and accuracy.
Partner with software engineers to design and deliver data-driven features.
Perform root cause analysis and resolve data quality or pipeline issues.
Maintain detailed documentation for data architecture, integration flows, and ETL processes.
Identify and implement opportunities to improve database performance (e.g., indexing, structure optimization).
Support and enhance existing applications by updating code, features, and integrations to meet evolving requirements.
Design and implement data security and access control measures in compliance with best practices.
Recommend infrastructure or architecture improvements to enhance capacity, scalability, and performance.
Apply knowledge of process industry operations to ensure business relevance of data solutions.
Experience
5 to 8 years of professional experience in data engineering, integration, and support roles.
Qualification
B.E. in Computer Science, Information Technology, or a related discipline.
Hands-on Experience with AWS Services
AWS Glue (ETL Jobs)
AWS AppFlow (Data Integration)
Amazon Redshift (Data Warehouse)
Amazon S3 (Storage & Data Lake)
SQL, Python, PySpark
Core Competencies
Functional:
Strong foundation in Data Warehousing
Knowledge of Data Modelling
Expertise in ETL processes
Behavioral:
Effective communication skills with stakeholders and team members
Strong collaboration and teamwork abilities
Excellent problem-solving and decision-making skills
Proven time management and organizational capabilities
If interested, please share your resume. Also, don't forget to follow our company page @Ample Success Hr Solutions for regular updates on job openings.