Custom Software Engineer Data Engineering
Job Summary
As a Custom Software Engineer, a typical day involves creating tailored software solutions by designing, coding, and improving components within systems and applications. The role requires working with contemporary frameworks and following agile methodologies to deliver scalable, efficient software that meets unique business requirements. Collaboration with different teams and continuous enhancement of software performance are integral parts of the daily routine, fostering innovation and adaptability in a dynamic work environment.
Roles & Responsibilities:
- Act as a subject matter expert (SME); collaborate with and manage the team to ensure it performs.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead efforts to identify and implement process improvements that enhance team productivity and software quality.
- Mentor junior team members to support their professional growth and integration within the team.
- Coordinate cross-functional communication to ensure alignment of project goals and timely delivery.
Professional & Technical Skills:
Must-Have Skills:
- Proficiency in Data Engineering.
- Strong knowledge of data pipeline development and maintenance.
- Experience with cloud-based data platforms and distributed computing frameworks.
- Ability to optimize data workflows for performance and scalability.
- Familiarity with database management systems and data warehousing solutions.
- Competence in scripting and automation to support data integration tasks.
- Design, build, and maintain fault-tolerant batch and near-real-time data pipelines on AWS.
- Develop and operate ETL/ELT workflows supporting structured and semi-structured data sources.
- Ability to implement data ingestion, transformation, and curation aligned with medallion architecture principles.
- Orchestrate workflows using Apache Airflow, including DAG design, dependency management, and monitoring (a minimal sketch follows this list).
- Build solutions using AWS services including S3, Glue, Athena, Redshift, EMR, and Lambda.
- Enable AI/ML workflows through SageMaker, feature engineering, and automated data preparation pipelines.
- Ensure data quality, lineage, observability, and monitoring across pipelines and data products.
- Apply security, governance, and compliance controls, including IAM, encryption, and access management.
- Collaborate with information architects, data scientists, analysts, and application teams.
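For illustration, here is a minimal sketch of how the Airflow and medallion items above could fit together: a single DAG with bronze, silver, and gold tasks using Airflow's TaskFlow API. The bucket paths, schedule, and task bodies are hypothetical placeholders, not requirements of the role.

```python
# A minimal sketch of an Airflow DAG following medallion (bronze/silver/gold)
# layering. All names below (buckets, schedule, task logic) are assumptions
# for illustration only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule="@daily",                 # assumed cadence
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["medallion", "example"],
)
def medallion_pipeline():
    @task
    def ingest_bronze() -> str:
        # Land raw structured/semi-structured source data as-is.
        raw_path = "s3://example-bucket/bronze/events/"  # hypothetical path
        # ... ingestion logic (e.g., pull from an API or queue) ...
        return raw_path

    @task
    def transform_silver(raw_path: str) -> str:
        # Clean, deduplicate, and conform the raw data.
        silver_path = "s3://example-bucket/silver/events/"  # hypothetical path
        # ... transformation logic (e.g., a Glue or Spark job) ...
        return silver_path

    @task
    def curate_gold(silver_path: str) -> str:
        # Publish business-level aggregates for analytics/ML consumers.
        gold_path = "s3://example-bucket/gold/events_daily/"  # hypothetical path
        # ... curation logic (e.g., load to Redshift or a feature store) ...
        return gold_path

    # Dependencies are expressed by passing task outputs downstream.
    curate_gold(transform_silver(ingest_bronze()))


medallion_pipeline()
```

Passing task outputs downstream keeps the DAG's dependencies and lineage explicit, which supports the monitoring and observability expectations listed above.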
Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.