Data Engineer

Synechron


Job Location: Mumbai - India

Monthly Salary: Not Disclosed
Posted on: 20 hours ago
Vacancies: 1 Vacancy

Job Summary

Synechron is seeking a Data Engineer to design, develop, and maintain scalable data pipelines and ETL processes that support data-driven decision-making across the organization. This role involves working with both structured and unstructured data, ensuring data quality, security, and performance. The Data Engineer will collaborate with data scientists, analysts, and application teams to deliver reliable data solutions that enable business insights and operational efficiency, contributing to our strategic growth initiatives.

Software Requirements

Required Skills:

  • Proficiency in SQL (version 2016; advanced querying and optimization)
  • Strong programming experience in Python (2 years)
  • Knowledge of ETL concepts and data integration best practices
  • Experience with scheduling and workflow tools like Apache Airflow (or similar)
  • Familiarity with data storage solutions and data processing frameworks

Preferred Skills:

  • Experience with Hadoop ecosystem components: HDFS, Hive
  • Knowledge of Kafka for real-time data streaming
  • Hands-on experience with Spark, Hive, and orchestration tools
  • Exposure to cloud data platforms (AWS, GCP, Azure)
  • Working knowledge of NoSQL databases such as MongoDB or Cassandra

Overall Responsibilities

  • Design, build, and optimize data pipelines for ingesting, transforming, and loading large volumes of structured and unstructured data.
  • Develop and maintain ETL workflows to ensure data accuracy, consistency, and reliability.
  • Utilize SQL and Python to process datasets, implement data transformations, and support data analysis.
  • Collaborate with technical and non-technical teams to deliver end-to-end data solutions.
  • Deploy, monitor, and troubleshoot data workflows in production environments.
  • Ensure data security, governance, and compliance across data pipelines.
  • Optimize workflows for efficiency, scalability, and cost management.
  • Document data processes, maintain version control, and promote best practices for reproducibility.

Technical Skills (By Category)

Programming Languages:

  • Required: SQL (advanced querying and optimization), Python (2 years)
  • Preferred: Knowledge of other scripting languages like Scala or Java

Databases and Data Management:

  • Relational: SQL Server, MySQL, PostgreSQL
  • Big Data & NoSQL: Hive, Hadoop HDFS, MongoDB, Cassandra

Cloud Technologies:

  • Familiarity with cloud data services on AWS, Azure, or GCP

Frameworks and Libraries:

  • Hadoop ecosystem components: HDFS, Hive (preferred)
  • Streaming & processing: Kafka, Spark

Development Tools and Methodologies:

  • Version control: Git
  • Workflow orchestration: Airflow
  • Containerization and deployment: Docker

Security & Data Governance:

  • Basic knowledge of data security practices and data privacy standards

Experience Requirements

  • 3 to 12 years of experience in data engineering, data pipelines, or ETL development
  • Proven track record working with large, complex datasets and big data tools
  • Experience in designing and optimizing scalable data workflows
  • Industry experience in the finance, healthcare, or technology sectors is preferred
  • Equivalent practical experience or project-based work accepted

Day-to-Day Activities

  • Analyze data requirements, collaborating with stakeholders to design appropriate pipelines
  • Develop, test, and deploy data ingestion and transformation workflows
  • Maintain and improve data pipelines to enhance performance and reliability
  • Monitor data workflows in production, troubleshoot issues, and implement fixes
  • Collaborate with data science and analysis teams to support their data needs
  • Document workflows, processes, and data schemas
  • Participate in team meetings, sprint planning, and code reviews
  • Stay updated with emerging data engineering tools and best practices
  • Ensure data security, privacy, and compliance protocols are followed

Qualifications

  • Bachelor's degree in Computer Science, Data Science, Engineering, or a related field; Master's degree preferred
  • Strong understanding of data modeling, database management, and ETL processes
  • Industry certifications in cloud platforms or data engineering are a plus
  • Demonstrated ability to work with large data volumes and complex data architectures

Professional Competencies

  • Analytical and problem-solving skills with attention to detail
  • Effective communicator able to translate technical concepts for diverse audiences
  • Collaborative team player with strong organizational skills
  • Self-motivated learner open to continuous professional development
  • Results-oriented with a focus on delivering high-quality, scalable solutions
  • Ability to adapt to changing project requirements and adopt new tools and technologies

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture, promoting equality and diversity, and maintaining an environment that is respectful to all. We strongly believe that, as a global company, a diverse workforce helps build stronger, more successful businesses. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala

About Company


At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity with innovative technology to deliver leading digital solutions. Progressive technologies and strategies ...
