Job Title : Data Engineer
Experience : 7-10 Years
Location : Mumbai
Roles & Responsibilities
- Gather functional and technical requirements during system design discussions
- Build, deploy, and maintain scalable data pipelines
- Design and manage ETL workflows for structured and unstructured data
- Develop data solutions using Python, PySpark, and SQL
- Work with streaming pipelines and modern big data tools
- Optimize SQL queries and improve pipeline performance
- Implement and manage Azure-based data engineering solutions
- Maintain ETL deployments following Agile methodology
- Suggest and apply best practices in data integration
- Support QA teams with integration testing
- Break deliverables into tasks and coordinate execution
- Perform system optimization and troubleshooting
- Collaborate with cross-functional and global teams
Requirements
- Degree in Computer Science / IT or related field
- Strong skills in Python, PySpark, and SQL
- Knowledge of Databricks, Data Factory, and Data Lake
- Hands-on exposure to the Azure data ecosystem
- Understanding of ETL frameworks and big data architecture
- Experience with Hadoop, Hive, Airflow, and Kafka
- Familiarity with Unix/Linux shell scripting
- Knowledge of CI/CD and Kubernetes
- Ability to work in Agile environments
Required Skills:
Azure, Python, PySpark, Databricks, Data Factory, Data Lake, ETL frameworks, big data architecture, Hadoop, Hive, Airflow, Kafka, CI/CD, Kubernetes