We are looking for a Data Engineer with 6 to 8 years of experience to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will have strong experience in data processing, ETL development, and working with large datasets in cloud or on-prem environments.

Responsibilities:
- Design, develop, and optimize scalable data pipelines.
- Integrate data from multiple sources (APIs, databases, flat files).
- Ensure data quality, integrity, and availability.
- Collaborate with data scientists, analysts, and business teams.
- Monitor and troubleshoot data workflows and performance issues.
- Implement best practices for data engineering and governance.

Required Skills:
- Strong programming experience in Python, Java, or Scala.
- Expertise in SQL and relational databases (e.g. MySQL, PostgreSQL, SQL Server).
- Hands-on experience with ETL/ELT tools (e.g. Informatica, Talend, Apache NiFi).
- Experience with big data technologies (e.g. Apache Spark, Hadoop ecosystem).
- Proficiency in data warehousing concepts (e.g. Snowflake, Redshift, BigQuery).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Knowledge of data modeling (star schema, snowflake schema).
- Experience building and maintaining data pipelines (batch and streaming).
- Experience with workflow orchestration tools (e.g. Apache Airflow, Prefect).
- Familiarity with CI/CD pipelines and DevOps practices.
- Knowledge of containerization (Docker, Kubernetes).
- Exposure to real-time data streaming (e.g. Apache Kafka).
- Understanding of data governance, data quality, and security practices.
- Experience with BI/reporting tools (e.g. Tableau, Power BI).
- Basic understanding of machine learning pipelines.
- Strong problem-solving and analytical skills.