Job Title : Data Engineer (PySpark & Snowflake)
Experience : 3-5 years
Location : DTC Infotech, Bangalore
Mode : 5 days WFO
Notice Period : Immediate / 15 Days / 30 Days
About the Role:
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and data platforms. The ideal candidate has strong hands-on experience with PySpark and Snowflake, and enjoys working in a fast-paced startup environment.
You will collaborate with product, analytics, and engineering teams to deliver reliable, high-quality data solutions that support business decision-making.
Key Responsibilities:
Design, develop, and maintain ETL/ELT data pipelines using PySpark (see the sketch after this list)
Build and manage data models and transformations in Snowflake
Ingest data from multiple sources (APIs, files, databases, cloud storage)
Optimise Spark jobs and Snowflake queries for performance and cost
Ensure data quality, reliability, and scalability of data pipelines
Work closely with analytics, BI, and downstream consumers
Troubleshoot data issues and support production pipelines
Participate in code reviews and follow data engineering best practices
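To give a concrete flavour of the pipeline work described above, here is a minimal PySpark sketch that extracts raw files from cloud storage, transforms them, and loads a daily aggregate into Snowflake through the Spark-Snowflake connector. All paths, table names, and connection options are illustrative placeholders, not project specifics.

# Minimal sketch, assuming hypothetical source paths, table names, and credentials.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw order events landed as CSV in cloud storage (path is a placeholder)
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and compute revenue per day
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum(F.col("amount").cast("double")).alias("revenue"))
)

# Load: write into Snowflake via the Spark-Snowflake connector
# (every connection option below is a placeholder)
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(
    daily_revenue.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_REVENUE")
    .mode("overwrite")
    .save()
)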
Must-Have Skills:
3-5 years of experience in Data Engineering
Strong hands-on experience with PySpark / Apache Spark
Solid experience with Snowflake as a data warehouse
Strong SQL skills (complex queries, joins, aggregations; see the example after this list)
Experience building ETL pipelines in production environments
Understanding of data modelling concepts
Familiarity with cloud platforms (AWS / Azure / GCP)
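As an indication of the SQL depth expected, the short example below runs a join-plus-aggregation query against Snowflake using the snowflake-connector-python package; the schema, account, and credentials are hypothetical.

import snowflake.connector

# All connection parameters and table/column names below are placeholders.
conn = snowflake.connector.connect(
    user="ANALYST_USER",
    password="********",
    account="example_account",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# Join plus aggregation: revenue per customer segment over the last 30 days
query = """
    SELECT c.segment,
           COUNT(DISTINCT o.order_id) AS order_count,
           SUM(o.amount)              AS revenue
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  o.order_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY c.segment
    ORDER BY revenue DESC
"""
for segment, order_count, revenue in conn.cursor().execute(query):
    print(segment, order_count, revenue)
conn.close()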
Good-to-Have Skills:
Experience with Databricks
Knowledge of workflow orchestration tools (Airflow, ADF, etc.)
Experience with streaming data (Kafka, Spark Streaming; see the streaming sketch after this list)
Exposure to CI/CD for data pipelines
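For the streaming item above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands the decoded payloads in cloud storage; the broker address, topic name, and paths are assumptions for illustration only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Read a stream of events from Kafka (broker and topic are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "order-events")
    .load()
)

# Kafka delivers binary key/value columns; decode the value to a string
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Append raw payloads to cloud storage, with checkpointing for fault-tolerant recovery
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/order-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/order-events/")
    .start()
)
query.awaitTermination()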
Interested candidates, kindly share your updated resume to the email ID: