Roles & Responsibilities
- Deliver training on Data Engineering fundamentals and architectures.
- Train on Python / SQL for data processing and analytics.
- Teach ETL/ELT concepts, data pipelines, and workflow orchestration.
- Provide hands-on training on Apache Spark (PySpark).
- Explain data warehousing concepts (Star/Snowflake schema).
- Train candidates on Big Data tools: Hadoop, Hive (basic to intermediate).
- Cover cloud data platforms: AWS / Azure / GCP (any one).
- Teach streaming concepts using Kafka (basic).
- Prepare training materials, labs, and real-time use cases.
- Conduct assessments, mock interviews, and placement-oriented sessions.
- Mentor trainees on best practices, performance tuning, and career paths.

Required Skills
- Strong knowledge of Python & SQL.
- Hands-on experience with Apache Spark / PySpark.
- Solid understanding of ETL tools and data pipelines.
- Experience with relational & NoSQL databases.
- Good understanding of data modeling & data warehousing.
- Familiarity with Linux & Git.
- Strong communication and presentation skills.