Data Modeling Professional
Location: Hyderabad/Pune
Experience:
- The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools), along with GCP.
Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive.
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiencies.
- Design processes, systems, and operational models for end-to-end execution of data pipelines.
Preferred Skills:
- Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.
Requirements:
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills.
- Ability to deliver high-quality materials against tight deadlines.
- Effective under pressure with rapidly changing priorities.
Note: The ability to communicate effectively with global stakeholders is paramount.
Education:
- Graduate