Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines and infrastructure to support autonomous driving research and development.
- Extract, transform, and load (ETL) data from various sources, including sensors and simulations.
- Collaborate with data scientists, researchers, and engineers to understand data requirements and translate them into technical solutions.
- Develop and optimize data storage and retrieval strategies, ensuring data quality and integrity.
- Implement data governance and security measures to protect sensitive data.
- Monitor and troubleshoot data pipelines and systems to identify and resolve issues promptly.
- Continuously explore and evaluate new data technologies and tools to improve data processing efficiency and scalability.
- Mentor and guide junior data engineers in their professional development.
Mandatory Core Skills & Competencies:
- Strong proficiency in Python or another programming language commonly used in data engineering.
- Expertise in data engineering tools and frameworks such as Apache Spark, Hadoop, Kafka, and Airflow.
- Deep understanding of data warehousing and data lake concepts.
- Experience with cloud platforms (Azure) and cloud-based data services.
- Strong knowledge of SQL and NoSQL databases.
- Experience with Site Reliability Engineering (SRE) practices.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a cross-functional team.
- Strong communication and documentation skills.
Nice to Have Skills:
- Experience with machine learning and deep learning frameworks (TensorFlow, PyTorch).
- Familiarity with autonomous driving technologies.
- Knowledge of data visualization tools (e.g., Tableau, Matplotlib).
- Experience with real-time data processing and streaming technologies.
- Understanding of data quality and validation techniques.
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with data governance and compliance frameworks (e.g., GDPR, CCPA).
- Ph.D. or Master's degree in Computer Science, Data Engineering, or a related field.
- Publications in relevant conferences or journals.
Qualifications:
B.E., M.Tech, or Ph.D. in Computer Science
Additional Information:
6 years of experience
Remote Work:
No
Employment Type:
Full-time