Job Title: Data Tech Lead (10 years)
Job Location: Alpharetta, Georgia (Hybrid)
Job Duration: Long-Term
Job Description: We are seeking an experienced Data Tech Lead with strong hands-on expertise in data engineering and large-scale data processing applications. The ideal candidate will have extensive experience in Java and Python development, combined with proven leadership in guiding teams to deliver efficient and scalable data solutions.
Mandatory Skills: Python, Spark, Hadoop, Cloud (AWS/Azure/GCP)
Key Responsibilities
Build and optimize data processing applications using Java and Python.
Lead a team of developers on data engineering projects, providing technical guidance and oversight.
Design, develop, and optimize Spark-based solutions for large-scale data tasks.
Oversee the development of efficient, reliable, and high-performing data pipelines.
Manage and support the implementation of Hadoop, HDFS, and other cloud-based Big Data technologies.
Ensure best practices in coding, testing, and deployment across all data engineering initiatives.
Required Skills and Experience
10 years of overall professional experience in software and data engineering.
Hands-on experience in building and optimizing data processing applications using Java and Python.
Deep understanding of Apache Spark for large-scale data processing and optimization.
Comprehensive knowledge of Hadoop, HDFS, and cloud-based Big Data technologies.
Proven experience in leading and mentoring teams of developers to deliver end-to-end data solutions.