The Data Research Engineering Team is a brand-new team whose purpose is managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Data Research Engineer - Team Lead involves guiding team members through code standards, optimization techniques, and best practices in debugging and testing. They oversee the development and consistent application of testing protocols, including unit, integration, and performance testing, ensuring a high standard of code quality across the team. They work closely with engineers, offering technical mentorship in areas like Git version control, task tracking, and documentation processes, as well as advanced Python and database practices.
Responsibilities
Technical Mentorship and Code Quality: Guide and mentor team members on coding standards, optimization techniques, and debugging. Conduct thorough code reviews, provide constructive feedback, and enforce code quality standards to ensure maintainable and efficient code.
Testing and Quality Assurance Leadership: Develop, implement, and oversee rigorous testing protocols, including unit, integration, and performance testing, to guarantee the reliability and robustness of all projects. Advocate for automated testing and ensure comprehensive test coverage within the team (see the pytest sketch after this list).
Process Improvement and Documentation: Establish and maintain high standards for version control, documentation, and task tracking across the team. Continuously refine these processes to enhance team productivity, streamline workflows, and ensure data quality.
Hands-On Technical Support: Serve as the team's primary resource for troubleshooting complex issues, particularly in Python, MySQL, GitKraken, and KNIME. Provide on-demand support to team members, helping them overcome technical challenges and improve their problem-solving skills.
High-Level Technical Mentorship: Provide mentorship in advanced technical areas, including architecture design, data engineering best practices, and advanced Python programming. Guide the team in building scalable and reliable data solutions.
Cross-Functional Collaboration: Work closely with data scientists, product managers, and quality assurance teams to align on data requirements, testing protocols, and process improvements. Foster open communication across teams to ensure seamless integration and delivery of data solutions.
Continuous Learning and Improvement: Stay updated with emerging data engineering methodologies and best practices, sharing relevant insights with the team. Drive a culture of continuous improvement, ensuring the team's skills and processes evolve with industry standards.
Data Pipelines: Design, implement, and maintain scalable data pipelines for efficient data transfer, cleaning, normalization, transformation, aggregation, and visualization to support production-level workloads (see the pandas sketch after this list).
Big Data: Leverage distributed processing frameworks such as PySpark and Kafka to manage and process massive datasets efficiently (see the streaming sketch after this list).
Cloud-Native Data Solutions: Develop and optimize workflows for cloud-native data solutions, including BigQuery, Databricks, Snowflake, and Redshift, and tools like Airflow and AWS Glue (see the DAG sketch after this list).
Regulations: Ensure compliance with regulatory frameworks like GDPR and implement robust data governance and security measures (see the pseudonymization sketch after this list).
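As a concrete illustration of the testing standards described above, here is a minimal pytest-style sketch showing a plain unit test and a parametrized edge-case test. The `normalize_record` helper and its expected behavior are hypothetical, chosen only to show the pattern.

```python
# Minimal pytest sketch; normalize_record and its fields are hypothetical.
import pytest

def normalize_record(record: dict) -> dict:
    """Hypothetical helper: trims whitespace and lower-cases keys."""
    return {k.strip().lower(): v.strip() if isinstance(v, str) else v
            for k, v in record.items()}

def test_normalize_record_strips_and_lowercases():
    raw = {" Name ": " Ada ", "AGE": 36}
    assert normalize_record(raw) == {"name": "Ada", "age": 36}

@pytest.mark.parametrize("raw,expected", [
    ({}, {}),                    # empty input passes through
    ({"X": None}, {"x": None}),  # non-string values are left untouched
])
def test_normalize_record_edge_cases(raw, expected):
    assert normalize_record(raw) == expected
```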
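To make the pipeline stages above concrete, here is a minimal sketch of a single transform step using pandas, covering cleaning, normalization, transformation, and aggregation. The CSV source and column names are hypothetical.

```python
# Minimal pipeline-step sketch; the file and columns are hypothetical.
import pandas as pd

def run_pipeline(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)                          # acquisition
    df = df.dropna(subset=["user_id"])              # cleaning
    df["country"] = df["country"].str.upper()       # normalization
    df["amount"] = df["amount"].astype(float)       # transformation
    return (df.groupby("country", as_index=False)   # aggregation
              .agg(total=("amount", "sum"),
                   users=("user_id", "nunique")))

if __name__ == "__main__":
    print(run_pipeline("transactions.csv").head())
```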
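For the distributed processing mentioned above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and maintains running counts. The broker address, topic name, and event schema are hypothetical, and the sketch assumes Spark's Kafka connector package is available on the classpath.

```python
# Minimal PySpark + Kafka sketch; broker, topic, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("kafka-event-counts").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "events")                     # hypothetical topic
       .load())

# Kafka delivers bytes; cast the value and parse it against the schema.
parsed = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
counts = parsed.groupBy("e.event_type").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```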
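For the orchestration tools named above, here is a minimal Airflow DAG sketch (assuming Airflow 2.4+ for the `schedule` argument). The DAG id, task callables, and schedule are illustrative placeholders, not a prescribed workflow.

```python
# Minimal Airflow DAG sketch; dag_id, tasks, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")     # placeholder for a real extract step

def load():
    print("load into warehouse")  # placeholder for e.g. a BigQuery load

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```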
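One common governance measure implied by the GDPR point above is pseudonymizing direct identifiers before they enter analytics tables. The keyed-hash approach and field names below are a minimal illustration only, not a complete compliance program.

```python
# Minimal pseudonymization sketch; the key source and fields are hypothetical.
import hashlib
import hmac
import os

SECRET = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash so joins still work without raw PII."""
    return hmac.new(SECRET, value.lower().encode(), hashlib.sha256).hexdigest()

record = {"email": "ada@example.com", "amount": 42.0}
record["user_key"] = pseudonymize(record.pop("email"))  # drop the raw email
print(record)
```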
Skills and Experience
Experience: 8 years
Technical Proficiency:
Programming: Expert-level skills in Python, with a strong understanding of code optimization, debugging, and testing.
Object-Oriented Programming (OOP) Expertise: Strong knowledge of OOP principles in Python, with the ability to design modular, reusable, and efficient code structures. Experience implementing OOP best practices to enhance code organization and maintainability (see the pipeline-step sketch after this list).
Data Management: Proficient in MySQL and database design, with experience in creating efficient data pipelines and workflows.
Tools: Advanced knowledge of Git and GitKraken for version control, with experience in task management, ideally on GitHub. Familiarity with KNIME or similar data processing tools is a plus.
Testing and QA Expertise: Proven experience in designing and implementing testing protocols, including unit, integration, and performance testing. Ability to embed automated testing within development workflows.
Process-Driven Mindset: Strong experience with process improvement and documentation, particularly for coding standards, task tracking, and data management protocols.
Leadership and Mentorship: Demonstrated ability to mentor and support junior and mid-level engineers, with a focus on fostering technical growth and improving team cohesion. Experience leading code reviews and guiding team members in problem-solving and troubleshooting.
Problem-Solving Skills: Ability to handle complex technical issues and serve as a key resource for team troubleshooting. Expertise in guiding others through debugging and technical problem-solving.
Strong Communication Skills: Effective communicator capable of aligning cross-functional teams on project requirements, technical standards, and data workflows.
Adaptability and Continuous Learning: A commitment to staying current with the latest data engineering practices, coding standards, and tools, with a proactive approach to learning and sharing knowledge within the team.
Data Pipelines: Comprehensive expertise in building and optimizing data pipelines, including data transfer, transformation, and visualization for real-world applications.
Distributed Systems: Strong knowledge of distributed systems and big data tools such as PySpark and Kafka.
Data Warehousing: Proficiency with modern cloud data warehousing platforms (BigQuery, Databricks, Snowflake, Redshift) and orchestration tools (Airflow, AWS Glue).
Regulations: Demonstrated understanding of regulatory compliance requirements (e.g., GDPR) and best practices for data governance and security in enterprise settings.
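As a sketch of the modular, reusable OOP structures described in the list above, here is a hypothetical abstract pipeline step with an interchangeable implementation; all class and method names are invented for illustration.

```python
# Minimal OOP sketch; Step, DropNulls, and Pipeline are hypothetical names.
from abc import ABC, abstractmethod

class Step(ABC):
    """Abstract pipeline step: each implementation is swappable and testable."""
    @abstractmethod
    def run(self, rows: list[dict]) -> list[dict]: ...

class DropNulls(Step):
    """Concrete step: drop rows missing a required field."""
    def __init__(self, field: str):
        self.field = field

    def run(self, rows: list[dict]) -> list[dict]:
        return [r for r in rows if r.get(self.field) is not None]

class Pipeline:
    """Composes steps so new transforms plug in without touching callers."""
    def __init__(self, steps: list[Step]):
        self.steps = steps

    def run(self, rows: list[dict]) -> list[dict]:
        for step in self.steps:
            rows = step.run(rows)
        return rows

pipeline = Pipeline([DropNulls("user_id")])
print(pipeline.run([{"user_id": 1}, {"user_id": None}]))  # keeps only user 1
```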
Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Equivalent experience in data engineering roles will also be considered.
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work:
No
Employment Type:
Full-time