On a daily basis you will:
- drive innovation and improvements by evaluating new tools (e.g. Data Quality monitoring) and platform features (e.g. Genie Space on Databricks).
- monitor the effectiveness of solutions by tracking implemented actions (e.g. naming convention adherence, metadata completeness, MR quality).
- define workflows and coding standards for style, maintainability, and best practices on the analytical platform.
- evangelize best practices for the platform among its users and encourage teams to continuously improve their working methods; advocate for coding standards through workshops and guidelines.
- monitor the market for new tools and methodologies in the data product development space.
- while the role involves conceptual work, you'll also have opportunities for hands-on coding, such as analyzing AI readiness and implementing AI solutions to automate data development tasks.
- work with various Data&AI competencies (Data Consultants, Data Engineers, AI Engineers, Cloud Engineers, Data Architects).
Qualifications:
Which skills should you bring to the pitch:
- At least 5 years of experience in an analytical role working with large datasets
- Experience in data modeling and implementing complex data-driven solutions is a strong plus
- Excellent proficiency in Python/PySpark for data analysis, SQL for data processing, and bash scripting to manage Git repositories
- Comprehensive understanding of the technical aspects of data warehousing including dimensional data modeling and ETL/ELT processes
- Experience with real-time data processing and the ability to handle data from various backend/frontend systems.
- Familiarity with cloud-based data platforms (GCP/Azure/AWS)
- The ability to present technical concepts and solutions to diverse audiences
- Self-motivated with the ability to work independently and manage multiple tasks
- Excellent interpersonal skills with the ability to collaborate effectively with cross-functional teams
- Fluent in English: verbal and written
Nice to have:
- Experience in working with Apache Spark in Databricks
- Familiarity with modern data building tools like Apache Airflow and DBT
- Familiarity with data visualization tools such as PowerBI/Tableau/Looker
- Knowledge of data governance principles and practices
- Ability to thrive in a highly agile, intensely iterative environment
- Positive and solution-oriented mindset
Additional Information:
The course of the recruitment process:
- Step 1: HR Interview
- Step 2: Devskiller test
- Step 3: Technical Interview (60 min)
- Step 4: Home task
- Step 5: Home task presentation and discussion (60 min)
Remote Work:
Yes
Employment Type:
Full-time