On a daily basis you will:
- Closely support stakeholders in making data-driven decisions.
- Work with different InPost departments and business lines.
- Gather requirements, harness large-scale real-time data from various sources, analyze it, and prepare insights and recommendations about business-critical areas and processes.
- Design, develop, and extend our data model layers, which support optimized and scalable calculations and visualizations of analytics outcomes.
- Craft code that meets our internal standards for style, maintainability, and best practices in a high-scale data environment. Maintain and advocate for these standards through code reviews.
- Collaborate with cross-functional teams, including Data Engineers, Data Scientists, and Business Analysts, to deliver integrated data solutions.
- Prototype and coordinate data visualizations.
Qualifications:
Which skills should you bring to the pitch:
- A minimum of 3 years of experience in an analytical role handling vast volumes of data (preferably in domains such as Marketing, Logistics, Customer, or Sales).
- Experience in data modeling and implementing complex data-driven solutions is a strong plus.
- Strong proficiency in Python/PySpark for data analysis, SQL for data processing, and Bash scripting for managing Git repositories.
- Proven ability to draw insightful, actionable conclusions from complex data and communicate recommendations to business stakeholders clearly and concisely.
- Comprehensive understanding of the technical aspects of data warehousing including dimensional data modeling and ETL/ELT processes.
- Ability to translate business needs into data models.
- Strong understanding of real-time data: the ability to request and handle data from both backend and frontend systems, including internal and external platforms.
- Self-motivated and self-managing, with the ability to work independently and manage multiple tasks simultaneously.
- Strong interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Fluency in English: verbal and written.
It would be awesome if you have:
- Experience working with Apache Spark in Databricks.
- Familiarity with cloud-based data platforms (e.g., GCP, Azure, AWS).
- Familiarity with modern data tooling such as Apache Airflow and dbt.
- Familiarity with data visualization tools such as Power BI, Tableau, or Looker.
- Knowledge of data governance principles and practices.
- Ability to thrive in a highly agile, intensely iterative environment.
Additional Information:
Remote Work: Yes
Employment Type: Full-time