Work across workstreams to support data requirements, including reports and dashboards
Analyze and profile data to understand data patterns and discrepancies, following Data Quality and Data Management processes
Understand and follow best practices to design and develop the end-to-end (E2E) data pipeline: transformation, ingestion, processing, and surfacing of data for large-scale applications
Develop data pipeline automation using the Azure technology stack (Databricks, Data Factory)
Understand business requirements and translate them into technical requirements that system analysts and other technical team members can carry into project design and delivery
Analyze source data and perform data ingestion in both batch and real-time patterns via various methods, for example file transfer, API, and data streaming using Kafka and Spark Streaming
Analyze and understand data processing and standardization requirements; develop ETL jobs using Spark to transform data
Understand data, report, and dashboard requirements; develop data exports, data APIs, or data visualizations using Power BI, Tableau, or other visualization tools
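To illustrate the standardization work the ETL responsibility above describes, here is a minimal, framework-free Python sketch. The field names, date format, and `standardize_record` function are all hypothetical examples, not part of the role's actual codebase; in practice this kind of logic would run as a Spark DataFrame transform or UDF rather than plain Python.

```python
from datetime import datetime

# Hypothetical standardization step of the kind a Spark ETL job applies.
# Field names and formats are illustrative assumptions only.
def standardize_record(raw: dict) -> dict:
    """Trim text fields, normalize the date to ISO format, and cast the amount."""
    return {
        "customer_id": raw["customer_id"].strip(),
        # Normalize e.g. "01/31/2024" to ISO "2024-01-31".
        "order_date": datetime.strptime(raw["order_date"], "%m/%d/%Y").date().isoformat(),
        "amount": float(raw["amount"]),
    }

record = standardize_record(
    {"customer_id": "  C-1001 ", "order_date": "01/31/2024", "amount": "99.50"}
)
# record == {"customer_id": "C-1001", "order_date": "2024-01-31", "amount": 99.5}
```

In a Spark job, the same cleaning rules would typically be expressed with built-in column functions (`trim`, `to_date`, `cast`) so they run distributed across the cluster.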
Job Requirements
Bachelor's degree in Computer Science, Computer Engineering, IT, or related fields
Minimum 5 years' experience in Data Engineering
Data Engineering skills: Python, SQL, Spark, cloud architecture, data & solution architecture, APIs, Databricks, Azure
Data Visualization skills: Power BI (or other visualization tools), DAX programming, APIs, data modeling, SQL, storytelling, and wireframe design
Business Analyst skills: business knowledge, data profiling, basic data model design, data analysis, requirement analysis, SQL programming
Basic knowledge of Data Lake / Data Warehousing / Big Data tools (Apache Spark), RDBMS and NoSQL, Knowledge Graph
Experience working in a client-facing/consulting environment is a plus
Team player with analytical and problem-solving skills
Good communication skills in English
Benefits
Competitive Salary
BPJS Kesehatan & Ketenagakerjaan
THR (Religious Festive Bonus)
Medical Benefit (Employee & Eligible Dependents)
Employee Share Purchase Program
Career Path
Training & Development
International exposure & possibility of overseas transfer (if needed & based on qualifications)
About the company: Geekhunter is hiring on behalf of a well-established global firm that delivers integrated services in strategy consulting, digital experience, technology, and operations. Known for its cross-industry expertise and innovation-driven approach, the company helps ...