Experience: 6 to 8 Years
Location: Bengaluru
Job Description:
Skills and Tools Required:
- Proficiency in Databricks and experience with Spark for big data processing.
- Strong experience with Power BI, particularly in building reports and dashboards using DAX.
- Solid programming skills in Python or Scala for developing data pipelines.
- Knowledge of SQL and experience with relational databases and data warehousing concepts.
- Familiarity with cloud platforms, particularly Azure, in relation to data storage and processing.
- Understanding of data modeling, ETL processes, and data integration methodologies.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities to work effectively in a team-oriented environment.
Roles & Responsibilities:
Data Pipeline Development (Databricks)
- Design and implement scalable ETL/ELT pipelines using Databricks and Apache Spark (see the sketch after this list).
- Integrate structured and unstructured data from various sources into unified data lakes or warehouses.
- Optimize performance and cost-efficiency of data workflows in cloud environments (Azure).
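For illustration only, a minimal PySpark sketch of the kind of ETL pipeline described above. The paths, column names, and table name are hypothetical assumptions, not details taken from this posting:

```python
# Minimal extract-transform-load sketch in PySpark (hypothetical names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from an assumed landing zone.
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/landing/orders/"))

# Transform: deduplicate, drop invalid rows, derive a date column.
clean = (raw.dropDuplicates(["order_id"])
            .filter(F.col("order_total") > 0)
            .withColumn("order_date", F.to_date("order_ts")))

# Load: persist as a Delta table, the default table format on Databricks.
(clean.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.orders_clean"))
```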
Data Modeling & Transformation
- Develop robust data models using Delta Lake and Unity Catalog in Databricks.
- Apply advanced transformation logic using PySpark, SQL, and notebooks (a short upsert example follows this list).
- Ensure data quality, lineage, and governance across the pipeline.
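As a hedged sketch of Delta Lake modeling under Unity Catalog, the following incremental upsert uses the delta-spark MERGE API; the three-level table names and the customer_id key are assumptions made for the example:

```python
# Incremental upsert (MERGE) into a Delta table; all names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Unity Catalog uses three-level names: catalog.schema.table (hypothetical here).
target = DeltaTable.forName(spark, "main.analytics.customers")
updates = spark.table("main.staging.customer_updates")

# Update rows that match on the key; insert rows that do not.
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```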
Power BI Reporting & DAX
- Create interactive dashboards and reports tailored to business needs.
- Write complex DAX expressions for calculated columns, measures, and KPIs (an illustrative measure follows this list).
- Optimize Power BI performance through efficient data modeling and query tuning.
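Since DAX is the expression language the posting itself names, here is an illustrative year-over-year measure; the Sales table, Sales[Amount] column, and 'Date'[Date] column are assumed names for the example only:

```dax
-- Year-over-year sales growth (all table and column names are assumptions).
Sales YoY % =
VAR CurrentSales = SUM ( Sales[Amount] )
VAR PriorSales =
    CALCULATE ( SUM ( Sales[Amount] ), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
RETURN
    DIVIDE ( CurrentSales - PriorSales, PriorSales )
```

DIVIDE is used instead of the / operator so the measure returns blank rather than an error when there are no prior-year sales.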
Collaboration & Stakeholder Engagement
- Work closely with business analysts, data scientists, and decision-makers to gather requirements.
- Translate business needs into technical solutions and visualizations.
- Provide training and support for self-service analytics using Power BI.