Role Responsibilities:
- Design and develop data solutions using Databricks.
- Create and maintain data pipelines for processing large datasets.
- Collaborate with data scientists and analysts to optimize data workflows.
- Implement ETL processes to streamline data management.
- Utilize Apache Spark to enhance data processing capabilities.
- Conduct data analysis to derive actionable insights.
- Ensure data quality and integrity throughout the data lifecycle.
- Monitor and improve existing data systems.
- Integrate Databricks with cloud platforms (e.g., Azure, AWS).
- Develop customized dashboards for data visualization.
- Document data architecture and development processes.
- Troubleshoot data discrepancies and resolve issues.
- Participate in Agile ceremonies and sprint planning.
- Train and support team members on data tools.
- Stay updated with the latest data technologies and trends.
Qualifications:
- Proven experience in data engineering or similar role.
- Strong proficiency in SQL and data query languages.
- Experience with streaming data and batch processing.
- Familiarity with Python or Scala for data manipulation.
- Understanding of data modeling and warehousing concepts.
- Knowledge of cloud services such as Azure or AWS.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Ability to work collaboratively in a team environment.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Experience with Agile/Scrum methodologies.
- Familiarity with version control systems (e.g., Git).
- Detail-oriented with a knack for troubleshooting.
- Ability to handle multiple priorities in a fast-paced environment.
- Willingness to learn and adapt to new technologies.