Job Description:
Key responsibilities:
Creates and maintains optimal data pipeline architecture
Assembles large, complex data sets that meet functional/non-functional business requirements
Identifies, designs, and implements internal process improvements: automating manual processes, optimising data delivery, redesigning infrastructure for greater scalability, etc.
Builds analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
Works with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
Keeps our data separated and secure
Creates data tools for analytics and data science team members that assist them in building and optimising our product into an innovative industry leader
Works with data and analytics experts to strive for greater functionality in our data systems
Must have:
Data Architecture & Modeling: Design and maintain scalable, efficient data models and architectures to support data analytics, reporting, and ML model training.
Data Pipeline Engineering: Develop, maintain, and optimise scalable data pipelines that can handle large volumes and various types of data.
Data Quality Assurance: Implement rigorous data cleaning, transformation, and integration processes to ensure data quality and consistency.
Collaboration: Work closely with data scientists, ML engineers, and other stakeholders to understand data requirements and implement effective data solutions.
Documentation & Governance: Maintain comprehensive documentation of data procedures, systems, and architectures. Provide guidance and support for data governance practices, including metadata management, data lineage, and data cataloging.
ML Familiarity: Familiarity with machine learning concepts and tools.
Technical Skills:
* Strong proficiency in Python, with an emphasis on clean, modular, and well-documented code.
* Proficient in Spark (PySpark and SparkSQL).
* Expertise in SQL, JIRA, Git, and GitHub.
Good Communication Skills: Able to explain complex technical concepts clearly and concisely to both technical and non-technical audiences.
Good to have:
Azure Cloud Expertise: Hands-on experience designing and implementing scalable, secure data processing pipelines using Azure cloud services and tools like Databricks or Azure Synapse Analytics.
Azure Data Management: Experience managing and optimising data storage within Azure using services like Azure SQL Data Warehouse and Azure Cosmos DB.
ML Experience: Experience in deploying and maintaining ML models in production environments.
Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent