About Us:
At Parkar, we stand at the intersection of innovation and technology, revolutionizing software development with our cutting-edge Low Code Application Platform. For almost a decade, our expertise has expanded to four countries, offering a full range of software development services including product management, full-stack engineering, DevOps, test automation, and data analytics.
Our pioneering Low Code Application Platform redefines software development by integrating over 500 modular code components. It covers UI/UX, front-end and back-end engineering, and analytics for a streamlined, efficient path to digital transformation through standardized software development and AIOps.
Our commitment to innovation has earned the trust of over 100 clients, from large enterprises to small and medium-sized businesses. We proudly serve key sectors like Fintech, Healthcare/Life Sciences, Retail/eCommerce, and Manufacturing, delivering tailored solutions for success and growth.
At Parkar, we don't just develop software; we build partnerships and pave the way for a future where technology empowers businesses to achieve their full potential.
For more info, visit our website:
Role Overview:
As a Data Architect, you will be responsible for designing, implementing, and maintaining the organization's data architecture. You will collaborate with cross-functional teams to understand business needs, develop data models, ensure data security and governance, and optimize data infrastructure for performance and scalability.
Responsibilities:
- Lead the design, development, and deployment of robust and scalable data pipelines across raw, curated, and consumer layers.
- Collaborate with cross-functional teams to gather data requirements and translate them into technical solutions.
- Leverage Databricks (Apache Spark) and PySpark for large-scale data processing and real-time analytics.
- Implement solutions using Microsoft Fabric, ensuring seamless integration, performance optimization, and centralized governance.
- Design and manage ETL/ELT processes using Azure Data Factory (ADF), Synapse Analytics, and Delta Lake on Azure Data Lake Storage (ADLS).
- Drive implementation of data quality checks, error handling, and monitoring for data pipelines.
- Work with SQL-based and NoSQL-based systems to support diverse data ingestion and transformation needs.
- Guide junior engineers through code reviews, mentoring, and enforcement of development best practices.
- Support data governance and compliance efforts, ensuring high data quality, security, and lineage tracking.
- Create and maintain detailed technical documentation, data flow diagrams, and reusable frameworks.
- Stay current with emerging data engineering tools and trends to continuously improve infrastructure and processes.
Requirements:
- 8-10 years of experience in Data Engineering, with a focus on Azure Cloud, Databricks, and Microsoft Fabric.
- Proficiency in PySpark, Spark SQL, and ADF for building enterprise-grade data solutions.
- Strong hands-on experience with SQL and experience managing data in Delta Lake (Parquet) format.
- Expertise in Power BI for developing insightful dashboards and supporting self-service analytics.
- Solid understanding of data modeling, data warehousing, and ETL/ELT frameworks.
- Experience working with Azure Synapse Analytics, MS SQL Server, and other cloud-native services.
- Familiarity with data governance, data lineage, and security best practices in the cloud.
- Demonstrated ability to lead engineering efforts, mentor team members, and drive delivery in Agile environments.
- Relevant certifications such as DP-203, DP-600, or DP-700 are a strong plus.
- Strong problem-solving abilities, excellent communication skills, and a passion for building high-quality data products.