Talent Acquisition: Richa Roy
Required Travel: Minimal
Who are we
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our approximately 30,000 employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $4.89 billion in fiscal 2023.
In one sentence
We are looking for a Data Engineer (Consultant) in Pune with expertise in Databricks SQL, PySpark, Spark SQL, Airflow, and Azure Databricks. You will be responsible for migrating SQL Server Stored Procedures, building scalable incremental data pipelines, and orchestrating workflows, while ensuring data quality, performance optimization, and best practices in a cloud-based environment.
Key Responsibilities...
- Migrate SQL Server Stored Procedures to Databricks Notebooks, leveraging PySpark and Spark SQL for complex transformations.
- Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
- Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
- Implement incremental data transformation workflows to update Silver and Gold layer datasets in near real-time, adhering to Delta Lake best practices.
- Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
- Understand business and technical requirements, translating them into scalable Databricks solutions.
- Optimize Spark jobs and queries for performance, scalability, and cost-efficiency in a distributed environment.
- Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
- Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.
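The incremental transformation pattern described above is commonly implemented with Delta Lake's MERGE INTO statement. A minimal Spark SQL sketch follows; all table, column, and timestamp names are hypothetical, not taken from this posting:

```sql
-- Upsert newly arrived Bronze-layer changes into a Silver table.
-- Table and column names here are illustrative only.
MERGE INTO silver.customers AS tgt
USING (
  SELECT *
  FROM bronze.customer_changes
  WHERE ingest_ts > (
    -- High watermark: take only rows newer than the last Silver load.
    SELECT COALESCE(MAX(ingest_ts), TIMESTAMP '1900-01-01')
    FROM silver.customers
  )
) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET *       -- refresh customers that changed
WHEN NOT MATCHED THEN INSERT *;      -- add customers seen for the first time
```

Because MERGE INTO is atomic on a Delta table, reruns of the same batch do not create duplicates, which is what makes this the usual building block for near real-time Silver and Gold updates.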
All you need is...
- Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 4 years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL. (Must)
- Proven experience with incremental data loading into Databricks, leveraging Delta Lake features (e.g., time travel, MERGE INTO).
- Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
- Proficiency in T-SQL and experience in migrating SQL Server Stored Procedures to Databricks.
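The incremental loading techniques listed above typically hinge on tracking a high watermark per source table. A minimal, hypothetical Python helper for building the watermark filter (the function name and predicate format are assumptions for illustration, not part of any Databricks API):

```python
from datetime import datetime
from typing import Optional


def watermark_predicate(column: str, last_loaded: Optional[datetime]) -> str:
    """Build a SQL filter selecting only rows newer than the last load.

    On the first run there is no previous watermark, so return a
    predicate that matches every row and triggers a full load.
    """
    if last_loaded is None:
        return "1 = 1"  # first run: full load
    return f"{column} > TIMESTAMP '{last_loaded.isoformat(sep=' ')}'"
```

Such a predicate can then be passed to a Spark read (for example, as a pushed-down filter on a JDBC source) so that each scheduled run pulls only the delta since the previous run.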
Why you will love this job:
- Design and optimize large-scale data solutions using cutting-edge technologies.
- Work in a collaborative fast-paced environment on innovative projects.
- Gain hands-on experience with Azure Databricks, Airflow, and big data processing.
- Enjoy career growth, learning opportunities, and a supportive work culture.
- Benefit from comprehensive perks including health insurance and paid time off.
Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.
Required Experience:
Contract