Senior Data Migration Engineer

Job Location

Mumbai - India

Monthly Salary

Not Disclosed


Vacancy

1 Vacancy

Job Description

About Oracle FSGIU - Finergy:

The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary:

We are seeking a skilled Senior Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.
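Migrations of this kind are typically validated by reconciling source and target after each load. A minimal sketch in Python of one such check (the row data and helper names are illustrative assumptions, not part of this posting; a real pipeline would run an equivalent check against the legacy database and the cloud target):

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint of a table: row count plus an
    XOR-combined SHA-256 digest over each row's serialized form."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(tuple(row)).encode()).digest()
        digest ^= int.from_bytes(h, "big")
    return len(rows), digest

def reconcile(source_rows, target_rows):
    """Return True when the migrated target matches the legacy source."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)

# Hypothetical sample data standing in for a legacy and a migrated table.
legacy = [(1, "alice"), (2, "bob")]
migrated = [(2, "bob"), (1, "alice")]  # same rows, different load order
assert reconcile(legacy, migrated)
```

Because the fingerprint is order-independent, the check tolerates the row reordering that bulk cloud loads commonly introduce, while still catching dropped, duplicated, or altered rows.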

Job Responsibilities

Software Development:

  • Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
  • Implement efficient and maintainable code using best practices and coding standards.

AWS & Databricks Implementation:

  • Work with the Databricks platform for big data processing and analytics.
  • Develop and maintain ETL processes using Databricks notebooks.
  • Implement and optimize data pipelines for data transformation and integration.
  • Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines.
  • Leverage PySpark for large-scale data processing and transformation tasks.
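The pipeline work described above follows the usual extract-transform-load shape. A minimal stand-in sketch in plain Python (sqlite3 substitutes for Redshift and a CSV string for an S3 export; the table and column names are illustrative assumptions — a production pipeline would use PySpark DataFrames with Glue/S3 connectors instead):

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse a raw CSV export from the legacy system."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(records):
    """Transform: normalize types and drop rows failing basic checks."""
    out = []
    for r in records:
        if not r["account_id"]:
            continue  # reject rows missing the primary key
        out.append((int(r["account_id"]),
                    r["name"].strip().title(),
                    float(r["balance"])))
    return out

def load(rows, conn):
    """Load: bulk-insert into the target store (sqlite3 stands in for Redshift)."""
    conn.execute("CREATE TABLE IF NOT EXISTS accounts "
                 "(account_id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
    conn.commit()

raw = "account_id,name,balance\n1, alice ,100.5\n,ghost,0\n2,BOB,7\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
```

Keeping extract, transform, and load as separate functions mirrors how such steps are usually split across Databricks notebook cells, which makes each stage independently testable.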

Continuous Learning:

  • Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks.
  • Share knowledge with the team and contribute to a culture of continuous improvement.

SQL Database Management:

  • Utilize expertise in SQL to design, optimize, and maintain relational databases.
  • Write complex SQL queries for data retrieval, manipulation, and analysis.
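A representative example of the analytical SQL this involves is deduplicating staged records after a migration, keeping only the latest version of each key with a window function. A sketch run through Python's sqlite3 for self-containment (the `staged` table and its columns are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staged (account_id INTEGER, balance REAL, loaded_at TEXT);
INSERT INTO staged VALUES
  (1, 100.0, '2024-01-01'),
  (1, 120.0, '2024-02-01'),
  (2,  50.0, '2024-01-15');
""")

# Keep only the most recently loaded row per account: a typical
# post-migration dedup query using ROW_NUMBER() over a partition.
rows = conn.execute("""
    SELECT account_id, balance
    FROM (
        SELECT account_id, balance,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id ORDER BY loaded_at DESC
               ) AS rn
        FROM staged
    )
    WHERE rn = 1
    ORDER BY account_id
""").fetchall()
print(rows)  # → [(1, 120.0), (2, 50.0)]
```

The same query shape carries over to Redshift and Databricks SQL, both of which support standard window functions.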

Qualifications & Skills:

  • Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
  • 6 to 10 years of experience in Databricks and big data frameworks.
  • Proficiency in AWS services and data migration.
  • Experience with Unity Catalog.
  • Familiarity with batch and real-time processing.
  • Data engineering experience with strong skills in Python, PySpark, and SQL.
  • Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced agile environment.


Qualifications

Career Level - IC3




Required Experience:

Senior IC

Employment Type

Full-Time
