RQ09822 DataOps/Cloud Data Engineer - Senior

Maarut


Job Location:

Toronto - Canada

Monthly Salary: Not Disclosed
Experience Required: 10 years
Posted on: 2 hours ago
Vacancies: 1 Vacancy

Job Summary

Responsibilities:

  • Designing and developing data pipelines from source to end user
  • Optimizing data pipelines


General Skills:

  • Experience with cloud data platforms, data management, and data exchange tools and technologies.
  • Experience with commercial and open-source data and database development and management, specializing in data storage: setting up and managing cloud Data as a Service (DaaS), application Database as a Service (DBaaS), Data Warehouse as a Service (DWaaS), and other storage platforms (both in the cloud and on-premises).
  • Experience with data pipeline and workflow development, orchestration, deployment, and automation, specializing in programming pipelines that create and manage the flow and movement of data.
  • Familiarity and experience with multiple programming languages, with the ability to integrate many different platforms to create data pipelines, automate tasks, and write scripts.
  • Experience with DataOps principles, best practices, and implementation, and with Agile project development and deployment.
  • Experience with Continuous Integration/Continuous Deployment (CI/CD) and data provisioning automation.
  • Experience with digital product data analysis, data exchange, data provisioning, and data security.
  • Extensive expert experience designing, developing, and implementing data conversion and migration of very large data (VLD) from online analytical processing (OLAP) and online transaction processing (OLTP) environments to cloud Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) environments.
  • Experience with design, development, and implementation of fact/dimension models, data mapping, data warehouses, data lakes, and data lakehouses for the enterprise.
  • Experience managing cloud data services for project delivery, including storage repositories, data lake, data lakehouse, key vault, virtual machine, disk, etc.
  • Experience with structured, semi-structured, and unstructured data collection, ingestion, provisioning, and exchange; technological development of enterprise data warehouse, data lake, and data lakehouse solutions; and operational support.
  • Experience with DataOps performance monitoring and tuning.
  • Excellent analytical, problem-solving, and decision-making skills; verbal and written communication skills; presentation skills; interpersonal and negotiation skills.
  • A team player with a track record of meeting deadlines.


Requirements

Experience and Skill Set Requirements:

Must Haves:

  • Design, build, automate, and optimize complex data ETL/ELT processes using Databricks Delta Live Tables (DLT) or equivalent (a minimal sketch follows this list).
  • Development of unified data platforms using Databricks and Unity Catalog or their equivalents.
  • Relational databases (Oracle, MySQL, SQL Server), data modelling (relational & dimensional), advanced SQL, query optimization, data replication, and administration.
  • Advanced SQL skills (PL/SQL, T-SQL).
  • Big data processing frameworks (PySpark).
  • Proficiency in creating Azure Data Factory pipelines for copy activity and custom data pipelines in Databricks with CI/CD (Azure DevOps Git integration).
  • Data migration and data integration across different platforms: Oracle to Azure Data Lake, Oracle to Databricks, Oracle to Microsoft Fabric.
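
For orientation only, a minimal sketch of a DLT pipeline of the kind this role covers: it assumes a Databricks workspace where the `dlt` module is available, and all table names, paths, and columns are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested as-is (bronze layer).")
def orders_bronze():
    # Auto Loader incrementally picks up new files from cloud storage.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")  # hypothetical landing path
    )

@dlt.table(comment="Cleaned, typed, de-duplicated orders (silver layer).")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the check
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )
```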


Desirable Skills/Experience:

  • Data lake design and development, with a focus on extracting and transforming data from various sources and loading it into a lakehouse in a medallion architecture (see the sketch after this list).
  • Advanced skills and hands-on experience with Oracle databases, Azure SQL Server, and Azure Data Factory.
  • Cloud technologies (Azure, Google, AWS).
  • PL/SQL for data extraction, transformation, and loading; initial setup; and other ETL management and support experience such as troubleshooting, performance tuning, failover, and recovery.
  • Scripting languages (Python, Unix shell, Scala).
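
A minimal sketch of a medallion-style gold-layer aggregation in PySpark, assuming silver data already lands as Delta tables; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gold-daily-sales").getOrCreate()

# Read the cleaned silver table and roll it up to one row per day.
daily_sales = (
    spark.read.table("silver.orders")  # hypothetical silver Delta table
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("total_sales"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Overwrite the gold table so downstream consumers see a complete snapshot.
daily_sales.write.mode("overwrite").saveAsTable("gold.daily_sales")
```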




Company Industry

IT Services and IT Consulting

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala