Job Title: Senior Hybrid Cloud Data Engineer
Location: Dallas, TX (5 days onsite; local candidates within 30 miles only; senior profiles with 12+ years of experience)
Duration: 6-12 months
Overview: We are looking for a highly skilled Hybrid Cloud Data Engineer to bridge the gap between legacy systems and modern cloud infrastructure. The ideal candidate will leverage the scalability and flexibility of the cloud while maintaining the control and reliability of on-prem systems, supporting critical decision-making and innovation.
Key Responsibilities:
Data Pipeline Development:
Build and manage ETL (Extract, Transform, Load) pipelines to move data between on-premises systems and cloud platforms.
Ensure pipelines are efficient, scalable, and capable of handling large volumes of data.
System Integration:
Design and implement solutions that enable interoperability between on-prem systems and cloud platforms in hybrid cloud models.
Facilitate data synchronization, ensuring consistency and availability across both environments.
Data Storage and Management:
Manage storage solutions for both on-prem and cloud systems, balancing performance, cost, and reliability.
Optimize the use of cloud-based storage (e.g., Amazon S3, Azure Blob Storage) alongside on-prem database systems.
Security and Compliance:
Implement robust security measures to safeguard data as it moves between on-prem and cloud environments.
Ensure compliance with regulatory and organizational policies, particularly in industries such as finance and healthcare.
Performance Optimization:
Monitor and fine-tune data processing workflows to minimize latency and maximize efficiency.
Leverage cloud-native tools (e.g., AWS Glue, Azure Data Factory) alongside on-prem tools for streamlined operations.
Skills and Expertise:
Cloud Platforms: Proficiency in AWS, Azure, or Google Cloud, with knowledge of hybrid deployment patterns.
On-Prem Systems: Strong understanding of traditional database systems (e.g., SQL Server, Oracle) and data warehouses.
Programming:
Experience with languages such as Python, Java, or Scala for data processing and automation.
Containerization:
Familiarity with Docker and Kubernetes for managing hybrid workloads.
Networking:
Understanding of VPNs, firewalls, and other networking principles for hybrid connectivity.
Big Data Tools:
Knowledge of distributed processing tools such as Hadoop, Spark, or Kafka.