Job Title: Control-M Developer (Data Engineering)
Location: Remote
Experience: 10 years
Duration: 6 Months
Employment Type: Contract
Note: Attached is a profile selected internally at RL for this role. Need someone better than him.
Job Summary
We are seeking an experienced Control-M Developer with strong Data Engineering expertise to design, develop, and manage enterprise-scale batch scheduling and data pipeline workflows. The ideal candidate will have hands-on experience in Control-M automation, data integration, and end-to-end data pipeline orchestration across modern data platforms.
Key Responsibilities
Control-M & Scheduling
- Design, develop, and maintain Control-M job flows for complex batch and data workflows
- Create and manage job dependencies, calendars, conditions, and alerts
- Monitor and troubleshoot batch failures and resolve performance issues
- Implement job automation, reruns, recovery, and SLA management
- Collaborate with application, data, and infrastructure teams to ensure seamless scheduling
Data Engineering
- Develop and support data pipelines using ETL/ELT tools and scripting
- Integrate Control-M with data platforms such as:
- Data warehouses (Snowflake, Redshift, BigQuery, Teradata, Oracle, SQL Server)
- Big data ecosystems (Hadoop, Spark)
- Orchestrate workflows involving file transfers (SFTP/FTP), APIs, and cloud storage
- Support data ingestion, transformation, validation, and downstream consumption
- Ensure data quality, reliability, and performance across pipelines
Cloud & Automation
- Schedule and monitor workloads in AWS / Azure / GCP environments
- Integrate Control-M with cloud-native services (S3, ADLS, Lambda, Databricks, etc.)
- Use shell scripting or Python for automation and data processing
- Implement CI/CD best practices for job and pipeline deployments
Required Skills & Qualifications
Must-Have
- 5 years of hands-on experience as a Control-M Developer/Scheduler
- Strong experience in Data Engineering or ETL development
- Proficiency in Unix/Linux shell scripting
- Strong SQL skills for data validation and troubleshooting
- Experience supporting enterprise batch processing environments
Nice-to-Have
- Experience with Python, Spark, or Scala
- Exposure to cloud-based data platforms (Snowflake, Databricks, BigQuery)
- Knowledge of CI/CD tools (Git, Jenkins, Azure DevOps)
- Experience with monitoring tools and SLA reporting
- Control-M certification is a plus
Soft Skills
- Strong analytical and troubleshooting skills
- Ability to work in fast-paced, production-critical environments
- Excellent communication and cross-team collaboration
- Ownership mindset and strong attention to detail
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work:
Yes
Employment Type:
Contract