Job Title: Data Engineer (Big Data)
Location: Alpharetta, GA / NY / NJ (Day 1 Onsite)
About the Role
Valuetechnology is seeking a highly skilled Big Data Engineer to design, develop, and optimize large-scale data pipelines and platforms. This role focuses on building secure, resilient, and high-performance data engineering solutions that support enterprise analytics, regulatory reporting, risk management, and business intelligence initiatives.
Responsibilities
- Design, develop, and maintain end-to-end data pipelines using Big Data technologies (Hadoop, Spark, Hive, Kafka, HBase, etc.).
- Build scalable ETL/ELT workflows for batch and real-time data ingestion from structured and unstructured data sources (see the sketch following this list).
- Work with cloud and on-prem platforms (Azure / GCP / Wells Fargo internal platforms).
- Implement data governance, data lineage, metadata management, and quality controls in alignment with enterprise standards.
- Optimize data storage solutions using HDFS, S3, Delta Lake, Parquet, or other relevant technologies.
- Partner with cross-functional teams (Data Scientists, Analysts, Product, Risk, Compliance) to deliver high-quality data solutions.
- Develop automation, monitoring, CI/CD pipelines, and infrastructure-as-code where applicable.
- Ensure all solutions meet security, privacy, and regulatory requirements (SOX, GDPR, internal data handling policies).
- Troubleshoot performance and reliability issues within data pipelines and distributed systems.
- Contribute to the architecture and design of next-generation data platforms.
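To make the batch responsibilities above concrete, here is a minimal PySpark ETL sketch of the kind of job this role would build. It is illustrative only: the input/output paths, app name, and column names (event_id, event_ts) are hypothetical placeholders, not details from this posting.

```python
# A minimal, illustrative sketch only; paths, app name, and column names
# are hypothetical placeholders, not part of this role's actual systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-etl-example").getOrCreate()

# Ingest raw JSON events from the data lake (placeholder path).
raw = spark.read.json("hdfs:///data/raw/events")

# Basic quality controls: drop malformed rows, deduplicate on a key,
# and derive a partition column (assumed columns: event_id, event_ts).
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Land curated, partitioned Parquet for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/events"))

spark.stop()
```

In practice a job like this would be scheduled through an orchestrator such as Oozie or Airflow, both of which appear in the qualifications below.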
Required Qualifications
- 5 years of experience in Data Engineering, Software Engineering, or a related field.
- Strong expertise in Big Data ecosystem technologies: Hadoop, Spark (PySpark/Scala), Hive, Kafka, Sqoop, Oozie, Airflow (a streaming sketch follows this list).
- Proficiency with SQL, distributed computing, and data modeling.
- Experience with cloud-based data services (Azure Data Lake, Databricks, GCP BigQuery, etc.).
- Hands-on experience building scalable ETL/ELT pipelines in enterprise environments.
- Strong knowledge of Unix/Linux shell scripting and version control tools (Git).
- Experience working in large financial institutions or regulated environments (preferred but not required).
- Experience with Python/Scala for data processing.
- Knowledge of machine learning pipelines and data preparation for analytics.
- Understanding of risk management, financial data domains, or regulatory reporting.
- Familiarity with CI/CD tools (Jenkins, Azure DevOps, GitLab).
- Experience with containerization (Docker, Kubernetes) is a plus.
- Strong problem-solving, communication, and collaboration skills.
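For the real-time ingestion side of the stack listed above, here is a comparable sketch using Spark Structured Streaming with Kafka. The broker address, topic name, and output paths are assumptions for illustration, and it presumes the spark-sql-kafka connector package is on the Spark classpath.

```python
# Illustrative only; broker, topic, and paths are hypothetical, and the
# spark-sql-kafka connector package is assumed to be available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-ingest-example").getOrCreate()

# Subscribe to a Kafka topic (placeholder broker and topic name).
stream = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to a string column.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Write micro-batches as Parquet with checkpointing for fault tolerance.
query = (
    parsed.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/streaming/events")            # placeholder
          .option("checkpointLocation", "hdfs:///checkpoints/events")  # placeholder
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```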