Job Description:
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities and shareholders every day.
Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being a diverse and inclusive workplace, attracting and developing exceptional talent, supporting our teammates' physical, emotional and financial wellness, recognizing and rewarding performance, and how we make an impact in the communities we serve.
At Bank of America, you can build a successful career with opportunities to learn, grow and make an impact. Join us!
Responsibilities:
Develop and deliver data solutions to accomplish technology and business goals.
Perform code design and delivery tasks associated with the integration, cleaning, transformation and control of data in operational and analytics data systems.
Work with stakeholders, Product Owners and Software Engineers to aid in the implementation of data requirements, performance analysis, research and troubleshooting.
Work with data engineering practices and contribute to story refinement and defining requirements.
Participate in estimating work necessary to realize a story/requirement through the delivery lifecycle.
Code solutions to integrate, clean, transform and control data in operational and/or analytics data systems per the defined acceptance criteria.
Use Java, Scala, Python, Apache Kafka architecture, and Cloudera architecture components and ecosystem to maintain system operations, enhance existing data processing routines and innovate new methods using the latest offerings.
Create advanced data pipelines using Kafka APIs, including producers, consumers and Kafka Streams.
Utilize Kafka Connect, Schema Registry and KSQL to perform inline data enrichments and calculations and build near-real-time data products.
Develop objects and metadata for the integration of CDC data using fully transactional Hive managed tables.
Perform troubleshooting and performance tuning on the Cloudera Data Platform.
Perform Python and Spark development with an emphasis on Spark performance tuning and advanced Spark multi-parallel processing applications.
Remote work may be permitted within a commutable distance from the worksite.
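The pipeline work described above follows the Kafka Streams / KSQL pattern of consuming records, enriching them inline against reference data, and emitting derived products. A minimal, library-free Python sketch of that enrichment step (no Kafka client is used; the record shape, field names and in-memory lookup table are all hypothetical, chosen only to illustrate the pattern):

```python
# Illustrative sketch of an inline stream-enrichment step, in the spirit of
# Kafka Streams / KSQL processing. Records are plain dicts and the reference
# table is an in-memory lookup; both are hypothetical stand-ins.

REFERENCE = {  # enrichment lookup, e.g. branch metadata
    "BR-01": {"region": "Northeast"},
    "BR-02": {"region": "Southwest"},
}

def enrich(record: dict) -> dict:
    """Join one record against reference data and derive a calculated field."""
    meta = REFERENCE.get(record["branch_id"], {"region": "unknown"})
    return {
        **record,
        "region": meta["region"],                               # inline enrichment
        "amount_usd_cents": round(record["amount_usd"] * 100),  # inline calculation
    }

def process_stream(records):
    """Apply the enrichment to every record, as a per-message processor would."""
    return [enrich(r) for r in records]
```

In a real deployment the loop would be driven by a Kafka consumer with the output published back to a topic, and KSQL can express the same join-and-derive logic declaratively as a streaming query.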
Required Skills & Experience:
Master's degree or equivalent in Computer and Information Science, Management Information Systems, Engineering (any) or a related field; and
2 years of experience in the job offered or a related IT occupation.
Must include 2 years of experience in each of the following:
Using Java, Scala, Python, Apache Kafka architecture, and Cloudera architecture components and ecosystem to maintain system operations, enhance existing data processing routines and innovate new methods using the latest offerings;
Creating advanced data pipelines using Kafka APIs, including producers, consumers and Kafka Streams;
Utilizing Kafka Connect, Schema Registry and KSQL to perform inline data enrichments and calculations and build near-real-time data products;
Developing objects and metadata for the integration of CDC data using fully transactional Hive managed tables;
Performing troubleshooting and performance tuning on the Cloudera Data Platform; and
Performing Python and Spark development with an emphasis on Spark performance tuning and advanced Spark multi-parallel processing applications.
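The CDC-integration requirement above amounts to applying insert/update/delete change events to a keyed table, which is the merge semantics that fully transactional Hive managed tables provide. A minimal Python sketch of that apply logic (an in-memory dict stands in for the Hive table; the event shape with op/key/row fields is hypothetical):

```python
# Illustrative sketch of applying CDC events to a keyed table, mirroring the
# MERGE semantics of a fully transactional Hive managed table. The "table" is
# an in-memory dict; the event shape (op/key/row) is a hypothetical stand-in.

def apply_cdc(table: dict, events: list) -> dict:
    """Apply change events in order, keyed by primary key."""
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            table[key] = ev["row"]      # upsert the new row image
        elif op == "delete":
            table.pop(key, None)        # remove the row if present
        else:
            raise ValueError(f"unknown CDC op: {op}")
    return table
```

In Hive this logic is typically expressed as a single MERGE INTO statement against the managed table, with the change feed staged from Kafka into a landing table first.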
If interested, apply online at or email your resume to and reference the job title of the role and the requisition number.
Employer: Bank of America N.A.
Shift: 1st shift (United States of America)
Hours Per Week: 40
Required Experience: Chief
Full-Time