We are seeking highly motivated and talented people to join the Boerboel team. As a member of the team, you will work with passionate and curious colleagues who thrive on solving complex problems. Boerboel is an office-first firm. While we offer flexibility to our employees, remote work is not available at this time.
As a Data Engineer, you will work closely with researchers on our data framework, which underpins the firm's analytics and research. In addition to supporting the data needs of teams dedicated to specific strategies, you will also routinely address data management challenges relevant across the firm.
Essential Requirements:
- Proficient in Python with a strong command of data manipulation and analysis libraries such as Polars, Pandas, and NumPy.
- Experience acquiring, validating, and integrating third-party data sets. Familiarity with multiple asset classes and their distinct features, such as corporate actions.
- Ability to prioritize and deliver with limited resources. Ability to proactively identify, propose, and drive technical solutions, both independently and as part of a team.
Skills:
- Experienced with database systems, query engines, and data warehousing. Experienced managing and optimizing databases that handle large amounts of structured data.
- Proficient in implementing and operating complex ETL pipelines. Proficient in managing interdependent data acquisition, validation, integration, and processing tasks.
- Experienced troubleshooting, tuning, and operating data management systems.
- Experienced with Linux. Comfortable navigating and interfacing with POSIX filesystems on the command line.
- Strong problem-solving and troubleshooting skills.
Job Duties:
- Maintain and improve the research framework for multiple asset classes.
- Optimize the collection, processing, validation, manipulation, and presentation of static and historical data.
- Apply corporate actions to prices, portfolios, ETF baskets, reference data, and other relevant datasets.
- Create and maintain datasets across asset classes.
- Develop and deploy validation tests as part of a larger data pipeline.
- Contribute features and performance improvements to the query engine used by the team.