Data Engineer

Tiimely


Job Location: Adelaide - Australia

Monthly Salary: Not Disclosed
Posted on: Yesterday
Vacancies: 1 Vacancy

Job Summary

About Us

Tiimely is a platform technology company leading the future of financial assessment through predictive and explainable AI and configurable API-led solutions. Our proprietary technology automates financial assessment and credit decisioning, enabling faster, more accurate outcomes across a range of use cases, and is licensed by large banks, fintechs, ASX-listed brands and our own in-house retail business, Tiimely Home.

Headquartered in Adelaide with a national team of 130, we're on a mission to transform how Australians access credit through cutting-edge technology and a human-first approach. As a certified B Corp, our values (Time to be Human, Time to Take Responsibility, Time to be Transparent and Time to Build Good Bonds) shape how we build products and how we work together.

The Position

We are looking for a Data Engineer who can independently build and improve the trusted data platform capabilities that support analytics, reporting, data science and decision-making across our business.

As a Data Engineer at Tiimely, you'll design, build and operate data pipelines, curated data assets and platform capabilities that support analytics, reporting and downstream data consumption. You'll contribute to the evolution of a modern open lakehouse environment and help ensure data products are reliable, well-governed and aligned with business, operational and regulatory needs.

You'll work closely with data specialists, software engineers, business stakeholders and analytics consumers to translate requirements into trusted and usable data outcomes. Our platform processes real credit applications in a regulated financial services context; understanding how your data assets support lending workflows, compliance requirements and business decision-making is part of what makes this role distinctive.

Your responsibilities will include:

  • Design, build and maintain scalable pipelines that ingest, process, transform and deliver data from a range of internal and external source systems.
  • Develop curated datasets and data models to support analytics, reporting and downstream consumption.
  • Implement and maintain transformation workflows across batch and platform-driven processing patterns.
  • Contribute to the development of Tiimely's open lakehouse architecture, including medallion-aligned data layers and open table formats such as Apache Iceberg.
  • Monitor pipeline health, investigate data issues and improve reliability, performance and recoverability across the data platform.
  • Implement data quality controls, validation, reconciliation and observability throughout the data lifecycle.
  • Apply sound data modelling practices, including dimensional modelling and analytical data structures.
  • Contribute to schema management, metadata quality, lineage awareness and controlled access to data assets.
  • Work with sensitive data in accordance with governance requirements, including PII handling, de-identification and data retention obligations.
  • Partner with data scientists, BI analysts, software engineers and business stakeholders to deliver trusted data products.
  • Participate in design discussions, code reviews and continuous improvement initiatives relating to the data platform.
  • Stay current with evolving data engineering practices, tools and technologies, and share knowledge with peers.

What We're Looking For

Required

  • Demonstrated experience in a Data Engineering or similar mid-level role.
  • Solid SQL and Python capability.
  • Sound understanding of data modelling, including dimensional modelling and analytical data structures.
  • Good understanding of ETL and ELT patterns and the practical trade-offs of each.
  • Familiarity with modern data platform concepts including lakehouse architecture and open table formats such as Apache Iceberg.
  • Experience with data engineering tooling including dbt and workflow orchestration platforms such as Apache Airflow.
  • Experience working in Agile development environments.
  • Familiarity with version control, testing, CI/CD and collaborative engineering practices.
  • Awareness of data governance concepts, including PII handling, de-identification, retention and secure use of regulated data.
  • Solid analytical thinking, debugging and problem-solving capability.
  • Clear written and verbal communication skills including the ability to collaborate effectively with technical and non-technical stakeholders.

Desirable

  • Experience with AWS data services or other cloud-based data platforms.
  • Exposure to BI tools such as Power BI or Tableau.
  • Familiarity with data observability, lineage and platform monitoring practices.
  • Exposure to real-time or near real-time data processing patterns.
  • Experience leveraging AI tools and technologies to enhance data engineering productivity or problem-solving.
  • Experience in fintech, financial services or another regulated environment.

What's In It For You

  • Competitive remuneration package with employee share plans that reward individual and company success.
  • Flexible working arrangements including hybrid options.
  • Ongoing learning and development to support your professional growth.
  • A modern office environment designed for a scaling fintech.
  • A supportive and inclusive team culture backed by our B Corp certification and values-driven approach.

If this sounds like you, we'd love to hear from you. Even if you don't tick all the boxes, we encourage you to apply.

If you have any questions or need adjustments to our hiring process, please get in touch with Karam Singh at


Required Experience:

Manager


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala