Data Engineer

Sydney, New South Wales, Australia | Engineering | Full-time | Partially remote

Quantium

 

Since 2002, Quantium has combined the best of human and artificial intelligence to power possibilities for individuals, organisations and society. Our solutions make sense of what has happened and what will, could or should be done to reshape industries and societies around the needs of the people they serve.

 

As one of the world’s fully diversified data science and AI leaders, we operate across every sector of the economy and we’re growing fast; with growth comes opportunity! We’re passionate about building out our team of smart, fun, diverse and motivated people.

 

Our team of experts spans data scientists, actuaries, statisticians, business analysts, strategy consultants, engineers, technologists, programmers, product developers and futurists, all dedicated to harnessing the power of data to drive transformational outcomes for our clients.

 

Role Summary

 

Times and technology have changed, but our goal remains the same. Instead of wrangling single, SQL-based databases, our data technology stacks have evolved to include the following (a short Scala/Spark sketch follows the list):

 

  • Snowflake + Matillion
  • Scala/Spark processing on Hadoop and/or Kubernetes
  • Scaled scoring workloads on Kubernetes, based on Python or Scala/Spark
  • Microsoft SQL Server / Integration Services / Analysis Services
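
To give a flavour of the kind of work involved, here is a minimal sketch of a Scala/Spark batch transformation of the sort our pipelines are built from. The dataset, column names and paths are hypothetical and purely illustrative.

// Minimal, illustrative Spark batch job (hypothetical dataset, columns and paths).
// It reads raw transaction data, applies a simple business rule, and writes a
// partitioned, analytics-ready table.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailySalesPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-sales-pipeline")
      .getOrCreate()

    // Hypothetical source path; in practice this could sit on HDFS, object storage or a Snowflake stage.
    val transactions = spark.read.parquet("/data/raw/transactions")

    // A simple business rule: keep completed sales and aggregate by store and day.
    val dailySales = transactions
      .filter(col("status") === "COMPLETED")
      .withColumn("sale_date", to_date(col("transaction_ts")))
      .groupBy("store_id", "sale_date")
      .agg(sum("amount").as("total_sales"), count("*").as("transaction_count"))

    // Partitioned output keeps downstream analytical queries fast.
    dailySales.write
      .mode("overwrite")
      .partitionBy("sale_date")
      .parquet("/data/curated/daily_sales")

    spark.stop()
  }
}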

 

In this role you’ll be part of a team that delivers high-quality data processing pipelines quickly and reliably, enabling Quantium to move fast and scale broadly without sacrificing engineering integrity or the reliability and maintainability of our systems.

 

We work in multidisciplinary teams, so you’ll be working alongside Data Scientists, Analysts, Delivery Managers, Product Managers, Testers and DevOps engineers.

 

About you

 

  • We’re looking for data engineers who have experience working with large, complex data sets, knowledge and experience in one or more data processing technologies, and a solid understanding of data modelling concepts
  • Ideally you started your career working with SQL Server and traditional ETL technologies, but have recently gained exposure to and experience in building data pipelines using Spark/Scala
  • You’re a pragmatist, a true engineer who loves solving complex problems
  • You take a balanced approach: you know that data processing is highly dependent on business rules and the characteristics inherent in the data, but you’re also driven to find patterns, synergies and efficiencies that enable scale

 

Skills & experience

 

  • Experience implementing data warehousing solutions in a large, commercial environment using Microsoft SQL Server (T-SQL, SSIS); this is a foundational requirement for the role
  • Experience working with large, complex data sets
  • A solid foundation in: dimensional data mart modelling concepts, data architectures for analytical processing and data modelling abstraction approaches
  • An awareness of considerations around structuring data on distributed systems to support analytic use cases
  • The ability to contribute to design decisions
  • The ability to work on stories autonomously
  • A love of knowledge sharing: you know what works, but you’re also happy to learn new methods and technologies