


Netherlands (Hybrid)



What We're Looking For

  • Curiosity – a proven ability to take initiative and innovate

  • An analytical mind with a problem-solving aptitude

  • An independent thinker with a desire to share knowledge across the development community

The Role

  • Design, implement and manage data pipelines that ingest vast amounts of data from various sources

  • Enhance end-to-end workflows with automation, CI/CD processes, proper orchestration and monitoring

  • Innovate and advise on the latest technologies and standard methodologies in Data Engineering and be able to identify software solutions that can address hurdles in client organizations

  • Act as a technical leader in resolving problems, communicating with both technical and non-technical audiences

  • Mentor, coach, and steer colleagues through technical challenges

  • Work independently and self-steer initiatives, influencing the community of developers

  • Take ownership of project work and develop client relationships

  • Be a confident self-starter! Independently plan, design, code, debug and test major features, ensuring issues are identified early and requirements are delivered

  • Identify technical areas for improvement and create business cases for improvement

Desired Skills and Experience

  • BS/MS in Computer Science, or related field with 7+ years of software/data engineering experience

  • In-depth understanding of data lake architectures and experience implementing them; data mesh architecture a plus

  • Experience working across cloud providers (e.g., AWS, Azure, GCP) a plus

  • Experience in orchestration technologies (e.g., Airflow, AWS Step Functions)

  • Excellent knowledge of programming languages (e.g., Python, Scala, SQL)

  • Hands-on experience in application deployment (e.g., Docker, container registries, AKS)

  • Hands-on experience with CI/CD tooling (e.g., GitHub Actions, GitLab CI/CD, Travis CI)

  • Technical expertise with data modeling and mining techniques

  • Experience within the Apache Hadoop ecosystem (e.g., Kafka, Spark, Hive)

  • Experience with data warehousing technologies (e.g., Snowflake, BigQuery, Synapse)

  • Experience managing and provisioning Infrastructure-as-Code (e.g., Terraform, Ansible)

  • Experience implementing proper platform/pipeline logging and monitoring, and an understanding of why it matters

  • Experience with data governance initiatives and/or integrating data quality/data catalogue/MDM solutions a plus

  • Proficiency with modern software development practices such as Agile, source control, and project management and issue tracking with Jira

  • You are fluent in English. Dutch is a great plus.

What We Offer

  • Flexibility: We live in a digital age – so of course we offer a flexible/hybrid working arrangement, depending on the client engagement.

  • Work-Life Balance: People come first, and we mean it. We work hard, but we also know you have a life outside work that we want you to enjoy.

  • Growth and Development: We're dedicated to ensuring our people have the skills, training, and support to develop and grow. Fridays are for you: we hold knowledge-sharing sessions and bring in training to keep our people up to date with the latest technologies and best practices.

  • Competitive Compensation: We know that our people are the key to our success, and our compensation packages reflect this. We offer top-notch equipment, a company car, and an uncapped training budget.

  • Changing Pace and Challenges: As a consulting firm, we offer the unique opportunity to tackle varied challenges across industries and organizations. You will work with small and large companies, with data journeys at every stage of maturity, and on a wide range of potential use cases.
