We are looking for a Data Engineer for one of our clients based in Luxembourg.
- Freelance contract or permanent position via a payrolling company
- Minimum 5 years' experience
- Strong foundation in Python
Your responsibilities
- You will design, build, and optimize scalable Python-based ETL and ELT pipelines using pandas, Polars, or similar frameworks.
- You will develop efficient data ingestion pipelines supporting batch, incremental, and streaming use cases.
- You will integrate data pipelines with internal and external APIs, ensuring proper authentication, error handling, and high data quality.
- You will develop and maintain CI/CD pipelines for data workflows, library packaging, and infrastructure components.
- You will apply best-in-class data modelling approaches, including dimensional modelling, data vault, and domain-driven designs.
- You will take responsibility for the full application lifecycle, actively reducing architectural debt, communicating it transparently, and protecting the platform from excessive accumulation.
- You will deploy, operate, and monitor data pipelines in production using Docker, Kubernetes, and GitLab.
- You will enforce engineering best practices such as version control, automated testing, code quality standards, observability, and documentation.
- You will collaborate closely with data scientists to deliver reliable feature pipelines and high-quality datasets.
- You will partner with software engineers and platform teams to improve data services, APIs, and deployment processes.
- You will support troubleshooting and incident response across data pipelines, integrations, and infrastructure layers.
- You will actively participate in architecture discussions and contribute to defining the technical roadmap of the data platform.
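To illustrate the kind of pipeline work described above, here is a minimal, hedged sketch of a Python transform step in the pandas style the role mentions; the column names and data are illustrative assumptions, not part of the client's actual stack:

```python
import pandas as pd

# Illustrative extract step: in a real pipeline this would come from an
# API, database, or file landing zone rather than an inline DataFrame.
raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "amount": ["10.5", "20.0", "20.0", "oops"],
})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate, coerce amounts to numeric, and drop rows that fail validation."""
    out = df.drop_duplicates().copy()
    # errors="coerce" turns unparseable values (e.g. "oops") into NaN
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    return out.dropna(subset=["amount"])

clean = transform(raw)  # two valid rows remain: customers 1 and 2
```

The same pattern translates directly to Polars; in production it would be wrapped in an orchestrated asset or task (e.g. in Dagster) with logging, tests, and monitoring around it.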
Your profile
- You are a data engineering or backend engineering professional with at least 5 years of experience, bringing a strong data-focused background.
- You are highly proficient in Python, with expert-level skills covering pandas or Polars, typing, packaging, testing, and performance optimization.
- You are experienced with data orchestration frameworks, ideally Dagster, and you are comfortable working with Airflow or comparable tools.
- You are well-versed in data modelling concepts, ETL patterns, and performance considerations.
- You are proficient with Docker, Kubernetes, and GitLab CI/CD in production environments.
- You are hands-on with SQL and relational databases, with solid experience in MS SQL and exposure to PostgreSQL or Oracle.
- You are experienced with Azure data and compute services such as Databricks, ADLS, and ADF, and you have foundational knowledge of Azure networking, Azure AD, and security principles, or equivalent experience with AWS or GCP.
- You are familiar with distributed systems, microservices, and API-driven architectures.
- You are committed to automation, reproducibility, and DevOps practices applied to data engineering.
- You are capable of refining requirements and collaborating closely with stakeholders.
- You ideally bring experience from regulated environments such as finance, insurance, or healthcare; this is considered a strong advantage.
- You are a clear communicator, able to collaborate effectively with cross-functional stakeholders.
- You are a proactive and detail-oriented problem solver with a strong product-oriented mindset.