AdTech
DataOps Engineer
Krakow (Poland)
Hybrid
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Bigabid’s platform processes 50 TB+ of raw data daily, handles 4M+ requests/second, and reaches over 1 billion unique users weekly. As a DataOps Engineer, you will be the reliability guardian of this data ecosystem – catching issues before they hit production, responding to operational incidents, and building the automation and infrastructure that keeps everything running.
About the Role:
We are looking for a DataOps Engineer to ensure the reliability and quality of Bigabid’s data ecosystem. You will build and maintain monitoring systems, enforce data quality standards, investigate and resolve pipeline issues, and manage metadata.
Day-to-day work combines planned sprint tasks (infrastructure development, pipeline improvements, and test coverage) with reactive operational work (monitoring alerts, incident triage, and ad-hoc requests).
You are expected to own tasks end-to-end, proactively clarify requirements when needed, and independently drive work to completion without close supervision.
Key Responsibilities:
- Design and maintain real-time monitoring and alerting for pipeline health, freshness, and accuracy
- Detect and investigate anomalies before they reach downstream consumers
- Implement automated data quality tests, validation checks, and data contracts using Python and testing frameworks
- Triage operational incidents: identify root causes, document findings, and escalate when needed
- Execute ad-hoc operational tasks with Airflow, Python, and SQL; apply config changes to meet business needs
- Own the metadata store (tables, columns, lineage, sources) — keep documentation clear and current
- Optimize existing workflows for performance, reliability, and scale
- Collaborate with Data Engineering, Operations, and Business teams on quality metrics and governance
Required Competence and Skills:
- Minimum 3 years of experience as a Data Engineer or in a similar data-focused role.
- Proficient in Python and Apache Airflow for pipeline development and orchestration.
- Strong SQL skills and hands-on experience with databases and data warehouses (e.g., MySQL, Presto, Athena, MemSQL).
- Practical experience with PySpark for large-scale data processing.
- Demonstrated attention to data quality and proactive identification of anomalies.
- Experience building or working with data monitoring, alerting, or observability tools (e.g., Monte Carlo, Prometheus, Datadog).
- Bachelor’s degree in Computer Science, Mathematics, Physics, Engineering, Statistics, or a related technical field.
Nice to Have:
- Experience as a NOC or DevOps engineer.
- Familiarity with data governance frameworks or metadata management tools.
- Experience with data quality platforms such as Great Expectations, dbt tests, or Monte Carlo.
Why Us:
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We operate a fully remote work model, providing a powerful workstation and a co-working space if you need one.
We offer a highly competitive package with yearly performance and compensation reviews.