IT Services
Big Data Engineer
Bulgaria, Poland, Romania
Remote
Who We Are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
The product is an enterprise-grade digital experience platform that provides real-time visibility into system performance, application stability, and end-user experience across on-premises, virtual, and cloud environments. It ingests large volumes of telemetry from distributed agents on employee devices and infrastructure, processes and enriches data through streaming pipelines, detects anomalies, and stores analytical data for monitoring and reporting. The platform serves a global customer base with high throughput and strict requirements for security, correctness, and availability. Rapid adoption has driven significant year-over-year growth and demand from large, distributed teams seeking to secure and stabilize digital environments without added complexity.
About the Role:
This is a true Big Data engineering role focused on designing and building real-time data pipelines that operate at scale in production environments serving real customers. You will join a senior, cross-functional platform team responsible for the end-to-end data flow: ingestion, processing, enrichment, anomaly detection, and storage. You will own both architecture and delivery, collaborating with Product Managers to translate requirements into robust, scalable solutions and defining guardrails for data usage, cost control, and tenant isolation. The platform is evolving from distributed, product-specific flows to a centralized, multi-region, highly observable system designed for rapid growth, advanced analytics, and future AI-driven capabilities. Strong ownership, deep technical expertise, and a clean-code mindset are essential.
Key Responsibilities:
- Design, build, and maintain high-throughput, low-latency data pipelines handling large volumes of telemetry.
- Develop real-time streaming solutions using Kafka and modern stream-processing frameworks (Flink, Spark, Beam, etc.).
- Contribute to the architecture and evolution of a large-scale, distributed, multi-region data platform.
- Ensure data reliability, fault tolerance, observability, and performance in production environments.
- Collaborate with Product Managers to define requirements and translate them into scalable, safe technical solutions.
- Define and enforce guardrails for data usage, cost optimization, and tenant isolation within a shared platform.
- Actively participate in system monitoring, incident troubleshooting, and pipeline performance optimization.
- Own end-to-end delivery: design, implementation, testing, deployment, and monitoring of data platform components.
Required Competence and Skills:
- 5+ years of hands-on experience in Big Data or large-scale data engineering roles.
- Strong programming skills in Java or Python, with a willingness to work in Java and adopt frameworks such as Vert.x or Spring.
- Proven track record of building and operating production-grade data pipelines at scale.
- Solid knowledge of streaming technologies such as Kafka, Kafka Streams, Flink, Spark, or Apache Beam.
- Experience with cloud platforms (AWS, Azure, or GCP) and designing distributed, multi-region systems.
- Deep understanding of production concerns: availability, data loss prevention, latency, and observability.
- Hands-on experience with data stores such as ClickHouse, PostgreSQL, MySQL, Redis, or equivalents.
- Strong system design skills, with the ability to reason about trade-offs, scalability challenges, and cost efficiency.
- Clean code mindset, solid OOP principles, and familiarity with design patterns.
- Experience with AI-first development tools (e.g., GitHub Copilot, Cursor) is a plus.
Nice to Have:
- Experience designing and operating globally distributed, multi-region data platforms.
- Background in real-time analytics, enrichment, or anomaly detection pipelines.
- Exposure to cost-aware data architectures and usage guardrails.
- Experience in platform or infrastructure teams serving multiple products.
Why Us?
- We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
- We provide full accounting and legal support in all countries in which we operate.
- We offer a fully remote work model, with a powerful workstation provided and access to a co-working space if you need it.
- We offer a highly competitive package with yearly performance and compensation reviews.