Principal Engineer - Data Platform
Balbix
Role Overview
You will partner with engineering leadership, product, and cross-functional teams to define, build, and evolve the core data systems that allow Balbix to scale securely and reliably.
Key Responsibilities
• Architect & Lead Data Platform Strategy
Drive the long-term vision for Balbix’s data platform: lakehouse architecture, open table formats (Apache Iceberg), data ingestion frameworks, streaming pipelines, and data serving layers.
Evaluate alternative architectures, lead design reviews, and ensure consistency across solutions.
• Operational Excellence & Scalability
Ensure data systems operate at high performance with strong guarantees on data freshness, accuracy, and availability.
Lead efforts in performance tuning, large-scale data handling (billions of records), cost efficiency, and capacity planning.
• Cross-cutting “Horizontal” Ownership
Lead horizontal capabilities such as data ingestion, data modeling, streaming pipelines, data quality, lineage, and data observability.
Drive self-serve data platform capabilities for internal teams.
• Drive Engineering Standards & Best Practices
Establish best practices for data modeling, schema evolution, partitioning, compaction, and pipeline design.
Ensure strong data quality, testing, and reliability standards across the platform.
Mentor senior and staff engineers and elevate overall technical rigor in data systems.
• Collaboration & Influence
Work closely with Product, AI, Security, and Platform leadership to align data architecture with business goals.
Clearly articulate trade-offs, constraints, and design decisions.
• End-to-End Ownership
Own critical data flows end-to-end, from ingestion through transformation to serving, and ensure production-grade reliability.
Guide teams through complex data challenges and maintain robustness in production systems.
Must-Have Qualifications
• Lakehouse & Iceberg Expertise:
Deep hands-on experience with Apache Iceberg (mandatory) and modern lakehouse architectures.
Strong understanding of partitioning strategies, schema evolution, compaction, snapshotting, and large-scale table optimization.
• Distributed Data Systems:
Proven track record designing and building large-scale data pipelines, including batch and streaming systems, event-driven architectures, and data ingestion frameworks.
• Strong Language Skills:
Expert proficiency with Python, Go, or TypeScript (or equivalent); familiarity with multiple languages is a plus.
• Storage & Messaging:
Deep experience with data lakes on object storage (S3) and with messaging and processing systems such as Kafka, Spark, and Flink, or equivalent frameworks.
• Cloud & Infra:
Hands-on experience with AWS (or equivalent), containerization (Docker), orchestration (ECS/Kubernetes), and IaC (Terraform/CloudFormation).
• Observability & Reliability:
Expertise in data observability, pipeline monitoring, data quality systems, SLAs, and failure recovery mechanisms.
• Security & Multi-Tenancy:
Strong understanding of data isolation, governance, access control, and secure data design in multi-tenant systems.
• Leadership & Communication:
Excellent written and verbal communication. Comfortable influencing cross-functional stakeholders across geographies.
• Problem-Solving & Judgement:
Strong fundamentals in system design, tradeoff analysis, and building scalable data systems.
Preferred / Nice-to-Have
• Exposure to AI/ML pipelines, feature stores, or vector databases
• Experience with real-time analytics and streaming systems
• Experience in developer-facing data platforms (self-serve data, internal tooling)
• Exposure to Snowflake or similar analytical warehouses
• Experience in regulated or security-sensitive environments (ISO 27001, SOC2)