Principal Engineer - Data Platform

Balbix

Bengaluru, Karnataka, India
Posted on Mar 31, 2026
About Safe
At Safe Security, we are building Cyber Super Intelligence (CSI), a next-generation system of intelligence that autonomously predicts, detects, and remediates cyber threats. We operate with radical transparency, autonomy, and accountability; there is no room for brilliant jerks. We embrace a culture-first approach, offering an unlimited vacation policy, a high-trust work environment, and a commitment to continuous learning. You'll join a team that values ownership, growth, technical rigour, and impact. For us, Culture is Our Strategy; check out our Culture Memo to dive deeper into what makes SAFE unique.

Role Overview

    As a Principal Engineer – Data Platform, you will drive the next wave of architectural direction and foundational data capabilities that power Safe’s multi-tenant data platform, analytics, and intelligence systems.

    You will partner with engineering leadership, product, and cross-functional teams to define, build, and evolve the core data systems that allow Safe to scale securely and reliably.
    You won't just execute; you'll lead and mentor, influence technical direction across the org, and champion best practices in data architecture, lakehouse design, scalability, reliability, and observability.

Key Responsibilities

    • Architect & Lead Data Platform Strategy
    Drive the long-term vision for Safe’s data platform: lakehouse architecture, open table formats (Apache Iceberg), data ingestion frameworks, streaming pipelines, and data serving layers.

    Evaluate alternative architectures, lead design reviews, and ensure consistency across solutions.

    • Operational Excellence & Scalability
    Ensure data systems operate at high performance with strong guarantees on data freshness, accuracy, and availability.

    Lead efforts in performance tuning, large-scale data handling (billions of records), cost efficiency, and capacity planning.

    • Cross-cutting “Horizontal” Ownership
    Lead horizontal capabilities such as data ingestion, data modeling, streaming pipelines, data quality, lineage, and data observability.

    Drive self-serve data platform capabilities for internal teams.

    • Drive Engineering Standards & Best Practices
    Establish best practices for data modeling, schema evolution, partitioning, compaction, and pipeline design.

    Ensure strong data quality, testing, and reliability standards across the platform.
    Mentor senior and staff engineers and elevate overall technical rigor in data systems.

    • Collaboration & Influence
    Work closely with Product, AI, Security, and Platform leadership to align data architecture with business goals.

    Clearly articulate trade-offs, constraints, and design decisions.

    • End-to-End Ownership
    From ingestion to transformation to serving — own critical data flows end-to-end and ensure production-grade reliability.

    Guide teams through complex data challenges and maintain robustness in production systems.

Must-Have Qualifications

    • Experience:
    10+ years in software/data engineering, including 4+ years as a senior/lead/principal engineer in data platform, backend, or infrastructure systems.

    • Lakehouse & Iceberg Expertise:
    Deep hands-on experience with Apache Iceberg (mandatory) and modern lakehouse architectures.

    Strong understanding of partitioning strategies, schema evolution, compaction, snapshotting, and large-scale table optimization.

    • Distributed Data Systems:
    Proven track record designing and building large-scale data pipelines, including batch and streaming systems, event-driven architectures, and data ingestion frameworks.

    • Strong Language Skills:
    Expert proficiency with Python, Go, or TypeScript (or equivalent); familiarity with multiple languages is a plus.

    • Storage & Messaging:
    Deep experience with data lakes (S3) and systems like Kafka, Spark, Flink, or equivalent processing frameworks.

    • Cloud & Infra:
    Hands-on experience with AWS (or equivalent), containerization (Docker), orchestration (ECS/Kubernetes), and IaC (Terraform/CloudFormation).

    • Observability & Reliability:
    Expertise in data observability, pipeline monitoring, data quality systems, SLAs, and failure recovery mechanisms.

    • Security & Multi-Tenancy:
    Strong understanding of data isolation, governance, access control, and secure data design in multi-tenant systems.

    • Leadership & Communication:
    Excellent written and verbal communication. Comfortable influencing cross-functional stakeholders across geographies.

    • Problem-Solving & Judgement:
    Strong fundamentals in system design, trade-off analysis, and building scalable data systems.

Preferred / Nice-to-Have

    • Experience building B2B SaaS data platforms at scale
    • Exposure to AI/ML pipelines, feature stores, or vector databases
    • Experience with real-time analytics and streaming systems
    • Experience in developer-facing data platforms (self-serve data, internal tooling)
    • Exposure to Snowflake or similar analytical warehouses
    • Experience in regulated or security-sensitive environments (ISO 27001, SOC2)