Notion

Software Engineer, Enterprise Data Platform

Hybrid

San Francisco, United States

$300,000 / year

Mid-level

Full Time

27-11-2025


Skills

Python, Java, Scala, SQL, Spark, Kafka, Databricks, Unity Catalog, Data Engineering, Databases, Analytics, Encryption, Incident Response, Kubernetes, AWS, GCP, Azure, ML Training

Job Specifications

About Us

Notion helps you build beautiful tools for your life’s work. In today's world of endless apps and tabs, Notion provides one place for teams to get everything done, seamlessly connecting docs, notes, projects, calendar, and email—with AI built in to find answers and automate work. Millions of users, from individuals to large organizations like Toyota, Figma, and OpenAI, love Notion for its flexibility and choose it because it helps them save time and money.

In-person collaboration is essential to Notion's culture. We require all team members to work from our offices on Mondays and Thursdays, our designated Anchor Days. Certain teams or positions may require additional in-office workdays.

About The Role

Join Notion’s Data Platform team as we scale our infrastructure for enterprise customers. You’ll help design and build the core data platform that powers Notion’s AI, analytics, and search while meeting stringent security, privacy, and compliance requirements. This role focuses on the data platform layer (storage, compute, pipelines, governance) and partners closely with Security, Search Platform, AI, and Data Engineering.

What You'll Do

Design and evolve the data lakehouse

Build and operate core lakehouse components (e.g., Iceberg/Hudi/Delta tables, catalogs, schema management) that serve as the source of truth for analytics, AI, and search.
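For a flavor of what this involves, here is a minimal sketch in PySpark of creating and evolving an Iceberg table; the catalog name, warehouse path, and schema are illustrative assumptions, not Notion's actual setup.

```python
# Minimal sketch: creating and evolving an Iceberg table with PySpark.
# Catalog/table names are hypothetical; assumes the Iceberg Spark runtime
# is on the classpath and a catalog named "lakehouse" is configured.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# The table is the source of truth; its schema lives in the catalog, not the files.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.analytics.page_events (
        event_id   STRING,
        user_id    STRING,
        event_type STRING,
        ts         TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(ts))
""")

# Schema evolution is a metadata-only operation in Iceberg.
spark.sql("ALTER TABLE lakehouse.analytics.page_events ADD COLUMN region STRING")
```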

Own critical data pipelines and services

Design, implement, and harden batch and streaming pipelines (Spark, Kafka, EMR, etc.) that move and transform data reliably across regions and cells.
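As a hedged illustration, a Structured Streaming job that moves Kafka records into a lakehouse table might look roughly like this; the topic, schema, and paths are placeholders invented for the sketch, and the Kafka and Iceberg connector packages are assumed to be available.

```python
# Sketch of a streaming pipeline: Kafka -> Spark -> Iceberg table.
# Topic, schema, and checkpoint paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "page-events")          # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers bytes; parse the JSON value into typed columns.
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("iceberg")                           # Iceberg streaming sink
    .outputMode("append")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/page-events")
    .toTable("lakehouse.analytics.page_events")
)
query.awaitTermination()
```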

Advance EKM and encryption-by-design

Work with Security and platform teams to integrate Enterprise Key Management (EKM) into data workflows, including file- and record-level encryption and safe key handling in Spark and storage systems.
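A common pattern underlying EKM-style designs is envelope encryption: each record is encrypted with a data key, and the data key is itself wrapped by a customer-managed master key that never leaves the key-management boundary. Below is a minimal sketch using Python's cryptography library; the `kms` client is a hypothetical stand-in, not a real API.

```python
# Sketch of envelope encryption for record-level protection.
# `kms` is a hypothetical stand-in for an Enterprise Key Management client;
# a real system would call AWS KMS or an external key manager to wrap keys.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(plaintext: bytes, kms) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)   # per-record data key
    nonce = os.urandom(12)                            # 96-bit nonce for AES-GCM
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    return {
        "ciphertext": ciphertext,
        "nonce": nonce,
        # Only the *wrapped* data key is stored alongside the data; the
        # customer's master key never leaves the key-management boundary.
        "wrapped_key": kms.wrap(data_key),
    }

def decrypt_record(record: dict, kms) -> bytes:
    # Unwrapping fails if the customer has revoked access to their key.
    data_key = kms.unwrap(record["wrapped_key"])
    return AESGCM(data_key).decrypt(record["nonce"], record["ciphertext"], None)
```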

Improve data access, auditability, and residency

Build primitives for fine-grained access control, auditing, and data residency so customers can see who accessed what, where, and under which guarantees.
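A hedged sketch of one such primitive: every data access emits an immutable audit record capturing who, what, and where. The field names below are assumptions for illustration, not Notion's actual audit schema.

```python
# Sketch of an access-audit event; field names are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AccessAuditEvent:
    actor: str          # who accessed the data
    resource: str       # what was accessed (table, file, record set)
    action: str         # read / write / export
    region: str         # where the access was served (residency guarantee)
    occurred_at: str    # RFC 3339 timestamp

def record_access(actor: str, resource: str, action: str, region: str) -> str:
    event = AccessAuditEvent(
        actor=actor,
        resource=resource,
        action=action,
        region=region,
        occurred_at=datetime.now(timezone.utc).isoformat(),
    )
    # A real platform would append this to an immutable audit log;
    # here we just serialize it.
    return json.dumps(asdict(event))

print(record_access("svc-search", "lakehouse.analytics.page_events", "read", "us-west-2"))
```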

Drive reliability and observability

Raise the operational bar for our data stack: improve on-call experience, debugging, and alerting for data jobs and services.
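One hedged example of what this can look like in practice is a freshness check that alerts with enough context to debug quickly; the SLO, table name, and alert sink below are invented for the sketch.

```python
# Sketch: a freshness check that pages when a table falls behind its SLO.
# The threshold is an illustrative assumption; `alert` is a stub callback.
from datetime import datetime, timedelta, timezone

FRESHNESS_SLO = timedelta(hours=2)   # assumed SLO, not a Notion figure

def check_freshness(last_commit_at: datetime, alert) -> None:
    lag = datetime.now(timezone.utc) - last_commit_at
    if lag > FRESHNESS_SLO:
        # Fire an alert with enough context to start debugging immediately.
        alert(f"page_events is {lag} behind (SLO {FRESHNESS_SLO}); "
              f"last commit at {last_commit_at.isoformat()}")
```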

Optimize large-scale performance and cost

Tackle performance and cost challenges across Kafka, Spark, and storage for very large workspaces (20k+ users, multi-cell deployments), including cluster migrations and workload tuning.
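To give a hedged sense of the knob-turning involved, here are representative Spark settings touched during workload tuning; the values are illustrative placeholders, and real tuning is driven by profiling rather than defaults.

```python
# Sketch: representative Spark settings adjusted during workload tuning.
# Values are illustrative placeholders, not recommendations.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Adaptive query execution re-plans joins and partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Shuffle partition count sized to the data, not the default of 200.
    .config("spark.sql.shuffle.partitions", "2000")
    # Executor sizing balanced against node shape and GC behavior.
    .config("spark.executor.memory", "16g")
    .config("spark.executor.cores", "4")
    .getOrCreate()
)
```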

Enable ML and search workflows

Build infrastructure to support training and inference pipelines, ranking workflows, and embedding infrastructure on top of the shared data platform.
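A hedged sketch of what embedding infrastructure on a shared lakehouse can mean: a batch job that reads documents from a table, computes embeddings, and writes them back. The model call is a stub and the table names are invented.

```python
# Sketch: batch embedding job over a lakehouse table.
# `embed_batch` stands in for a real model endpoint; table names are invented.
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, FloatType
import pandas as pd

spark = SparkSession.builder.appName("embeddings-sketch").getOrCreate()

@pandas_udf(ArrayType(FloatType()))
def embed_batch(texts: pd.Series) -> pd.Series:
    # Placeholder: a real job would call a model server here.
    return texts.map(lambda t: [float(len(t)), 0.0])

docs = spark.read.table("lakehouse.search.documents")
embedded = docs.withColumn("embedding", embed_batch(docs["body"]))
# DataFrameWriterV2 targets a v2 catalog such as Iceberg.
embedded.writeTo("lakehouse.search.document_embeddings").createOrReplace()
```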

Shape the platform roadmap

Contribute to design docs and evaluations that influence our long-term platform direction and vendor choices.

Skills You'll Need

Experience: 5+ years building and operating data platforms or large-scale data infrastructure for SaaS or similar environments.
Programming: Strong skills in at least one of Python, Java, or Scala; comfortable working with SQL for analytics and data modeling.
Distributed data systems: Hands-on experience with Spark or similar distributed processing systems, including debugging and performance tuning.
Streaming & ingestion: Experience with Kafka or equivalent streaming systems; familiarity with CDC/ingestion patterns (e.g., Debezium, Fivetran, custom connectors).
Lakehouse / storage: Experience with data lakes and table formats (Iceberg, Hudi, or Delta) and/or data catalogs and schema evolution.
Security & governance: Practical understanding of access control, encryption at rest/in transit, and auditing as they apply to data platforms.
Cloud infrastructure: Experience with at least one major cloud provider (AWS, GCP, or Azure) and managed data/compute services (e.g., EMR, Dataproc, Kubernetes-based compute).
Operations: Comfortable owning services and pipelines in production, including on-call, incident response, and reliability improvements.

Nice To Haves

Experience working directly with enterprise customers or on features like data residency, EKM, or compliance-driven auditing.
Prior work on Databricks, Unity Catalog, Lake Formation, or similar catalog/governance systems.
Background implementing multi-region / multi-cell data architectures.
Experience building ML training/eval workflows or model/feature stores on top of a shared data platform.
Familiarity with vector databases or search infrastructure, and how they integrate with upstream data systems.
Experience designing or improving observability for data platforms (e.g., Honeycomb, OpenTelemetry, metrics/trace-heavy debugging).

Our customers come from all walks of life and so do we. We hire great people from a wide variety of backgrounds, not just because it's the right thing to do, but because it makes our company stronger. If you share our values and our enthusiasm for our mission, you will find a home at Notion.

Notion is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, sex, national origin, age, disability, or any other characteristic protected by law.

About the Company

Notion blends your everyday work tools into one. Product roadmap? Company wiki? Meeting notes? With Notion, they're all in one place, and totally customizable to meet the needs of any workflow. It's the all-in-one workspace for you, your team, and your whole company. We humans are toolmakers by nature, but most of us can't build or modify the software we use every day, arguably our most powerful tool. Our team at Notion is on a mission to make it possible for everyone to shape the tools that shape their lives.