Red Oak Technologies

Senior Software Engineer

Hybrid

Bellevue, United States

$80/hour

Senior

Freelance

14-12-2025


Skills

Communication, Python, Java, C#, Big Data, Data Warehousing, Data Governance, Data Engineering, Apache Spark, CI/CD, DevOps, Monitoring, Problem-solving, Decision-making, Architecture, Programming, Azure, AWS, Cost Management, Cloud Analytics, GCP, CI/CD Pipelines, Databricks, Unity Catalog, Kafka, Flink

Job Specifications

Senior Software Engineer (Data Engineering + Software Engineering)

Location: Bellevue, WA (Hybrid / Onsite)

Type: Full-Time / Long-Term Project

We are seeking a Senior Software Engineer with a rare combination of data engineering depth and strong software engineering fundamentals. This role supports the buildout of large-scale, cloud-native, data-centric systems for a next-generation SaaS platform. You will design data pipelines, construct semantic models, develop distributed systems, and drive engineering best practices across the platform.

If you are excited about the role but don’t meet every single requirement, we encourage you to apply—your experience may still be a great fit.

About the Role

This is a hands-on, high-impact engineering position. The ideal candidate is:

Strong in OOP programming (C#, Java, or Python)
Highly experienced with Azure Databricks
Fluent in Delta Live Tables (DLT) pipeline development
Confident in explaining and designing semantic data models
Familiar with CI/CD and DevOps environments
Skilled in building and optimizing cloud-scale, distributed systems
Able to work autonomously with minimal direction

You will influence platform architecture, ensure data quality at scale, and contribute to high-throughput, multi-cloud capabilities.

Key Responsibilities

Design, develop, and own large-scale data processing and ingestion pipelines using Azure Databricks, including Delta Live Tables (DLT) for both batch and streaming workloads.
Build high-performance, distributed services using C#, Java, or Python.
Create, explain, and optimize semantic data models, including business logic, relationships, dependencies, and performance considerations.
Develop scalable, maintainable data systems using cloud-native services on Azure, AWS, or GCP.
Implement and maintain CI/CD pipelines in collaboration with DevOps teams.
Apply engineering best practices across performance optimization, cost management, monitoring, and secure design.
Collaborate with cross-functional teams on design, architecture, and delivery of new features.
Troubleshoot and resolve complex issues in distributed systems, cloud infrastructure, and data pipelines.

Required Qualifications

5+ years of professional software engineering experience building production-grade systems.
Strong programming skills in C#, Java, or Python (OOP required).
Hands-on Azure Databricks experience, including:
Databricks environments
Ingestion framework development
Delta Lake
Delta Live Tables (DLT) pipeline design
Ability to walk through and articulate semantic models you have built.
Experience with CI/CD pipelines and collaboration with DevOps teams.
Experience with distributed systems, large-scale data processing, or cloud-scale services.
3+ years working with Azure or AWS cloud services.
Strong data modeling, ETL/ELT pipeline development, and data engineering fundamentals.
Proven ability to deliver scalable systems with high performance and reliability.
Strong debugging, problem-solving, and architecture decision-making skills.

Preferred Qualifications

Experience with Apache Spark, Kafka, Flink, EventHub, or similar technologies.
Experience with Databricks Unity Catalog or Microsoft Fabric.
Background in data governance, lineage, observability, or data quality frameworks.
Experience designing ingestion frameworks, monitoring pipelines, and data reliability systems.
Familiarity with big data analytics, data warehousing, or ML/AI pipelines.
Excellent communication skills for both technical and non-technical audiences.
Prior experience at large cloud platform companies is a plus.

What We’re Prioritizing

To be highly competitive for this role, candidates should have:

Hands-on Azure Databricks + DLT pipeline experience
Strong OOP programming (C#, Java, or Python)
CI/CD and DevOps collaboration skills
Data platform or data integration engineering
Ability to confidently walk through semantic modeling work

About the Company

Red Oak Technologies has been serving customers for nearly 30 years, providing IT Managed Services and Talent Solutions customized for each client. We solve tech problems.

Talent Solutions: We leverage our extensive network and industry knowledge to shape how companies engage external talent partners and manage their workforce through Managed Resource Programs, Talent Engagements, and Direct Hire programs.

Managed Services: We provide complete management and oversight of your outcome and deliverable base...