Dew Software

Data Engineering Lead

Hybrid

Toronto, Canada

Senior

Freelance

28-10-2025

Skills

Communication, Leadership, Python, SQL, Data Governance, Data Engineering, Apache Airflow, CI/CD, DevOps, Monitoring, Version Control, Jenkins, Azure Data Factory, Agile methodologies, Problem-solving, Machine Learning, Git, Azure, AWS, Analytics, GCP, Snowflake, Talend, PySpark

Job Specifications

Job Title: Data Engineering Lead

Location: Toronto, Ontario, Canada (Hybrid)

Experience Level: 8–10 Years

Employment Type: Contract - W2 Only

Position Overview

We are seeking a highly skilled and motivated Data Engineering Lead to design, build, and optimize scalable data pipelines and architectures for enterprise analytics and business intelligence platforms. The ideal candidate will have strong hands-on experience in Python, PySpark, SQL, Snowflake, and modern ETL/Data Integration tools, with a solid understanding of data quality, governance, and performance optimization.

This role requires both technical expertise and leadership ability to guide cross-functional teams in implementing robust data engineering solutions.

Key Responsibilities

Lead the end-to-end design and implementation of scalable data pipelines and ETL workflows for batch and real-time processing.
Develop, optimize, and maintain data models and data warehouse solutions on Snowflake or similar cloud data platforms.
Design and implement efficient PySpark and Python-based data transformations to handle large-scale structured and semi-structured data.
Collaborate with data analysts, data scientists, and business stakeholders to ensure high-quality, accessible, and trusted data.
Integrate data from multiple sources (on-prem, cloud, APIs) using ETL and data integration tools such as Informatica, Talend, Azure Data Factory, or similar.
Define and implement data quality checks, data validation frameworks, and exception handling mechanisms.
Manage and optimize data storage, partitioning, and query performance to meet SLAs.
Establish best practices for data engineering, version control, CI/CD for data pipelines, and monitoring solutions.
Provide technical leadership, code reviews, and mentorship to junior data engineers.
Ensure compliance with data governance, security, and privacy standards.

Required Skills and Qualifications

8–10 years of experience in Data Engineering, with proven expertise in Python, PySpark, and SQL.
Strong experience with Snowflake (or equivalent data warehouse platforms such as Redshift, BigQuery, or Synapse).
Proficiency in ETL development and data integration tools (e.g., Informatica, Talend, DataStage, or Azure Data Factory).
Hands-on experience building data pipelines on cloud environments (AWS, Azure, or GCP).
Deep understanding of data quality frameworks, data validation, and data governance principles.
Solid experience with data modeling, performance tuning, and query optimization.
Experience implementing CI/CD and version control (Git, Jenkins) in data projects.
Excellent problem-solving, communication, and leadership skills.
Bachelor's or Master's degree in Computer Science, Information Systems, or related field.

Preferred Qualifications

Experience with orchestration tools such as Apache Airflow or Azure Data Factory.
Knowledge of data cataloging and lineage tools (e.g., Collibra, Alation).
Exposure to machine learning data pipelines or data lake architectures.
Familiarity with Agile methodologies and DevOps practices for data engineering.

About the Company

At Dew Software, we are a leading player in the Digital Transformation space, empowering businesses to thrive in the rapidly evolving digital landscape. With over 25 years of industry expertise, we deliver innovative solutions and services to Fortune 500 companies, driving their growth and success. As a CMMi Level 3 and ISO certified organization, we are committed to excellence, quality, and customer satisfaction. Our robust processes and stringent quality standards ensure that we deliver exceptional outcomes for our client...