Senior Data Engineer (IC)

Introduction

A career in IBM Consulting is built on long-term client relationships and close collaboration worldwide. You’ll work with leading companies across industries, helping them shape their hybrid cloud and AI journeys. With support from our strategic partners, robust IBM technology, and Red Hat, you’ll have the tools to drive meaningful change and accelerate client impact. At IBM Consulting, curiosity fuels success. You’ll be encouraged to challenge the norm, explore new ideas, and create innovative solutions that deliver real results. Our culture of growth and empathy focuses on your long-term career development while valuing your unique skills and experiences.

Your role and responsibilities

We are seeking a seasoned Azure Data Architect to lead our data modernization initiatives, leveraging Microsoft Fabric and the modern Azure Data Stack. The ideal candidate will design end-to-end data solutions, transforming legacy data architectures into high-performance lakehouse/mesh architectures. You will define best practices for data ingestion, processing, and governance while acting as the technical authority for our data engineering teams. 


Key Responsibilities

1. Architectural Design & Strategy (Microsoft Fabric Focus)

  • Define and implement end-to-end data architecture strategies covering Data Lakes, Warehouses, and Lakehouses using Microsoft Fabric (OneLake, Lakehouse, Direct Lake).
  • Architect modern data integration patterns, prioritizing Fabric Dataflows Gen2, Fabric Data Pipelines, and Azure Data Factory (ADF).
  • Design and implement Data Mesh architectures, focusing on domain-driven design and reusable data products. 


2. Data Engineering & Pipeline Development

  • Design high-performance, scalable ETL/ELT pipelines using PySpark, SQL, and Databricks.
  • Implement Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing.
  • Architect real-time and event-driven ingestion patterns using Fabric Event Streams, KQL, and Kafka.
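As a rough illustration of the Medallion Architecture mentioned above, the sketch below models the Bronze → Silver → Gold flow in plain Python. The record fields (`order_id`, `region`, `amount`) are hypothetical; in a real Fabric or Databricks solution each layer would be a Delta table processed with PySpark rather than Python lists.

```python
# Illustrative Medallion flow: Bronze (raw) -> Silver (cleaned) -> Gold (aggregated).
# Plain-Python stand-in for what would be Delta tables in Fabric or Databricks.

def to_silver(bronze_rows):
    """Clean raw Bronze records: drop rows missing a key, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # reject malformed records at the Silver layer
        silver.append({
            "order_id": int(row["order_id"]),
            "region": str(row.get("region", "unknown")).lower(),
            "amount": float(row.get("amount", 0.0)),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned Silver records into a business-ready summary per region."""
    gold = {}
    for row in silver_rows:
        gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]
    return gold

bronze = [
    {"order_id": "1", "region": "EMEA", "amount": "100.0"},
    {"order_id": None, "region": "APAC", "amount": "50.0"},  # malformed: filtered out
    {"order_id": "2", "region": "emea", "amount": "25.5"},
]
silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'emea': 125.5}
```

Each layer refines the one before it: Bronze keeps raw, untyped input; Silver enforces schema and quality rules; Gold produces the aggregates consumers query.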

3. Governance, Security, and Optimization

  • Establish governance frameworks for OneLake, including RBAC, data lineage, and naming conventions.
  • Implement data cataloging, sensitivity labeling, and security using Microsoft Purview.
  • Optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability (including cluster management in Databricks). 


4. Leadership & Consulting

  • Lead client workshops, requirements-discovery sessions, and modern data stack roadmap planning.
  • Mentor data engineers on best practices for Spark/PySpark development and Azure DevOps CI/CD implementation.
  • Support pre-sales activities, including effort estimation and technical proposals. 


Required technical and professional expertise

Required Technical Skills & Experience

  • Azure Services: Deep expertise in Microsoft Fabric, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS Gen2), Azure Databricks, and Azure Synapse Analytics.
  • Languages: Strong proficiency in Python/PySpark and SQL.
  • Modeling: Experience with Dimensional Modeling, Data Warehousing, and Delta Lake concepts.
  • Microsoft Fabric: Hands-on experience with Dataflows Gen2, OneLake, and Notebooks.
  • DevOps: Experience in CI/CD pipelines (Azure DevOps/GitHub). 

Preferred technical and professional experience

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Preferred Certification: Microsoft Certified: Fabric Analytics Engineer Associate (DP-600) or Azure Data Engineer Associate (DP-203).
