Data Engineer, ShipTech Analytics
Amazon · Hyderabad, India · Business Intelligence
About this role
Amazon is hiring a mid-level Data Engineer based in Hyderabad, India. The posting calls out experience with Python, Scala, SQL, and AWS.
- Role: Data Engineer
- Function: Data engineering
- Level: Mid
- Track: Individual contributor
- Employment: Full-time
- Location: Hyderabad, India
- Department: Business Intelligence
- Posted: May 16, 2026
AI Summary
Design and build scalable near-real-time streaming and batch data pipelines for Amazon's transportation network. Write code for data ingestion, processing, and storage while partnering with operations teams. Own data quality, collaborate on AI-powered automation, and participate in on-call support.
Job description
Join ShipTech Analytics (STA) as a Data Engineer to build the data and analytics backbone powering Amazon's global transportation network. You'll contribute to our vision of an intelligent, autonomous, and scalable performance management and decision support system across Amazon Transportation.
As a Data Engineer I, you'll work on building standardized data solutions that empower operations teams worldwide. You'll develop both near-real-time (NRT) streaming pipelines and batch processing pipelines that deliver critical insights for decision-making. You'll also build data tools that enable self-service analytics and automate operational tasks, while contributing to innovative AI-powered solutions delivering automated insights across the transportation network.
Key job responsibilities
- Design and build scalable near-real-time (NRT) streaming and batch data pipelines supporting Amazon's global transportation network.
- Build data products and tools that streamline the complete data lifecycle: ingestion, processing, backfill, storage, vending, and operational support.
- Partner closely with stakeholders across operations and analytics teams to understand requirements, design solutions, and deliver datasets and tools that enable faster, more informed decision-making.
- Own data quality and implement enhancements to datasets that enable operational excellence and improve the customer experience.
- Collaborate with cross-functional teams to standardize analytics capabilities and build AI-powered automation.
- Leverage AWS cloud technologies to transform raw data into actionable metrics.
- Drive continuous improvement through code reviews, design discussions, operational best practices, and security compliance.
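To make the pipeline work above concrete, here is a minimal, illustrative sketch of the ingestion-and-transform step of an NRT consumer. It is not the team's actual code: the event shape follows the standard Kinesis-to-Lambda record format, but the field names (`route`, `status`, `delay_minutes`) and functions are hypothetical, and the AWS write step is left out so the example stays self-contained.

```python
import base64
import json


def decode_kinesis_records(event):
    """Decode the base64-encoded payloads of a Kinesis-style Lambda
    event into Python dicts, skipping malformed records."""
    shipments = []
    for record in event.get("Records", []):
        try:
            payload = base64.b64decode(record["kinesis"]["data"])
            shipments.append(json.loads(payload))
        except (KeyError, ValueError):
            continue  # a real pipeline would count and log bad records
    return shipments


def to_metric_rows(shipments):
    """Transform raw shipment events into flat metric rows, ready for
    a storage/vending layer (e.g. Firehose to S3 in an AWS setup)."""
    return [
        {
            "route": s["route"],
            "status": s["status"],
            "delay_minutes": max(0, s.get("delay_minutes", 0)),
        }
        for s in shipments
        if "route" in s and "status" in s
    ]


if __name__ == "__main__":
    # Simulate one Kinesis record carrying a shipment event.
    raw = base64.b64encode(
        json.dumps(
            {"route": "HYD-1", "status": "delivered", "delay_minutes": 7}
        ).encode()
    ).decode()
    event = {"Records": [{"kinesis": {"data": raw}}]}
    print(to_metric_rows(decode_kinesis_records(event)))
```

Keeping the decode and transform steps as pure functions makes them easy to unit test and to reuse between the streaming path and a batch backfill path.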
A day in the life
You work backwards from what stakeholders need, clarifying ambiguities and building data products and tools. Throughout the day, you're writing code for data pipelines and, alongside software engineers, building tools that handle ingestion, processing, storage, and vending.
Some days you're in design discussions about data infrastructure, pipelines, and tools.
Beyond coding, you're documenting datasets, tools, and runbooks, and participating in code reviews. When on call, you monitor pipeline health.
What makes the role interesting is the variety: you partner with stakeholders, TPMs, and SDEs to build scalable data pipelines and tools that directly affect how Amazon's transportation network operates.
About the team
ShipTech Analytics serves as the data and analytics backbone for Amazon's global transportation network. The team builds standardized data solutions that streamline analytics and enable faster decision-making for operations teams worldwide. Team members collaborate with cross-functional partners to deliver data products and tools.
The team is modernizing systems, building near-real-time architectures, and developing AI-powered automation. Team members have opportunities to mentor others, prototype new ideas, and shape the future of data analytics. The team values collaboration across geographies, with members in multiple locations working together to serve global operations.
Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more scripting languages (e.g., Python, KornShell, Scala)
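As a flavor of the SQL and ETL work these qualifications describe, the sketch below builds a warehouse-style daily aggregate. It uses SQLite purely as a local stand-in for a warehouse such as Redshift, and the table and column names (`shipment_events`, `delay_min`, etc.) are invented for illustration.

```python
import sqlite3

# In-memory SQLite as a stand-in for a data warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shipment_events (
        event_date TEXT,
        route      TEXT,
        status     TEXT,
        delay_min  INTEGER
    )
""")
conn.executemany(
    "INSERT INTO shipment_events VALUES (?, ?, ?, ?)",
    [
        ("2026-05-16", "HYD-1", "delivered", 0),
        ("2026-05-16", "HYD-1", "delivered", 12),
        ("2026-05-16", "HYD-2", "delayed",   45),
    ],
)

# A typical ETL aggregate: on-time rate and average delay per route per day.
rows = conn.execute("""
    SELECT event_date,
           route,
           AVG(CASE WHEN delay_min = 0 THEN 1.0 ELSE 0.0 END) AS on_time_rate,
           AVG(delay_min)                                     AS avg_delay_min
    FROM shipment_events
    GROUP BY event_date, route
    ORDER BY route
""").fetchall()
print(rows)
# -> [('2026-05-16', 'HYD-1', 0.5, 6.0), ('2026-05-16', 'HYD-2', 0.0, 45.0)]
```

The same GROUP BY pattern scales from a laptop prototype to a columnar warehouse; only the connection and dialect details change.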
Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.