Company Overview
UTR Planning Tech builds the data infrastructure that powers labor planning across 12 Amazon last-mile and sort-center business lines. Our pipelines feed the planning systems that determine how Amazon staffs its delivery network, serving hundreds of sites and processing millions of data points daily.
Key Responsibilities
- Design and own logical and physical data models for major datasets in the team's architecture
- Build and optimize ETL pipelines for complex datasets using AWS services and Python-based orchestration
- Design and build configuration-driven data frameworks that replace repetitive custom code with reusable, declarative patterns
- Build AI agent tooling and MCP (Model Context Protocol) interfaces that allow conversational agents to generate SQL, validate configurations, manage data quality rules, and execute pipeline operations through natural language
- Own ongoing data quality for datasets you build and establish data certification processes
- Improve self-service access to data and build analytical data models and tooling
- Improve engineering processes by automating manual operations, establishing monitoring and alerting standards, and driving code quality and dependency management practices
- Mentor engineers and interns, and participate in the interview process
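To give a flavor of the configuration-driven frameworks described above, here is a minimal sketch of the pattern: dataset-specific logic is expressed as declarative configuration, and a single generic runner interprets it. All names here are hypothetical illustrations, not the team's actual framework.

```python
# Sketch of a configuration-driven transform framework (hypothetical names).
# Each dataset is described declaratively; one generic runner interprets the
# config instead of each dataset carrying its own custom pipeline code.

TRANSFORMS = {
    # Rename columns according to a {old_name: new_name} mapping.
    "rename": lambda rows, spec: [
        {spec.get(k, k): v for k, v in row.items()} for row in rows
    ],
    # Keep only rows where a column equals a given value.
    "filter": lambda rows, spec: [
        row for row in rows if row.get(spec["column"]) == spec["equals"]
    ],
}

def run_pipeline(rows, config):
    """Apply each configured step, in order, to the input rows."""
    for step in config["steps"]:
        rows = TRANSFORMS[step["op"]](rows, step["spec"])
    return rows

# A dataset-specific pipeline expressed purely as data:
config = {
    "steps": [
        {"op": "filter", "spec": {"column": "site_type", "equals": "sort_center"}},
        {"op": "rename", "spec": {"site_id": "location_id"}},
    ]
}

rows = [
    {"site_id": "A1", "site_type": "sort_center"},
    {"site_id": "B2", "site_type": "delivery_station"},
]
result = run_pipeline(rows, config)
# result == [{"location_id": "A1", "site_type": "sort_center"}]
```

Adding a new dataset in this style means writing a new config block, not new code, which is what makes the pattern reusable and declarative.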
Requirements
- 3+ years of data engineering experience
- 1+ years of experience developing and operating large-scale data pipelines for business intelligence and analytics using ETL/ELT processes, OLAP technologies, and SQL
- Experience with data modeling and data warehousing
Benefits
- Competitive salary and equity compensation
- Comprehensive health insurance (medical, dental, vision)
- 401(k) matching
- Paid time off and parental leave
- Employee Assistance Program and mental health support