
Launch Ready

15 analysts, 5,000 products, and a process that took hours — now done in minutes.

automation · supply-chain ops

The weekly grind

At Nike, when a product launches, someone needs to make sure there’s enough inventory in the right stores at the right time. Sounds simple. It’s not.

Launch readiness analysts check incoming shipments across multiple data sources, figure out what’s on track and what’s stuck, and write status comments for each product. These comments go into Airtable so the wider team can act on them. Multiply that by 5,000+ products and 15-20 analysts, and you get a process that ate 4-6 hours per person every week. That’s 60-120 hours of manual work, spent reading spreadsheets and typing summaries.

The AI Accelerator pitch

I got selected for the MSC AI Accelerator at Nike, a program where you work with senior leaders to prototype tools that solve real operational problems. The brief was clear: use AI to fix this.

So I looked at the problem. Analysts were opening multiple Excel files, scanning rows, cross-referencing inventory positions against shipment timelines, and then writing a paragraph summarizing what they saw. It was tedious. But it wasn’t complex. The bottleneck wasn’t understanding — it was data gathering.

Not everything needs AI

This is where it got interesting. Leadership wanted an AI solution. I could’ve built one. But the honest answer was: this problem didn’t need machine learning. It needed plumbing.

The data already existed in structured formats. The comments followed predictable patterns. What analysts were really doing was a series of lookups and conditional logic — the kind of thing a well-built pipeline handles better than any language model.
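That lookup-and-conditional-logic pattern can be sketched in a few lines. This is a hypothetical, simplified version of the kind of rule an analyst applies per product; the function name, fields, and thresholds are illustrative, not the actual business rules:

```python
from datetime import date

def status_comment(product, on_hand, inbound, eta, launch_date, target):
    """Produce a launch-readiness comment from structured shipment data.
    Pure lookups and conditionals: no model needed."""
    projected = on_hand + inbound
    if projected >= target and eta <= launch_date:
        return f"{product}: on track ({projected}/{target} units by {eta})."
    if eta > launch_date:
        late = (eta - launch_date).days
        return f"{product}: at risk, inbound shipment lands {late} days after launch."
    return f"{product}: short {target - projected} units; escalate to supply planning."
```

Run that over every row of the shipment data and you get the same paragraph an analyst would have typed, in milliseconds instead of minutes.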

So I pushed back. Built it with n8n for orchestration, Python and JavaScript for the data logic, and piped everything straight into Airtable where the team already worked.
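The Airtable end of that pipeline is mostly batching. Airtable's REST API accepts up to 10 records per create request, so generated comments get chunked before posting. A minimal sketch, assuming hypothetical field names ("Product", "Status Comment") rather than the real schema:

```python
def to_batches(comments, batch_size=10):
    """Chunk (product, comment) pairs into Airtable-sized record batches.
    Airtable's records endpoint takes at most 10 records per request."""
    records = [
        {"fields": {"Product": product, "Status Comment": comment}}
        for product, comment in comments
    ]
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
```

Each batch then goes out as one POST to the base's records endpoint, which n8n handles natively.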

What changed

The process that used to take 4-6 hours per analyst now runs in about 7-8 minutes. For all 5,000+ products. Across the full analyst team.

That’s not a small improvement — it’s a category change. Analysts went from spending a day on data entry to spending their time on the exceptions that actually needed human judgment. The stuff that matters.

What I learned

The biggest lesson wasn’t technical. It was about judgment. In a room full of people excited about AI, the most useful thing I did was say “we don’t need it here.” The right tool for the job isn’t always the most impressive one. Sometimes it’s a webhook and a for loop.