A large online retailer with thousands of products had a pricing problem. They'd recently raised prices across parts of their catalogue — and watched order volumes fall off a cliff. The increases had been applied broadly, without any understanding of which products customers were price-sensitive about and which they weren't.
The business needed to grow margins. But after the volume drop, the commercial team was afraid to touch prices again. Every pricing decision felt like a gamble — raise too aggressively and lose customers, hold too conservatively and leave money on the table. They had years of transactional data sitting in their warehouse, but no way to turn it into pricing intelligence.
They needed a system that could tell them, before changing a single price, exactly what would happen to order volumes — and which products could quietly absorb an increase without anyone noticing.
What price elasticity actually means
Price elasticity of demand measures how sensitive customers are to a change in price. Formally, it's the percentage change in quantity demanded divided by the percentage change in price. If you raise the price of a product by 1% and order volume drops by 2%, that product has an elasticity of −2 — every percent you add to the price costs you two percent in volume.
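That definition translates directly into code. A minimal sketch in Python, using the midpoint (arc) formula so the result doesn't depend on which of the two price points you treat as the baseline (the numbers here are invented for illustration):

```python
def arc_elasticity(p0, q0, p1, q1):
    """Arc (midpoint) elasticity between two observed (price, quantity) points."""
    pct_change_quantity = (q1 - q0) / ((q0 + q1) / 2)
    pct_change_price = (p1 - p0) / ((p0 + p1) / 2)
    return pct_change_quantity / pct_change_price

# A price rise from 100 to 101 (about +1%) that cuts weekly volume
# from 1000 to 980 (about -2%) gives an elasticity of roughly -2.
print(round(arc_elasticity(100, 1000, 101, 980), 2))
```

The midpoint form matters in practice: with only two observations per test, percentage changes computed against the "before" point alone would give a slightly different elasticity for an increase than for the equivalent decrease.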
The sign is almost always negative (higher prices mean fewer sales), so what matters is the magnitude. An elasticity close to zero means customers barely notice the change — these are your inelastic products, and they're where margin improvements hide. An elasticity with magnitude well above 1 (say −2) means customers are highly responsive — raise the price and they'll switch to a competitor, buy a substitute, or simply stop buying. These are elastic products, and touching their price is risky.
The dividing line at −1 is significant. When the magnitude is below 1 (say an elasticity of −0.5), a price increase actually raises total revenue: you lose some volume, but the higher price more than compensates. When the magnitude is above 1 (say −1.5), a price increase destroys revenue, because the volume loss outweighs the price gain. At exactly −1, revenue stays flat regardless of the price change.
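That revenue arithmetic can be checked directly under a constant-elasticity demand curve, where volume scales with the price ratio raised to the elasticity. A hypothetical sketch (the price and volume figures are invented):

```python
def revenue_after_change(price, volume, pct_price_change, elasticity):
    """Revenue after a price change, assuming constant-elasticity demand:
    volume scales with (new_price / old_price) ** elasticity."""
    new_price = price * (1 + pct_price_change)
    new_volume = volume * (1 + pct_price_change) ** elasticity
    return new_price * new_volume

# Baseline: 100 per unit, 1,000 units a week -> 100,000 revenue.
for e in (-0.5, -1.0, -1.5):
    print(e, round(revenue_after_change(100, 1000, 0.05, e)))
# -0.5: revenue rises above 100,000
# -1.0: revenue stays at 100,000
# -1.5: revenue falls below 100,000
```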
In practice, elasticity isn't a fixed number. It shifts with seasonality, competitor behaviour, customer demographics, and how substitutable the product is. A bottle of a specific branded cleaning product might be inelastic because customers are loyal to the brand. A generic USB cable is elastic because there are twenty identical alternatives one click away. The challenge isn't understanding the theory — it's measuring elasticity accurately enough to act on it.
The cold start problem
Measuring elasticity requires data on what happened when prices actually changed. And that was the problem: this retailer's prices had barely moved in years. Products sat at the same price for long stretches, with promotional discounts being the main exception.
Promotions weren't useful here. A temporary 20% discount triggers different buying behaviour than a permanent price adjustment — customers stock up, buy opportunistically, or respond to urgency. None of that tells you what would happen if you quietly moved the base price up by 3%.
To make things harder, the retailer couldn't run A/B tests on pricing. With thousands of products and customers who compare prices across channels, showing different prices to different people wasn't operationally feasible. And even if they could, seasonality would confound the results — a volume drop in January might be the price change, or it might just be January.
Testing smart, not testing everything
We evaluated three approaches to building the elasticity model despite the sparse data.
Comparing similar products at different prices. The idea was to infer elasticity by looking at how comparable products performed at different price points. It fell apart quickly — brand recognition, packaging, shelf placement, and a dozen other factors meant that "similar" products weren't similar enough. Price wasn't the only variable changing.
Random price testing across the catalogue. The textbook approach: randomly adjust prices across a wide range and measure the response. Statistically optimal, but prohibitively expensive for a business with thousands of SKUs. You can't afford to deliberately misprice your entire catalogue to gather training data.
Category-based sampling. This was the compromise we chose. Rather than testing every product, we selected a small representative sample from each product category — one or two items per range — and ran controlled price changes on those. The assumption: products within the same category share similar elasticity characteristics. A customer's sensitivity to the price of one cordless drill tells you something meaningful about their sensitivity to the price of other cordless drills.
We then built features around the factors that drive elasticity: product category, competitor pricing, historical demand patterns, supply constraints, and quality positioning. The model learned the relationship between these features and the observed volume response to price changes in the test sample — then generalised that understanding across the full catalogue.
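A standard way to estimate elasticity from a category's test sample is a log-log regression: fit log(volume) against log(price), and the slope is the constant-elasticity estimate. The sketch below uses invented numbers and only the price term; the production model also carried the category, competitor, and demand features described above:

```python
import numpy as np

# Hypothetical test-sample observations for one category:
# weekly volumes recorded at five controlled price points.
prices = np.array([10.0, 10.5, 11.0, 11.5, 12.0])
volumes = np.array([500.0, 476.0, 455.0, 436.0, 418.0])

# Log-log regression: log(volume) = intercept + elasticity * log(price),
# so the fitted slope is the constant-elasticity estimate.
X = np.column_stack([np.ones_like(prices), np.log(prices)])
coef, *_ = np.linalg.lstsq(X, np.log(volumes), rcond=None)
elasticity = coef[1]
print(round(elasticity, 2))  # close to -1 for this invented sample
```

Extra features slot in as additional columns of `X`, which is what lets a model trained on the sampled SKUs generalise to untested products that share those characteristics.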
Proving it before betting on it
A model that says "this product has low elasticity" is only useful if the business trusts it enough to act. We needed to prove accuracy before anyone would stake real pricing decisions on it.
We validated against two KPIs. The first was volume prediction accuracy — for each product in the test sample where we'd changed the price, we compared the model's predicted volume change against what actually happened. The predictions had to account for external factors like seasonality and promotional calendars, not just the price movement itself.
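One simple way to score that first KPI is mean absolute percentage error (MAPE) between predicted and observed volumes. A sketch with invented figures:

```python
def mape(actual, predicted):
    """Mean absolute percentage error between observed and predicted volumes."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical weekly volumes for three test-sample SKUs after a price change.
observed = [420, 510, 385]
predicted = [400, 530, 390]
print(f"MAPE: {mape(observed, predicted):.1%}")  # a few percent
```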
The second was total margin, corrected for seasonality. We compared margins across equal-length periods before and after each price change, controlling for seasonal variation. A margin improvement in December doesn't mean much if you're comparing it to a quiet September — so we ensured the comparison windows were fair.
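The margin comparison can be sketched the same way. Here the margins and seasonal index values are invented: the index (1.0 = an average period) deflates each window so a busy December and a quiet September compare on equal footing:

```python
def adjusted_margin(margin, seasonal_index):
    """Deflate a period's margin by its seasonal index (1.0 = average period)."""
    return margin / seasonal_index

before = adjusted_margin(80_000, 0.95)  # quiet pre-change window
after = adjusted_margin(95_000, 1.10)   # busy post-change window
uplift = after / before - 1
print(f"seasonally adjusted margin uplift: {uplift:.1%}")
```

Without the adjustment, the raw numbers would overstate the uplift, since part of the post-change margin is simply the busier season.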
The results gave the commercial team two things they'd never had before: a ranked list of products where prices could be raised with minimal volume impact, and — equally important — a clear signal on which products to leave alone. The model didn't just identify opportunity; it prevented a repeat of the original volume collapse.
The results
The model turned pricing from a source of anxiety into a competitive advantage. Instead of blanket increases and hoping for the best, the retailer could make surgical, evidence-backed decisions — raise prices where customers wouldn't notice, and hold where they would. Every price change went from a gamble to a calculated move.