
All case studies are anonymized. No client names, identifying details, or proprietary data appear in any study. Metrics reflect actual simulation outputs.

Case Studies

Decisions already run.

Anonymized decision simulations across consumer goods, financial services, technology, and more. Each study ran on Simulatte infrastructure before the real-world decision was made.

10 studies published
[Chart: Conversion rate by pricing scenario. Standard rate 61% · 0.3% cut (simulation) +23% · Same-day premium 58% · Guaranteed track 44%. 4.2M agent simulation · UK retail lending market]
CS-002 · Q4 2025 · 8 min read

Retail Lending Pricing Optimisation

+23%
Revenue vs. intended pricing
4.2M
Synthetic agents modeled
4
Pricing scenarios compared

The brief was deceptively simple: three pricing scenarios, one new loan product, one question — which tier converts best? What followed was more complicated, and more instructive, than the product team expected.

A major UK retail lender was preparing to launch a new personal loan product with a three-tier pricing structure. The product team had two competing hypotheses: that customers would accept a modest rate premium for same-day approval, and that brand trust would outweigh rate comparisons for their existing customer base. Before committing to a pricing architecture, they ran the decision through Simulatte. Three scenarios were modeled across 4.2 million synthetic agents calibrated from UK census data, Financial Conduct Authority lending records, and panel interviews with active personal loan applicants.

The conventional wisdom — confirmed by an internal survey — said speed mattered. Same-day approval would command a premium. The data seemed clear. But Simulatte modeled something the survey couldn't: actual decision behavior under real choice conditions. Not what customers say they'd pay, but how they behave when rate, brand, and friction are all present simultaneously.

The 0.6% speed premium performed exactly as the survey predicted — among customers under 40 who actively used comparison sites. But that segment was already the lender's most loyal, lowest-margin cohort. The premium wasn't extracting value from new customers. It was monetizing existing ones who would have converted anyway.

The real finding was in the 45+ segment. These customers — the highest-value, longest-tenure accounts — barely registered the speed benefit at all. What moved them was something else entirely: specific trust signals in the product copy. Not the rate. Not the approval time. The particular phrasing around "your existing account data" and "no re-application required" reduced their switching friction to near-zero — but only when the messaging was personalized to their relationship tenure.

The simulation also surfaced a counterintuitive finding about the pricing band itself. The standard-rate product was underperforming because it was priced within the margin of error for comparison site filtering — effectively invisible in aggregate rankings. A modest 0.3% reduction moved it into the top-three visibility position on major comparison sites, dramatically increasing the inbound pipeline from price-sensitive under-35 applicants.

Three pricing scenarios went in. A fourth emerged from the simulation — one the team hadn't considered. The lender adjusted their launch architecture before committing to infrastructure. Estimated revenue impact over the product's first 18 months: £2.1M higher than the original middle scenario.

The simulation also produced verbatim persona responses. Daniel, 38, Senior Accountant, Leeds: "I'd move for a better rate but it has to be worth the admin. Half a percent isn't worth an afternoon on the phone with a new bank." Margaret, 57, Retired Teacher, Birmingham: "I've been with the same bank for 30 years. They know me. That's worth something that no rate discount replaces." These weren't anecdotes. They were representative outputs from behavioral clusters the simulation had identified as strategically distinct.

This is what decision infrastructure produces that survey research cannot: not just the headline number, but the behavioral logic underneath it. The mechanisms. The conditions. The words that actually matter to the people who matter.

Read the full study →
Is your pricing architecture built on what customers say — or on how they actually choose?
CS-003 · Q1 2026 · 9 min read

Staples Brand Regional Expansion

1 of 3
Strategies that drove trial
800K
Synthetic agents modeled
6
Tier 2 markets modeled

Three positioning options. One new market. An FMCG staples brand with strong metro credibility and no presence in the Tier 2 cities it needed to crack. The simulation didn't just rank the options — it showed why two of them would actively backfire.

The brand had done everything right in Tier 1. Strong SKU performance. High recall in urban panels. A distribution network built over years. Now they were expanding into six Tier 2 markets — cities with different retail structures, different media consumption patterns, and consumers who had never encountered the brand except peripherally. Three positioning strategies were on the table.

The first leaned into national credibility: celebrity endorsement, standardized packaging, the existing visual identity unchanged. The second localized aggressively: regional language elements, locally known influencers, adapted imagery reflecting local food habits and cooking contexts. The third was a hybrid: national brand architecture with local cultural signals woven into execution — same structural message, different surface. The marketing team favored the hybrid. It felt safe.

The simulation found something that surprised everyone. The hybrid performed worst — not because it was incoherent, but because it satisfied no one. Consumers in the target markets processed it as an outsider brand trying to look local without being local. The national positioning performed better than expected among consumers over 45, who read national credibility as a quality signal. But the fully localized approach dominated trial intent among under-35s — exactly the segment that drove repeat purchase velocity in the Tier 2 channel.

More interesting than the headline result was the mechanism. Full localization didn't win because it used regional language. It won because of a specific combination: regional visual context (local kitchen settings, familiar cooking vessels) paired with ingredient transparency messaging. Remove either element and performance dropped sharply. The combination was the signal — not local language alone.

The simulation also revealed something the brand hadn't considered: kirana channel risk. In all three scenarios, national-celebrity positioning created measurable resistance among the kirana shopkeepers who stocked the product. These shopkeepers functioned as active recommendation engines. When they perceived a brand as urban-oriented or "aspirational-for-someone-else," their recommendation behavior was suppressed — dampening sell-through in the critical first six weeks, before any marketing spend could build organic pull.

Ananya, 32, Teacher, Mysuru: "I don't mind a national brand, but if I see it cooked in a Mumbai kitchen I don't recognise, I can't connect it to my cooking. Show me something that looks like my kitchen." Rajesh, 44, Kirana owner, Nashik: "My customers ask me what I recommend. If a brand feels like it's not for us, I don't push it. Simple as that."

The brand restructured their entire Tier 2 launch strategy — regional-first creative, kirana outreach built into activation budgets, national celebrity restricted to top-funnel awareness only. The simulation turned a launch assumption into a launch architecture.

Read the full study →
Is your positioning built for the market you're entering — or the one you came from?
[Chart: Trial intent by positioning strategy, Tier 2 market. Bars: Localized, Hybrid, National. 800K agent simulation · 6 Tier 2 markets]
[Chart: Loyal customer retention, post-reformulation. Predicted retention above reformulation threshold: 73% · Switching threshold identified: 0.8% · Competitor defection predicted: 18% · Agents simulated: 1,000]
CS-001 · Q1 2026 · 7 min read

Beverage Reformulation Risk Assessment

73%
Loyalty retention predicted pre-launch
1,000
Loyal-customer agents simulated
18%
Competitor defection rate

A global beverage manufacturer was about to change a formula that had existed, unchanged, for 23 years. Operational reasons. Cost pressures. A reformulation the R&D team was confident consumers wouldn't detect. The simulation told a more specific story.

The brief was reformulation risk, but what the company was really asking was simpler: how many of our most loyal customers will we lose, and to whom? Traditional conjoint analysis had given them a clean answer — negligible switching intent. But conjoint surveys ask customers to evaluate trade-offs in isolation. The simulation placed 1,000 synthetic loyal customers in a realistic market scenario where the reformulated product was available alongside an unchanged competitor offering, and let them make decisions in their own behavioral language.
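The mechanism described above, synthetic loyal customers choosing between a reformulated product and an unchanged competitor, with the competitor's counter-campaign toggled on or off, can be sketched in a few lines. This is a toy illustration only: the 1,000-agent count comes from the study, but every behavioral parameter (loyalty range, taste sensitivity, campaign pressure) is an invented placeholder, not a calibrated value from Simulatte's actual models.

```python
import random

def simulate_retention(n_agents=1000, competitor_campaign=True, seed=7):
    """Toy agent-based sketch of the reformulation scenario.

    Each synthetic loyal customer decides whether to stay with the
    reformulated product or defect to an unchanged competitor. All
    behavioral parameters below are illustrative placeholders.
    """
    rng = random.Random(seed)
    stayed = 0
    for _ in range(n_agents):
        loyalty = rng.uniform(0.5, 1.0)            # habit strength (placeholder range)
        taste_sensitivity = rng.uniform(0.0, 0.5)  # chance the change registers at all
        switch_pressure = taste_sensitivity
        if competitor_campaign:
            # The rival's "original formula, still unchanged" messaging
            # adds defection pressure on top of any taste detection.
            switch_pressure += 0.15
        if loyalty > switch_pressure:
            stayed += 1
    return stayed / n_agents

# Compare retention with and without the competitor's counter-campaign
# (same seed, so the same synthetic population faces both conditions).
r_with = simulate_retention(competitor_campaign=True)
r_without = simulate_retention(competitor_campaign=False)
```

Because the campaign only ever adds switching pressure, retention without it is never lower than retention with it, which is the shape of the study's 73% vs. 81% finding, though the toy numbers themselves carry no meaning.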

Seventy-three percent of loyal customers stayed — but not unconditionally. The 27% who switched weren't doing so because they detected the formula change. They were responding to a specific competitor action the simulation had included: a targeted "original formula, still unchanged" campaign that had run in the test market. Remove that campaign, and retention climbed to 81%.

This was the critical finding. The reformulation itself was survivable. The competitive response to it was not — at least not without a counter-strategy. The simulation had modeled something conjoint analysis structurally cannot: the behavior of competitors and the indirect effect on loyal customers who weren't even consciously aware of the formula change.

The simulation also surfaced the switching threshold with precision that surprised the client. Among loyal customers with more than five years of purchase history, the trigger wasn't taste. It was category salience — whether their habitual purchase occasion was disrupted in any way (product unavailability, packaging change at shelf). Among lighter loyals (two to four years), taste sensitivity was the primary mechanism, and the switching threshold was lower than the R&D team had modeled.

Chris, 44, Regular buyer, Manchester: "I've been drinking the same thing since I was 20. If something changes, I'll probably not notice straight away. But if the shop I always buy from doesn't have it, I'll just grab whatever's there." This behavioral logic — disruption driving substitution, not formula detection — was consistent across 34% of the at-risk segment.

The manufacturer launched with the reformulation but invested in three changes the simulation had identified as retention levers: shelf availability agreements with key retail partners, pre-emption of the competitor's "original formula" campaign with a 12-week brand heritage push, and a segmented re-engagement offer to the two-to-four year loyalty cohort. Post-launch tracking at 16 weeks showed actual switching rates within 2 percentage points of the simulation's predictions.

Read the full study →
Before you reformulate, simulate. Know exactly which customers you'll lose and why.
CS-004 · Q4 2025 · Coming soon

Consumer Electronics Premium Expansion

41%
Credibility gap vs. incumbent
600K
Synthetic agents modeled

A D2C consumer electronics brand tested premium tier pricing at 3× current ASP against a synthetic population of current loyalists and premium competitor buyers. The simulation surfaced a 41% credibility gap — the delta between the brand's existing trust scores and the trust threshold required to command that price point — and identified the specific product attributes that closed it.

The conventional assumption was that heritage loyalty would transfer. It didn't. Existing customers trusted the brand for their current price tier, not the new one. The path to premium required distinct proof points — not just better specs, but a specific signal architecture the simulation identified across 6 product attribute combinations.

Full write-up coming
Want to test premium tier viability for your brand before committing to a new product line?
[Chart: Brand trust vs. premium threshold gap. Current 81% · Required 100% · 41pp credibility gap]
[Chart: Campaign attention over time, 15-year run. Launch to Year 15: −14pp attention decay detected]
CS-005 · Q1 2026 · Coming soon

Long-Running Campaign Fatigue Analysis

14pp
Attention decay detected
1.2M
Synthetic agents modeled

A personal care conglomerate tested whether a flagship campaign (15+ year run) was still driving incremental purchase intent, or had become invisible through familiarity. The simulation modeled behavioral response to the existing campaign, three replacement concepts, and a hybrid evolution strategy across 1.2 million synthetic agents.

The 14-point attention decay wasn't evenly distributed. Among consumers under 35, it had become almost complete — the campaign was processing as ambient noise. Among 45–60s, it still retained strong brand reinforcement function, even without driving new intent. The replacement ranking surprised the team: the lowest-budget concept outperformed by 12 points in new-to-category conversion.

Full write-up coming
Is your longest-running creative still working — or has it become wallpaper?
CS-006 · Q1 2026 · Coming soon

Speed Premium Willingness by Category

2
Categories with genuine speed WTP
2.8M
Synthetic agents modeled

A quick commerce platform simulated purchase decisions across five product categories at varying delivery times and delivery fees. The simulation identified which categories command genuine speed premium — where consumers actually pay more for faster delivery — and which are driven purely by base price, with speed functioning only as a tiebreaker.

Only two of the five categories showed real speed WTP: fresh produce and baby/infant products. In the other three, a 15-minute improvement in delivery time moved conversion by less than 1 point. The platform's unit economics model had been optimizing for the wrong categories.

Full write-up coming
Do you know which of your categories actually commands a speed premium — and which don't?
[Chart: Speed premium WTP by category. 1. Fresh · 2. Baby · 3. Snacks · 4. Cleaning · 5. Staples]
[Chart: Trust recovery by scenario, Month 0 to Month 12, Scenarios A/B/C. Trust recovery timeline (best path): 8.2 months]
CS-007 · Q4 2025 · Coming soon

Post-Crisis Brand Trust Recovery

8.2mo
Predicted recovery timeline
1.5M
Synthetic agents modeled
3
Relaunch scenarios modeled

An EdTech platform modeled trust recovery under three relaunch scenarios across synthetic parent populations with varying levels of exposure to a prior brand incident. The simulation identified which messaging sequence drove sustainable trust recovery and — critically — which re-triggered the original skepticism when deployed too early.

Scenario C, the most intuitive approach (immediate transparency + compensation), performed worst: it surfaced memories of the incident in consumers who had begun to move past it, setting back recovery by an average of 3.4 months. The winning sequence was counterintuitive: extended silence followed by evidence-first messaging from third-party voices, then direct brand communication only once trust indicators had begun to recover organically.

Full write-up coming
Before you communicate your way through a crisis, simulate what each sequence actually does to your customers.
CS-008 · Q1 2026 · Coming soon

Subscription Retention Offer Optimisation

3
Offers ranked by churn impact
400K
Synthetic agents modeled

A D2C grooming subscription brand tested three retention offers — discount, bundle upgrade, and pause option — against a synthetic population of churn-risk subscribers showing engagement decline signals. The simulation ranked offers by predicted churn reduction and LTV impact, and revealed a finding that overturned the team's intuition.

The pause option outperformed the discount on both metrics. Not because subscribers valued flexibility more than money, but because the subset most likely to churn permanently were doing so from overwhelm, not dissatisfaction. A discount increased their sense of obligation. A pause reduced it. The LTV calculation at 18 months favored pause-to-return subscribers by a margin the team hadn't modeled.

Full write-up coming
Do you know why your at-risk subscribers are actually leaving — and what would actually keep them?
[Chart: Churn reduction by retention offer, vs. control group with no intervention. Pause option −38% · Bundle upgrade −22% · 10% discount −14%]
[Chart: Sales decline attribution, plant-based FMCG. Brand-specific loss (competitive displacement): 62% · Category-wide (category plateau): 38%]
CS-009 · Q4 2025 · Coming soon

Brand vs. Category Decline Attribution

62%
Brand-specific displacement
900K
Synthetic agents modeled

A plant-based FMCG brand experiencing sustained sales decline needed an answer to a fundamental question: were they losing to competitors, or was the whole category plateauing? The answer determined everything — marketing investment, innovation roadmap, pricing response, and whether to defend or diversify.

The simulation cleanly separated brand-specific competitive displacement (62% of the decline) from category-wide behavioral shift (38%). The category plateau was real — post-2023 trial behavior for new plant-based entrants had softened across the segment. But the brand's additional decline was traceable to three specific competitive moves from a single challenger brand, and was recoverable with targeted response.

Full write-up coming
Is your sales decline a brand problem or a category problem? The strategies for each are completely different.
CS-010 · Q1 2026 · Coming soon

Portfolio Expansion Into Adjacent Categories

38%
Cross-sell probability · top segment
700K
Synthetic agents modeled

A fresh dairy D2C brand with strong milk subscription revenue tested expansion into paneer, curd, and ghee. The simulation mapped cross-sell probability by subscriber segment, measured brand credibility transfer into each new category, and ranked SKUs by expected contribution margin — before any product development investment was made.

Cross-sell probability varied by 26 percentage points between the highest and lowest subscriber segments. The brand's intuition about which segment would adopt soonest was wrong: the highest-trust, longest-tenure subscribers showed the strongest expansion intent, not the highest-engagement recent acquirers. Paneer ranked first on credibility transfer; ghee ranked first on WTP. The portfolio sequencing the simulation recommended differed significantly from what the team had planned.

Full write-up coming
Before you launch a new SKU, know which segment will adopt it first and why.
[Chart: Cross-sell probability matrix. Paneer: 38% long-tenure / 21% recent · Curd: 29% / 17% · Ghee: 24% / 12%]
CPG-001 · Q2 2026 · 20 min read

Why Loyal US Shoppers Trade Down to Private Label — and Why Most Never Come Back

5 of 8
Personas with no win-back pathway
48
Simulation runs across 6 hypotheses
4
Interventions tested

CPG brands are fighting a perception and identity problem with pricing tools. When inflation hit, loyal shoppers switched to private label — and most didn't come back when prices stabilised. This study mapped why.

Eight deep behavioral personas spanning US suburban households with HHI $45K–$130K were exposed to four brand win-back interventions: coupon/discount (INT-A), quality credentialing (INT-B), loyalty reward (INT-C), and category re-education (INT-D). The simulation ran 48 times across 6 hypotheses. The headline finding was stark: five of eight personas have no standard win-back pathway available under any tested intervention.

Three personas actively produce backfire when exposed to coupon or promotional interventions. For Deirdre, a former loyal Tide/Bounty/Heinz buyer who told her book club about her switch and built a social identity around being "smart about it," a coupon offer confirmed her original decision rather than reversing it. The promotion signalled that the brand was worth less — exactly what she believed when she left.

Two personas were not win-back cases at all. They were acquisition cases — people who had never held a strong brand preference and whose private label adoption was a stable equilibrium, not a lapse. Treating them as lapsed brand buyers and applying win-back resources was a mismatch of frame and audience.

The only intervention with positive movement across more than three personas was quality credentialing: independent testing results, ingredient transparency, and third-party certifications presented without promotional framing. Even this had limits — it worked only for personas whose switch had been primarily rational rather than identity-based.

The simulation produced a segmentation the brand team hadn't been working with: not a spectrum from loyal to lapsed, but four structurally distinct groups with different intervention logic, different re-engagement conditions, and different expected costs per recovered unit. The implication for media spend, CRM targeting, and in-store activation was material.

Read the full study →
Is your win-back program targeting lapsed buyers — or people who were never really yours?

Run your decision.

Every study on this page ran on Simulatte infrastructure before the real-world decision was made. Your decision can too.

Book a session →