CMO & CTO

Closing the Bridge Between Marketing and Technology, By Luis Fernandez


When Rules Beat AI in Personalization

Posted on June 24, 2023 By Luis Fernandez


Story-led opening

The team met to ship a new product sorter for the homepage. The deck said AI would pick the best item for each visitor. The demo was slick. A week in, conversion was down for new visitors and support was flooded with tickets from people who could not find the basics. We rolled it back and shipped a simple rule set: new visitors see top sellers; logged in buyers see their last viewed items and category favorites. Revenue came back the same day.

I work on personalization a lot and I love new toys. ChatGPT is on every tab. The feed is full of smart takes on large models changing search, code, and copy. Still, when it comes to what shows up on a page for a real person, rules beat AI more often than people expect. Not always. Not forever. But often.

We do not lack models. We lack clarity on goals, data quality, consent, and guardrails. With all the buzz, it is easy to forget that the fastest way to serve the right thing can be a small set of clear business rules tied to first party events. Today I want to map when rules win, when models win, and how to decide without falling into the hype trap.

Analysis

Time to value. A rule can ship in a day. If category is shoes then show free shipping banner. If last purchase is over 90 days then show comeback offer. An AI model needs data collection, consent checks, features, training, scoring, and monitoring. If you need a lift by next Friday, rules are the safer bet.
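Rules like those can live in a handful of lines. Here is a minimal sketch; the context fields (`category`, `last_purchase`) and creative names are illustrative, not tied to any real platform:

```python
from datetime import date, timedelta

def pick_banner(ctx: dict) -> str:
    """First matching rule wins; the last line is the safe default."""
    if ctx.get("category") == "shoes":
        return "free_shipping_banner"
    last = ctx.get("last_purchase")        # a date, or None for a new visitor
    if last and date.today() - last > timedelta(days=90):
        return "comeback_offer"
    return "top_sellers"                   # fallback when no rule fires

print(pick_banner({"category": "shoes"}))                                  # free_shipping_banner
print(pick_banner({"last_purchase": date.today() - timedelta(days=120)}))  # comeback_offer
print(pick_banner({}))                                                     # top_sellers
```

The whole thing is a playbook entry you can screenshot for legal, and it ships by Friday.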

Data reality. Plenty of sites still rely on cookies that drop on first interaction. Consent banners reduce coverage. iOS tracking is weaker after the ATT changes. Chrome is not blocking all third party cookies yet, but Safari and Firefox do. GA4 is about to replace Universal Analytics next week and many teams are still wiring events. In this state, rules based on clear signals like page type, session count, category viewed, cart value, or user tier are easier to trust. A model trained on spotty or biased data can look smart while sending people to the wrong place.

Explainability for non data folks. You can point to a rule and say why it fired. You can send a screenshot to legal. You can put it in a playbook for support. A model can be explained too, but that takes more work, and right now most teams do not have the practice in place. When stakes are high and the meeting has product, brand, and legal in the room, clear if then logic wins hearts fast.

Cold start and sparse traffic. Plenty of businesses do not have millions of sessions per week. Or they have millions, but split into hundreds of micro segments once you slice by channel, geo, category, device, and consent. A rule does not care about sample size. A model does. If you are running on thin traffic, start with rules and simple A/B tests, then grow into models when volume allows.
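A quick back-of-the-envelope check makes the sample size point concrete. The standard normal-approximation formula for a two-proportion test (here at 5% significance and 80% power, both assumptions you can tune) shows how much traffic a single segment needs before a test, let alone a model, can say anything:

```python
from statistics import NormalDist

def samples_per_variant(base_rate: float, rel_lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough sessions needed per arm to detect a relative lift (normal approx.)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # power threshold
    p_bar = (p1 + p2) / 2
    n = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p2 - p1) ** 2
    return int(n) + 1

# A 2% baseline conversion with a hoped-for 10% relative lift
# needs tens of thousands of sessions per arm:
print(samples_per_variant(0.02, 0.10))
```

Slice that requirement across hundreds of micro segments and the case for starting with rules writes itself.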

Creative inventory. Personalization is not magic if you only have two banners. A model cannot invent a relevant hero for each segment without content to choose from. Rules force you to ask the right question early. Do we have variants for new versus returning, low AOV versus high AOV, sale hunters versus category loyalists. If the honest answer is no, a model will only shuffle the same cards.

Latency and control. Onsite choices need to be fast. If you wait on an external scorer for every slot, you add risk and cost. Rules can run at the edge or even inside your CMS. You can cache the result. For high traffic moments like a drop or a flash sale, fast rules with clear fallbacks will keep you up when a remote service hiccups.
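The "fast rules with clear fallbacks" idea can be sketched as a decision call with a hard latency budget. The 50 ms budget, the stubbed scorer, and the creative names are all assumptions for illustration; in production the scorer would be a network call:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def remote_score(slot: str) -> str:
    """Stand-in for an external scoring service that is having a bad day."""
    time.sleep(2)                         # simulate a slow remote response
    return "model_pick"

def decide(slot: str, budget_s: float = 0.05) -> str:
    """Ask the scorer, but fall back to a cached rule if it misses the budget."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(remote_score, slot)
        try:
            return future.result(timeout=budget_s)
        except FutureTimeout:
            return "top_sellers"          # safe rule-based fallback

print(decide("hero"))                     # top_sellers: scorer missed the 50 ms budget
```

During a drop or flash sale, that fallback path is the difference between a slow page and a blank slot.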

Where models shine. There are areas where AI carries clear wins. Product recommendations on deep catalogs with rich event streams. Send time for email at massive scale. Search ranking that blends text and behavior. Content similarity when you have a large library. Text generation for variants and subject lines when a writer sets tone and reviews output. In those cases, the model is not guessing. It is amplifying known signals you already collect.

Governance. Privacy and brand safety matter. If your sector is health, finance, or kids, you need strict guardrails. Rules let you bake in hard stops. Never show offer X to minors. Never show personalized pricing without consent. Never message more than once per day. Models can follow rules too, but you still need the rules in the first place. Start with the rulebook, then add the model inside those fences.
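Those hard stops translate directly into a pre-check that runs before any decision source, rule engine or model, gets a vote. Field names here (`age`, `consent`, `messages_today`) are hypothetical; map them to whatever your consent and profile store actually exposes:

```python
def guardrails_ok(user: dict, offer: dict) -> bool:
    """Hard stops evaluated before ANY decision source runs."""
    # Unknown age defaults to 0, i.e. treated as a minor: fail closed.
    if user.get("age", 0) < 18 and offer.get("adult_only"):
        return False                      # never show restricted offers to minors
    if offer.get("personalized_price") and not user.get("consent"):
        return False                      # no personalized pricing without consent
    if user.get("messages_today", 0) >= 1:
        return False                      # never message more than once per day
    return True
```

Whatever picks the winning offer, it only gets to choose from offers that already passed this gate.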

Cost and care. Models need maintenance. Features drift. Data pipelines break. People churn. Rules also need care, but the skills are closer to product and content, so more team members can help. The cheapest path is a layered setup. Use rules for the bulk of choices. Use models where the return is proven and monitored. Keep a simple fallback for every slot so nothing breaks during peak traffic.

Risks

Risks with rules

– Rule creep. Every sprint adds one more rule. Soon you have spaghetti. Solve this by naming rules, adding comments, and archiving stale ones. Keep a monthly cleanup ritual.

– Local maxima. Rules can trap you in small wins. You show best sellers forever and never push new items. Add rotation caps, freshness rules, and periodic tests to escape this trap.

– Bias from the loudest voice. A VP asks for a rule and it ships without proof. Protect your system with a simple step. Every rule has an owner, a metric, and a sunset date unless renewed.

Risks with AI

– Garbage in. Weak consent rates and missing events will hurt a model more than a rule. You get a pretty chart and a drop in revenue.

– Opaque decisions. You cannot explain why a slot shows a certain product. This brings friction with brand and legal, and it slows your ship cadence.

– Latency spikes and costs. External scoring during high traffic can slow pages and raise bills. Always measure tail latency and set timeouts with a safe fallback.

– Overly personal tone. Copy and offers that feel creepy. Combat this with guardrails: no use of sensitive attributes, and keep context to on site behavior with consent.

Decision checklist

Use this list to pick rules, AI, or a mix. Print it and bring it to the next planning session.

– Goal clarity. What is the single metric you want to move. Conversion. RPV. Lead quality. Retention. Pick one.

– Traffic and sample size. Do you have enough volume per segment to train and test. If not, start with rules.

– Consent coverage. What share of users can you personalize for. If consent is low, rules on contextual signals are safer.

– Data fitness. Are key events firing in GA4 or your CDP. Are identities stitched. Do you trust product and category taxonomies.

– Creative inventory. Do you have enough variants to make personalization worthwhile. If you only have one hero, fix content first.

– Latency budget. How many ms can you spend on a decision. If the budget is tight, place rules close to the page.

– Guardrails. Have you written no go rules for sensitive segments, frequency caps, and price fairness.

– Team skills. Who will build, review, and monitor. If the data team is maxed out, lean on rules in product tools that PMs and marketers can run.

– Explainability need. Will you need to explain every decision to brand or legal. If yes, favor rules or models with simple features.

– Fallbacks. What shows up when a decision source is down. If there is no safe default, do not deploy.

– Test plan. Can you run A/B tests to prove lift with clean attribution. If you cannot test, do not add complexity.
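For the test plan item, "prove lift" usually means a two-proportion comparison between control and variant. A one-sided z-test sketch, using only the standard library (the traffic numbers below are made up):

```python
from math import sqrt
from statistics import NormalDist

def lift_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided two-proportion z-test: is variant B better than control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_b - p_a) / se
    return 1 - NormalDist().cdf(z)                # small value = likely real lift

# 2.0% control vs 2.4% variant on 50k sessions each:
print(lift_p_value(1000, 50000, 1200, 50000))
```

If the p-value is not convincingly small at your planned traffic, the checklist's advice applies: do not add complexity.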

Action items

Week one

– Map the top five placements where personalization could move revenue or engagement. Hero banner, category sorter, search suggestions, cart upsell, email subject line.

– Write a one page rulebook. Define audience definitions you trust today. New, returning, high value, category interest, discount seeking. Set guardrails like frequency caps and no go zones.

– Audit data. Confirm GA4 events are live and named well. Check your CDP or event bus for identity stitching. Note any gaps in consent.

Weeks two to three

– Ship two or three high confidence rules on the highest impact placement. Keep them simple. For example, new visitors see best sellers by category. Returning visitors see last viewed. Cart over a threshold sees free shipping banner.

– Set up measurement. A/B test each rule versus control. Track lift on the one metric you picked. Watch page speed.

– Build fallbacks. For each slot, define the default that shows when anything fails. Document it in the rulebook.
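A rulebook fallback entry does not need to be fancy; a per-slot map plus a one-line render rule is enough. Slot and creative names here are hypothetical examples:

```python
from typing import Optional

# Documented default for every slot; lives next to the rulebook.
FALLBACKS = {
    "hero_banner": "top_sellers",
    "category_sorter": "best_selling_first",
    "cart_upsell": "none",                # an empty slot beats a wrong offer
    "email_subject": "default_campaign_line",
}

def render(slot: str, decision: Optional[str]) -> str:
    """Use the decision if a source returned one; otherwise the documented default."""
    return decision or FALLBACKS[slot]

print(render("hero_banner", "model_pick"))   # model_pick
print(render("hero_banner", None))           # top_sellers
```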

Weeks four to six

– Expand creative inventory. Create at least three variants per key segment for the hero and one lifecycle email. Add short copy that is easy to test.

– Add one model where it fits. If you have a decent catalog and event depth, turn on viewed together recommendations with a clear fallback to top sellers. If email has high volume, try send time at a small percentage and compare to a simple schedule.

– Install guardrails in tooling. Hard caps per user per day. Exclusions for sensitive categories. Easy opt out on site and in email.

Ongoing

– Review rules monthly. Remove ones that no longer move the metric. Keep a changelog so you can trace impact.

– Monitor models. Track drift, latency, and variance across segments. Keep a toggle to switch to the rule based fallback during issues.

– Keep score on effort versus lift. If a model gives small lift with heavy care, swap it out. If a rule is stale, retire it. The stack should be alive and simple.

Closing thought

AI is exciting and it will keep getting better. The goal is not to pick a side. The goal is to ship choices that help people and help the business. Start with clear rules on clean signals. Add models where they prove real lift. Keep guardrails, fallbacks, and a habit of testing. When you do that, you will have a system that survives hype cycles and keeps shipping wins.
