Photos in a Dropbox do not get cited by AI search. Photos on multi-modal pages with FAQ schema and answer-first leads do. Ninety days is how long it takes to do the work properly. I photograph corporate teams for marketing directors across the US, and for my clients the visual content and the AI-search rollout are the same project. This is the day-by-day calendar I am running on henrydavidphotography.com. Not theory, not a generic checklist. The actual order of operations, the actual changes, the actual tracking I have set up. If you have read the AI search visibility pillar and the technical spokes, this is the post that ties it all together into a sequenced plan you can copy.
The plan is structured as three 30-day phases: Foundation (technical setup), Content (the rewrites and schema), and Authority (the brand-mention work). Each phase builds on the last. Skipping ahead almost always leaves the foundation work undone, which is why most AI search projects stall.
Why 90 days, not 30 or 180
AI search citations have a measurable lag. The retrieval bots crawl, the changes propagate, and the citation patterns shift over weeks, not days. Per public AIO research, the median time from a meaningful site change to a measurable shift in citation patterns is roughly 4 to 8 weeks. Sites that try to evaluate results in a 30-day window almost always conclude 'this didn't work' because the lag hasn't fully played out yet.
On the other end, anything longer than 90 days drifts. Three months is enough time for the technical changes, the content rewrites, and at least two cycles of citation tracking to land before the post-mortem. Six months without checkpoints is how AI visibility projects turn into slow, unaccountable work that gets quietly abandoned.
The 90-day frame keeps the urgency high and gives the data room to land.

Phase 1: Foundation (Days 1-30)
The goal of Phase 1 is making your site readable by AI search systems and adding the structural signals they look for. By the end of day 30, every retrieval bot can reach your site, your top three pages have FAQPage schema, and your content style is set up for citability.
Days 1-2: Crawler audit. Open `yourdomain.com/robots.txt` and verify GPTBot, ChatGPT-User, OAI-SearchBot, ClaudeBot, Claude-Web, PerplexityBot, Google-Extended, and GoogleOther are explicitly allowed. Use my robots.txt template as the baseline. If you use Cloudflare, log into the dashboard and confirm AI Crawl Control isn't blocking these bots at the edge. This is the step that silently breaks more sites than any other.
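If you'd rather script the check than eyeball it, here is a rough TypeScript sketch that fetches robots.txt and flags any AI crawler blocked at the root. It's a loose heuristic, not a spec-complete robots.txt parser, and the file name and run command are just suggestions.

```typescript
// check-ai-crawlers.ts — rough robots.txt audit for the AI search bots.
// Run with: npx tsx check-ai-crawlers.ts https://yourdomain.com
// Loose heuristic, not a spec-complete robots.txt parser.

const AI_BOTS = [
  "GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "Claude-Web",
  "PerplexityBot", "Google-Extended", "GoogleOther",
];

async function auditRobots(origin: string): Promise<void> {
  const res = await fetch(new URL("/robots.txt", origin));
  if (!res.ok) throw new Error(`robots.txt returned HTTP ${res.status}`);
  const text = await res.text();

  // Split into user-agent groups on blank lines (good enough for most files).
  const groups = text.split(/\n\s*\n/).map((block) =>
    block.split("\n").map((line) => line.trim().toLowerCase()).filter(Boolean)
  );

  for (const bot of AI_BOTS) {
    // Prefer a group that names the bot; fall back to the wildcard group.
    const named = groups.find((g) => g.includes(`user-agent: ${bot.toLowerCase()}`));
    const effective = named ?? groups.find((g) => g.includes("user-agent: *"));
    const blocked = effective?.includes("disallow: /") ?? false;
    console.log(`${bot.padEnd(16)} ${blocked ? "BLOCKED at root" : "allowed"}`);
  }
}

auditRobots(process.argv[2] ?? "https://yourdomain.com").catch(console.error);
```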
Days 3-4: Schema audit. Run your top three pages through Google's Rich Results Test. Identify which schema types are present (LocalBusiness, Service, Article, BreadcrumbList) and which are missing (FAQPage, ImageObject, VideoObject). The most important addition is FAQPage on your top revenue page.
Days 5-10: FAQPage schema rollout. Pull five to seven real questions from your inbox or sales calls. Write 134- to 167-word answers with specific numbers and locations. Wrap them in FAQPage JSON-LD using my template. Test in Google's Rich Results Test. Repeat across your top three to five pages. Per the research, content with FAQPage schema sees roughly a 2.5x higher chance of being cited by AI search.
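For reference, here is a minimal sketch of what the FAQPage JSON-LD can look like when rendered from a React/Next.js-style component. The component name and the question and answer in the usage note are placeholders, not my actual template.

```tsx
// FaqSchema.tsx — FAQPage JSON-LD rendered as a script tag.
// The example question and answer are placeholders; use your real inbox questions.

type Faq = { question: string; answer: string };

export function FaqSchema({ items }: { items: Faq[] }) {
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: items.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };

  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
    />
  );
}

// Usage on a service page:
// <FaqSchema items={[{
//   question: "How far in advance should we book a multi-office headshot day?",
//   answer: "Four to six weeks for most teams; larger rollouts need more lead time.",
// }]} />
```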
Days 11-15: GA4 AI referrer tracking. Add a custom dimension `ai_platform` in GA4 and configure an event called `ai_referral` that fires when `document.referrer` matches a known AI search platform: chatgpt.com, perplexity.ai, claude.ai, gemini.google.com, copilot.microsoft.com, you.com, search.brave.com, meta.ai, and a handful of others. The implementation I run on this site lives in `src/lib/analytics.ts` and `src/components/analytics/AIReferralTracker.tsx`. Without this, you can't measure the lagging indicator (AI traffic conversion) that proves the work is paying off.
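The real implementation lives in the files named above; this stripped-down sketch just shows the idea. The host list and the gtag event call follow the standard GA4 pattern, but the function and file names here are illustrative, not copied from my repo.

```typescript
// ai-referral-sketch.ts — simplified version of the AI referrer check.
// Assumes GA4 is loaded via gtag.js and ai_platform is registered as a
// custom dimension. The production version on this site is more defensive.

const AI_REFERRER_HOSTS: Record<string, string> = {
  "chatgpt.com": "chatgpt",
  "perplexity.ai": "perplexity",
  "claude.ai": "claude",
  "gemini.google.com": "gemini",
  "copilot.microsoft.com": "copilot",
  "you.com": "you",
  "search.brave.com": "brave",
  "meta.ai": "meta",
};

declare global {
  interface Window {
    gtag?: (...args: unknown[]) => void;
  }
}

export function trackAiReferral(): void {
  if (typeof document === "undefined" || !document.referrer) return;

  const host = new URL(document.referrer).hostname.replace(/^www\./, "");
  const platform = Object.entries(AI_REFERRER_HOSTS).find(
    ([domain]) => host === domain || host.endsWith(`.${domain}`)
  )?.[1];

  if (platform && window.gtag) {
    // Fires the ai_referral event with the matched platform as a parameter.
    window.gtag("event", "ai_referral", { ai_platform: platform });
  }
}
```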
Days 16-22: Lead-paragraph rewrites. Rewrite the first 134 to 167 words of your top three to five pages so they lead with a complete, citable, factually verifiable answer. The framework is in my answer-first content post. Specific number, specific location, specific named entities (industries, cities, regulations). No throat-clearing. The first paragraph is the most cited block on the page; that's the highest-leverage block to rewrite.
Days 23-26: llms.txt deployment. Publish a short llms.txt at the root of your site listing your highest-value pages, in markdown. The detail is in my llms.txt explainer. Honest read: this is low-cost insurance, not a guaranteed lift. Run it because the cost is near zero, not because it's a magic bullet.
Days 27-30: Citation baseline tracking. Run your top 10 prompts across ChatGPT, Perplexity, Claude, and Google AI Mode. Log which businesses get cited for each prompt. This is your day-30 baseline. You will compare against this on day 60 and day 90. The prompts should be the queries your customers actually type, scoped to your service area. For me, that's variations on 'corporate headshot photographer in St. Louis,' 'multi-office team headshots,' 'professional photographer for healthcare team page,' and so on.
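If you'd rather keep the baseline log as structured data instead of a spreadsheet, these are the fields worth capturing per prompt run, plus a quick way to see which prompts flipped between checks. The shape is a suggestion, not a requirement.

```typescript
// prompt-log.ts — fields to capture for each baseline prompt run.

type AiPlatform = "chatgpt" | "perplexity" | "claude" | "google-ai-mode";

interface PromptLogRow {
  prompt: string;            // the query a customer would actually type
  platform: AiPlatform;
  dateRun: string;           // ISO date, e.g. "2026-05-30"
  citedBusinesses: string[]; // every business named or linked in the answer
  ownSiteCited: boolean;     // the value you compare on day 60 and day 90
  notes?: string;
}

// Day-60 comparison: which prompts flipped from absent to cited?
function newlyCited(baseline: PromptLogRow[], current: PromptLogRow[]): string[] {
  return current
    .filter((row) => row.ownSiteCited)
    .filter((row) =>
      !baseline.some(
        (b) => b.prompt === row.prompt && b.platform === row.platform && b.ownSiteCited
      )
    )
    .map((row) => `${row.platform}: ${row.prompt}`);
}
```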
Phase 2: Content (Days 31-60)
The goal of Phase 2 is building a citation-worthy content footprint. By the end of day 60, every page on your site that drives revenue has a self-contained answer in its lead, FAQ schema, and a published date that signals freshness.
Days 31-40: Service-page deep edit. Work through every service page and every high-traffic landing page. Each one gets the answer-first lead rewrite, FAQPage schema, and an audit for entity density (target 15+ recognized entities per 1,000 words). Named cities, named industries, named regulations, named tools, named credentials. This is the slow grind that compounds over the next two phases.
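The density target is simple arithmetic once you've counted the entities by hand; a quick sketch of the math, with made-up counts:

```typescript
// entity-density.ts — quick check against the 15-per-1,000-words target.
// "Entities" means whatever you count by hand: cities, industries,
// regulations, tools, credentials. This only does the arithmetic.

function entityDensity(entityCount: number, wordCount: number): number {
  return (entityCount / wordCount) * 1000;
}

// Example: 22 named entities in a 1,650-word service page.
const density = entityDensity(22, 1650); // ≈ 13.3 — below target, add more specifics
```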
Days 41-45: Multi-modal upgrade. Per the multi-modal research, pages that combine text, images, video, and structured data see roughly a 156% higher AI citation rate than text-only pages. For each of your top three pages, add an embedded YouTube video walkthrough or behind-the-scenes clip with a transcript, an image grid showing real client work, and VideoObject + ImageObject schema. If you don't have video, this is the moment to commission a single-day photo and video shoot that produces material for four to six multi-modal pages at once.
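The markup follows the same pattern as the FAQPage block above. A minimal sketch of the VideoObject and ImageObject JSON-LD, where every URL, title, date, and duration is a placeholder:

```typescript
// media-schema.ts — VideoObject and ImageObject JSON-LD for a multi-modal page.
// Every URL, title, date, and duration here is a placeholder.
// Render these the same way as the FAQPage block: JSON.stringify into a
// <script type="application/ld+json"> tag.

export const videoJsonLd = {
  "@context": "https://schema.org",
  "@type": "VideoObject",
  name: "Behind the scenes: multi-office team headshot day",
  description: "A four-minute walkthrough of a two-location corporate headshot shoot.",
  thumbnailUrl: "https://example.com/images/bts-thumbnail.jpg",
  uploadDate: "2026-05-15",
  duration: "PT4M12S", // ISO 8601 duration
  embedUrl: "https://www.youtube.com/embed/VIDEO_ID",
};

export const imageJsonLd = {
  "@context": "https://schema.org",
  "@type": "ImageObject",
  contentUrl: "https://example.com/images/healthcare-team-grid.jpg",
  caption: "Team headshot grid for a 40-person healthcare practice",
  creditText: "Your Studio Name",
};
```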
Days 46-52: New cluster content. Start publishing your topic cluster. One pillar post on the central topic in your service line, plus three to five spoke posts that each answer a specific sub-question and link back to the pillar. Pillar at 2,500 to 3,000 words, spokes at 1,200 to 1,800 each. Internal-link the spokes to each other and to the pillar with descriptive anchor text. The query-fan-out research published in 2026 explains why: AI systems break a single user query into 5-10 sub-queries and search each one independently. Each spoke post is a candidate match for a different sub-query.
Days 53-57: Author-bio and E-E-A-T signals. Per the AIO research, 96% of AI Overview content comes from sources with verifiable Experience, Expertise, Authoritativeness, and Trust signals. Add an author bio with credentials, named clients (anonymized if needed), years in business, and a real photo. Add Person schema. Reference verifiable claims with primary-source citations. Trust signals are a hard filter, not a soft preference.
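A minimal Person schema sketch; every value below is a placeholder to swap for your real name, credentials, and profiles:

```typescript
// person-schema.ts — author-bio Person JSON-LD; every value is a placeholder.
export const personJsonLd = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Your Name",
  jobTitle: "Corporate Photographer",
  worksFor: { "@type": "Organization", name: "Your Studio Name" },
  url: "https://yourdomain.com/about",
  image: "https://yourdomain.com/images/your-headshot.jpg",
  knowsAbout: ["corporate headshots", "team photography", "commercial photography"],
  sameAs: [
    "https://www.linkedin.com/in/your-profile",
    "https://www.youtube.com/@your-channel",
  ],
};
```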
Days 58-60: Mid-cycle citation check. Re-run the same 10 prompts you logged on day 30. Compare. If you have moved from 'absent' to 'cited' on any prompt, that's data. If nothing has shifted yet, that's also data. The lag can be 6-8 weeks; don't panic at day 60.
Phase 3: Authority (Days 61-90)
The goal of Phase 3 is the brand-mention layer that AI search systems weight roughly 3x more heavily than backlinks. Brand mentions, third-party citations, and presence across the platforms AI systems cite (Wikipedia, Reddit, YouTube, industry-specific publications) are the inputs that compound your visibility long after the technical setup is done.
Days 61-66: Directory and citation cleanup. Audit your business citations across Google Business Profile, industry directories, and trade publications. Confirm your name, address (or service area, if you run a service-area business), phone, and website match exactly across every directory. Mismatches dilute your authority signal. AI systems cross-reference Knowledge Graph data, and inconsistent citations get filtered out.
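If you log your listings as you audit them, a small consistency check can catch mismatches for you. Everything here, the field names and the normalization, is just one way to do it:

```typescript
// nap-check.ts — flag name/phone/website mismatches across directory listings.
// You enter the listing data by hand as you audit each directory.

interface Listing {
  source: string;      // "Google Business Profile", "Yelp", etc.
  name: string;
  phone: string;
  website: string;
  serviceArea: string; // or the street address for a physical location
}

// Strip punctuation and case so "(314) 555-0100" matches "314-555-0100".
const normalize = (s: string) => s.toLowerCase().replace(/[^a-z0-9]/g, "");

function findMismatches(listings: Listing[]): string[] {
  const [reference, ...rest] = listings;
  const issues: string[] = [];
  for (const listing of rest) {
    (["name", "phone", "website", "serviceArea"] as const).forEach((field) => {
      if (normalize(listing[field]) !== normalize(reference[field])) {
        issues.push(`${listing.source}: ${field} differs from ${reference.source}`);
      }
    });
  }
  return issues;
}
```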
Days 67-73: Digital PR pitches. Pitch three to five industry publications, podcasts, or newsletters with a story tied to your expertise. The angle isn't 'please write about my business.' The angle is 'I have specific data, a specific case study, or a specific contrarian take that your audience would care about.' Brand mentions in these third-party properties carry more weight than backlinks. The Substack citation that triggered this whole project (AI Central, February 2026) is the proof of concept on this site.
Days 74-80: Reddit and forum presence. Per Averi.ai research, Perplexity favors Reddit (about 47% of citations), and Reddit is increasingly cited across other AI platforms too. Find the two or three subreddits where your customers ask questions in your space. Answer real questions with specific, verifiable, helpful content. Don't shill. Build a posting history that establishes authority over months. This is a slow burn that pays compound interest.
Days 81-86: YouTube uploads. YouTube is the most-cited domain in Google AI Overviews. If you have video, upload it with descriptive titles, complete descriptions, accurate transcripts, and chapter markers. Each video becomes a multi-modal asset that can be cited independently of your website. Photographers and videographers have an unfair advantage here because we already produce video as a byproduct of doing the work.
Days 87-90: 90-day post-mortem. Re-run the same 10 prompts. Compare against day 30 and day 60. Pull GA4 data on AI referral traffic from the `ai_referral` event. Calculate the conversion rate of AI-referred sessions versus Google organic. Per Conductor's 2026 benchmarks, the expected lift is roughly 5x (AI traffic at 14.2% conversion versus Google organic at 2.8%). If your numbers are within 20% of that, the playbook is working as advertised. If they're way off, the data tells you which phase to revisit in the next sprint.
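The day-90 math is simple. A sketch of the comparison, with placeholder session and conversion counts until your own GA4 export is in:

```typescript
// post-mortem.ts — compare AI-referred conversion against Google organic.
// The session and conversion counts are placeholders; pull the real numbers
// from GA4 (ai_referral sessions vs. google / organic sessions).

function conversionRate(conversions: number, sessions: number): number {
  return (conversions / sessions) * 100;
}

const aiRate = conversionRate(9, 63);        // e.g. 9 bookings from 63 AI sessions ≈ 14.3%
const organicRate = conversionRate(21, 750); // e.g. 21 bookings from 750 organic sessions = 2.8%

console.log(`AI referral conversion: ${aiRate.toFixed(1)}%`);
console.log(`Organic conversion:     ${organicRate.toFixed(1)}%`);
console.log(`Lift: ${(aiRate / organicRate).toFixed(1)}x`); // the benchmark is roughly 5x
```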
What I am tracking on this site
For my own implementation, I am running this exact plan starting from the beginning of May 2026. Day 30 is around June 2. Day 60 around July 2. Day 90 around August 1.
The tracking dashboard is GA4 + a manual prompt log spreadsheet. GA4 captures the `ai_referral` events fired by the AIReferralTracker component installed in `src/app/layout.tsx`. The spreadsheet logs each prompt, the date I ran it, the AI platform, and which businesses got cited. I run the prompts on the first of every month.
I will update this post on the first of June, July, and August with the data I have logged. Not vague impressions. Actual rows in a table. The Substack citation from February is the day-zero proof point. Every additional citation between now and August is a data row.
This is the part that separates a real plan from a content-marketing brochure: the plan only matters if it survives contact with reality. I am running the experiment in public so the receipts are verifiable.
Want this 90-day plan run as part of a full engagement?
AI accelerates discoverability. The lens still does the work. The AI-Visual Branding Package combines the shoot day, schema deployment, content rewrites, and quarterly citation tracking into one engagement.
See the package

What to do today
If you read all the way here, here are three actions to take in the next hour, before you close this tab and forget.
One: open `yourdomain.com/robots.txt` and verify the AI bots are explicitly allowed. If you use Cloudflare, check AI Crawl Control. This is day 1.
Two: open your top revenue page. Read the first 167 words. If they don't lead with a complete, factually verifiable answer with a specific number and a specific location, draft the rewrite in a Google Doc tonight. Don't ship until tomorrow; today is just the draft. This is day 5 of Phase 1.
Three: pull five real questions from your last 50 prospect emails. Write 134 to 167-word answers. Save them in a Google Doc. Tomorrow you'll wrap them in FAQPage JSON-LD. This is days 5-10 of Phase 1, started.
The whole 90-day plan is more work than this post can capture in detail. Each phase has its own walkthrough in the cluster. But the three actions above are the day-1 starting line, and they take less than an hour.
If you want me to run this 90-day plan on your site rather than running it yourself, get in touch and we'll talk it through.
