
How I Got My Photography Site Cited by ChatGPT, Perplexity, and Google AI Overviews

A working photographer's playbook for getting your business cited in AI search answers. The technical setup, the content rules, and the changes I am running on my own site right now.

I run a corporate photography business. Marketing directors hire me to photograph their teams, their facilities, their leadership, and the work they do. The photographs are the deliverable. But what I have also been quietly building, alongside the photography work, is the AI-search infrastructure that makes those photographs findable when prospects ask ChatGPT, Perplexity, or Google's AI Mode who they should hire. This post is the playbook for that second part, written from the angle that almost nobody else writes from: a working photographer who has shipped both the visuals and the AI-search rollout.

On February 13, 2026, an AI-focused Substack newsletter called AI Central published a feature on how AI is reshaping casual video. In the middle of the article, the author cited my blog post on iPhone Cinematic Mode as a working photographer's reference. I did not pitch the writer. I did not know they were writing a piece. They found my post the same way ChatGPT, Perplexity, and Claude find sources every day, by crawling, retrieving, and choosing whose words to lift into an answer.

That citation is one data point that the approach is working. If a photographer's blog can get pulled into a tech newsletter through the same retrieval logic that powers AI search, every small service business has a real shot at the same outcome. The trick is that almost nobody is set up for it. Most sites are still optimized for the 2018 Google playbook. The new game runs on different rules, and the rules are public if you know where to look.

This post is the playbook. The technical setup, the content rules, the data that made me believe this is the most underpriced marketing channel for a service business in 2026, and the receipts as I track them across the next 90 days. If you would rather have the photography and the AI-search rollout handled together as one engagement, that is the AI-Visual Branding Package. Three vendors collapse into one accountable partner.

Why this is a photographer's playbook, not an SEO consultant's

An SEO consultant can write FAQ schema for your existing photos. They cannot photograph your team in St. Louis next Tuesday. A pure photographer can shoot your team. They cannot rebuild your service pages for AI retrieval. The integration is where the work compounds.

For a marketing director, that integration matters because it collapses three line items into one engagement. Photographer to shoot the team. SEO consultant to optimize the pages those photos will live on. AI strategist to handle the retrieval-bot setup. Three different vendors, three different scopes of work, three different invoices, three different accountability surfaces. One of them goes to ground. The project stalls.

My whole pitch is that the visuals and the AI-search rollout are the same engagement. The shoot day produces the assets. The same week, those assets get rebuilt into multi-modal pages with FAQ schema and answer-first leads. The robots.txt and llms.txt are configured before the photos go live. The AI-citation tracking starts the day the new pages ship. One contract. One timeline. One person on the hook for whether the work actually moves the needle.

That is what makes this a photographer's playbook. The AI-search work only matters if the visual content is real, current, and yours. Generic stock photos with perfect schema do not get cited the same way real photos of named teams in named cities get cited. The photography is the precondition, not the side product.

[Image: AI search visibility map showing how a small business gets surfaced across ChatGPT, Perplexity, and Google AI Overviews]

What changed and why it matters now

For 20 years, the goal of SEO was to rank in the top ten blue links on Google. That was the whole game. You wrote keyword-targeted content, built backlinks, and hoped to land on page one for the queries your customers searched. The user typed a question, scanned the results, and clicked through.

That is not how most people search anymore. When someone asks ChatGPT, Perplexity, Claude, or Google's own AI Mode for a recommendation, they do not see ten blue links. They see a single synthesized answer with a few citations underneath. A handful of businesses get named inside that answer. The rest get nothing. No click. No impression. No traffic.

If your business is not in that answer, you are invisible to the user. That sounds dramatic, but the data backs it up. Per Ahrefs research published in March 2026, only 38% of pages cited in Google AI Overviews appear in the top ten organic results for the query they were cited on. The other 62% of citations come from pages on page two, page ten, or beyond. The old Google ranking position is barely correlated with whether AI picks you. The factors that matter are different, measurable, and almost entirely actionable inside your existing site.

The four signals that actually move the needle

When I dug into the research and the patterns inside the citations I have been tracking, four things stood out as the real ranking factors for AI search. None of them require backlink building or technical SEO heroics. They mostly require rewriting your existing content with a different priority order.

Semantic completeness. Content that fully answers a question in a self-contained passage of about 134 to 167 words gets cited at over four times the rate of content that buries the answer or splits it across multiple sections. AI systems are scanning for paragraph-level chunks they can lift into an answer with attribution. A 167-word block that stands alone as a complete answer is the unit they want.

Multi-modal content. Pages that combine text, images, video, and structured data see a 156% higher selection rate than text-only pages. This is the strongest new signal in 2026. YouTube is consistently the most cited domain in Google AI Overviews. Photographers and video producers already make multi-modal content as a side effect of doing the actual work. Most service businesses don't, which is why the gap is wide open.

Brand mentions, not backlinks. Per the same Ahrefs analysis, the correlation between branded web mentions and AI citations is roughly 0.66. For backlinks, it is roughly 0.22. Brand mentions are about three times more predictive of AI visibility than the link-building game that defined the last decade of SEO. The Substack post that cited me linked, but the unlinked mention of "Henry David Photography" inside the prose almost certainly did more for my AI visibility than the link itself.

E-E-A-T as a hard filter. Per a Wellows/Ahrefs study cited across multiple AIO research papers in 2026, 96% of AI Overview content comes from sources with verifiable Experience, Expertise, Authoritativeness, and Trust signals. This isn't a soft preference. It's a filter. Author bios, credentials, named clients, real photos of real work, and verifiable claims are not optional. They're the gate.

The 30-minute setup nobody actually does

Before you write a single new word, three technical changes determine whether AI search can even read your site. I am calling these out first because if you skip them, nothing else in this post matters.

Step one: check your AI crawler settings. Since July 2025, Cloudflare has blocked AI crawlers by default for over one million customer websites. If you use Cloudflare and have not actively allowed ChatGPT-User, PerplexityBot, ClaudeBot, and OAI-SearchBot through their Bots dashboard, you are invisible to every retrieval-based AI search platform. They cannot read you. The full list of bots, my exact robots.txt, and the trade-offs between training-time bots and retrieval-time bots are in my AI crawler robots.txt setup post.
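If you manage robots.txt by hand rather than through a CDN dashboard, the allow rules are plain text. A minimal fragment covering the retrieval bots named above, to merge with your existing Disallow rules (the user-agent tokens are the publicly documented ones; this is a sketch, not my full production file):

```
# Retrieval-time AI bots (sketch; merge with your existing rules)
User-agent: ChatGPT-User
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
```

Note that a robots.txt allow only matters if nothing upstream, like a Cloudflare bot rule, is blocking the request before it reaches your server.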

Step two: add FAQ schema to your top revenue page. Of every schema markup type, FAQPage is the single highest-impact tactic for AI citations. Per analyses published by schema-research firms in 2026, content with proper FAQPage schema sees roughly a 2.5x higher chance of appearing in AI answers, and GPT-4's accuracy at interpreting page content jumps from 16% to 54% when structured data is present. Pull five to seven real questions from your inbox or sales calls. Write 134-to-167-word answers with specific numbers. Wrap them in FAQPage JSON-LD. Test in Google's Rich Results Test. The exact JSON-LD I use, where to inject it in a Next.js or static site, and the wording rules that make a question 'answerable' are in my FAQ schema for ChatGPT post.
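For reference, a minimal FAQPage JSON-LD block has the shape below, following the schema.org FAQPage type. The question and answer text are placeholders, not markup from my site:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much does a corporate headshot session cost in St. Louis?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder for a self-contained 134-to-167-word answer with specific numbers: price range, turnaround time, what is included."
      }
    }
  ]
}
```

The block goes inside a single script tag of type application/ld+json in the page head, with every question on the page collected into one mainEntity array.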

Step three: rewrite the first 167 words of your highest-traffic page. Per Growth Memo's February 2026 analysis of LLM citation patterns, 44% of all citations come from the first 30% of a page's text, and only 25% come from the final third. The lead paragraph is the most cited block on the entire page. Open your top page right now and read the first 167 words. Do they contain a complete, self-contained, factually verifiable answer to the main question your customer would ask? If they don't, the AI will skip past them and may not return. The before-and-after of my own corporate photography page lead, plus the entity-density rules that make a paragraph citable, are in my answer-first content for AI search post.

The whole engagement, productized

AI accelerates discoverability. The lens still does the work. The AI-Visual Branding Package combines the shoot day, the multi-modal page rebuilds, the AI search setup, and quarterly citation tracking into one engagement.

See the package

The four changes I am running on this site

Here is what I am actually doing on henrydavidphotography.com over the next 90 days, with the commits and pages where you can verify each one. This is a live experiment, not a retrospective. I am tracking the citations as they come in and updating this post as the data lands.

Change 1: explicit AI bot allowlist in robots.txt. My `src/app/robots.ts` (commit `e90ca5e` and earlier) explicitly allows GPTBot, ChatGPT-User, ClaudeBot, Claude-Web, PerplexityBot, Google-Extended, GoogleOther, Applebot-Extended, CCBot, cohere-ai, Amazonbot, Meta-ExternalAgent, plus the Ahrefs crawlers that feed the SEO research I rely on. Every retrieval bot worth tracking is allowlisted. You can read the current file at henrydavidphotography.com/robots.txt any time.

Change 2: FAQ schema across every service page. Every service page on the site, from corporate photography to professional headshots to industry-specific verticals like healthcare and legal, ships with FAQPage JSON-LD generated by a single shared component. The questions come from real client emails and sales calls. The answers are 134-to-167-word self-contained passages with specific numbers (price ranges, turnaround times, team sizes I have shot, locations I cover). You can verify any of them in Google's Rich Results Test by pasting the URL.
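The shared component is specific to my codebase, but the shape is easy to sketch. A minimal TypeScript version, assuming a hypothetical `faqPageJsonLd` helper; the function name and sample question are illustrative, not my production code:

```typescript
// Hypothetical sketch of a shared FAQPage JSON-LD generator.
// The JSON shape follows schema.org's FAQPage type.

type Faq = { question: string; answer: string };

function faqPageJsonLd(faqs: Faq[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  });
}

// In a Next.js page, this string would be rendered into a
// <script type="application/ld+json"> tag in the page head.
const jsonLd = faqPageJsonLd([
  {
    question: "How much does a corporate headshot session cost?",
    answer:
      "Placeholder for a self-contained 134-to-167-word answer with specific numbers: price range, turnaround time, locations covered.",
  },
]);
```

Because one function owns the JSON shape, every service page stays consistent and a schema fix ships everywhere at once.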

Change 3: a published llms.txt file. I publish henrydavidphotography.com/llms.txt and an extended llms-full.txt that summarizes the site's high-value content for retrieval. The honest read on llms.txt is mixed: per public AIO research, eight out of nine sites that adopted it saw no measurable change in AI citations within 60 days. I run it anyway because the cost is near zero and a few major AI vendors have publicly endorsed the format. My full take, with the actual file content and what I am tracking, is in my llms.txt explainer.
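For readers who have not seen the format: the llms.txt proposal is plain markdown at the site root, an H1 title, a blockquote summary, then sections of annotated links. A hypothetical sketch of the shape (the paths and descriptions here are illustrative, not my actual file):

```
# Henry David Photography

> Corporate photography for St. Louis businesses: team headshots,
> facility and leadership photography. (Illustrative summary.)

## Services

- [Corporate Photography](https://henrydavidphotography.com/services/example): scope, pricing ranges, turnaround

## Guides

- [AI Crawler robots.txt Setup](https://henrydavidphotography.com/blog/example): the bot allowlist this post references
```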

Change 4: rewriting the lead of every money page. I am working through every service page, headshot vertical, and high-traffic blog post and rewriting the first 150 to 200 words to lead with a complete, citable answer. Specific number, location language, and named entities (St. Louis, Chicago, Dallas, named industries, specific deliverables) in the first paragraph. The corporate photography page lead and the corporate-headshots page lead are already done. The rest is rolling out across May and June.

[Image: Merrill Lynch corporate team portrait, Kansas City, photographed by Henry David Photography. The kind of real-photograph deliverable AI search is now helping prospects find]

The data that made me believe this matters

If you only read one section of this post, read this one. The numbers below are the reason I shifted from chasing organic ranking position to chasing AI citations.

AI search traffic converts at roughly 5x the rate of Google organic. Per Conductor's 2026 benchmarks, AI referral sessions convert at about 14.2% versus 2.8% for traditional Google organic. Five times higher. The user has already been pre-qualified by the AI. They arrived with context, with intent, and with the AI's recommendation already shaping their expectations. Even small volumes of AI referral traffic outperform large volumes of generic Google traffic on revenue.

Brands cited inside AI Overviews see 35% more organic clicks and 91% more paid clicks. Per Seer Interactive's 2025 analysis, the citation itself is an endorsement. Users trust the brands the AI named. Click quality goes up across both organic and paid channels because the citation has primed the user's perception of the brand before they even saw your name in a search result.

The platforms barely overlap. Per Averi.ai/Ahrefs research, only 11% of domains are cited by both ChatGPT and Perplexity. Google AI Overviews and Google's own AI Mode share only 13.7% of citations. ChatGPT favors Wikipedia (47.9% of citations). Perplexity favors Reddit (46.7% of citations). Google's AI Overviews favor YouTube (about 23%). You don't get to optimize for one platform and call it done. You need a multi-platform footprint, which means brand mentions in the right places, not just on-page optimization.

Recommendations are wildly inconsistent. Per SparkToro research published in January 2026, there is less than a 1-in-100 chance that ChatGPT or Google AI gives you the same list of brands in any two responses to the same prompt. Every response is freshly generated. This means you cannot 'win' a single query the way you used to win a single keyword. You have to win the category, the topic, the cluster, so that no matter which freshly-generated response shows up, your name is in it.

What I am tracking, and where I will update this post

A case study without numbers is a brochure. Here is what I am measuring, the cadence, and what I will publish back here as the data accumulates.

Monthly, I am running the same set of prompts across ChatGPT, Perplexity, Claude, and Google AI Mode: "corporate headshot photographer in St. Louis," "company-wide headshots multi-office," "photographer for FDA medical device documentation," "professional photographer for financial services team page," and a half-dozen others tied to my service lines. I log which businesses get cited. Mine, my competitors, and the unrelated brands the AI sometimes pulls in. The Substack citation that triggered this post is one data point. The next 90 days will produce more.

I am also tagging AI referral traffic in GA4 with a custom dimension that flags any session whose referrer hostname matches a known AI search platform: chatgpt.com, perplexity.ai, claude.ai, gemini.google.com, copilot.microsoft.com, and a few others. The traffic volume from those referrers is the leading indicator. The conversion rate on that traffic is the lagging indicator that proves whether the Conductor 5x number holds for a service business in St. Louis.
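The hostname match behind that custom dimension is simple. A sketch of the classifier, assuming a hypothetical `isAIReferral` helper; the hostname list is the one above, and the function name is mine, not a GA4 API:

```typescript
// Known AI search platforms, matched against the session referrer.
const AI_HOSTNAMES = [
  "chatgpt.com",
  "perplexity.ai",
  "claude.ai",
  "gemini.google.com",
  "copilot.microsoft.com",
];

// True when the referrer's hostname is (or is a subdomain of)
// a known AI platform; false for empty or malformed referrers.
function isAIReferral(referrer: string): boolean {
  try {
    const host = new URL(referrer).hostname.replace(/^www\./, "");
    return AI_HOSTNAMES.some((h) => host === h || host.endsWith("." + h));
  } catch {
    return false; // direct traffic or an unparseable referrer string
  }
}
```

In GA4 this would feed a custom dimension via a tag that runs the check on `document.referrer` at session start; the list needs maintenance as new AI platforms launch.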

I will update this post on the first of each month with the citations I have logged, the AI referral traffic numbers from GA4, and any prompts where I went from absent to cited. Not vague "things are improving" updates. Actual rows in a table.

The rest of the cluster

This post is the high-altitude playbook. Each step has its own deep dive with the actual code, schema templates, and configuration I am using.

  • AI Crawler robots.txt: How I Stopped Being Invisible to ChatGPT. The specific user agents, the Cloudflare gotcha that breaks one million sites by default, and the exact `robots.ts` file I ship in production.
  • FAQ Schema for ChatGPT: The One Schema Change That Got My Service Pages Cited. The JSON-LD template, the question-writing rules, the answer-length sweet spot, and the Rich Results Test verification.
  • llms.txt for Photographers: I Added One, Here's What Actually Happened. The honest results, the file content, and whether you should bother.
  • I Rewrote the First 150 Words of My Top Service Page. AI Started Citing It. The before-and-after, the entity-density rule, and the structure that turns a passive opening into a citable answer.
  • Why Photographers Already Have the AI-Search Edge. The multi-modal signal, the 156% citation lift, and why service businesses that ship video alongside text are years ahead of the ones that don't.
  • The 90-Day AI Visibility Sprint. The day-by-day calendar I am running on this site and the order of operations I would use on a client site.
Need professional photography that AI search can actually cite?

Multi-modal pages (text plus video plus structured images) get cited at over 2x the rate of text-only pages. Let's build that asset for your team.

Get a Quote

Why a service business should care about any of this

A prospect searching "corporate headshot photographer near me" in 2026 is not seeing a list of ten options. They are seeing one synthesized recommendation. Maybe two. The AI made the shortlist for them, and they trust it. If your business is not on that shortlist, you don't get the call. If you are on it, the call has higher intent than any call you've ever taken from a Google organic click.

The technical setup is two hours. The content work is ongoing, the same way SEO has always been ongoing. The compounding return is that a citation is not a click; it's an endorsement. The user shows up already convinced. That is the difference between this channel and the SEO channel that came before it.

I am running this experiment in public so you can verify every claim I just made. The robots.txt is at a public URL. The FAQ schema is in the page source of every service page. The Substack citation is from a newsletter called AI Central, dated February 13, 2026. The llms.txt is one click away. None of it is hidden, because the whole premise of being citable is that AI can verify your claims against the public web. If I can't show my work, I shouldn't expect ChatGPT to either.

[Image: Northwestern Mutual team composite portrait demonstrating consistent multi-person headshot work that doubles as multi-modal page evidence for AI retrieval]

What to do today

If you made it all the way here, three actions take less than 30 minutes total and will materially change whether your business shows up in an AI answer next week.

One: open your robots.txt at `yourdomain.com/robots.txt` and confirm GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, OAI-SearchBot, and Google-Extended are explicitly allowed. If you use Cloudflare, log into the dashboard and check Security > Bots > AI Crawl Control. If anything is blocked, allow it.

Two: pick your top revenue page. Read the first 167 words. If they don't contain a complete, factually verifiable answer to the question your customer would ask, rewrite them. Lead with a number, a location, or a named outcome. No throat-clearing.

Three: pull five real questions from your inbox or last month's sales calls. Write 134-to-167-word answers. Wrap them in FAQPage schema and inject the JSON-LD into the head of your top page. Test in Google's Rich Results Test.

That's the 30-minute version. The rest of the cluster goes deeper on each piece. If you want me to run this on your site rather than running it yourself, get in touch and we'll talk it through.

Topics

how to get cited by ChatGPT, get cited by AI, AI search visibility, generative engine optimization, GEO for service businesses, AI Overviews citation, how to rank in ChatGPT, AI search optimization, ChatGPT SEO, Perplexity SEO

Want this kind of AI search visibility for your service business?

We're happy to discuss anything covered in this article, or your specific photography and video needs.

Get a Quote