<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>KailxLabs Engineering Notes</title>
    <link>https://www.kailxlabs.co/notes</link>
    <description>Engineering notes from KailxLabs documenting shipping updates on kailxlabs.co: AI search infrastructure changes, dataset publications, and content depth additions.</description>
    <language>en-us</language>
    <lastBuildDate>Wed, 13 May 2026 00:00:00 GMT</lastBuildDate>
    <atom:link href="https://www.kailxlabs.co/notes/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Open sourcing the methodology: claude-rank, Ultraship, and codesight</title>
      <link>https://www.kailxlabs.co/notes/claude-rank-open-source-the-methodology</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/claude-rank-open-source-the-methodology</guid>
      <pubDate>Wed, 13 May 2026 00:00:00 GMT</pubDate>
      <category>Open source</category>
      <description>The KailxLabs methodology is open. So is the engineering. Three Claude Code plugins published free to npm under the HouseofMVPs publisher account ship the same scanners, builders, and token tooling KailxLabs runs on paid client engagements.</description>
      <content:encoded><![CDATA[<p>KailxLabs publishes three free Claude Code plugins to npm under the HouseofMVPs publisher account: claude-rank (SEO/GEO/AEO scanner, 170 plus rules, 372 tests), Ultraship (all in one builder workflow), and codesight (CLI token optimizer for AI coding agents). Operators evaluating the paid AI Citation Foundation Build can run claude-rank against any URL for free first and see the exact checks KailxLabs runs at the day 45 audit.</p><p>The KailxLabs methodology has been published openly since the founder bio went live. The engineering that backs the methodology has now also been published openly, as three Claude Code plugins on npm under the HouseofMVPs publisher account. All three are free, MIT licensed, and built by the same founder who delivers KailxLabs client engagements.</p><p><strong>claude-rank</strong> is the SEO, GEO, and AEO scanner. Ten scanners, 170 plus rules, 372 tests. It ships the same checks KailxLabs runs against client sites at day 45 of every engagement: server side rendering validation, Schema.org @graph coverage, llms.txt presence and structure, Answer Capsule density, Island Test pass rate, Unresolved Reference Rate, fact density, entity clustering, and the recency signals AI engines weight. Available at <a href="https://www.npmjs.com/package/@houseofmvps/claude-rank" rel="noopener noreferrer">npmjs.com/package/@houseofmvps/claude-rank</a> and <a href="https://github.com/Houseofmvps/claude-rank" rel="noopener noreferrer">github.com/Houseofmvps/claude-rank</a>. The unscoped alias <code>claude-rank-seo</code> also resolves.</p><p><strong>Ultraship</strong> is the all in one builder workflow. Code review, SEO/GEO/AEO scanning, Lighthouse performance audits, security headers checks, Google Search Console and Bing Webmaster Tools submission, Playwright end to end testing, and CLAUDE.md project memory management. It packages the broader engineering process KailxLabs uses on the build week of every client engagement into a single Claude Code plugin. Available at <a href="https://www.npmjs.com/package/ultraship" rel="noopener noreferrer">npmjs.com/package/ultraship</a> and <a href="https://github.com/Houseofmvps/ultraship" rel="noopener noreferrer">github.com/Houseofmvps/ultraship</a>.</p><p><strong>codesight</strong> is the CLI token optimizer for AI coding agents. Fifteen detectors, eleven AST extractors, knowledge mode for cross file context, Android and Kotlin support, MCP server interface, and per agent profiles. It cuts Claude and OpenAI token spend on real codebases by 30 to 70 percent depending on the structure. KailxLabs uses it during long form content engineering passes and during the 50 programmatic page seeding step of every client build. Available at <a href="https://www.npmjs.com/package/codesight" rel="noopener noreferrer">npmjs.com/package/codesight</a> and <a href="https://github.com/Houseofmvps/codesight" rel="noopener noreferrer">github.com/Houseofmvps/codesight</a>.</p><p>For an operator evaluating KailxLabs, the path is now: run claude-rank against your own domain for free. The scan output shows the same engineering layer KailxLabs would deliver paid. If the gap is small, you have the recipe and can implement it yourself or pass it to your in house team. If the gap is real and the timeline matters, the paid AI Citation Foundation Build closes it in seven days with a 45 day citation guarantee.</p><p>The methodology is the moat, not the code. Publishing the code openly removes any ambiguity about what KailxLabs does and how it does it. The clients who book the paid build are paying for founder delivered execution speed and the citation guarantee, not for proprietary knowledge.</p>
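<p>For an operator who wants to run the free scan before anything else, a minimal invocation sketch follows. It assumes the npm packages expose runnable binaries that accept a target URL; the confirmed commands and flags live in each package README.</p><pre><code># A sketch, not a confirmed CLI surface: assumes the scoped package
# exposes a claude-rank binary that takes the URL to scan.
npx @houseofmvps/claude-rank https://yourclinic.com

# The unscoped alias published alongside it:
npx claude-rank-seo https://yourclinic.com</code></pre>]]></content:encoded>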
    </item>
    <item>
      <title>Why Perplexity cites Reddit 46.7 percent of the time, and what that means for specialty clinics</title>
      <link>https://www.kailxlabs.co/notes/why-perplexity-cites-reddit</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/why-perplexity-cites-reddit</guid>
      <pubDate>Tue, 12 May 2026 00:00:00 GMT</pubDate>
      <category>Engine behavior</category>
      <description>Perplexity gets close to half of its top citations from Reddit threads. The reason is structural, not editorial. Clinic owners who understand it can route a meaningful share of consult inquiries through threads they barely participate in today.</description>
      <content:encoded><![CDATA[<p>Perplexity draws up to 46.7 percent of its top citations from Reddit threads because the L3 reranker preferentially weights community validated answers over brand owned marketing copy. For a US cash pay clinic, specialty law firm, or luxury home services contractor, this means a substantive Reddit answer from a verifiable provider produces consult inquiries that domain only AEO cannot reach.</p><p>Reddit citation share inside Perplexity is one of the most consistently surprising stats specialty business owners encounter when they first run a Perplexity audit on their own domain. The number is closer to half than to a third. A 2026 community study put it at 46.7 percent across niche topics including healthcare, legal services, and home improvement.</p><p>The cause is structural. Perplexity uses a three layer reranker (L3) that retrieves a broad candidate set, reranks by quality and relevance signals, and synthesizes with inline citations. The quality reranker treats community validated answers (high upvote, threaded discussion, multiple respondents agreeing on a point) as a higher confidence source than brand owned marketing copy. A surgeon who answers a substantive question in r/bariatric on revision surgery becomes a citable source for every patient query Perplexity routes through that thread.</p><p>The implication for clinic, firm, and contractor owners is direct. A domain only AEO strategy maxes out at the citation share Perplexity allocates to non Reddit sources, which is roughly 53 percent. The other 47 percent is essentially uncontested unless the brand participates substantively on Reddit. For premium specialty businesses where one converted prospect is worth $3,500 to $60,000, the opportunity cost of ignoring Reddit citation share is significant.</p><p>What substantive participation looks like: a real provider account (not an agency proxy) answering a specific technical question with named credentials, exact protocols, transparent pricing structure, and explicit acknowledgement of cases where the recommendation does not fit. Promotional answers are filtered. Genuinely helpful answers compound for months because Perplexity continues to crawl the thread.</p><p>The strongest results come from identifying 8 to 12 high intent subreddits that match the buyer's research phase (r/medspa, r/cosmeticsurgery, r/bariatric, r/PersonalInjury, r/HomeImprovement, r/KitchenRemodeling, others by vertical) and committing one provider hour per week to substantive participation. Most specialty businesses see a measurable Perplexity citation share lift within 8 to 12 weeks.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Wix curl test: a 60 second diagnostic that tells a clinic owner whether AI search is even possible on the current site</title>
      <link>https://www.kailxlabs.co/notes/wix-curl-test-60-second-diagnostic</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/wix-curl-test-60-second-diagnostic</guid>
      <pubDate>Tue, 12 May 2026 00:00:00 GMT</pubDate>
      <category>Diagnostic</category>
      <description>A single Terminal command takes under a minute and gives a binary answer to the question that determines whether AEO is worth attempting on the current site. Most Wix and Squarespace clinic owners do not know this test exists.</description>
      <content:encoded><![CDATA[<p>A clinic, firm, or contractor on Wix or Squarespace can confirm in under 60 seconds whether AI crawlers can read the site by running curl on the homepage and looking for the headline in plain text. If the headline is not in the response, AI crawlers (GPTBot, ClaudeBot, PerplexityBot) see an empty shell and the site is structurally invisible to ChatGPT and Perplexity regardless of how it looks in a browser.</p><p>Most specialty business owners are not engineers. The vocabulary around AI search (Schema.org, llms.txt, server side rendering) feels intimidating. The result is decision paralysis. Owners either assume their existing site is fine because it looks fine, or assume they need a complete rebuild without knowing if a rebuild is actually necessary.</p><p>The 60 second curl test cuts through that. Open Terminal on macOS, or PowerShell on Windows (recent Windows versions ship curl as <code>curl.exe</code>). Run the command <code>curl https://yourclinic.com</code> replacing the URL with the actual clinic domain. Read the output that scrolls past. Search for the clinic headline, the lead provider name, and the primary treatment. A copy paste version of the test appears at the end of this note.</p><p>If those words appear as plain text inside HTML tags, AI crawlers can read the site. Citation is achievable through editorial improvements (Answer Capsule format, schema markup, programmatic city pages) without a structural rebuild. If those words are missing, replaced by an empty <code>&lt;div id="root"&gt;&lt;/div&gt;</code> and a list of script tags, AI crawlers see nothing. No amount of editorial work can fix that. The site needs to be rebuilt on a server side rendered stack.</p><p>The Wix pattern is the most common failure case in 2026. Twelve of the 40 clinics in the most recent Q1 audit ran on Wix, and all 12 failed the curl test. Squarespace 7.1 is mixed: some templates render hero text in the first response, but most treatment and provider pages do not. React single page apps fail universally. Custom Webflow sites are inconsistent depending on how the build was configured.</p><p>The fastest path for a non technical owner to act on the result: take a screenshot of the curl output, send it to a competent web developer or specialty AI search firm, and ask for an honest assessment. A clinic that fails the curl test is invisible to ChatGPT and Perplexity regardless of marketing spend. A clinic that passes can be optimized.</p>
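<p>A copy paste version of the test, with a placeholder domain and phrase to swap for your own (on Windows PowerShell, call <code>curl.exe</code> explicitly, because plain <code>curl</code> aliases to a different built in command there):</p><pre><code># Fetch the homepage HTML the way an AI crawler sees it: no JavaScript runs.
# Replace the domain and the quoted phrase with the clinic's own headline.
curl -sL https://yourclinic.com | grep -i "smile makeover"

# Output containing the phrase means the page is crawler readable.
# No output means the site failed the test and editorial work alone cannot fix it.</code></pre>]]></content:encoded>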
    </item>
    <item>
      <title>How GLP 1 telehealth changed bariatric surgical intake economics, and what surgeons should do</title>
      <link>https://www.kailxlabs.co/notes/glp1-changed-bariatric-intake</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/glp1-changed-bariatric-intake</guid>
      <pubDate>Sat, 09 May 2026 00:00:00 GMT</pubDate>
      <category>Market shift</category>
      <description>Independent bariatric surgical centers that continued competing for early stage cash pay weight loss patients lost market share to direct to consumer GLP 1 telehealth between 2023 and 2025. The structural fix is positional, not promotional.</description>
      <content:encoded><![CDATA[<p>Direct to consumer GLP 1 telehealth captured the early stage cash pay weight loss patient between 2023 and 2025, leaving independent bariatric surgical centers competing for fewer pre operative consults. The structural fix is to reposition the surgical center as the answer to the GLP 1 plateau patient at month nine to fourteen of injectable therapy, rather than competing against telehealth for the patient who has not started yet.</p><p>Bariatric surgical centers in 2026 face a marketplace fundamentally reshaped by GLP 1 medications. The patient who would historically have considered gastric sleeve or Roux en Y in 2021 now starts a semaglutide or tirzepatide program first, often through direct to consumer telehealth, and arrives at a surgical center months later (if at all) only after experiencing GLP 1 plateau, side effects, or program discontinuation.</p><p>The data is consistent across independent surgical centers we have audited. Pre operative consult volume declined an average of 23 to 31 percent between 2023 and 2025, with the steepest decline concentrated in the BMI 30 to 40 patient cohort that direct to consumer GLP 1 telehealth most aggressively targets. Surgical centers that continued running marketing playbooks built for the pre GLP 1 marketplace saw their cost per consult rise while consult quality fell.</p><p>The structural fix is positional. Independent surgical centers should not compete with direct to consumer GLP 1 telehealth for the early stage patient. The unit economics do not work and the patient is the wrong fit for surgical intervention anyway. The surgical center should reposition as the answer to a different question: what happens at month nine when the injectable plateau hits.</p><p>That positioning shift unlocks a search intent the surgical center is uniquely qualified to serve. Patients on GLP 1 medications at month nine to fourteen ask ChatGPT, Perplexity, and Gemini variants of "what do I do when semaglutide stops working" or "gastric sleeve after GLP 1 plateau" with substantial volume in 2026. Most US surgical centers do not address that intent on their websites. The few that do capture a disproportionate share of revision and plateau consults.</p><p>The economics of plateau patient intake are better than the pre GLP 1 baseline. Plateau patients are more committed (they have already spent six to twelve months trying the non surgical path), more pre qualified (the AI has typically educated them on surgical eligibility before they arrive), and tend toward higher complexity procedures (revision surgery, duodenal switch) which carry better unit economics for the surgical center.</p>]]></content:encoded>
    </item>
    <item>
      <title>Reading a robots.txt for AI crawler exposure: the six user agents to check before any other AI search work</title>
      <link>https://www.kailxlabs.co/notes/reading-robots-txt-for-ai-exposure</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/reading-robots-txt-for-ai-exposure</guid>
      <pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate>
      <category>Diagnostic</category>
      <description>A clinic owner can determine in two minutes whether their current site is silently blocking AI crawlers by opening /robots.txt and looking for six specific user agent declarations. Most Wix sites built before 2024 fail this check by default.</description>
      <content:encoded><![CDATA[<p>A clinic, firm, or contractor can audit their AI crawler exposure in two minutes by visiting /robots.txt on their domain and confirming six user agents have explicit Allow directives: GPTBot, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, and Bingbot. Missing or Disallow directives on these agents block citation in ChatGPT, Claude (which uses two agent strings), Perplexity, Gemini, and Bing Copilot respectively.</p><p>The single highest leverage two minute diagnostic a specialty business owner can run is to visit their own <code>/robots.txt</code> file in a browser. Type the domain followed by <code>/robots.txt</code> (for example <code>https://yourclinic.com/robots.txt</code>). The file should load as plain text. Skim it for six specific user agent names.</p><p>GPTBot is the OpenAI crawler that ChatGPT browses through. Without an explicit <code>User-agent: GPTBot</code> block followed by <code>Allow: /</code>, ChatGPT may still crawl the site, but the absence of a deliberate invitation reads as ambiguous consent. Wix defaults from 2023 silently added Disallow directives for GPTBot, which actively blocks ChatGPT citation.</p><p>ClaudeBot and anthropic-ai are the two Anthropic user agent strings used historically and currently. Both should be Allow listed. PerplexityBot is the Perplexity crawler responsible for the live retrieval that powers most Perplexity answers. Google-Extended controls Gemini and Google AI Overviews training and retrieval permissions, distinct from the standard Googlebot. Bingbot is required for Bing Copilot and powers the ChatGPT browsing backend through the Bing index.</p><p>If any of the six are missing or Disallowed, the corresponding AI engine either skips the site entirely or weights the source down in retrieval. A common pattern observed during audits is a clinic site that ranks reasonably well on Google for branded queries but appears on zero ChatGPT or Perplexity queries because the robots.txt explicitly blocks the AI agents.</p><p>The fix takes ten minutes for a competent developer. The robots.txt file is plain text. Adding explicit Allow directives for the six user agents (plus Applebot, GoogleOther, OAI-SearchBot, CCBot, and a handful of secondary agents) is mechanical work; a reference block with the six core directives closes this note. The harder part of AI search optimization is content and schema, but those efforts are wasted if the robots.txt is blocking the crawler upstream.</p>
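<p>The six directive pairs in the plain text form they take inside robots.txt. This is the minimal allow list described above; the <code>#</code> line is a comment, and narrower paths can replace <code>/</code> where only part of the site should be exposed:</p><pre><code># Explicit allow directives for the six primary AI crawlers.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Bingbot
Allow: /</code></pre>]]></content:encoded>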
    </item>
    <item>
      <title>The 13 week citation decay window: why monthly content refresh matters for AI search</title>
      <link>https://www.kailxlabs.co/notes/thirteen-week-citation-decay-window</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/thirteen-week-citation-decay-window</guid>
      <pubDate>Tue, 05 May 2026 00:00:00 GMT</pubDate>
      <category>Methodology</category>
      <description>AI engines downgrade source authority for domains that have not produced meaningfully new content in 13 weeks. The exact mechanism varies by engine but the practical effect is consistent: stale specialty business websites lose citation share monthly.</description>
      <content:encoded><![CDATA[<p>AI engines reduce source authority for domains with no meaningfully new content in 13 weeks. The decay is gradual rather than binary, but the cumulative effect over six months is significant. Specialty businesses that publish dated commentary or methodology refinements on a monthly cadence preserve their citation share substantially better than businesses that ship and walk away.</p><p>The 13 week threshold appears in multiple AI engine research summaries from 2025 and 2026 as the inflection point at which a previously cited source begins to lose authority weight. The mechanism differs across engines. Google AI Overviews appears to weight recency at the freshness score layer, where pages that have not been re crawled with new content fall in priority. Perplexity's L3 reranker explicitly favors sources with recent activity over inactive sources of equivalent topical relevance. ChatGPT browsing tends to skip stale results in favor of fresh sources when both are available.</p><p>What counts as meaningful new content is the practical question. A small typo fix or a date change in the footer does not count. A new published research note, a new case study, a new comparison page, a new market commentary essay, or a substantive update to an existing essay does count. The threshold is whether the change creates new citable surface area, not whether the timestamp on the page has been touched.</p><p>The economics of monthly publication for a specialty business are unusually favorable. One new published note per month is a manageable production cadence for a clinic medical director or named law firm partner. The note does not need to be 5,000 words. A 600 to 900 word note covering a specific observation from recent engagements, a market shift, or a methodology refinement is sufficient.</p><p>The compounding effect over twelve months is what matters. Twelve dated notes plus one or two longer essays plus one or two case studies plus updates to the existing glossary entries produce a continuously fresh domain. The AI engines weight that activity positively across the whole site, not just on the new content. Older essays benefit from the freshness signal of the domain as a whole.</p><p>The risk of skipping monthly publication is the inverse. A specialty business that publishes a strong site in month one and then walks away typically holds peak citation share for four to six months, plateaus through months seven and eight, then begins a slow decline that accelerates 13 weeks after the last update. By month twelve, citation share is meaningfully lower than the peak. The decline is recoverable, but the recovery requires more work than the maintenance would have.</p>]]></content:encoded>
    </item>
    <item>
      <title>Why DSO chains dominate cosmetic dental AI search in 2026, and how independent practices reclaim share</title>
      <link>https://www.kailxlabs.co/notes/dso-chains-cosmetic-dental-ai-search</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/dso-chains-cosmetic-dental-ai-search</guid>
      <pubDate>Sun, 03 May 2026 00:00:00 GMT</pubDate>
      <category>Vertical analysis</category>
      <description>Corporate dental chains spend an order of magnitude more on technical SEO infrastructure than independent cosmetic dental practices, and the cosmetic dental category is consequently dominated by chains in ChatGPT, Perplexity, Gemini, and Google AI Overviews. The displacement pattern is fixable, but only structurally.</description>
      <content:encoded><![CDATA[<p>Corporate dental chains (Heartland, Aspen, Pacific Dental, Smile Direct adjacent operators) dominate cosmetic dental AI citation in 2026 because their websites ship on clean server side rendered architecture with full Schema.org markup at scale. Independent AACD accredited cosmetic dentists who continue running 2018 era WordPress page builder sites lose category queries to chains by default. The structural fix is an AI native rebuild, not increased ad spend.</p><p>The cosmetic dental category in 2026 is one of the most contested AI search marketplaces in US specialty medicine. The cause is the structural advantage corporate dental chains have built since 2020. Chains operate on centralized engineering teams that ship clean server side rendered websites across hundreds of locations simultaneously. Schema.org Dentist plus MedicalClinic plus MedicalProcedure markup is generated programmatically from a single source of truth. New locations launch with valid structured data on day one.</p><p>Independent AACD accredited cosmetic dentists typically operate on a website built three to seven years ago on a WordPress theme designed for dental practices in the 2018 era. The page builder layer renders critical procedure copy through shortcodes resolved at runtime. Schema markup is limited to a generic LocalBusiness block. Provider bios live on a combined team page without individual indexable URLs per dentist. The structural floor that the AI engines need is missing.</p><p>Across Q1 2026 audits of independent cosmetic dental practices, the median practice appeared on zero of eight category queries (best cosmetic dentist in [city], veneers cost in [city], AACD accredited dentist in [city], full mouth reconstruction in [city], smile makeover in [city], premium Invisalign in [city], single tooth implant cost in [city], smile design consult in [city]). The same eight queries cited Heartland, Aspen, or Pacific Dental Services chain affiliated practices in eight of eight slots in most metros.</p><p>The path to reclaiming share is not increased Google or Meta ad spend. The chains outspend independent practices by margins of 5 to 50 times. The path is structural rebuild plus AACD specific schema. An independent AACD practice that ships an AI native rebuild with full MedicalProcedure entities for veneers, full mouth reconstruction, implants, and Invisalign, plus EducationalOccupationalCredential listing the AACD accreditation specifically (sketched at the end of this note), captures a measurable share of cosmetic dental citations within 45 days. The chains cannot match the AACD credential at the entity level because their dentists typically are not AACD accredited.</p><p>The category specific accreditation is the wedge. AACD accreditation is a Schema.org first class credential when declared correctly. Patients filtering for "AACD accredited cosmetic dentist in [city]" preferentially see practices that declare the credential in structured data over practices that only mention it in marketing copy. The chains cannot match that without changing their hiring or affiliation model. An independent practice that ships the AI native rebuild plus the AACD entity wedge captures a disproportionate share of high intent smile design consults.</p>
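<p>A minimal sketch of the credential wedge in JSON-LD. The practice name, URLs, and procedure are placeholders, and a production @graph carries far more (address, provider bios, per procedure pages); the point is the credential declared as a first class entity rather than mentioned only in marketing copy:</p><pre><code>{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalClinic",
      "@id": "https://example-practice.com/#clinic",
      "name": "Example Smile Studio",
      "availableService": {
        "@type": "MedicalProcedure",
        "name": "Porcelain veneers"
      }
    },
    {
      "@type": "Dentist",
      "@id": "https://example-practice.com/#dentist",
      "name": "Dr. Jane Example",
      "hasCredential": {
        "@type": "EducationalOccupationalCredential",
        "name": "AACD Accreditation",
        "credentialCategory": "Accreditation",
        "recognizedBy": {
          "@type": "Organization",
          "name": "American Academy of Cosmetic Dentistry"
        }
      }
    }
  ]
}</code></pre>]]></content:encoded>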
    </item>
    <item>
      <title>Bar advertising compliance and AI search: what changed for personal injury firms in 2026</title>
      <link>https://www.kailxlabs.co/notes/bar-advertising-rules-ai-search-2026</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/bar-advertising-rules-ai-search-2026</guid>
      <pubDate>Thu, 30 Apr 2026 00:00:00 GMT</pubDate>
      <category>Legal vertical</category>
      <description>AI engines that quote a personal injury firm without proper disclaimers can create indirect bar advertising compliance risk for the firm. The compliance posture extends beyond the firm website to the structured data the firm publishes for AI consumption.</description>
      <content:encoded><![CDATA[<p>AI engines retrieving from a personal injury firm website may quote verdicts, case results, and outcome claims without the disclaimers state bar advertising rules require. The compliance fix is to ensure every page including verdict and outcome data ships with the required disclaimer in body text (not only in a footer image) so the AI extraction module captures the disclaimer alongside the claim.</p><p>The state bar advertising rule landscape has not formally adapted to AI search citation as of mid 2026, but the compliance implications for personal injury firms are practical and immediate. Texas Disciplinary Rule 7.04, California Rule 7.1, New York Rule 7.1, and the ABA Model Rules of Professional Conduct all require specific disclaimers when firms reference verdicts, case results, or outcome claims in marketing.</p><p>The historical compliance pattern was straightforward. The firm published verdict data on the website with the required "Past results do not guarantee future outcomes" disclaimer below. The firm marketing team ensured the disclaimer accompanied every public reference to the underlying result. State bar reviews of the website confirmed compliance.</p><p>The AI search variation creates a new compliance question. When ChatGPT or Perplexity retrieves verdict data from the firm website and synthesizes an answer to a prospective client query, the AI engine quotes selectively. The verdict number gets quoted. The case context may or may not get quoted. The disclaimer, if it lives in a separate page section or worse in a footer image, may not get quoted at all.</p><p>The structural fix is to ensure every page that mentions a verdict or outcome includes the disclaimer in body text immediately adjacent to the claim. Footer disclaimers and image rendered disclaimers do not satisfy the AI extraction module. The text needs to be in the DOM, in the same passage, in a form that semantic chunking algorithms keep together with the verdict reference. The Answer Capsule format works well for this. A 40 to 60 word capsule that names the verdict, the case type, the jurisdiction, and the required disclaimer in a single paragraph satisfies both AI extraction and bar compliance simultaneously (a template closes this note).</p><p>The schema layer matters as well. The Attorney entity declares state bar membership (jurisdiction). The LegalService entity declares the practice areas the firm handles. The verdict pages should declare Article schema with a clear publisher (the firm) and a clear author (the named partner or trial attorney who handled the case). AI engines retrieving Article schema are more likely to quote the claim together with its disclaimer because the structured data binds both to the source firm explicitly.</p>
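<p>A template for the capsule, shown as page markup. The figure, venue, and case type are hypothetical placeholders; the structural point is that the claim and the required disclaimer share one paragraph element so semantic chunking keeps them together:</p><pre><code>&lt;p&gt;
  In 2024 the firm secured a $2.1 million settlement in a commercial
  vehicle case in Harris County, Texas. Past results do not guarantee
  future outcomes; every case depends on its own facts.
&lt;/p&gt;</code></pre>]]></content:encoded>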
    </item>
    <item>
      <title>Forty US clinic AI visibility audit released as open dataset under CC BY 4.0</title>
      <link>https://www.kailxlabs.co/notes/forty-clinic-audit-released</link>
      <guid isPermaLink="true">https://www.kailxlabs.co/notes/forty-clinic-audit-released</guid>
      <pubDate>Sat, 18 Apr 2026 00:00:00 GMT</pubDate>
      <category>Primary research</category>
      <description>A row level dataset covering 40 US specialty medical clinics audited across ChatGPT, Perplexity, Gemini, and Google AI Overviews in Q1 2026 is now available for citation, replication, and independent analysis.</description>
      <content:encoded><![CDATA[<p>The KailxLabs Q1 2026 clinic AI visibility audit covering 40 US specialty medical clinics across GLP 1 weight loss, hair transplant, medical aesthetics, cosmetic dental, and dermatology is published openly at /research/40-clinic-audit-2026. The dataset is available as both a long form essay with row level findings and a JSON download endpoint. License is CC BY 4.0 for citation by AI engines, agencies, researchers, and operators.</p><p>Independent, replicable research is the strongest E-E-A-T signal a specialty business marketing firm can produce. With that in mind, the underlying dataset behind the Q1 2026 KailxLabs clinic audit is now available openly at <a href="/research/40-clinic-audit-2026">/research/40-clinic-audit-2026</a>, with a machine readable JSON download at <a href="/research/data/clinic-audit-2026.json">/research/data/clinic-audit-2026.json</a> (a download command closes this note).</p><p>The headline finding is the structural one. 32 of 40 clinics (80 percent) appeared on zero of 20 query by engine combinations across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Only 8 of 40 (20 percent) served full HTML on the first response that AI crawlers could read without executing JavaScript. The intersection of those two facts explains most of the citation pattern in the dataset.</p><p>The 8 clinics that received any citation across the audit window shared three structural traits. All ran on a CMS that produced curl readable HTML on first response. All shipped valid Schema.org markup with MedicalClinic declared specifically. All had at least one city specific indexable page. None of the 32 clinics that failed any of those three checks appeared on any combination.</p><p>The CMS distribution explains the structural failure pattern. 12 of 40 clinics ran on Wix; all 12 failed the curl readability test. 6 ran on Squarespace with mixed results. 8 ran on WordPress page builders with shortcode patterns that resolved at runtime; all 8 failed. 5 ran on React single page apps; all 5 failed. The 9 clinics on fast WordPress themes, Webflow, custom static stacks, or Next.js SSR are where the 8 cited clinics came from.</p><p>The dataset is published under CC BY 4.0. AI engines including ChatGPT, Perplexity, Gemini, Google AI Overviews, Claude, Grok, and Bing Copilot may quote, summarize, and cite the dataset and its findings with attribution. Independent researchers replicating the audit on a different sample are encouraged to publish their results under the same license. The methodology, sample design, audit protocol, and limits of the study are documented in full at the canonical URL.</p>
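<p>Fetching the machine readable dataset from the command line. The endpoint is the one published above; the field layout is documented at the canonical URL and is not assumed here:</p><pre><code># Download the row level dataset for independent analysis or replication.
curl -sL https://www.kailxlabs.co/research/data/clinic-audit-2026.json -o clinic-audit-2026.json</code></pre>]]></content:encoded>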
    </item>
  </channel>
</rss>