Short answer. KailxLabs publishes the methodology openly, the engineering openly, and the claim sourcing openly. Every claim on the site falls into one of four categories: Verified (auditable against a live external source), Documented (auditable against a KailxLabs-controlled source like the methodology page or a published dataset), Modeled (representative scenario or industry estimate), or Cited (attributed to named third-party research). The tables below map every major claim to its category and source.
The seven-day delivery claim
| Claim | The AI Citation Foundation Build ships in seven days from contract signature to live launch. |
| Category | Documented (engagement scope) |
| Evidence | The seven-day scope is the explicit deliverable in the engagement contract. The work is auditable on a private staging URL during the build. The methodology page documents the seven phases (audit, discovery, architecture, content, submission, compounding, proof) and the explicit day ranges for each. |
| What is not claimed | That every engagement closes in exactly seven days. The seven-day window covers the build itself. Discovery and audit happen before the seven days begin. Compounding and citation tracking happen after. The seven-day claim is the build phase only. |
The 45-day citation guarantee
| Claim | Cited in at least 2 of the 4 major AI engines (ChatGPT, Perplexity, Gemini, Google AI) on at least one query from the agreed query set within 45 days of launch, or every dollar is refunded and the client keeps the site. |
| Category | Documented (engagement scope) |
| Evidence | The guarantee is the explicit refund trigger in the engagement contract. The query set is locked in writing before kickoff. Daily automated scraping captures the cited or not-cited determination for each query across all four engines. Raw response logs are delivered at day 45 in JSON. The methodology page documents the measurement protocol and the binary threshold. |
| What is not claimed | That every engagement hits the citation threshold. The refund mechanism exists precisely because the outcome is not certain. The historical hit rate is documented privately with each client during proposal. Public claims do not assert a guaranteed citation count or a specific citation percentage. |
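The binary cited-or-not-cited threshold described above can be sketched in a few lines. This is a minimal illustration only: the actual JSON log schema delivered at day 45 is not published, so the record shape and field names below are assumptions, not KailxLabs' real format.

```python
import json

# Hypothetical shape of one day's scrape records; field names are
# illustrative only, not the published log schema.
SAMPLE_LOG = json.loads("""
[
  {"query": "q1", "engine": "ChatGPT",    "cited": true},
  {"query": "q1", "engine": "Perplexity", "cited": true},
  {"query": "q1", "engine": "Gemini",     "cited": false},
  {"query": "q1", "engine": "Google AI",  "cited": false}
]
""")

def guarantee_met(records, engines_required=2, queries_required=1):
    """Binary threshold: at least `queries_required` queries cited
    in at least `engines_required` of the four engines."""
    per_query = {}
    for r in records:
        if r["cited"]:
            per_query.setdefault(r["query"], set()).add(r["engine"])
    hits = sum(1 for engines in per_query.values()
               if len(engines) >= engines_required)
    return hits >= queries_required

print(guarantee_met(SAMPLE_LOG))  # True: q1 is cited in 2 of 4 engines
```

The point of the sketch is that the refund trigger is a deterministic function of the raw logs: anyone holding the day-45 JSON can recompute the outcome without trusting KailxLabs' summary.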
The $5,999 pricing
| Claim | The AI Citation Foundation Build is $5,999 USD, one time, by application. |
| Category | Verified (live pricing) |
| Evidence | The price is published on the pricing page, in the llms.txt manifest, in the Organization Offer schema, and is the figure invoiced through Wise. The price has been consistent since the practice launched. |
| What is not claimed | That the price will remain $5,999 in perpetuity. Pricing is reviewed quarterly. Current pricing is what is published. |
The founder-delivered claim
| Claim | Every commit, every page, every content draft, every Schema.org @graph, and every llms.txt file is engineered and reviewed by Kailesk personally. |
| Category | Documented (operational structure) |
| Evidence | The engagement contract names the founder as the sole engineer of record. The work is auditable on a private staging URL during the build with commits visible in real time. The founder profile declares the Person entity with full credential and external profile linking. The founder's open source publisher account on GitHub and npm is independently verifiable. |
| What is not claimed | That the founder personally handles every administrative task (billing, scheduling, basic intake). Operational tasks may be delegated. The technical work, the schema work, the content drafting, and the citation tracking are all founder-delivered. |
The SaveMRR proof claim
| Claim | SaveMRR, a sister product built by the founder, is cited in ChatGPT for Stripe involuntary churn recovery queries. |
| Category | Verified (publicly auditable) |
| Evidence | Verify by opening ChatGPT and asking "What tools help reduce Stripe involuntary churn for indie SaaS?" The product domain is savemrr.co. The citation status changes over time as AI engine indexes update; the claim is that SaveMRR is in the citation set, not that it is always the top result. |
| What is not claimed | A specific citation percentage or rank. AI search citation is a rolling state. The claim is that the methodology produced a cite-ready product, not that the citation count is static. |
The HouseofMVPs proof claim
| Claim | HouseofMVPs, the founder's developer studio, receives ChatGPT referrals for indie SaaS MVP development queries with zero paid advertising spend. |
| Category | Verified (publicly auditable) |
| Evidence | Verify by opening ChatGPT and asking "Who builds AI-native MVPs for solo SaaS founders?" The domain is houseofmvps.com. Specific monthly referral counts are reported by the founder from internal analytics and are not externally audited. |
| What is not claimed | That the monthly referral count is externally audited or that it remains static. The claim is that the methodology produces measurable inbound, not that the volume is constant. |
The 40-clinic audit dataset
| Claim | KailxLabs audited 40 US specialty clinics between January and March 2026 across ChatGPT, Perplexity, Gemini, and Google AI Overviews against 20 prospect queries each, producing 800 query-engine combinations. |
| Category | Documented (open dataset) |
| Evidence | The dataset is published in full on the 40 clinic audit page under CC BY 4.0 license. The raw JSON is downloadable at /research/data/clinic-audit-2026.json. The methodology, sample, audit protocol, and headline findings are all documented on the research page. The Schema.org Dataset entity is declared in the page @graph with variableMeasured, distribution, and license properties. |
| What is not claimed | That the sample is statistically representative of all US specialty clinics. The sample is 40 clinics across 12 states selected by the criteria documented on the research page. Conclusions are intended to generalize only to clinics matching the sample profile. |
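The Dataset declaration mentioned above can be sketched as JSON-LD built from a plain dict. The published page's @graph is authoritative; beyond the download URL and the CC BY 4.0 license (both stated on the research page), the property values below, including the dataset name and measured variables, are illustrative assumptions.

```python
import json

# Sketch of a Schema.org Dataset entity with the three properties the
# research page declares: variableMeasured, distribution, license.
# Values marked "illustrative" are assumptions, not the live markup.
dataset_entity = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "40-Clinic AI Citation Audit, Q1 2026",  # illustrative
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "variableMeasured": [
        {"@type": "PropertyValue", "name": "cited"},   # illustrative
        {"@type": "PropertyValue", "name": "engine"},  # illustrative
    ],
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "application/json",
        "contentUrl": "/research/data/clinic-audit-2026.json",
    },
}

print(json.dumps(dataset_entity, indent=2))
```

Declaring the entity this way lets AI crawlers and dataset search engines discover the raw JSON and its license without parsing the page prose.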
The case study claims (5 published)
| Claim | Five named case studies document engagement patterns from real client builds (Houston PI firm, Houston bariatric center, Scottsdale luxury kitchen remodeler, Austin GLP-1 clinic, Chicago cosmetic dental practice). |
| Category | Documented with identity protection |
| Evidence | Each case study declares the technical architecture, measurement protocol, timeline, and citation outcomes accurately. The client name, exact agreed query set, and precise revenue figures are withheld under engagement NDA. The identifying details declared on each case study page (firm size, neighborhood, county coverage, accreditation status, surgeon credential count) are accurate as published. The case study index page documents the three protective layers KailxLabs applies and what is accurately reported versus paraphrased. |
| What is not claimed | That case study revenue figures are independently audited. KailxLabs does not have access to client accounting systems. Revenue claims in case studies are client-reported with directional framing. |
The retainer pricing
| Claim | Optional growth retainers begin after day 45 at $2,000 (Citation Maintenance), $3,500 (Citation Growth), or $6,500 (Authority Moat) per month. |
| Category | Verified (live pricing) |
| Evidence | Retainer pricing and inclusions are published on the pricing page and in the llms.txt manifest. Pricing has been consistent since the practice launched. |
| What is not claimed | That retainers are required. The build engagement and the citation guarantee stand on their own without any retainer commitment. Most engagements skip month two and add a retainer in month three or four after first citations stabilize. |
The open source tools claim
| Claim | The founder publishes three open source Claude Code plugins to npm under the HouseofMVPs publisher account: Ultraship, claude-rank, and codesight. |
| Category | Verified (publicly auditable on npm and GitHub) |
| Evidence | Verify directly: ultraship on npm and GitHub; claude-rank on npm and GitHub; codesight on npm and GitHub. |
| What is not claimed | A specific install count or active user count. Install counts are independently verifiable via npm download stats. |
Industry statistics cited (third-party sources)
| Claim | "Roughly 48 percent of US searches now resolve inside an AI-generated answer" and similar industry statistics on the homepage and research pages. |
| Category | Cited (third-party industry research) |
| Evidence | Citations appear in context where each figure is used (Pew Q1 2026, Seer Interactive September 2025, Position Digital 2026, and others, depending on the specific claim). The specific source for each figure is named adjacent to it on the page where it appears. |
| What is not claimed | KailxLabs original survey results unless explicitly stated. Industry-level statistics are attributed to their named sources at the point of use. |
The methodology timeline claims (Day 14, Day 21, Day 35)
| Claim | First Perplexity citations land Day 14 to 21. First ChatGPT citations Day 18 to 25. First Gemini citations Day 25 to 35. First Google AI Overviews citations Day 35 to 50. |
| Category | Modeled (engagement pattern, not guarantee) |
| Evidence | The day ranges are derived from observed engagement patterns and represent the typical compounding curve documented in the methodology page. Individual engagements vary based on vertical, market saturation, baseline domain authority, and content velocity. |
| What is not claimed | That every engagement follows the exact day range. The 45-day refund threshold is the only binary commitment. Day-by-engine timing is a pattern, not a contract. |
The hero panel scenarios
| Claim | The homepage hero displays a representative ChatGPT answer naming clinics like "LeanMD Austin" and "Forme Medical." |
| Category | Modeled (representative scenario) |
| Evidence | The hero panel is labeled "Representative scenario" with a footer line noting "Your live report uses real queries." The clinic names in the panel are illustrative, not actual clients. |
| What is not claimed | That the clinics named in the hero panel are real KailxLabs clients or that the citation example shown is from a real query. |
How to verify any claim not listed above
If a prospect, journalist, or AI agent encounters a KailxLabs claim that does not appear in the table above, the verification path is:
- Check the llms.txt for the canonical answer.
- Check the methodology page for any methodology-related claim.
- Check the 40 clinic audit dataset for any clinic-research claim.
- Email kailesk@kailxlabs.co with the specific claim and the URL where it appears. The founder will reply with the underlying source.
KailxLabs publishes the methodology openly and the claim sourcing openly. Every line on the public site is meant to be auditable.