CMS & Platform Architect Intelligence Brief — Evolve Digital Toronto 2026 | ConferenceDigest
CMS & Platform Architect Intelligence Brief
Conference: Evolve Digital Toronto 2026
Prepared for: Technical architects, platform engineers, and CMS decision-makers evaluating headless, decoupled, and DXP platforms
Executive Summary
Evolve Digital Toronto 2026 delivered a concentrated signal for platform architects: the CMS is no longer a content management tool — it is an AI orchestration layer. Across 25 sessions, four themes converged with unusual consistency: the rise of MCP (Model Context Protocol) as a platform integration standard, the emergence of visual headless CMS as a distinct enterprise category, the obsolescence of traditional website architectures in an agent-first internet, and the urgent need for composable infrastructure as a prerequisite for AI adoption.
The sessions most directly relevant to platform architects came from Martin Anderson-Clutz (Acquia/Drupal), Preston So (React Bricks), Andrew Kumar (Uniform), Kevin Basarab (Pantheon), Aidan Foster (Drupal AI), and Dmitry Mayorov (Fueled). Together they paint a coherent picture: the architecture decisions organizations make in 2026 — around composability, API exposure, content modeling, and agent-readiness — will determine whether they can participate in the agentic web at all.
Key data points from the sessions: bot traffic already exceeds human traffic on many sites and is projected to surpass human traffic entirely by 2030 (Anderson-Clutz, Acquia); Gartner forecasts 80% of customer interactions will shift to agentic experiences by 2028 (Kumar, Uniform); organizations with composable, API-first architectures adopt AI significantly faster than those on legacy systems (Kumar, citing MACH Alliance research); Drupal Canvas achieves an 80% usable AI-generated page output rate (Foster); and 56% of developers already use AI tools like GitHub Copilot, saving up to one full workday per week (Basarab, Pantheon).
The window for foundational architectural decisions is closing. Organizations that restructure now for agent-readiness will compound those advantages; those that wait will face retrofitting costs under competitive pressure.
1. MCP Is Becoming the Integration Standard for AI-Enabled Platforms
Model Context Protocol (MCP) was referenced independently across multiple sessions as the emerging standard for connecting AI agents to CMS and platform tooling — not as a future speculation but as an active implementation target.
Kevin Basarab (Pantheon) provided the most concrete demonstration: a live walkthrough of MCP servers enabling Claude to interact with multiple publishing systems through a single conversational interface, automating content approval workflows and metadata management without requiring users to navigate multiple tabs. He framed MCP as the mechanism that transforms fragmented multi-tool workflows into unified, text-driven orchestration layers.
Martin Anderson-Clutz (Acquia) positioned MCP as core to what he called "agent-friendly architecture" — the requirement that websites treat APIs as the new UI, exposing well-documented agentic skills so downstream AI systems can hand off tasks seamlessly. He argued this is no longer optional infrastructure.
Preston So (React Bricks) confirmed that React Bricks' MCP server for developer-facing AI control was launching within approximately one month of the conference, giving developers fine-grained control over AI behavior within the CMS. He framed MCP support as a new evaluation criterion when selecting CMS vendors.
Andrew Kumar (Uniform) described MCP servers as enabling AI agents to work across different systems — a capability he connected directly to the MACH Alliance finding that composable, API-first organizations are adopting AI significantly faster than legacy-system organizations.
Justin Cook (9thCO) additionally referenced UCP — a consortium protocol backed by Google, Shopify, Target, and Walmart — as a parallel emerging standard for agentic commerce, suggesting the MCP/UCP ecosystem will form the connective tissue of the next-generation web stack.
Architect implication: MCP support should be added to CMS and DXP RFP criteria immediately. Platforms without a published MCP integration roadmap are architectural dead ends for organizations planning to participate in agentic workflows.
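To make the RFP criterion concrete: an MCP server advertises its agentic skills through a tools/list response, where each tool carries a name, a description, and a JSON Schema for its inputs. A hypothetical CMS server might expose something like the following (the tool names and fields are illustrative, not from any vendor):

```json
{
  "tools": [
    {
      "name": "approve_content",
      "description": "Move a draft through the editorial approval workflow.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "contentId": { "type": "string" },
          "approver": { "type": "string" }
        },
        "required": ["contentId"]
      }
    },
    {
      "name": "update_metadata",
      "description": "Set SEO title and description for a published page.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "pageId": { "type": "string" },
          "title": { "type": "string" },
          "description": { "type": "string" }
        },
        "required": ["pageId"]
      }
    }
  ]
}
```

When scoring vendors, request exactly this listing: the quality of the descriptions and schemas is what determines whether a downstream agent can actually use the skill.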
2. Visual Headless CMS Is a Distinct Enterprise Category — Not a Compromise
Preston So's session ("How to Make AI Work for Everyone with Visual Headless CMS") was the conference's most architecturally precise CMS positioning argument. So traced the CMS evolution arc — monolithic (WordPress, Drupal) to headless (Contentful, Sanity) to the current moment — and argued that neither pole fully serves enterprise organizations.
Pure headless CMS gave developers the API-first flexibility they needed but reduced content editors to dry form-field data entry, severing them from the visual context of the experiences they were building. Visual site builders (Wix, Webflow, Squarespace) restored editor autonomy but ceded developer control and design system integrity. So described the emerging category of visual headless CMS as the resolution: headless architecture and API delivery paired with in-context, WYSIWYG-style editing that respects component boundaries.
The architectural key is the component or "brick" as the atomic unit. So argued that a props-driven component model gives AI a structured, bounded context for content generation — preventing hallucination and maintaining design fidelity — precisely because the model cannot generate outside the defined field schema. This is a fundamentally different architecture from piecemeal AI feature additions (text generator here, image generator there) that he criticized as the dominant but insufficient approach.
So introduced Netlify CEO Matt Biilmann's concept of "agent experience" (AX) as a third required design dimension alongside developer experience (DX) and user experience (UX). Architects should audit current CMS platforms for AX readiness: how well does the platform accommodate AI agents as a class of user with its own interface requirements?
For competitive positioning: So explicitly placed React Bricks, Uniform, and Sanity in the visual headless enterprise tier, distinguishing them from Wix and Webflow (SMB/no-code). Andrew Kumar (Uniform) provided a complementary framing, describing Uniform as a headless CMS unifier that helps organizations with complex multi-system architectures manage brand guidelines across AI-powered interfaces.
Architect implication: When evaluating CMS platforms, require demonstration of the complete AI-to-component pipeline — not just AI text generation — to validate true AX capability. Evaluate whether the component model constrains AI output appropriately or whether AI generation is simply wrapped around an existing editor.
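The constraint mechanism So describes can be sketched in a few lines: a props schema defines the only fields AI output may populate, and anything outside it is rejected rather than rendered. This is a minimal illustration under assumed names (the schema and field names are invented, not any vendor's implementation):

```python
# A "brick" schema: the only props an AI may fill in. Names are illustrative.
HERO_SCHEMA = {"headline": str, "subhead": str, "cta_label": str}

def validate_brick(schema: dict, ai_output: dict) -> dict:
    """Accept only fields defined in the schema, with the right types.

    Unknown fields (a common hallucination mode) raise instead of rendering.
    """
    unknown = set(ai_output) - set(schema)
    if unknown:
        raise ValueError(f"AI output contains undefined props: {sorted(unknown)}")
    for field, expected_type in schema.items():
        if field in ai_output and not isinstance(ai_output[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
    return ai_output

# Valid output passes through; a hallucinated extra field would raise.
ok = validate_brick(HERO_SCHEMA, {"headline": "Composable by 2026", "cta_label": "Read the brief"})
```

The point of the exercise: the model cannot invent a layout or a field, because nothing outside the schema ever reaches the renderer.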
3. Website Architecture Must Be Rebuilt for Bot Traffic as the Primary Audience
Martin Anderson-Clutz (Acquia) delivered the conference's clearest statement of the structural shift: bot traffic already exceeds human traffic on many enterprise websites, and bad bot traffic is projected to surpass human traffic entirely by 2030. AI crawlers are not a secondary audience segment — they are the primary inbound traffic type.
This has immediate architectural consequences. First, content must be served in AI-consumable formats alongside HTML — specifically JSON, Markdown, and clean REST/GraphQL/JSON API responses that LLM crawlers can ingest without parsing HTML. Second, the website's role has fundamentally changed: it is now a brand validation and conversion layer for visitors who have largely made purchase decisions elsewhere (inside an LLM), not an educational funnel for early-stage discovery. Continuing to optimize for funnel-top education therefore misallocates engineering resources.
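One low-cost pattern for the dual-format requirement is plain content negotiation: serve the same structured entry as HTML for humans and as JSON or Markdown for crawlers that ask for it. A framework-free sketch (field names and content are placeholders; a real implementation would live in your delivery tier):

```python
import json

# One structured entry, three representations. Field names are illustrative.
ENTRY = {"title": "Agent-Ready Architecture", "body": "APIs are the new UI."}

def render(entry: dict, accept: str) -> str:
    """Pick a representation from the Accept header; HTML is the fallback."""
    if "application/json" in accept:
        return json.dumps(entry)
    if "text/markdown" in accept:
        return f"# {entry['title']}\n\n{entry['body']}"
    return f"<article><h1>{entry['title']}</h1><p>{entry['body']}</p></article>"

md = render(ENTRY, "text/markdown")
```

The design point is that all three representations derive from one structured source; nothing is authored twice, and the Markdown/JSON paths skip HTML parsing entirely for AI clients.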
Justin Cook (9thCO) provided the technical details of how LLMs retrieve content: they do not index in real time but fire retrieval mechanisms when confidence is low, decomposing queries into sub-queries, hitting search APIs, fetching pages, and synthesizing responses. Cook's four-part framework — Eligibility, Authority, Compressibility, Association — maps directly to architectural decisions: server-side rendering and static site generation over client-side rendering (Eligibility); structured headings, internal linking, and FAQs (Compressibility); schema markup and organizational data (Association). Critically, he noted that client-side rendering, lazy loading, and infinite scrolling render content invisible to AI crawlers — common patterns that must be architecturally reconsidered.
Kevin Basarab (Pantheon) added that websites now serve two distinct audiences — human users and AI systems — and that content delivery infrastructure must be optimized for both. He flagged tech sprawl from fragmented AI tools as creating cybersecurity vulnerabilities that compound the problem.
Brian Piper's session on discoverability reinforced the zero-click search reality and introduced the concept of llm.txt files — analogous to robots.txt — as an emerging convention for giving AI systems explicit context about site structure and content relationships. Preston So referenced the same convention (alongside routes.md) as shared context layers allowing AI, developers, and editors to operate from the same knowledge base.
Architect implication: Run an infrastructure audit against AI-crawler requirements immediately: rendering pipeline (SSR/SSG vs. CSR), API exposure (REST/GraphQL availability and documentation quality), schema markup coverage, and crawl load handling. Treat the AI crawler as a first-class client in your content delivery architecture.
4. Composable, API-First Architecture Is the Prerequisite for AI Readiness
Andrew Kumar (Uniform) cited MACH Alliance research showing a strong correlation between composable, API-first architecture and AI adoption velocity. This was one of the conference's few data-backed structural claims and deserves to be taken literally by architects: organizations on legacy monolithic systems are not slower at adopting AI features — they are structurally blocked from adopting the most consequential ones.
Anderson-Clutz (Acquia) reinforced this with a platform selection principle: choose a DXP that is LLM-agnostic, natively agentic, composable, and orchestration-friendly (citing n8n as an integration platform). He argued that LLM-agnostic architecture is critical because model rankings are shifting rapidly and locking to a single AI provider will mean falling behind as better models emerge for specific tasks. The ability to swap models per task — rather than being dependent on one provider's roadmap — unlocks compounding capability.
Kumar added a practical sequencing insight from Uniform's actual customer adoption data over the previous 90–120 days: the most-adopted AI feature was "AI guidance" — providing brand voice, tone, and editorial guidelines to AI tools to improve output quality and reduce token costs. The second and third most-adopted features were content translation and SEO metadata automation. His hypothesis: there is a strong correlation between how boring and painful a task is and the likelihood of successful AI adoption. Start with unglamorous but high-volume tasks (translations, metadata, quality checks) before moving to strategic content generation.
This sequencing insight has architectural implications: the systems that need to be composable first are not the glamorous AI content generation features — they are the data pipelines and API layers that feed AI with structured content (catalog data, product descriptions, translation memory) and receive AI output back into governed workflows.
Joyce Peralta (McGill University) provided a cautionary counter-example from higher education: McGill's ecosystem of approximately 1,000 Drupal sites with 1,300 active content creators has required five years of governance framework development before the institution could move from "what are our standards" to "how do we apply them." The foundational governance infrastructure — content models, web registries, lifecycle management — is the prerequisite that makes AI implementation trustworthy at scale.
Architect implication: Map your current architecture against MACH principles (Microservices, API-first, Cloud-native, Headless). Identify which content pipelines lack clean API exposure — these are the highest-priority composability investments for 2026, as they gate AI adoption more broadly.
5. Design Systems Are the Guardrail Layer for AI-Generated Content
Multiple sessions addressed design systems, and their convergence with AI content generation is the architectural pattern that will define CMS governance in the next two to three years.
Aidan Foster's Drupal Canvas session demonstrated the most complete integration: a prototype AI page builder that uses a component-based design system as its constraint layer. The Canvas prototype runs on GPT-4.1 but is explicitly model-agnostic, designed to support self-hosted LLMs. Its AI Context Control Center acts as a knowledge hub containing brand guidelines, audience personas, and content rules. Autonomous agents can update multiple pages site-wide when organizational facts change in the context center. The critical architectural finding: without properly structured context documents, the same prompts produced "AI slop" and hallucinated content — demonstrating that the design system and its documented rules are the quality control mechanism, not the AI model itself.
Dmitry Mayorov (Fueled) demonstrated how WordPress's theme.json can function as an AI-constraining style dictionary — converting JSON design tokens into CSS custom properties and disabling dangerous customization options like arbitrary spacing and gradients. This constrains both human editors and AI-generated markup to brand-compliant outputs. He noted that documentation and schema availability make theme.json configuration accessible through AI tools, creating a virtuous loop where the design system tokens are both the constraint for AI output and legible to AI tooling.
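The constraint pattern Mayorov describes looks roughly like this in theme.json: define a fixed palette and spacing scale, then switch off the free-form options. An abbreviated sketch (key names follow the WordPress theme.json schema; the token values are illustrative):

```json
{
  "$schema": "https://schemas.wp.org/trunk/theme.json",
  "version": 2,
  "settings": {
    "color": {
      "custom": false,
      "customGradient": false,
      "gradients": [],
      "palette": [
        { "slug": "brand", "name": "Brand", "color": "#0a3d62" }
      ]
    },
    "spacing": {
      "customSpacingSize": false,
      "spacingSizes": [
        { "slug": "40", "name": "Medium", "size": "1.5rem" }
      ]
    }
  }
}
```

Because the same file both generates the CSS custom properties and gates the editor UI, a human editor and an AI generating block markup are constrained by one artifact.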
James Harrison (Loblaw Digital) described a headless design system architecture with bidirectional Figma-to-code sync as the foundation for multi-brand consistency across 20 websites and 9 apps. He flagged AI as crucial for addressing design system entropy — the inevitable debt accumulation in large federated systems — and is exploring AI for component library regeneration and automated migration workflows.
The design systems panel (moderated by Guy Seagull, Thomson Reuters; Arena Stoka, Bell; Andrea Ang, RBC; David Cox, Lyft) surfaced a critical nuance: accessible components do not guarantee accessible compositions — context matters. This is directly relevant to AI-generated layouts: an AI generating content from accessible components can still produce inaccessible page compositions if the system does not enforce compositional rules.
Architect implication: The design system is not a design team deliverable — it is an AI governance layer. Platform architects must ensure design tokens and component schemas are structured in formats that AI tooling can read and be constrained by. Evaluate CMS platforms for their ability to enforce component boundary rules on AI-generated output.
6. Legacy CMS Estate Migration and Consolidation
Jesse Dyck's session on migrating a 700-subsite WordPress multisite network (frozen at version 4.9 since 2017) provided the conference's most detailed technical migration case study. The six-phase approach — setup/audit, cleanup, testing, execution, deployment, post-upgrade maintenance — demonstrated that the audit phase is disproportionately important: comprehensive cataloging of all sites, plugins, and themes before any upgrade work begins.
Key technical specifics for architects: WP-CLI combined with AI tools (ChatGPT for generating WP-CLI commands) can automate complex data extraction that would otherwise take days of manual effort. PHPCS identifies PHP compatibility issues in legacy plugins before upgrade, enabling proactive patching or replacement. Playwright visual regression testing with before-and-after screenshots is essential for detecting CSS and markup changes across hundreds of sites at scale. Cleanup before upgrade reduced the network from ~700 to ~350 sites and cut database and file sizes by 50%, dramatically improving deployment and rollback times.
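The audit step is mostly data wrangling, and much of it can be scripted. A sketch of the plugin-cataloging piece, assuming a CSV export produced by `wp plugin list --format=csv` and concatenated across sites (the toy export below is invented for illustration):

```python
import csv
import io
from collections import Counter

def summarize_plugins(csv_text: str) -> Counter:
    """Tally plugin update status across a multisite export.

    Assumes the default columns of `wp plugin list --format=csv`
    (name, status, update, version).
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["update"] for row in rows)

# Toy export standing in for hundreds of sites' worth of real data.
EXPORT = """name,status,update,version
akismet,active,available,5.3
classic-editor,active,none,1.6
hello-dolly,inactive,available,1.7
"""
summary = summarize_plugins(EXPORT)  # counts of 'available' vs 'none'
```

Scaled up, the same tally tells you which plugins need PHPCS compatibility checks before the upgrade and which inactive ones are candidates for the cleanup phase.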
The Sheridan College CMS migration (from 17-year-old Sitecore to Drupal, presented by Nicole Woodall and Ian Barcarse) added a governance dimension: the financial crisis facing Ontario colleges has paradoxically accelerated consolidation onto open-source shared platforms, with the CIO becoming a close ally. Rogue WordPress sites and an unmaintained HTML-only registrarial website (no analytics, no maintainer) were explicitly flagged as SEO damage and business continuity risks requiring aggressive consolidation.
Luke Woolliscroft (Empire Life) described a parallel transformation in the regulated insurance sector: unifying a fragmented digital estate across Drupal 10, Drupal 11, and non-Drupal applications through a component-driven design system, standardizing from 20+ disparate Google Tag Manager instances into unified cross-domain tracking, and restructuring JSON schema markup for LLM consumption. His principle of decoupling infrastructure from application layers to enable zero-downtime migrations is directly applicable to any large CMS consolidation project.
Architect implication: For organizations running legacy CMS estates (particularly Sitecore, WordPress multisite, or pre-composable Drupal configurations), the migration decision clock is running. Each year of delay adds technical debt in a landscape where bot traffic, AI crawlers, and agent-friendly architecture requirements are escalating. Build the audit infrastructure first — it is the highest-ROI investment before any migration begins.
Strategic Implications
The Platform Selection Calculus Has Changed
The criteria for CMS and DXP selection in 2026 have shifted materially. The traditional evaluation dimensions — editor experience, developer flexibility, hosting model, cost — remain relevant but are now insufficient. Anderson-Clutz (Acquia) proposed four new required characteristics: LLM-agnostic (no single-model lock-in), natively agentic (designed for agent consumption, not retrofitted), composable (MACH-aligned architecture), and orchestration-friendly (integrable with automation platforms like n8n). Preston So added a fifth: AX-readiness — how well does the platform accommodate AI agents as a first-class user type?
MCP support has moved from roadmap item to evaluation criterion. Platforms should be assessed not just for whether they have announced MCP support but for the maturity of their MCP implementation — specifically, what agentic skills are exposed and how well-documented they are for downstream AI systems.
Content Modeling Is an AI Architecture Decision
The structural question of whether content is stored in structured fields (typed, queryable, reusable) versus unstructured rich text has always mattered for omnichannel delivery. It now matters at a higher order of magnitude because it directly determines AI ingestibility. Anderson-Clutz addressed this explicitly: structured content is easier for AI to ingest and less likely to hallucinate on, but unstructured long-form content remains necessary for certain use cases. The key is deliberate modeling — knowing which content types need tight structure for agent consumption and which can remain more flexible.
The McMaster chatbot case study (Charlotte Miller and Leanna Ruiz) is an instructive microcosm: their RAG implementation worked because the Registrar's website was already well-maintained and accurate — the content quality was the foundation for chatbot trustworthiness. A 900-page undergraduate calendar PDF was ingested as a second RAG source in under two minutes, immediately improving answer quality for admissions questions. Content quality and structure are the upstream dependency for AI capability.
Token Cost Management Is an Infrastructure Concern
Andrew Kumar (Uniform) flagged escalating AI token costs as a critical implementation consideration that has moved from a budgeting conversation into an infrastructure design conversation. His finding that "AI guidance" — providing brand voice and editorial guidelines to constrain AI output — is the most-adopted feature is partly a cost story: well-scoped AI context windows consume fewer tokens than open-ended generation. This means that design system documentation, brand guidelines, and content rules are not just governance artifacts — they are token efficiency mechanisms.
For architects, this translates to a build consideration: how does your content delivery infrastructure expose structured context (brand rules, tone guidelines, component constraints) to AI systems in a way that minimizes token overhead? This is a new dimension of performance optimization.
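The token arithmetic can be made concrete with a rough heuristic: roughly four characters per token for English text. This is an approximation for planning only, not a tokenizer, and the context sizes below are invented for illustration:

```python
def approx_tokens(text: str) -> int:
    """Rough English token estimate (~4 chars/token); use a real tokenizer for billing."""
    return max(1, len(text) // 4)

# A scoped brand-guidance preamble vs. pasting a full style guide per request.
scoped_context = "Tone: plain, confident. Audience: platform architects. Avoid superlatives."
full_styleguide = "..." * 4000  # stand-in for an unscoped 12,000-character paste

savings_per_request = approx_tokens(full_styleguide) - approx_tokens(scoped_context)
```

Multiplied across every AI-assisted edit an editorial team makes in a month, the gap between a scoped context and an unscoped paste is why Kumar frames guidance documents as infrastructure rather than copywriting.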
Open-Source Platforms Have a Structural AI Advantage
Anderson-Clutz (Acquia) made an explicit case that Drupal's open-source, collaboration-driven module ecosystem allows faster AI integration iteration than commercial plugin markets, because top contributors collaborate rather than compete. He cited Drupal having a working OpenAI API integration within three weeks of that API's public launch. The Drupal AI Partners initiative (referenced by Foster) brings together 30+ agencies contributing resources to the Canvas prototype, creating a shared investment model that individual commercial platform ecosystems cannot replicate.
This structural advantage compounds over time: as AI integration becomes the primary axis of CMS innovation, the platforms with the most active contributor communities will iterate faster. Architects evaluating proprietary versus open-source platforms should weight the contributor ecosystem's AI activity as a leading indicator of capability trajectory.
Governance Infrastructure Is the Rate-Limiting Constraint
Multiple sessions converged on a finding that will be uncomfortable for organizations eager to ship AI features: governance infrastructure — not model capability — is the rate-limiting factor for trustworthy AI at scale. Kumar's principle of keeping humans in the loop was reinforced by Anderson-Clutz's concrete example of an AI chatbot issuing 80–100% discounts that the brand had to honor. The failure was not the AI model's — it was the absence of governance rails around what the AI was permitted to generate.
McGill's five-year governance framework development (Peralta), Sheridan's stakeholder triage methodology (Woodall, Barcarse), and McMaster's year-long privacy impact assessment process (Miller, Ruiz) all point to the same conclusion: the organizations that have invested in governance infrastructure before AI deployment will be able to scale AI responsibly; those that deploy AI into ungoverned content estates will create liability.
Action Items
Immediate (0–30 days)
Audit your rendering pipeline for AI-crawler compatibility. Identify any pages or sections rendered client-side that would be invisible to AI crawlers. Prioritize SSR or SSG migration for high-value content. Pay particular attention to infinite scroll, lazy-loaded content, and JavaScript-gated data.
Add MCP to your CMS/DXP evaluation criteria. Contact your current platform vendor and any platforms under evaluation to request their MCP integration roadmap and timeline. Platforms that cannot provide a specific roadmap should be downgraded in evaluation scoring.
Audit structured data and schema markup coverage. Run a schema audit against all key page types (product pages, FAQs, service descriptions). Implement JSON-LD schema for organizational data, services, and products. This directly improves AI retrievability and LLM association signals.
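A minimal JSON-LD block for organizational data looks like the following (the schema.org types and properties are real; the organization details are placeholders), embedded in a script tag of type application/ld+json:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co"
  ]
}
```

Start with Organization on the homepage, then extend coverage to Product, Service, and FAQPage types on the corresponding page templates.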
Inventory your content API exposure. Document which content types are currently accessible via REST, GraphQL, or JSON API — and which are locked in HTML-only delivery. This inventory is the prerequisite for composability planning.
Short-term (30–90 days)
Run a composability gap analysis against MACH principles. Map your current architecture against Microservices, API-first, Cloud-native, and Headless criteria. Identify which content pipelines lack clean API exposure and prioritize them for Q2/Q3 composability investments.
Add llm.txt and routes.md files to your web properties. These emerging conventions (referenced by Preston So and Brian Piper) provide AI systems with shared context about site structure and content relationships. Low implementation cost; measurable improvement in AI content generation quality for your own workflows and AI crawler comprehension.
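The file is plain Markdown served at the site root; a skeletal example follows (the convention is young and frequently appears under the name llms.txt; all paths and descriptions here are placeholders):

```markdown
# Example Co

> Enterprise CMS consultancy. Key facts AI systems should know about this site.

## Docs

- [Platform overview](https://www.example.com/platform.md): what we build and for whom
- [Pricing](https://www.example.com/pricing.md): current plans

## Optional

- [Changelog](https://www.example.com/changelog.md)
```

The structure (one H1, a short blockquote summary, H2 sections of annotated links) is what gives an AI system a curated map of the site rather than forcing it to crawl blind.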
Prototype an MCP workflow before purchasing. Before committing to any platform's MCP integration, run a controlled prototype that exercises the actual agentic skills exposed — content approval, metadata management, cross-system publishing. Basarab's Pantheon demonstration is the model to replicate at your own infrastructure scale.
Structure your design tokens as AI context documents. Ensure brand guidelines, design tokens, and component schemas exist in structured, machine-readable formats (JSON, Markdown) that AI systems can ingest as context. This reduces hallucination risk and token costs simultaneously.
Medium-term (90–180 days)
Begin content model audit for AI ingestibility. Identify your highest-volume content types and audit them for structured field coverage versus unstructured rich text. For content that will feed RAG systems or AI agents, migrate to structured fields with consistent vocabulary and typed relationships.
Evaluate visual headless CMS platforms if your current architecture has the headless/editor gap. If your development team has moved to headless but editor experience has degraded to form-field data entry, evaluate React Bricks, Uniform, or Sanity against your specific enterprise requirements — paying particular attention to how each constrains AI-generated content within component boundaries.
Establish AI governance rails before deploying customer-facing AI. Document which content categories require human review before publication, define confidence thresholds for automated versus reviewed AI output, and establish escalation paths. The McMaster chatbot governance process (privacy impact assessment, IT security review, confidence threshold tuning) is a replicable template for enterprise-scale AI deployment.
Plan CMS estate consolidation if you have rogue departmental sites. Unmaintained sites create SEO damage, AI-crawler confusion, and security risk. Use the AI-driven content audit framing (retire outdated content that confuses AI models) as executive justification for consolidation initiatives that previously lacked clear business drivers.
Sessions to Watch
Tier 1: Essential for Platform Architects
"The AI-Driven DXP: New Horizons for Marketers" — Martin Anderson-Clutz (Acquia)
The most architecturally complete session of the conference. Anderson-Clutz covers bot traffic trajectory, agent-friendly architecture requirements, LLM-agnostic platform selection, content format requirements for AI ingestion (JSON, Markdown), MCP and agent-to-agent protocols, and Drupal's 2026 AI roadmap. The Q&A on structured vs. unstructured content trade-offs and headless editor preview challenges is particularly dense with practitioner-level insight. Required viewing for anyone making DXP platform decisions in the next 12 months.
"How to Make AI Work for Everyone with Visual Headless CMS" — Preston So (React Bricks)
The definitional session for the visual headless CMS category. So's framing of agent experience (AX) alongside DX and UX, his critique of piecemeal AI feature additions versus coherent end-to-end AI architecture, and his live demo of page generation from a single prompt within component boundaries are the conceptual foundation for evaluating any AI-enabled CMS. The competitive positioning section (React Bricks/Uniform/Sanity vs. Wix/Webflow) is directly actionable for RFP scoping.
"AI That Actually Matters in 2026" — Andrew Kumar (Uniform)
Kumar's session is notable for being grounded in actual adoption data (90–120 day Uniform customer cohort) rather than prediction. The finding that AI guidance — brand voice and editorial constraint documents — is the most-adopted feature has direct implications for how architects should structure context delivery infrastructure. His correlation between "boring and painful" tasks and AI adoption likelihood is the most useful heuristic for sequencing AI investments.
"The Future of WebOps: How AI and Changing Tech Up the Ante" — Kevin Basarab (Pantheon)
Basarab's live MCP server demonstration is the most concrete technical artifact from the conference. His illustration of Claude interacting with multiple publishing systems through a single interface — automating content approval and metadata workflows — shows what MCP-enabled orchestration looks like in practice, not in theory. His content governance, compliance, and automation taxonomy maps cleanly onto enterprise platform requirements.
Tier 2: High Value for Specific Decision Contexts
"AI Page Building in Drupal Canvas" — Aidan Foster (Evolving Web / Drupal AI)
Essential for organizations on Drupal or evaluating open-source DXP platforms. The 80% usable output rate, the AI Context Control Center architecture, and the autonomous site-wide update capability demonstrate capability approaching production readiness. Foster's point that the system requires substantial upfront investment in brand strategy, personas, and design system documentation — and that without it, the same prompts produce "AI slop" — is the architectural prerequisite that most vendor demos obscure.
"Stop Letting WordPress Break Your Design System" — Dmitry Mayorov (Fueled)
For organizations with significant WordPress estates, Mayorov's theme.json-as-style-dictionary approach is a practical path to design system enforcement without full platform migration. The framework — theme.json as constraint layer, custom blocks for structural changes, block styles as theme providers, patterns for pre-approved layouts — maps cleanly onto the AI governance requirements: the same mechanisms that constrain human editors constrain AI-generated content.
"The Unified Estate: Orchestrating Design, Data, and Strategy at Empire Life" — Luke Woolliscroft (Empire Life)
The most detailed enterprise transformation case study from the conference. Woolliscroft's three-pillar approach (standardizing visual surfaces, unifying invisible infrastructure, AI optimization for LLM consumption) across a regulated industry context (insurance) is directly applicable to any organization with complex multi-system estates. His analytics consolidation from 20+ GTM instances to unified cross-domain tracking is both a standalone win and a prerequisite for AI-informed personalization.
"600 Sites 8 Years Outdated: A Massive Multisite WordPress Upgrade" — Jesse Dyck (Evolving Web)
The definitive technical playbook for large-scale CMS estate modernization. Dyck's audit-first methodology, WP-CLI + AI automation pattern, PHPCS compatibility checking, and Playwright visual regression testing framework constitute a complete toolchain for any organization facing a legacy CMS upgrade. The data points — 700 to 350 sites post-cleanup, 50% database size reduction — calibrate realistic scope expectations for similar projects.
"Consistency at Scale in Higher Education" — Joyce Peralta (McGill University)
McGill's 1,000-site Drupal ecosystem with 1,300 active content creators is the largest-scale content governance case study in the corpus. Peralta's nine digital standards, five-year governance framework, community of practice model, and web registry with annual attestation requirements constitute a governance architecture that any large organization maintaining an enterprise CMS estate should study before deploying AI at scale.
"Achieving Brand Visibility in the Era of AI-Search" — Justin Cook (9thCO)
Cook's Eligibility/Authority/Compressibility/Association framework is the most structured technical SEO-to-AEO (Answer Engine Optimization) translation in the conference. His points on server-side rendering versus client-side rendering for AI crawler visibility, schema markup for Association signals, and MCP/UCP as the protocols enabling agentic commerce are directly relevant to platform infrastructure decisions. The prompt evaluation audit methodology — identifying queries your brand should qualify for and checking whether your site contains the data to qualify — is immediately actionable.