Ten meta-themes emerged independently across multiple conference tracks.
Across sessions covering AI in content creation, accessibility, chatbots, and marketing, a consistent warning emerged: AI augments but cannot replace human judgment. Practitioners from every sector — insurance, higher education, SEO, and design — insisted that human review, validation, and accountability remain essential checkpoints, not optional luxuries. The mantra appeared repeatedly in different forms: the 'human-in-the-loop sandwich,' editorial sign-off before publication, and the irreplaceable role of lived experience in testing.
Implications: Practitioners should resist vendor promises of autonomous AI that removes the need for human review. Instead, embed review checkpoints at specific pipeline stages — particularly before customer-facing publication, before financial or legal commitments, and before accessibility sign-off. Design your governance to specify who reviews what and at what frequency, rather than leaving oversight informal. The volume question (how do you review AI output at scale?) is unsolved and represents a genuine capability gap most organisations need to address intentionally.
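One way to keep that oversight formal is to encode the checkpoint in the publishing pipeline itself, so AI-generated items cannot reach a customer-facing state without a recorded sign-off. A minimal Python sketch follows; the stage names and content model are illustrative assumptions, not any speaker's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    DRAFT = "draft"
    NEEDS_REVIEW = "needs_review"
    APPROVED = "approved"
    PUBLISHED = "published"

@dataclass
class ContentItem:
    body: str
    ai_generated: bool
    stage: Stage = Stage.DRAFT
    reviewer: str | None = None  # records who signed off, and that someone did

def approve(item: ContentItem, reviewer: str) -> None:
    """Human sign-off: governance can later audit 'who reviewed what'."""
    item.reviewer = reviewer
    item.stage = Stage.APPROVED

def publish(item: ContentItem) -> None:
    """AI-generated items are parked for review rather than auto-published."""
    if item.ai_generated and item.stage is not Stage.APPROVED:
        item.stage = Stage.NEEDS_REVIEW
        return
    item.stage = Stage.PUBLISHED
```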
•Brian Piper (Choose Your Own AI Adventure and Preparing Your Content) explicitly named the 'human-in-the-loop sandwich' as a core workflow principle — humans provide context at the start and validation at the end.
•Preston So (How to Make AI Work for Everyone) stated that 'a human-in-the-loop is non-negotiable in AI-driven content workflows' and argued AI should augment, not replace, editorial review.
•The Accessibility Unlocked panel found that AI testing tools catch only 25–35% of accessibility issues and that human validation by people with lived experience 'remains non-negotiable in any AI-assisted accessibility workflow.'
•Martin Anderson-Clutz (The AI-Driven DXP) cited an AI chatbot that granted 80–100% discounts the brand then had to honour as a cautionary tale for unsupervised AI in customer-facing roles.
•Charlotte Miller and Leanne Ruiz (From Questions to Clarity) ran weekly review cycles throughout their chatbot pilot, with staff monitoring unanswered questions and iterating on content.
•Anton Morrison (Building a Second Brain in Claude Code) demonstrated AI handling autonomous tasks end-to-end — from client research to invoice generation — with lighter-touch human involvement, suggesting that for some use cases the oversight loop can be loosened.
•Kevin Basarab (The Future of WebOps) framed the future as 'text-driven interfaces' where AI orchestrates multiple systems, implying growing automation that could reduce the surface area requiring human review.
Multiple sessions converged on a single structural shift: the traditional model of driving visitors from search engines to websites is breaking down as AI-generated answers eliminate the click. Zero-click search, bot traffic exceeding human traffic, and the rise of agent-to-agent interactions require organisations to rethink what discoverability means, how content should be structured, and what metrics actually matter. The sessions provided a consistent new framework: optimise for being retrieved and trusted by AI systems, not for ranking on a results page.
Implications: Organisations need to move beyond traditional SEO audits to conduct prompt evaluation audits — identifying what questions their audiences ask AI tools and whether their brand would appear in those answers. Technical infrastructure changes (server-side rendering, schema markup, JSON/Markdown content delivery, llms.txt files) must become standard practice. KPI frameworks need updating: declining organic traffic volume should be expected and is not a failure signal; the new metrics are share of voice in AI-generated responses and intent-based session quality. Content governance must prioritise accuracy and freshness over volume, since outdated or incorrect content can confuse AI models and actively harm retrievability.
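A prompt evaluation audit can start small: collect the questions audiences actually ask AI tools, then measure how often the brand surfaces in the answers. The Python sketch below is illustrative only; ask_model is a placeholder for whichever LLM API you use, and the prompts and brand terms are hypothetical.

```python
AUDIENCE_PROMPTS = [
    "Which Canadian universities have strong co-op computer science programmes?",
    "What CMS should a mid-sized university choose?",
]
BRAND_TERMS = {"Example University", "example.edu"}  # hypothetical brand

def ask_model(prompt: str) -> str:
    """Placeholder: call your LLM provider of choice here."""
    raise NotImplementedError

def share_of_voice(prompts: list[str]) -> float:
    """Fraction of AI answers in which the brand is retrieved at all."""
    hits = sum(
        any(term.lower() in ask_model(p).lower() for term in BRAND_TERMS)
        for p in prompts
    )
    return hits / len(prompts)
```

Run periodically, the same script turns 'share of voice in AI-generated responses' from an aspiration into a tracked KPI.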
•Justin Cook (Achieving Brand Visibility in AI Search) argued that 'there is no ranking algorithm inside AI search tools — the goal is to be trusted enough to be retrieved,' and introduced the Eligibility, Authority, Compressibility, Association framework for answer engine optimisation.
•Brian Piper (Preparing Your Content for the Future of Discoverability) stated: 'If your content can't be discovered by AI, it doesn't exist,' and predicted that 90% of online content will be AI-generated within a year.
•Martin Anderson-Clutz (The AI-Driven DXP) reported that bot traffic already exceeds human traffic and is projected to pull further ahead by 2030, requiring websites to treat AI crawlers as a primary audience.
Across design systems, content management, higher education digital estates, and AI implementation, a consistent pattern emerged: organisations that had invested in governance frameworks, documented standards, and community-of-practice structures were able to move faster, adopt new technologies with less disruption, and survive resource crises better than those that had not. Governance — often dismissed as overhead — was repeatedly reframed as the enabling infrastructure for everything else.
Implications: Governance investment should be framed not as a bureaucratic cost but as a compounding organisational asset. Standards documented today accelerate AI implementation tomorrow; design systems built collaboratively survive resource crises that solo-champion systems do not; audit-first approaches to legacy systems prevent the compounding technical debt that forces expensive emergency projects later. The practical implication for teams: budget for governance infrastructure explicitly, identify community champions rather than relying on central authorities alone, and use data — user research, analytics, accessibility metrics — as the arbiter of contested decisions rather than political weight.
•Joyce Peralta (Consistency at Scale in Higher Education) described McGill's three-pillar governance approach — institutional standards, governance framework, and community of practice — as the foundation that enabled the shift from 'what are our standards?' to 'how do we best apply them?'
•James Harrison (Practical Advice for Building a System Without a Team) built Figma School and Dev School as mandatory training curricula, plus a steering committee, to govern Loblaw Digital's federated design system — without these structures, the system would have collapsed.
•Nicole Woodall and Ian Barcarse (The Stakeholder Maze) repositioned Sheridan College's central team from ticket-takers to strategic advisors using governance discipline, user-data grounding, and probing questions — and found that a financial crisis clarified rather than undermined governance priorities.
Three dedicated sessions and multiple passing references across the conference converged on a redefinition of design systems. The most persistent misconception — that a design system is a Figma library or a component kit — was dismantled repeatedly. The more accurate and more durable framing: a design system is a contractual relationship between design and development teams, sustained by community, governance, and shared purpose. Building components without the relationship infrastructure causes adoption failure.
Implications: Teams planning or rebuilding design systems should invest as heavily in community infrastructure as in technical architecture. This means: identifying and cultivating passionate volunteer contributors before the system launches, building training programmes that establish shared vocabulary and mental models, creating lightweight weekly relationship touchpoints rather than heavy quarterly reviews, and defining clear ownership that survives staff turnover. The 'rule of three teams' as an inclusion criterion (typically: a component joins the shared system only once at least three teams need it) is a practical governance heuristic. The political reality that high-visibility products may warrant exceptions should be acknowledged explicitly in governance documentation rather than handled informally.
•The design system mindset panel (Arena Stoka, Andrea Ang, David Cox) opened with Arena's statement: 'A design system is a relationship between design and development, kind of like a contract that has to be maintained.'
•James Harrison (Practical Advice for Building a System Without a Team) found that maintaining a federated design system across 20 websites required Figma School, Dev School, a steering committee, and volunteer champions — the community and education infrastructure was as critical as the technical architecture.
•Dmitry Mayorov (Stop Letting WordPress Break Your Design System) demonstrated that the technical implementation must encode the governance decisions — theme.json, custom blocks, and block styles are tools for enforcing the design contract in production WordPress environments.
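To make the enforcement concrete: the fragment below builds a theme.json (expressed here as a Python dict and serialised to the file WordPress reads) that disables ad-hoc colours, gradients, and font sizes, leaving editors only the tokens the design system defines. The setting keys are standard theme.json options; the palette values are placeholders.

```python
import json

theme_json = {
    "version": 2,
    "settings": {
        "color": {
            "palette": [  # the only colours editors can choose
                {"slug": "brand", "name": "Brand", "color": "#0b5fff"},
                {"slug": "ink", "name": "Ink", "color": "#1a1a1a"},
            ],
            "custom": False,          # removes the free-form colour picker
            "customGradient": False,  # no ad-hoc gradients
        },
        "typography": {"customFontSize": False},  # type-scale tokens only
    },
}

with open("theme.json", "w") as f:
    json.dump(theme_json, f, indent=2)
```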
A counterintuitive pattern appeared across multiple sessions: the AI use cases with the highest actual adoption rates were not the most glamorous ones. Translation, SEO metadata generation, email personalisation processing, accessibility validation, and content tagging consistently outperformed AI-generated creative content in real-world uptake. Andrew Kumar articulated this as a hypothesis — 'there is a strong correlation between how boring and painful a task is and the likelihood of AI adoption' — that was corroborated independently across higher education, insurance, marketing, and developer tooling contexts.
Implications: Organisations struggling to achieve AI adoption should identify the most painful, repetitive, low-creativity tasks in their workflows first and solve those — not the most visible or impressive ones. The business case for unglamorous AI use is easier to make (time saved, error reduction, volume handled), the governance requirements are lower (less creative judgment involved), and the adoption resistance is lower (no one defends their right to do tedious work). Once these quick wins build organisational confidence and technical literacy, more ambitious use cases become achievable. Kumar's sequencing advice is practically sound: SEO metadata and translation before creative content generation, process automation before experience design.
•Andrew Kumar (AI That Actually Matters) presented Uniform's 90–120-day adoption data: the most popular AI feature was 'AI guidance' (brand voice consistency), followed by translation (one Danish customer used it 104 times in a month), SEO metadata, and content previews — all unglamorous workflow tasks.
•Emma Nguyen and Gary Bhanot (AI in Practice, Not Theory) built a custom GPT that reduced per-email processing time from 10 to 3 minutes for a 1,600-campaign-per-year email operation — a repetitive, high-volume task with no creative component.
•Jesse Dyck (600 Sites 8 Years Outdated) used AI tools like ChatGPT to generate WP-CLI commands for auditing tasks that 'would otherwise take days of manual work' — AI applied to the least glamorous part of a major infrastructure project.
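In that spirit, a few lines of Python wrapped around standard WP-CLI commands (wp site list, wp plugin list) can sweep every subsite of a multisite. This is a sketch under assumptions: the flags shown are current WP-CLI, but paths, aliases, and output fields vary by install.

```python
import json
import subprocess

def wp(*args: str) -> str:
    """Run a WP-CLI command and return its stdout."""
    return subprocess.run(
        ["wp", *args], check=True, capture_output=True, text=True
    ).stdout

def audit_plugins() -> dict[str, list[dict]]:
    """Map each subsite URL to its plugin inventory."""
    return {
        url: json.loads(wp("plugin", "list", f"--url={url}", "--format=json"))
        for url in wp("site", "list", "--field=url").split()
    }

for url, plugins in audit_plugins().items():
    outdated = [p["name"] for p in plugins if p.get("update") == "available"]
    print(url, "->", ", ".join(outdated) or "up to date")
```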
Whether the downstream goal was AI retrievability, design system compliance, chatbot accuracy, personalisation, or multilingual delivery, a consistent prerequisite emerged: content must be structured. Sessions covering CMS architecture, chatbot implementation, design systems, and AI search all pointed to unstructured, poorly governed content as the core bottleneck limiting more advanced digital capabilities. The investment in content structure was repeatedly framed not as a technical task but as a strategic enabler.
Implications: Organisations that have not yet invested in content structure — consistent metadata, taxonomy, heading hierarchies, schema markup, component-based content models — face compounding disadvantages as AI becomes more central to both delivery and discovery. The practical starting point is a content audit: identify what content exists, what structure it has, and what gaps prevent AI systems from accurately retrieving and representing it. CMS selection decisions should now explicitly include evaluation of structured content delivery capabilities (REST, GraphQL, JSON API, Markdown output). Content governance investment made today will pay dividends across AI discoverability, chatbot accuracy, multilingual delivery, and personalisation — making it one of the highest-leverage infrastructure decisions available.
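The audit can begin with simple scriptable checks rather than a platform purchase. A minimal sketch (the heuristics are assumptions, not a standard; requires requests and beautifulsoup4):

```python
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> dict[str, bool]:
    """Flag basic structure gaps that hinder accurate AI retrieval."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return {
        "single_h1": len(soup.find_all("h1")) == 1,
        "meta_description": soup.find("meta", attrs={"name": "description"}) is not None,
        "json_ld": soup.find("script", attrs={"type": "application/ld+json"}) is not None,
    }

for url in ["https://example.org/", "https://example.org/admissions"]:
    print(url, audit_page(url))
```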
•Charlotte Miller and Leanne Ruiz (From Questions to Clarity) led with 'start with content mapping before selecting technology' — their chatbot's accuracy depended entirely on the quality and structure of the Registrar's website content, which had been maintained carefully before the AI project began.
•Preston So (How to Make AI Work for Everyone) argued that the component/brick model gives AI a 'structured, bounded context' that prevents hallucination and maintains design fidelity — structure as AI guardrail.
•Martin Anderson-Clutz (The AI-Driven DXP) urged organisations to 'serve content in AI-consumable formats (JSON, Markdown) alongside HTML' and argued that structured content enables omnichannel delivery across digital signage, wearables, voice, and AI agents.
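In practice, 'JSON and Markdown alongside HTML' can mean ordinary content negotiation: one URL, several representations. A minimal sketch assuming Flask and a stub content store:

```python
from flask import Flask, Response, jsonify, request

app = Flask(__name__)
PAGE = {"title": "Tuition and fees", "body": "Fees for fall term are due September 1."}

@app.get("/tuition")
def tuition():
    accept = request.headers.get("Accept", "text/html")
    if "application/json" in accept:   # agents and integrations
        return jsonify(PAGE)
    if "text/markdown" in accept:      # LLM-friendly plain structure
        return Response(f"# {PAGE['title']}\n\n{PAGE['body']}",
                        mimetype="text/markdown")
    return f"<h1>{PAGE['title']}</h1><p>{PAGE['body']}</p>"  # human default
```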
Across sessions addressing website redesigns, design systems, stakeholder management, and email strategy, a recurring mechanism for resolving contested decisions was the deployment of user data as an objective arbiter. Teams that had invested in user research — tree tests, sentiment testing, usability studies, chatbot interaction logs — consistently reported being able to override stakeholder preferences with evidence rather than authority. This pattern appeared most prominently in higher education contexts but was not limited to them.
Implications: User research is most powerful when institutionalised rather than project-specific. Teams that commission research on a one-time basis for individual projects spend political capital on each contested decision. Teams with embedded, ongoing research capability — Sheridan's embedded market research group, McGill's Student Usability Panel, McMaster's weekly chatbot review cycles — can make data-driven decisions as a cultural norm rather than an occasional argument. The minimum viable version of this capability is smaller than most organisations assume: Margineanu demonstrated that a single internal researcher executing a disciplined five-study programme can drive a measurable 62% increase in conversion while consuming less than 10% of project budget.
•Adie Margineanu (Creating Impact While Mitigating Risk) presented sentiment testing data showing Option B was far more brand-adherent despite stakeholder preference for Option A — 'leadership accepted the data without pushback,' demonstrating user research as a non-confrontational conflict resolution mechanism.
•Nicole Woodall and Ian Barcarse (The Stakeholder Maze) used user research to overrule demands for a homepage slider despite strong political pressure from multiple stakeholders, grounding the decision in prospective student data rather than internal preferences.
•Suzanne Dergacheva (Five Types of Landing Pages) emphasised that each landing page type must map to user journey needs and specific business objectives — design decisions grounded in user understanding, not aesthetic preference.
Across enterprise, higher education, and agency contexts, the most consistent barrier to AI adoption was not technical capability but human resistance, institutional process, and change management failure. Sessions from University of Toronto, McGill, McMaster, and agency practitioners all framed AI rollout primarily as an organisational and cultural challenge. The technology was described as the easier problem; building the human infrastructure around it was the harder and more consequential one.
Implications: Organisations planning AI initiatives should budget explicitly for change management: cross-functional working groups, role-specific training, structured business case processes for institutional approval, and time to address fears of job displacement transparently. The 'problem-first' approach — identifying workplace problems before selecting AI solutions — is a change management tactic as much as a product strategy, because it keeps adoption grounded in team-identified needs rather than leadership-mandated tools. Privacy impact assessments and IT security reviews should be started as early as possible because institutional governance timelines are long and cannot be compressed by enthusiasm or urgency.
•Emma Nguyen and Gary Bhanot (AI in Practice, Not Theory) opened by stating explicitly: 'AI adoption is primarily about change management rather than technology implementation,' and structured their entire four-phase framework (planning, engagement, enablement, scaling) around human and organisational factors.
•Joyce Peralta (Consistency at Scale) described five years building governance frameworks and community of practice before McGill could move from basic standards compliance to advanced AI implementation — the prerequisite human infrastructure took years.
•Charlotte Miller and Leanne Ruiz (From Questions to Clarity) spent close to a year on privacy impact assessment and IT security review before piloting — institutional governance processes, not technology, determined the project timeline.
Multiple sessions converged on a redefinition of what a website is for. The traditional model — educate visitors, build awareness, capture leads — is giving way to a new model in which websites primarily serve two audiences: AI crawlers that will synthesise content for off-site delivery, and high-intent human visitors who have already completed most of their decision-making journey elsewhere. This shift has profound implications for information architecture, content strategy, design, and the metrics used to evaluate digital performance.
Implications: Website strategy teams need to reconsider their information architecture from a dual-audience perspective: what does a high-intent human visitor need in order to validate a decision already mostly made, and what does an AI crawler need in order to accurately represent the organisation's content in a synthesised response? These two needs are partially aligned (clear structure, accurate information, fast loading) but diverge significantly in emphasis. For humans: frictionless social proof, easy conversion paths, compassionate UX for high-stakes moments. For AI: JSON and Markdown delivery, schema markup, factual density, technical crawlability. The website as a destination for awareness-building and top-of-funnel education is a declining model; the website as a validated source node in an AI-mediated discovery ecosystem is the emerging one.
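For the AI audience, the densest signal per byte is structured data. The sketch below emits an Organization JSON-LD block of the kind a page template would embed in its head; the schema.org properties are real, the values placeholders.

```python
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example University",
    "url": "https://example.edu",
    "sameAs": ["https://en.wikipedia.org/wiki/Example_University"],
    "contactPoint": {
        "@type": "ContactPoint",
        "contactType": "admissions",
        "telephone": "+1-555-0100",
    },
}

print(f'<script type="application/ld+json">\n{json.dumps(org, indent=2)}\n</script>')
```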
•Martin Anderson-Clutz (The AI-Driven DXP) argued that 'websites in 2026 will primarily serve visitors who want to validate an existing purchase intent' — frictionless social proof matters more than broad education.
•Justin Cook (Achieving Brand Visibility in AI Search) described how 'the classic customer journey — query, SERP, click, consider, purchase — is being replaced by synthesised AI answers that may produce no clicks at all.'
•Brian Piper (Preparing Your Content for the Future of Discoverability) argued that 'the traditional destination model of driving traffic to websites is breaking down' and recommended repositioning websites as knowledge repositories to be retrieved, not visited.
Across design, content strategy, accessibility, SEO, and knowledge management, a counterintuitive pattern emerged: the more capable AI becomes at production tasks, the more valuable distinctively human capabilities become. Authentic user research, strategic judgment, cross-disciplinary communication, lived experience, and creative direction were all named as the capabilities that AI amplifies rather than replaces — and that differentiate organisations as AI-generated output floods every channel.
Implications: Practitioners should identify which capabilities in their role are distinctively human — involving empathy, contextual judgment, cross-disciplinary translation, or lived experience — and invest in deepening those rather than competing with AI on production speed. Organisations should structure roles to concentrate human effort on strategy, validation, stakeholder communication, and research while using AI to handle production tasks. The career development implication is significant: skills that are defensible against AI are not necessarily the most technical or specialised, but those that are most relational, contextual, and situated in lived experience — including disability experience, cultural knowledge, and client relationship-building. The content strategy implication is equally clear: as AI-generated content saturates every channel, content that carries verifiable human insight, original research, and authentic voice becomes scarcer and more valuable.
•Aidan Foster (AI Page Building in Drupal Canvas) warned that 'as AI-generated content floods the internet, organisations must double down on authentic, human-centred research and strategy to remain distinct,' and demonstrated that without human-created context documents, AI produces 'AI slop.'
•The Accessibility Unlocked panel argued that 'human validation by people with lived experience remains non-negotiable' and that AI training data skews toward mainstream patterns, creating gaps for disability-related edge cases that only lived experience can identify.
•Sean Stanleigh (Focus on the Signals) warned about 'zero search' — users getting answers directly from AI tools without visiting websites — threatening traditional digital marketing strategies.
•Luke Woolliscroft (The Unified Estate) described Empire Life's strategy of structuring data for LLM consumption using JSON schema markup, treating AI readiness as a third pillar of their digital transformation.
•Adie Margineanu (Creating Impact While Mitigating Risk) reported a 10% year-over-year session increase despite sector-wide declines attributed to AI — suggesting that well-researched, user-centred sites can still grow organic traffic even in this environment.
•Dayana Kibilds (Do People Still Read Emails?) demonstrated that email consistently outperforms social media in conversion ROI and that engagement with email grows as audiences take on more life responsibilities, suggesting that owned channels may be more durable than AI-disrupted discovery.
The remaining evidence both reinforces and complicates the themes above.
•Andrew Kumar (AI That Actually Matters) cited MACH Alliance research showing that organisations with composable, API-first governance architectures adopt AI significantly faster than those with legacy monolithic systems.
•Jesse Dyck (600 Sites 8 Years Outdated) demonstrated that a six-phase governance approach — including an exhaustive audit phase — was the critical factor in successfully upgrading a 700-subsite WordPress multisite that had been frozen for eight years.
•James Harrison explicitly noted that the federated governance model at Loblaw is 'not the ideal way to build design systems' and causes significant personal and organisational stress, cautioning against it when alternatives exist.
•The design system mindset panel noted that governance by committee and heavy enforcement often drives teams to detach components and work around the system entirely — overly rigid governance can undermine the very adoption it aims to ensure.
•Andrea Ang (design system mindset panel) cited Kevin Foster's observation: 'If you build it, they won't come' — adoption requires active community work, not technical completeness.
•The design system mindset panel argued that component accessibility does not guarantee application accessibility: 'If the component itself is accessible... that's simply not true' — the relationship (how teams compose components) matters as much as the artefact.
•Dmitry Mayorov's session was almost entirely technical — focused on theme.json configuration, custom block architecture, and CSS management — suggesting that for some practitioners, the technical implementation problems are the most pressing and the relationship framing is secondary.
•James Harrison acknowledged that the federated model 'can work mostly right despite the personal toll,' suggesting that relationship-light approaches can function under resource constraints, even if suboptimally.
•Kevin Basarab (The Future of WebOps) reported 56% of developers using AI tools like GitHub Copilot, saving up to a full workday per week — primarily through code assistance on routine tasks.
•Charlotte Miller and Leanne Ruiz (From Questions to Clarity) found that their AI chatbot's highest-value function was deflecting repetitive transactional questions (payment processes, academic calendar dates) so staff could focus on complex cases.
•Aidan Foster (AI Page Building in Drupal Canvas) demonstrated AI creating complete landing pages from a single prompt — a genuinely creative and high-profile task — with an 80% usable output rate, suggesting that creative AI use cases are maturing and becoming viable.
•Nicole Rogers (Bringing AI to the Website) created a live digital assistant for the Royal Ontario Museum during her talk, demonstrating ease of creative and customer-facing AI deployment — though the ROI argument she made rested on conversion and engagement metrics, not pure efficiency.
•Luke Woolliscroft (The Unified Estate) described restructuring data with JSON schema markup specifically for LLM consumption as a core pillar of Empire Life's digital transformation.
•Justin Cook (Achieving Brand Visibility in AI Search) argued that 'compressibility' — how efficiently content can be reduced to its essential facts — is a core determinant of AI retrievability, and that clear headings, internal linking, and FAQ structure all contribute to this.
•Adie Margineanu (Creating Impact While Mitigating Risk) noted that 'institutional content governance gaps limited the specificity of eligibility and fee information' on the UTSC admissions website — content structure problems remain common and hard to resolve even on well-resourced projects.
•The McMaster chatbot team deliberately chose not to connect to internal student data systems, keeping the chatbot limited to public content — suggesting that full structure is sometimes impractical and that scope limitation is a valid alternative strategy.
•Charlotte Miller and Leanne Ruiz (From Questions to Clarity) described chatbot interaction data as 'a strategic content intelligence tool' that revealed where digital journeys broke down and which language confused students — user behaviour data informing content strategy.
•Joyce Peralta (Consistency at Scale) described McGill's Student Usability Panel as providing ongoing user experience insights that help evolve standards — user research embedded as a continuous governance mechanism.
•Chris Mantil (Setting the Tone) worked primarily with client preference and subjective design workshops — his visual style and tone grid is a structured method for eliciting preference data rather than empirical user testing, suggesting that user research is not universally applicable in all design contexts.
•Dayana Kibilds (Do People Still Read Emails?) drew entirely on published research and behavioural principles rather than presenting original user research — suggesting that secondary research and established frameworks can sometimes substitute for first-party testing.
•Brian Piper (Choose Your Own AI Adventure) structured his session around an audience-chosen scenario — pitching an AI committee to leadership — acknowledging that internal institutional resistance is the primary obstacle to AI adoption in higher education.
•Sean Stanleigh (Focus on the Signals) identified fear of job displacement as a major adoption barrier and noted that nearly half of Canadians use AI regularly despite 'wild west' initial workplace implementation approaches.
•Andrew Kumar (AI That Actually Matters) argued that technical architecture — composable, API-first systems — is a significant determinant of AI adoption speed, suggesting that technology readiness is not irrelevant and may be comparably important to cultural readiness.
•Anton Morrison (Building a Second Brain) built his entire AI-powered business operating system individually, without institutional change management — suggesting that at the individual or small-team level, technology adoption can outpace the need for formal change management.
•Nicole Rogers (Bringing AI to the Website) predicted that 'every company will eventually have a digital assistant representing their brand voice 24/7' and that websites will 'primarily function as interfaces for digital assistants rather than traditional navigation-based experiences.'
•Suzanne Dergacheva (Five Types of Landing Pages) provided a practical taxonomy of landing page purposes — why, wayfinding, decision-making, lead generation, dashboard — implying that each page should serve a specific function in a user journey that may begin before the website visit.
•Adie Margineanu's user research at UTSC showed a 62% increase in 'Apply Now' clicks in the first eight weeks post-launch — suggesting that traditional conversion-focused website design still produces measurable results for high-intent actions like admissions.
•Dayana Kibilds demonstrated that email consistently outperforms social media in ROI and conversion — suggesting that owned, direct channels remain valuable even as discovery channels shift to AI.
•The design system mindset panel discussed AI's 'commoditisation of design work' and concluded that it 'might force designers away from being production monkeys toward more critical thinking and human connection work.'
•Chris Mantil (Setting the Tone) built his entire practice around a human capability AI cannot replicate: the ability to translate between visual language and client intuition, build trust across the visual literacy gap, and make design vocabulary accessible to non-designers.
•Sean Stanleigh (Focus on the Signals) predicted 'fewer but higher-paid workers' and argued that quality content creation 'remains crucial to compete against AI-generated slop flooding the internet.'
•Anton Morrison (Building a Second Brain) demonstrated AI replacing tasks previously requiring strategic and creative human input — client research, business model canvases, UX research outputs — suggesting the boundary between 'production' and 'strategy' may be narrower than the human-skills framing implies.
•Andrew Kumar noted that Gartner forecasts 80% of customer interactions shifting to agentic experiences by 2028 — a prediction that implies significantly reduced human involvement in many knowledge work tasks.