Higher Education Digital Team Intelligence Brief — Evolve Digital Toronto 2026 | ConferenceDigest
Evolve Digital Toronto 2026 — Synthesized Intelligence for University Web Managers, Higher-Ed IT Directors, Registrar's Office Digital Leads, and Institutional Communications Teams Managing Multi-Site Digital Estates
Executive Summary
Evolve Digital Toronto 2026 surfaced a cluster of converging pressures that are landing simultaneously on higher education digital teams: multi-site estate sprawl demanding urgent consolidation, AI integration that is far more operationally tractable than the hype suggests, a fundamental shift in how prospective students discover institutions online, accessibility obligations that remain chronically under-resourced, and a funding environment in Ontario that is paradoxically accelerating modernisation. Four sessions were delivered by practitioners from Canadian institutions — McMaster, McGill, University of Toronto, and Sheridan College — and their specificity makes this conference unusually actionable for institutional teams.
The signal-to-noise ratio across sessions was high on one specific claim: AI adoption in higher ed is not primarily a technology problem. It is a change management problem (University of Toronto), a content governance problem (McMaster Registrar's Office), a stakeholder alignment problem (Sheridan College), and a research infrastructure problem (University of Toronto Scarborough). Teams that moved fastest did so by solving process and governance problems first, then layering AI on top.
The other consistent signal: zero-click search is not a future threat. It is a present one. Three separate sessions — from Justin Cook (9thCO), Brian Piper, and Martin Anderson-Clutz (Acquia/Drupal) — converged on the same finding: prospective students and other audiences are increasingly receiving synthesised answers inside AI tools without visiting institutional websites. The institutions whose content is structured, fast, accessible, and semantically rich will survive this transition. Those whose content is buried in PDFs, rendered client-side, or stranded on unmaintained subdomains will disappear from AI-generated answers.
Key Findings
1. Multi-Site Estate Debt Is a Strategic Risk, Not Just a Maintenance Problem
Two sessions addressed large-scale CMS estate management from angles directly applicable to higher education.
Jesse Dyck (Evolving Web), in "600 Sites 8 Years Outdated: A Massive Multisite WordPress Upgrade," documented a client WordPress multisite network that had been frozen at WordPress 4.9 since 2017, comprising nearly 700 subsites, 200 plugins, 43 themes, a 500 MB compressed database, and 143 GB of files — running on PHP 7 while PHP 8 became mandatory. The six-phase remediation (audit, cleanup, test, upgrade, deploy, maintain) reduced the network to approximately 350 sites, cut database and file sizes by 50%, and required months of sustained effort. The audit phase was identified as the single most important investment: without a complete inventory of every plugin, theme, and site — including active vs. dormant status — the upgrade would have been unmanageable. Key tools included WP-CLI for automation, PHPCS for PHP compatibility checking, and Playwright for visual regression testing across hundreds of sites.
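The audit phase Dyck describes reduces to aggregating per-site inventories into a network-wide view of what is actually in use. A minimal sketch of that aggregation step, assuming input shaped like the JSON that WP-CLI's `wp plugin list --format=json` emits per subsite (the site URLs and plugin names below are illustrative):

```python
from collections import Counter

def summarize_plugin_usage(site_reports):
    """Aggregate per-site plugin reports into network-wide usage counts.

    site_reports: dict mapping site URL -> list of plugin records, each
    mimicking `wp plugin list --format=json` output ({"name": ..., "status": ...}).
    Returns (active_counts, never_active): never_active is the set of plugins
    installed somewhere but active nowhere -- the obvious removal candidates.
    """
    active = Counter()
    installed = set()
    for site, plugins in site_reports.items():
        for p in plugins:
            installed.add(p["name"])
            if p["status"] == "active":
                active[p["name"]] += 1
    never_active = installed - set(active)
    return active, never_active

# Illustrative data standing in for real WP-CLI output across the network.
reports = {
    "https://a.example.edu": [
        {"name": "akismet", "status": "active"},
        {"name": "old-slider", "status": "inactive"},
    ],
    "https://b.example.edu": [
        {"name": "akismet", "status": "active"},
        {"name": "old-slider", "status": "inactive"},
    ],
}
active, dead = summarize_plugin_usage(reports)
```

At 200 plugins across ~700 subsites, a report like this is what turns "cleanup" from guesswork into a defensible deletion list.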
Joyce Peralta (McGill University), in "Consistency at Scale in Higher Education," described McGill's ecosystem: approximately 1,000 centrally hosted Drupal websites, 500 custom websites, 1,300 active content creators, and 6,000–8,000 content edits per week during peak periods. Her three-pillar framework — institutional standards, governance frameworks, and community of practice — took five years to develop into its current form. A web registry implemented in ServiceNow with annual attestation requirements now functions as the primary lifecycle management mechanism. McGill is currently conducting 50 subject matter expert interviews to build a unified institution-wide content model, which is the prerequisite for coherent AI integration across the estate.
Sheridan College's case, presented by Nicole Woodall and Ian Barcarse in "The Stakeholder Maze," showed what estate consolidation looks like under financial duress. Sheridan is migrating off Sitecore (used for 15–17 years) onto Drupal under a compressed timeline while absorbing staff reductions driven by Ontario's higher-ed funding crisis. Rogue departmental WordPress sites and a standalone HTML-only registrarial website with no analytics and no maintainer were flagged explicitly as active SEO liabilities and business continuity risks. The paradox Nicole Woodall identified is worth noting: the financial crisis is functioning as a consolidation accelerant, creating executive urgency for open-source shared platforms that was previously hard to generate in stable conditions.
2. AI Integration Is Tractable When Governance Comes First
The most important higher education AI case study at the conference was Charlotte Miller and Leanna Ruiz's presentation "From Questions to Clarity: Using AI to Transform Student Service" from McMaster University's Office of the Registrar.
McMaster serves 37,000+ students and sends 1,600+ email campaigns annually. Their chatbot evolution ran over several years: a 20-person live chat service launched during the 2020 pandemic, a rule-based intent/keyword chatbot introduced in 2022 (requiring 25–30 manually written question variations per topic), and finally a generative AI chatbot piloted March–September 2025 using retrieval-augmented generation (RAG) against the Registrar's website. The pre-launch governance process took close to a year and included a formal privacy impact assessment, an IT security review with 300+ checklist items, and AI disclaimers and source attribution built into the interface.
Key design decisions: the AI was restricted to public content only (starting with the Registrar's website, then expanded to include the 900-page undergraduate academic calendar, which was ingested as a RAG source in under two minutes). No connection to internal student data systems was made. A soft launch revealed that the legacy intent confidence threshold (80–85%) was intercepting queries the generative AI should have handled; raising the threshold to ~95% shifted the balance so generative AI handled 70–80% of all answers.
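The threshold tuning McMaster describes is, at its core, a routing decision: queries the legacy intent classifier matches with high confidence get the canned answer, and everything else falls through to the generative RAG path. A minimal sketch of that logic (the function name and return labels are illustrative, not McMaster's implementation):

```python
def route_query(intent_confidence, threshold=0.95):
    """Route a student query between the legacy intent system and the
    generative RAG path. Raising the threshold (McMaster moved from
    ~0.80-0.85 to ~0.95) shifts more traffic to the generative side.
    Illustrative logic only, not the production system."""
    return "canned" if intent_confidence >= threshold else "generative"

# At the old 0.85 threshold this borderline query is intercepted:
assert route_query(0.88, threshold=0.85) == "canned"
# At ~0.95 the same query reaches the generative RAG answerer instead:
assert route_query(0.88) == "generative"
```

The lesson generalises: when layering generative AI onto a legacy intent system, the intercept threshold is a tunable product decision, not a fixed property of either system.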
Pilot results after six months: positive interactions increased 19%; generative AI responses were rated accurate 69% of the time — more than double the accuracy of the prior canned-response system; generative AI handled approximately 62% of all student inquiries; live-chat escalations to staff dropped measurably. A student who messaged at 12:23 AM asking how to pay for university without personal funds received three relevant resources instantly — an interaction that, a year earlier, would have required staff.
Beyond deflection, the chatbot produced strategic content intelligence: it surfaced where digital journeys broke down, which terminology confused students, and where navigation failed — directly informing the communications team's content strategy.
Emma Nguyen and Gary Bhanot (University of Toronto) in "AI in Practice, Not Theory" described their four-phase AI adoption framework at U of T's decentralized marketing and communications hub: planning (leadership buy-in, problem-first methodology), engagement (60+ member cross-functional working groups), enablement (mini business cases for institutional approval), and scaling. Their most concrete outcome: a custom GPT that processes email copy decks into HTML emails with metadata, tracking parameters, and brand elements across 30 university divisions — reducing production time from 10 minutes to 3 minutes per email and eliminating human error in repetitive steps. A second tool called "Mailense" analyzes email performance using historical campaign data. Critically, they standardized existing workflows before applying AI; the AI augmented a documented process rather than being applied to an undocumented one.
Brian Piper's session "Choose Your Own AI Adventure" used a fictional Evolve University scenario to demonstrate multi-tool AI workflows for higher education marketing, including building an AI committee pitch using ChatGPT, Gemini, NotebookLM, and Claude in sequence. His "human-in-the-loop sandwich" — human expertise at the start, AI for research and drafting, human validation at the end — is directly applicable to institutional approval-heavy environments. His rule: if you use a prompt more than three times, build a custom GPT from it.
3. Zero-Click Search Is Restructuring How Prospective Students Find Institutions
Justin Cook (9thCO) in "Achieving Brand Visibility in the Era of AI Search" delivered the most technically grounded session on what institutions need to do to remain discoverable as AI search becomes the dominant entry point for information-seeking behaviour. With nearly one billion weekly ChatGPT users, zero-click searches are rising sharply. Cook's key technical point: LLMs do not index the web in real time. They retrieve content only when their confidence is low, firing sub-queries against a search API (Bing for ChatGPT, Google for Gemini), then fetching and synthesising relevant pages. There is no ranking algorithm inside AI search tools — the goal is to be trusted enough to be retrieved.
His four-part framework for "Answer Engine Optimisation" (AEO) is directly applicable to institutional web estates:
Eligibility: Pages must be fast, server-side or statically rendered, and served from edge CDNs. Client-side rendering, lazy loading, and infinite scroll can render content invisible to AI crawlers. Many university programme finders and calendar systems fail this test.
Authority: Genuine contextual brand mentions — conference sponsorships, open-source contributions, editorial coverage, podcast transcripts, directory listings — build the trust signals AI tools use to determine whether an institution is credible on a given topic.
Compressibility: Content must reduce cleanly to its essential facts. Well-structured headings, clear title tags, internal linking, FAQs, and accessibility-compliant markup all improve compressibility. Institutions with dense, jargon-heavy programme descriptions written for compliance rather than comprehension fail this test.
Association: Schema markup, organisational data, and clearly defined programmes and services help AI determine when to surface an institution in response to a specific query.
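The Association pillar is the most mechanical to act on. A sketch of JSON-LD markup for a programme page, using schema.org's `EducationalOccupationalProgram` type — the programme details and institution name below are invented for illustration:

```python
import json

# Hypothetical programme record; all field values are illustrative.
program = {
    "@context": "https://schema.org",
    "@type": "EducationalOccupationalProgram",
    "name": "Bachelor of Computing Science",
    "provider": {"@type": "CollegeOrUniversity", "name": "Example University"},
    "educationalProgramMode": "full-time",
    "timeToComplete": "P4Y",  # ISO 8601 duration: four years
}

json_ld = json.dumps(program, indent=2)
# Embedded in the page head so crawlers can read it without executing JS:
snippet = f'<script type="application/ld+json">\n{json_ld}\n</script>'
```

Structured data like this is what lets an AI tool confidently associate "computing degrees in Ontario" with a specific institution and programme rather than guessing from prose.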
Cook also previewed Model Context Protocol (MCP), a standard that lets brands expose product data and booking tooling directly within AI chat interfaces. For institutions, this could eventually mean programme information, application steps, and campus visit booking surfacing inside ChatGPT or Gemini without a user ever visiting the institutional website.
Martin Anderson-Clutz (Acquia/Drupal) in "The AI-Driven DXP" reinforced this with a sharper framing: bot traffic already exceeds human traffic, and bad bot traffic is projected to surpass human traffic by 2030. The institutional website's role is no longer to educate visitors but to serve as a brand validation and conversion layer for visitors who have largely made their decisions inside AI tools — and to serve agents that may complete tasks (like requesting information or beginning applications) on behalf of users who never visit the site at all. His practical prescription for content teams: serve content in AI-consumable formats (JSON, Markdown) alongside HTML, and expose well-documented API endpoints so LLM crawlers can ingest institutional content cleanly.
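Anderson-Clutz's "AI-consumable formats alongside HTML" advice amounts to serving the same content record in whichever representation a client asks for. A minimal sketch of that content negotiation, assuming a simple Accept-header check — the record fields and header handling are illustrative, not a specific platform's API:

```python
import json

def render_program(record, accept="text/html"):
    """Return a programme record in the representation the client requests.
    Sketch only: real content negotiation parses and ranks the full Accept
    header rather than substring-matching it."""
    if "application/json" in accept:
        return json.dumps(record)
    if "text/markdown" in accept:
        lines = [f"# {record['name']}", ""]
        lines += [f"- **{k}**: {v}" for k, v in record.items() if k != "name"]
        return "\n".join(lines)
    return f"<h1>{record['name']}</h1>"  # fall back to the HTML page

rec = {"name": "MSc Data Science", "length": "2 years", "campus": "Downtown"}
md = render_program(rec, accept="text/markdown")
```

The point is architectural: if the canonical content lives in a structured record, emitting JSON or Markdown for agents is a rendering concern, not a second content pipeline to maintain.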
4. Accessibility Compliance Requires Structural Investment, Not Point-in-Time Audits
The "Accessibility Unlocked" panel moderated by Fran Wyllie, featuring Jeevan Bains (Rogers), Niki Ramesh (CBC), Pina D'Intino (Aequum Global Access), and Juan Olarte (Digita11y Accessible), produced the most policy-relevant findings for institutional compliance teams.
Juan Olarte's quantified caution is critical: automated accessibility testing tools currently catch only 25–35% of issues. Any vendor claiming 100% coverage is unreliable. AI is particularly weak at detecting cognitive and neurodivergent accessibility barriers. For institutions under AODA obligations, this means automated auditing is insufficient and human validation by people with lived experience of disability is non-negotiable.
The panel also surfaced the financial argument that travels best with leadership: remediating inaccessible products after the fact is far more costly than embedding accessibility from the start. This is the same argument institutions make for technical debt in CMS systems. Project managers — frequently overlooked as an accessibility training audience — were identified as the most important intervention point: they set project scope and must raise accessibility as a requirement from day one.
Pina D'Intino's maturity mapping framework is the recommended starting point for institutional teams: assess current state across strategy, standards and roles, product planning, design and development processes, procurement, and training. The panel assessed Ontario's AODA as groundbreaking but under-enforced, and the federal Accessible Canada Act as similarly hampered. However, they noted that enforcement trajectory is upward, and early-mover institutions will avoid future remediation costs.
Adie Margineanu (University of Toronto Scarborough) in "Creating Impact While Mitigating Risk: The Strategic Value of User Research" provided an adjacent finding: accessibility and user research are mutually reinforcing. Her UTSC admissions site redesign — conducted over 10 months with 194 participants in five research studies — produced quantifiable outcomes: Google average position improved from 10.7 to 6.5; sessions increased 10% year-over-year despite sector-wide AI-driven declines; session duration rose 3.8% site-wide and 19% on programme pages; and conversion (clicks on "Apply Now") increased 62% in the first eight weeks post-launch during peak admissions season. Research cost less than 10% of total project spend (recruitment only, using an internal lead) and consumed approximately 20% of project timeline. The tree test result is particularly instructive for registrar teams: prospective students strongly preferred a journey-based navigation structure (programmes → applying → finances → campus) over one organised by institutional departments.
5. Email Remains the Highest-ROI Channel — But Requires Discipline
Dayana Kibilds (Ologie) in "Do People Still Read Emails? Yes. Just Not the Way You Think" delivered the most evidence-dense communication session. Key data for institutional email programmes: 58% of people check email first thing every morning; email consistently outperforms social media in conversion rate and ROI; the average email gets nine seconds of attention — but only among those who actually engage. A third of recipients glance for two seconds or less. A Litmus 2022 study found 54% of people open email because it is relevant; only 19% for personalisation.
For institutional communications teams, several findings apply directly: subject lines should summarise the email's content in six to nine words, not generate curiosity. First-name personalisation is universally recognised as a database field and should be replaced with genuinely relevant second-person segmentation. In newsletter emails, headings must tell the complete story because most readers only scan headings — a direct finding for student affairs newsletters, alumni communications, and academic calendar updates. Screen readers read emoji names aloud verbatim, which can radically alter meaning — a compliance consideration for institutions under AODA.
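The six-to-nine-word subject line rule is simple enough to automate as a first-pass lint before a campaign ships. A sketch of such a check — the word-count heuristic is mine, applying Kibilds' guidance; it cannot judge whether the line actually summarises the email:

```python
def lint_subject(subject):
    """Flag subject lines outside Kibilds' six-to-nine-word guidance.
    A word-count heuristic only: human judgment still decides whether the
    line summarises the email rather than teasing it."""
    n = len(subject.split())
    if n < 6:
        return f"too short ({n} words): add specifics"
    if n > 9:
        return f"too long ({n} words): trim to the core message"
    return "ok"

assert lint_subject("Fall 2026 tuition payment deadline is March 1") == "ok"
```

A check like this slots naturally into a campaign review checklist alongside CTA and screen-reader emoji audits.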
Strategic Implications
The Estate Consolidation Window Is Now
For institutions still managing rogue departmental websites, unmaintained subdomains, or fragmented CMS environments, the combination of financial pressure, AI search restructuring, and accessibility compliance risk creates an unusually clear consolidation mandate. Sites that cannot be found by AI crawlers, do not meet WCAG standards, and have no maintainer are simultaneously liabilities on three fronts. Sheridan's experience shows that a funding crisis — counterintuitively — generates the executive alignment needed to consolidate. McGill's experience shows that the governance infrastructure (web registry, annual attestation, content model) must be built before the technical consolidation or the new estate will fragment again.
AI Readiness Is a Content Governance Question
McMaster's chatbot success was built on a simple foundation: the Registrar's website was already well-maintained and accurate. The AI did not create good content; it surfaced content that was already good. Institutions with inconsistent, outdated, or departmentally fragmented web content will find AI tools amplifying those problems rather than solving them. The prerequisite for institutional AI readiness — whether for chatbots, AI-assisted content creation, or AEO optimisation — is rigorous content governance: a unified content model, regular auditing, clear ownership, and documented standards. McGill's 50-SME interview process to build an institution-wide content model is the clearest available template.
The Programme Finder Is the Highest-Stakes Digital Surface
Multiple sessions converged on the programme discovery experience as the single most consequential digital surface for prospective students. Adie Margineanu's tree testing showed that students prefer journey-based navigation, not departmental navigation — a finding that directly contradicts how most institutional programme finders are organised. The UTSC programme finder usability failure (advanced filters placed above the fold, causing users to assume they must complete them before seeing results) is likely endemic across institutional websites. The 62% increase in "Apply Now" clicks following research-driven redesign represents real enrolment impact. For registrar's offices and web teams, this is the most direct available evidence that UX investment in programme discovery pays for itself in admissions outcomes.
On the AI search front, programme pages are precisely the content type most at risk from poor structure. AI tools attempting to answer "what programmes does X university offer in Y field" need clean, structured, compressible content. Dense prerequisite tables, PDF-locked calendar entries, and JavaScript-rendered programme finders are likely invisible to AI crawlers.
Staffing Models Are Shifting
Several sessions touched on team structure changes under AI. Anton Morrison's demonstration of Claude Code showed a consultancy reducing headcount from 12 to 6 while maintaining output. University of Toronto's email workflow reduced per-campaign production time by 70%. Sean Stanleigh (Globe Content Studio) predicted fewer but higher-paid workers, salary band compression upward, and the rise of "polyworking." For higher-ed digital teams already operating under hiring freezes or staff reductions, AI tooling is increasingly a capacity strategy rather than an efficiency play. The governance implication: teams that standardise and document their workflows before implementing AI will capture the capacity gains. Teams that apply AI to undocumented, ad-hoc processes will not.
Procurement Must Account for AI Architecture
For institutions evaluating CMS platforms or rebuilding their digital stack, several technical signals from the conference are directly relevant to procurement decisions. Martin Anderson-Clutz argued for LLM-agnostic platforms to avoid vendor lock-in as model rankings shift. Preston So (React Bricks) and Andrew Kumar (Uniform) both identified composable, API-first architecture as the prerequisite for AI readiness — organisations using legacy, tightly coupled systems are adopting AI significantly more slowly than those with composable stacks. The Drupal AI Canvas demonstration by Aidan Foster showed a working prototype of AI-assisted page generation within a structured component system, achieving an 80% usable output rate. For institutions standardising on Drupal, the AI initiative roadmap is production-focused for 2026 and represents a significant near-term capability uplift without requiring platform migration.
Action Items
Immediate (0–30 Days)
For web managers and IT directors:
- Audit your programme finder pages against Cook's Eligibility criteria: Are they server-side rendered? Do they load in under two seconds? Are they crawlable by AI bots, or are they JavaScript-rendered with client-side filtering? Run a technical crawl and flag any pages that would be invisible to AI retrieval.
- Run a prompt evaluation audit: type the five most common prospective student questions about your institution into ChatGPT and Gemini. Does your institution appear in the synthesised answers? Does your website contain the structured content needed to qualify?
- Identify and inventory all unmaintained departmental websites, HTML-only microsites, and rogue subdomains. Flag those with no analytics, no named maintainer, and no update in the past 12 months as immediate consolidation candidates.
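The crawlability check in the first action item can start as a crude heuristic: fetch the page without executing JavaScript and verify that the content you expect AI tools to cite actually appears in the raw HTML. A sketch of that check — the sample markup and phrases are invented for illustration, and a real audit would fetch live pages and test many phrases per template:

```python
def server_rendered(raw_html, expected_phrases):
    """Crude eligibility check: if a page's key content appears in the raw
    HTML (fetched without executing JavaScript), AI crawlers can read it.
    An empty JS-shell page fails. Case-insensitive substring match only."""
    return all(p.lower() in raw_html.lower() for p in expected_phrases)

# A client-rendered shell: the programme content only exists after JS runs.
shell = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'
# A server-rendered page: the content is in the markup itself.
ssr = "<html><body><h1>Programs</h1><p>Bachelor of Nursing</p></body></html>"
phrases = ["Bachelor of Nursing"]
```

Pages that fail this test are strong candidates for the server-side rendering or static generation work flagged in the 90+ day items.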
For registrar's office digital leads:
- Review McMaster's RAG chatbot implementation model. Assess whether your registrar's website meets the content quality baseline McMaster used as the RAG data source. If the website content is inconsistent or outdated, content remediation must precede any AI chatbot pilot.
- Begin a privacy impact assessment scoping exercise for a generative AI student inquiry tool. At large institutions, McMaster's experience suggests this process takes close to a year; starting now positions a pilot for 2026–27.
For institutional communications teams:
- Audit your email programme against Kibilds' framework. Pull five recent institutional emails and assess: Does each deliver its full message in two seconds? Do subject lines summarise content in six to nine words? Do all CTAs include a verb and context? Are emojis used in ways that will be read correctly by screen readers?
- Assess your custom GPT / AI tool inventory. Following the University of Toronto model, convene a cross-functional working group to identify the three to five highest-volume, most repetitive content production tasks (email coding, metadata writing, social caption drafting) and scope a structured pilot.
Near-Term (30–90 Days)
Conduct a tree test of your programme navigation using prospective students as participants, not faculty or staff. Margineanu's UTSC finding — that students prefer journey-based navigation over departmental structure — is directly testable against your own audience. Staffed internally, the research costs well under $5,000.
Implement a web registry with annual attestation requirements, following McGill's ServiceNow model. Every active site in the estate should have a named owner, an annual review date, and a documented purpose. Sites failing attestation should enter an archival or consolidation pipeline.
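The registry's lifecycle rules are simple enough to express as code from day one. A sketch of the attestation check described above — the field names and the one-year window are assumptions modelled on McGill's approach, not its ServiceNow implementation:

```python
from datetime import date, timedelta

def attestation_status(site, today):
    """Classify a registry entry: a site with no named owner, or an
    attestation older than a year, enters the consolidation pipeline.
    Field names ('owner', 'last_attested') are illustrative assumptions."""
    if not site.get("owner"):
        return "consolidate: no named owner"
    if today - site["last_attested"] > timedelta(days=365):
        return "consolidate: attestation overdue"
    return "active"

site_ok = {"owner": "J. Doe", "last_attested": date(2026, 1, 10)}
site_stale = {"owner": "J. Doe", "last_attested": date(2024, 6, 1)}
today = date(2026, 3, 1)
```

Encoding the rules this explicitly is what makes annual attestation enforceable rather than aspirational.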
Develop or update your institution's accessibility maturity map using D'Intino's six-pillar framework (strategy, standards and roles, product planning, design and development, procurement, training). Identify the single pillar with the largest gap and resource a focused improvement initiative rather than spreading effort across all six.
Begin structured schema markup implementation on programme pages, FAQ content, and organisational data. This is the lowest-cost, highest-return AEO intervention and can be implemented without a CMS platform change.
Strategic (90+ Days)
Initiate an institution-wide content model project using McGill's 50-SME interview approach as a template. Map terminology, ownership, and content relationships across the estate before any AI tooling is deployed at scale. Without this foundation, AI will amplify content inconsistencies rather than resolve them.
Evaluate your CMS platform against AI readiness criteria: Is it LLM-agnostic? Does it serve content via well-documented REST/GraphQL/JSON API endpoints? Does it support MCP? Does it enforce brand context in AI-generated layouts? If your current platform scores poorly on these criteria, include AI architecture requirements in your next procurement cycle.
Build a repeatable user research model into your web programme, following Margineanu's UTSC template: one internal UX lead, four research phases (IA validation, prototype testing, sentiment testing, post-launch testing), embedded in the project lifecycle rather than commissioned as a separate engagement. Secure executive buy-in and dedicated budget before the next major redesign cycle begins.
Assess your content delivery infrastructure for AI crawler performance. If programme finders are JavaScript-rendered, investigate server-side rendering or static generation options. If the academic calendar is PDF-only, pilot a structured HTML or JSON delivery layer.
Sessions to Watch
“From Questions to Clarity: Using AI to Transform Student Service” — Charlotte Miller & Leanna Ruiz (McMaster University)
The most directly applicable higher education AI case study at the conference. Covers the full arc from governance and privacy review through pilot design, threshold tuning, measurement, and outcomes. The six-month pilot results — 69% AI accuracy rate vs. previous system, 62% of inquiries handled by generative AI, measurable reduction in staff escalations — provide the benchmark data institutional teams need to build internal business cases. The content intelligence angle (chatbot interactions surfacing content gaps and navigation failures) is particularly undervalued.
“Consistency at Scale in Higher Education” — Joyce Peralta (McGill University)
The essential governance reference for any institution managing a large Drupal or multi-CMS estate. Peralta's five-year arc from ad-hoc standards to a structured community of practice, web registry, and institution-wide content model is the most detailed available public account of how large Canadian universities build sustainable digital governance. The content model project currently underway at McGill is directly replicable.
“Creating Impact While Mitigating Risk: The Strategic Value of User Research” — Adie Margineanu (University of Toronto Scarborough)
The strongest ROI argument for UX research investment available in this corpus. The UTSC admissions site redesign produced a 62% increase in "Apply Now" conversions in the first eight weeks post-launch, driven by a single internal researcher working across five research studies with 194 total participants for less than 10% of project budget. The programme finder findings — journey-based navigation outperforms departmental navigation, advanced above-the-fold filters cause user confusion, co-op terminology is opaque to international students — are likely transferable across Canadian institutions.
“Achieving Brand Visibility in the Era of AI Search” — Justin Cook (9thCO)
The most technically grounded session on Answer Engine Optimisation. Cook's Eligibility/Authority/Compressibility/Association framework provides an actionable audit checklist for institutional web infrastructure teams. His framing of the AI search retrieval mechanism — LLMs fire sub-queries against Bing or Google when confidence is low, then extract and synthesise from retrieved pages — corrects widespread misconceptions about how AI search works and makes the optimisation task concrete rather than abstract.
“The Stakeholder Maze” — Nicole Woodall, Ian Barcarse & Jessie Johnston (Sheridan College / Evolving Web)
The most honest account of what institutional digital transformation looks like under financial pressure. The Sitecore-to-Drupal migration at Sheridan — compressing a multi-year project into a resource-constrained timeline during an Ontario higher-ed funding crisis — provides a candid template for institutions facing similar circumstances. The homepage slider case study (removing a politically entrenched element using user research data and incoming presidential backing) is a useful model for any institutional team trying to use data to override stakeholder opinion.
“AI in Practice, Not Theory” — Emma Nguyen & Gary Bhanot (University of Toronto)
The essential change management reference for institutional AI adoption. The four-phase framework (plan, engage, enable, scale) and the emphasis on problem-first identification — not technology-first selection — directly address the most common failure modes in institutional AI initiatives. The email production case study (10 minutes to 3 minutes per campaign, zero human error in repetitive steps, across 30 university divisions) provides a concrete, measurable outcome that can be replicated with modest internal investment.
“Accessibility Unlocked: People, Tools, and What's Next” — Panel moderated by Fran Wyllie (Northern)
Critical for any institution interpreting AODA compliance as an automated audit exercise. Juan Olarte's finding that automated tools catch only 25–35% of accessibility issues directly challenges the adequacy of most institutional compliance programmes. Pina D'Intino's maturity mapping framework and the panel's consensus on role-based training (developers need technical standards; project managers need scope-setting language; executives need cost-avoidance framing) provide a practical governance redesign roadmap.