Design & UX Lead Intelligence Brief
Event: Evolve Digital Toronto 2026
Prepared for: Design directors, UX leads, design system managers, and accessibility specialists
Source sessions analysed: 25
Executive Summary
Evolve Digital Toronto 2026 surfaced a clear inflection point for design and UX practices: the craft is being pulled in two directions simultaneously. On one side, AI tooling is compressing production timelines to the point where Drupal Canvas can assemble a landing page from a single text prompt in under two minutes, and federated design system teams are exploring AI-driven component regeneration to combat entropy. On the other side, multiple speakers — including accessibility leaders at Rogers, CBC, and RBC — and user research practitioners delivered a counter-argument: that human judgment, lived experience, and rigorous research methodology are precisely what make AI output trustworthy and design outcomes defensible.
Four themes dominated the design-relevant sessions. First, design systems are graduating from component libraries into governance and cultural infrastructure — the panel featuring Arena Stoka (Bell), Andrea Ang (RBC), and David Cox (Lyft) was emphatic that adoption, education, and stakeholder relationships matter more than technical completeness. Second, AI is restructuring the content-design interface: visual headless CMS platforms (Preston So, React Bricks), AI-powered page builders (Aidan Foster, Drupal Canvas), and enterprise design system workflows (James Harrison, Loblaw Digital) are all converging on the same requirement — structured, token-based component models that give AI bounded contexts to generate within. Third, accessibility is no longer a compliance checklist; it is being elevated to organizational strategy, with panellists arguing it belongs alongside security and privacy in executive risk frameworks. Fourth, user research ROI has a new, concrete benchmark: Adie Margineanu's UTSC admissions redesign produced a 62% increase in conversion on a budget that represented less than 10% of total project cost, establishing a replicable evidence base that design directors can use to secure research budgets.
The signal for design leaders: the teams that will thrive in the next two years are those that have done the foundational work — robust token architectures, documented content models, accessibility baked into component libraries, and institutionalised research practice — because those foundations are exactly what AI tools require to generate anything other than generic output.
Key Findings
1. Design Systems Are a Relationship, Not a Repository
The panel "The design system mindset: Principles, patterns, and real-world lessons" (Arena Stoka, Andrea Ang, David Cox, moderated by Guy Seagull of Thomson Reuters) opened by dismantling the most persistent misconception in the field. Arena Stoka (Bell) described a design system as "a relationship between design and development, kind of like a contract that has to be maintained." Andrea Ang (RBC) cited Kevin Foster's observation that "if you build it, they won't come" as the guiding principle behind RBC's heavy investment in community and adoption work rather than component completeness.
Key governance findings from this session:
The rule of three teams: A component should serve at least three different teams before it warrants inclusion in the core system. High-visibility products may earn exceptions for strategic adoption gains, but this threshold prevents bloat.
Education over policing: The panel unanimously rejected enforcement models. Ang recommended understanding what teams are actually adopting versus what they think they are adopting. David Cox (Lyft) advocated a "let them learn" posture over centralized compliance gates.
Accessibility composition gap: Ang warned directly against the common assumption that "if the component itself is accessible, my composition will be accessible," calling it "simply not true." Component accessibility does not guarantee application accessibility; composition context, heading hierarchy, and interaction state management remain team responsibilities.
Metrics beyond adoption rates: The panel recommended measuring passionate collaborators, narrative-driven success stories, and monetary value calculations rather than raw adoption percentages, which can mask unhealthy or surface-level uptake.
AI and craft: Andrea Ang acknowledged concern about AI "commoditising" design production but framed it as potentially liberating designers from being "production monkeys" toward more critical thinking and human connection work. David Cox flagged deterioration in craft care as a genuine risk.
James Harrison's session "Practical advice for building a system without a team" provided a practitioner-level complement. Harrison, Staff Product Designer at Loblaw Digital, built the Helios design system solo across 20 websites and 9 apps serving millions of daily users, spanning brands including PC Optimum, Shoppers Drug Mart, and Loblaws. His architecture decisions are directly applicable to multi-brand environments:
Headless design token architecture: Storing design information centrally and distributing it bidirectionally to both Figma and code eliminates handoff and enables real-time reskinning across brands by switching theme-level tokens.
Federated governance warning: Harrison explicitly advised against federated models, pointing to Spotify's retreat from the approach and to design systems consultant Nathan Curtis's accounts of other organisations doing the same. The federated model creates quality control gaps, process ambiguity, and a tragedy of the commons: maintenance work goes unaddressed when everyone owns it but no one is specifically responsible.
Permissive entry, systematic improvement: Harrison's counterintuitive principle — allowing imperfect components into the system because "being in the system was more important than what the system was" — enabled later systematic alignment (e.g., normalising border radii across all components).
Figma school and dev school: Harrison created mandatory training modules covering systems thinking, token application, and documentation, drawing on a decade of college teaching experience. These became organisationally required for anyone contributing to the system.
AI for design system entropy: Harrison is actively exploring AI for component regeneration, automated migration, and production code generation from design work — framing AI as the eventual solution to the tragedy of the commons problem.
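Harrison's headless token architecture can be sketched in miniature: theme-level tokens hold per-brand values, component-level tokens alias into them, and switching the active theme reskins every component at once. All token and brand names below are invented for illustration; this is not Helios's actual schema.

```typescript
// Theme-level tokens: one record per brand (values are hypothetical).
type ThemeTokens = Record<string, string>;

const brandA: ThemeTokens = {
  "color.brand.primary": "#c8102e",
  "radius.base": "8px",
};

const brandB: ThemeTokens = {
  "color.brand.primary": "#d31245",
  "radius.base": "4px",
};

// Component-level tokens are aliases into whichever theme is active,
// so the component definition never changes between brands.
const buttonTokens = {
  background: "color.brand.primary",
  cornerRadius: "radius.base",
};

function resolve(theme: ThemeTokens, alias: string): string {
  const value = theme[alias];
  if (value === undefined) throw new Error(`unknown token: ${alias}`);
  return value;
}

// Reskinning: same component, different theme.
const buttonOnA = resolve(brandA, buttonTokens.background); // "#c8102e"
const buttonOnB = resolve(brandB, buttonTokens.background); // "#d31245"
```

Distributing the same central token source to both Figma (via a tokens plugin) and code is what eliminates the handoff step Harrison described.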
2. AI Requires Design Foundations to Perform — and Will Expose Gaps Immediately
Aidan Foster's session "AI page building in Drupal Canvas" provided the most concrete demonstration of AI's dependence on upstream design work. Foster, UX lead for the Drupal AI project and founder of Foster Interactive, showed Canvas generating on-brand landing pages in 1–2 minutes from text prompts — but with a critical caveat: the same prompts run without brand guidelines, audience personas, and content strategy produced what he called "AI slop" and hallucinated content. The system's 80% usable output rate (one in five attempts excellent, three in five needing minor tuning, one requiring a complete restart) is contingent on the quality of the context documents fed into the AI Context Control Center.
Design system implications from Foster's session:
AI page assembly uses the same component-based design system as human editors — the system selects from a defined component library, respects brand rules, and draws from a 200-image media library described with vector embeddings. A well-structured design system is directly what makes AI-generated output brand-compliant.
Autonomous agents can propagate updates site-wide when an organisational fact changes in the context center, automatically identifying and drafting revisions to all affected pages. This makes content model accuracy a design system concern, not just a content team concern.
Human insight cannot be outsourced: Foster closed by warning that as AI-generated content floods the internet, organisations must double down on authentic user research and brand strategy to remain distinct.
Preston So (React Bricks) extended this analysis in "How to make AI work for everyone with visual headless CMS," introducing the concept of "agent experience" (AX) — coined by Netlify CEO Matt Biilmann — as a required design dimension alongside developer experience (DX) and user experience (UX). So's core argument: AI needs clear structure, bounded contexts, and human oversight. The component/brick model is the key atomic unit for AI-generated content because it prevents hallucination and keeps output aligned with design systems. Props-driven architecture lets developers expose only the fields and configuration options AI (and editors) should control.
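The bounded-context idea above can be made concrete with a small sketch: a props schema declares exactly which fields an AI (or editor) may set, and anything outside it is rejected. The schema shape, field names, and validator here are invented for illustration and are not React Bricks' actual API.

```typescript
// A field specification: the only shapes of input the component accepts.
type FieldSpec =
  | { kind: "text"; maxLength: number }
  | { kind: "enum"; options: string[] };

// Hypothetical "hero" brick: AI may control headline and tone, nothing else.
const heroBrickSchema: Record<string, FieldSpec> = {
  headline: { kind: "text", maxLength: 80 },
  tone: { kind: "enum", options: ["informative", "urgent"] },
};

// Reject generated props that fall outside the exposed schema, keeping
// AI output inside the design system's bounded context.
function validateProps(
  schema: Record<string, FieldSpec>,
  props: Record<string, string>
): string[] {
  const errors: string[] = [];
  for (const [key, value] of Object.entries(props)) {
    const spec = schema[key];
    if (!spec) {
      errors.push(`unknown field: ${key}`);
      continue;
    }
    if (spec.kind === "text" && value.length > spec.maxLength)
      errors.push(`${key} exceeds ${spec.maxLength} chars`);
    if (spec.kind === "enum" && !spec.options.includes(value))
      errors.push(`${key} must be one of: ${spec.options.join(", ")}`);
  }
  return errors;
}
```

A compliant payload such as `{ headline: "Fall open house", tone: "urgent" }` passes; a payload that tries to set, say, `backgroundColor` is flagged, which is the hallucination-prevention property So attributes to the component/brick model.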
So also flagged emerging shared context conventions: llm.txt and routes.md files as lightweight standards for communicating brand voice, page structure, and content relationships to AI tools without repeated prompting. These are practical, near-term additions to any design documentation suite.
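There is no ratified standard for these files yet; as an illustration of the idea (every field name and value below is invented for this sketch), an llm.txt might read:

```text
# llm.txt (illustrative sketch; no ratified standard yet)
brand-voice: plainspoken, confident, no superlatives
audiences: design directors, UX leads, design system managers
page-types:
  landing: hero, three proof points, single call to action
  article: byline, table of contents, related-content footer
components: see routes.md for which components each page type may use
```

The value is not the exact format but that the context lives in version control alongside the design documentation, rather than being re-typed into every prompt.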
Martin Anderson-Clutz (Acquia) in "The AI-Driven DXP" reinforced the design system dependency: when generating layouts with AI — whether inside-out (describe layout within the CMS) or outside-in (generate frontend via tools like Cursor or Lovable then import it) — brand context management is the critical failure point. Without enforced brand context, generated layouts diverge from design systems rapidly and at scale.
3. Accessibility Strategy Must Operate at Three Levels: Component, Organisation, and Culture
The panel "Accessibility unlocked: People, tools, and what's next" (Jeevan Bains, Rogers; Niki Ramesh, CBC; Pina D'Intino, Aequum Global Access / IAAP; Juan Olarte, Digita11y Accessible; moderated by Fran Wyllie, Northern) was the most substantive accessibility session at the conference and covered ground that extends well beyond compliance.
Automated tools are not enough — and the gap is larger than most teams assume. Juan Olarte stated plainly that AI testing tools currently catch only 25–35% of accessibility issues. Any vendor claiming 100% coverage should not be trusted. AI is particularly weak at detecting cognitive and neurodivergent accessibility barriers, and AI training data skews toward mainstream patterns, creating significant gaps for disability-related edge cases (Pina D'Intino).
CBC's AI-assisted accessibility workflow offers a replicable model for large content organisations. Niki Ramesh described using AI to generate alternative text for images at scale — outperforming human-written descriptions in a sample of 80–100 images — and supporting caption review workflows. The approach is human-supervised, not autonomous.
The training gap is a design system governance problem. The panel identified role-based training failure as a primary reason accessibility does not stick: employees do not understand their specific role, training is treated as a one-time checkbox rather than ongoing practice, and it is rarely delivered by people with lived experience of disability. This maps directly to design system adoption failure patterns: designers who do not understand accessibility at the component composition level, not just the component level.
Organisational maturity mapping before strategy. Niki Ramesh described beginning her tenure at CBC by mapping accessibility maturity across six pillars: overall strategy, standards, roles and accountabilities, product planning, procurement, and training. This is a directly transferable framework for design directors assessing their own organisations before setting accessibility roadmaps.
Executive framing: Pina D'Intino argued that accessibility must be treated with the same organisational urgency as security and privacy. Jeevan Bains (Rogers) framed it as a driver of revenue, quality, and customer service — not a side initiative. Project managers were highlighted as a frequently overlooked but critical training audience, because they set project scope and must raise accessibility as a requirement with clients from day one.
Metrics beyond defect counts: The panel recommended tracking employee sense of belonging, customer retention, return rates, and user satisfaction scores disaggregated by disability community feedback — not just accessibility defect totals, which can create a false sense of progress.
Luke Woolliscroft's "The unified estate" (Empire Life) demonstrated the architectural expression of this philosophy: implementing AODA compliance at the component level in a unified design system that spans Drupal 10, Drupal 11, and non-Drupal applications. By governing components, not pages, compliance is built in from the start rather than retrofitted — eliminating the per-page remediation cost that the accessibility panel identified as the primary financial argument for building accessibly from the beginning.
Dmitry Mayorov ("Stop letting WordPress break your design system") added a platform-specific insight: accessibility considerations must be built into component libraries from the start, not retrofitted later. His enterprise WordPress implementations for clients including Starbucks, Hilton, and Blackstone use theme.json as a constraint layer that prevents editors from inadvertently breaking design system and accessibility compliance.
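The constraint-layer idea maps directly onto real theme.json options: disabling custom values and declaring presets means the block editor only offers the tokens the system defines. The `settings` keys below are genuine theme.json options; the palette and type-scale values are invented for illustration, not Mayorov's actual configuration.

```json
{
  "version": 2,
  "settings": {
    "color": {
      "custom": false,
      "customGradient": false,
      "palette": [
        { "slug": "brand-primary", "name": "Brand Primary", "color": "#1d3557" },
        { "slug": "surface", "name": "Surface", "color": "#f1faee" }
      ]
    },
    "typography": {
      "customFontSize": false,
      "fontSizes": [
        { "slug": "body", "name": "Body", "size": "1rem" },
        { "slug": "heading", "name": "Heading", "size": "2rem" }
      ]
    }
  }
}
```

With `custom` and `customFontSize` set to `false`, editors can only pick from the declared presets, so page output cannot drift outside the token palette.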
4. User Research ROI Is Now Quantifiable and Defensible at the Executive Level
Adie Margineanu's "Creating impact while mitigating risk: The strategic value of user research" provided the conference's most data-rich case for embedding UX research into digital project delivery. As UX Lead at University of Toronto Scarborough, Margineanu ran a five-study research programme across a 10-month admissions website redesign, engaging 194 participants, as the sole dedicated researcher representing less than 9% of the delivery team. Research consumed approximately 20% of the project timeline and less than 10% of total project costs (recruitment only, since Margineanu was an internal resource).
Outcomes were measurable and directly attributable:
Session duration rose 3.8% site-wide and 19% on program pages
Program page sessions increased 18% year-over-year
Conversion (clicks on "Apply Now") increased 62% in the first eight weeks post-launch during peak admissions season
The research sequence covered the full product lifecycle: tree testing validated information architecture before development began (revealing users preferred journey-based navigation over departmental organisation); two rounds of prototype usability testing (low and medium fidelity) surfaced critical filter UX failures in the program finder; sentiment testing with 60 participants using a brand-validated word bank resolved a stakeholder versus user preference conflict without generating political conflict; and post-launch testing with 12 participants (~$1,500 recruitment cost) validated live performance.
Three design-specific findings are broadly reusable:
Advanced filters above the fold create completion anxiety. Users assumed they had to fill out filters before seeing results. Progressive disclosure — showing high school prerequisites only when an undergraduate program type is selected — resolved the issue.
International students did not understand institution-specific terms. Co-op comprehension gaps among international students, surfaced only in prototype testing, led to a tooltip addition that would otherwise have been absent.
Wireframes must use real content. Margineanu found that Lorem Ipsum limited user engagement and hid design constraints during testing — a practice-level warning that affects testing validity across the industry.
The implication she made explicit, citing McKinsey's research on the business value of design: framing research outputs as a prioritised roadmap tied to enrollment impact translates findings into language executives act on. That framing is what secured executive buy-in and a dedicated research budget going forward.
5. The Shared Vocabulary Problem Is Still Blocking Design Quality
Chris Mantil's session "Setting the tone: Building a shared vocabulary in design" addressed a problem that predates AI but is becoming more acute as AI-generated content raises the stakes for precise brand communication. Mantil, Creative Director at Chris Mantil Design, frames client design work as translation: when a client says "modern and bold," their definition may differ substantially from the designer's, and that gap, if unaddressed, produces expensive mid-project reversals.
His structured tools are directly applicable to design systems and brand governance work:
Visual style and tone grid: A matrix of four words drawn from early conversations, plotted in quadrants. Clients position a weighted circle to make abstract tone preferences concrete and discussable. This exercise makes explicit that design elements are a mesh — changing one has downstream effects on others — a concept that maps directly to how token changes propagate through a design system.
Deliberate exposure of rejected references: Presenting material the designer expects clients to reject is as valuable as presenting aspirational references. Negative reactions sharpen direction faster than positive responses.
Conceptual references over vague aspiration: Using "fine dining" to communicate quality-without-exclusivity for a veterinary clinic is more precise than "premium" or "high-end." Concrete conceptual anchors reduce interpretation drift across design, development, and content teams.
Design for maintainability: Mantil emphasised designing assets that clients without marketing teams can actually use. This is the same principle James Harrison applied to Loblaw's Helios system: a design system that creates work rather than enabling autonomy has an adoption problem, not just a component problem.
6. Landing Page Strategy Requires Design Thinking, Not Just Design Systems
Suzanne Dergacheva (Evolving Web) in "Five types of landing pages your website needs" argued that design systems provide beautiful building blocks but cannot substitute for strategic thinking about page purpose and user journeys. Her five-type taxonomy — Why pages, wayfinding pages, decision-making pages, lead generation pages, and dashboard pages — maps directly to the content model work that multiple other speakers (Joyce Peralta at McGill, Charlotte Miller and Leanna Ruiz at McMaster, Luke Woolliscroft at Empire Life) described as foundational to both AI readiness and organisational consistency.
Design system governance findings:
Too much flexibility produces cluttered, everything-at-once "pizza" designs. Visual content governance, which limits how far editors can deviate from component constraints, is a design system responsibility, not just an editorial one. This aligns directly with Mayorov's WordPress design system session and Anderson-Clutz's brand context management arguments.
Default images should be boring or absent. Deliberately unglamorous defaults create pressure on content creators to upload relevant custom visuals rather than accepting placeholders.
Accessibility must be built into component libraries from the start. Dergacheva named this as a non-negotiable, not a retrofit — consistent with the accessibility panel's cost-of-remediation argument.
Mobile-first acts as a natural content filter. Designing for mobile first eliminates content that cannot stand without heavy visual scaffolding, improving both accessibility and legibility to AI tools.
Strategic Implications
AI Amplifies Design System Quality — in Both Directions
The clearest strategic signal from Evolve Digital 2026 is that AI does not flatten the quality gap between well-governed and poorly governed design practices; it widens it. Aidan Foster's Drupal Canvas demonstration showed that AI with a robust design system, documented brand guidelines, and a curated media library produces usable landing pages in minutes. The same AI without those foundations produces generic content. James Harrison's Loblaw work shows the same dynamic at the system level: AI can help manage design system entropy, but only if the token architecture, component model, and governance processes are already in place to give it structured inputs.
For design directors, this means the ROI case for design system investment has changed. It is no longer primarily about designer productivity or developer handoff efficiency. It is about organisational AI readiness. A design system is now the infrastructure that determines whether AI tools can generate brand-compliant outputs at all.
Accessibility Is Entering the Executive Risk Register — and Design Teams Need to Lead That Conversation
The accessibility panel's framing — treating accessibility with the same organisational urgency as security and privacy — signals a shift that design leaders should anticipate and drive rather than react to. Regulatory pressure (Ontario AODA, federal Accessible Canada Act) is acknowledged as under-enforced, but the panellists from Rogers, CBC, and IAAP were clear that the enforcement environment will tighten, and that organisations building accessibly from the start face dramatically lower remediation costs than those retrofitting. Luke Woolliscroft's Empire Life approach — AODA compliance at the component level in the design system — is the architectural expression of this proactive strategy.
The AI testing limitation is a specific risk design teams need to document and communicate upward: automated tools catch only 25–35% of issues. Any audit programme that relies primarily on automated scanning is leaving the majority of issues undetected. Human validation by people with lived experience remains mandatory — and that requirement has resourcing implications that need to appear in project plans.
User Research Has a Replicable Executive-Ready ROI Model
Margineanu's UTSC case study is the most immediately usable deliverable from the conference for design leaders seeking to institutionalise research practice. The structure (less than 10% of project cost, 20% of timeline, run concurrently with design and development, producing a 62% conversion increase) is a template for pitching research budget to executives who object on cost or schedule grounds. A further evidence point: the research programme surfaced findings (above-the-fold filter confusion, the co-op comprehension gap among international students) that would not have been caught any other way, and that would have been extremely expensive to fix post-launch.
The Shared Vocabulary Problem Becomes a Design System Governance Problem at Scale
Chris Mantil's individual client work and Joyce Peralta's McGill governance case study (1,000 Drupal websites, 1,300 active content creators) describe the same underlying problem at different scales: without shared language and shared mental models, design systems cannot be adopted consistently. Peralta's insight, that a community of practice is the most scalable tool for implementing digital standards, mirrors the design system panel's emphasis on education over enforcement. The foundational work at McGill (nine digital standards, mandatory training, a Web Advisory Committee, a Student Usability Panel) took five years, and it is what enabled conversations to shift from "what are our standards?" to "how do we best apply them?", including for AI implementation.
Agent Experience (AX) Is Now a Required Design Deliverable
Preston So's introduction of agent experience (AX) as a design dimension alongside UX and DX is the most structurally significant new concept for design leaders in this corpus. AI agents interacting with digital products have different needs than human users: they need structured, bounded contexts; clear component models; and shared context files (llm.txt, routes.md) that describe brand voice and page structure. Design systems built for human users and developer ergonomics will need to be audited and extended to support AI agents as first-class consumers. This is an emerging practice, but the CMS platforms represented at the conference — Drupal Canvas, React Bricks, Uniform, Acquia — are already building toward it.
Action Items
Immediate (0–30 days)
Audit your design system for AI readiness. Assess whether your component library uses a token-based, props-driven architecture that gives AI bounded contexts to generate within. If components are not documented with clear props schemas, AI-generated content using your system will be unpredictable. Priority fix: separate theme-level tokens from component-level tokens (Harrison, Loblaw Digital).
Run an automated accessibility audit and document the coverage gap. Use the 25–35% figure from Juan Olarte (Digita11y Accessible) to frame the gap for stakeholders. Pair automated results with a manual audit sample by a human reviewer — ideally someone with lived experience of disability — to establish a realistic current-state baseline. Present this as an ongoing risk, not a one-time compliance exercise.
Introduce the visual style and tone grid to your next brand or design sprint kickoff. Mantil's four-word quadrant exercise costs nothing to implement and prevents the mid-project misalignment that drives expensive revision cycles. Adapt it for internal stakeholder alignment as well as client-facing work.
Add llm.txt and routes.md to your design documentation suite. These lightweight files — recommended by Preston So (React Bricks) — give AI tools the shared context they need to generate on-brand content. Start with brand voice, page type descriptions, and component purpose definitions.
Identify one accessibility lever and go deep. The accessibility panel recommended picking one lever — tooling, culture and training, or product roadmap integration — rather than attempting broad simultaneous improvement. Assess which lever would produce the most organisational impact given your current maturity.
Short-term (30–90 days)
Map your accessibility maturity across the six pillars Niki Ramesh used at CBC: overall strategy, standards, roles and accountabilities, product planning, procurement, and training. Share the map with leadership to establish a baseline and identify which pillar is the critical constraint.
Establish a research programme for your next significant redesign using Margineanu's phased model. The minimum viable programme: tree test for information architecture, two rounds of prototype usability testing, and post-launch testing with a recruitment budget of approximately $1,500 per round. Frame the budget as less than 10% of total project cost and the timeline as concurrent with other workstreams.
Introduce role-based accessibility training, starting with project managers. The accessibility panel identified project managers as a frequently overlooked but critical audience: they set project scope and must raise accessibility as a client requirement from day one. Generic training that covers legislative landscape rather than specific job responsibilities is the primary failure mode to avoid.
Implement the rule of three teams as your component inclusion criterion. If your design system does not have a formal inclusion policy, establish one. Components serving fewer than three teams should live in product-specific libraries, not the core system. This prevents bloat and focuses maintenance resources.
Evaluate your design system governance model against the federated/hybrid/centralised spectrum. James Harrison's experience at Loblaw is explicit: federated models are the most challenging to sustain and have driven major organisations to recentralise. If you are operating a federated model, identify whether you have the steering committee, training infrastructure, and passionate volunteer contributors that Harrison's approach requires — or whether a hybrid model is more sustainable.
Strategic (90+ days)
Build a brand context document suite for AI tool readiness. Following Aidan Foster's Drupal Canvas demonstration, document audience personas, brand guidelines, content strategy, and design system rules in AI-consumable formats. This is both a design system governance deliverable and an AI readiness deliverable. Without it, AI-assisted page building defaults to generic output.
Audit agent experience (AX) requirements for your primary digital product. Working with your engineering counterparts, assess how well your current component model and content architecture serve AI agents as consumers. Identify which components need structured props schemas, which content types need API-consumable equivalents, and whether your CMS platform supports MCP server integration.
Establish a community of practice with weekly or bi-weekly check-ins. Arena Stoka's practice at Bell of holding weekly check-ins with design teams to track component detachments and maintain relationships is a low-cost, high-return governance mechanism. Extend it to cover AI tool usage, accessibility testing cadence, and research integration into project workflows.
Build the executive-ready research ROI case before your next budget cycle. Use Margineanu's UTSC structure: document research cost as a percentage of total project budget, timeline as concurrent rather than sequential, and outcomes tied to conversion or engagement metrics that map to business objectives. This template, once created, is reusable across projects and programmes.
Sessions to Watch
“The design system mindset: Principles, patterns, and real-world lessons” (Arena Stoka, Bell; Andrea Ang, RBC; David Cox, Lyft; moderated by Guy Seagull, Thomson Reuters) — The broadest and most practitioner-grounded design system conversation at the conference. Essential for any design system manager navigating adoption, governance, and accessibility integration. The composition-versus-component accessibility distinction alone is worth the session.
“Practical advice for building a system without a team” (James Harrison, Loblaw Digital) — Harrison's four-year solo journey building Helios across 20 websites and 9 apps is the most detailed real-world design system case study at the conference. The token architecture decisions, federated governance warnings, and AI entropy roadmap are directly applicable to teams of any size. Particularly valuable for design system managers at organisations without dedicated DS teams.
“Accessibility unlocked: People, tools, and what's next” (Jeevan Bains, Rogers; Niki Ramesh, CBC; Pina D'Intino, Aequum Global Access / IAAP; Juan Olarte, Digita11y Accessible; moderated by Fran Wyllie, Northern) — The definitive accessibility session. The 25–35% automated tool coverage figure, the CBC AI-assisted alt-text workflow, the six-pillar maturity model, and the organisational framing (security and privacy parity) are all immediately usable. Essential for accessibility specialists and design directors setting organisational accessibility strategy.
“Creating impact while mitigating risk: The strategic value of user research” (Adie Margineanu, University of Toronto Scarborough) — The conference's best evidence-based case for embedding user research into digital projects. The 62% conversion increase, sub-10% budget footprint, and replicable five-study structure provide the executive-ready argument for research investment. Essential for UX leads seeking to institutionalise research practice and any design director who has faced "we don't have time for research" objections.
“AI page building in Drupal Canvas” (Aidan Foster, Foster Interactive / Drupal AI) — The most concrete demonstration of how AI interacts with design systems in production. The failure mode (AI slop without brand context) and the success mode (80% usable output with robust component library and content strategy) establish the design-system-as-AI-infrastructure argument empirically. Essential for design directors evaluating AI-assisted content workflows.
“How to make AI work for everyone with visual headless CMS” (Preston So, React Bricks) — The most conceptually forward-looking session for design leaders. The introduction of agent experience (AX) as a design dimension, the critique of piecemeal AI CMS features, and the llm.txt/routes.md convention recommendations define the next generation of design system documentation requirements. Essential for design directors planning two-to-three year platform and tooling roadmaps.
“Setting the tone: Building a shared vocabulary in design” (Chris Mantil, Chris Mantil Design) — The most immediately applicable session for practitioners doing brand and visual identity work. The visual style and tone grid, the deliberate rejection-reference technique, and the mesh metaphor for design elements translate directly into client workshops, design system documentation, and stakeholder alignment exercises.
“Stop letting WordPress break your design system” (Dmitry Mayorov, Fueled) — Practical and specific. The theme.json-as-style-dictionary approach, the custom block versus constrained core block decision framework, and the enterprise implementation examples (Starbucks, Hilton, Blackstone) are immediately applicable for any team managing a WordPress-based design system in an enterprise context.