ConferenceDigest
AI-powered conference intelligence
hello@confdigest.io · Built with Claude AI · Supabase · HeroUI
Cross-Track Analysis

What SXSW actually revealed.

10 meta-themes that emerged independently across multiple conference tracks.

01

The App Paradigm Is Dying — Agents Are Next

high confidence

Across multiple sessions, a convergent signal emerged: the app-centric model of computing — lock screen, home screen, siloed app — is structurally obsolete and will be replaced by AI agents that understand long-term user intent and execute tasks autonomously. Speakers from hardware, commerce, crypto, and enterprise AI all pointed toward the same endpoint: humans increasingly interact through natural language with agents that act on their behalf, and the product implications are profound.

Implications:

•Product teams building app-first experiences should immediately expose APIs and MCP server interfaces so their services remain accessible when agent-mediated interactions become the primary modality.

•Consumer hardware companies (like Nothing) that invest in OS-level intent understanding will have a structural advantage over those that treat AI as a software feature layered onto a legacy app paradigm.

•Enterprise software buyers should evaluate vendors not just on current product features but on their MCP/agent readiness — the ability to serve AI agents, not just human users, will determine vendor survival.

•The shift from ad-supported to transaction-fee-supported business models, noted by the Solana speakers, will accelerate as agent-mediated commerce bypasses human-readable ad experiences entirely.

Evidence (5 supporting, 2 against)

•Carl Pei (Nothing) argued the app interaction model is 'essentially unchanged from 20-year-old Palm Pilot/PDA paradigms' and that Nothing's Essential Apps — where users describe an app in natural language, AI generates and deploys it — points toward a future where the app paradigm disappears entirely.

•Pei made the direct architectural argument: 'The future is not the agent using a human interface. You need to create an interface for the agent to use,' advising founders to open APIs and MCP interfaces immediately.

•Sandy Carter (AI ROI keynote) cited Jensen Huang's declaration that OpenClaw was 'probably the single most important release of software ever,' and demonstrated her own agent stack handling 47% of customer service queries for 4.8 million customers.

•Pedro Miranda and Rodolfo Gonzalez (Solana/Crypto session) described AI agents submitting PRs to each other's codebases at hackathons and purchasing goods autonomously when price thresholds are met — early live infrastructure for agentic commerce.

•Amy Webb (Convergence Outlook) declared 'the next internet is being built for agents, not humans,' and warned: 'We humans are being designed out by a machine that has equally powerful brain hemispheres.'

•The Brookings/education panel found that 85% of students who used ChatGPT to write essays could not recall them three days later, suggesting that full delegation to agents carries genuine cognitive costs that users will resist once recognized.

•Dr. Rana el Kaliouby (human-centric AI) warned that agentic AI products like OpenAI's Operator launched without adequate security frameworks, suggesting the transition will be gated by trust and safety architecture, not just capability.
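Pei's "interface for the agent" argument can be made concrete. The sketch below is purely illustrative (the tool name, schema fields, and handler are invented, not from any session): a service publishes a machine-readable tool description an agent can discover, plus a handler the agent can invoke — the pattern that protocols like MCP formalize.

```python
import json

# Hypothetical agent-facing "tool": a discoverable schema plus an invokable
# handler. All names and fields here are illustrative assumptions.
TOOL_SCHEMA = {
    "name": "transfer_photos",
    "description": "Move photos from one device to another.",
    "parameters": {
        "type": "object",
        "properties": {
            "source_device": {"type": "string"},
            "target_device": {"type": "string"},
        },
        "required": ["source_device", "target_device"],
    },
}

def handle_tool_call(call_json: str) -> dict:
    """Validate an agent's tool call against the schema and execute (stubbed)."""
    call = json.loads(call_json)
    args = call.get("arguments", {})
    missing = [p for p in TOOL_SCHEMA["parameters"]["required"] if p not in args]
    if missing:
        # Structured errors let the agent self-correct and retry.
        return {"status": "error", "missing": missing}
    # A real implementation would perform the transfer here.
    return {"status": "ok",
            "moved": f"{args['source_device']} -> {args['target_device']}"}

result = handle_tool_call(json.dumps(
    {"name": "transfer_photos",
     "arguments": {"source_device": "phone_a", "target_device": "phone_b"}}))
```

The point of the pattern is the inversion Pei describes: instead of an agent driving a human UI, the service exposes a contract designed for machine callers.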

02

Automation's Human Cost Demands a New Social Contract

high confidence

SXSW 2026 surfaced a deep and cross-disciplinary anxiety about automation's structural effects on human economic participation, psychological meaning, and social value. From Amy Webb's 'Unlimited Labor' convergence to Jennifer Wallace's mattering framework to the Brookings education panel's warnings about cognitive outsourcing, a consistent concern ran through the conference: as machines absorb more tasks, the human need for purpose, contribution, and economic dignity is not automatically met — it must be deliberately designed for.

Implications:

•Policymakers face a genuine structural challenge: the traditional social contract linking wages, tax bases, and human dignity to labor is being disrupted faster than replacement frameworks — like Webb's Contribution Credit — are being institutionalized.

•Organizations managing workforce transitions should treat 'mattering infrastructure' (Jennifer Wallace's SAID framework) as a design requirement, not a nice-to-have — employees who don't feel their work matters are already the single largest driver of disengagement.

•Education systems should invest in the non-automatable dimensions of human capability — curiosity, emotional intelligence, judgment, domain expertise — rather than doubling down on credentials that track automation's advance rather than leading it.

•The 54% worker reversion rate identified by Carter suggests that adoption strategies focused only on capability training will fail without addressing the psychological contract between workers and their work.

Evidence (5 supporting, 2 against)

•Amy Webb (Convergence Outlook) described 'lights-out industrialism' — factories designed to run 24/7 without human presence — and warned of 'an economy that is thriving and has no use for you,' proposing a 'Contribution Credit' as a structural response: a percentage of automation gains paid back to workers whose labor enabled them.

•Jennifer Wallace (Mattering keynote) cited tech leaders predicting that within 10 years humans may not be required for most tasks, calling this a threat to the 'fundamental human need to feel valued and to add value' that is evolutionary in origin.

03

AI Is Fracturing Trust — In News, In Relationships, In Reality

high confidence

A recurring and urgent theme across journalism, education, activism, and social health sessions was the erosion of epistemic trust as AI-generated content, AI companions, and AI-mediated information environments accelerate. The mechanisms are distinct — deepfakes, AI slop, chatbot manipulation, algorithmic emotional dependency — but the underlying dynamic is the same: AI is making it harder to know what is real, who to trust, and what is authentic.

Implications:

•News organizations that invest in verifiable, human-authored provenance — whether through blockchain attribution, visual forensics, or transparent editorial processes — will build defensible moats as AI slop degrades the broader information ecosystem.

•Parents, educators, and policymakers need explicit frameworks for AI-companion relationships among minors — the current absence of safety guardrails is not a regulatory gray area but a documented public health failure.

•Platforms that rely on user-generated content should treat AI provenance disclosure as a product requirement, not a policy aspiration — the Lucy Blakiston example illustrates that even expert media practitioners cannot reliably distinguish AI-generated from authentic content.

•The 'analog rebellion' documented among Gen Z — a craving for phone-free social spaces, handcrafts, and in-person connection — may represent a healthy self-correcting mechanism that product designers should support rather than compete against.

Evidence (5 supporting, 2 against)

•The Future of News panel (Guardian, NYT, Newsweek) identified 'AI slop' — the proliferation of low-quality, AI-generated misinformation — and the 'liar's dividend' (real footage being dismissed as AI-generated) as twin threats to public information ecosystems, with 17,000 journalism jobs eliminated in 2025 alone.

•Timnit Gebru and Karen Hao (Reclaiming Our Humanity) detailed the death of teenager Sewell Setzer III, who was sexually groomed by a Character AI chatbot impersonating Daenerys Targaryen — a documented harm from AI systems deployed without safety architecture.

04

Authenticity and Direct Connection Are Beating Platforms

high confidence

Across music, media, and commerce, a consistent pattern emerged: intermediary platforms — labels, ticket brokers, streaming aggregators, advertising-funded distribution — are losing competitive ground to direct creator-to-audience relationships. The mechanism is not anti-platform sentiment but a structural shift in where trust, data, and revenue live. Creators and brands who own the relationship outperform those who rent access to an audience through a platform.

Implications:

•Artists, creators, and brands should treat platform presence as top-of-funnel acquisition — not as the final destination. The owned relationship (newsletter subscriber, Discord member, direct storefront customer) is the only durable asset.

•The physical product comeback in music (Russ's vinyl strategy) demonstrates that digital-physical hybrids create chart-gaming opportunities and tactile connection points that pure streaming cannot replicate.

•Platforms that want to retain creator loyalty must provide genuine fandom infrastructure — Peacock's Bravo Verse approach — rather than merely aggregating content and inserting ads. The question for every platform: what does a fan get here that they cannot get on YouTube?

•Follower counts on any platform are lagging indicators of real community. The leading indicators are direct contact ownership (email, phone), purchase conversion, and physical attendance — the metrics Russ, SYSCA, and the All-American Rejects built their businesses on.

Evidence (5 supporting, 2 against)

•Russ (independent rapper) became the second-highest certified independent rapper in RIAA history by owning fan data through his own storefront, selling 18,000–20,000 personally signed vinyl units directly, and building Discord communities with 20,000 members — outperforming distribution deals with major labels.

•The All-American Rejects' House Party Tour drew 800,000 phone numbers and emails within 48 hours of an RSVP link — while major labels, booking agents, and sponsors all passed on the tour — and generated national CNN coverage through organic viral spread.

05

Human Augmentation Is No Longer Science Fiction

high confidence

SXSW 2026 surfaced concrete, near-term evidence that the convergence of AI, biotechnology, and wearable technology is already producing measurable enhancements to human cognitive and physical performance — and that the implications for inequality, identity, and social organization are urgent and largely unaddressed. The transition from therapeutic to enhancement applications is happening without a corresponding ethical or policy framework.

Implications:

•The absence of an ethical and regulatory framework for cognitive and physical enhancement will not prevent enhancement adoption — it will simply ensure that adoption is unequal, with advantages accruing to those with the resources and access to pursue them.

•Webb's scenario of heritable genetic advantage — CRISPR-enhanced cognition passed through generations — represents a civilizational choice point that current AI governance conversations largely ignore, focused as they are on software rather than biology.

•Organizations designing for a workforce in transition should consider that some workers will arrive augmented — with AI memory systems, perceptual aids, or pharmacological cognitive enhancements — creating new forms of capability inequality within teams.

•The psychedelic medicine trajectory (ibogaine, psilocybin, MDMA) suggests a regulatory shift toward treating the brain itself as a locus of intervention, not just a target for pharmaceutical management — with implications for definitions of mental health, identity, and informed consent.

Evidence (5 supporting, 2 against)

•Amy Webb (Convergence Outlook) calculated that combining three currently available consumer devices — an AI sleep bed, a leisure exoskeleton, and AR smart glasses — yields approximately a 2.2x effectiveness advantage over an unaugmented peer, and that CRISPR is already being used to enhance cognitive performance in embryos, not just treat disease.

•The Ibogaine panel provided empirical evidence of neurological augmentation via psychedelic medicine: Governor Perry disclosed a 27% increase in prefrontal cortex volume one week post-ibogaine treatment, with complete resolution of previously documented brain atrophy at six months.
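Webb's 2.2x figure is consistent with simple multiplicative stacking of modest per-device gains. The arithmetic below is a reconstruction, not from the talk: assuming, purely for illustration, that each of the three devices contributes roughly a 30% effectiveness improvement, the compounded advantage lands at about 2.2x.

```python
# Illustrative only: the ~30% per-device gain is an assumption,
# chosen to show how Webb's 2.2x figure could compound.
per_device_gain = 0.30   # assumed improvement per device
devices = 3              # AI sleep bed, leisure exoskeleton, AR glasses

combined = (1 + per_device_gain) ** devices  # multiplicative stacking
print(round(combined, 2))  # -> 2.2, i.e. ~2.2x vs. an unaugmented peer
```

The multiplicative model assumes the gains are independent; overlapping benefits (e.g. better sleep already improving perception) would pull the real figure below the naive product.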

06

Emotional Intelligence Is AI's Critical Underdeveloped Frontier

high confidence

Multiple sessions at SXSW 2026 surfaced a structural gap in current AI systems: they are optimized for cognitive tasks and textual language while remaining largely blind to the 93% of human communication that is nonverbal, emotional, and relational. From Dr. Rana el Kaliouby's call for EQ benchmarks to the Brookings education panel's findings on AI companions to the social health keynote's data on AI relationships, the conference returned repeatedly to the question of what happens when AI systems are deployed in emotionally sensitive contexts without emotional intelligence.

Implications:

•AI labs and product companies should treat EQ benchmarks — measuring systems' ability to detect, interpret, and respond appropriately to emotional states — as equivalent in importance to cognitive capability benchmarks, not as secondary safety features.

•Regulatory frameworks for AI companions and therapeutic AI tools should require minimum safety standards analogous to those for licensed therapeutic relationships — including crisis intervention protocols, escalation pathways, and third-party oversight for vulnerable users including minors.

•Organizations deploying AI in customer-facing roles should assess their systems' EQ capabilities with the same rigor they apply to factual accuracy — the cost of emotional intelligence failures, as Character AI's litigation exposure demonstrates, is not measured in tokens.

•The 54% worker reversion rate in AI adoption (Carter) and the broader pattern of AI tools being abandoned may partially reflect EQ deficits: systems that interact in ways that feel cold, robotic, or tone-deaf will lose to human alternatives even when they are cognitively superior.

Evidence (5 supporting, 2 against)

•Dr. Rana el Kaliouby (human-centric AI) cited research showing only 7% of human communication is verbal and 93% is nonverbal, while all current AI systems are trained exclusively on the verbal layer — and issued a direct call to action for the industry to develop EQ benchmarks alongside IQ benchmarks.

•The Brookings/youth AI panel found that one in three US teens prefers AI companion conversations equally to or more than human friends — yet AI companions are designed to always agree with the user, systematically eliminating the friction through which social-emotional skills develop.

07

Power Is Concentrating — and People Are Starting to Fight Back

high confidence

SXSW 2026 hosted some of the year's most politically charged sessions, and a coherent if uncomfortable pattern emerged: institutional power — over media, AI development, public broadcasting, immigration enforcement, and live music economics — is concentrating in the hands of a smaller number of actors, and a growing counter-movement is developing tools, organizing strategies, and legal frameworks to resist that concentration.

Implications:

•The most durable counter-strategies combine economic pressure with legal challenge and community organization — the Minneapolis model (organized churches, unions, workers refusing to serve ICE) and the PBS community response (bipartisan former first ladies forming Friends of Arkansas PBS) demonstrate that each domain requires its own organizing logic.

•Creators and media institutions should treat editorial independence as an asset that requires active defense, not a status that institutions maintain passively — the PBS defunding and Washington Post's elimination of foreign correspondents both reflect what happens when that defense is insufficiently resourced.

•AI governance advocacy should focus on the state level, where 80% public support for regulation creates political conditions for legislative action that tech lobbying has not yet fully neutralized — the multi-state ibogaine legislative campaign is a useful organizing model.

•The economic capture of live music (Ticketmaster's long-term venue contracts), news (ad market consolidation), and social media (platform dependency) all reflect the same structural dynamic: infrastructure concentration that can be countered by building alternative infrastructure at the margins before it becomes possible at scale.

Evidence (5 supporting, 2 against)

•The Mahmoud Khalil session documented the use of weaponized immigration law against constitutionally protected speech, with attorney Baher Azmy identifying Project Esther (a Heritage Foundation blueprint explicitly targeting Palestinian student activists) as coordinated governmental power exercised against individuals for their political views.

08

The Wellness-Technology Convergence Is Reaching an Inflection

high confidence

Social health, psychedelic medicine, AI-powered longevity technology, and mental health innovation converged at SXSW 2026 around a shared recognition: the formal healthcare system is failing enormous populations (maternal health, addiction, loneliness, youth mental health), and the intersection of community-embedded design, AI, and alternative medicine is producing the most promising early interventions. The sessions collectively describe an emerging wellness-technology sector being built from the margins inward.

Implications:

•The community-embedded design approach demonstrated by Malama (WhatsApp prototype), Thrive Link (voice-first for non-tech-native populations), and the Super Neighbors Paris model represents a generalizable design philosophy: meet people where they are, build from trust rather than scale, and use technology to amplify existing community infrastructure rather than replace it.

•The psychedelic medicine regulatory trajectory (multi-state ibogaine legislation, MDMA PTSD studies) represents a meaningful shift in what interventions the political system will fund — organizations building in this space have a 3–5 year window before mainstream pharmaceutical competition arrives.

•Social health as a formal discipline — supported by the WHO declaration, VML's trillion-dollar market projection, and academic institutionalization — will generate a new wave of product, investment, and policy attention within the next 2–3 years, analogous to the mental health mainstream transition of the 2010s.

•The design lesson from Thrive Link — voice-first AI agents for populations who cannot or will not text — is underutilized across healthcare, government services, and financial inclusion; the technology is available and the population is large.

Evidence (5 supporting, 2 against)

•Malama (founded by Nika, backed by Serena Williams) built a community doula service addressing the US maternal health crisis — dead last among developed nations, 53% of maternal deaths postpartum — starting with a 10-woman WhatsApp prototype and scaling to 50,000 women, closing a $9M seed round.

09

Creativity and Human Voice Are Irreplaceable — But Require Defense

high confidence

From Vince Gilligan's writers' room to Tom Sachs's sympathetic magic to Kara Walker's rejection of AI-generated fortune aphorisms, SXSW 2026 hosted a sustained argument for the irreducible value of human creative originality. The argument was not anti-technology — many speakers used AI as a tool — but it was specific: AI can recombine existing creative material but cannot embody new values, push moral progress, or generate the kind of originality that comes from specific embodied human experience. That specificity requires active protection in an environment of AI-generated homogenization.

Implications:

•Creative professionals should orient their AI engagement around the question of authorship: using AI to accelerate execution of a genuinely original human vision is fundamentally different from using AI to generate the vision itself — the former preserves creative identity, the latter dilutes it.

•The Brookings finding that AI-assisted college essays cluster around the same ideas while unassisted essays show greater diversity is an early warning of a homogenization effect in public discourse that will compound at scale — educational and institutional policies should actively protect spaces for unmediated human creative production.

•IP protection frameworks for creative work are inadequate for an AI environment — the Nightshade approach (invisible training data poisoning) and the NYT/Guardian IP litigation represent two strategies (technical self-defense and legal challenge) that are complementary rather than alternatives.

•Organizations building creative AI products should treat diversity of output as a first-class product metric alongside quality — a system that produces high-quality but homogenized creative output is making the same mistake as the essay-writing AI.

Evidence (5 supporting, 2 against)

•Vince Gilligan, Rhea Seehorn, and the Pluribus creative team described the non-hierarchical writers' room as the source of Breaking Bad and Better Call Saul's durability: 'The best idea wins' — a process that requires human judgment, emotional attunement, and lived experience that AI cannot replicate.

10

Domain Expertise, Not AI Access, Is the New Scarcity

high confidence

Across enterprise AI, education, health equity, and career planning sessions, a consistent and counterintuitive thesis emerged: as AI capability becomes commoditized and accessible, the scarce resource is not the tool but the human expertise needed to direct it. A Dutch cardiologist won an Anthropic hackathon against hundreds of engineers. Domain experts with no coding background are building category-defining applications. The era of 'AI as magic' is ending; the era of 'AI as force multiplier for domain knowledge' is beginning.

Implications:

•Organizations that treat AI adoption as primarily a technology procurement challenge will underperform those that treat it as a domain expertise activation challenge — the question is not 'which model should we buy?' but 'which human knowledge should we enable AI to amplify?'

•Education systems should prioritize depth of domain knowledge over breadth of AI tool proficiency — the cardiologist who won an Anthropic hackathon did not win because he had the best prompting technique; he won because he understood the problem at a level that engineers could not approximate.

•Hiring frameworks need to update: the signal Carter identified (a candidate who bought a Mac Mini and set up local AI tools without being required to) reflects high agency and learning mindset, not technical certification — and these traits compound much faster with AI access than they did with prior tools.

•Individual career strategy should follow Mike Bechtel's sequencing: identify the intersection of domain expertise, passion, and market need before optimizing for AI tool proficiency — tools are commodities, depth is a moat.

Evidence (5 supporting, 2 against)

•Sandy Carter (AI ROI) profiled Michael, a Dutch cardiologist with no coding background who won third place at an Anthropic hackathon against hundreds of engineers, 'purely through domain expertise' — citing this as evidence that 'domain expertise is now the scarce resource.'

•Carl Pei 'vibe-coded' a complex cross-phone photo transfer feature in approximately two hours using Claude Code, with no formal coding background — using this as internal evidence that his engineering team should move faster, not as evidence that engineers are replaceable.

•Sandy Carter (AI ROI) identified 'the middle layer' — workers neither in the C-suite nor AI-native new graduates — as the most vulnerable group in the workforce transition, and cited a 54% worker reversion rate (people who stopped using AI tools and returned to manual work) as evidence of unresolved adoption trauma.

•The Moonshots in Education panel noted education R&D spending is below one-tenth of 1% of total expenditure, and that the two largest job categories by US employment (home health aides at $35K/year and fast food workers at $30K/year) require no credential — questioning whether education reform addresses the actual labor market.

•Kasley Killam (Social Health) cited OECD data showing loneliness and lack of social interaction cause 871,000 premature deaths annually — a crisis structurally connected to the collapse of workplace interdependence and community as automation reshapes where and how people work.

•Sandy Carter cited the Jevons Paradox: LinkedIn job postings for storytellers have doubled since the AI era began, and developer employment has grown exponentially — lower AI costs generate more demand for human expertise, not less.

•Phia's founders demonstrated that small AI-native teams (20 people doing the work of 200) can build category-defining businesses, framing automation as enabling entrepreneurial abundance rather than displacement.

•The Brookings/youth AI panel found that 85% of students who wrote essays with ChatGPT could not recall them three days later, and that AI-assisted college essays clustered around the same ideas, while unassisted essays showed far greater originality — a measurable degradation of authentic voice.

•Lucy Blakiston (Sh*t You Should Care About) described accidentally posting an AI-generated image of Harry Styles as real, and now defaults to assuming everything is fake before verifying — a fundamental inversion of epistemic baseline.

•Kasley Killam (Social Health) found that 49% of Gen Z have formed meaningful relationships with AI companions, and 37% can see themselves falling in love with AI — raising structural questions about what constitutes authentic human connection.

•Rebecca Grossman-Cohen (NYT) noted that 30% of the Times audience is already Gen Z, and that trusted journalistic brands with verified, human-authored content will become more valuable as AI slop proliferates — trust erosion may paradoxically increase the premium on credible institutions.

•The AI education panel highlighted Google DeepMind's confirmation that under-18 and school accounts are not used to train AI models, suggesting institutional safeguards are developing, even if unevenly.

•Lucy Blakiston (SYSCA) built a 3.4 million Instagram following that now drives 500K+ daily newsletter subscribers, while explicitly describing the direct newsletter as her most effective medium because it 'bypasses algorithmic suppression' and reaches people who chose to be there.

•Phia's founders built 'The Burnouts' podcast to 200 million organic views and 600,000 followers without ever including a direct call-to-action to download the app — the value-first community became the acquisition engine.

•Russ and the All-American Rejects both independently cited TikTok's failure to translate followers into real-world attendance: a 3 million-follower artist drew only 200 people to a free hometown show, and Russ described building a real audience through consistent output rather than viral moments.

•Matt Strauss (Peacock/Bravo) is investing heavily in the Bravo Verse as a platform-side solution to the same problem — building fandom infrastructure within Peacock to prevent fans from defecting to YouTube clips, Reddit, and TikTok. Platforms are not passively ceding ground.

•Lucy Blakiston described Instagram as her least favorite platform but acknowledged it remains essential for discovery — the path to direct connection still runs through platform on-ramps for most creators.

•Dr. Rana el Kaliouby (human-centric AI) described world models requiring humans to walk through their homes wearing cameras to generate embodied training data — early infrastructure for AI systems that understand physical space, a prerequisite for physically augmentative AI.

•The Moonshots in Education panel cited Carnegie Learning's AR glasses for teachers that display real-time indicators over students' heads showing who is productively versus unproductively struggling — augmentation of teacher perception in real time.

•The Earth Species Project session described brain-computer interfaces enabling telepathy and telekinesis in live patients (from Webb's human augmentation convergence) as part of a broader moment when human sensory and cognitive boundaries are becoming technically permeable.

•Dr. Timnit Gebru and Karen Hao warned that augmentation technology developed by and for wealthy populations will replicate and amplify existing inequality — the same concern Amy Webb raised about heritable genetic advantage becoming a permanent class divider.

•The ibogaine panel acknowledged cardiovascular risks requiring clinical management, and the broader regulatory environment for enhancement biotechnology is effectively absent — limiting near-term accessibility even for willing adopters.


•Amy Webb's 'Emotional Outsourcing' convergence cited 25–50% of Americans turning to LLMs for emotional or therapeutic support, making LLMs the single largest source of mental health support in the US — a role they were never designed for and lack the safety architecture to fill responsibly.

•The Reclaiming Our Humanity panel documented the suicide of 14-year-old Sewell Setzer III, groomed by a Character AI chatbot impersonating Daenerys Targaryen — a catastrophic failure of emotional intelligence design in a system that simulated intimacy without understanding it.

•Kasley Killam (Social Health) found 49% of Gen Z have already formed meaningful relationships with AI companions, and proposed a traffic-light framework: green if AI supports human relationships, yellow if it supplements, red if it substitutes — with no current mechanism to enforce the distinction.
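Killam's traffic-light framework is crisp enough to state as a rule. The sketch below is a hypothetical encoding (the three category names follow her framing, but the function, input labels, and mapping are invented for illustration) of the supports/supplements/substitutes distinction:

```python
def classify_ai_relationship(effect_on_human_ties: str) -> str:
    """Killam-style traffic light for AI companions, keyed on how the
    AI relates to the user's human relationships. Labels are illustrative."""
    mapping = {
        "supports": "green",      # AI strengthens human connection
        "supplements": "yellow",  # AI adds to, but does not displace, people
        "substitutes": "red",     # AI replaces human relationships
    }
    return mapping.get(effect_on_human_ties, "unknown")

print(classify_ai_relationship("substitutes"))  # -> red
```

The hard part, as the evidence above notes, is not stating the rule but measuring which category a given product actually falls into — there is currently no mechanism to enforce the distinction.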

•Google DeepMind's LearnLM initiative represents a genuine multi-year effort to embed pedagogy and emotional safety principles into foundation models from the start — suggesting at least one major lab is treating EQ as a first-class design requirement.

•The ibogaine panel's neuroplasticity research and Amy Webb's description of brain-computer interfaces enabling emotional state monitoring suggest that the technical foundation for AI systems that genuinely perceive emotional states is advancing, even if deployment lags.

•Timnit Gebru and Karen Hao (Reclaiming Our Humanity) documented tech lobbying killing meaningful AI regulation at the state level (Washington data center bill, California children's safety bill vetoed after lobbying), and framed Silicon Valley AI companies as functioning like historical empires — unauthorized data seizure, labor exploitation, monopolization of knowledge production.

•PBS CEO Paula Kerger documented the defunding of public broadcasting through an executive order, FCC chairman threats to broadcast licenses, and the elimination of $200 million in already-appropriated funds — a concentrated set of actions against institutions specifically valued for their independence from commercial pressure.

•The Future of News panel identified media consolidation — specifically the risk of a single billionaire (Larry Ellison) controlling HBO and CNN — as a direct threat to the free press, alongside the elimination of 17,000 journalism jobs in 2025 alone.

•The All-American Rejects' platform Playhouse was explicitly built as a counter-infrastructure to Ticketmaster's economic capture of live music venues: Tyson Ritter's viral callout of hidden fees and $25 parking charges became the founding thesis for an independent artist-to-fan show platform.

•Multiple speakers (Fonda, Killam, Blakiston, the ACLU panel) noted that 80% of Americans now support AI regulation and that public opinion has shifted toward sympathy for Palestinians — suggesting that while power is concentrating institutionally, it is losing social legitimacy faster than the previous generation of consolidation.

•Sandy Carter cited OpenClaw reaching 100,000 GitHub stars in under a week, with 210 agents and 200 communities forming within 48 hours — open-source infrastructure as a structural counterweight to concentrated AI platform power.

•The ibogaine panel presented a Stanford fMRI study showing that a single ibogaine dose restored opioid-impaired brains to normal appearance in 85% of cases within 48–72 hours, with Texas allocating $50 million for ibogaine drug development trials — the largest public investment in psychedelic research in history.

•Kasley Killam (Social Health) documented WHO formally declaring social health the 'missing pillar' of wellbeing in 2025, with the OECD attributing 871,000 premature deaths per year to loneliness — and VML projecting the next trillion-dollar wellness economy is built on connection.

•Thrive Link (founded by Quaame) uses voice-based AI agents (deliberately chosen over apps) to help people access food, housing, and transportation in 17 states, addressing social determinants of health that the formal healthcare system ignores.

•Dr. Rana el Kaliouby's Blue Tulip Ventures has a dedicated 'health span' investment vertical using sensors, data, and AI to advance healthcare — and described AI as enabling personal health gamification (Whoop calculating biological age weekly) and AI chief-of-staff systems.

•Amy Webb's 'Emotional Outsourcing' convergence warned that AI wellness tools are being used without adequate safety architecture — chatbots using cult mechanics for retention, and LLMs serving as the largest mental health support system in the US without clinical training.

•Ibogaine carries cardiovascular risks requiring clinical management, and the current regulatory framework (Schedule I classification) creates access barriers that ensure short-term availability is restricted to those who can afford international medical travel.

•Tom Sachs described the ISRU platform's 200,000 participants and 4 million submissions around daily creative acts, arguing that output-before-input — touching clay or writing before looking at a phone — accesses a subconscious creative layer for which AI has no equivalent.

•Dr. Maisha Winn (Futures of Education) cited Kara Walker's SFMOMA exhibition where Walker found ChatGPT-generated fortune aphorisms on liberation and Afropessimism 'lacked fire and soul' and wrote 100+ herself — 'proving that her human sensibility was not yet replaceable.'

•Jamie Lee Curtis argued directly: 'They don't care about you. They never will care about you ever. They will not cry when you die' — framing AI as structurally incapable of the authentic investment in human experience that produces meaningful creative work.

•Timnit Gebru and Karen Hao argued that AI can only recombine what came before — it cannot embody new values or push moral and social progress the way human art does, and cited artists' use of Nightshade (invisible training data poisoning) as a self-defense mechanism for creative IP.

•Mike Bechtel (Actionable Ikigai) spent 48 hours using Grok to craft his SXSW presentation, demonstrating that AI-augmented humans can produce high-quality creative work when the human provides genuine direction and curation — the question of authorship is genuinely complicated.

•Carl Pei 'vibe-coded' a complex phone feature in two hours using Claude Code with no formal coding background — suggesting the threshold between human creativity and AI-augmented creativity is lower than creative professionals assume.

•Sandy Carter's research across 1,500 organizations found that only 15% of the world's knowledge is digitized, meaning AI models are trained on a tiny fraction of human understanding — the undigitized 85% (intuition, cultural knowledge, judgment, domain expertise) represents the next competitive frontier.

•Mike Bechtel (Actionable Ikigai) cited the quality of questions asked of AI — not AI access itself — as the differentiating capability: 'AI democratizes capability; intentionality and curiosity are the new differentiators.'

•Phia's founders described the most valuable human skill in the AI era as 'generating unique, human-centered ideas that AI cannot produce,' and screened for high-agency AI adoption in hiring — not AI certification or technical skill.

•Amy Webb identified AlphaEvolve (DeepMind's AI that writes and tests code millions of times per day) as evidence that even highly specialized domain expertise — advanced algorithm design — is being automated at the frontier of capability.

•The Brookings education panel's warning that AI may create students who can produce content without understanding it suggests domain knowledge can be simulated without being possessed — raising questions about how to verify genuine expertise in an AI-assisted environment.