
Image: UNVEIL
When AI becomes the interface
The conversation a customer has with an AI model is now often the first and sometimes the only interaction they have with a brand.
Our latest innovation report argues that if this interaction is left to third-party platforms, brands risk losing control of the relationship, along with their voice, their data, and direct access to their customers. The case we make for brand-owned adaptive customer experiences is both a business imperative and a creative opportunity: to discover what value AI capabilities can create for the relationship.
While writing the report we came across many examples of brand-owned adaptive experiences. Since they couldn’t all be included, we drafted a market survey that goes further, identifying some of the brands taking advantage of adaptive experiences, the trends and archetypes that are shaping the landscape, and where the territory remains open. What we found was a lot of activity along with a good deal of unfulfilled promise. Brands are moving, but mostly in the same direction, and often not as far as they claim.
Whether through third-party platforms or directly through brand-owned adaptive experiences, AI is becoming the interface through which customers discover and act. This past quarter, some of the ideas and perspectives that surfaced in our programming connect back to this theme.

Image: UNVEIL
Rise of the model designer
With adaptive customer experience, brands are designing for likely, not guaranteed, behavior. This creates real challenges around bias, privacy, and accessibility, with the potential to damage brands and harm users.
Paz Perez introduced this topic in Making AI ethics practical, not performative, the online panel George moderated. In her words, “issues like hallucinations, privacy, and biases aren’t just problems users encounter when interacting with AI models. These issues are deeply woven into the very way the models are built.” She connects this responsibility to the role of the model designer: “You can design with AI: using tools to produce things faster. You can design the interface: the buttons, how prompts are explored, how memory is handled. But what's more interesting is what’s behind the hood: the model itself — this black box that is actually a material you can shape.”
During the webinar, Paz provided three levers through which a model designer shapes how an AI system behaves. The first is data: what the model was trained on determines its fundamental worldview, its assumptions, its blind spots. The second is context: the system prompts, instructions, and safeguards that steer behavior in real time, keeping it aligned with the brand’s intent in any given interaction.
The third is evals. Short for evaluations, these are the criteria used to measure whether the model is actually behaving the way you want it to. Does it respond with the right tone? Does it handle a sensitive situation with appropriate care? Does it stay on-brand under pressure? What makes evals significant is that writing them is a design act, not just an engineering one — they encode the values of the company. What counts as a correct answer? What counts as safe? What counts as on-brand? These are judgment calls that happen to sit inside what most people assume is a purely technical process.
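To make the idea concrete, here is a minimal, hypothetical sketch of what brand-voice evals could look like. Everything in it is an illustrative assumption: the banned phrases, the checks, and the stubbed `model_respond` function stand in for a real model call and a real eval suite.

```python
# Hypothetical sketch: evals as encoded judgment calls about tone and behavior.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # encodes what "correct" means for this case
    description: str

def model_respond(prompt: str) -> str:
    # Stand-in for a real model call; replace with your provider's API.
    return "We're sorry to hear that. A refund is on its way, and we'll make it right."

# Deciding that these phrases are off-brand is a design decision, not an engineering one.
BANNED_PHRASES = ["per our policy", "unfortunately"]

evals = [
    EvalCase(
        prompt="A customer says their order arrived damaged.",
        check=lambda r: not any(p in r.lower() for p in BANNED_PHRASES),
        description="Handles a sensitive situation without defensive language.",
    ),
    EvalCase(
        prompt="A customer asks for a refund.",
        check=lambda r: "refund" in r.lower(),
        description="Stays on-topic and addresses the request directly.",
    ),
]

results = {e.description: e.check(model_respond(e.prompt)) for e in evals}
for desc, passed in results.items():
    print(f"{'PASS' if passed else 'FAIL'}: {desc}")
```

The point of the sketch is that every `check` is a codified value judgment: someone had to decide what counts as defensive language and what counts as on-topic before any engineering happened.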
The model designer, like the Whisperer role in a previous innovation report, shapes the behavior of the model: how the system responds, how it decides, how it interacts with people. As Paz puts it, “we used to live in a world where we were mostly focusing on the interface, but now what matters most is how the system behaves underneath.”

Image: UNVEIL
Rise of the brand engineer
Last month, some of the brand practice leads met with the founder of Brand.ai, Michael Carter, for a demo of their product. Much like Thomas’ ShoSho presentation, Brand.ai exists in part because no one reads brand guidelines. They offer their clients a “brand operating system” that brings unstructured, distributed, and tacit information into a centralized intelligent interface. The product helps maintain brand consistency at scale, generate and validate creative assets, and align distributed teams.
Michael also introduced us to what he calls the brand engineer. As he puts it, “aligning your own teams and tools is only half the problem. AI-mediated discovery is rapidly becoming the primary way prospective buyers evaluate and compare options, which means your brand is being represented in conversations you’re not part of and aren’t actively shaping.” Even if a brand owns its adaptive experience, it still needs to influence how it shows up in third-party platforms, and the brand engineer is the role that makes that happen.
This comes down to what Thomas described as machine-actionable brands. Or in Michael’s words, “The models pull from whatever structured information they can find about your company. If those signals are vague or contradictory, you may not surface at all, or you may show up described in terms that don't match your actual positioning. If you want to control how you show up in those generated answers you need someone dedicated to making your brand machine-readable, whether you call them a brand engineer or not.”
Content authority with AEO/GEO
The brand engineer role is about influencing how a brand presents itself, but the same logic applies to an even more foundational question: can your content even be found, extracted, and cited by AI systems in the first place?
AEO/GEO were hot topics this past quarter as we helped current and prospective clients understand how to be surfaced and cited with authority in AI environments. Drawing on content drafted by Luis and James, we summarize the opportunity below.
Answer Engine Optimization and Generative Engine Optimization are about what happens after AI finds your content. AEO asks whether a machine can extract a specific piece of information and present it as a standalone answer. GEO asks whether an AI model synthesizing information from multiple sources will treat your content as trustworthy enough to cite. Both depend on content being structured as discrete, well-defined data rather than prose that only makes sense when a human reads the whole page.
This is largely a content modeling problem, and content modeling is something we’ve been doing well for twenty-three years. When a content model has structured fields (a title, a summary, practice areas, credentials), you can render a well-designed page for the human visitor and simultaneously output correct schema markup from those same fields. One model, two outputs. The schema becomes a byproduct of good architecture rather than something retrofitted afterward.
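The "one model, two outputs" idea can be sketched in a few lines. This is an illustrative example, not our implementation: the field names, the `Person` schema.org type, and both render functions are assumptions chosen to show the pattern of one structured entry feeding both a page and its markup.

```python
# Sketch: the same structured fields drive the human-facing page
# and the machine-readable schema.org JSON-LD markup.
import json

profile = {  # one entry from a hypothetical content model
    "title": "Jane Doe",
    "summary": "Partner focused on data privacy and technology transactions.",
    "practice_areas": ["Data Privacy", "Technology Transactions"],
    "credentials": ["J.D., Columbia Law School"],
}

def render_html(p: dict) -> str:
    # Output 1: the page a human visitor reads.
    areas = ", ".join(p["practice_areas"])
    return f"<h1>{p['title']}</h1><p>{p['summary']}</p><p>Practices: {areas}</p>"

def render_schema(p: dict) -> str:
    # Output 2: schema.org markup generated from the same fields.
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Person",
        "name": p["title"],
        "description": p["summary"],
        "knowsAbout": p["practice_areas"],
        "hasCredential": p["credentials"],
    }, indent=2)

print(render_html(profile))
print(render_schema(profile))
```

Because both outputs read from the same fields, the markup can never drift out of sync with the page, which is the sense in which schema becomes a byproduct of the architecture.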
The adjustment needed is relatively small: systematic schema generation added to our implementations, and intentionality during content modeling about whether a given model produces structured enough data to be machine-readable automatically. AEO and GEO aren't a new discipline we need to bolt on; they're a natural extension of how we already build.
The model designer, the brand engineer, and content modeling each help ensure that when AI mediates the relationship between a brand and its customers, the brand shows up with intention rather than by accident. The work of figuring out what that looks like in practice, for our clients and for ourselves, is the work ahead.
Practice lab highlights
There is so much R&D happening across the company right now, and that’s something to celebrate. Our goal is to help turn that experimentation into capabilities we can all benefit from, and the first step is making it visible. Here are a few recent highlights.
We had the pleasure of welcoming Unveil to our Paris office for a Gather& session. Co-founders Alexis Foucault, a former AREA 17 designer, and Tom-Jacques Perret shared their work, their approach, and how they've carved out a practice at the intersection of art, fashion, and music.
Thomas made the case that nobody reads brand guidelines anymore and showed us what happens when you stop trying to make a better PDF and start making guidelines interactive. His prototype is a functional brand agent: a chat interface that answers questions about the brand, a design tool that generates on-brand assets for non-designers, and a centralized knowledge base that feeds both.
Marius walked us through some of his latest Figma plugins and, more importantly, how to build your own. All the details are in the links below.
Tim Brook has set up a Claude plugin marketplace to centralize our skills, agents, MCPs, and hooks so we’re not locking them behind a workspace or copying them between projects. You add the marketplace once and pick what you need at any time. This isn't just for engineers! Anyone can use and contribute to it, so send him your GitHub username to get added. Further instructions to follow.
We met with Beaucoup Data to explore a strategic partnership, adding their specialized AI capabilities to the mix. The Montreal-based AI and data consultancy helps businesses in retail, e-commerce, fashion, and logistics turn data into measurable outcomes. We have a joint proposal with them out to Criterion—wish us luck!
To continue making R&D visible, today we're sharing a preview of the Practice Lab tracker: a simple, shared space for keeping up with experimentation happening across the company. If you're trying something new, log it by sending a message to @rnd in Slack. If you see something a colleague has shared that looks like experimentation worth tracking, forward it to @rnd and we’ll take it from there. The tracker is currently in preview, updates coming soon.
Until next time!

Image: UNVEIL
Colophon
Video home: Unveil
Cover images: Unveil
Typography: Suisse Int'l
Tools: Figma, Framer, Claude, NotebookLM, YouTube, Vimeo, SoundCloud, Descript