Beyond Bespoke: How AI Turns Component Libraries Into Adaptive Systems

By Daniel Cress

Every time we build a new feature, we follow the same ritual: design mockups, build custom components, wire up state management, test variations, deploy. Then the requirements change slightly, and we do it all again. A dashboard for managers needs different cards than one for employees. A mobile form needs different layouts than desktop. A beginner's view needs simpler options than an expert's.

We've gotten really good at building bespoke experiences. Maybe too good. We've optimized the process of creating unique interfaces for every context, every user type, every edge case. But what if we're solving the wrong problem?

What if instead of building better tools for creating variations, we built systems that generate variations on demand? What if our component libraries could adapt themselves based on context, using the same primitives we already have?

This isn't about replacing Nuxt UI or Shadcn or Radix. It's about teaching them to compose themselves.

What We Built at Bambee

At Bambee, we built a system that generates contextual UI based on vast amounts of workplace data—performance metrics, compliance requirements, employee sentiment, turnover patterns, organizational health indicators. The challenge wasn't just showing data; it was surfacing the right insights with the right actions at the right time.

A manager dealing with high turnover needs different recommendations than one managing a stable team. An employee in their first month needs different guidance than a three-year veteran. A compliance alert for California employment law requires different visualizations than one for federal OSHA requirements.

We started down the familiar path: custom components for each scenario. But the combinatorial explosion quickly became clear. Dozens of card types. Countless variations of forms. Complex conditional logic everywhere.

So we tried something different.

Dynamic Cards and Notices

The system analyzes company data and generates recommendations—we call them solutions and notices. Each one has different severity levels, different actions users can take, different visualizations to make the data clear.

One user might see a compliance alert with a bar chart showing policy gaps across departments. Another sees a performance insight with a timeline visualization of team productivity trends. A third sees a recognition opportunity with a simple progress indicator.

Here's the key: we're not building separate components for each type. We're using the same card component, the same underlying UI primitives from our component library. What changes is the structured data that drives them.

The system generates a schema-compliant payload that describes what to show, how to show it, and what actions should be available. The frontend trusts that structure and renders accordingly.

Dynamic Wizards

The second use case was even more interesting: multi-step wizards that adapt to context.

The system detects information gaps—missing data that would improve recommendations. Based on what's missing and how critical it is, it generates a wizard to collect that information.

Sometimes it's a simple two-step survey: "What's your management philosophy? How large is your team?" Other times it's a comprehensive five-step wizard with conditional logic: if you answer yes to one question, you see follow-up questions; if you answer no, you skip to the next section.

The question types change dynamically. Sliders for rating scales. Checkbox groups for multiple selections. Matrices for comparing options across criteria. Date pickers for timelines. All generated from schemas, all rendered by the same form components.

We're not pre-building every possible wizard variation. We're defining the contract—what a wizard can contain, what question types are valid, how steps can be arranged—and letting the system compose variations on the fly.

The Core Principles

Building this taught us some things about how to structure systems for dynamic UI generation. These aren't prescriptive rules, just patterns that worked for us.

Schema as Contract

In traditional development, you design an interface, build the component, then wire it to data. The component defines what's possible.

In schema-driven development, you define the contract first. The schema is the source of truth. It describes what shapes of data are valid, what fields are required, what values are acceptable.

The backend generates data conforming to that schema. The frontend trusts the structure and renders accordingly. Neither side makes assumptions beyond what the schema guarantees.

This inverts the usual relationship. Instead of data conforming to components, components adapt to data (within the bounds of the schema).

A simple example:

// Schema defines possibilities
type ActionCard = {
  type: 'ACTION_CARD'
  severity: 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL'
  action: {
    type: string
    label: string
    handler: { route: string; params: Record<string, unknown> }
  }
  visualization?: {
    type: 'BAR' | 'LINE' | 'PIE'
    data: Record<string, unknown>
  }
}

// Backend generates instance based on context
const generatedCard = analyzeContext(userData)

// Frontend renders using component library primitives
<CardSlot data={generatedCard} />

The schema is doing a lot of work here. It's defining not just data types, but the vocabulary of what interfaces can express. Add a new visualization type to the schema, teach the frontend to render it, and suddenly all generated cards can use it.

Type Safety Through Validation

The critical enabler for this approach is runtime validation. For us, that means Zod, but the principle applies to any schema validation library.

Here's why it matters: AI generates JSON. That JSON must match the exact structure your frontend expects. If it doesn't, you get runtime errors, broken UI, frustrated users.

With runtime validation, you create a feedback loop. AI generates output. You validate it immediately against your schema. If it fails validation, you send the errors back to the AI and ask it to regenerate.

The pattern looks like:

  1. AI generates structured output based on context
  2. Validate against schema immediately
  3. If invalid → capture specific validation errors
  4. Send those errors back to AI with original context
  5. AI regenerates, accounting for what went wrong
  6. Retry with exponential backoff (2-3 attempts max)
  7. If all retries fail → fallback to safe default

This creates remarkably reliable output. The AI learns your schema requirements through the error messages. After a few iterations of improving prompts and tightening schemas, validation failures become rare.

The validation itself is straightforward:

try {
  // parse throws if aiOutput doesn't match the schema
  const validated = CardSchema.parse(aiOutput)
  return validated
} catch (error) {
  // Feed the specific validation errors back to the AI and retry
  return await retryWithRefinement(aiOutput, error)
}

But the implications are profound. Your frontend never sees invalid data. Type safety is enforced at runtime. Breaking changes to schemas are caught immediately, not in production.

Slots Over Specifics

We stopped building <ComplianceCard>, <PerformanceCard>, <OnboardingCard>. Instead, we built <ActionSlot> that accepts structured data and routes to appropriate presentation.

The slot examines the schema, determines which component primitives to use, and composes the final UI.

A compliance alert might render as: red badge with alert icon, bar chart visualization, "Review Policy" button that routes to policy creation. A performance insight might render as: blue badge with trend icon, line chart visualization, "View Details" button that opens a details modal.

Same slot. Same underlying Button, Card, Badge, and Chart components from our component library. Different compositions based on the schema data.

This is more than just abstraction. It's a different mental model. You're not building components for specific use cases. You're building a rendering engine that interprets schemas and composes UI from primitives.

The power comes from the mapping layer. It reads the schema and makes decisions:

  • What badge color and icon represent this severity level?
  • Which chart component matches this visualization type?
  • What button variant and text match this action type?
  • How should these elements be arranged for this context?

As you add more schema types, the mapping layer grows. But the underlying components stay the same.
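As a sketch, the mapping layer for the card example might look like the following. The specific colors, icons, and button variants are illustrative assumptions, not our actual mapping:

```typescript
type Severity = 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL'

// Hypothetical mapping from schema severity to presentation decisions
const severityPresentation: Record<Severity, { color: string; icon: string }> = {
  LOW: { color: 'gray', icon: 'info' },
  MEDIUM: { color: 'blue', icon: 'trend' },
  HIGH: { color: 'orange', icon: 'warning' },
  CRITICAL: { color: 'red', icon: 'alert' },
}

// Reads the schema data and decides which primitives to compose
function mapCardToProps(card: {
  severity: Severity
  action: { type: string; label: string }
}) {
  return {
    badge: severityPresentation[card.severity],
    buttonVariant: card.severity === 'CRITICAL' ? 'solid' : 'outline',
    buttonLabel: card.action.label,
  }
}
```

The slot component then hands these props to the library's Badge and Button primitives; the mapping is the only place that knows how schema values translate to visuals.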

Visualization as Configuration

Instead of building separate chart components for every context, we treat visualizations as configuration.

The backend sends structured data describing what to visualize and how:

{
  visualizationType: 'BAR',
  data: {
    labels: ['Q1', 'Q2', 'Q3', 'Q4'],
    values: [23, 45, 67, 89]
  },
  theme: 'minimal',
  options: { showLegend: false }
}

The frontend has a factory function. It looks at the visualization type and says, "BAR chart? Render the BarChart component from our library with these props."

This means adding a new visualization type is a small change. Add the type to your schema, teach the factory to handle it, and now every part of your system that generates visualizations can use it.
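A minimal version of that factory, with stand-in names for the chart components, might look like this:

```typescript
type VisualizationConfig = {
  visualizationType: string // 'BAR' | 'LINE' | 'PIE' in the schema
  data: { labels: string[]; values: number[] }
  options?: { showLegend?: boolean }
}

// Stand-ins for the component library's chart components
const chartComponents: Record<string, string> = {
  BAR: 'BarChart',
  LINE: 'LineChart',
  PIE: 'PieChart',
}

function resolveChart(config: VisualizationConfig) {
  const component = chartComponents[config.visualizationType]
  // Unknown types are rejected at runtime rather than rendering broken UI
  if (!component) {
    throw new Error(`Unknown visualization type: ${config.visualizationType}`)
  }
  return { component, props: { data: config.data, ...config.options } }
}
```

Adding a new chart type is one new entry in the lookup table plus the corresponding schema change.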

We extended this with a vector database of illustrations. The system can describe the visual context it needs—"performance improvement scenario with upward trend"—then query embeddings to find the closest matching illustration from our library. No manual asset selection. Just semantic matching between generated context and available visuals.

Dynamic Wizards from Schemas

Multi-step forms become declarative data structures.

A wizard schema defines:

  • How many steps
  • What questions appear in each step
  • Question types and validation rules
  • Conditional display logic
  • Progress indicators and navigation

The frontend loops through the schema and renders appropriate input components. A question with type 'SLIDER' renders your library's slider component. A question with type 'CHECKBOX' renders a checkbox group.

The power is in the conditional logic. Questions can show or hide based on previous answers. Entire steps can be skipped based on user context. Validation rules can reference other questions.

All described in the schema. All rendered by generic form components.

Adding a new question type means updating the schema to include it and teaching one component to render it. Not rebuilding every wizard that might use it.
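The conditional display logic can be sketched as a pure function over the schema and the answers collected so far. The `showIf` shape here is an assumption for illustration, not our exact schema:

```typescript
// A hypothetical wizard question whose visibility can depend on a
// previous answer
type Question = {
  id: string
  type: 'SLIDER' | 'CHECKBOX' | 'TEXT'
  showIf?: { questionId: string; equals: unknown }
}

// The renderer calls this after every answer to reshape the form
function visibleQuestions(
  questions: Question[],
  answers: Record<string, unknown>,
): Question[] {
  return questions.filter(
    (q) => !q.showIf || answers[q.showIf.questionId] === q.showIf.equals,
  )
}
```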

This is where it clicked for me: we weren't building forms anymore. We were building a form generation engine.

The Future Vision

This is where it gets interesting. What we built at Bambee is a proof of concept. It works in production, handles real complexity, serves real users. But it's just scratching the surface of what's possible.

Let me paint some pictures of where this could go. To illustrate these concepts without diving into proprietary specifics, I'll use a hypothetical recipe and meal planning platform as a concrete example—but these patterns apply across any domain with similar variability.

Pages That Generate Themselves

Imagine you're building a recipe and meal planning platform. A user opens the app and says, "I want to plan meals for the week."

The system analyzes their context:

  • Family size and ages
  • Dietary restrictions and preferences
  • Available cooking time
  • Current pantry inventory
  • Cooking skill level
  • Budget constraints
  • Past meal preferences

From this, it generates a complete page schema. Not a page that exists in your codebase. A page composed on the fly:

A three-column dashboard layout. Left column shows a weekly calendar with meal slots, each slot showing prep time and ingredient overlap with other meals. Center column shows a shopping list organized by grocery store section, with cost estimates and substitution suggestions. Right column shows a nutrition summary chart aggregating the week's macros, plus a comparison to their goals.

At the bottom, an action row with context-aware buttons: export meal plan to calendar, send shopping list to grocery app, adjust for budget, regenerate for more variety.

The frontend receives this schema and composes it using your component library's Grid, Card, Calendar, List, Chart, and Button components. The page never existed before this moment. It was assembled because this specific user, in this specific context, needed this specific combination of features.
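A hypothetical payload for such a page might look like the following; every field name here is invented for illustration:

```typescript
// An imagined generated-page schema for the meal-planning example
const mealPlanPage = {
  layout: { type: 'GRID', columns: 3 },
  regions: [
    { slot: 'left', component: 'Calendar', props: { view: 'week' } },
    { slot: 'center', component: 'List', props: { groupBy: 'storeSection' } },
    { slot: 'right', component: 'Chart', props: { type: 'BAR', metric: 'macros' } },
  ],
  actions: [
    { label: 'Export to calendar', handler: { route: '/export' } },
    { label: 'Regenerate for variety', handler: { route: '/regenerate' } },
  ],
}
```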

That's the radical shift: from pages as files in a repository to pages as generated compositions. The repository contains the components and the schemas. The combinations emerge from context.

Context-Aware Form Adaptation

Same underlying data, completely different form based on who's using it.

A beginner user gets a simplified recipe entry form: basic fields, lots of help text, suggested defaults, links to video tutorials explaining cooking terms. Single-column layout, large touch targets, progress saved after every field.

An advanced user gets the compact version: all fields visible, technical terminology, advanced options like ingredient ratios and technique variations, keyboard shortcuts for quick entry. Multi-column layout, minimal explanatory text, batch editing capabilities.

The system decides which variation to show based on user behavior analysis. Not A/B testing. Not user segments. Individual adaptation.

And it goes deeper. Forms that adapt mid-flow based on answers. User selects "dietary restriction: vegan" → the form immediately hides all questions about meat preparation, adds questions about B12 supplementation, adjusts the nutrition target ranges, suggests vegan protein sources in the ingredient picker.

The form is responding to context in real-time, reshaping itself to show what's relevant and hide what isn't.

Adaptive Complexity

Interfaces that scale complexity based on user sophistication, not just hiding advanced features behind a settings toggle.

A recipe platform might show the same dish completely differently based on skill level. Beginners see: "Sauté the onions until soft" with a photo showing what "soft" looks like and a link to a basic sautéing video. Intermediate cooks see: "Sauté onions in butter over medium heat until translucent, 5-7 minutes" with suggested pan types and heat settings. Advanced cooks see: "Sweat onions in clarified butter, 82°C, until cell walls break down but no Maillard reaction occurs—monitor for steam release, not browning" with technique alternatives like using an immersion circulator for precise temperature control.

The ingredient list adapts too. Beginners see standard grocery store items. Intermediate cooks see preferred brands and substitution options. Advanced users see specific varieties ("Vidalia onions for sweetness, or shallots for depth"), quality indicators, and even molecular composition notes for technique-critical ingredients.

Same recipe. Same underlying data. But the interface reveals layers of technical sophistication progressively, matching what each user can handle and wants to see.

The system watches behavior. User consistently completes advanced recipes without issues? Start showing more complexity. User struggles with intermediate recipes? Pull back to basics.

This isn't just hiding fields or showing tooltips. It's fundamentally different interfaces generated for different capability levels, all from the same underlying schemas.

Cross-Domain Schema Standards

Here's where my mind goes to interesting places.

What if schema patterns became standardized across domains?

An ACTION_CARD schema could power meal suggestions in a recipe app, workout recommendations in a fitness app, budget alerts in a finance app, treatment plan updates in a healthcare app. Different domains, same structure: context analysis → recommended action → visualization → user choice.

A WIZARD schema could power dietary preference surveys in a recipe app, goal-setting wizards in a fitness app, budget creation flows in a finance app, symptom checkers in a healthcare app. Same multi-step structure, same question types, same conditional logic patterns.

Your component library becomes a universal renderer for structured intents. You build the primitives once—cards, forms, charts, buttons—and they work across every domain that speaks the schema language.

This is bigger than code reuse. It's conceptual reuse. The patterns for how to structure adaptive UI become portable knowledge, not locked into specific implementations.

Imagine open schema standards for common UI patterns. Like how we have standard HTTP methods or standard database query languages, we could have standard schemas for "action recommendation with visualization" or "multi-step data collection with conditional logic."

Build a great implementation once, use it everywhere.

The Self-Assembling Application

Push this to its logical endpoint: applications that materialize from intent.

You describe what you want in natural language: "I want to help users plan healthy meals on a budget with easy recipes they can actually make."

The AI generates schemas for:

  • Data models (user profiles, recipes, pantry items, meal plans)
  • UI patterns (dashboard layouts, recipe cards, planning wizards)
  • Action types (save recipe, generate shopping list, track spending, suggest substitutions)
  • Visualizations (nutrition charts, budget tracking, ingredient freshness timelines)

You review the schemas, refine them, approve them. The frontend already knows how to render any valid schema. The backend already knows how to validate and store schema-conforming data.

What you're doing is no longer building features. You're curating and refining schemas. The system does the composition.

This sounds far-fetched until you realize we're already doing pieces of it. Code generation tools are getting better. Schema validation is mature. Component libraries are comprehensive. AI understands context remarkably well.

We're just connecting the pieces.

Personalization Without Fragmentation

Traditional personalization means building variants: "Here's the health-focused version. Here's the budget-focused version. Here's the time-saving version."

You end up managing three codebases pretending to be one. Changes require updating all variants. Testing multiplies. Maintenance becomes painful.

Schema-driven personalization means generating the optimal interface for each user from the same underlying system.

User A cares about health. Their recipe cards prominently display nutrition information, macro breakdowns, ingredient quality scores, health impact summaries. The charts show nutrient density. The recommendations prioritize nutritional completeness.

User B cares about time. Their recipe cards show prep time first, active vs. passive time breakdowns, make-ahead options, batch cooking opportunities. The charts show time saved through meal prep. The recommendations prioritize efficiency.

User C cares about budget. Their recipe cards show cost per serving, bulk buying opportunities, seasonal ingredient savings, leftover utilization. The charts show cost comparisons. The recommendations prioritize affordability.

Same data. Same component library. Same schemas. Infinitely variable presentation based on what matters to each individual user.

No variant management. No A/B test fragments. No "which version am I in?" confusion. Just contextual generation.

And because it's schema-driven, you can combine dimensions. User D cares about both health and budget. They get nutrition data weighted by cost-effectiveness. User E cares about time and is also vegan. They get quick recipes that happen to be plant-based, not "vegan recipes" as a special category.

The combinations emerge from the generation logic, not from pre-built variants.

The Trade-offs

This sounds exciting, and it is. But it's not free. The complexity moves around; it doesn't disappear.

Schema Management Becomes Critical

Your schemas are your contract. They're what enables the whole system to work. Which means changing them is like changing your API.

You need versioning strategies. How do you evolve schemas without breaking existing data? How do you migrate old schema instances to new versions? How do you deprecate schema fields while maintaining backward compatibility?

You need synchronization mechanisms. The backend that generates schemas and the frontend that renders them must stay aligned. A mismatch means broken UI or validation errors.

You need discovery tools. As schemas multiply, developers need ways to find them, understand them, know which one to use for which scenario. Documentation becomes more important, not less.

This is real work. You might need a schema registry, similar to what you'd use for event-driven architectures. You might need automated testing that validates schema compatibility across versions. You might need tooling to generate TypeScript types from schemas and keep them in sync. Tools like Storybook become invaluable—you can document not just components, but how those components render different schema variations. Each story becomes a living example of "here's this schema shape, here's how it renders," making it easier for developers to understand which schema to use for which scenario.

Schema management is now a first-class concern in your architecture.

Database Structure Gets Messy

The relational database purist in you will not like this.

Instead of neat columns with proper foreign keys and database-level constraints, you end up with JSONB blobs. The validation that would normally happen at the database layer moves to application code. Querying becomes harder—you can't easily "show me all high-priority compliance actions" when that data is buried in JSON.

There are workarounds. Hybrid models where you extract commonly-queried fields to columns and keep the dynamic data as JSON. PostgreSQL's JSONB type with indexes on frequently-accessed paths. Generated columns that pull specific values out of JSON for querying. Materialized views for complex queries that need to run often.

But it's more complex than traditional relational design. You're trading structure for flexibility.

The database is no longer your source of truth for what data is valid. Your schemas are. The database is just storage.

This is a philosophical shift as much as a technical one.

Debugging Becomes Archaeology

When something renders wrong, you can't just look at a component file and see what's broken.

You need to trace: What was the user's context? What did the AI decide based on that context? What schema did it generate? Did it pass validation? How did the mapping layer interpret it? Which component got selected? What props were passed?

That's a lot of layers between "user saw wrong thing" and "here's the bug."

You need comprehensive logging at each step. You need replay tools that can take a saved context and schema and show you exactly how it rendered. You need schema versioning so you know which version of which schema generated which UI at which time. You need visibility into AI decision-making—why did it choose this action type over that one?

Debugging a static component is straightforward. Debugging a generated interface is detective work.

The tooling helps, but the conceptual overhead is real.

Testing Strategy Becomes Critical

Testing dynamically generated UIs requires a different approach than testing traditional components. You can't just write unit tests for a component and call it done—the component itself might be simple, but the schema that drives it can vary infinitely.

Our testing strategy has three layers:

Schema validation tests are the foundation. Every schema gets comprehensive validation tests that verify the structure itself is correct. These tests catch issues like missing required fields, invalid enum values, or type mismatches. We treat schemas as first-class code artifacts with their own test suites.

Contract tests verify the relationship between schemas and components. Given a valid schema, does the component render without errors? We maintain a library of example schemas—edge cases, common patterns, minimal valid schemas—and run them through the rendering pipeline. This catches breaking changes when either schemas or components evolve.

Integration tests validate the full generation pipeline. We mock the AI responses with known schemas, then verify the entire flow: context analysis → schema generation → validation → rendering. This ensures the retry logic works, fallbacks activate correctly, and error boundaries catch failures gracefully.

We rely heavily on snapshot testing for the rendered output. When a schema changes, snapshot tests highlight exactly what UI changes resulted. This gives us confidence that schema evolution doesn't break existing interfaces unexpectedly.

The biggest shift is acceptance that you can't test every possible variation. Instead, you test the boundaries: minimum valid schemas, maximum complexity schemas, common patterns, and known edge cases. Type safety through Zod catches most issues at compile time, and runtime validation catches the rest before users see them.
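The contract-test layer can be sketched as a harness that feeds every example schema through the rendering pipeline, with the pipeline abstracted as a function (names are illustrative):

```typescript
// An example schema with a name so failures are easy to report
type Example = { name: string; schema: unknown }

// Runs every example through the render pipeline; any throw is a
// contract violation between the schema library and the components
function runContractTests(
  examples: Example[],
  render: (schema: unknown) => unknown,
): { passed: string[]; failed: string[] } {
  const passed: string[] = []
  const failed: string[] = []
  for (const example of examples) {
    try {
      render(example.schema)
      passed.push(example.name)
    } catch {
      failed.push(example.name)
    }
  }
  return { passed, failed }
}
```

In practice the example library covers minimal valid schemas, maximum-complexity schemas, and known edge cases, so schema or component changes surface as named failures.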

Error Handling Multiplies

More moving parts means more things that can go wrong.

AI generation can fail: network timeout, rate limit, malformed response. Validation can fail: the AI generated something that doesn't match the schema. The retry loop can exhaust: after three attempts, still no valid schema. Rendering can fail: unknown schema type, missing required data, component error.

You need graceful fallbacks for every failure mode.

  • Default schemas that are safe to show when generation fails.
  • Error boundaries in the frontend that catch rendering failures and show something useful.
  • Monitoring and alerting for validation failures, so you know when your prompts or schemas need adjustment.
  • User-friendly error states: "We're having trouble generating recommendations right now, here's what we suggest generally..."

Every dynamic system trades simplicity for resilience engineering.

Performance Considerations

AI generation adds latency. LLM API calls can take seconds. You can't generate fresh schemas on every page load.

You need strategies: pre-generation for common scenarios, caching generated schemas by context hash, background generation with optimistic UI, hybrid approaches where you show static defaults while dynamic enhancement loads, edge computing to move generation closer to users.

But you're adding complexity to maintain responsiveness. The simple "render component with props" is now "check cache, maybe generate, validate, render, handle failure cases."

The performance budget gets spent differently.

Team Learning Curve

This is a different way of thinking about UI development.

Your team needs to understand schema design—what makes a good schema, how to evolve them, how to version them. They need to understand runtime validation and how to write schemas that AI can reliably generate. They need to understand slot-based architecture and mapping layers. They need to understand the trade-offs between flexibility and predictability.

Not every team wants this complexity. For simple, predictable applications, it's overkill. The traditional component-per-feature approach works fine when features are truly unique and don't follow patterns.

You're choosing a different set of problems. More upfront design work on schemas, less repetitive component building. More system-level thinking, less feature-level implementation.

It's not objectively better. It's different. And it requires buy-in.

When Does This Make Sense?

So when should you actually consider this approach?

Good fit:

You're building something with high variability in user contexts or data types. The same underlying features need to look different for different users, different roles, different situations.

You have frequent new requirements that follow similar patterns. Not "build a completely new feature," but "this feature needs to work slightly differently for this new context."

Your domain is one where AI can make intelligent contextual decisions. There are patterns to learn, data to analyze, reasonable inferences to make.

Your team is comfortable with schema-driven development. Or willing to learn. And has the capacity to manage the additional architectural complexity.

Your users benefit meaningfully from personalized, adaptive experiences. The variability actually matters to them; it's not just engineer preference.

Bad fit:

You have pixel-perfect design requirements. Brand campaigns, marketing sites, anything where the exact visual presentation is non-negotiable. Schema-driven generation gives you flexibility, not precision.

Your feature set is predictable and stable. If you're building ten truly unique features with no pattern between them, there's no schema to extract. Just build the ten features.

You have a small team without capacity for schema management. The overhead might outweigh the benefits.

You're working on performance-critical real-time interactions. The latency of generation and validation might not be acceptable.

You have regulatory requirements for fixed UI flows. Sometimes the law requires specific workflows in specific orders. Dynamic generation adds compliance risk.

The litmus test: If you find yourself building slight variations of the same component over and over—same structure, different data, different actions, different styling—you're a candidate. You have patterns worth extracting into schemas.

If every feature is truly unique, truly bespoke, you're probably not.

Components Learning to Think

Component libraries aren't dead. They're evolving.

We're moving from tools we manually compose to systems that compose themselves based on context.

What we built at Bambee is proof that this works in production. Dynamic wizards that adapt to what information is missing. Adaptive cards that surface different insights for different users. Contextual visualizations that show what matters most. All using our component library—Nuxt UI—just arranged by the system instead of by developers.

But it's early. The schemas are still simple. The generation logic is straightforward. The mapping layers are manageable.

What comes next is more interesting.

More sophisticated schemas that can express complex layouts, responsive variations, accessibility requirements, animation preferences. Cross-domain schema standards that let us share UI patterns across completely different applications. Entire pages assembled from intent rather than files. Vector-powered asset selection that makes every interface feel custom. Progressive complexity that adapts to user capability in real-time. Mass personalization without the maintenance burden of variants.

The vision: developers define schemas and provide component libraries. AI generates optimal UI for each user's context. Users see interfaces that feel custom-built for them. No custom building required.

This isn't replacing the craft of UI development. It's augmenting it. Moving us from pixel-pushing to pattern-defining. From building variations to building systems that generate variations. From asking "how do I build this interface?" to asking "how do I describe the space of possible interfaces?"

The future of UI development might not be about building interfaces. It might be about building the systems that build interfaces.

Component libraries aren't dying. They're learning to think.