Ellie.ai · 2025 – 2026 · Product Designer · Web

AI Agent for Enterprise Data Modeling

Designing a context-aware system that acts on models, metadata, and workflows — embedded directly where engineers and analysts work.

90% faster routine workflows · AI / ML · B2B SaaS · Web
Overview

The opportunity

The rise of generative AI created an opportunity to improve how enterprise data modeling workflows operate. Rather than launching a standalone chatbot, we embedded AI directly into core product workflows — helping users move from problem to resolution faster, explore models with greater confidence, and make complex source systems easier to understand and document.

Instead of a separate AI interface, the goal was workflow acceleration grounded in context and trust — AI that knows what you're working on, acts within it, and explains its reasoning.

Context

The real blocker isn't drawing entities

Enterprise data modeling is rarely blocked by the act of drawing entities. It is blocked by context — specifically, the lack of it. Across industry conversations, analyst forums, and customer interviews, the same themes surfaced repeatedly: getting started is difficult, understanding complex source schemas is overwhelming, and a significant portion of time is spent on repetitive metadata and documentation work.

Another critical barrier was trust. AI could accelerate workflows — but only if users trusted its output enough not to re-verify everything manually. Any AI-generated output had to be editable, explainable, and controllable.

My role

Leading design across the AI surface

I led the design initiative in collaboration with AI engineers, product managers, and core platform teams. The scope covered every AI-powered workflow in the product — from the first draft model generator to the conversational agent layer.

I defined AI-supported interaction patterns across modeling and discovery surfaces, designed the trust and transparency layer, and ensured the experience remained cohesive across the platform — regardless of which AI feature a user was interacting with.

Key insights

Two findings shaped everything

01 · Most delays happen before modeling
Context gathering and schema interpretation — not canvas work — consumed the majority of engineers' time. The blank canvas was just the visible symptom.

02 · Trust determines adoption
Users would not delegate to AI unless they could verify, edit, and override it. AI that felt like a black box was abandoned — even when the output was correct.
Typical data modeling project lifecycle
Solution

AI embedded in the workflow, not beside it

Rather than a standalone AI interface, we integrated intelligence directly into the surfaces where users were already working — the canvas, the source explorer, the documentation panel. Each AI feature was designed to reduce a specific cognitive load, not to be impressive for its own sake.

Tool-to-Model

From business requirements to first draft model

Users paste or describe business context and receive a structured conceptual model as a starting point. This eliminates the blank-canvas barrier and reduces time-to-first-draft from hours to minutes.

Lumi main interface — AI model generation
Canvas AI Copilot

Structural suggestions, live on the canvas

AI supports refinement directly within the live model — suggesting relationships, flagging structural inconsistencies, and offering stakeholder-friendly explanations of technical decisions without leaving the canvas.

Lumi Canvas — AI suggestions on the model canvas
AI Source Navigator

Semantic clarity for complex source schemas

Generates semantic metadata and clarifies technical schemas using sample data or metadata-only modes. It turns opaque source tables into documented, understandable assets — without requiring manual annotation.

Lumi Sources — AI source schema navigation
Lumi — AI Agent

Conversational agent acting on your data assets

Lumi works directly with models and source assets — creating entities, editing relationships, suggesting structures, and generating documentation based on full system context. It operates within the model, not just alongside it.

Lumi Chat — conversational AI agent
Enterprise Trust Layer

Privacy and control at the foundation

BYOK (Bring Your Own Key) configuration and strict data boundary enforcement ensured enterprise-grade privacy. Organizations could connect their own AI providers and define exactly what data was exposed — making adoption viable even in regulated industries.
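The data-boundary idea above can be sketched as a small configuration object. This is an illustrative sketch only — the names here (`AIProviderConfig`, `allowed_fields`, `redact`) are hypothetical and not Ellie.ai's actual schema; it simply shows the pattern of an allowlist that decides which metadata fields may cross the boundary to an external AI provider.

```python
from dataclasses import dataclass, field

@dataclass
class AIProviderConfig:
    # Hypothetical BYOK configuration: the organization supplies its own
    # provider and a reference to its key, and declares which metadata
    # fields may leave the data boundary.
    provider: str                 # e.g. "azure-openai" (illustrative value)
    api_key_ref: str              # reference to a secret store, never the raw key
    allowed_fields: set[str] = field(default_factory=set)

    def redact(self, record: dict) -> dict:
        """Strip any field not explicitly allowed before it is sent out."""
        return {k: v for k, v in record.items() if k in self.allowed_fields}

# Usage: only schema metadata crosses the boundary; sample values do not.
cfg = AIProviderConfig(
    provider="azure-openai",
    api_key_ref="vault://ai/prod-key",
    allowed_fields={"table_name", "column_name", "data_type"},
)
payload = cfg.redact({
    "table_name": "orders",
    "column_name": "email",
    "data_type": "varchar",
    "sample_value": "a@b.com",   # stays inside the boundary
})
```

In this sketch, `payload` contains only the three allowlisted fields, which is the behavior the trust layer described above guarantees at the platform level.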

Lumi Providers — enterprise trust layer
Impact

What changed for users

Results are based on internal usage data, time-tracking analysis, and user satisfaction surveys.

What I learned

AI delivers value when it disappears into the work

Embedded beats standalone
AI features that lived inside existing workflows saw adoption rates far above anything that required users to switch context to a separate AI interface.
Context is the product
The quality of AI output was almost entirely a function of how much context it had. Investing in context-gathering surfaces was as important as the model itself.
Control builds trust
Users adopted AI faster when they felt in control — not when outputs were most impressive. Editability, transparency, and override mechanisms were essential design requirements.
Reduce cognitive load, don't add novelty
Features that removed a specific friction — documentation, schema interpretation, blank canvas — succeeded. Features designed to showcase AI capability without a clear job-to-be-done were abandoned.