
Every website has answers. None of them know how to give one.

Reimagining enterprise search for the GPT era, where users expect answers, not pages.

Design Management · First-Principles Thinking · Stakeholder Alignment · UX/UI Design · Usability Studies · Product Thinking · Product Strategy

The Gap

A user looks for an answer.
The website gives them pages.

58.5%

of searches end with zero clicks

SparkToro, 2024

1.1%

average B2B SaaS website conversion rate

Electroiq, 2024

2.5B

daily queries now go to AI, not websites

OpenAI, 2025

The search begins

A user lands on an enterprise website with one specific question. They find the search bar, type it in, and wait. The spinner starts turning.

47 seconds about to be wasted

Pages. More pages.

Results load. A blog post. A whitepaper. A case study from 2021. A "Contact Sales" page. Not one of them answers the question. The user scans the list, confused.

Wrong page. Back. Repeat.

They click the most promising result. Scroll. Read. Wrong page. Hit back. Try another. The answer exists on this website. They cannot reach it.

Still no answer

Opens a new tab

Frustration peaks. The user stops trying. A new tab opens. ChatGPT. Perplexity. Claude. Same question, different surface.

The website just lost them

Same question. Typed in.

They type the exact same query into an AI assistant. The typing indicator appears. Two seconds. Not forty-seven.

Direct answer. Done.

A clear, direct answer appears. Satisfied, they close the tab. The enterprise never knew they were there. That question was theirs to answer. Someone else answered it.

User satisfied. Website never knew.
[Illustration: the user's path from acme-enterprise.com/search, past a blog post, a whitepaper, a case study, and a contact-sales page ("Not what I needed"), to a new tab open on ChatGPT, Perplexity, or Claude, where an AI assistant takes the same question.]

"How does enterprise pricing work?"

Enterprise plans start at $2,000/month, scaling with query volume. Includes dedicated onboarding, SLA support, and custom integrations. Free 14-day trial — no card required.

Direct answer. 2 seconds. Done.

The problem, stated plainly.

What Webless does

Webless embeds AI-powered search directly onto enterprise websites. A user types a question and gets a direct answer pulled from the company's own content using RAG and vector embeddings, delivered in seconds. No redirects. No page hunting. No dead ends.
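Mechanically, a pipeline like this has two halves: retrieve the most relevant chunks of the client's own content by vector similarity, then generate an answer grounded in those chunks. Below is a minimal sketch of the retrieval half, assuming content is chunked and embedded ahead of time; `Chunk`, `embed`, and `retrieve` are hypothetical names, not Webless's actual code.

```ts
// Minimal sketch of the retrieval half of a RAG pipeline, not Webless's
// actual implementation. Assumes the site's content has been chunked and
// embedded ahead of time; `embed` is a stand-in for a real embedding model.

type Chunk = { url: string; text: string; vector: number[] };

// Toy embedding so the sketch runs end to end; a real deployment
// would call an embedding model API here.
async function embed(text: string): Promise<number[]> {
  const v = new Array(64).fill(0);
  for (let i = 0; i < text.length; i++) v[i % 64] += text.charCodeAt(i) / 1000;
  return v;
}

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the k chunks most similar to the question; the generation step
// then writes an answer grounded in exactly these passages.
async function retrieve(question: string, index: Chunk[], k = 3): Promise<Chunk[]> {
  const q = await embed(question);
  return index
    .map(c => ({ c, score: cosine(q, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.c);
}
```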

The product sits in the MarTech and GTM stack. Its buyers are Growth and Demand Gen leaders at B2B companies whose websites carry years of content that users cannot find.

The answer was always there. Webless makes sure the user finds it before they leave.

My Role

Sole designer across two parallel workstreams over six months: the end-user search and answer experience embedded on client websites, and the client-facing admin panel that gave enterprise teams visibility and control over the product.

Partnered directly with the founder and frontend team. Ran usability studies, shaped product prioritisation, and owned the design decisions end to end, from which tools to build first to how each one shipped.

Outcomes

140%

Search discoverability increased across redesigned deployments

43%

Users who asked one question came back to ask a second

43s

Average session time for users who engaged with search

3X

Client onboarding speed improved after the component system was built

The Design

PART I

The Search

The entry point had to land in a narrow sweet spot: too passive and users never notice it; too aggressive and clients refuse to have it on their website.

Visible enough to earn user attention without demanding it.

Restrained enough to clear client approval and live on an enterprise website without a fight.

Six concepts explored

Concept 01 (shortlisted) · Concept 02 · Concept 03 · Concept 04 · Concept 05 · Concept 06

But the numbers told a different story.

Client A: 1.5% · Client B: 1.07% · Client C: 1.45% · Client D: 4.96% · Client E: 1.76%

Four of five clients came back below 2%. The variation was not random. It revealed which design variables mattered most.

Client D's rate was 3x higher, driven by placement, faster load time, and intent-driven users. That gap became the question worth answering.

Then we watched real users try it.

Chandhana, 25
Website Designer

Mistook the bottom bar for a cookie consent modal. Ignored it entirely until a different element caught her attention.

Oh, another one of those cookie things at the bottom. I never read those—they're always the same.

Visibility gap

Arunava, 28
Software Professional

"Ask Anything" — but ask what, exactly? No context for what the search could or should handle.

Ask anything... okay, but like, anything about what? Can I ask about pricing? Features? I don't know what's in scope here.

Label mismatch

Shubhang, 29
Product Professional

Found search only after dismissing a cookie popup that had been blocking it the entire time. Reacted with an "aaahhh."

Wait, that was there the whole time? Aaahhh! I kept looking for a search icon in the nav. The popup was blocking it!

Discoverability gap

Anoop, 28
General User

Went straight to the top nav. Expected search to live there. Never looked below the fold.

Where's the search? Usually it's up here... top right corner. That's where every website puts it. I didn't even scroll down.

Mental model mismatch
So we went back in. With four things we should have caught earlier.

Rotating questions → Type-in animation. Users did not notice the rotation happening; a typing motion catches peripheral attention without demanding it.

No top nav icon → Search icon in top nav. The icon had been cut to reduce client approval friction in V1, but users look in the nav first. Meet them where the mental model is.

Static prompt bar → Visual microanimations. Nothing signalled the bar was interactive; microanimations signal interactivity before the user acts.

No clear button → Clear button + UX hygiene. The input had only a basic state; clearing and resetting reduce friction once the user is inside the interaction.

None of these were radical redesigns. They were corrections — each one tracing back to something a real user showed us. The version that moved the numbers was quieter, not louder.

Experience that moved the needle
140%

Search discoverability increased across redesigned deployments

43%

Users who asked one question came back to ask a second

PART II
Question & Answer Experience

Designing the answer state

Getting a user to ask a question was only half the problem. What happened next had to make them trust the answer. The lightbox had to do four things at once: deliver a concise response, show the depth behind it, avoid feeling empty on first load, and give the user a reason to keep going.

Nine directions explored, each testing a different balance between those four. The same answer content sits inside every layout; what changes is the structure around it.

PART III
Webless Product

Designing for Webless's Customer

The admin panel gave enterprise clients direct ownership. Three tools, built in the order clients needed them.

Prove the product is working
How many users are engaging with search?
What questions are users asking most?
Which content is driving the most answers?
Is search performance improving over time?

Analytics

Turns search activity into evidence - the data clients needed to justify Webless before every renewal.

Trust what goes live on their website
Webless auto-classifies content - but is every page correct?
Some pages should never surface in search results
Classifications need human sign-off before going live
Edits and overrides need to be tracked, not lost

Content Manager

AI proposes. Clients approve, edit, or exclude. Nothing goes live without their say.

Own the answers their users receive
What are users actually asking on my site right now?
Some AI responses are incomplete or off-brand
Critical queries need guaranteed, hard-coded answers
The AI needs to learn what it does not know yet

Query Sandbox

Real queries, visible for the first time. Clients edit, override, or teach the AI - on their terms.

Analytics

The first tool built. Without visibility into search performance, clients had no way to see value - and the commercial argument collapsed before it could be made.

Analytics
Other Explorations: four alternative directions (Explorations 1–4).
Decisions + Debates
  • The early explorations were testing more than layouts - each one proposed a different definition of success. Engagement rate. Topic coverage. Funnel efficiency. The shortlisted direction refused to pick one: it showed the full journey from session to outcome and let the client decide which part to act on.
  • The Sankey surfaced something the other charts missed: users who opened a Webless session but never searched. Making that drop-off visible meant clients could see exactly where to focus, not just where things were working. That level of transparency became a trust signal at renewal.
  • The word cloud was dropped not because "Others" dominated, but because it was answering a different question than the one clients needed answered. Topic frequency without answer quality is noise. What a demand gen lead actually needs to know is: which topics are users asking about that our content does not answer well? That required a different data model, not a different chart (a minimal sketch of that model follows this list).
  • The color palette was a deliberate argument about who this product is for. Enterprise dashboards are gray by convention - designed for IT and ops. Webless buyers are Growth and Marketing leads. The colorful palette was a signal: this is a tool you chose, not infrastructure you inherited. It had to feel like a product decision, not a procurement one.
  • Every stat card was designed to point somewhere. Seeing an engagement number in isolation is reporting. Connecting it to the queries that drove it, the content that answered them, and the next action - that is the difference between a dashboard clients open at renewal and one they open every Monday.
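A minimal sketch of the data model that word-cloud bullet argues for: topic frequency joined with an answer-quality signal, so "asked often, answered poorly" becomes a sortable value. All names are illustrative, not the shipped schema.

```ts
// Hypothetical data model for "asked often, answered poorly": topic
// frequency joined with an answer-quality signal. Field names are
// illustrative, not Webless's schema.

type TopicStats = {
  topic: string;
  queries: number;      // how often users asked about this topic
  answeredWell: number; // how many of those got an answer above the quality bar
};

type TopicGap = TopicStats & { gapScore: number };

// Rank topics by unmet demand: high query volume, low answer quality.
function contentGaps(stats: TopicStats[]): TopicGap[] {
  return stats
    .map(s => ({ ...s, gapScore: s.queries - s.answeredWell }))
    .sort((a, b) => b.gapScore - a.gapScore);
}

// Example: "pricing" is asked 120 times but answered well only 40 times,
// so it outranks a lower-volume, well-covered topic.
console.log(contentGaps([
  { topic: "pricing", queries: 120, answeredWell: 40 },
  { topic: "integrations", queries: 60, answeredWell: 55 },
]));
```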

Query Sandbox

Real queries, real responses, visible for the first time. And for the first time, editable. The sandbox gave clients full editorial control over every element of the AI answer state - the response, the CTA, the content tiles, the related questions.

Decisions + Debates
  • The core tension in this tool was trust. Clients needed to feel confident about what the AI was saying on their website. But if the editing experience felt too heavy, they would either not use it or over-correct every response until the AI added no value at all. The design had to make editing feel like a safety net, not a maintenance burden. That is why every edit affordance is contextual and inline - you only see it when you are looking at the thing you want to change, not as a separate workflow you have to remember to run.
  • The suggested questions on the empty state were a deliberate first-impression decision. The empty sandbox is the most critical moment in onboarding - clients who see a blank input and have no idea what to type will close the tab and not return. Pre-populating with real queries from their own deployment reframed the blank state from intimidating to informative. It told clients what their users were already asking before the client had done anything.
  • The "Copy and paste from current response" shortcut looks minor. The reasoning behind it was this: a client staring at a blank "New Response" field will write something from scratch that reflects how they think the answer should sound, not how a user actually reads it. The AI response, even when imperfect, was generated from the actual content. Starting from it and editing preserves that grounding. The shortcut was a nudge toward refinement over replacement - and refinement keeps the AI in the loop rather than sidelining it entirely.
  • Breaking the answer state into independently editable pieces - response, CTA copy, content tiles, related questions - was a product scope decision with real tradeoffs. A single "edit this answer" flow would have been simpler to build and simpler to explain. But clients had very different things they needed to fix: some responses were accurate but the CTA was wrong; some tiles pointed to the right page but used outdated imagery. A monolithic editor would have forced clients to re-enter everything to change one thing. Granularity was the point (see the sketch after this list).
  • The drag-to-reorder on related questions was not a UX nicety. It was the moment clients understood that the order of questions shapes user behavior. The first related question gets clicked most. Giving clients control over that sequence meant they could use the sandbox to guide what users ask next - not just what they get told. That turned the tool from a quality control layer into a content strategy tool.
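A minimal sketch of how those independently editable pieces could be modelled: each piece pairs the AI-generated value with an optional client override, so editing one never disturbs the others. Names are hypothetical, not the production schema.

```ts
// Sketch of the granular answer state: every piece carries the
// AI-generated value plus an optional client override, so editing one
// piece never touches the others. Names are hypothetical, not the
// shipped schema.

type Editable<T> = { generated: T; override?: T };

type ContentTile = { url: string; headline: string; imageUrl: string };

interface AnswerState {
  response: Editable<string>;
  ctaCopy: Editable<string>;
  contentTiles: Editable<ContentTile[]>;
  // Order matters: the first related question gets clicked most,
  // which is why drag-to-reorder exists.
  relatedQuestions: Editable<string[]>;
}

// What the end user sees: the client override when one exists,
// otherwise the AI-generated value.
const resolve = <T>(field: Editable<T>): T => field.override ?? field.generated;
```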

Content Manager

800 pages, ingested automatically. The hard part was not indexing them - it was giving clients the confidence that the right ones were included, the wrong ones were excluded, and nothing had gone live without their review.

Decisions + Debates
  • The central design problem was scale without overwhelm. A client opening the Content Manager for the first time sees 800 rows. That number is not a problem to solve - it is the product working correctly. The design challenge was making 800 rows feel reviewable rather than insurmountable. Search, filters, and status grouping were the answer, but the key decision was defaulting the view to "needs review" rather than "all content." Clients who see only what requires action are far more likely to act than those who see everything at once.
  • Exclude is not the same as delete, and the design had to make that clear. When a client excludes a URL, they are telling Webless not to surface answers from that page - but the page still exists in the index and can be re-included later. The confirmation modal ("Are you sure you want to exclude Row ID 2819?") was not there to slow the client down. It was there because exclusions have downstream effects on answer quality, and clients who understood the stakes made better decisions than those who clicked through without thinking.
  • The "Mark as Gated" action was a late addition that came from a real client need. Enterprise websites often have content behind login walls - whitepapers, pricing pages, product documentation. When the AI indexed these pages, answers could reference content that required authentication to access. Gating a page flags it so the AI deprioritises it in responses. The design had to make this distinction visible in the table without cluttering every row.
  • The headline edit pattern follows the same logic as Query Sandbox - show current, allow replacement, provide a copy-from-current shortcut. The reason this mattered for Content Manager specifically is that AI-generated headlines are often technically accurate but tonally wrong. A client's brand voice is not something the AI can infer from a webpage. Giving clients the ability to override just the headline - without touching the rest of the classification - meant they could fix the surface without having to re-do the underlying work.
  • Bulk actions were the hardest thing to get right. The instinct was to add them early because clients reviewing 800 rows clearly need to act on multiple items at once. But bulk exclusion is risky - a misclick on "exclude all" for a filtered set could silently remove large parts of a client's content from the AI index. The design solution was to make selection state very visible and require a confirmation step for any bulk destructive action, while making bulk inclusion feel lighter. Asymmetric friction was intentional (sketched after this list).
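A minimal sketch of that asymmetric-friction rule, with illustrative names rather than the real admin-panel API: destructive bulk actions gate on an explicit confirmation that names the blast radius, while bulk inclusion passes straight through.

```ts
// Sketch of the asymmetric-friction rule for bulk actions; function
// names are illustrative, not the real admin-panel API.

type BulkAction = "include" | "exclude";

async function applyBulk(
  action: BulkAction,
  selectedIds: string[],
  confirm: (message: string) => Promise<boolean>,
  apply: (action: BulkAction, ids: string[]) => Promise<void>,
): Promise<void> {
  if (action === "exclude") {
    const ok = await confirm(
      `Exclude ${selectedIds.length} pages from the AI index? ` +
        `They can be re-included later, but answers stop drawing on them immediately.`,
    );
    if (!ok) return; // the destructive path requires explicit consent
  }
  await apply(action, selectedIds); // the inclusive path stays light
}
```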

Execution Handbook

Built to move fast, designed to last

When you are trying to convince clients that an AI tool is worth betting on, the product has to feel serious. Not just functional but considered, down to every detail. I mapped each component to its smallest moving part, defined exactly what a client could change and what we would never compromise on, then sat with the engineers until it shipped. A client could see Webless on their site and believe it belonged there.

The handbook covers five components: search bar, button, fields, lightbox, and search prompt.

Anatomy of the search prompt ("Ask anything about our products..."):
Variable per client: search icon color and size; text color, family, size, and weight; bar background, radius, border, and shadow; button background, icon, border, and radius; position from bottom; bar width.
Locked: touch target, animation timing, internal spacing, focus state.
The same structure holds across all five components; the full token spec lives in Figma.

Shipped as a client resource (Webless Handbook, v1.0): component variants, token specs, and customisation options for client onboarding.
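A hypothetical rendering of the handbook's variable/locked split as a token spec for one component; token names and values are illustrative, not the actual Figma spec. Per-client theming touches only the `variable` group; the `locked` group is what never gets compromised.

```ts
// Hypothetical token spec for the search prompt component. Only the
// `variable` group is client-themeable; `locked` values hold across
// every deployment. Names and values are illustrative.

const searchPromptTokens = {
  variable: {
    icon: { color: "#1A1A1A", size: 20 },
    text: { color: "#333333", family: "Inter", size: 15, weight: 400 },
    bar: {
      background: "#FFFFFF",
      radius: 12,
      border: "1px solid #E5E5E5",
      shadow: "0 2px 8px rgba(0,0,0,0.08)",
    },
    button: { background: "#0055FF", icon: "arrow-right", radius: 10 },
    position: { fromBottomPx: 24, barWidthPx: 560 },
  },
  locked: {
    touchTargetPx: 44,      // accessibility floor, never shrunk for styling
    animationTimingMs: 200, // consistent feel across all client brands
    internalSpacingPx: 12,
    focusState: "visible",
  },
} as const;
```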

The Outcomes

No single decision produced these. Each number traces back to a specific constraint taken seriously.

End-to-end system

4 → 12+

Clients in 6 months

Faster onboarding and a product that could absorb any client's brand meant the commercial pipeline moved significantly faster.

Search placement

85%

Discoverability increase

Experimenting with placement of the search bar, micro-animations, and intriguing rotating starter questions brought significantly more users to the search entry point.

Lightbox design

46%

Asked a second question

Suggested questions were purely contextual — generated from what the user just asked, not generic prompts. The answer state earned the next question.

Answer depth

43s

Avg. session with search

Sessions that included search lasted significantly longer. Users were reading, not bouncing.

Product signal

Signals

Surfaced by usage

Usage revealed chatbot flows, gated content, and form filling as new product directions.

Reflections

This engagement was my introduction to AI as a domain, not just a surface to design for. I came away with a working understanding of what happens behind the interface: how retrieval works, how confidence affects output, where the model's limits sit. That kind of knowledge changes how you approach any design solution.

Suyog, the CEO and founder, treated UX as a genuine product lever, not a finishing step. Working directly with him meant unconventional ideas had a fair hearing, and the best work came from exactly that. That kind of creative latitude does not come often, and the results reflect it.

Looking back, I would have structured the work differently from the start. Three distinct audiences (end users, clients, and the admin side) needed clearer separation early on. Running them in parallel under one workstream worked, but going deeper on each with its own design pass would have produced sharper results.

The one thing I would change operationally is investing earlier in documentation and prototyping handoffs. The collaboration with the frontend team was close, but iteration cycles took time that better tooling would have shortened. The workflows that exist today would have freed up energy for the harder design problems.

Supriya Vulivireddy

[email protected]
LinkedIn · Behance