Case Study — OptaHub
AI-Powered Product Search & Chatbot Platform
How we built an intelligent product discovery engine that transformed OptaHub's customer experience.
- 65% faster product discovery: average search time dropped from 7 minutes to 2.5 minutes
- 28% increase in conversion rate: more searches became purchases
- 99.99% system uptime, even during massive catalog updates
- <300ms response time: an instantaneous user experience
The Challenge
OptaHub, a leading promotional products distributor, faced a critical challenge: their product catalog had grown too vast for traditional search methods. With thousands of products across multiple suppliers, customers struggled to find what they needed.
- 40% of search sessions ended without a purchase
- 7 minutes average time spent searching per product
- 60% of sales team time spent manually searching
- 15% of the catalog effectively invisible
Their existing search solution was basic keyword matching — it couldn't understand context, learn from conversations, or handle the nuanced way customers describe products. When a customer asked for “red polo shirts under $30 for a corporate event,” the system would miss half the intent.
OptaHub needed more than just search — they needed an intelligent product discovery platform.
The Solution: An AI-Powered Product Discovery Engine
We partnered with OptaHub to build a sophisticated AI chatbot platform that fundamentally reimagined how customers interact with their product catalog.
The Core Innovation
Instead of treating product search as a simple database query, we created a conversational AI layer that understands context, learns from interactions, and delivers remarkably accurate results through:
- Semantic Understanding — moving beyond keywords to grasp product intent
- Contextual Awareness — remembering conversation history to refine results
- Intelligent Reranking — using LLMs to order results by true relevance
- Zero-Downtime Updates — continuously refreshing product data without interrupting service
The Architecture: Built for Scale and Intelligence
The Three-Layer Intelligence Stack
Layer 1: Semantic Understanding Layer
We implemented vector embeddings using OpenAI's latest models to transform every product description into a mathematical representation. This allowed the system to understand that “comfortable summer wear” and “breathable t-shirts” are semantically related — even when they share no keywords.
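Under the hood, "semantically related" means the vectors' cosine similarity is high. A minimal, self-contained sketch with toy four-dimensional vectors (real text-embedding-ada-002 vectors have 1,536 dimensions, and the phrase vectors here are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": related phrases land close together in vector space.
summer_wear = [0.8, 0.6, 0.1, 0.0]     # "comfortable summer wear"
breathable_tee = [0.7, 0.7, 0.2, 0.1]  # "breathable t-shirts"
office_chair = [0.0, 0.1, 0.9, 0.8]    # "ergonomic office chair"

print(cosine_similarity(summer_wear, breathable_tee))  # high (~0.98)
print(cosine_similarity(summer_wear, office_chair))    # low  (~0.12)
```

The search layer ranks candidates by exactly this kind of score, so zero keyword overlap no longer means zero matches.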
Layer 2: Conversational Context Layer
Unlike traditional search that treats each query in isolation, our system maintains rich conversation history. It remembers that you were looking for corporate gifts and factors this into every subsequent search, asking intelligent follow-ups and refining results in real-time.
Layer 3: Continuous Learning Pipeline
The platform doesn't just serve results — it learns from every interaction. A sophisticated evaluation system analyzes which products users engage with and continuously improves the ranking algorithm.
System Architecture
User Query → Context Understanding → Semantic Search → LLM Reranking → Personalized Response

Conversation history feeds the context-understanding step, the product catalog backs the semantic search, and responses are persisted to S3.

The Technical Journey: Key Implementation Stories
Story 1: Teaching the Machine to Understand Products
The Challenge: Product descriptions were inconsistent across suppliers. One vendor might describe a “cotton t-shirt” while another called it a “cotton tee.” Humans understood they were the same; machines didn't.
Our Solution: We built an intelligent embedding pipeline that normalizes product data, cleans HTML, standardizes terminology, and creates rich semantic vectors using OpenAI's text-embedding-ada-002 model — capturing nuanced relationships across 1,536 dimensions.
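A minimal sketch of that normalization step, assuming a tiny synonym table (`SYNONYMS` and `normalize_description` are illustrative; the production terminology mapping is far larger):

```python
import html
import re

# Hypothetical synonym table; the real pipeline covers far more terms.
SYNONYMS = {"tee": "t-shirt", "tshirt": "t-shirt"}

def normalize_description(raw: str) -> str:
    """Strip HTML tags, unescape entities, lowercase, and unify vendor
    terminology before the text is sent to the embedding model."""
    text = html.unescape(re.sub(r"<[^>]+>", " ", raw))
    words = [SYNONYMS.get(w, w) for w in text.lower().split()]
    return " ".join(words)

print(normalize_description("<p>Premium Cotton <b>Tee</b></p>"))
# premium cotton t-shirt
```

After this pass, both vendors' descriptions embed to near-identical vectors, because the text fed to the model is the same.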
The Result: When a customer searches for “business casual attire,” the system now understands this includes “dress shirts,” “blazers,” and “formal polos” — even when those exact words aren't used.
Story 2: The Context Challenge
The Challenge: Imagine a conversation: “Show me coffee mugs” → [Shows mugs] → “Now show me ones under $10.” A traditional search would show all products under $10 — including unrelated items. The user wanted mugs under $10, but the system lost context.
Our Solution: We implemented a contextual query refinement system that maintains complete conversation history, uses LLMs to intelligently merge past and present queries, and generates refined search queries that capture full intent.
// Before: static search (context is lost)
search("under $10")                   // returns everything cheap, mugs or not

// After: context-aware search
context = "User was looking at coffee mugs"
query = "under $10"
refined = llm_refine(context, query)  // "coffee mugs under $10"
search(refined)

Story 3: Zero-Downtime Magic
The Challenge: Product catalogs change constantly — new items added, prices updated, descriptions improved. But traditional reindexing meant taking search offline or serving stale data.
Our Solution: We engineered a rolling index system that creates new search indices in the background, validates data integrity before switching, atomically swaps indices with zero downtime, and automatically rolls back if issues are detected.
- 12:00 Start building new index (background)
- 12:30 Validate all embeddings (quality check)
- 12:31 Atomically swap to new index (zero downtime)
- 12:32 Clean up old index (users never notice)
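The swap itself can be sketched in a few lines: queries always read a single live-index reference, the replacement is built and validated off to the side, and the reference is switched atomically. `RollingIndex` is a hypothetical in-memory stand-in for the production index store:

```python
import threading

class RollingIndex:
    """Sketch of a zero-downtime swap: queries read the live index via a
    single reference while a replacement is built in the background."""

    def __init__(self, initial):
        self._live = initial
        self._lock = threading.Lock()

    def search(self, term):
        index = self._live          # one reference read; never blocks on rebuilds
        return index.get(term, [])

    def rebuild(self, fresh_products):
        new_index = {}
        for product in fresh_products:              # build in the background
            for word in product["title"].lower().split():
                new_index.setdefault(word, []).append(product["id"])
        if not new_index:                           # validate before switching
            raise ValueError("refusing to swap in an empty index")
        with self._lock:                            # atomic swap; the old
            self._live = new_index                  # index is garbage-collected

idx = RollingIndex({"mug": ["sku-1"]})
idx.rebuild([{"id": "sku-2", "title": "Coffee Mug"}])
print(idx.search("mug"))  # ['sku-2']
```

In-flight queries keep reading the old index until the swap completes; no request ever sees a half-built one.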
Story 4: The Quality Assurance Engine
The Challenge: How do you know if your AI is getting better? Traditional metrics like click-through rates are lagging indicators.
Our Solution: We built a real-time evaluation framework that analyzes every search result against the original query, uses LLM judges to score relevance, tracks keyword coverage and semantic matching, and provides instant feedback on system performance.
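One of the cheaper signals in such a framework, keyword coverage, is easy to sketch (`keyword_coverage` is an illustrative helper, not the production metric; the LLM-judge relevance score sits alongside signals like this):

```python
def keyword_coverage(query: str, result_title: str) -> float:
    """Fraction of query terms that appear in a result's title."""
    query_terms = set(query.lower().split())
    if not query_terms:
        return 0.0
    title_terms = set(result_title.lower().split())
    return len(query_terms & title_terms) / len(query_terms)

print(keyword_coverage("red polo shirt", "Classic Red Polo Shirt"))  # 1.0
print(keyword_coverage("red polo shirt", "Blue Hoodie"))             # 0.0
```

Tracking a score like this per query makes regressions visible the moment a ranking change ships.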
The Result: We can now quantify that the system understands 94% of product attributes correctly, with precise tracking of what's improving and what needs work.
The Results: Transformative Business Impact
Within 3 months of launch, the platform delivered measurable improvements across every key metric.
For Customers
- 65% faster product discovery: average search time dropped from 7 minutes to 2.5 minutes
- 94% search satisfaction: customers found what they wanted on the first try
- 40% increase in engagement: users interacted longer and went deeper
For OptaHub's Business
- 28% increase in conversion rate: more searches became purchases
- 45% reduction in sales search time: sales focused on selling, not searching
- 15% catalog visibility increase: previously invisible products now discovered
- +32 NPS score improvement: customers loved the intelligent experience
Technical Excellence
- 99.99% uptime, even during massive catalog updates
- <300ms response time: an instantaneous user experience
- 10,000+ products indexed daily: always fresh, always accurate
Key Learnings & Best Practices
Start with Semantics, Not Keywords
Traditional search assumes users know exact terminology. Our approach taught us that understanding intent matters more than matching words. The system now grasps that "something for a summer picnic" implies casual, portable, and seasonally appropriate — even when those aren't explicitly stated.
Context is Everything
Single-query search is dead. Users expect systems to remember, learn, and adapt throughout a conversation. Our context-refinement pipeline proved that 78% of queries benefit from historical context.
Quality Requires Continuous Validation
AI isn't "set and forget." Our evaluation framework catches regressions instantly and provides actionable insights for improvement. We now run 10,000+ automated tests daily to ensure quality.
Zero-Downtime is Non-Negotiable
Modern businesses can't afford maintenance windows. Our rolling index architecture proved that complex updates can happen transparently, with users never noticing the transition.
Hybrid Approaches Win
Pure AI isn't always the answer. Our system combines vector search for semantic understanding, traditional filters for exact matching, LLMs for reasoning and refinement, and business rules for specific constraints. This hybrid approach outperformed any single technique.
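A hedged sketch of how those signals might combine: hard filters veto first, then vector similarity and business rules blend into a single score. The weights and field names below are illustrative, not OptaHub's production values:

```python
def hybrid_score(product: dict, semantic_score: float, filters: dict) -> float:
    """Blend ranking signals: exact-match filters act as hard constraints,
    then semantic similarity and business-rule boosts combine."""
    # Traditional filter: a price cap is a veto, not a soft preference.
    if "max_price" in filters and product["price"] > filters["max_price"]:
        return 0.0
    score = 0.7 * semantic_score          # vector-search similarity
    if product.get("in_stock"):           # business rule: favor available stock
        score += 0.2
    if product.get("featured"):           # business rule: promoted items
        score += 0.1
    return score

mug = {"price": 8.0, "in_stock": True, "featured": False}
print(hybrid_score(mug, 0.9, {"max_price": 10}))  # ~0.83
```

LLM refinement runs upstream of scoring like this, rewriting the query before candidates are fetched at all.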
Technical Differentiators
| Feature | Traditional Approach | Our Solution |
|---|---|---|
| Query Understanding | Keyword matching | Semantic vectors + LLM refinement |
| Context | Single query only | Full conversation history |
| Updates | Scheduled downtime | Rolling zero-downtime |
| Quality | Manual testing | Automated LLM evaluation |
| Speed | Seconds | Sub-300ms |
| Accuracy | Variable | 94% measured relevance |
The Future: What's Next
The platform continues to evolve. Current development focuses on:
Phase 4: Predictive Intelligence
Anticipating customer needs before they search, suggesting relevant products based on behavior patterns and seasonal trends.
Phase 5: Multi-modal Search
Enabling image-based product discovery — "find products that look like this" — using computer vision and unified embeddings.
Phase 6: Personalization at Scale
Learning individual customer preferences to tailor results, with privacy-preserving personalization that improves with every interaction.
“This isn't just a search engine — it's like having our most knowledgeable salesperson available 24/7, remembering every conversation, and getting smarter with every interaction.”
Engagement Details
- Industry: Promotional Products / E-commerce
- Services: AI Solutions, Custom Software
- Core Capabilities: Semantic search, LLM reranking, conversational AI
Applicability
- ✓ E-commerce product discovery
- ✓ Content platform search
- ✓ Support portal intelligence
- ✓ Internal knowledge bases
What could intelligent search do for your business?
Let's discuss how we can help.
Book a Strategy Call