Customer Service – Advanced

This advanced LLM Controls template provides an AI-powered customer support system that uses your uploaded company documents to deliver personalized, context-aware responses to user queries. It integrates Prompt Optimization, Vector Search, and the LLMC Executor to create a fully automated support flow.

Prerequisites

  • An active OpenAI API key
  • Access to the LLM Controls dashboard
  • At least one support document (PDF, TXT, or website URL) for data ingestion

Create the Customer Service Flow

From the LLM Controls dashboard, click New Flow → Select Customer Service – Advanced.
The flow includes four main sections:

1. Prompt Optimizer Flow

Automatically generate, evaluate, and rank optimized prompts to ensure your model responds effectively to customer questions.

Flow Components

  1. User Task – Define the type of prompt you want to optimize.
  2. Enhanced Prompt Optimizer – Generates variations, criteria, and scoring to identify the best prompt.
  3. Results – Displays ranked performance and top prompt outputs.

Run It

  • Start from the User Task node and enter your optimization goal.
  • Click Run on the Results node to execute the optimizer.
  • Review the leaderboard to see the top-performing prompt.
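
For intuition, the optimizer's generate-score-rank loop can be approximated outside the dashboard in a few lines of Python. This is a minimal sketch, not the template's actual internals: the variation count, the judging criteria, and the `call_model` helper are all illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_model(prompt: str) -> str:
    """One chat completion; a stand-in for the LLMC Executor."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def generate_variations(task: str, n: int = 5) -> list[str]:
    """Generate n candidate rewrites of the base prompt."""
    instruction = (
        "Rewrite this customer-support prompt to be clearer and more "
        f"specific. Return only the rewritten prompt.\n\n{task}"
    )
    return [call_model(instruction) for _ in range(n)]

def score_prompt(prompt: str, test_question: str) -> float:
    """Judge a candidate by rating its answer to a sample question, 1-10."""
    answer = call_model(f"{prompt}\n\nCustomer question: {test_question}")
    rating = call_model(
        "Rate this support answer from 1 to 10 for accuracy and tone. "
        f"Reply with a number only.\n\n{answer}"
    )
    return float(rating.strip())

def leaderboard(task: str, test_question: str) -> list[tuple[float, str]]:
    """Rank all candidates by score, best first."""
    candidates = generate_variations(task)
    return sorted(((score_prompt(p, test_question), p) for p in candidates), reverse=True)
```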

2. Load Data Flow

Prepare your knowledge base for AI retrieval. This step embeds your support documents into a Vector DB.

Steps

  1. Upload Files or URLs – Add your customer support guides, FAQs, or product documentation.
  2. Split Text – Breaks content into smaller chunks.
  3. Embed Text – Uses OpenAI Embeddings to create vector representations.
  4. Store Data – Saves all embeddings to the LLMC Vector DB.
    Make sure your OpenAI API key is configured in the embedding component.
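
Conceptually, steps 2–4 amount to chunking, embedding, and storing. The sketch below approximates them with the OpenAI Python SDK; the chunk size, the overlap, and the in-memory list standing in for the LLMC Vector DB are illustrative assumptions, not the component's real configuration.

```python
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive character-based chunking with a small overlap between chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start : start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """One embedding vector per chunk via the OpenAI Embeddings API."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=chunks)
    return [item.embedding for item in resp.data]

def build_store(text: str) -> list[tuple[str, list[float]]]:
    """Stand-in for the LLMC Vector DB: (chunk, vector) pairs in memory."""
    chunks = split_text(text)
    return list(zip(chunks, embed_chunks(chunks)))
```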

3. Retriever Flow

Handles real customer queries using RAG (Retrieval-Augmented Generation).

How It Works

  1. The Customer Query Input collects a user question.
  2. The LLMC Vector DB retrieves relevant support content.
  3. The Parser formats the retrieved content.
  4. The Prompt injects that context into the model request.
  5. The LLMC Executor generates the final AI-powered response.
  6. The Support Chat Output displays the answer.

Example

Input: “Why was my refund delayed?”
Output: “Your refund was delayed due to a pending bank verification step, as stated in our policy.”
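
A minimal RAG sketch of this pipeline, reusing the in-memory store from the Load Data sketch above: the cosine-similarity retriever, the `k` value, and the system prompt are assumptions for illustration, not the actual behavior of the Parser and Prompt components.

```python
import math
from openai import OpenAI

client = OpenAI()

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(store: list[tuple[str, list[float]]], query: str, k: int = 3) -> list[str]:
    """Embed the query and return the k most similar stored chunks."""
    q = client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    ranked = sorted(store, key=lambda pair: cosine(pair[1], q), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(store: list[tuple[str, list[float]]], query: str) -> str:
    """Inject the retrieved context into the prompt and generate a reply."""
    context = "\n---\n".join(retrieve(store, query))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided support context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content
```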

4. LLMC Executor

Executes optimized prompts with your selected model (such as GPT-4o).

Usage Steps

  1. Select your Prompt and Model.
  2. Ensure your API Key is active.
  3. Enter the user query and click Play.
  4. View the generated response in the Output section.
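
Outside the dashboard, the equivalent operation is a single chat completion with your selected prompt and model. A minimal sketch follows; the prompt text and query here are hypothetical examples.

```python
from openai import OpenAI

client = OpenAI()  # step 2: reads OPENAI_API_KEY from the environment

# Step 1: the selected prompt and model (both hypothetical examples).
optimized_prompt = "You are a concise, friendly support agent for Acme Inc."
model = "gpt-4o"

# Step 3: send the user query.
resp = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": optimized_prompt},
        {"role": "user", "content": "Why was my refund delayed?"},
    ],
)

# Step 4: the generated response.
print(resp.choices[0].message.content)
```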

Run the Full Customer Support Flow

  1. Load and embed your company data via the Load Data Flow.
  2. Optimize prompts with the Prompt Optimizer Flow.
  3. Connect the Retriever Flow for live Q&A.
  4. Test everything in the Playground to verify responses.
    You can further fine-tune prompts or connect different model endpoints for performance testing.
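
Chaining the earlier sketches gives a rough end-to-end smoke test of the same sequence; it assumes `build_store`, `leaderboard`, and `answer` from the previous sketches are defined in the same module, and the file name and task text are hypothetical.

```python
# Rough end-to-end smoke test chaining the hypothetical helpers above.
docs = open("support_guide.txt", encoding="utf-8").read()  # hypothetical file
store = build_store(docs)                                  # 1. Load Data Flow

ranked = leaderboard(                                      # 2. Prompt Optimizer Flow
    "Answer refund questions politely and accurately.",
    "Why was my refund delayed?",
)
print("Best prompt:", ranked[0][1])

print(answer(store, "Why was my refund delayed?"))         # 3. Retriever Flow
```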

Modify and Extend

  • Add custom prompt templates for tone or brand voice.
  • Integrate additional embedding sources (e.g., Notion, Confluence).
  • Chain multiple LLMC Executors for tiered reasoning.