Summarization and Aggregation

The Summarization and Aggregation template enhances the basic summarization workflow by integrating Prompt Optimization and LLMC Executor capabilities.
This template enables you to upload documents, generate summaries, automatically optimize prompts, and evaluate the best-performing versions, all within one seamless LLM Controls workflow.

Prerequisites

  • An OpenAI API Key
  • A structured document (PDF) for upload
  • Basic familiarity with LLM Controls Flows

Create the Advanced Summarization Flow

1. Upload a File

From the File component:
  • Upload your document (PDF format supported)
  • This component outputs both raw and parsed data
  • Output is automatically passed to the Parser component

2. Parse the Document

The Parser component converts uploaded content into plain text.
  • Mode: Parser
  • Template: Define how content is extracted (e.g., Text: text)
    This ensures your data is clean and ready for prompting.
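The Parser component applies this template internally. As a rough illustration of what a template like `Text: text` produces, here is a minimal sketch, assuming the upload step yields a dictionary of parsed fields (the field names are hypothetical, not the component's actual internals):

```python
def apply_parser_template(template: str, fields: dict) -> str:
    """Render a parser template by substituting parsed document fields.

    A template such as "Text: {text}" maps the extracted text into a
    clean, prompt-ready string. Field names here are illustrative only.
    """
    return template.format(**fields)

# Example: a parsed PDF page reduced to plain text.
fields = {"text": "Q3 revenue grew 12% year over year."}
print(apply_parser_template("Text: {text}", fields))
# Text: Q3 revenue grew 12% year over year.
```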

3. Optimize the Summarization Prompt

Use the Prompt Optimizer Flow to automatically refine your prompt.
  • Input a summarization task, such as:
    “Summarize uploaded documents into concise, professional summaries.”
  • The optimizer generates multiple prompt variations, test cases, and evaluation metrics.
  • Outputs include ranked prompts based on clarity, effectiveness, and consistency.
Click the Prompt Optimizer Flow node in the workspace to open and run this optimization process.
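The optimizer's generate-score-rank loop can be sketched in miniature. This is only a toy model of the idea, not the Prompt Optimizer Flow's implementation: the real flow builds variations, test cases, and evaluation metrics with an LLM, whereas the wording and scoring function below are placeholders.

```python
def optimize_prompt(base_task: str, score) -> list[tuple[float, str]]:
    """Generate prompt variations for a task, score each, and rank them.

    The Prompt Optimizer Flow derives variations and metrics with an
    LLM; the hand-written variations and the pluggable `score` callable
    below are stand-ins for that process.
    """
    variations = [
        base_task,
        f"{base_task} Keep it under 150 words.",
        f"You are a professional editor. {base_task}",
        f"{base_task} Use a neutral, formal tone.",
    ]
    # Highest-scoring prompt first, mimicking the optimizer's leaderboard.
    return sorted(((score(p), p) for p in variations), reverse=True)

# Placeholder metric: prefer shorter prompts with an explicit tone cue.
toy_score = lambda p: 1.0 / len(p) + 0.5 * ("tone" in p)
ranked = optimize_prompt(
    "Summarize uploaded documents into concise, professional summaries.",
    toy_score,
)
best_score, best_prompt = ranked[0]
```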

4. Execute with LLMC Executor

The LLMC Executor allows you to run the best-performing prompts with a chosen AI model.
  • Model Selection: Choose from GPT models (e.g., gpt-4o)
  • Prompt Selection: Pick from optimized prompts generated earlier
  • API Key: Enter your valid OpenAI key in the custom_api_key field
Then click Run to execute and generate the summarized output.
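Under the hood, running the executor amounts to sending a chat-completions request to OpenAI with the selected model, prompt, and key. A minimal sketch of the request it might assemble follows; the exact internal format is an assumption (the field names mirror the OpenAI Chat Completions API), and no network call is made here:

```python
def build_executor_request(model: str, prompt: str, document_text: str,
                           custom_api_key: str) -> dict:
    """Assemble a chat-completions-style request for the chosen model.

    This approximates the shape the LLMC Executor sends; nothing is
    transmitted by this sketch.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": prompt},      # optimized prompt
            {"role": "user", "content": document_text},  # parsed document
        ],
        "headers": {"Authorization": f"Bearer {custom_api_key}"},
    }

request = build_executor_request(
    model="gpt-4o",
    prompt="Summarize uploaded documents into concise, professional summaries.",
    document_text="...parsed PDF text...",
    custom_api_key="sk-...",  # the value entered in the custom_api_key field
)
```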

5. View the Summary Output

The Summary Output component displays the final, aggregated summary in a clean text format.
You can review, copy, or export it directly from the Playground.

Modify or Extend

  • To change the summarization style, open the Prompt template and adjust the tone or persona (e.g., “Write an executive summary” or “Summarize in bullet points”).
  • You can chain additional evaluation nodes for advanced comparison of results.
  • Use the Results leaderboard in the optimizer to monitor which prompt performs best.
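Changing the style amounts to editing the instruction text before it reaches the executor. A small sketch of that idea, using hypothetical style presets rather than anything built into the template:

```python
# Hypothetical style presets; editing the Prompt template works the same way.
STYLES = {
    "executive": "Write an executive summary of the document below.",
    "bullets": "Summarize the document below in bullet points.",
    "default": "Summarize the document below into a concise, professional summary.",
}

def styled_prompt(style: str) -> str:
    """Return the summarization instruction for a style preset,
    falling back to the default when the preset is unknown."""
    return STYLES.get(style, STYLES["default"])
```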

Configuration Checklist

  • OpenAI API Key: Required to connect to GPT models
  • File Upload: Input document (PDF)
  • Parser Template: Controls how text is extracted
  • Prompt Optimizer Flow: Generates, evaluates, and ranks prompt variations
  • LLMC Executor: Runs prompts with the chosen model
  • Summary Output: Displays the final summarized result

Use Cases

  • Generate executive summaries from lengthy reports
  • Compare and benchmark multiple summarization prompts
  • Automate evaluation of summarization quality
  • Build adaptive summarization pipelines for research or business data

Built With

  • LLM Controls Framework
  • Prompt Optimizer Flow
  • LLMC Executor
  • OpenAI GPT Models

Streamline your summarization process with intelligent optimization and automated evaluation.