Tools are typically connected to agent components at the Tools port. Agents use LLMs as a reasoning engine to decide which of the connected tool components to use to solve a problem.

Tools in agentic workflows are essentially functions that the agent can call to perform tasks or access external resources. A function is wrapped as a Tool object with a common interface that the agent understands. Agents become aware of tools through tool registration, where the agent is provided a list of available tools, typically at agent initialization. The Tool object's description tells the agent what the tool can do.

The agent then uses a connected LLM to reason through the problem and decide which tool is best for the job.
The simple agent starter project uses URL and Calculator tools connected to an agent component to answer a user's questions. The OpenAI LLM acts as a brain for the agent to decide which tool to use.

To make a component into a tool that an agent can use, enable Tool Mode in the component. Enabling Tool Mode modifies a component's inputs to accept calls from an agent. If the component you want to connect to an agent doesn't have a Tool Mode option, you can modify the component's inputs to become a tool.
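The registration and selection flow described above can be sketched in plain Python. The `Tool` wrapper, the two example functions, and the registry are all hypothetical stand-ins; real frameworks such as LangChain provide richer schemas, but the shape is the same:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical minimal Tool wrapper: a name, a description the LLM reads,
# and the function the agent calls.
@dataclass
class Tool:
    name: str
    description: str
    func: Callable[..., str]

def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression (demo only -- not sandboxed)."""
    return str(eval(expression, {"__builtins__": {}}))

def fetch_url(url: str) -> str:
    """Placeholder for a real HTTP fetch."""
    return f"<contents of {url}>"

# Tool registration: the agent receives this list at initialization and
# exposes each tool's name and description to the LLM.
tools = [
    Tool("calculator", "Evaluates arithmetic expressions.", calculator),
    Tool("url_fetcher", "Downloads the text of a web page.", fetch_url),
]
registry = {t.name: t for t in tools}

# In a real agent the LLM picks the tool name; here we call one directly.
print(registry["calculator"].func("2 + 3 * 4"))  # 14
```

The descriptions are what the LLM reasons over, which is why the docs below stress clear Tool Name and Tool Description values.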
This component allows agents to query data from Astra DB collections.

To use this tool in a flow, connect it to an Agent component. The Tool Name and Tool Description fields are required for the Agent to decide when to use the tool. Tool Name cannot contain spaces.

The values for Collection Name, Astra DB Application Token, and Astra DB API Endpoint are found in your Astra DB deployment. For more information, see the DataStax documentation.

In this example, an OpenAI embeddings component is connected to use the Astra DB tool component's Semantic Search capability. To use Semantic Search, you must have an embedding model or Astra DB Vectorize enabled. If you try to run the flow without an embedding model, you get an error.

Open the Playground and ask a question about your data. The Agent uses the Astra DB Tool to return information about your collection.
The Tool Parameters configuration pane allows you to define parameters for filter conditions for the component's Find command. These filters become available as parameters that the LLM can use when calling the tool; the Description field gives the LLM a better understanding of each parameter.
To define a parameter for your query, in the Tool Parameters pane, click Add a new row.
Complete the fields based on your data. For example, with this filter, the LLM can filter by unique customer_id values.
- Name: customer_id
- Attribute Name: Leave empty if the attribute matches the field name in the database.
- Description: "The unique identifier of the customer to filter by."
- Is Metadata: False, unless the value is stored in the metadata field.
- Is Mandatory: True, to require this filter.
- Is Timestamp: False, because the value is an ID, not a timestamp.
- Operator: $eq, to look for an exact match.
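A sketch of how a parameter like the one above could translate into a Data API filter for the Find command. The `build_filter` helper is illustrative, not the component's actual internals; it also shows the `metadata.` prefixing described for the Is Metadata option:

```python
# Illustrative helper: turn one tool parameter plus the LLM-supplied value
# into a Data API filter document.
def build_filter(name, value, operator="$eq", is_metadata=False,
                 attribute_name=None):
    attr = attribute_name or name      # Attribute Name overrides Name
    if is_metadata:
        attr = f"metadata.{attr}"      # Is Metadata nests under metadata
    return {attr: {operator: value}}

print(build_filter("customer_id", "C-1042"))
# {'customer_id': {'$eq': 'C-1042'}}
print(build_filter("city", "Paris", is_metadata=True))
# {'metadata.city': {'$eq': 'Paris'}}
```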
If you want to apply filters regardless of the LLM’s input, use the Static Filters option, which is available in the component’s Controls pane.
| Parameter | Description |
|---|---|
| Name | The name of the parameter that is exposed to the LLM. It can be the same as the underlying field name or a more descriptive label. The LLM uses this name, along with the description, to infer what value to provide during execution. |
| Attribute Name | When the parameter name shown to the LLM differs from the actual field or property in the database, use this setting to map the user-facing name to the correct attribute. For example, to apply a range filter to the timestamp field, define two separate parameters, such as start_date and end_date, that both reference the same timestamp attribute. |
| Description | Provides instructions to the LLM on how the parameter should be used. Clear and specific guidance helps the LLM provide valid input. For example, if a field such as specialty is stored in lowercase, the description should indicate that the input must be lowercase. |
| Is Metadata | When loading data using LangChain or LLM Controls, additional attributes may be stored under a metadata object. If the target attribute is stored this way, enable this option. It adjusts the query by generating a filter in the format {"metadata.<attribute_name>": "<value>"}. |
| Is Timestamp | For date or time-based filters, enable this option to automatically convert values to the timestamp format that the Astrapy client expects. This ensures compatibility with the underlying API without requiring manual formatting. |
| Operator | Defines the filtering logic applied to the attribute. You can use any valid Data API filter operator. For example, to filter a time range on the timestamp attribute, use two parameters: one with the $gt operator ("greater than") and another with the $lt operator ("less than"). |
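The start_date/end_date pattern from the table can be sketched as two LLM-facing parameters that both map to the same underlying attribute and combine into one range filter. The parameter dictionaries and values here are illustrative:

```python
# Two tool parameters exposed to the LLM; both reference the same
# underlying `timestamp` attribute with different operators.
params = [
    {"name": "start_date", "attribute": "timestamp", "operator": "$gt"},
    {"name": "end_date",   "attribute": "timestamp", "operator": "$lt"},
]

# Values the LLM might supply at call time (epoch seconds, illustrative).
llm_values = {"start_date": 1700000000, "end_date": 1700600000}

# Merge both clauses into a single filter on the shared attribute.
query_filter = {}
for p in params:
    clause = query_filter.setdefault(p["attribute"], {})
    clause[p["operator"]] = llm_values[p["name"]]

print(query_filter)
# {'timestamp': {'$gt': 1700000000, '$lt': 1700600000}}
```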
Parameters

Inputs

| Name | Type | Description |
|---|---|---|
| Tool Name | String | The name used to reference the tool in the agent's prompt. |
| Tool Description | String | A brief description of the tool. This helps the model decide when to use it. |
| Collection Name | String | The name of the Astra DB collection to query. |
| Token | SecretString | The authentication token for accessing Astra DB. |
| API Endpoint | String | The Astra DB API endpoint. |
| Projection Fields | String | The attributes to return, separated by commas. The default is *. |
| Tool Parameters | Dict | Parameters the model needs to fill to execute the tool. Mark required parameters with a leading exclamation mark, for example !customer_id. |
| Static Filters | Dict | Attribute-value pairs used to filter query results. |
| Limit | String | The number of documents to return. |
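The exclamation-mark convention for required tool parameters can be sketched as a small parser. The function name and output shape are hypothetical, chosen only to illustrate the convention:

```python
# Illustrative parser for the "!name means required" convention.
def parse_tool_params(raw_names):
    params = []
    for raw in raw_names:
        required = raw.startswith("!")
        params.append({"name": raw.lstrip("!"), "required": required})
    return params

print(parse_tool_params(["!customer_id", "city"]))
# [{'name': 'customer_id', 'required': True},
#  {'name': 'city', 'required': False}]
```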
Outputs

The Data output is used when directly querying Astra DB, while the Tool output is used when integrating with agents.

| Name | Type | Description |
|---|---|---|
| Data | List[Data] | A list of Data objects containing the query results from Astra DB. Each Data object contains the document fields specified by the projection attributes. Limited by the number_of_results parameter. |
| Tool | StructuredTool | A LangChain StructuredTool object that can be used in agent workflows. It contains the tool name, description, argument schema based on the tool parameters, and the query function. |
The Astra DB CQL Tool allows agents to query data from CQL tables in Astra DB.
Parameters

Inputs

| Name | Type | Description |
|---|---|---|
| Tool Name | String | The name used to reference the tool in the agent's prompt. |
| Tool Description | String | A brief description of the tool to guide the model in using it. |
| Keyspace | String | The name of the keyspace. |
| Table Name | String | The name of the Astra DB CQL table to query. |
| Token | SecretString | The authentication token for Astra DB. |
| API Endpoint | String | The Astra DB API endpoint. |
| Projection Fields | String | The attributes to return, separated by commas. The default is *. |
| Partition Keys | Dict | Required parameters that the model must fill to query the tool. |
| Clustering Keys | Dict | Optional parameters the model can fill to refine the query. Mark required parameters with a leading exclamation mark, for example !customer_id. |
| Static Filters | Dict | Attribute-value pairs used to filter query results. |
| Limit | String | The number of records to return. |
Outputs

| Name | Type | Description |
|---|---|---|
| Data | List[Data] | A list of Data objects containing the query results from the Astra DB CQL table. Each Data object contains the document fields specified by the projection fields. Limited by the number_of_results parameter. |
| Tool | StructuredTool | A LangChain StructuredTool object that can be used in agent workflows. It contains the tool name, description, argument schema based on partition and clustering keys, and the query function. |
This component runs Icosa's Combinatorial Reasoning (CR) pipeline on an input to create an optimized prompt with embedded reasons. For more information, see Icosa Computing.
Parameters

Inputs

| Name | Type | Description |
|---|---|---|
| prompt | String | The input to run CR on. |
| openai_api_key | SecretString | An OpenAI API key for authentication. |
| username | String | A username for Icosa API authentication. |
| password | SecretString | A password for Icosa API authentication. |
| model_name | String | The OpenAI LLM to use for reason generation. |
Outputs

| Name | Type | Description |
|---|---|---|
| optimized_prompt | Message | A Message object containing the optimized prompt. |
| reasons | List[String] | A list of the selected reasons that are embedded in the optimized prompt. |
The MCP connection component exposes Model Context Protocol (MCP) servers, including your other flows, as tools for LLM Controls agents.
This component provides tools for searching through spreadsheet data using SQL queries, natural language, filters, and aggregations. It also supports chart generation (line, scatter, bar, histogram, pie, box plots).
Parameters

Inputs

| Name | Display Name | Info |
|---|---|---|
| dataframe | Excel DataFrame | The DataFrame loaded from an Excel or CSV file. |
| data_list | Excel Data | A list of Data objects from an Excel file. |
| tool_name | Tool Name | The name of the tool shown to the agent. |
| tool_description | Tool Description | A description of what this tool does. |
| default_limit | Default Result Limit | The default maximum number of results to return. |
| enable_sql_queries | Enable SQL Queries | Enables SQL query support for advanced data analysis. |
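One way SQL queries over spreadsheet data can work is to load the rows into an in-memory SQLite table and query that. This is a sketch under that assumption, using only the standard library; the component itself may implement SQL support differently:

```python
import sqlite3

# Illustrative spreadsheet rows: (symbol, price).
rows = [("AAPL", 150.0), ("GOOG", 2800.0), ("MSFT", 300.0)]

# Load the rows into an in-memory SQLite table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sheet (symbol TEXT, price REAL)")
con.executemany("INSERT INTO sheet VALUES (?, ?)", rows)

# A request like "stocks priced over 200" becomes a SQL query.
result = con.execute(
    "SELECT symbol FROM sheet WHERE price > 200 ORDER BY price"
).fetchall()
print(result)  # [('MSFT',), ('GOOG',)]
```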
This component is an agent designed to utilize various tools seamlessly within workflows. It uses a language model to process user input and decide which tools to call.
This component performs basic arithmetic operations on a given expression. It safely evaluates mathematical expressions using Python’s AST parser, supporting addition, subtraction, multiplication, division, and exponentiation.
Parameters

Inputs

| Name | Display Name | Info |
|---|---|---|
| expression | Expression | The arithmetic expression to evaluate, for example `4*4*(33/22)+12-20`. |
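The AST-based safe evaluation described above can be sketched as follows. Only arithmetic node types are whitelisted, so names, calls, and attribute access are rejected; the function name and exact operator set are illustrative:

```python
import ast
import operator

# Whitelisted AST node types mapped to the operations they perform.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression via the AST, without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        # Anything else (names, calls, attributes) is rejected.
        raise ValueError(f"Unsupported expression: {ast.dump(node)}")
    return walk(ast.parse(expression, mode="eval"))

print(safe_eval("4*4*(33/22)+12-20"))  # 16.0
```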
This component is a Python code executor that lets you run Python code with specific imported modules. Remember to always use print() to see your results.
Parameters

Inputs

| Name | Display Name | Info |
|---|---|---|
| global_imports | Global Imports | A comma-separated list of modules to import globally, for example `math,numpy,pandas`. |
| python_code | Python Code | The Python code to execute. Only modules specified in Global Imports can be used. |
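A sketch of the execution model described above: only the modules named in Global Imports are placed in the execution namespace, and output is captured from print() calls. The function name and capture mechanism are assumptions, not the component's actual code:

```python
import contextlib
import importlib
import io

def run_python(code: str, global_imports: str) -> str:
    """Execute code with only the listed modules pre-imported."""
    env = {}
    for name in global_imports.split(","):
        name = name.strip()
        if name:
            env[name] = importlib.import_module(name)
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        exec(code, env)  # results are visible only if the code print()s them
    return out.getvalue()

print(run_python("print(math.sqrt(81))", "math"), end="")  # 9.0
```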
This component calls the searchapi.io API with result limiting. It returns organic search results with title, link, and snippet fields. Supports Google, Bing, and DuckDuckGo engines.
This component is a search engine optimized for LLMs and RAG, aimed at efficient, quick, and persistent search results. It supports configurable search depth, topic filtering, time ranges, and can include images and short answers.
This component enables queries to Wolfram Alpha for computational data, facts, and calculations across various topics, delivering structured responses.
Parameters

Inputs

| Name | Display Name | Info |
|---|---|---|
| input_value | Input Query | An example query: "What is the population of France?" |
This component uses yfinance (unofficial package) to access financial data and market information from Yahoo Finance. It supports multiple data retrieval methods including stock info, news, balance sheets, earnings, and more.
Parameters

Inputs

| Name | Display Name | Info |
|---|---|---|
| symbol | Stock Symbol | The stock symbol to retrieve data for, for example AAPL or GOOG. |
| method | Data Method | The type of data to retrieve. Options: get_info, get_news, get_actions, get_analysis, get_balance_sheet, get_calendar, get_cashflow, get_institutional_holders, get_recommendations, get_sustainability, get_major_holders, get_mutualfund_holders, get_insider_purchases, get_insider_transactions, get_insider_roster_holders, get_dividends, get_capital_gains, get_splits, get_shares, get_fast_info, get_sec_filings, get_recommendations_summary, get_upgrades_downgrades, get_earnings, get_income_stmt. |
| num_news | Number of News | The number of news articles to retrieve (only applicable for get_news). |
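The method input above selects which accessor to call on the ticker, which suggests a simple dispatch pattern. The sketch below uses a stand-in ticker class so it runs without the yfinance package or network access; `FakeTicker` and `fetch` are hypothetical names:

```python
# Stand-in for a yfinance Ticker, so the dispatch pattern can be shown
# without network access or the yfinance package installed.
class FakeTicker:
    def get_info(self):
        return {"symbol": "AAPL", "sector": "Technology"}

    def get_news(self, count=5):
        return [{"title": f"Story {i}"} for i in range(count)]

def fetch(ticker, method, **kwargs):
    """Dispatch the `method` input to the matching get_* accessor."""
    if not method.startswith("get_") or not hasattr(ticker, method):
        raise ValueError(f"Unsupported method: {method}")
    return getattr(ticker, method)(**kwargs)

t = FakeTicker()
print(fetch(t, "get_info")["sector"])      # Technology
print(len(fetch(t, "get_news", count=3)))  # 3
```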