Embedding models convert text into numerical vectors. These embeddings capture the semantic meaning of the input text and allow LLMs to understand context. Refer to your specific component's documentation for more information on parameters.
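As a quick illustration of what "numerical vectors" means here, the sketch below compares toy embedding vectors with cosine similarity, the measure most vector stores use. The vectors and values are made up for the example; real embeddings have hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings": two related phrases and one unrelated one.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
invoice = [0.0, 0.1, 0.95]

# Semantically close texts get vectors that point in similar directions.
assert cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice)
```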
This component loads embedding models from Amazon Bedrock.
Parameters

Inputs

| Name | Type | Description |
|------|------|-------------|
| credentials_profile_name | String | The name of the AWS credentials profile in `~/.aws/credentials` or `~/.aws/config`, which has access keys or role information. |
| model_id | String | The ID of the model to call, such as `amazon.titan-embed-text-v1`. This is equivalent to the `modelId` property in the `list-foundation-models` API. |
| endpoint_url | String | The URL of a specific service endpoint to use instead of the default AWS endpoint. |
| region_name | String | The AWS region to use, such as `us-west-2`. Falls back to the `AWS_DEFAULT_REGION` environment variable or the region specified in `~/.aws/config` if not provided. |
Outputs

| Name | Type | Description |
|------|------|-------------|
| embeddings | Embeddings | An instance for generating embeddings using Amazon Bedrock. |
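Under the hood, an embeddings call to a Titan model is an `invoke_model` request against the `bedrock-runtime` service. The sketch below builds the request body for `amazon.titan-embed-text-v1`, which takes a single `inputText` field; the boto3 call in the trailing comment is shown for context only and assumes valid AWS credentials.

```python
import json

def titan_embed_body(text):
    # amazon.titan-embed-text-v1 expects a JSON body with one "inputText" field.
    return json.dumps({"inputText": text})

body = titan_embed_body("Hello, embeddings!")
# With credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-west-2")
#   resp = client.invoke_model(modelId="amazon.titan-embed-text-v1", body=body)
#   vector = json.loads(resp["body"].read())["embedding"]
```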
Connect this component to the Embeddings port of the Astra DB vector store component to generate embeddings. This component requires that your Astra DB database has a collection that uses a vectorized embedding provider integration. For more information and instructions, see Embedding Generation.
This component connects to Google’s generative AI embedding service using the GoogleGenerativeAIEmbeddings class from the langchain-google-genai package.
Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| api_key | API Key | The secret API key for accessing Google's generative AI service. Required. |
| model_name | Model Name | The name of the embedding model to use. Default: `models/text-embedding-004`. |
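The `langchain-google-genai` package wraps a REST call to the model's `embedContent` method on the Generative Language API. The sketch below builds the URL and body for that call; the API key is shown as a query parameter here, and the exact transport details are assumptions to verify against the API docs, with the package handling them for you in practice.

```python
import json

def embed_content_request(api_key, text, model="models/text-embedding-004"):
    # embedContent is invoked per model; the key is a query parameter.
    url = f"https://generativelanguage.googleapis.com/v1beta/{model}:embedContent?key={api_key}"
    body = json.dumps({"content": {"parts": [{"text": text}]}})
    return url, body

url, body = embed_content_request("YOUR_API_KEY", "Hello, embeddings!")
```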
This component loads embedding models from Hugging Face. Use this component to generate embeddings using locally downloaded Hugging Face models. Ensure you have sufficient computational resources to run the models.
This component generates embeddings using Hugging Face Inference API models and requires a Hugging Face API token to authenticate. Local inference models do not require an API key. Use this component to create embeddings with Hugging Face's hosted models, or to connect to your own locally hosted models.
Parameters

Inputs

| Name | Display Name | Info |
|------|--------------|------|
| API Key | API Key | The API key for accessing the Hugging Face Inference API. |
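For hosted models, the Inference API authenticates with a Bearer token in the `Authorization` header; local servers skip it, which is why the API Key field is optional. A minimal sketch of how such a request could be assembled (the model ID and hosted-endpoint URL pattern are illustrative assumptions, not values the component requires):

```python
def hf_inference_request(api_key, model_id, texts):
    # Hosted Inference API: Bearer token required. Local servers: no token.
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    payload = {"inputs": texts}
    return url, headers, payload

url, headers, payload = hf_inference_request(
    "hf_xxx", "sentence-transformers/all-MiniLM-L6-v2", ["some text"]
)
```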
Connect the Hugging Face component to a local embeddings model
To run an embedding inference locally, see the Hugging Face documentation. To connect the local Hugging Face model to the Hugging Face embeddings inference component and use it in a flow, follow these steps:
1. Create a Vector store RAG flow. There are two embeddings models in this flow that you can replace with Hugging Face embeddings inference components.
2. Replace both OpenAI embeddings model components with Hugging Face model components.
3. Connect both Hugging Face components to the Embeddings ports of the Astra DB vector store components.
4. In the Hugging Face components, set the Inference Endpoint field to the URL of your local inference model. The API Key field is not required for local inference.
5. Run the flow. The local inference models generate embeddings for the input text.
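The steps above can be checked outside the flow by posting directly to the local endpoint. The sketch below builds (but does not send) such a request; the `/embed` path and the `{"inputs": ...}` body follow Hugging Face's text-embeddings-inference server and are assumptions to verify against your server's docs.

```python
import json
import urllib.request

def local_embed_request(endpoint, texts):
    # Build a request to a local inference endpoint; no API key needed locally.
    body = json.dumps({"inputs": texts}).encode()
    return urllib.request.Request(
        endpoint + "/embed",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = local_embed_request("http://localhost:8080", ["first chunk", "second chunk"])
# With the server running, urllib.request.urlopen(req) returns the vectors.
```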
This component generates embeddings using IBM watsonx.ai foundation models. To use IBM watsonx.ai embeddings components, replace an embeddings component with the IBM watsonx.ai component in a flow. An example document processing flow looks like the following: the flow loads a PDF file from local storage and splits the text into chunks, and the IBM watsonx.ai embeddings component converts the text chunks into embeddings, which are then stored in a Chroma DB vector store. The values for API endpoint, Project ID, API key, and Model Name are found in your IBM watsonx.ai deployment. For more information, see the LangChain documentation.
The component automatically fetches and updates the list of available models from your watsonx.ai instance when you provide your API endpoint and credentials.
This component generates embeddings using Ollama models. For a list of Ollama embeddings models, see the Ollama documentation. To use this component in a flow, connect it to your locally running Ollama server and select an embedding model:
1. In the Ollama component, in the Ollama Base URL field, enter the address of your locally running Ollama server. This value is set as the OLLAMA_HOST environment variable in Ollama. The default base URL is http://localhost:11434.
2. To refresh the server's list of models, click Refresh.
3. In the Ollama Model field, select an embedding model. This example uses all-minilm:latest.
4. Connect the Ollama embeddings component to a flow. For example, this flow connects a local Ollama server running an all-minilm:latest embeddings model to a Chroma DB vector store to generate embeddings for split text.
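Behind the component, each chunk becomes a POST to the Ollama server's embeddings endpoint. The sketch below builds the URL and body for that call using the default base URL from step 1; the `/api/embeddings` path with a `"prompt"` field matches Ollama's documented API, but verify against your server's version.

```python
import json

def ollama_embed_request(base_url, model, text):
    # POST {base_url}/api/embeddings with the model name and text to embed.
    url = f"{base_url}/api/embeddings"
    body = json.dumps({"model": model, "prompt": text})
    return url, body

url, body = ollama_embed_request(
    "http://localhost:11434", "all-minilm:latest", "split text chunk"
)
# With the server running, send it with urllib:
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   vector = json.loads(urllib.request.urlopen(req).read())["embedding"]
```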