LLMs (Large Language Models)
The Large Model node can invoke an LLM to generate responses based on input parameters and prompts. It is typically used for text-generation tasks such as copywriting, text summarization, article expansion, and more.
Leveraging the language comprehension and generation capabilities of LLMs, the Large Model node can handle complex processing tasks described in natural language. You can select different models based on your business requirements and configure prompts to define the model's persona and response style. To precisely control the model's output, you can adjust parameters within the Large Model node that govern text length, content diversity, and other attributes.
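Conceptually, the node's behavior resembles a single chat-completion call: an identity (system) prompt, a user prompt, and sampling parameters that control diversity and output length. The sketch below is a minimal illustration assuming an OpenAI-compatible Python client; the model name and parameter values are placeholders, and the actual node performs this call internally through its configuration UI.

```python
# Minimal sketch of what the Large Model node does conceptually:
# one chat-completion call driven by prompts and sampling parameters.
# Assumes an OpenAI-compatible client; model name and values are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # "Large Language Model (LLM)" setting
    temperature=0.7,       # "AI Creativity": higher = more diverse, more random output
    max_tokens=512,        # "Maximum Response": output length cap, in tokens
    messages=[
        # Identity Prompt: role, task, and response-style instructions
        {"role": "system", "content": "You are a marketing copywriter. Keep replies concise."},
        # User Prompt: the specific request
        {"role": "user", "content": "Write a one-sentence tagline for a reusable water bottle."},
    ],
)

print(response.choices[0].message.content)
```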
Node Configuration
Large Language Model (LLM): The large language model to be invoked by the current node.
AI Creativity: Controls the diversity of generated content. Higher values increase the creativity and randomness of the model's responses.
Maximum Response: The maximum output length of the LLM, measured in tokens.
Identity Prompt: Defines the identity of the LLM for the current invocation. This typically includes descriptions of its role, task, skills, workflow, constraints, and background to guide the model's behavior and response style, ensuring alignment with the intended task and execution process.
User Prompt: The specific question or request provided by the user to direct the model's response.
Memory: If the current workflow is embedded within an AI agent, the agent's conversation history (i.e., memory) can be included as part of the context for the LLM invocation.
Tool: Tools required for the LLM invocation. Supports both custom tools and open tools.
Data Table: Data tables to be queried during the LLM invocation. Supports cross-table queries. (See the sketch after this list for how these settings combine into a single model call.)
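The sketch below shows one plausible way these settings come together at invocation time: the identity prompt becomes the system message, memory turns are appended as prior conversation, the user prompt comes last, and tools are passed alongside the call. The field names, the assembly order, and the query_orders_table tool are illustrative assumptions, not the platform's internal schema; the tool definition follows the OpenAI function-calling format.

```python
# Sketch of how the node's settings might be assembled into a single LLM call.
# Names and assembly order are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

identity_prompt = "You are a customer-support assistant for an online bookstore."
user_prompt = "Summarize the customer's last two messages in one sentence."

# "Memory": prior agent conversation turns included as context.
memory = [
    {"role": "user", "content": "My order #1042 hasn't arrived."},
    {"role": "assistant", "content": "I'm sorry about that. Let me check order #1042."},
]

# "Tool": a custom tool the model may call, e.g. a hypothetical data-table lookup.
tools = [{
    "type": "function",
    "function": {
        "name": "query_orders_table",
        "description": "Look up an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.3,
    max_tokens=256,
    messages=[{"role": "system", "content": identity_prompt},
              *memory,
              {"role": "user", "content": user_prompt}],
    tools=tools,
)
```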
Node Output
JSON: Defines custom key-value pairs that describe the desired JSON structure. The LLM outputs its result according to the defined JSON format (see the sketch after this list).
LLM Response: Directly outputs the raw content generated by the LLM.
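To illustrate the JSON output mode, the sketch below defines the desired keys, asks the model for a JSON object, and parses the reply. The key names, the response_format option, and the use of an OpenAI-compatible client are assumptions for illustration; they are not the node's internal mechanism.

```python
# Sketch of structured JSON output: define the desired keys, request a JSON
# object, then parse the reply. Key names and options are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

desired_keys = {"title": "string", "summary": "string", "keywords": "list of strings"}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.2,
    response_format={"type": "json_object"},  # ask for a strict JSON object
    messages=[
        {"role": "system",
         "content": f"Reply only with a JSON object using these keys: {desired_keys}"},
        {"role": "user", "content": "Summarize this product announcement: ..."},
    ],
)

result = json.loads(response.choices[0].message.content)  # dict with the defined keys
print(result)
```

In contrast, the LLM Response output simply passes through response.choices[0].message.content without any parsing.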