# Chat

The LLM: Chat node lets you use conversation-tuned large language models such as GPT-4.
The model selection lets you choose between models (and versions) from different providers.
Additional model parameters are available, depending on the model you choose.
# Inputs
| Name | Type | Description |
|---|---|---|
| prompt | text | The prompt sent to the LLM as the last message |
| conversation | struct | The full conversation to be sent: a list of chat messages. |
| model | text | The model to use |
| **OpenAI models** | | |
| temperature | number | The sampling temperature; higher values produce more varied output |
## Combining prompt and conversation
If both inputs are provided, the prompt will be appended to the conversation as the last message using the user role.
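The merge behavior can be sketched as follows. This is a minimal illustration, not the node's actual implementation; the function name and the `{"role": ..., "content": ...}` message shape are assumptions based on the common chat-message format.

```python
# Hypothetical sketch of how the node combines its two inputs:
# the prompt is appended to the conversation as a user message.
def merge_inputs(prompt, conversation):
    messages = list(conversation or [])
    if prompt:
        messages.append({"role": "user", "content": prompt})
    return messages

history = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello, how can I help?"},
]
merged = merge_inputs("What is 2 + 2?", history)
# The prompt ends up as the last message, with the user role.
```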
## System prompt
There is currently no dedicated input for the system prompt, but you can always add one via the conversation input:
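For example, a conversation payload that carries a system prompt as its first message might look like this. The exact message shape is an assumption based on the common chat format; adapt it to what your provider expects.

```python
# Hypothetical conversation input: the system prompt travels as the
# first message, followed by the regular chat history.
conversation = [
    {"role": "system", "content": "You are a helpful assistant that answers concisely."},
    {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
]
```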
# Outputs
| Name | Type | Description |
|---|---|---|
| text | text | The content of the model's last response. |
| response | struct | The full vendor response, useful for inspecting details such as token usage. |
| conversation | struct | The full conversation: the input conversation with the last response appended. |
| error | struct | The error response from the model vendor, if an error occurred |
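A downstream step would typically check the error output before using the text. The sketch below is a hypothetical consumer of the node's outputs; the dict shape mirrors the table above, and the concrete values are illustrative.

```python
# Hypothetical handling of the node's outputs: fail fast on error,
# otherwise return the model's last response text.
def handle_outputs(outputs):
    if outputs.get("error"):
        raise RuntimeError(f"LLM call failed: {outputs['error']}")
    return outputs["text"]

outputs = {
    "text": "Paris is the capital of France.",
    "response": {"usage": {"total_tokens": 42}},  # vendor-specific shape
    "conversation": [],  # input conversation plus the last response
    "error": None,
}
answer = handle_outputs(outputs)
```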
# Usage examples
The most basic LLM workflow: take some user input, create a prompt, and send it to an LLM. The workflow returns the last LLM response.
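The basic workflow can be sketched in a few lines. The helper functions below are hypothetical stand-ins for the workflow nodes (prompt template and Chat node), not a real provider call.

```python
# Hypothetical sketch of the basic workflow: user input -> prompt -> LLM.
def build_prompt(user_input):
    # Stand-in for a prompt-template node.
    return f"Answer the following question briefly: {user_input}"

def call_llm(prompt, model="gpt-4"):
    # Stand-in for the Chat node; a real workflow would call the
    # selected provider here and return its outputs.
    return {"text": f"(model {model} response to: {prompt!r})"}

result = call_llm(build_prompt("What is a monad?"))
answer = result["text"]
```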
A more advanced example combines conversation and memory nodes to remember user conversations. It takes an additional userId input and uses it to look up and store conversations in our Key/Value store.
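The memory pattern can be sketched as follows. This is a minimal illustration under stated assumptions: a plain dict stands in for the Key/Value store, and `chat_with_memory` and the `llm` callable are hypothetical names, not the workflow's actual API.

```python
# Hypothetical sketch of per-user conversation memory: look up the
# stored conversation by user id, append the new turn, call the LLM,
# and persist the updated conversation for the next request.
kv_store = {}  # stand-in for the real Key/Value store

def chat_with_memory(user_id, prompt, llm):
    conversation = kv_store.get(user_id, [])
    conversation.append({"role": "user", "content": prompt})
    reply = llm(conversation)
    conversation.append({"role": "assistant", "content": reply})
    kv_store[user_id] = conversation  # persist for the next turn
    return reply

# A trivial stand-in LLM that echoes the latest user message.
echo = lambda conv: f"You said: {conv[-1]['content']}"
chat_with_memory("user-1", "Hello", echo)
chat_with_memory("user-1", "Remember me?", echo)
# kv_store["user-1"] now holds both turns (four messages).
```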