Version: v6.7.0

Invoke large models

Command Description

Calls a large language model to answer questions or process natural language.

Command Prototype

LLM.GeneralChat({"name":"","model":"","base_url":"","api_key":""},"","","",30,0.7,4000,"text",0)

Command Parameters

| Parameter | Mandatory | Type | Default Value | Description |
| --- | --- | --- | --- | --- |
| config | true | object | {"name":"","model":"","base_url":"","api_key":""} | Model configuration: model name, model access address (base_url), and API key |
| content | true | string | "" | User prompt |
| prompt | false | string | "" | System prompt |
| file_path | false | string | "" | Image path (JPEG/PNG/BMP, 10 MB maximum; some models may not support all three image formats) |
| timeout | false | int | 30 | Timeout in seconds |
| temperature | false | float | 0.7 | Sampling temperature |
| max_output_length | false | int | 4000 | Maximum output length in tokens |
| format | false | string | "text" | Format of the content returned by the model: plain text or JSON |
| max_retry_count | false | int | 3 | When the model returns an error or a response that does not conform to the requested output format, the context is resent so the model can answer again; at most 3 retries |
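The presence of `base_url` and `api_key` in `config` suggests the command talks to an OpenAI-compatible endpoint. As an illustration only, the sketch below shows how the parameters in the table above could plausibly map onto such a request payload; the helper name `build_chat_request` and the payload field names (`messages`, `max_tokens`) are assumptions following the OpenAI convention, not the command's documented internals.

```python
# Hypothetical mapping of GeneralChat-style parameters onto an
# OpenAI-compatible chat-completions payload. Field names outside the
# parameter table are assumptions, not documented internals.

def build_chat_request(config, content, prompt="", temperature=0.7,
                       max_output_length=4000):
    """Assemble a request payload from GeneralChat-style parameters."""
    messages = []
    if prompt:  # the system prompt is optional
        messages.append({"role": "system", "content": prompt})
    messages.append({"role": "user", "content": content})
    return {
        "model": config.get("model", config.get("name", "")),
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_output_length,
    }

payload = build_chat_request(
    {"model": "openai/deepseek-chat", "api_key": "123456789",
     "base_url": "https://api.deepseek.com"},
    "Help me translate the content in the image into Chinese",
    "You are a translator with professional translation expertise",
)
```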

Running Example

/*************************** Input Text **********************************
Command Prototype:
LLM.GeneralChat({"name":"","model":"","base_url":"","api_key":""},"","","",30,0.7,4000,"text",0)

Input Parameters:
config -- Model configuration, specify model name, access URL, API key
content -- User prompt
prompt -- System prompt
file_path -- Image path (JPEG/PNG/BMP, max 10MB, some models may not support all three formats)
timeout -- Timeout in seconds
temperature -- Temperature
max_output_length -- Maximum output length (tokens)
format -- Format of the LLM response content
max_retry_count -- Maximum retry attempts on failure

Output Parameters:
sRet -- Command execution result assigned to this variable

Important Notes:
Deleting the LLM environment configuration file or removing the invoked LLM in the configuration management interface will prevent the command from running normally

*********************************************************************/

//Using Pre-configured Model:
LLM.GeneralChat({"name":"deepseek"},"Help me translate the content in the image into Chinese","You are a translator with professional translation expertise",'''C:\Users\Administrator\Downloads\1280X1280.PNG''',30,0.7,4000,"text",0)

//Using Custom Model Configuration:
LLM.GeneralChat({"model":"openai/deepseek-chat","api_key":"123456789","base_url":"https://api.deepseek.com"},"Help me translate the content in the image into Chinese","You are a translator with professional translation expertise",'''C:\Users\Administrator\Downloads\1280X1280.PNG''',30,0.7,4000,"text",0)
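The result can also be captured in a variable (the `sRet` mentioned under Output Parameters) and the model asked for JSON output by setting `format` to "json"; with `max_retry_count` of 3, a malformed response is retried up to three times. A minimal sketch, assuming a pre-configured model named "deepseek" and an illustrative prompt:

//Capturing the result and requesting JSON output:
sRet = LLM.GeneralChat({"name":"deepseek"},"List three primary colors as a JSON array","You are a helpful assistant","",30,0.7,4000,"json",3)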

Visual Example