AI calls
Automat has built-in support for AI calls, available to all applications. The only requirement is that tokens are loaded under the account.
To check the current token balance or purchase more, install the following application from the Automat store: Resource Quota and Purchase.
Methods
There are two methods for calling the AI to generate data from Automat templates.
Invoke
In this mode the AI is called to generate a single output. All context must be passed in the request itself if anything custom should be processed.
Note that the entire input counts toward token usage.
The input data structure matches that of Bedrock invoke calls, so adjust it as required for the selected model.
The default model is currently “global.anthropic.claude-sonnet-4-6”. An optional second parameter to AIInvokeModel selects a different model (one that is enabled).
Parameters:
- The invoke request JSON, structured according to the model used
- (optional) A specific model to use instead of the default
The call outputs the invoke response JSON.
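As a sketch of what the request JSON might look like, the snippet below builds a payload in the Bedrock "messages" format used by Anthropic models. The field names follow that Bedrock format; the exact template syntax for passing this to AIInvokeModel is not shown here and the prompt text is illustrative only.

```python
import json

# Sketch of an invoke request for an Anthropic model on Bedrock.
# Adjust the structure if a different model family is selected.
request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            # All context must be included here; everything in this
            # payload counts toward token usage.
            "content": "Summarize the following order data: ...",
        }
    ],
}

# The serialized payload would be passed to AIInvokeModel, optionally
# with a second parameter naming an enabled model.
payload = json.dumps(request)
```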
Retrieve and generate
This mode improves on plain invocation in two ways:
- It uses a pregenerated knowledge base for context.
- A session can be continued, improving token and context usage across prompts in the same chat session.
Example knowledge base (Erply context): LUBFWEYSPM (eu-west-1)
Parameters:
- The prompt
- (optional) Session ID. Leave empty when starting a session; the response returns a session ID, which can be passed with the next prompt to continue in that session's context.
- The knowledge base ID
- (optional) A specific AI model. Leave empty to use the default model.
- The region the knowledge base is taken from. If the model is directed to a specific region, this must match it.
This will output the generated response together with the session ID.
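The session-continuation flow above can be sketched as follows. The function name `ai_retrieve_and_generate` and the response shape are assumptions made for illustration (the real call is made from an Automat template); only the parameter list mirrors the documentation, and the stub simply fakes a response so the flow can run.

```python
# Hypothetical stand-in for the real Automat retrieve-and-generate call.
# Parameters mirror the documented list: prompt, optional session ID,
# knowledge base ID, optional model, and region.
def ai_retrieve_and_generate(prompt, session_id, knowledge_base_id,
                             model=None, region="eu-west-1"):
    # Fake response: a real call would query the knowledge base and
    # return generated text plus a session ID.
    return {
        "text": f"answer to: {prompt}",
        "sessionId": session_id or "sess-123",
    }

# First prompt: session ID left empty to start a new session.
first = ai_retrieve_and_generate(
    "What payment types does Erply support?",
    session_id=None,
    knowledge_base_id="LUBFWEYSPM",
)

# Follow-up prompt reuses the returned session ID so the chat
# context carries over without resending it.
second = ai_retrieve_and_generate(
    "List only the card-based ones.",
    session_id=first["sessionId"],
    knowledge_base_id="LUBFWEYSPM",
)
```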