4. The plugin parameters

You can edit the plugin parameters by going to your site's back-end, System, Manage, Plugins. Look for and click on System - AITiny to edit the plugin's parameters.

Plugin

The parameters under this section control the overall behaviour of the plugin.

Note

Some of the following options may be hidden depending on the option selected in a different setting. For example, the Endpoint URL is only shown when the AI Service selected is Custom.

Some model configuration options such as Maximum Completion Tokens may have no effect for some AI services; these are noted in the documentation pages describing each supported AI service.

AI Service

Choose which AI service you want to use.

Endpoint URL

Only shown when using the Custom setting for the AI Service option. Enter the full URL to the API endpoint of an AI service compatible with the OpenAI API specification.

Local AI services – used with the Custom service provider – require additional considerations when used on a live site. Please read the "Using a locally hosted AI service" section.
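
For reference, popular locally hosted AI servers typically expose an OpenAI-compatible endpoint at a default address similar to the following (these are typical defaults, not something AITiny sets; verify the host, port, and path against your own installation):

  • Ollama: http://localhost:11434/v1

  • LM Studio: http://localhost:1234/v1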

API Key / Token

Only shown when using an AI Service option other than Custom. Enter the API Key or API Token provided by the AI service under your user account. This is required to communicate with the AI Service.

Model

Choose the generative AI model which will be used by default.

To show the list, select an AI Service and enter the Endpoint URL or API Key / Token, depending on which service you chose. Then, click on Save in the toolbar. When the page reloads, the Model drop-down will be populated. If it still tells you to save the configuration, please check your settings above.

If you find the list to be out of sync with your AI service, close the plugin options page (important!), then go to System, Maintenance, Clear Cache. On this page, select the plg_system_aitiny item and click on the Delete button in the toolbar. Now go back to the plugin options page. The list should be reloaded. If this still does not work please look at Problems communicating with AI services.

Auto-load Model

Only shown when using the Custom setting for the AI Service option.

When using a commercial AI service, set this to No.

When using a locally hosted AI server (Ollama, LM Studio, Cortex, Jan, etc.) the correct setting depends on the server.

Most locally hosted AI servers (e.g. Ollama, LM Studio, ...) handle model loading and unloading automatically. When using those servers you can set this to No.

Other locally hosted AI servers, mostly those based on Cortex.so, such as Jan, require explicit model loading. If you are using one of these servers you need to set this option to Yes.

Tip

If the API documentation for your locally hosted AI server has a /models/start endpoint you need to set this option to Yes. If no such endpoint is documented, you can leave this option set to No.
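
As a purely hypothetical illustration (the host, port, path, and request body all differ between servers; always follow your own server's API reference), an explicit model-loading call is typically a small HTTP POST along these lines:

  POST http://localhost:1337/v1/models/start
  Content-Type: application/json

  {"model": "my-local-model"}

If your server documents a call of this kind, set Auto-load Model to Yes.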

Streaming

When enabled, the AI service is instructed to return the response in chunks as they are generated, instead of waiting to return the entire response as one big block once it's fully ready. In this case, AITiny shows you the response as it is being received. That is how the AI chat applications you use in your browser work.

The downside of streaming replies is that they consume a lot more bandwidth. The protocol overhead is typically ten to thirty times the size of the reply text itself, as opposed to under ten percent for a non-streaming reply.
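
To see where the overhead comes from, here is a simplified, illustrative example of a single streamed chunk in the OpenAI-compatible format (the exact fields and values vary per service):

  data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1700000000,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

Each chunk delivers only a few characters of actual text (the delta content) wrapped in a JSON envelope of a couple of hundred bytes, and a complete reply consists of hundreds or thousands of such chunks.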

System Prompt

The System Prompt used for all features of this plugin. For the AI Generate feature this is the only system prompt used. For all other features it is the first system prompt used; a secondary system prompt is either defined in the options further below or, in the case of the Custom… tool, provided by the user.

You should customise the prompt to match the content type, tone of voice, and target audience of your site. Think of it as giving the AI service instructions on how to “role-play” the ideal person who should be writing content for the site.
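
As a purely illustrative example (not a prompt shipped with the plugin), a system prompt for a home-cooking blog might read: “You are an experienced food writer for a home-cooking blog aimed at busy parents. Write in friendly, plain English, keep paragraphs short, and avoid technical jargon unless you explain it.”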

To better understand the fundamental generative AI concepts please consult the section of this documentation titled A crash course in generative AI.

Tone of Voice

Choose one of the Tone of Voice presets. These add an additional system prompt which instructs the AI how to express itself when generating new content or rewriting existing content. You may want to take a look at the samples to understand how each tone of voice preset affects text generation.

If you'd rather handle the tone of voice in the system prompt, you can choose None.

Maximum Completion Tokens

The maximum number of tokens that can be generated by the generative AI service. This includes visible output tokens (the text you receive) and the invisible reasoning tokens (for reasoning models). Some AI services may ignore this setting.

Set to 0 (default) to ignore this setting.

Please note that the 0 setting does not imply a potentially infinite length of reply. The reply size will be constrained by the default value of maximum completion tokens set up by the service and model you are using. This is typically fairly long, but certainly not infinite!
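
As a rough rule of thumb (it varies by model and language), one token corresponds to roughly three quarters of an English word, so a limit of 1,000 tokens allows a reply of at most roughly 750 words; reasoning models will produce less visible text than that, since part of the budget is spent on the invisible reasoning tokens.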

Service specific notes on maximum completion tokens:

  • When using Anthropic, this option actually sets the max_tokens parameter, which only takes output tokens into account. Anthropic requires setting this parameter; it can never be omitted. When you set this to 0, a default value of 8192 will be used. This may be inappropriate for your account, causing errors or triggering a message about exceeding your rate limit.

To better understand the fundamental generative AI concepts please consult the section of this documentation titled A crash course in generative AI.

Temperature

Sets the Temperature parameter of the generative AI model. It is a decimal value between 0.0 and 1.0, or 0.0 and 2.0 depending on the AI provider and model.

This setting controls the predictability of the words used in the generated text. A low setting (0.1 to 0.5) gives high reliability and accuracy. It is the recommended value for the Writing Tools and any similar custom actions you may define yourself.

A medium setting (0.5 to 1.0) strikes a balance between creativity and predictability. It is the recommended value for text generation.

A high setting (1.0 to 2.0) gets very creative, but suffers greatly from hallucination (fabrications and falsehoods). It is recommended only for creative writing and fictional content generation, as long as you understand that the result may contain a lot of hallucinations.

The default value is 0.7.

If you want to NOT pass this parameter to the model, set it to -1 (negative one).

Service specific notes on temperature:

  • When using an Anthropic model, the allowed range is 0.0 to 1.0. You will need to adjust the temperature in the predefined writing tools, typically scaling it by half (e.g. where the default temperature is 0.7, set it to 0.35). Failure to do so may yield unexpected results, or errors.

To better understand the fundamental generative AI concepts please consult the section of this documentation titled A crash course in generative AI.

Top_p

Also known as “nucleus sampling”. This setting controls how many candidate words the model considers: it samples only from the smallest set of words whose combined probability adds up to the Top_p value. Some AI services may ignore this setting; in that case, use the Temperature setting instead.

Lower values (0 to 0.9) give repetitive, ‘safe’ text.

Higher values (0.9 to 1.0) give more ‘creative’, but possibly nonsensical text.

The default value is 0.95.

If you want to NOT pass this parameter to the model, set it to -1 (negative one).

To better understand the fundamental generative AI concepts please consult the section of this documentation titled A crash course in generative AI.

User Groups

Only users in the User Groups defined in this setting will be able to use this plugin's features in the TinyMCE / JCE editor.

By default, this only includes the default Super User group (group ID 8).

Warning

It is strongly advised that you only use User Groups which contain users you trust not to abuse the generative AI.

Since – at least for now – AITiny does not integrate with any third-party AI security service to put guardrails on the input text, a malicious user could override the system prompt and use the generative AI to answer whatever questions they want, while you incur the usage fees (if any). This is true even if you do not make the Custom… tool available to them.

Generation

This tab contains a set of options which apply to the AI Generate feature shown in the TinyMCE editor. The options are:

Show “AI Generate” action

Should the AI Generate feature be shown in the TinyMCE interface under the AI button?

Default: Yes.

Keyboard Shortcut

You can enter a keyboard shortcut to access this feature directly, without having to click through the icons in the interface.

Default: CTRL+SHIFT+13 (13 is the key code for ENTER, so this means holding down CTRL, SHIFT, and ENTER)

Icon Class

The icon class used for this feature in the TinyMCE interface. This accepts any Font Awesome 6 Free icon from the Solid or Brands collections. These icons are shipped with Joomla itself. Leave empty to not use an icon.

Default: fa-laptop-code

Model

Which model should be used to run this feature? The “- Use Default -” option means that the model selected under the Plugin tab will be used.

Default: - Use Default -

Additional System Prompt

An additional system prompt which will be sent to the AI Service together with the main System Prompt defined in the Plugin tab to give it more specific instructions for the task at hand.

Default: (empty)

Tone of Voice

Override the tone of voice set up in the plugin just for this feature.

Default: - Inherit -

Maximum Completion Tokens

As per the same-named option under the Plugin tab.

A value of 0 means “use the Maximum Completion Tokens defined under the Plugin tab”. If the value under the Plugin tab is also 0 then the default value defined by the AI service and model will be used.
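
For example, if this option is set to 0 and the Plugin tab's Maximum Completion Tokens is 2048, this feature will use a limit of 2048 tokens; if both are set to 0, the default limit of the AI service and model applies.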

Default: 0

Temperature

As per the same-named option under the Plugin tab.

Default: 0.7

Top_p

As per the same-named option under the Plugin tab.

Default: 0.95

Writing Tools

This tab contains a set of options which apply to the writing tools feature shown in the TinyMCE editor. The writing tools are the Proofread, Friendly, Professional, Concise, and Custom… items.

The options for each feature are the same as those under the Generation tab.

Tip

You can find the default settings, including the default system prompt, for each predefined tool in the file plugins/system/aitiny/src/defaults.json.

Moreover, if you ever feel you have messed up your prompt big time, you can revert to the default. Delete the prompt and then click on the Save button TWICE. The first save will reset the visible prompt to the built-in default (as per the contents of the defaults.json file described above), the second save will actually save that default prompt you are seeing into the plugin's configuration.

There is one additional option for your custom writing tools:

Override System Prompt

When enabled the Additional System Prompt provided above overrides (replaces) the System Prompt provided in the Plugin tab. When disabled (default) the Additional System Prompt is used in addition to the System Prompt provided in the Plugin tab. Tip: Enable this option if you want to create a completely different “personality”.

ALT generator

This feature allows you to generate ALT attributes for image elements. It is available both for inline images in the TinyMCE content editor and for any Media field in any Joomla form (e.g. the intro and full text image fields in Joomla's article editor).

Important

You MUST use a model which supports image input. Here are some examples we have successfully tried with real-world content, depending on your AI provider (NOT an exhaustive list!):

  • OpenAI: the latest GPT Mini model available to you

  • Anthropic (Claude): the latest version of Haiku available to you

  • Locally hosted (LM Studio): Gemma 3, Qwen 2.5 VL

Custom actions

If you find yourself frequently using the same prompt in the Custom… feature you can set up a Custom Action.

The options you have for each custom action are the same as those under Writing Tools and Generation, plus the following:

Title

The title of this feature in the TinyMCE interface.

User Groups

Only users in the User Groups selected here will be able to use this custom action in the TinyMCE editor. Leave blank to not apply an additional restriction (the User Groups under the Plugin tab will be used).

Important

The User Groups you select here must be a subset of the groups you have selected in the User Groups option in the Plugin tab.
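
For example, if the User Groups option in the Plugin tab contains Super Users and a (hypothetical) “Content Team” group, a custom action may be restricted to just the “Content Team” group. Selecting a group which is not allowed in the Plugin tab will not grant its members access to the custom action, since they cannot use the plugin's features at all.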