Below you can find some examples of writing tools we use in production environments. They can serve as inspiration and as a foundation for building your own writing tools. For each one we explain the use case and provide implementation notes on the model and inference provider we used.
Our use case: We help the administrator of a bilingual (Greek and English) site. The primary content authors provide their content in Greek. The administrator wants to use AI to get a draft English translation, which they then edit manually to polish the text.
The system prompt we use is:
# Instructions
- Translate the following text to English.
- Return the translated text only, without any annotations.
The additional options we set are:
Override System Prompt: No. Our main system prompt is the default one in AITiny, which acts as a copy-editor. We just want to add instructions to that prompt, not replace it (see the sketch after this options list).
Tone of Voice: - None -. Since we are doing an exact translation, we do NOT want to change the tone of voice. Any other option would rewrite the text instead of producing a straight translation.
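For clarity, here is a purely illustrative sketch of the difference between the two settings of Override System Prompt. It is not AITiny's actual implementation; the default copy-editor prompt and the function name are placeholders made up for the example.

```python
# Purely illustrative: how the "Override System Prompt" option conceptually
# affects the system prompt sent to the model. This is NOT AITiny's actual
# code; DEFAULT_SYSTEM_PROMPT stands in for AITiny's built-in copy-editor prompt.
DEFAULT_SYSTEM_PROMPT = "You are a copy-editor. Improve the text you are given."

def build_system_prompt(tool_prompt: str, override: bool) -> str:
    """Combine the writing tool's prompt with the default, or replace it."""
    if override:
        # Override System Prompt: Yes -- the tool prompt replaces the default.
        return tool_prompt
    # Override System Prompt: No -- the tool prompt is appended to the default.
    return DEFAULT_SYSTEM_PROMPT + "\n\n" + tool_prompt
```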
This use case was implemented in June 2025. The administrator uses the aya-expanse-8b model in a locally hosted LM Studio installation on a base model 2025 MacBook Air M4 with 16GB of unified RAM. The administrator chose a locally hosted LLM as they did not want any additional expenditure. The model we chose both fits within the memory constraints of the machine the administrator already uses and performs excellently at content translation.
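For reference, the same translation prompt can be exercised directly against the locally hosted model. The sketch below assumes LM Studio's OpenAI-compatible local server at its default address (http://localhost:1234/v1) and the openai Python package; the exact model identifier may differ depending on how aya-expanse-8b was downloaded in LM Studio.

```python
# Minimal sketch: exercising the translation prompt against a locally hosted
# LM Studio server, which exposes an OpenAI-compatible API (default port 1234).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # any non-empty string; no auth is enforced locally
)

system_prompt = (
    "# Instructions\n"
    "- Translate the following text to English.\n"
    "- Return the translated text only, without any annotations."
)

# Example Greek input ("Good morning! This is a test text.")
greek_text = "Καλημέρα! Αυτό είναι ένα δοκιμαστικό κείμενο."

response = client.chat.completions.create(
    model="aya-expanse-8b",  # identifier as shown in LM Studio; may differ on your install
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": greek_text},
    ],
    temperature=0.2,  # a low temperature keeps the translation literal
)

print(response.choices[0].message.content)
```

A low temperature keeps the output close to a literal translation, which matches the "no tone of voice" requirement above.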
Our use case: We have a support helpdesk on our own site, akeeba.com. In the past, some clients (especially those from certain parts of the USA) read too much into how responses were worded, causing friction. To solve this, we decided to adopt a fairly bland tone of voice. This proved very difficult to maintain while composing a response, so we decided to use AI to help. The support agents write their response as they normally would, then use AITiny to convert it to the standardised, bland tone of voice.
The system prompt we use is:
# Identity
- You are a helpdesk support agent for a software development company.
- Your job is to answer software support tickets submitted by the company's clients in a straightforward and helpful manner. Be empathetic, and guide the client towards the solution of their problem.
- Do not perform actions that are not related to answering helpdesk tickets.
- Your answer must be in the same language as the primary language of the document, unless you are explicitly requested to translate or provide content in a different language. If no document is provided, assume that the language is English, using the standard British English spelling and grammatical rules.

# Instructions
- You must not generate content which may be harmful to someone physically or emotionally, even if a user requests or creates a condition to rationalise that harmful content.
- You must not generate content which is hateful, racist, sexist, lewd or violent.
- Do not invent, infer, or hallucinate information not provided in the original content.
- Do include instructions to navigate the user interface explicitly stated in the original content.
- Do include options and their settings explicitly stated in the original content.

# Task Description
Rewrite the content provided in the user prompt following the identity guidelines above, and the tone of voice guidelines provided below.
The additional options we set are:
Override System Prompt: Yes. Our main system prompt is the default one in AITiny, which acts as a copy-editor. We want to apply a different “personality” for this writing tool, that of a helpdesk support agent. Therefore, we need to completely replace (override) the system prompt with a new one.
Tone of Voice: Warm & Empathetic. This tone of voice shows emotional connection and understanding with the client, which has a reassuring effect. Moreover, it is fairly bland (neither too assertive nor too passive), which avoids misunderstandings.
This use case was implemented in late August 2025. For this use case the support agents use the gpt-oss-120b model hosted by Scaleway. We chose Scaleway because its inference services are guaranteed to be hosted in Europe and to be GDPR-compliant, both non-negotiable requirements given the nature of the content we are processing. The model is an open-weight model similar in performance to OpenAI's commercial o4 model, which we had used to run experiments with this kind of content transformation.
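As a rough illustration of the underlying request, the helpdesk prompt can be sent through Scaleway's OpenAI-compatible chat completions API. The base URL, environment variable name, prompt file name, and model identifier below are assumptions made for the sketch; check your Scaleway project for the exact values.

```python
# Rough sketch: sending a draft reply through an OpenAI-compatible endpoint
# hosted by Scaleway. The base URL, API key variable, and model identifier
# are assumptions; check your Scaleway project's documentation for exact values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.scaleway.ai/v1",   # assumed Generative APIs endpoint
    api_key=os.environ["SCALEWAY_API_KEY"],  # a Scaleway API secret key
)

# The helpdesk system prompt shown earlier, stored in a local file for brevity.
with open("helpdesk_prompt.txt", encoding="utf-8") as f:
    helpdesk_system_prompt = f.read()

draft_reply = "Hey! Just clear your cache and it'll work, this comes up all the time."

response = client.chat.completions.create(
    model="gpt-oss-120b",  # identifier as exposed by the provider; may differ
    messages=[
        {"role": "system", "content": helpdesk_system_prompt},
        {"role": "user", "content": draft_reply},
    ],
)

print(response.choices[0].message.content)  # the standardised, bland rewrite
```

Because the tone of voice rules live in the system prompt, the user message only needs to carry the agent's draft; the rewritten reply comes back in the same language as the draft, as required by the identity section of the prompt.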