demo.mp4
Writing Tools is an Apple Intelligence-inspired application for Windows that enhances your writing with AI LLMs. One system-wide hotkey press fixes grammar, punctuation, and more, making it an intelligent grammar assistant available in every application.
Based on ChatLLM.cpp, and built with Delphi and Lazarus.
Don't forget to update the ChatLLM.cpp bindings to enjoy GPU acceleration!
In the realm of AI, a secret lies,
Where LLMs run on your very site.
System-wide, without a hitch,
Instantly in any app you might pick.

Clipboard untouched, a pure delight,
No need to worry, no data to steal.
Purely native, no Python, no Java, no JS,
Bloat-free, CPU usage, it's a joke.

Chat mode, a quick query's delight,
No text selected, a chat mode you'll find.
For quick queries, assistance at your command,
All commands customizable, a true delight.

Privacy, absolute, a promise kept,
No data to share, no worries to find.
Free and open-source, a pure delight,
Purely native, no bloat, no pain.

With OpenAI's API online,
A trade-off made for access refined.
In the world of AI, a secret lies,
Where LLMs run on your very site.
- Absolute privacy: Based on ChatLLM.cpp. LLMs run on your machine.
- System-wide functionality: Works instantly in any application where you can select text. Does not overwrite your clipboard.
- Completely free and open-source: Purely native. Bloat-free & uses pretty much 0% of your CPU.
- Chat Mode: Seamlessly switch between context-processing mode and chat mode.
- Customization: All commands are fully customizable.
- Markdown/Math rendering: Beautiful and elegant.
- Trade-off on privacy: Sometimes one needs access to more powerful online models, so OpenAI-compatible services are also supported.
All features are defined by users based on their own needs. Some examples:
- Proofread: The smartest grammar and spelling corrector.
- Rewrite: Improve the phrasing of your text.
- Make Friendly/Professional: Adjust the tone of your writing.
- Summarize: Create concise summaries of longer texts.
- Create Tables: Convert text into a structured Markdown table.
- Custom Instructions: Give specific directions (e.g., Translate to Chinese).
Invoke Writing Tools with no text selected to enter quick chat mode.
- Go to the Releases page and download the latest package.
- Extract it anywhere you want.
- To use local LLMs served by ChatLLM.cpp for complete privacy, download a quantized model. Some small models:

  | Model name    | Size (GB) | Link |
  |---------------|-----------|------|
  | QWen-2.5 1.5B | 1.6       | Link |
  | Gemma-2 2B    | 2.8       | Link |
- Configure your profile. Copy `profile.json.in` to `profile.json`, and fill in the path of the quantized model file and other options (see ChatLLM.cpp).

  To set up a local LLM, just define a list of options required by ChatLLM.cpp:

  ```json
  {
    //...
    "chatllm": {
      "default": [
        "-m", "path of the quantized model file",
        "-ngl", "all",         // for GPU acceleration
        "+detect-thoughts"     // detect thoughts
      ]
    },
    //...
  }
  ```
  To set up an OpenAI-compatible server, just define a simple dictionary:

  ```json
  {
    //...
    "chatllm": {
      "default": {
        "url": "api for chat completion",
        "key": "....",
        "model": "..."
      }
    },
    //...
  }
  ```
  Take DeepSeek as an example:

  ```json
  {
    "url": "https://api.deepseek.com/chat/completions",
    "key": "sk-.....",
    "model": "deepseek-chat"
  }
  ```
  `profile.json` shall be encoded in UTF-8.

- Start `WritingTools.exe`.
- Select any text in any application (or don't select any text to use quick chat mode).
- Press your hotkey (default: Win+Shift+I).
- Choose an option from the popup menu or enter a custom instruction.
Selecting nothing in another application and pressing your hotkey will enter quick chat mode. Enter a prompt and start chatting.
Even if some text is selected, you can input a prompt starting with "/chat " or "chat :" to switch to chat mode.
Fully customizable through `profile.json`.

Multiple LLMs can be defined and loaded simultaneously. Each is defined as an entry in `chatllm`:
```json
{
  //...
  "chatllm": {
    "default": ...,
    "another_one": ...
  },
  //...
}
```

`default` is the default LLM for actions, which can be omitted when defining actions.
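For instance, combining the two configuration styles shown earlier, a local model and an OpenAI-compatible service can sit side by side (a sketch reusing the example values from this README; paths, keys, and entry names are placeholders to adjust):

```json
{
  //...
  "chatllm": {
    "default": [
      "-m", "path of the quantized model file",
      "-ngl", "all"                                  // GPU acceleration for the local model
    ],
    "another_one": {
      "url": "https://api.deepseek.com/chat/completions",
      "key": "sk-.....",
      "model": "deepseek-chat"
    }
  },
  //...
}
```

An action can then select the remote model with `"llm": "another_one"`, while actions that omit `llm` use `default`.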
Actions are represented to users as a collection of buttons. Each action is defined as a dictionary:
```json
{
  "name": "My Action",
  "prompt": "Check this:\n\n{context}",
  "sys_prompt": "....",
  "accelerator": "p",      // optional
  "llm": "another_one",    // optional
  "ai_prefix": "...",      // optional
  "ai_suffix": "...",      // optional
  "feels_lucky": true,     // optional (true or false)
  "web_app": "...",        // optional
  "action": "show"         // optional
}
```
- `name` gives the caption of the button.
- `prompt` is the prompt fed to the LLM, in which `{context}` represents the selected text.
- `sys_prompt` is the system prompt fed to the LLM.
- `accelerator` is the accelerator key of the button (a single character, optional).
- `llm` is the LLM selected to serve this action (when omitted, `default` is used).
- `ai_prefix` is used for generation steering.
- `ai_suffix` is used to abort generation: once this suffix is found in the LLM's output, generation is aborted.
- `feels_lucky` is a flag to accept the first round of the LLM's output. When set to `false` (the default), users can click the Redo button to try again.
- `web_app` is a special field providing additional functionality on the LLM's output (when `feels_lucky` is `false`). Possible values:
  - `diff`: compare the LLM's suggestion with the original text (useful for proofreading-like actions).
- `action` is the post action applied to the LLM's output. Possible values:
  - `show`: show the output in a box.
  - `prepend`: prepend the output in front of the current selection.
  - `replace`: replace the current selection with the output (the default).
  - `append`: append the output after the current selection.
  - `clipboard`: copy the output to the clipboard.
An example of `ai_prefix` and `ai_suffix`: force the AI to generate just doxygen-style comments for functions.
```json
{
  "ai_prefix": "\/** @brief",
  "ai_suffix": "*\/",
  // ...
}
```

A list of such actions is defined under `actions` in `profile.json`.
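For example, an `actions` list containing a single proofreading action might look like the following (an illustrative sketch: it assumes `actions` is a JSON array, and the prompt wording and the `diff`/`replace` choices are just one possible configuration):

```json
{
  //...
  "actions": [
    {
      "name": "Proofread",
      "prompt": "Proofread the following text. Correct grammar, spelling, and punctuation, and return only the corrected text.\n\n{context}",
      "sys_prompt": "You are a careful copy editor.",
      "accelerator": "p",
      "web_app": "diff",     // review the suggestion as a diff against the original
      "action": "replace"    // write the corrected text back over the selection
    }
  ],
  //...
}
```

More buttons are added simply by appending further dictionaries to the list.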
There are two special actions: one is defined under `custom` in `profile.json` and defines the behavior when users input a custom instruction; the other is defined under `quick-chat` and defines the behavior for quick chat mode.
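As an illustration, and assuming these two entries take the same dictionary form as the regular actions above (a sketch, not a definitive schema), they might be configured like this:

```json
{
  //...
  "custom": {
    "sys_prompt": "....",    // system prompt used when the user types a custom instruction
    "action": "replace"      // how the output is applied, as with regular actions
  },
  "quick-chat": {
    "sys_prompt": "...."     // system prompt used in quick chat mode
  },
  //...
}
```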
The `action` of a custom instruction can be configured on the fly by adding a prefix to the instruction, e.g. to select the `show` action:
- "/show explain this", or
- "show: explain this"
Users can change the hotkey, title, and background. Note: width and height are saved automatically when the window is resized.
```json
{
  "title": "My Writing Tools",
  "hotkey": "win+shift+I",
  "ui": {
    "width": 587,
    "background": {
      "color1": "ccaaaa",
      "color2": "ffffff"
    },
    "height": 422
  },
  //...
}
```

Some vibrant gradients:
- Clipboard not working?

  These three delays, in milliseconds, specify how to simulate Ctrl+C and detect clipboard changes.

  ```json
  {
    "ui": {
      //...
      "hotkey": {
        "delay1": 200,   // before simulating Ctrl+C
        "delay2": 40,    // between key inputs within Ctrl+C
        "delay3": 100    // wait for the clipboard to change after Ctrl+C
      }
    },
    //...
  }
  ```
- App can't load?

  The WebView2 runtime is required. It is preinstalled on all Windows 11 systems and most Windows 10 systems. If problems are encountered, check the documentation and install it. Link
- Reading the current selection through the clipboard can be unreliable. Please retry.
Add a shortcut to `WritingTools.exe` in the Windows Startup folder.
Precondition: Build libchatllm or get the `*.dll` files from the releases.
- Install Delphi Community Edition;
- Build this project (Target: Win64);
- Copy all `*.dll` files to the output directory (such as Win64/Debug).
- Install Lazarus Win64;
- Install the WebView4Delphi package into Lazarus;
- Build this project (LazWritingTools.lpi);
- Copy all `*.dll` files to the output directory (such as lib/x86_64-win64).
- This project is inspired by another WritingTools. Let's keep things simple, with Delphi and Lazarus.
- Super Object Toolkit for JSON manipulation.
- Prism for code highlighting.
- Marked for Markdown rendering.
- MathJax for math rendering.
Distributed under the MIT License.