AI Assistant

Manage and configure your AI assistants

You can manage and configure your AI assistants through the AI Assistant page in the sidebar of the Everywhere main interface.

Settings

Check Connectivity

When configuring an AI assistant, you can use the Check Connectivity button in the upper right corner of the page to verify that the API information you entered is correct and that the network connection is working.

If the check fails, double-check the AI provider settings, or try switching to a different network environment.
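
Under the hood, a connectivity check amounts to sending a small authenticated request to the provider. The snippet below is only an illustration of that idea, assuming an OpenAI-compatible endpoint and Python's requests library; it is not the request Everywhere actually sends.

  import requests

  API_URL = "https://api.openai.com/v1"  # replace with your provider's API address
  API_KEY = "sk-..."                     # your key; keep it out of shared scripts

  # Listing models is a cheap way to confirm both the key and the network path.
  response = requests.get(
      f"{API_URL}/models",
      headers={"Authorization": f"Bearer {API_KEY}"},
      timeout=10,
  )
  response.raise_for_status()  # fails here if the key, URL, or network is wrong
  print("Connectivity OK:", len(response.json().get("data", [])), "models visible")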

Basic Info

Icon

You can set a custom icon for each assistant to make it easier to distinguish between different assistants in the sidebar.

Click the icon area to select a Lucide icon or an emoji as the assistant's icon. You can also adjust the icon and background colors.

Name

Give the assistant an easily recognizable name to facilitate switching between multiple assistants.

Description

Add a short description to the assistant to help you remember its purpose or features.

System Prompt

The system prompt is used to guide the AI assistant's behavior and response style. You can customize the system prompt as needed to make the assistant better suit your requirements.

Placeholders

We provide some placeholders that you can use in the system prompt to dynamically insert relevant information (a sketch of how they expand follows the list):

  • {Time}: Current time
  • {OS}: Current operating system name
  • {SystemLanguage}: System language consistent with the Everywhere interface language
  • {WorkingDirectory}: Current working directory path, usually a folder named with the current date under the plugins folder
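
At request time, each placeholder is substituted with its current value. The Python sketch below only illustrates the idea; the variable values and the helper itself are hypothetical, not Everywhere's internals.

  import platform
  from datetime import datetime

  # Hypothetical expansion of the placeholders described above.
  values = {
      "{Time}": datetime.now().isoformat(timespec="seconds"),
      "{OS}": platform.system(),              # e.g. "Windows"
      "{SystemLanguage}": "en-US",            # follows the Everywhere interface language
      "{WorkingDirectory}": "plugins/2025-01-01",  # illustrative date-named folder
  }

  prompt = "OS: {OS}\nCurrent time: {Time}\nWorking directory: {WorkingDirectory}"
  for placeholder, value in values.items():
      prompt = prompt.replace(placeholder, value)
  print(prompt)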

Default System Prompt Example

You are a helpful assistant named "Everywhere", a precise and contextual digital assistant.
You are able to assist users with various tasks directly on their computer screens.
Visual context is crucial for your functionality and can be provided in the form of a visual tree structure representing the UI elements on the screen (if available).
You can perceive and understand anything on your screen in real time. No need for copying or switching apps. Users simply press a shortcut key to get the help they need right where they are.

<SystemInformation>
OS: {OS}
Current time: {Time}
Language: {SystemLanguage}
Working directory: {WorkingDirectory}
</SystemInformation>

<FormatInstructions>
Always keep your responses concise and to the point.
Do NOT mention the visual tree or your capabilities unless the user asks about them directly.
Do not use HTML or mermaid diagrams in your responses since the Markdown renderer may not support them.
Reply in the System Language, except for tasks such as translation or when the user specifically requests another language.
</FormatInstructions>

<FunctionCallingInstructions>
Functions can be dynamic and may change at any time. Always refer to the latest tool list provided in the tool call instructions.
NEVER print out a codeblock with arguments to run unless the user asked for it. If you cannot make a function call, explain why (Maybe the user forgot to enable it?).
When writing files, prefer keeping them inside the working directory unless absolutely necessary. Do not write files to system directories unless explicitly requested by the user.
</FunctionCallingInstructions>

Configure AI Provider

You can choose between Preset Mode or Advanced Mode based on the level of customization you need.

Please ensure that the AI provider you choose supports the features you intend to use, such as tool usage or image input, to avoid request failures.

Privacy

Please note that API keys are sensitive information. Do not disclose them to anyone or share them in public places.

Preset Mode

In Preset Mode, you only need to fill in a few common options; Everywhere automatically completes the remaining settings based on the selected model provider (see the sketch after the list below). This mode is suitable for most users.

For providers such as Ollama and OpenRouter, Advanced Mode is recommended, as it gives you more flexibility in selecting models.

  • Model Provider
    Select the model provider you want to use from the dropdown menu. Everywhere will automatically fill in the relevant API call address and protocol based on the selected provider.
  • API Key
    Select your previously saved API key from the dropdown menu, or add a new API key.
  • Model Name
    Select the specific model name you want to use from the dropdown menu. For example, for OpenAI, you can choose GPT-5.
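
Conceptually, the three preset fields expand into a full provider configuration. The mapping below is an illustrative assumption to show the idea, not Everywhere's actual internals:

  # Illustrative only: what a preset selection roughly resolves to.
  preset = {"provider": "OpenAI", "api_key": "sk-...", "model": "GPT-5"}

  resolved = {
      "url": "https://api.openai.com/v1",  # implied by the chosen provider
      "api_schema": "OpenAI",              # protocol implied by the chosen provider
      "model_id": preset["model"],
      "api_key": preset["api_key"],
  }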

Advanced Mode

In Advanced Mode, you can customize more configuration options, which is useful if you have specific needs or use a model provider that is not officially supported. The sketch after the list below shows how these options map onto a raw API request.

  • URL
    Enter the API call address of the model provider.
  • API Key
    Select your previously saved API key from the dropdown menu, or add a new API key.
  • API Schema
    Select the API schema corresponding to the model provider you are using. Different protocols may affect the format and parameters of the request.
  • Model ID
    Enter the specific model ID you want to use. For example, for OpenAI, you can enter gpt-5-mini.
  • Image Input Support
    Enable this option to upload images in the chat context. If the model does not support image input, enabling this option may cause request failures.
  • Tool Usage Support
    Enable this option to allow the model to use the tools (plugins) you have configured. If the model does not support tool usage, enabling this option may cause request failures.
  • Deep Thinking Support
    Enable this option to allow the model to think before answering. If the model does not support deep thinking, enabling this option may cause request failures.
  • Maximum Context Length
    Set the maximum context length (in tokens) the model can use in a conversation. Please adjust according to the limitations of the model you are using.
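
Taken together, these options correspond to fields of a raw API request. The sketch below assumes an OpenAI-compatible API Schema and Python's requests library; field names and paths differ under other schemas.

  import requests

  config = {
      "url": "https://api.openai.com/v1",  # URL
      "api_key": "sk-...",                 # API Key
      "model_id": "gpt-5-mini",            # Model ID
      "timeout_s": 60,                     # Request Timeout (see below)
  }

  body = {
      "model": config["model_id"],
      "messages": [{"role": "user", "content": "Hello"}],
      # "tools": [...]           only sent when Tool Usage Support is enabled
      # image parts in "content" only sent when Image Input Support is enabled
  }

  response = requests.post(
      f"{config['url']}/chat/completions",  # path depends on the selected API Schema
      headers={"Authorization": f"Bearer {config['api_key']}"},
      json=body,
      timeout=config["timeout_s"],
  )
  print(response.json()["choices"][0]["message"]["content"])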

Request Timeout

Set the timeout for each API request in seconds. If a request is not completed within this time, it will be terminated. For slower models, it is recommended to increase the timeout duration appropriately.
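
In HTTP-client terms, the timeout is simply an upper bound on how long a single request may run. A minimal illustration, assuming Python's requests library rather than Everywhere's actual client:

  import requests

  try:
      # If the provider has not finished within 120 seconds, the call is
      # abandoned instead of waiting indefinitely.
      requests.get("https://api.openai.com/v1/models", timeout=120)
  except requests.Timeout:
      print("Request exceeded the timeout and was terminated")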

Creativity (Temperature)

Adjust the creativity of the model's generated responses. A higher value (e.g., 0.8) will make the responses more diverse and creative, while a lower value (e.g., 0.2) will make them more conservative and consistent.

Thought Openness (Top-P)

Adjust the range of vocabulary the model considers when generating responses. A higher value (e.g., 0.9) will make the model consider more vocabulary options, while a lower value (e.g., 0.3) will restrict the model to only the most likely words.
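
Both settings are standard sampling parameters that are passed through to the model. Assuming an OpenAI-compatible request body (parameter names can vary with the API Schema):

  # Illustrative request body fragment showing where the two sliders end up.
  body = {
      "model": "gpt-5-mini",
      "messages": [{"role": "user", "content": "Suggest a name for a travel blog"}],
      "temperature": 0.8,  # Creativity: higher = more varied, lower = more consistent
      "top_p": 0.9,        # Thought Openness: nucleus-sampling cutoff for candidate tokens
  }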
