Preferences (Eclipse)
Oxygen AI Positron Assistant Preferences Page

- Enable AI Positron Assistant
- Deselect this setting to disable the AI assistant.
- Context prompt
- The context provides useful information about the user to the AI and is used in each action and chat request to create more relevant and personalized responses.
- Actions section
- Load default actions
- Specifies if default actions are loaded.
- Additional actions folder
- You can use this option to specify a local folder where you have stored additional actions.
- Actions to exclude
- You can specify a comma-separated list of IDs for the actions that you do not want presented in the list of available actions. Use the menu to the right of the text field to choose the actions to exclude.
AI Service Configuration Preferences

You can use editor variables of the form ${env(ENV_NAME)} in all configuration and header parameter values.
- AI Connector
- Specifies the connector type: OpenAI, Microsoft Azure OpenAI, Anthropic Claude, or Google Gemini.
OpenAI:
If OpenAI is chosen as the connector type, the following settings are available:
- Address
- The web address of the OpenAI service. By default: https://api.openai.com.
- API key
- The OpenAI API key necessary to work with the connector. Note: This option is not saved in the Project-level options.
- Organization ID
- Users who belong to multiple organizations can specify which organization is used for an API request. Usage from these API requests counts as usage for the specified organization.
- Default model
- The default model is used for the chat view and for actions that do not explicitly specify a model.
- Enable text moderation
- This setting applies moderation (checks whether content complies with OpenAI's usage policies) to both the input text sent to the AI service and the response received from it. It is enabled by default. Tip: By default, when executing an action using the OpenAI connector, three requests are made:
- A moderation request on the input content, sent to configured_web_address/v1/moderations.
- A completion request, sent to configured_web_address/v1/chat/completions.
- A moderation request on the content returned by the AI, sent to configured_web_address/v1/moderations.
If your AI service does not require moderation (for example, moderation is already performed by the chat/completions endpoint), you can disable it by deselecting this checkbox.
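The sequence of requests above can be sketched as follows. The base address, model name, and payloads are illustrative placeholders; only the endpoint paths come from the description above.

```shell
# Sketch of the three requests the OpenAI connector makes per action
# when text moderation is enabled. All values below are placeholders.
BASE="https://api.openai.com"
MODERATION_BODY='{"input": "Text to check against the usage policies"}'
COMPLETION_BODY='{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
# 1. Moderation of the input content:
echo "POST $BASE/v1/moderations"
# 2. The completion request itself:
echo "POST $BASE/v1/chat/completions"
# 3. Moderation of the content returned by the AI:
echo "POST $BASE/v1/moderations"
```

Disabling the moderation checkbox removes the first and third requests, leaving only the completion call.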
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers of the AI-specific requests. Tip: If the service uses Bearer Authentication, you can specify the key in the Key text field. If another authentication method is used, the Key field can be left empty and the Extra Headers table can be used to set the authentication info on the request header. Editor variables can be used in this table, so you can store your key in an environment variable and reference its pre-set value with an entry such as ${env(AI_SERVICE_KEY)}.
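As a minimal sketch, assuming the key is stored in an environment variable named AI_SERVICE_KEY (a hypothetical name), the Bearer header the connector would end up sending looks like this:

```shell
# Hypothetical setup: export the key once in the environment, then reference
# it from the Key field or Extra Headers table as ${env(AI_SERVICE_KEY)}.
export AI_SERVICE_KEY="sk-example-placeholder-key"   # placeholder, not a real key
# The resulting request header when Bearer Authentication is used:
echo "Authorization: Bearer $AI_SERVICE_KEY"
```

This keeps the secret out of the saved options; only the variable reference is stored.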
- You can use your own fine-tuned OpenAI models.
- The OpenAI connector might also work with other AI engines that implement the OpenAI APIs (such as Grok or DeepSeek).
MS Azure OpenAI:
If Microsoft Azure OpenAI is chosen as the connector type, the following settings are available:
- Endpoint
- The web address where the connector service is located. This value can be found in the Keys & Endpoint section when examining your resource in the Azure portal. For example: https://your-company-name.openai.azure.com/.
- Deployment
- The deployment name that was chosen when the model was deployed in Microsoft Azure.
- API key
- The Microsoft Azure OpenAI Service key necessary to work with the connector. If an API key is not provided, the application will attempt to authenticate with Microsoft Entra ID using one of the supported identity-based methods.
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
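As a rough illustration of how the Endpoint and Deployment settings fit together, the connector's request URL follows the usual Azure OpenAI REST shape. The deployment name and api-version below are placeholder assumptions, not values from this documentation.

```shell
# Sketch of how Endpoint and Deployment combine into the request URL
# (URL shape follows Azure OpenAI REST conventions; all values are placeholders).
ENDPOINT="https://your-company-name.openai.azure.com"
DEPLOYMENT="my-gpt-4o-deployment"
API_VERSION="2024-02-01"
URL="$ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=$API_VERSION"
echo "POST $URL"
```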
The Microsoft Azure OpenAI connector supports the following identity-based authentication flow:
- Service Principal Authentication
- Service principal authentication is ideal for non-interactive scenarios, such as background processes or automation scripts. Choose one of the options below if you need to authenticate without prompting the user to sign in. Before you begin, create a service principal and assign it a role that allows access to the Azure OpenAI service (e.g. the Cognitive Services OpenAI User role).
- Client Secret
Use this method for simpler setups in trusted environments. Be aware that on desktop systems, secrets stored in environment variables or config files are not encrypted by default and may be exposed to other local users or processes.
Required Environment Variables:
- AZURE_CLIENT_ID: ID of a Microsoft Entra application.
- AZURE_TENANT_ID: ID of the application's Microsoft Entra tenant.
- AZURE_CLIENT_SECRET: One of the application's client secrets.
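For example, the client-secret flow can be set up by exporting these variables in the environment that launches Eclipse (all values below are placeholders):

```shell
# Placeholder credentials for the client-secret authentication flow.
# Export them before starting Eclipse so the connector can pick them up.
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"
export AZURE_CLIENT_SECRET="placeholder-client-secret"
```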
- Client Certificate
Use this method in high-security environments where certificate-based authentication is preferred over storing client secrets. Be aware that certificate files, like other secrets, must be protected properly, especially on desktop systems where local users may gain access to file contents if not secured appropriately.
Required Environment Variables:
- AZURE_CLIENT_ID: ID of a Microsoft Entra application.
- AZURE_TENANT_ID: ID of the application's Microsoft Entra tenant.
- AZURE_CLIENT_CERTIFICATE_PATH: Path to the PEM or PFX certificate file.
- AZURE_CLIENT_CERTIFICATE_PASSWORD: Password for the certificate file (if any).
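The certificate-based flow is configured the same way; only the variables differ (paths and IDs below are placeholders):

```shell
# Placeholder values for the client-certificate authentication flow.
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"
export AZURE_CLIENT_CERTIFICATE_PATH="/path/to/azure-openai-cert.pem"
export AZURE_CLIENT_CERTIFICATE_PASSWORD=""   # empty if the file is not password-protected
```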
The connector will automatically detect and use the correct authentication flow based on which environment variables you have configured.
Anthropic Claude:
If Anthropic Claude is chosen as the connector type, the following settings are available:
- Endpoint
- The web address where the connector service is located. By default, it is https://api.anthropic.com/.
- API key
- The Anthropic Claude API key necessary to work with the connector.
- Model
- The Anthropic Claude model to use. By default, it is claude-3-opus-20240229.
- Enable streaming
- This option controls whether streaming is enabled. When enabled (default), AI-generated answers are delivered in real time as a continuous flow. If disabled, the complete answer is delivered all at once after the processing is finished.
- Extra Headers
- Extra name/value parameters to set in the headers that are specific for the AI requests.
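Putting these settings together, a request through the Anthropic connector can be sketched as below. The endpoint path and header names follow Anthropic's public Messages API; the payload and key are placeholder assumptions.

```shell
# Sketch of an Anthropic Messages API request built from the settings above.
ENDPOINT="https://api.anthropic.com/v1/messages"
MODEL="claude-3-opus-20240229"   # the default Model setting
BODY=$(printf '{"model": "%s", "max_tokens": 256, "messages": [{"role": "user", "content": "Hello"}]}' "$MODEL")
echo "POST $ENDPOINT"
echo "$BODY"
# A real call would add headers such as:
#   -H "x-api-key: $ANTHROPIC_API_KEY" -H "anthropic-version: 2023-06-01"
```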
Google Gemini:
If Google Gemini is chosen as the connector type, the following settings are available:
- Address
- The web address where the connector service is located. By default, it is https://generativelanguage.googleapis.com/.
- API Key
- The API key that can be generated in Google AI Studio: https://aistudio.google.com/app/apikey.
- Model
- The Gemini model to use (the available models can be found at https://ai.google.dev/gemini-api/docs/models).
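For orientation, a Gemini request built from these settings can be sketched as follows. The v1beta path segment follows Google's Gemini API docs; the model name and key are placeholder assumptions.

```shell
# Sketch of a Gemini generateContent request URL built from the settings above.
BASE="https://generativelanguage.googleapis.com"
MODEL="gemini-1.5-flash"   # placeholder; see the models list in Google's docs
URL="$BASE/v1beta/models/$MODEL:generateContent"
BODY='{"contents": [{"parts": [{"text": "Hello"}]}]}'
echo "POST $URL"
# A real call would pass the key, e.g.: -H "x-goog-api-key: $GEMINI_API_KEY"
```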
Functions and RAG Preferences

- Enable functions
- Enables the use of functions for retrieval-augmented generation and for writing
content in the project.
- Enable project-based RAG
- Enables retrieval-augmented generation based on similar content obtained from the currently open project. Actions and chat interactions that generate content give more precise and meaningful responses when this setting is enabled. It is enabled by default. The functions available for RAG are listed in the text box.
- Enable external RAG sources
- Enables the use of external retrieval-augmented generation sources.
- Oxygen Feedback site token
When the Oxygen Feedback product is used to provide search functionality for a website generated from the DITA XML project, its search system can be used to retrieve related content. In the Oxygen Feedback administrative interface, find the installation instructions for the site version that you want to use (click the Installation button on your version's tile in the Site Version page). The installation information contains a unique deploymentToken value that can be copied and pasted into the Oxygen Feedback site token field.
- Oxygen Feedback site description
- A description for this external content retrieval site that is passed to the AI engine to help it decide whether or not the external source is used.
- Enable writing content in project
- Allows functions that can be used by the AI to write content in the current project. The available functions used for this purpose are listed in the text box.
- Limit read/write access to
- Allows you to restrict the locations where functions can read and write content. You can specify these locations as a comma-separated list of resource paths. By default, read and write access is restricted to the project directory and the directory of the current root map.
XPath Functions Preferences
- Enable XPath Functions
- Enables the use of AI-specific XPath functions in Oxygen when applying
Schematron validation or XSLT transformations.
- Cache responses and reuse them for identical prompts
- If enabled (default), responses for identical requests are stored (cached), resulting in fewer requests being sent to the AI server and faster completion times. A Clear cache button located to the right of this option can be used to clear the cache.
- Cache size
- Specifies a maximum limit for the cache size.
- Notify me when the number of requests exceeds
- Select this option and specify a number of AI requests; when that number is exceeded, a confirmation dialog box is displayed asking if you want to continue using the XPath AI functions. If you answer "No", the XPath functions are disabled.