Configuration
Introduction
This document provides a comprehensive guide to the configuration options available in the system. It outlines how to access and modify application settings, customize prompts, and manage usage limits to tailor the system to your specific requirements.
Table of Contents
- Introduction
- Table of Contents
- Accessing Configuration
- Application Settings
- Prompt Personalization
- Usage Limits
- Next Steps
Accessing Configuration
To access the configuration options:
- Click on Configuration in the navigation bar
- Choose from the three available options:
- Application Settings
- Prompt Personalization
- Usage Limit
Application Settings
Model Configuration
Configure the foundation models used by the system:
- Default Foundation Model: Select the primary LLM to be used
- Fallback LLM Model: Secondary model used if the primary fails
- Embedding Model: Select the embedding model (can only be changed by resetting)
- To change the embedding model, click the Reset Embedding Model button
- Note: This will remove all existing embedding data
- Multi-model Configuration: Enable multiple models for embedding indexing
After making changes, click the Save button to apply your configurations.
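As a rough illustration only, the settings above could be captured in a structure like the one below; the field names and model identifiers are placeholders for this sketch, not the system's actual configuration schema.

```python
# Illustrative sketch only: field names and model identifiers are placeholders,
# not the system's actual configuration schema.
model_settings = {
    "default_foundation_model": "primary-llm",     # primary LLM for responses
    "fallback_llm_model": "secondary-llm",         # used if the primary model fails
    "embedding_model": "default-embedding-model",  # can only be changed via reset
    "multi_model_embedding": True,                 # enable multiple models for indexing
}

def reset_embedding_model(settings: dict, new_model: str) -> dict:
    """Mirror the documented reset behaviour: switching the embedding model
    removes all existing embedding data."""
    print("Warning: resetting removes all existing embedding data.")
    return {**settings, "embedding_model": new_model}

model_settings = reset_embedding_model(model_settings, "new-embedding-model")
```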
Search on Web Settings
- Search on Web: Add default extensions
- Default Foundation Model: Select model for search on web
- Groups for Access: Set access permissions
Ask AI Configuration
Configure the Ask AI feature:
- Default Foundation Model: Select model for Ask AI
- Groups: Configure access permissions
Guardrails
Set boundaries for the AI assistant:
- Enable/disable guardrails
- Select foundation models used specifically for guardrails
File Upload Limits
Expand this section to set file upload limits for knowledge management across all users.
Chat Accuracy Settings
Adjust settings that affect chat accuracy:
- Chat History Number: Configure how much history is retained
- Number of Chunks: Set how many chunks are used in chat
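As a hedged sketch (not the product's internal code), these two settings typically bound how much context reaches the model, along the following lines; the function and variable names are illustrative assumptions.

```python
# Illustrative sketch of how the two chat accuracy settings could bound the
# context sent to the model; names and logic are assumptions, not product code.
CHAT_HISTORY_NUMBER = 5  # how many previous messages are retained
NUMBER_OF_CHUNKS = 4     # how many retrieved chunks are used in chat

def build_context(history: list[str], retrieved_chunks: list[str]) -> str:
    """Keep only the most recent messages and the top-ranked chunks."""
    recent_history = history[-CHAT_HISTORY_NUMBER:]
    top_chunks = retrieved_chunks[:NUMBER_OF_CHUNKS]
    return "\n".join(recent_history + top_chunks)

print(build_context(["q1", "a1", "q2", "a2", "q3", "a3"], ["chunk-1", "chunk-2"]))
```

In general, larger values improve continuity and grounding at the cost of more tokens per request.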
Advanced Settings
Conversation Starter
- Enable to add multiple predefined questions that appear in the chat interface
- Users can click these to start a conversation
Text to Speech
- Enable text-to-speech output for responses
Speech to Text
- Enable speech-to-text for asking questions verbally
Attached Documents
When enabled:
- Adds an option in chat to upload documents
- Allows chatting with document content
- Requires selecting:
- LLM foundation model to generate responses
- Embedding model to store data in vector space
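For orientation, the two required selections might be recorded together as in this minimal sketch; the names are placeholders, not the system's actual settings keys.

```python
# Minimal sketch with placeholder names: the attached-documents option needs
# both an LLM (to generate responses) and an embedding model (to store the
# uploaded document in vector space).
attached_documents = {
    "enabled": True,
    "llm_foundation_model": "primary-llm",
    "embedding_model": "default-embedding-model",
}
```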
Source Citation
- When enabled, citations appear in the chat with responses
- Shows which parts of the response came from which document
Feedback Mechanism
- Enable to add feedback options (like/dislike) for responses
Browse Source
- When enabled, displays the source files used in generating responses
Reasoning Engine
- Enable to see the step-by-step thinking process behind query responses
Application Labels
Customize labels for:
- Chat reasoning/thinking text
- Search reasoning/thinking text
- Search creating text
Brand Voice
- Enable to allow users to generate responses according to your organization's brand voice
- Helps control the tone and style of AI-generated responses
Persona
- When enabled, adds an option in the top left of chat to select a persona
- Chat will respond according to the selected persona
- Responses are generated based on user-created personas
Web Scraping Limits
Configure limits on web scraping operations performed by users.
Prompt Personalization
Accessing prompt personalization settings:
- Click on Prompt Personalization from the Configuration page
- Configure various prompt types as needed
Available Prompt Types
Each prompt type allows you to customize the system's behavior:
| Prompt Type | Description |
| --- | --- |
| Search Knowledge Prompt | Controls how responses are generated when searching knowledge |
| Search Web Prompt | Controls responses for web searches |
| RCA Prompt | For root cause analysis functions |
| Brand Voice Prompt | Used with "Generate with AI" to create branded responses |
| Task Prompt | Used for generating task-specific responses |
| Search Summary Prompt | Controls how search summaries are generated |
| Conversation Name Prompt | Determines how conversation names are generated based on queries |
| Follow-up Questions Prompt | Configures how follow-up questions are generated |
| Question Validation Prompt | Controls the validation process for questions |
| Answer Validation Prompt | Controls the validation process for answers |
| Query Rephrase Prompt | Used when rephrasing queries (triggered by Alt+L) |
| Organization Policy Prompt | Controls how organization policies are applied |
| Response Length Prompt | Determines how response length is managed |
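To make the idea concrete, a personalized Search Knowledge Prompt could look like the sketch below; the placeholder variables ({context}, {question}) are assumptions for illustration, not the system's actual template syntax.

```python
# Hypothetical example of a personalized Search Knowledge Prompt. The
# placeholders {context} and {question} are illustrative assumptions, not the
# system's actual template variables.
search_knowledge_prompt = (
    "You are a helpful assistant. Answer using only the provided context.\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "If the context does not contain the answer, say so explicitly."
)

print(search_knowledge_prompt.format(
    context="Refunds are processed within 5 business days.",
    question="How long do refunds take?",
))
```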
Moderation Panel
Configure content moderation settings:
- Enable/disable filters for threatening content
- Control filters for sexual content and content involving minors
- Manage filters for self-harm content
- Configure destructive content filters
When enabled, these filters will block corresponding content from responses.
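The snippet below is a simple sketch of how such per-category toggles can gate a response; the category names and logic are assumptions, not the system's internal moderation implementation.

```python
# Sketch only: category names and logic are assumptions, not the system's
# internal moderation implementation.
moderation_filters = {
    "threatening": True,
    "sexual": True,
    "minors": True,
    "self_harm": True,
    "destructive": True,
}

def is_blocked(detected_categories: set[str]) -> bool:
    """A response is blocked if any detected category has its filter enabled."""
    return any(moderation_filters.get(category, False) for category in detected_categories)

print(is_blocked({"self_harm"}))  # True: filter enabled, so the content is blocked
print(is_blocked(set()))          # False: no flagged categories detected
```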
Usage Limits
Adding Usage Limits
- Click on Usage Limit from the Configuration page
- In the "Add Usage Limit" popup, configure:
- Select Limit Level: Choose user, group, or application level
- Application: Select from ACE Companion, ACE, AI Studio, or other options
- Select Limit Type: Choose budget amount or number of requests
- Set Frequency: Configure when thresholds reset (daily, weekly, or monthly)
- Click the Add button to create the limit
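The fields collected in the popup map naturally onto a record like the one sketched below; the dataclass and field names are assumptions for illustration, not the product's schema.

```python
# Illustrative record for a usage limit; the dataclass and field names are
# assumptions, not the product's schema.
from dataclasses import dataclass

@dataclass
class UsageLimit:
    limit_level: str   # "user", "group", or "application"
    application: str   # e.g. "ACE Companion", "ACE", or "AI Studio"
    limit_type: str    # "budget" or "requests"
    threshold: float   # budget amount or number of requests
    frequency: str     # reset frequency: "daily", "weekly", or "monthly"

limit = UsageLimit(
    limit_level="group",
    application="ACE Companion",
    limit_type="requests",
    threshold=500,
    frequency="monthly",
)
print(limit)
```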
Managing Usage Limits
For existing usage limits, you can:
- View: See limit usage details including:
- Amount used
- Cost type consumed
- Number of requests made
- Use the Refresh button to update the limit data
- Edit: Modify settings including:
- Target application
- Budget type
- Number of requests
- Reset frequency
- Delete: Remove usage limits that are no longer needed
Next Steps
After configuring the system according to your organization's needs, you can:
- Train users on the available features
- Monitor usage to optimize settings
- Periodically review and adjust configurations as requirements change
For additional help or specific use cases, refer to the user manual or contact support.