Application Settings
Overview
Application Settings provide comprehensive configuration options for customizing the system's behavior, model selection, and feature availability. This document details each setting category, configuration options, and their impacts on the system's functionality.
Table of Contents
- Application Settings
- Overview
- Table of Contents
- Accessing Application Settings
- Model Configuration
- Search on Web Configuration
- Ask AI Configuration
- Guardrails Configuration
- File Upload Limit
- Chat Accuracy Settings
- Advanced Settings
- Application Labels
- Brand Voice
- Persona Configuration
- Web Scraping Limits
- Saving Changes
- Best Practices
- Troubleshooting
- Security Considerations
Accessing Application Settings
- Navigate to the main navigation bar
- Click on Configuration
- Select Application Settings from the three available options
- You will be redirected to the Application Settings page
Model Configuration
The Model Configuration settings section allows you to define which AI models are used throughout the system. Choose and configure your foundation chat, embedding, and fallback models to ensure reliable responses and optimal performance for your specific needs.
Default Foundation Model
- Select from the dropdown of available foundation models
- This model will be used as the default for all chat operations, unless overridden by specific feature settings
- The selected model impacts response quality, speed, and cost
Fallback LLM Model
- Select a secondary model that will be used automatically if the default foundation model encounters issues
- Recommended to choose a reliable model with different infrastructure than your primary model
- Ensures system continuity during model outages, rate limiting, or model deprecation
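Conceptually, the fallback behaves like a try-then-retry flow around the primary model. The sketch below is a minimal illustration only; the model identifiers and the `call_model` helper are hypothetical and do not reflect the product's internal implementation.

```python
# Minimal sketch of fallback behaviour. PRIMARY_MODEL, FALLBACK_MODEL and
# call_model() are hypothetical; the real system handles this internally.
PRIMARY_MODEL = "model-a"    # hypothetical default foundation model
FALLBACK_MODEL = "model-b"   # hypothetical fallback on different infrastructure


def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real model invocation; here it simulates a primary outage."""
    if model == PRIMARY_MODEL:
        raise RuntimeError(f"{model} is unavailable")
    return f"[{model}] response to: {prompt}"


def answer(prompt: str) -> str:
    try:
        return call_model(PRIMARY_MODEL, prompt)
    except RuntimeError:
        # Primary model failed (outage, rate limit, deprecation):
        # retry transparently with the configured fallback model.
        return call_model(FALLBACK_MODEL, prompt)
```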
Embedding Model
- Important: The embedding model can only be changed by performing a reset, because existing vector embeddings are tied to the model that produced them
- Once selected, this model will be used for all vector embeddings in the system
- To change the embedding model:
- Click the Reset Embedding Model button
- Confirm the warning that all existing embedding data will be removed
- Select a new embedding model from the dropdown
- Save the configuration
Multimodal Model
- When enabled, allows selection of multiple embedding models
- Useful for comparing performance or specific use cases
- After making selections, click Save to apply changes
Search on Web Configuration
Configure settings for the Search on Web feature, which manages the precision of retrieved web content.
Search on Web
Purpose: Controls web search capabilities
- Add default extensions for web searches
- Configure which external sources are accessible
- Set authentication requirements for external sources
Default Foundation Model for Search on Web
Purpose: Selects the AI model specifically for Search on Web operations
- Can be different from the system default model
- Select based on Search on Web requirements
Groups for Access
Purpose: Controls which user groups can access Search on Web features
- Select from available user groups
- Manage access permissions at a group level
- Restrict sensitive content management features to appropriate teams
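Group-based access amounts to a membership check against the groups selected here. The snippet below is a sketch under assumed group names (`content_team`, `administrators`); the actual groups come from your own user directory.

```python
# Hypothetical group-based access check for Search on Web; group names are examples.
SEARCH_ON_WEB_GROUPS = {"content_team", "administrators"}  # groups granted access


def can_use_search_on_web(user_groups: set[str]) -> bool:
    """A user needs membership in at least one permitted group."""
    return bool(user_groups & SEARCH_ON_WEB_GROUPS)
```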
Ask AI Configuration
Configure the Ask AI component, which provides direct question-answering capabilities.
Default Foundation Model for Ask AI
Purpose: Selects the AI model specifically for Ask AI operations
- Can be different from the system default model
- Choose models optimized for Q&A performance
Groups for Ask AI Access
Purpose: Controls which user groups can access Ask AI features
- Select from available user groups
- Manage access permissions at a group level
- Limit access based on usage requirements or budget constraints
Guardrails Configuration
Guardrails set boundaries for AI interactions to ensure safe and appropriate responses.
Enable Guardrails
Purpose: Activates content and behavior guardrails for AI responses
- Toggle to enable or disable guardrails system-wide
- When enabled, additional configuration options appear
Foundation Models for Guardrails
Purpose: Selects which model evaluates content against guardrails
- Select the foundation model specifically used for guardrail enforcement
- This model will evaluate content against policy violations
- Recommended to use a model with strong safety capabilities
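The flow can be pictured as a pre-delivery check: the guardrail model evaluates a draft response and blocks it when a policy is violated. The sketch below is illustrative; the `classify` stub, policy names, and refusal message are assumptions, not the product's actual guardrail logic.

```python
# Illustrative guardrail flow; classify() stands in for the guardrail
# foundation model and the policy list is an example.
POLICIES = ["violence", "personal data", "self harm"]


def classify(guardrail_model: str, text: str) -> list[str]:
    """Stand-in for the guardrail model; returns the policies the text violates."""
    return [p for p in POLICIES if p in text.lower()]


def guarded_response(draft: str, guardrail_model: str = "safety-model") -> str:
    violations = classify(guardrail_model, draft)
    if violations:
        # Withhold (or rewrite) the draft instead of returning it to the user.
        return f"Response withheld (policy: {', '.join(violations)})."
    return draft
```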
File Upload Limit
Manage file size and quantity restrictions for knowledge management.
File Upload Limit Configuration
Purpose: Controls maximum file sizes and counts for uploads
- Set maximum file size in MB
- Configure maximum number of files per upload
- Set maximum total upload size
- These limits apply to all users system-wide
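The checks behind these limits amount to three comparisons per upload. The sketch below assumes example limit values and a plain list of file sizes; it is not the product's validation code.

```python
# Hypothetical enforcement of the three upload limits; the values are examples.
MAX_FILE_SIZE_MB = 25      # example per-file limit
MAX_FILES_PER_UPLOAD = 10  # example file-count limit
MAX_TOTAL_SIZE_MB = 100    # example total-size limit


def validate_upload(file_sizes_mb: list[float]) -> list[str]:
    """Return the reasons an upload would be rejected (empty list = accepted)."""
    errors = []
    if len(file_sizes_mb) > MAX_FILES_PER_UPLOAD:
        errors.append("too many files in one upload")
    if any(size > MAX_FILE_SIZE_MB for size in file_sizes_mb):
        errors.append("a file exceeds the per-file size limit")
    if sum(file_sizes_mb) > MAX_TOTAL_SIZE_MB:
        errors.append("total upload size exceeds the limit")
    return errors
```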
Chat Accuracy Settings
Fine-tune settings that impact the accuracy and context of chat interactions.
Chat History Number
Purpose: Determines how many past messages are included in context
- Set the number of past messages to include in the chat context
- Higher values provide more context but consume more tokens
- Lower values reduce context but may increase performance
Number of Chunks
Purpose: Controls how many document chunks are used in chat context
- Set the maximum number of chunks to include
- Higher values improve comprehensiveness but increase token usage
- Lower values focus on most relevant content but may miss details
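Together, the two settings cap how much material reaches the model on each turn. The sketch below shows the idea with assumed values and function names; the real context assembly is internal to the system.

```python
# Sketch of how the two accuracy settings bound the prompt; values are examples.
CHAT_HISTORY_NUMBER = 6  # keep only the 6 most recent messages
NUMBER_OF_CHUNKS = 4     # include at most 4 retrieved document chunks


def build_context(history: list[str], ranked_chunks: list[str]) -> str:
    """Trim history and chunks to the configured limits before calling the model."""
    recent_history = history[-CHAT_HISTORY_NUMBER:]  # more history = more tokens
    top_chunks = ranked_chunks[:NUMBER_OF_CHUNKS]    # assumes chunks sorted by relevance
    return "\n".join(recent_history + top_chunks)
```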
Advanced Settings
The Advanced Settings section contains numerous specialized configuration options that enable additional features.
Conversation Starter
Purpose: Provides predefined questions to help users begin interactions
- Enable/disable this feature
- When enabled: Add multiple predefined questions in the configuration panel
- Questions will appear in the chat interface
- Users can click on these to start a conversation without typing
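A set of starters is simply a list of questions the interface can render as clickable suggestions. The example below is hypothetical; the questions and the button structure are placeholders.

```python
# Example conversation starters; the questions and field names are placeholders.
CONVERSATION_STARTERS = [
    "What can you help me with?",
    "Summarize the latest product documentation.",
    "How do I upload a document and ask questions about it?",
]


def starter_buttons() -> list[dict]:
    """Shape the starters as clickable suggestions that prefill the chat input."""
    return [{"label": question, "prefill": question} for question in CONVERSATION_STARTERS]
```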
Text to Speech Output
Purpose: Converts AI responses to spoken audio
- Enable/disable text-to-speech capability
- When enabled, a speech output option becomes available in the chat interface
- Supports accessibility requirements and alternative interaction modes
Speech to Text Input
Purpose: Allows users to speak their queries instead of typing
- Enable/disable speech recognition
- When enabled, a microphone button appears in the chat interface
- Users can click this button to speak their queries
Attached Documents
Purpose: Enables document upload and querying in chat
- Enable/disable document attachment functionality
- When enabled, additional configuration options appear:
- Select LLM Foundation Model: Choose which model generates responses from documents
- Embedding Model Selection: Select which model converts documents to vector space
- This feature allows users to upload documents directly in the chat and ask questions about them
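Behind the scenes this is a small retrieval loop: the upload is chunked, embedded with the selected embedding model, and the most relevant chunks are handed to the selected LLM. The sketch below uses a toy embedding and hypothetical helper names to show the shape of that loop, not the product's actual pipeline.

```python
# Illustrative attached-document flow; embed() is a toy stand-in for the
# selected embedding model and the prompt wording is an assumption.
def embed(text: str) -> list[float]:
    """Toy embedding: normalised counts of a few common letters."""
    return [text.lower().count(c) / max(len(text), 1) for c in "etaoins"]


def similarity(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def prompt_for_attached_document(document: str, question: str, top_k: int = 2) -> str:
    """Chunk the upload, rank chunks against the question, and build the prompt
    that the selected LLM foundation model would receive."""
    chunks = [document[i:i + 200] for i in range(0, len(document), 200)]
    query_vector = embed(question)
    ranked = sorted(chunks, key=lambda c: similarity(embed(c), query_vector), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```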
Source Citation
Purpose: Shows which sources informed specific parts of responses
- Enable/disable citation functionality
- When enabled, responses will include citations linking response sections to source documents
- Improves transparency and verifiability of information
Feedback Mechanism
Purpose: Collects user feedback on response quality
- Enable/disable feedback collection
- When enabled, like/dislike options appear with responses
- Collected feedback can be used for system improvement
Browse Source File Link Option
Purpose: Provides direct links to source documents
- Enable/disable source browsing
- When enabled, responses include links to the source files used
- Users can click these links to view the original documents
Reasoning Engine
Purpose: Shows the AI's step-by-step reasoning process
- Enable/disable reasoning visibility
- When enabled, users can see how the AI approached answering their query
- Shows the thought process and logical steps taken to reach conclusions
Application Labels
Customize the text labels used throughout the application interface.
Chat Reasoning/Thinking Text
Purpose: Customizes labels for reasoning processes in chat
- Modify the text that appears when showing reasoning steps
- Customize to match organizational terminology
Search Reasoning/Thinking Text
Purpose: Customizes labels for search reasoning processes
- Modify the text that appears when showing search reasoning
- Tailor to specific search contexts or requirements
Search Creating Text
Purpose: Customizes labels for search creation processes
- Modify the text shown during search creation
- Customize to align with organizational search workflows
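Conceptually, the three settings are string overrides keyed by where they appear in the interface. The mapping below is hypothetical; the keys and default texts are illustrative, not the product's identifiers.

```python
# Hypothetical label overrides for the three configurable texts.
APPLICATION_LABELS = {
    "chat_reasoning_text": "Thinking through your question...",
    "search_reasoning_text": "Working out the best search strategy...",
    "search_creating_text": "Building your search...",
}
```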
Brand Voice
Configure the system to respond in a way that aligns with your organization's communication style.
Enable Brand Voice
Purpose: Activates brand voice customization
- Toggle to enable/disable brand voice features
- When enabled, responses will conform to the defined brand voice
- Additional configuration options become available
Brand Voice Configuration
Purpose: Defines the characteristics of your brand voice
- Once enabled, users will have the option to generate responses using the brand voice
- The system will generate responses matching the configured tone and style
- Brand voice affects all responses from the AI system
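One common way to apply a brand voice is to prepend it as a system instruction on every request. The sketch below assumes that approach and an example voice description; it is not a statement of how the product implements the feature.

```python
# Sketch of brand voice applied as a system instruction; the voice text and
# message structure are assumptions.
BRAND_VOICE = (
    "Write in a warm, concise, jargon-free tone. "
    "Address the reader directly and keep sentences short."
)


def build_messages(user_query: str, brand_voice_enabled: bool) -> list[dict]:
    """Prepend the brand voice as a system instruction when the toggle is on."""
    messages = []
    if brand_voice_enabled:
        messages.append({"role": "system", "content": BRAND_VOICE})
    messages.append({"role": "user", "content": user_query})
    return messages
```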
Persona Configuration
Create and manage different AI personas that users can select for interactions.
Enable Persona for Chatbot
Purpose: Allows users to select different AI personas
- Toggle to enable/disable persona selection
- When enabled, a persona selector appears in the top left of the chat interface
- Users can choose different personas for different contexts
Persona Management
Purpose: Create and configure available personas
- Create multiple personas with distinct characteristics
- Each persona can have different:
- Tone of voice
- Knowledge specialization
- Response style
- The system will generate responses according to the selected persona's characteristics
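A persona can be thought of as a named bundle of tone, specialization, and style that is turned into an instruction for the model. The definitions below are invented examples to show the idea; the product's persona format may differ.

```python
# Hypothetical persona definitions; names, fields, and wording are examples.
PERSONAS = {
    "support_agent": {
        "tone": "empathetic and patient",
        "specialization": "troubleshooting and how-to guidance",
        "style": "step-by-step instructions",
    },
    "analyst": {
        "tone": "neutral and precise",
        "specialization": "data interpretation and reporting",
        "style": "structured summaries with key figures first",
    },
}


def persona_instruction(selected_persona: str) -> str:
    """Turn the selected persona's traits into a system instruction."""
    persona = PERSONAS[selected_persona]
    return (
        f"Adopt a {persona['tone']} tone, specialize in {persona['specialization']}, "
        f"and answer using {persona['style']}."
    )
```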
Web Scraping Limits
Control how and when the system scrapes web content.
Configure Web Scraping Limit
Purpose: Sets boundaries for web content scraping
- Set limits on how many pages can be scraped per query
- Configure depth of crawling for linked pages
- Set frequency limits to prevent excessive scraping
- These limits help maintain responsible use of external web resources
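The three limits correspond to a page budget, a link-following depth, and a minimum delay between fetches. The crawler below is a sketch with assumed limit values and a stubbed `fetch`; it only illustrates where each limit would apply.

```python
# Sketch of crawl limits; values and fetch() are placeholders for illustration.
import time

MAX_PAGES_PER_QUERY = 10           # example page limit per query
MAX_CRAWL_DEPTH = 2                # example depth for following linked pages
MIN_SECONDS_BETWEEN_FETCHES = 1.0  # example frequency limit


def fetch(url: str) -> list[str]:
    """Stand-in for a real page fetch; returns the links found on the page."""
    return []


def crawl(start_url: str) -> list[str]:
    visited: list[str] = []
    frontier = [(start_url, 0)]
    last_fetch = 0.0
    while frontier and len(visited) < MAX_PAGES_PER_QUERY:   # page budget
        url, depth = frontier.pop(0)
        if url in visited or depth > MAX_CRAWL_DEPTH:        # depth limit
            continue
        wait = MIN_SECONDS_BETWEEN_FETCHES - (time.monotonic() - last_fetch)
        if wait > 0:
            time.sleep(wait)                                 # frequency limit
        last_fetch = time.monotonic()
        visited.append(url)
        frontier.extend((link, depth + 1) for link in fetch(url))
    return visited
```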
Saving Changes
After making any changes to Application Settings:
- Review all modified settings to ensure they match your requirements
- Click the Save button at the bottom of the relevant section
- A confirmation message will appear when settings are successfully saved
- Some settings may require a system restart to take effect
Best Practices
For optimal configuration of Application Settings:
- Performance Balance: Higher accuracy settings generally consume more resources and tokens
- Model Selection: Match models to use cases (e.g., use more capable models for complex reasoning)
- Feature Enabling: Only enable features that are actively needed to reduce interface complexity
- Regular Review: Periodically review settings as new models and capabilities become available
- Testing: Test settings changes in a controlled environment before applying system-wide
Troubleshooting
| Issue | Possible Solution |
| --- | --- |
| Changes not saving | Ensure you click the Save button after making changes |
| Model not available | Check API key configuration and model access permissions |
| Slow response times | Reduce the chat history or number of chunks settings |
| Embedding reset failure | Ensure no active embedding processes are running before the reset |
| Feature not appearing | Verify both the feature toggle and group permissions are configured |
Security Considerations
When configuring Application Settings, consider these security best practices:
- Limit access to configuration pages to administrative users only
- Apply guardrails when dealing with sensitive information
- Regularly audit which groups have access to advanced features
- Document all configuration changes for compliance and tracking