# Cherry Studio Integration Guide
Cherry Studio is a cross-platform AI desktop client that supports multi-model conversations, knowledge bases, workflows, and more. This guide explains how to integrate Cherry Studio with OPEAI Platform.
## Installing Cherry Studio
Visit the Cherry Studio official website to download the client for your platform.
Supported platforms:
- macOS (Apple Silicon / Intel)
- Windows
- Linux
## Configuring OPEAI Platform

### 1. Open Settings
After launching Cherry Studio, navigate to Settings → Model Services.
### 2. Add Provider
Click the Add button to create a new API Provider.
### 3. Enter Configuration Information

#### Configuration Parameters
| Configuration Item | Value |
|---|---|
| Name | OPEAI |
| API Type | OpenAI |
| API Address | https://api-platform.ope.ai/v1 |
| API Key | Your OPEAI API Key |
#### Add Models

Manually add the following model IDs (one per line):

```
Claude-4.6-Sonnet
Claude-4.6-Opus
Claude-4.5-Haiku
GPT-5.4-Pro
GPT-5.4
GPT-5.3-Codex
DeepSeek-V3.2
DeepSeek-V3.2-thinking
Gemini-3-Pro
Gemini-3-Flash
Qwen3-235B-A22B
Kimi-K2-Instruct-0905
```
OPEAI Platform uses the OpenAI-compatible protocol, so all of the models listed above are available through this single provider configuration.
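Because the endpoint is OpenAI-compatible, any OpenAI-style client can talk to it directly. The sketch below builds a standard Chat Completions request using only the Python standard library; the model name and prompt are placeholders, and the URL path and `Authorization` header follow the usual OpenAI convention:

```python
import json
import urllib.request

API_BASE = "https://api-platform.ope.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against the OPEAI endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # same bearer-token scheme as the OpenAI API
        },
        method="POST",
    )

# req = build_chat_request("sk-...", "Claude-4.6-Sonnet", "Hello!")
# with urllib.request.urlopen(req) as resp:  # requires network access and a valid key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

This is the same request shape Cherry Studio sends on your behalf once the provider is configured.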
### 4. Save Configuration
After entering the information, click Save. Cherry Studio will automatically verify the connection.
## Recommended Models
| Use Case | Recommended Model | Features |
|---|---|---|
| Daily Coding & Writing | Claude-4.6-Sonnet | Strong overall capability, fast response, cost-effective |
| Complex Reasoning | Claude-4.6-Opus | Top-tier reasoning, ideal for architecture design |
| Quick Q&A | Claude-4.5-Haiku | Ultra-fast response, low cost |
| Ultra-long Context | GPT-5.4-Pro | 1M context, suitable for analyzing long documents |
| Code Generation | GPT-5.3-Codex | Optimized for programming scenarios |
| Cost Optimization | DeepSeek-V3.2 | Extremely low cost, preferred for high-frequency calls |
| Deep Thinking | DeepSeek-V3.2-thinking | Supports chain-of-thought reasoning |
For the complete model list and pricing, please check Model Pricing.
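For scripts that route requests by task, the table above can be encoded as a simple lookup. This is only an illustrative mapping (the use-case keys are made up here; the model IDs come from the table):

```python
# Illustrative mapping from use case to the model recommended above.
RECOMMENDED_MODELS = {
    "daily": "Claude-4.6-Sonnet",          # coding & writing all-rounder
    "reasoning": "Claude-4.6-Opus",        # complex reasoning / architecture design
    "quick": "Claude-4.5-Haiku",           # fast, low-cost Q&A
    "long_context": "GPT-5.4-Pro",         # 1M-token documents
    "code": "GPT-5.3-Codex",               # code generation
    "cheap": "DeepSeek-V3.2",              # high-frequency, cost-sensitive calls
    "thinking": "DeepSeek-V3.2-thinking",  # chain-of-thought reasoning
}

def pick_model(use_case: str) -> str:
    """Return the recommended model ID, falling back to the all-rounder."""
    return RECOMMENDED_MODELS.get(use_case, "Claude-4.6-Sonnet")
```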
## Usage Tips

### Multi-Model Comparison
Cherry Studio allows multiple models to answer the same question simultaneously, perfect for comparing output quality across different models:
- Select multiple models in the conversation interface (hold `Cmd`/`Ctrl` to multi-select)
- Send your question
- View and compare each model's response
Recommended comparison combinations:
- Speed vs Quality: `Claude-4.5-Haiku` vs `Claude-4.6-Opus`
- Different Vendors: `Claude-4.6-Sonnet` vs `GPT-5.4` vs `Gemini-3-Pro`
- Cost Comparison: `DeepSeek-V3.2` vs `Claude-4.6-Sonnet`
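The same comparison can also be scripted outside the client: send one question to several models and collect the answers side by side. A minimal sketch, where `ask` stands in for any callable that hits the OpenAI-compatible `/chat/completions` endpoint (stubbed here so the example runs offline):

```python
def compare_models(question, models, ask):
    """Send one question to each model and map model IDs to their answers.

    `ask(model, question)` is any callable returning the model's reply,
    e.g. a thin wrapper around the chat completions endpoint.
    """
    return {model: ask(model, question) for model in models}

# Stubbed ask() so the sketch runs without network access.
stub = lambda model, q: f"[{model}] answer to: {q}"
results = compare_models(
    "Explain TCP slow start.",
    ["DeepSeek-V3.2", "Claude-4.6-Sonnet"],
    stub,
)
```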
### Knowledge Base Feature (RAG)
Combined with Cherry Studio's knowledge base feature, you can implement Q&A based on private documents:
- Create a knowledge base and upload documents (PDF, Word, Markdown, etc.)
- Enable the knowledge base in conversation
- The model will answer questions based on your documents
Cherry Studio's knowledge base feature requires the Embeddings API. OPEAI Platform supports the vector model `bge-m3`.
When configuring the Provider, ensure the API address is https://api-platform.ope.ai/v1. Cherry Studio will automatically call the embeddings interface.
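Under the hood, the knowledge-base flow issues standard OpenAI-style embeddings requests. The sketch below shows the payload shape Cherry Studio sends on your behalf; the model name `bge-m3` is from the note above, and the request format follows the OpenAI Embeddings convention:

```python
import json

API_BASE = "https://api-platform.ope.ai/v1"

def build_embeddings_payload(chunks):
    """Build the JSON body for an OpenAI-style /embeddings call using bge-m3."""
    return {
        "model": "bge-m3",      # vector model supported by OPEAI Platform
        "input": list(chunks),  # document chunks to vectorize
    }

payload = build_embeddings_payload(["First paragraph.", "Second paragraph."])
body = json.dumps(payload)
url = f"{API_BASE}/embeddings"  # endpoint Cherry Studio calls automatically
```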
### Workflow Mode
Cherry Studio's workflow feature allows you to create multi-step AI task flows:
- Create a new workflow in the Workflows tab
- Drag and drop to add nodes (LLM calls, conditional logic, data processing, etc.)
- Configure which model each node uses
- Save and run the workflow
Application scenarios:
- Code review → Bug fix → Unit test generation
- Requirements analysis → Architecture design → Code implementation
- Long text translation → Polishing → Formatting
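Conceptually, each of these scenarios is a chain of LLM calls in which one step's output becomes the next step's input. A minimal sketch with a stubbed model call (the pipeline steps and model assignments below are hypothetical, loosely mirroring the "code review → bug fix → unit test" scenario):

```python
def run_workflow(text, steps, ask):
    """Run text through a sequence of (instruction, model) steps.

    `ask(model, prompt)` stands in for a call to the chat endpoint; each
    step's output feeds the next step, as in the scenarios above.
    """
    for instruction, model in steps:
        text = ask(model, f"{instruction}\n\n{text}")
    return text

# Hypothetical pipeline: review with a strong reasoner, then fix and test
# with a code-optimized model.
steps = [
    ("Review this code and list issues:", "Claude-4.6-Opus"),
    ("Fix the issues listed above:", "GPT-5.3-Codex"),
    ("Write unit tests for the fixed code:", "GPT-5.3-Codex"),
]
stub = lambda model, prompt: f"<{model} output>"  # offline stand-in for the API
final = run_workflow("def add(a, b): return a - b", steps, stub)
```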
## Troubleshooting

### Model List Does Not Auto-Load
Cherry Studio does not fetch the model list automatically; model IDs must be added by hand. Refer to the Add Models section above and copy the complete model ID list.
### Image Analysis Not Working

Confirm that your selected model supports vision capabilities. Recommended models:
- `Gemini-3-Pro`
- `Gemini-3-Flash`
- `Qwen2.5-VL-32B-Instruct`
Ensure the API address in Provider configuration is correct: https://api-platform.ope.ai/v1
### Authentication Failed

- Confirm the API Key format is correct (it starts with `sk-`)
- Visit the OPEAI Platform Console to confirm the key is valid
- Check that the API address is entered correctly
### Knowledge Base Vectorization Failed

- Confirm you have added a Provider under Settings → Model Services
- Make sure the API address is https://api-platform.ope.ai/v1
- The embeddings model is called automatically; no manual configuration is needed
### Slow Response Speed

- Try switching to a faster model (e.g., `Claude-4.5-Haiku`, `Gemini-3-Flash`)
- Check your network connection and confirm you can reach https://api-platform.ope.ai
- Avoid using ultra-large context during peak hours