Cherry Studio Integration Guide

Cherry Studio is a cross-platform AI desktop client that supports multi-model conversations, knowledge bases, workflows, and more. This guide explains how to integrate Cherry Studio with OPEAI Platform.

Installing Cherry Studio

Visit the Cherry Studio official website to download the client for your platform.

Supported platforms:

  • macOS (Apple Silicon / Intel)
  • Windows
  • Linux

Configuring OPEAI Platform

1. Open Settings

After launching Cherry Studio, navigate to Settings → Model Services.

2. Add Provider

Click the Add button to create a new API Provider.

3. Enter Configuration Information

Configuration Parameters

Configuration Item | Value
------------------ | -----
Name               | OPEAI
API Type           | OpenAI
API Address        | https://api-platform.ope.ai/v1
API Key            | Your OPEAI API Key

Add Models

Manually add the following model IDs (one per line):

Claude-4.6-Sonnet
Claude-4.6-Opus
Claude-4.5-Haiku
GPT-5.4-Pro
GPT-5.4
GPT-5.3-Codex
DeepSeek-V3.2
DeepSeek-V3.2-thinking
Gemini-3-Pro
Gemini-3-Flash
Qwen3-235B-A22B
Kimi-K2-Instruct-0905
Note: OPEAI Platform uses the OpenAI-compatible protocol, so all of the platform's models are available through this single provider entry.

4. Save Configuration

After entering the information, click Save. Cherry Studio will automatically verify the connection.
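If verification fails, it can help to reproduce the same check outside the client. A minimal sketch of the request an OpenAI-compatible client typically issues to verify a provider (a GET to the /models endpoint); the function name and exact header layout here are illustrative, and actually sending the request requires your real key:

```python
# Sketch: the kind of health-check request an OpenAI-compatible
# client sends when verifying a provider. Names are illustrative.
API_BASE = "https://api-platform.ope.ai/v1"
API_KEY = "sk-your-key-here"  # placeholder; substitute your real key

def build_verification_request(base: str, key: str) -> dict:
    """Return the URL and headers for a GET /models health check."""
    return {
        "url": f"{base.rstrip('/')}/models",
        "headers": {
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    }

req = build_verification_request(API_BASE, API_KEY)
print(req["url"])  # the endpoint the client should be hitting
```

If this URL does not match what you entered in the provider form (a common slip is a missing `/v1`), correct the API Address field.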


Recommended Models

Use Case               | Recommended Model        | Features
---------------------- | ------------------------ | --------
Daily Coding & Writing | Claude-4.6-Sonnet        | Strong overall capability, fast response, cost-effective
Complex Reasoning      | Claude-4.6-Opus          | Top-tier reasoning, ideal for architecture design
Quick Q&A              | Claude-4.5-Haiku         | Ultra-fast response, low cost
Ultra-long Context     | GPT-5.4-Pro              | 1M context, suitable for analyzing long documents
Code Generation        | GPT-5.3-Codex            | Optimized for programming scenarios
Cost Optimization      | DeepSeek-V3.2            | Extremely low cost, preferred for high-frequency calls
Deep Thinking          | DeepSeek-V3.2-thinking   | Supports chain-of-thought reasoning

For the complete model list and pricing, please check Model Pricing.
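If you script against the platform directly, the table above can be expressed as a small lookup so code picks a model by task. A hedged sketch (the use-case keys and default are our own choices, not part of the platform):

```python
# Sketch: map the use cases above to model IDs so an application
# can select a model by task. Keys and default are illustrative.
MODEL_BY_USE_CASE = {
    "daily": "Claude-4.6-Sonnet",
    "reasoning": "Claude-4.6-Opus",
    "quick_qa": "Claude-4.5-Haiku",
    "long_context": "GPT-5.4-Pro",
    "code": "GPT-5.3-Codex",
    "cheap": "DeepSeek-V3.2",
    "deep_thinking": "DeepSeek-V3.2-thinking",
}

def pick_model(use_case: str, default: str = "Claude-4.6-Sonnet") -> str:
    """Return the recommended model ID, falling back to a safe default."""
    return MODEL_BY_USE_CASE.get(use_case, default)

print(pick_model("code"))  # GPT-5.3-Codex
```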


Usage Tips

Multi-Model Comparison

Cherry Studio allows multiple models to answer the same question simultaneously, perfect for comparing output quality across different models:

  1. Select multiple models in the conversation interface (hold Cmd/Ctrl to multi-select)
  2. Send your question
  3. View and compare each model's response

Recommended comparison combinations:

  • Speed vs Quality: Claude-4.5-Haiku vs Claude-4.6-Opus
  • Different Vendors: Claude-4.6-Sonnet vs GPT-5.4 vs Gemini-3-Pro
  • Cost Comparison: DeepSeek-V3.2 vs Claude-4.6-Sonnet
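Because all models sit behind one OpenAI-compatible endpoint, a comparison like the ones above is just the same chat payload repeated with a different `model` field. A minimal sketch of that fan-out (the helper name is our own):

```python
# Sketch: build one /chat/completions payload per model, with only
# the "model" field differing -- the essence of multi-model comparison.
def comparison_requests(prompt: str, models: list[str]) -> list[dict]:
    """Return identical chat payloads that differ only in model ID."""
    return [
        {"model": m, "messages": [{"role": "user", "content": prompt}]}
        for m in models
    ]

reqs = comparison_requests(
    "Explain Python's GIL in one paragraph.",
    ["Claude-4.6-Sonnet", "GPT-5.4", "Gemini-3-Pro"],
)
```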

Knowledge Base Feature (RAG)

Combined with Cherry Studio's knowledge base feature, you can implement Q&A based on private documents:

  1. Create a knowledge base and upload documents (PDF, Word, Markdown, etc.)
  2. Enable the knowledge base in conversation
  3. The model will answer questions based on your documents
Note: Cherry Studio's knowledge base feature requires the Embeddings API. OPEAI Platform supports the bge-m3 embedding model.

When configuring the Provider, ensure the API address is https://api-platform.ope.ai/v1. Cherry Studio will automatically call the embeddings interface.
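For reference, the vectorization step boils down to a POST to the /embeddings endpoint with the document chunks as input. A hedged sketch of that payload (the helper name and chunking are illustrative; Cherry Studio handles this internally):

```python
# Sketch: the embeddings payload a RAG client POSTs when vectorizing
# documents against an OpenAI-compatible /embeddings endpoint.
EMBEDDINGS_URL = "https://api-platform.ope.ai/v1/embeddings"

def build_embeddings_request(chunks: list[str]) -> dict:
    """Payload for vectorizing a batch of document chunks with bge-m3."""
    return {"model": "bge-m3", "input": chunks}

payload = build_embeddings_request(["First chunk.", "Second chunk."])
```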

Workflow Mode

Cherry Studio's workflow feature allows you to create multi-step AI task flows:

  1. Create a new workflow in the Workflows tab
  2. Drag and drop to add nodes (LLM calls, conditional logic, data processing, etc.)
  3. Configure which model each node uses
  4. Save and run the workflow

Application scenarios:

  • Code review → Bug fix → Unit test generation
  • Requirements analysis → Architecture design → Code implementation
  • Long text translation → Polishing → Formatting
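Conceptually, each workflow is a chain where every node transforms the previous node's output. A toy sketch of that chaining (real nodes would call an LLM; here each step is a plain function so the data flow is visible):

```python
from typing import Callable

# Sketch: a workflow as a list of steps, each consuming the previous
# step's output. The lambda bodies are stand-ins for LLM calls.
def run_workflow(steps: list[Callable[[str], str]], text: str) -> str:
    """Feed text through each step in order and return the final output."""
    for step in steps:
        text = step(text)
    return text

pipeline = [
    lambda t: f"[reviewed] {t}",  # stands in for "Code review"
    lambda t: f"[fixed] {t}",     # stands in for "Bug fix"
    lambda t: f"[tested] {t}",    # stands in for "Unit test generation"
]
result = run_workflow(pipeline, "def add(a, b): return a + b")
```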

Troubleshooting

Model List Does Not Auto-Load

Cherry Studio requires manual addition of model names. Please refer to the Add Models section above and copy the complete model ID list.

Image Analysis Not Working

Confirm that your selected model supports vision capabilities. Recommended models:

  • Gemini-3-Pro
  • Gemini-3-Flash
  • Qwen2.5-VL-32B-Instruct

Ensure the API address in Provider configuration is correct: https://api-platform.ope.ai/v1

Authentication Failed

  • Confirm API Key format is correct (starts with sk-)
  • Visit OPEAI Platform Console to confirm Key validity
  • Check if the API address is entered correctly
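The key-format bullet above can be checked locally before blaming the server. A small sketch (the heuristics, such as the minimum length, are our own; only the `sk-` prefix comes from this guide):

```python
# Sketch: cheap local sanity checks on an API key before debugging
# further. The length threshold is an illustrative heuristic.
def looks_like_valid_key(key: str) -> bool:
    """True if the key has the sk- prefix, no stray whitespace,
    and a plausible length."""
    return (
        key == key.strip()
        and key.startswith("sk-")
        and len(key) > 10
    )

print(looks_like_valid_key(" sk-abc123def456"))  # leading space: common paste error
```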

Knowledge Base Vectorization Failed

  • Confirm you have added a Provider in Settings → Model Services
  • API address must be https://api-platform.ope.ai/v1
  • Embeddings model will be called automatically, no manual configuration needed

Slow Response Speed

  • Try switching to faster models (e.g., Claude-4.5-Haiku, Gemini-3-Flash)
  • Check network connection, confirm access to https://api-platform.ope.ai
  • Avoid using ultra-large context during peak hours