Supported Providers (summary)
Oppla supports both cloud and local AI providers. Choose one based on your latency, privacy, and cost needs.

- Cloud providers
  - OpenAI (GPT family)
  - Anthropic (Claude)
  - Google AI (Gemini)
  - Azure OpenAI
  - AWS Bedrock
- Local providers
  - Ollama
  - Llama-based runtimes (llama.cpp, GGML variants)
  - Custom HTTP endpoints (self-hosted inference)
- Note: Provider availability depends on your Oppla build channel (stable/preview) and platform.
Quick setup (2–5 minutes)
- Open Oppla → Preferences → AI, or open the Command Palette and run `oppla: Open AI Settings`.
- Select your provider (Cloud or Local).
- Add credentials (API key or local endpoint). Prefer using OS secret stores or environment variables (see below).
- Choose the default model for completions and the model for heavier tasks (agents, refactoring).
- Save and test the connection via the “Test Provider” button.
Configuration locations
- GUI: Preferences → AI
- CLI / config file: your Oppla settings file (user-level)
  - macOS / Linux: `~/.config/oppla/settings.json`
  - Windows (when supported): `%APPDATA%\Oppla\settings.json`
Credentials and secrets (best practices)
- Do not store API keys in plain text in your project repositories.
- Use platform secret stores:
  - macOS: Keychain
  - Linux: Secret Service / GNOME Keyring / pass
- Environment variables for CI: `OPPLA_AI_PROVIDER` and `OPPLA_AI_API_KEY` (see the example after this list)
- Oppla will prompt to store keys in the OS secret store by default.
- For self-hosted endpoints, use HTTPS and certificate validation; provide an access token instead of an API key when available.
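In CI, the two environment variables above can carry the provider selection and the credential so nothing secret touches the repository. A minimal sketch, assuming a secret variable injected by your pipeline (`CI_SECRET_OPPLA_KEY` is a hypothetical name; use whatever your CI's secret store provides):

```sh
# Select the provider and supply the key through the environment only.
# "openai" is an illustrative provider identifier; valid values depend
# on your Oppla build. CI_SECRET_OPPLA_KEY is a hypothetical secret
# injected by the CI system's secret store.
export OPPLA_AI_PROVIDER=openai
export OPPLA_AI_API_KEY="$CI_SECRET_OPPLA_KEY"
```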
Example settings (user-level)
This example shows keys commonly used in the Oppla settings file; adapt it to your environment and provider. `api_key_env` references an environment variable, which avoids storing secrets in plain files, and `local_model_preferred: true` prefers local models when available.
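A minimal sketch of the user-level settings file. Only `api_key_env` and `local_model_preferred` are described above; the surrounding `ai` block, the `provider` value, and the `local_endpoint` key are assumptions that may differ in your Oppla version. Comments are JSONC-style; remove them if your build requires strict JSON.

```jsonc
{
  "ai": {
    // Provider identifier (assumed value; check your build's provider list).
    "provider": "anthropic",
    // Name of the environment variable holding the API key, so the key
    // itself never lives in this file.
    "api_key_env": "OPPLA_AI_API_KEY",
    // Prefer a local runtime when one is reachable.
    "local_model_preferred": true,
    // Endpoint of a local runtime such as Ollama (hypothetical key name).
    "local_endpoint": "http://localhost:11434"
  }
}
```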
Local models & on-premises
To run models locally:

- Install and run your chosen runtime (Ollama, llama.cpp frontend, or containerized inference).
- Point Oppla to the runtime endpoint in settings (e.g., `http://localhost:11434`); see the connectivity check after this list.
- For large models, ensure you meet the hardware and storage requirements (see System Requirements).
- If you operate in an air-gapped environment, enable “local only” mode in AI settings. This prevents any outbound requests.
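Before pointing Oppla at a local runtime, it is worth confirming the endpoint answers at all. A quick check for an Ollama instance on its default port; the `/api/tags` route is Ollama-specific (it lists installed models), so other runtimes will need a different path:

```sh
# Returns a JSON list of locally installed models if Ollama is up.
curl -s http://localhost:11434/api/tags
```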
Privacy & security highlights
- Data residency: When using cloud providers, code context sent to the provider may be logged according to the provider’s policy — review provider privacy docs.
- Granular control: Use per-project privacy settings to limit what is shared with cloud providers.
- Audit logging: Enterprise installs can enable audit logs for AI requests and agent actions.
- Secure endpoints: For custom endpoints, use HTTPS and validate certificates. Rotate tokens frequently and use short-lived credentials where possible.
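A connectivity check for a self-hosted HTTPS endpoint that exercises both certificate validation and token auth. The URL, the CA bundle path, and the `/v1/models` route are placeholders; the route assumes an OpenAI-compatible server such as vLLM:

```sh
# Validate the TLS chain against an internal CA and authenticate with a
# bearer token from the environment. URL and CA path are placeholders.
curl --cacert /path/to/internal-ca.pem \
  -H "Authorization: Bearer $OPPLA_AI_API_KEY" \
  https://inference.internal.example.com/v1/models
```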
Cost & usage controls
- Configure model choice per task (cheap, fast models for completion; larger models for agents).
- Set per-user or per-team quotas in enterprise environments.
- Monitor usage in the Oppla dashboard or via provider billing dashboards.
Troubleshooting
- “Test Provider” fails:
  - Verify the API key / token is valid and not expired (see the curl check after this list).
  - Ensure network connectivity and low latency to provider endpoints.
  - Check OS secret store permissions.
- Slow suggestions or high latency:
  - Switch to a lower-latency or local model.
  - Reduce the requested context window or token count.
- Unsatisfactory completions:
  - Try a different model or tweak temperature/settings.
  - Provide more relevant context files in the editor.
- Local model not found:
  - Confirm the runtime is running and the endpoint is reachable (curl / HTTP request).
  - Check logs for model-loading errors and memory availability.
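To rule Oppla out when “Test Provider” fails, call the provider's API directly with the same key. OpenAI's list-models endpoint is shown because it is stable and well documented; other providers expose analogous routes:

```sh
# A 200 response with a model list means the key works; a 401 means it is
# invalid or expired. Assumes the key is exported as OPPLA_AI_API_KEY.
curl -s -H "Authorization: Bearer $OPPLA_AI_API_KEY" \
  https://api.openai.com/v1/models
```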
Recommended defaults
- Completions: fast, small model (low cost, low latency)
- Refactoring / code transforms: medium-sized model with higher reasoning capability
- Agents & multi-file tasks: powerful model or enterprise-hosted model
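These defaults can be captured in the settings file as a per-task model mapping. A sketch only: the `models` key and the model identifiers are hypothetical, and the real key names depend on your Oppla version:

```jsonc
{
  "ai": {
    // Hypothetical per-task mapping; key names and model identifiers
    // will differ by Oppla version and provider.
    "models": {
      "completions": "small-fast-model",
      "refactoring": "medium-reasoning-model",
      "agents": "large-reasoning-model"
    }
  }
}
```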
Links & next steps
- AI Overview
- AI edit prediction (coming soon): Edit Prediction (stub)
- Agent Panel (coming soon): Agent Panel (stub)
- Privacy & security (recommended page to create): Privacy & Security (stub)
- Themes and keybindings are configured separately from AI settings.
Audit & validation
Before publishing specific latency or accuracy claims:

- Run benchmarks on representative hardware and network conditions.
- Document the test environment, date, and methodology.