Foundry Local · Web User Interface · No CLI Required

Use Foundry Local
with a Web UI

Foundry Local exposes an OpenAI-compatible API on localhost:PORT. Any chat UI that talks to OpenAI also talks to Foundry Local — no code, no CLI, just a browser tab. This guide covers the two best options.
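Because the API is OpenAI-compatible, every UI in this guide ultimately sends the same standard chat-completions request. A minimal sketch of what that looks like on the wire (the port 5272 and the model name phi-4 are examples; check `foundry service status` for your actual values):

```python
import json
import urllib.request

# Assumed values: get the real port from `foundry service status`
BASE_URL = "http://localhost:5272/v1"
MODEL = "phi-4"

def chat_request(prompt: str) -> dict:
    """Build a standard OpenAI chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_request("Hello from a web UI")

# Any OpenAI-compatible client POSTs this to BASE_URL + "/chat/completions":
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with Foundry Local running
```

This is exactly the request Open WebUI and AnythingLLM issue on your behalf; the UIs only add conversation history and settings on top.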

🌐
Open WebUI

The most popular open-source AI chat frontend. Clean ChatGPT-style interface with conversation history, model switching, image generation, and a plugin system. Officially recommended by Microsoft for use with Foundry Local.

Chat Interface Conversation History Model Switching Microsoft Recommended
📚
AnythingLLM

All-in-one AI desktop app with built-in RAG (chat with your documents), workspaces, and native Foundry Local integration — automatically starts the service and manages models for you. Official partner of Microsoft Foundry.

Document RAG Workspaces Native Integration Official Partner
Part 01 of 02
🌐

Open WebUI

A powerful, self-hosted ChatGPT-style interface. Microsoft's official documentation points to Open WebUI as the recommended chat frontend for Foundry Local.

📄
This guide follows the official Microsoft Learn article: Integrate Open WebUI with Foundry Local ↗
1
Start Foundry Local with a model Required first

Open WebUI connects to a running Foundry Local service. Start a model before launching Open WebUI; otherwise, no models will appear in the dropdown.

# Start a model — phi-4 is the recommended model
foundry model run phi-4

# In a SECOND terminal — get the service port
foundry service status
✓ Service is running
  Endpoint: http://localhost:5272/v1
  Models loaded: phi-4
⚠️
Note the port number. Foundry Local dynamically assigns a port — it is not always 5272. Always run foundry service status to get the current port before connecting any UI.
2
Install Open WebUI Windows · Official steps

Open WebUI is installed via the BrainDriveAI Conda Installer — a self-contained package that installs Miniconda and Open WebUI together. This is the method referenced in the official Microsoft Community Hub guide.

a Download the installer

Download OpenWebUIInstaller.exe from the BrainDriveAI releases page and copy it to C:\Temp\.

# Download from:
https://github.com/BrainDriveAI/OpenWebUI_CondaInstaller/releases

# Copy the installer to C:\Temp\
copy %USERPROFILE%\Downloads\OpenWebUIInstaller.exe C:\Temp\
⚠️
Windows Defender SmartScreen may show a warning when you run the downloaded installer. This is expected; click More info → Run anyway to proceed.
b Run the installer from an elevated prompt

Open PowerShell as Administrator and run the following commands in order. They install Miniconda system-wide, accept the conda terms of service, then launch the Open WebUI installer.

# 1. Install Miniconda system-wide
winget install -e --id Anaconda.Miniconda3 --scope machine

# 2. Add Miniconda to the current session PATH
$env:Path = 'C:\ProgramData\miniconda3;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Scripts;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Library\bin;' + $env:Path

# 3. Accept conda Terms of Service (required before first use)
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2

# 4. Launch the Open WebUI installer
C:\Temp\OpenWebUIInstaller.exe
💡
When the installer dialog opens, choose "Install and run Open WebUI". It will set up the environment and start the server automatically.
Open WebUI launches at http://localhost:8080 — open that URL in your browser to continue
3
Create your account & open Admin Settings

Open a browser to http://localhost:8080. On first launch, Open WebUI asks you to create an admin account — this is a local account; nothing is sent anywhere.

localhost:8080 — Open WebUI
Create Admin Account
Local account — stays on your device
Create Account →

After logging in, click your avatar (top right) → Admin Settings → Connections.

4
Enable Direct Connections in Admin Settings

Before you can add a custom endpoint, you need to enable the Direct Connections toggle. This is a one-time admin setting.

localhost:8080 — Admin Settings → Connections
Admin Panel
General
Connections
Models
Connections
Direct Connections
Allow users to connect to their own OpenAI-compatible APIs
Direct Connections enabled — users can now add custom endpoints

Navigate: Avatar → Admin Settings → Connections → enable "Direct Connections" toggle → Save.

5
Add Foundry Local as a Connection

Now navigate to your personal Settings (not Admin Settings). Go to Connections → click + next to "Manage Direct Connections".

localhost:8080 — Settings → Connections → Add
Add Direct Connection
URL
Auth
Connection saved — phi-4 now appears in model dropdown
💡
URL format: http://localhost:PORT/v1 — replace PORT with the number from foundry service status. Set Auth to None. Click Save.
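The two most common mistakes in this field are a trailing slash and a missing /v1 suffix. A small hypothetical helper illustrates the exact shape the field expects (the port is an example; use yours from `foundry service status`):

```python
def normalize_base_url(raw: str) -> str:
    """Normalize a pasted endpoint to the http://localhost:PORT/v1
    form that Open WebUI expects."""
    url = raw.strip().rstrip("/")
    if not url.startswith("http"):
        url = f"http://{url}"
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("localhost:5272"))             # http://localhost:5272/v1
print(normalize_base_url("http://localhost:5272/v1/"))  # http://localhost:5272/v1
```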
6
Start chatting — you're done!

Close Settings. At the top of the page, your Foundry Local models appear in the model dropdown. Select one and start typing. Every request runs entirely on your local hardware.

localhost:8080 — Open WebUI
⬡ phi-4 · Foundry Local ▾
A
What is Microsoft Foundry Local?
Microsoft Foundry Local is a developer runtime that runs AI models directly on your own hardware — no cloud, no API costs, no data leaving your device. It exposes an OpenAI-compatible API on localhost, so any existing app or SDK...
Message phi-4...
You are now running a private ChatGPT on your own hardware — conversation history stored locally, zero cloud, zero cost per message
Open WebUI is running
Start Foundry Local first, then open your browser. The model dropdown will show all loaded models.
Open WebUI ↗

Part 02 of 02
📚

AnythingLLM

The all-in-one AI desktop app with built-in RAG, workspaces, and a native Foundry Local integration. AnythingLLM is an official Microsoft Foundry partner and the recommended choice if you need to chat with your documents.

🤝
Official Partner: AnythingLLM has a native Foundry Local integration. It automatically starts the service and manages models for you — no CLI interaction needed for day-to-day use. AnythingLLM Docs ↗
1
Install Foundry Local (if not already installed)

AnythingLLM will detect Foundry Local automatically if it's installed. If you've already followed earlier chapters, skip to Step 2.

# Windows
winget install Microsoft.FoundryLocal

# macOS
brew tap microsoft/foundrylocal && brew install foundrylocal

# Verify
foundry --version
2
Download & install AnythingLLM Desktop

AnythingLLM Desktop is a native app for Windows, macOS, and Linux. Download the installer for your platform from the official website.

# Windows
# Download AnythingLLMDesktop-Setup.exe from:
https://anythingllm.com/download
# Run AnythingLLMDesktop-Setup.exe
# Accept the installer defaults
# AnythingLLM launches automatically

# macOS
# Download AnythingLLMDesktop.dmg from:
https://anythingllm.com/download
# Open the .dmg and drag AnythingLLM to /Applications
# Launch from Launchpad or Spotlight

# Linux
# Download the AppImage from:
https://anythingllm.com/download
chmod +x AnythingLLMDesktop.AppImage
./AnythingLLMDesktop.AppImage
AnythingLLM opens to the onboarding screen — you'll configure the LLM provider in the next step
3
Select Foundry Local as the LLM Provider Native Integration

AnythingLLM has a dedicated Foundry Local option in its LLM provider list. It will automatically detect and start the service for you.

AnythingLLM Desktop — LLM Configuration
LLM Provider
✓ Foundry Local detected
AnythingLLM will automatically start Microsoft Foundry Local when launched. Unloads models when idle to keep system resources free.

Navigate to Settings (⚙) → LLM Preference → select "Microsoft Foundry Local" → choose your model → Save.

⚠️
Only already-downloaded models appear in the dropdown. To add a new model, use the CLI: foundry model download phi-4-mini — then reload AnythingLLM.
4
Create a Workspace and start chatting

AnythingLLM organizes conversations into Workspaces — think of them as separate chat contexts, each with its own document library and conversation history. Create one and start chatting.

AnythingLLM Desktop
🏠 My Workspace
📄 Project Docs
💼 Meeting Notes
+ New Workspace
Model: phi-4-mini · Foundry Local · ⬡ LOCAL
Summarize the attached PDF
Based on the document you uploaded, here is a summary: The report covers Q3 performance metrics showing a 23% increase in...
Ask anything or upload a document...
5
Upload documents for RAG (optional)

AnythingLLM's killer feature: drop PDF, Word, or TXT files (or paste web URLs) into any workspace and the AI answers questions using your documents, all processed locally. No document ever leaves your machine.

Supported document types:
PDF, DOCX, TXT, MD, HTML, CSV, JSON, YouTube URLs

How to upload:
1. Open a Workspace
2. Click the Upload icon (📎) in the chat input
3. Drag and drop files or paste a URL
4. AnythingLLM embeds and indexes locally
5. Ask questions — answers cite your documents
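Under the hood, "chat with your documents" is retrieval-augmented generation: document chunks are embedded locally, and the chunks most similar to your question are handed to the model as context. A toy sketch of the retrieval step, where bag-of-words cosine similarity stands in for the real learned embeddings AnythingLLM uses:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": word counts (real systems use neural embeddings)
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "Q3 revenue grew 23 percent year over year",
    "The office plants were watered on Tuesday",
]
question = "How much did revenue grow in Q3?"

# Retrieve the most relevant chunk; it becomes context for the model
best = max(chunks, key=lambda c: cosine(embed(question), embed(c)))
print(best)  # the revenue chunk is retrieved as context
```

The retrieved chunk is then prepended to your prompt before it reaches the model running in Foundry Local, which is why answers can cite your documents without anything leaving the machine.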
You now have a private, local AI that knows your documents — completely offline after setup, zero data shared with any server
AnythingLLM is your private knowledge base
Foundry Local runs the AI. AnythingLLM manages your docs, chat history, and workspaces. Everything stays on your machine.
Download AnythingLLM ↗

Troubleshooting

Common Issues & Fixes

❌ No models appear in Open WebUI dropdown

A model isn't running yet, or the port has changed.

foundry service status
# Note the current port
foundry model run phi-4-mini
# In Open WebUI Settings → Connections
# Update the URL with the current port
# Reload Open WebUI
❌ Open WebUI can't connect / connection refused

Foundry Local service isn't running, or port mismatch.

foundry service status
# If error: restart it
foundry service restart
# Then update the port in Open WebUI settings
# Docker users: use host.docker.internal:PORT
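A quick way to distinguish "service down" from "wrong port" is to probe the endpoint directly before touching any UI settings. A sketch using only the standard library (the port is an example; substitute yours from `foundry service status`):

```python
import urllib.request
import urllib.error

def foundry_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible endpoint answers at base_url.
    Probes the standard /models listing route."""
    try:
        urllib.request.urlopen(f"{base_url}/models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

# Port 5272 is an example; False here means wrong port or service down
print(foundry_reachable("http://localhost:5272/v1"))
```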
❌ AnythingLLM shows "No models available"

Models must be downloaded via CLI before they appear.

foundry model list
# Download the model you want
foundry model download phi-4-mini
# Restart AnythingLLM — it will now
# show the downloaded model in the dropdown
❌ "Direct Connections" option not visible in Open WebUI

The Admin toggle was not enabled — it controls access for all users.

# You must be logged in as Admin
# Profile avatar → Admin Settings
# (not regular Settings)
# → Connections → enable Direct Connections toggle
# → Save
# Now go to Profile → Settings → Connections
# to add the endpoint
❌ Slow responses / model taking long to reply

Running on CPU, or a large model on limited hardware.

# Use phi-4-mini — the smallest, fastest model
foundry model run phi-4-mini
# Check hardware routing
foundry service status
# Shows: NPU / GPU / CPU being used
# Copilot+ PC NPU = fastest option
❌ Port changes after every restart

Foundry Local dynamically assigns a port. Always check it.

# Every time you restart Foundry Local:
foundry service status
# Copy the new port and update:
# Open WebUI → Settings → Connections
# AnythingLLM → Settings → LLM Preference
# (AnythingLLM with native integration
#  handles this automatically)