
Enhancing your Ollama experience with OpenWebUI


Following up on my previous post about getting phi3 running locally: the command-line interface is perfectly functional, but a proper UI makes interacting with these powerful models much more pleasant.

OpenWebUI is an open-source front-end that connects to Ollama and gives you that polished, ChatGPT-style experience, entirely on your own machine.

Here's my quick setup guide if you want to try it:

1. Get Ollama Running First: Start it with ollama serve and keep that terminal window open; it's your AI engine running in the background.
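Before moving on, you can verify Ollama is actually listening. By default it serves on port 11434, and the /api/tags endpoint returns the models you've pulled:

curl http://localhost:11434/api/tags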

2. Set Up OpenWebUI via Docker: In a new terminal, run:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
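A quick note on those flags: -p 3000:8080 exposes the UI on port 3000 of your machine, --add-host lets the container reach the Ollama instance running on your host, and the -v volume keeps your chat data across container restarts. To confirm the container came up cleanly, something like:

docker ps --filter name=open-webui
docker logs open-webui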

3. Access Your New Interface: Just visit http://localhost:3000 in your browser.
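If the page doesn't load right away, give the container a few seconds to initialize on first run. A quick check that something is answering on that port:

curl -I http://localhost:3000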

[Screenshot: OpenWebUI connected to Ollama, showing a chat conversation]

The biggest advantage? You'll have conversation history and a clean interface that the CLI simply doesn't provide. Create an account when prompted, select your model, and you'll immediately notice the workflow improvement.
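One thing worth knowing: the model dropdown only shows what Ollama has already pulled. Since my previous post used phi3, making sure it's available (from any terminal) looks like this:

ollama pull phi3
ollama list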

This setup gives you the best of both worlds:

  • the security of local AI
  • the usability of commercial platforms

It's perfect if you're working with sensitive data or just exploring what these models can do.

What's Next?

  • Multi-model capabilities
  • Fine-tuning
  • What else is possible?
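On that last point, the same Ollama API that OpenWebUI talks to is scriptable, so anything you do in the UI you can also automate. A minimal example against the generate endpoint (assuming phi3 is pulled):

curl http://localhost:11434/api/generate -d '{"model": "phi3", "prompt": "Why is the sky blue?", "stream": false}'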
[Screenshot: OpenWebUI customization options and settings panel]