
Running Local AI with Ollama


Why Even Bother Running AI Locally?

Hey, imagine this: you’re playing with AI. Asking wild questions, generating stuff. And it’s not on some distant server - it’s right on your machine. Cool, right? That’s what running AI locally is all about. You get privacy - no one’s snooping on your prompts. It’s free - no monthly fees sneaking up. Plus, you’re in control. Tweak it however you want! I mean, why trust a cloud when you can just DIY it?

So, let’s grab Ollama and get this party started.

What’s This Ollama Thing Anyway?

Ollama’s like that friend who makes complicated stuff easy. It’s an open-source tool that lets you run AI models - like, actual language models - on your own gear. No fancy degree needed. You just type a couple commands, and bam, you’ve got an AI buddy ready to chat. It’s lightweight, works with small or big models, and honestly, it’s a blast to play with. Let’s set it up and see what it can do.

Let’s Get It Running - Step by Step

Alright, I tried this on my Mac, and it was a breeze. (It works on Linux too, and Windows is catching up!) Grab a coffee, and let’s do this together.

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Hit enter, and it'll do its thing. You'll see some text flying by, and in like a minute, Ollama's installed. Done. No sweat. (Heads up: that script is aimed at Linux - on a Mac you can grab the app from ollama.com instead, or use `brew install ollama`.)
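Want a quick sanity check that the install actually took? Assuming `ollama` landed on your PATH, something like this works:

```shell
# Print the installed version - if this answers, you're good
ollama --version

# The installer usually starts the background service for you;
# if it didn't, you can launch it yourself:
ollama serve
```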

[Screenshot: terminal showing the Ollama install and first interaction]

Fire Up a Model

I went with phi3 - it's small, fast, and still pretty smart. (You can browse the full list of available models in the Ollama library on ollama.com.)

ollama run phi3

First time, it'll download the model (takes a sec depending on your internet), then you'll get a little ">>>" prompt. That's it - your AI's alive!

[Screenshot: terminal showing the model downloading and starting up]

You can also pull different models and test them out:

[Diagram: Ollama model download and initialization flow]
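The commands are the same for any model in the library - here's the gist (mistral is just one example model name):

```shell
# Download a model without jumping straight into a chat
ollama pull mistral

# See everything you've got on disk, with names and sizes
ollama list
```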

Time to Chat with It

So, it’s running, and I’m staring at that prompt like, “What now?” Let’s test it. I typed: What is AI? It thought for a sec, then hit me with this:

[Screenshot: first interaction with the phi3 model]

Solid answer, phi3! You can hit it with anything - ask for a joke, some code, whatever. It’s your playground. Oh, and when you’re done? Use Ctrl + d or /bye to exit.
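One more trick: you don't have to stay in the chat at all. Pass the prompt as an argument and you get a one-shot answer - handy for scripts:

```shell
# One-shot mode: prints the answer and exits, no interactive session
ollama run phi3 "Explain what AI is in one sentence."
```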

To delete a model:

ollama rm phi3
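And if you'd rather talk to it from code, Ollama quietly serves a REST API on port 11434 whenever it's running. A minimal sketch, using phi3 as the example model:

```shell
# Ask the local API directly; "stream": false returns one JSON blob
# instead of a token-by-token stream
curl http://localhost:11434/api/generate -d '{
  "model": "phi3",
  "prompt": "What is AI?",
  "stream": false
}'
```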

What’s Next? Let’s Level Up

The CLI's neat, no doubt. Want something slicker? Try Open WebUI - it's like ChatGPT in your browser. Or, if your setup's got power, run a bigger model. Your local AI's ready to roll - how cool is that?
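If you go the Open WebUI route, the quickest path I know of is their Docker image - this is a sketch based on their published run command, so check their README for the current flags:

```shell
# Run Open WebUI on http://localhost:3000, pointed at your local Ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```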

So, there you go - no subscriptions, no leaks, just you and your AI. What’s the first thing you’ll ask it? Hit me up on my socials - I’d love to know!

Bonus: Guess which model’s the most popular?

[Screenshot: the most-downloaded model on ollama.com]

19.5M Pulls - not bad


Dmitry Golovach

Principal Network Engineer and AI enthusiast. Always learning, always building.

