Ainbox.ai

HOW TO SET UP A SELF-HOSTED AI MODEL ON AN UBUNTU SERVER

In this guide, we will install Ollama on an Ubuntu server and set up Open WebUI, a web-based interface employees can use to interact with it.

Hardware Requirements

  • CPU: Minimum 4 cores (8 recommended)
  • RAM: Minimum 16GB (32GB recommended for larger models)
  • Storage: At least 50GB free space
  • GPU (Optional): NVIDIA GPU with CUDA for better performance
  • OS: Ubuntu 20.04 or later
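
The requirements above can be checked from a shell before you install anything. The sketch below reads the core count, total RAM, and free disk space on the root filesystem; the thresholds mirror the minimums listed here (note that free -g rounds down, so a 16GB machine may report 15):

```shell
# Check the machine against the minimum requirements listed above.
cores=$(nproc)                                                  # CPU cores
mem_gb=$(free -g | awk '/^Mem:/ {print $2}')                    # total RAM in GB (rounded down)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')   # free space on / in GB
echo "CPU cores: $cores, RAM: ${mem_gb}GB, free disk: ${disk_gb}GB"
[ "$cores" -ge 4 ]    || echo "WARN: fewer than 4 CPU cores"
[ "$mem_gb" -ge 16 ]  || echo "WARN: less than 16GB RAM"
[ "$disk_gb" -ge 50 ] || echo "WARN: less than 50GB free disk space"
```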

Prerequisites

  • Root or sudo privileges
  • Basic knowledge of terminal commands

Installation Steps

1. Install Ollama

First, install Ollama using the official installation script:

curl -fsSL https://ollama.com/install.sh | sh
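
Before moving on, it is worth confirming the install worked. A quick check (guarded so it degrades gracefully if ollama is not on the PATH yet; the model name in the comment is only an example):

```shell
# Confirm the ollama binary is available.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
  # ollama pull llama3.2    # example: fetch a model (roughly a 2GB download)
  ollama_status="installed"
else
  ollama_status="not found - rerun the install script above"
fi
echo "ollama: $ollama_status"
```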

2. Install Required Python Packages

We’ll need Python’s pip package manager and virtual environment support:

sudo apt update
sudo apt install python3-pip python3-venv

Note: On releases that ship a versioned package (for example, python3.12-venv on Ubuntu 24.04), installing that package works as well.

3. Create a Virtual Environment

Now let’s create a dedicated virtual environment for OpenWebUI. This helps manage dependencies and prevents conflicts with other Python applications.

python3 -m venv OI_env

4. Activate the Virtual Environment

Before installing OpenWebUI, we need to activate our virtual environment. This ensures all packages are installed in the correct location.

source OI_env/bin/activate

Note: Your terminal prompt should change to show (OI_env) at the beginning, indicating you’re now working within the virtual environment.
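
Besides the prompt change, you can confirm activation programmatically: an active virtual environment exports VIRTUAL_ENV and puts its bin/ directory first on the PATH. The throwaway path below is only for demonstration (--without-pip keeps the demo fast; create your real environment as in step 3):

```shell
# Create and activate a disposable venv, then inspect the environment.
python3 -m venv --without-pip /tmp/OI_env_demo
. /tmp/OI_env_demo/bin/activate
venv_path="$VIRTUAL_ENV"           # set only while a venv is active
echo "Active venv: $venv_path"
command -v python                  # resolves inside /tmp/OI_env_demo/bin
```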

5. Install OpenWebUI

With our virtual environment active, we can now install OpenWebUI using pip:

pip install open-webui
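
If you want to confirm the package landed in the correct environment, run the following with the virtual environment from step 4 active:

```shell
# Report whether open-webui is installed in the current environment.
if pip show open-webui >/dev/null 2>&1; then
  webui_version=$(pip show open-webui | awk '/^Version:/ {print $2}')
  webui_status="installed (version $webui_version)"
else
  webui_status="not installed - is the virtual environment active?"
fi
echo "open-webui: $webui_status"
```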

6. Start the OpenWebUI Server

Launch the OpenWebUI server using the following command:

open-webui serve

By default, this starts the server on http://localhost:8080. To reach it from another machine on your network, browse to http://<server-ip>:8080 instead; the first account created becomes the administrator account.

Now your employees can interact with the AI securely within your network, without relying on cloud services!
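
One caveat: running open-webui serve in a terminal stops the server when the session ends. A common approach is a systemd unit. The sketch below assumes a dedicated user named webui and a virtual environment at /home/webui/OI_env, both of which are example names — adjust them to your setup:

```ini
[Unit]
Description=Open WebUI
After=network-online.target

[Service]
User=webui
WorkingDirectory=/home/webui
ExecStart=/home/webui/OI_env/bin/open-webui serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save this as /etc/systemd/system/open-webui.service, then run sudo systemctl daemon-reload followed by sudo systemctl enable --now open-webui to start it at boot.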

