If you need to compile and install the software, see the slackbuilds.org guide to compiling and installing SlackBuild scripts: SlackBuilds.org - SlackBuild Script Repository How-To (https://slackbuilds.org/howto/).
1.) First I created an unprivileged group and user named "ollama", with ollama's working directory as its home and /bin/false as its shell. I checked https://slackbuilds.org/uid_gid.txt to see whether an ollama user and/or group had already been assigned a uid or gid; they had not, so I used the next available id, 393. Note: This might change in the future if the Slackware developers assign a different uid and gid to the ollama user and group.
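If you want to repeat that lookup from the command line, a quick grep against the downloaded file works (the exact column layout of uid_gid.txt may vary, so treat this as a sketch):
[root@host]# wget -q https://slackbuilds.org/uid_gid.txt
[root@host]# grep -iw ollama uid_gid.txt || echo "ollama: not assigned"
[root@host]# grep -w 393 uid_gid.txt || echo "393 appears free"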
[root@host]# groupadd -g 393 ollama
[root@host]# useradd -u 393 -g ollama -d /var/lib/ollama -s /bin/false -c "Ollama Service User" ollama
[root@host]# mkdir -p /var/lib/ollama /var/run/ollama /var/log/ollama
[root@host]# chown -R ollama:ollama /var/lib/ollama /var/run/ollama /var/log/ollama
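To double-check the account and directories (optional, but cheap insurance), id and ls should report something like:
[root@host]# id ollama
uid=393(ollama) gid=393(ollama) groups=393(ollama)
[root@host]# ls -ld /var/lib/ollama /var/run/ollama /var/log/ollama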
2.) I then wrote the following /etc/rc.d/rc.ollama script to bring up the ollama serve REST API (the LLM inference engine). Notice: I also created an ollama configuration defaults file at /etc/default/ollama to manage environment variables like OLLAMA_HOST, OLLAMA_MODELS, etc.
/etc/rc.d/rc.ollama
#!/bin/sh
# /etc/rc.d/rc.ollama - Ollama service for Slackware
# Created by Jerry B Nettrouer II https://www.inpito.org/projects.php
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# Load configuration (if file exists)
[ -f /etc/default/ollama ] && . /etc/default/ollama
# Set the process ID file (on Slackware 15.0, /var/run is a symlink to /run,
# so this matches the /var/run/ollama directory created in step 1)
PIDDIR="/run/ollama"
PIDFILE="$PIDDIR/ollama.pid"
# Set the log file
LOGFILE="/var/log/ollama/ollama.log"
case "$1" in
start)
if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") 2>/dev/null; then
echo "Ollama already running."
exit 0
fi
echo "Starting Ollama... (models: $OLLAMA_MODELS, host: $OLLAMA_HOST)"
# Creating run directory
mkdir -p $PIDDIR
chown -R ollama:ollama $PIDDIR
# Use nohup + setsid for clean daemon behavior
su -s /bin/sh -c "setsid nohup ollama serve >> $LOGFILE 2>&1 & echo \$! > $PIDFILE" ollama
echo "Started with PID $(cat "$PIDFILE")"
;;
stop)
echo "Stopping Ollama..."
if [ -f "$PIDFILE" ]; then
kill $(cat "$PIDFILE") 2>/dev/null
rm -f "$PIDFILE"
else
pkill -f "ollama serve" 2>/dev/null
fi
;;
restart)
$0 stop
sleep 1
$0 start
;;
status)
if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE") 2>/dev/null; then
echo "Ollama is running (PID $(cat "$PIDFILE"))."
elif pgrep -f "ollama serve" >/dev/null; then
echo "Ollama is running (but no PID file)."
else
echo "Ollama is not running."
fi
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
exit 1
;;
esac
exit 0
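Once the script is saved and made executable (step 3 below), a quick smoke test is to start it, check its status, and watch the log file it appends to:
[root@host]# /etc/rc.d/rc.ollama start
[root@host]# /etc/rc.d/rc.ollama status
[root@host]# tail -f /var/log/ollama/ollama.log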
/etc/default/ollama
# Export the variables so they reach the ollama process started via su in rc.ollama
export OLLAMA_MODELS=${OLLAMA_MODELS:-"/var/lib/ollama/.ollama"}
export OLLAMA_HOST=${OLLAMA_HOST:-"127.0.0.1"}
# You can add more variables if needed, e.g.:
# export OLLAMA_ORIGINS="*"
# export OLLAMA_DEBUG="1"
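For example, to make the REST API reachable from other machines (note: the API has no authentication, so only do this on a trusted network), set OLLAMA_HOST to listen on all interfaces; the /data path below is purely hypothetical, for keeping models on a larger filesystem:
# Listen on all interfaces; the port defaults to 11434
export OLLAMA_HOST="0.0.0.0"
# Hypothetical: store models on a bigger disk
# export OLLAMA_MODELS="/data/ollama/models"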
3.) Make the rc.ollama script executable with chmod +x /etc/rc.d/rc.ollama. If you want the Ollama REST API to launch at startup, add the following to /etc/rc.d/rc.local so it runs during boot:
# Ollama REST API
if [ -x /etc/rc.d/rc.ollama ]; then
  /etc/rc.d/rc.ollama start
fi
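Slackware also runs /etc/rc.d/rc.local_shutdown (if it exists and is executable) during shutdown, so the service can be stopped cleanly with a matching stanza there:
# Ollama REST API
if [ -x /etc/rc.d/rc.ollama ]; then
  /etc/rc.d/rc.ollama stop
fi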
4.) Next, decide who you want to have access to ollama by adding them to the ollama group.
[root@host]# usermod -aG ollama inpito
[root@host]# usermod -aG ollama jerryn
[root@host]# usermod -aG ollama doyleh
...and so on for each user you want to grant access.
Warning: Any user that can use the ollama REST API will also be able to download LLM models, which can lead to huge downloads and take up a lot of hard drive space.
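To keep an eye on how much space downloaded models consume, check the models directory (path per /etc/default/ollama above) and the model list from time to time:
[root@host]# du -sh /var/lib/ollama/.ollama
[root@host]# ollama list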
Finally: After launching the ollama REST API with /etc/rc.d/rc.ollama start, log in as a user and attempt to use the resource.
[root@host]# su inpito
[inpito@host]$ ollama run llama3.1:8b "Tell me a programmer joke?"
pulling manifest
pulling 667b0c1932bc: 100% ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 4.9 GB
pulling 948af2743fc7: 100% ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 1.5 KB
pulling 0ba8f0e314b4: 100% ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 12 KB
pulling 56bb8bd477a5: 100% ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 96 B
pulling 455f34728c9b: 100% ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 487 B
verifying sha256 digest
writing manifest
success
Here's one:
Why did the programmer quit his job?
Because he didn't get arrays.
[inpito@host]$
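Since ollama serve is plain HTTP, you can also sanity-check the REST API directly with curl (assuming the default 127.0.0.1:11434):
[inpito@host]$ curl http://127.0.0.1:11434/
Ollama is running
[inpito@host]$ curl http://127.0.0.1:11434/api/tags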
Happy AI-ing!