
I was playing with my new OKD cluster and getting some workloads running. It was all simple enough, but I really don’t like spending time doing things that are not prod-like.
I know that most shops probably don't trust upstream registries. Additionally, OpenShift does not allow containers to run as root by default, and many upstream images assume they are root. I wanted a way to rebuild upstream container images so they satisfy OKD/OpenShift security requirements, store them locally in Harbor, and deploy them reliably without pulling from upstream registries at runtime.
So I went on the hunt for an enterprise container registry and landed on Harbor.
This document details my installation of Harbor. Because I have not yet implemented a Certificate Authority in my environment, and I'm being deliberate about where I spend my time, this installation is configured to use HTTP rather than HTTPS.
I initially attempted to use self-signed certificates, but consistently ran into certificate-related errors across multiple components. While switching Harbor to HTTP introduced a different set of issues, those problems were at least predictable and debuggable, allowing me to make forward progress.
For now, the goal of this deployment is functionality and understanding, not production-grade TLS. HTTPS will be introduced later once a proper CA strategy is in place.
Host details:
Hostname: vv-harbor-01.vv-int.io
IP: 172.26.2.10
Install directory: /opt/harbor
Update the system and install the basics:
dnf update -y
dnf install -y curl wget tar vim firewalld dnf-plugins-core
systemctl enable --now firewalld
Open required ports (HTTP only):
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload
Add Docker repo:
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker:
dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable and start Docker:
systemctl enable --now docker
Verify:
docker version
curl -L https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64 \
-o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker compose version
cd /opt
wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-offline-installer-v2.10.0.tgz
tar xvf harbor-offline-installer-v2.10.0.tgz
cd harbor
/etc/hosts (REQUIRED)
This is the authoritative fix we actually used.
Edit /etc/hosts:
vi /etc/hosts
Add this exact line at the top (above other entries), using whatever hostname you choose:
172.26.2.10 vv-harbor-01.vv-int.io
⚠️ Do NOT point this hostname to 127.0.0.1 or ::1.
Verify:
getent hosts vv-harbor-01.vv-int.io
Expected output (ONE line only):
172.26.2.10 vv-harbor-01.vv-int.io
If you see ::1 or fe80::, stop and fix this before continuing.
Even with /etc/hosts pinned, mDNS can reintroduce IPv6 resolution in other contexts.
Edit /etc/nsswitch.conf:
vi /etc/nsswitch.conf
Change:
hosts: files mdns4_minimal [NOTFOUND=return] dns
To:
hosts: files dns
This makes DNS and /etc/hosts authoritative.
Re-verify:
getent hosts vv-harbor-01.vv-int.io
It must still return IPv4 only.
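Since IPv6 resolution sneaking back in bit me more than once, a small guard can catch regressions. This is a hedged sketch: check_hosts_line is a hypothetical helper you would feed the output of getent hosts.

```shell
# check_hosts_line: fail when the resolver output is empty or contains
# an IPv6 address (::1, fe80::, etc.). Hypothetical helper - feed it
# the output of: getent hosts vv-harbor-01.vv-int.io
check_hosts_line() {
  case "$1" in
    "")   echo "FAIL: no result";   return 1 ;;
    *::*) echo "FAIL: IPv6 result"; return 1 ;;
    *)    echo "OK: $1";            return 0 ;;
  esac
}
```

Usage: check_hosts_line "$(getent hosts vv-harbor-01.vv-int.io)"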
cp harbor.yml.tmpl harbor.yml
Edit harbor.yml:
hostname: vv-harbor-01.vv-int.io
http:
port: 80
# HTTPS MUST NOT EXIST AT ALL
# (commenting incorrectly can still break nginx)
# https:
# port: 443
harbor_admin_password: Harbor12345
database:
password: HarborDB12345
⚠️ Do not leave an empty https: block.
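To make that warning checkable, here is a minimal guard you could run before ./prepare. This is an assumed helper, not part of Harbor's tooling:

```shell
# check_no_https: fail if harbor.yml still contains an uncommented
# https: key, which would make nginx expect TLS certificates.
# (Commented-out "# https:" lines are fine and will not match.)
check_no_https() {
  if grep -qE '^[[:space:]]*https:' "$1"; then
    echo "FAIL: active https: block in $1"
    return 1
  fi
  echo "OK: $1 is HTTP-only"
}
```

Usage: check_no_https /opt/harbor/harbor.yml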
./prepare
Expected output includes:
Generated configuration file: ./docker-compose.yml
Generated configuration file: ./common/config/nginx/nginx.conf
If prepare fails → do not proceed.
docker compose up -d
Verify all services:
docker compose ps
All must be Up (healthy).
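That "all healthy" check can be scripted. A sketch, using a hypothetical helper that parses captured docker compose ps output:

```shell
# all_healthy: succeed only when the captured `docker compose ps`
# output reports no container as starting or unhealthy.
all_healthy() {
  ! echo "$1" | grep -Eq 'starting|unhealthy'
}
```

Usage (poll until ready): while ! all_healthy "$(docker compose ps)"; do sleep 5; done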
ss -lntp | grep :80
Expected:
LISTEN 0.0.0.0:80 docker-proxy
Docker ALWAYS assumes HTTPS on port 443 unless told otherwise.
Therefore, HTTP registries REQUIRE insecure-registries.
vi /etc/docker/daemon.json
{
"insecure-registries": [
"vv-harbor-01.vv-int.io"
]
}
Restart Docker:
systemctl restart docker
Verify:
docker info | grep -A5 -i insecure
You MUST see the registry listed.
⚠️ Restarting Docker stops Harbor.
Bring it back:
cd /opt/harbor
docker compose up -d
Use the hostname, not IP:
curl -H "Host: vv-harbor-01.vv-int.io" http://127.0.0.1
Browser:
http://vv-harbor-01.vv-int.io
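For a scripted smoke test, Harbor 2.x exposes a health endpoint at /api/v2.0/health. A hedged sketch with a hypothetical helper that interprets the HTTP status code:

```shell
# check_health_code: interpret the HTTP status returned by Harbor's
# /api/v2.0/health endpoint (200 means all components are up).
check_health_code() {
  if [ "$1" = "200" ]; then
    echo "Harbor healthy"
  else
    echo "Got HTTP $1"
    return 1
  fi
}
```

Usage: check_health_code "$(curl -s -o /dev/null -w '%{http_code}' http://vv-harbor-01.vv-int.io/api/v2.0/health)"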
Test a pull through Harbor (this assumes a Docker Hub proxy-cache project named dockerhub-apps exists in Harbor):
docker pull vv-harbor-01.vv-int.io/dockerhub-apps/library/nginx:1.25
Harbor does not auto-start. After a reboot (or any Docker restart), bring it back up manually:
systemctl restart docker
cd /opt/harbor
docker compose up -d
This is basically the flow we will be building below:
Upstream Image
↓
Harbor Host (Docker/Podman)
↓ (optional modification)
Golden Harbor Project
↓
Robot Account (pull)
↓
OKD Worker Nodes
↓
Running Pod + Route
I also want to set the project to private because I already punted on TLS and the CA. I really wanted to force auth correctness.
Next we create the golden image project, push an image into the golden repository, and verify that Grafana runs successfully.
First, register Docker Hub as an upstream endpoint:
UI → Administration → Registries → New Endpoint
Provider: Docker Hub
Name: dockerhub
Auth: none (public images)
Test Connection: OK
Log into Harbor
Create project:
Name: golden
Visibility: Private
Inside the golden project → Robot Accounts → Create robot:
Name: robot-pull-golden
The resulting name in Harbor will be robot$golden+robot-pull-golden
Permissions:
Pull repository
Push repository
Save the generated token; this is the password.
Note: I granted both pull and push permissions to this single robot account. In a production environment, these accounts should be separate: OKD would use a pull-only account for deploying images. This reduces the attack surface; if the deployment credential were ever compromised, an attacker could not push or modify images in the registry.
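If you want to script that separation, a pull-only robot can also be created through Harbor's REST API. A hedged sketch: this assumes Harbor 2.x's robot endpoint at /api/v2.0/robots and the admin password from harbor.yml; field names may differ slightly between versions.

```shell
# Create a project-level, pull-only robot for the golden project via
# Harbor's API. duration -1 means the token never expires.
payload='{
  "name": "pull-only",
  "duration": -1,
  "level": "project",
  "permissions": [{
    "kind": "project",
    "namespace": "golden",
    "access": [{"resource": "repository", "action": "pull"}]
  }]
}'
curl -s -u admin:Harbor12345 \
  -H 'Content-Type: application/json' \
  -X POST http://vv-harbor-01.vv-int.io/api/v2.0/robots \
  -d "$payload" || echo "robot create request failed (is Harbor reachable?)"
```

The response includes the generated secret; save it the same way you saved the UI token.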
On the Harbor VM:
docker login vv-harbor-01.vv-int.io
Use the robot account name as the username and the generated token as the password.
Pull the Upstream Grafana Image
From the Harbor host:
docker pull grafana/grafana:12.4.0-21230963995
This confirms the Harbor host can reach and pull from Docker Hub.
Next we tag the image:
docker tag grafana/grafana:12.4.0-21230963995 \
vv-harbor-01.vv-int.io/golden/grafana:12.4.0
Then we push it to the golden repo:
docker push vv-harbor-01.vv-int.io/golden/grafana:12.4.0
You will now see the image and its tags in the golden repo.
Switch to a VM that has the oc CLI and create the project:
oc new-project harbor-test
Validate
oc project harbor-test
oc get project harbor-test
Now we will be using the Harbor repo and the robot token we created earlier.
Every OpenShift project (namespace) has a default service account. If a pod does not explicitly specify a service account, OpenShift automatically runs it as default.
Attach the pull secret to the default SA
oc create secret docker-registry harbor-pull \
--docker-server=vv-harbor-01.vv-int.io \
--docker-username='robot$golden+robot-pull-golden' \
--docker-password='RAJMT4gNNw0EoavxBXdCl5TgXSdc2Pxo' \
-n harbor-test
Patch the default ServiceAccount (before any pods exist)
oc patch sa default -n harbor-test \
-p '{"imagePullSecrets":[{"name":"harbor-pull"}]}'
Validate
oc get sa default -n harbor-test -o yaml | grep -A3 imagePullSecrets
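A scripted version of that validation, using a hypothetical helper you feed the jsonpath output:

```shell
# has_pull_secret: succeed when the service account's imagePullSecrets
# list (as printed by oc ... -o jsonpath) includes harbor-pull.
has_pull_secret() {
  echo "$1" | grep -qw 'harbor-pull'
}

# usage:
# has_pull_secret "$(oc get sa default -n harbor-test \
#   -o jsonpath='{.imagePullSecrets[*].name}')" && echo attached
```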
Time to deploy Grafana
oc create deployment grafana \
--image=vv-harbor-01.vv-int.io/golden/grafana:12.4.0 \
-n harbor-test
Validate
oc get deployment grafana
Expose Grafana (create a service, then a route):
oc expose deployment grafana --port=3000
oc expose svc/grafana
oc get route grafana
OK, so that is how it goes when a Docker image respects OKD's rules. Now we will pull phpMyAdmin from Docker Hub and show how it fails. Then we will fix it in our golden repo and show it running.
Pull the upstream phpMyAdmin image, tag it, push it to Harbor, then deploy it.
From the Harbor host:
docker pull phpmyadmin/phpmyadmin:5.2.1
docker tag phpmyadmin/phpmyadmin:5.2.1 \
vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-raw
docker push vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-raw
Now we go back over to the OKD oc CLI:
oc new-app vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-raw \
--name=phpmyadmin-raw \
-n harbor-test
Validate Crash Loop
oc get pods -n harbor-test -w
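The failure shows up as CrashLoopBackOff. A small hypothetical helper to assert that from captured oc get pods output:

```shell
# crashlooping: succeed when the captured `oc get pods` output shows a
# pod stuck in CrashLoopBackOff.
crashlooping() {
  echo "$1" | grep -q 'CrashLoopBackOff'
}

# usage: crashlooping "$(oc get pods -n harbor-test)" && echo "as expected"
```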
Why it failed on OKD
The upstream phpmyadmin/phpmyadmin image assumes it can run as root, bind to port 80, and write to paths like /etc/phpmyadmin.
OKD reality: containers run as a random non-root UID, cannot bind ports below 1024, and cannot write to root-owned paths.
Why a golden registry matters: instead of patching at deploy time, we rebuild the image once so it satisfies these constraints, then serve it from our own registry.
Create a Working Directory
On the Harbor host:
mkdir -p ~/phpmyadmin-golden
cd ~/phpmyadmin-golden
Why: keeping the Dockerfile in its own directory gives a clean, minimal build context.
Create Dockerfile:
nano Dockerfile
FROM phpmyadmin/phpmyadmin:5.2.1
# Switch Apache to non-privileged port
RUN sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf && \
sed -i 's/:80/:8080/g' /etc/apache2/sites-enabled/000-default.conf
# Ensure writable dirs for random UID
RUN mkdir -p /etc/phpmyadmin && \
chgrp -R 0 /etc/phpmyadmin && \
chmod -R g=u /etc/phpmyadmin
# Expose non-privileged port
EXPOSE 8080
# Do NOT force USER — let OpenShift inject one
docker build -t vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-okd .
Verify:
docker images | grep phpmyadmin
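Before pushing, it is worth simulating OKD's behavior locally: OpenShift's restricted SCC runs the container as a random high UID in the root group (GID 0). A hedged sketch; random_okd_uid is a hypothetical helper, and the exact UID ranges vary per namespace:

```shell
# random_okd_uid: emit a "uid:gid" pair resembling what OpenShift's
# restricted SCC assigns - a random high UID in the root group (gid 0).
random_okd_uid() {
  echo "$(( 1000000000 + (RANDOM % 60000) )):0"
}

# usage: run the rebuilt image the way OKD will, and confirm it starts:
# docker run --rm --user "$(random_okd_uid)" \
#   vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-okd id
```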
docker push vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-okd
Expected output: the push completes and the 5.2.1-okd tag appears under golden/phpmyadmin in Harbor.
At this point, both the raw (5.2.1-raw) and OKD-ready (5.2.1-okd) images exist in the golden project.
On an OKD node or admin workstation:
oc new-app vv-harbor-01.vv-int.io/golden/phpmyadmin:5.2.1-okd \
--name=phpmyadmin-fixed \
-n harbor-test
Expose it:
oc expose svc/phpmyadmin-fixed -n harbor-test
Quick patch; this tells OpenShift to accept traffic on port 80 (what the route expects) and forward it internally to port 8080 (what the container is actually listening on):
oc patch svc phpmyadmin-fixed -n harbor-test \
-p '{"spec":{"ports":[{"port":80,"targetPort":8080}]}}'
Verify:
oc get pods -n harbor-test
oc get route phpmyadmin-fixed -n harbor-test
So that’s about it. I now have a golden local repository for all my images, which also helps streamline my build and test workflow.
OKD / Kubernetes / OpenShift definitely has a learning curve. It’s nothing too crazy, but working through some of the behaviors can be challenging, especially when you’re trying to figure out:
Is this a Docker issue?
A Kubernetes-native issue?
Or an OKD/OpenShift opinion layered on top?
So, another attrition-based learning experience in the books.
I hope this article helps someone.
Thanks for reading!
—Christian