Well, I got it done...
I would say the image perfectly describes how I tend to dive into greenfield tasks. If I am asked to deploy something new (the kind of work I love), my first question is usually: where is the installer? Yes, I will skim the documentation, but I need real context. I want to learn from the inside out: service names and what they do, learned through attrition and brute force in real time. Learning the services when they are broken or misbehaving is how you reach true resiliency. I want to see it install and run, then do it again and tighten the documentation. That is essentially what this post is.
With that being said, it was truly logageddon for a few days, and my eyes were burning.
I’ve deployed Kubernetes in a lab environment before, working through install, cluster, networking, DNS, and ingress to get a functional cluster running. Not for play, but for vendor-product POCs and the like. While it wasn’t something I used daily, the experience gave me a solid understanding of how the core components fit together. However, it was fleeting, and it's been a while.

Once the pain of this was over (web UI and green status), honestly, the deployment has an elegance to it. If you get OpenShift into your environment once, additional deploys should basically be: stand up the load balancer, set some DNS and MAC reservations on your network, insert the embedded ISO or PXE, and go for it. Grab lunch, and there is a high probability it will be done when you get back. For me, it was one edge case after another; however, the amount I learned and refreshed in that time was invaluable.
My goal with this post is to tell the tale; it may help someone else installing this in their Proxmox lab. Without a doubt, it was a difficult install the first time through. It exposed many edge cases in my home-lab setup, some I knew about and some I did not. I will detail them here in case it helps anybody down the road.
It was pretty EPIC. 😄
This post is solely about getting a two-node cluster (one master, one worker) up and running, and maybe adding an additional worker node after it's running. I will be following up with many posts about this product, but if it isn't built, you can't do anything. I feel the biggest pain point is always getting the thing running dependably, so you can do the "other" thing in a dependable, predictable way.
I will start with a summary of my big issues on this implementation. Hopefully, it will help someone.
I host my own DNS with Pi-hole and Unbound on a LePotato, which I already knew could be a weak point. Normally, my lab uses a more enterprise-style DNS setup, but I had implemented ACL binding to my Pi-hole/Unbound server across all my VLANs and never went back to fully correct it for my lab's needs.
All port 53 UDP traffic (DNS) across my VLANs is forced to my Unbound server. At the start of this install, I added what I believed were the correct DNS records. Queries were resolving, responses were returned, and everything appeared to be working. All good (nah). I’ll elaborate more on this during the install walkthrough.
The two core DNS issues were wildcard resolution for *.apps and raw query volume during the install. 100%, I would make sure those two things are rock solid before proceeding. OpenShift pummels your DNS on install, and if it's not ready for it, you will be in log hell. I will elaborate with some tests once we get to that stage, if I still have your attention 😉
Quick shout-out to my little LePotato DNS. It did not want to do the job, but I made it do the job, and it's doing its job. I made so many changes to it so fast that I do fear rebooting it. I guess we will find out, but today is not that day. I always have a plan B anyway.
Desperation Alley:
I'll talk more about that when we get to it further down the road. The main thing is the installer is the ONLY source of truth. I would not get fancy with it. I will give you the commands when we get to the install portion of this blog.
If I had run across that tidbit above, the title of this article would have been Three Days, Zero to One. Eh, anyway, I digress...
The Mind Killer:
Of course:
This is the basic diagram of what we will be doing. It is a bare-metal-type install, close to a prod-type deploy; in an actual bare-metal deploy, I would expect all of this to work the same. Another note: we will not get to VM creation until much later in this document. The VMs are the last consideration.

Non-negotiable: lock in your DNS names. Changing these after the install = Very Bad, as in: just do it over.
Choose your load balancer. I have deployed both nginx and HAProxy, and there is no appreciable difference to OpenShift. Having installed and configured both, I had to fiddle with nginx a bit more. It wasn't a big deal, but at the time everything was. Just choose what you are comfortable with.
HAProxy is a simple install on any RH-adjacent OS. Here is my config, along with some follow-up validation.
nano /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2000

defaults
    log global
    mode tcp
    option tcplog
    timeout connect 10s
    timeout client 1m
    timeout server 1m

# =========================
# API SERVER (6443) !!! Pointed at bootstrap in the early stages, then moved to master !!!
# =========================
frontend openshift-api
    bind *:6443
    default_backend openshift-api-master

backend openshift-api-master
    balance roundrobin
    server master 172.26.7.53:6443 check

# =========================
# MACHINE CONFIG SERVER (22623) !!! Pointed at bootstrap in the early stages, then moved to master !!!
# =========================
frontend machine-config
    bind *:22623
    default_backend machine-config-master

backend machine-config-master
    balance roundrobin
    server master 172.26.7.53:22623 check

# =========================
# INGRESS HTTP (80)
# =========================
frontend openshift-apps-http
    bind *:80
    default_backend openshift-apps-http

backend openshift-apps-http
    balance roundrobin
    server worker 172.26.7.52:80 check

# =========================
# INGRESS HTTPS (443) – TCP PASSTHROUGH
# =========================
frontend openshift-apps-https
    bind *:443
    mode tcp
    default_backend openshift-apps-https

backend openshift-apps-https
    mode tcp
    balance source
    server worker 172.26.7.52:443 check
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
ss -lntp | egrep '(:6443|:22623|:80|:443)'
These DNS endpoints are the ingress for the OpenShift cluster; they all point at the HAProxy IP. Also note for now: we will flip two of the HAProxy backend IPs (6443 and 22623) to the master IP at a set point in the install.
api.okd.vv-int.io 172.26.7.50
api-int.okd.vv-int.io 172.26.7.50
*.apps.okd.vv-int.io 172.26.7.50
VM DNS
vv-bootstrap-okd.vv-int.io A 172.26.7.53 # disposed of after the install
vv-master-01-okd.vv-int.io A 172.26.7.51 # It's good to be king
vv-worker-01-okd.vv-int.io A 172.26.7.52 # backbone
Have these DNS names bound and tested before anything.
I will say it one more time: TEST YOUR DNS!!!
In the end, after a successful deploy, things will look like this from an OpenShift perspective.
| Purpose | Derived FQDN |
|---|---|
| External API | api.okd.vv-int.io |
| Internal API | api-int.okd.vv-int.io |
| Apps wildcard | *.apps.okd.vv-int.io |
| OAuth | oauth-openshift.apps.okd.vv-int.io |
| Console | console-openshift-console.apps.okd.vv-int.io |
Here are all the DNS commands I used to validate resolution.
nslookup test.apps.okd.vv-int.io # tests wildcard
dig test.apps.okd.vv-int.io # get a real look
dig api.okd.vv-int.io +short # validate API endpoints
dig api-int.okd.vv-int.io +short # validate API endpoints
dig oauth-openshift.apps.okd.vv-int.io +short # probably overkill but also not
dig console-openshift-console.apps.okd.vv-int.io +short # probably overkill but also not
dig does-not-exist.apps.okd.vv-int.io +short # testing full
If everything is good to this point, I would just check that your DNS server can handle the traffic load.
for i in {1..50}; do dig test.apps.okd.vv-int.io +short; done # burst test
for i in {1..200}; do dig test.apps.okd.vv-int.io +short; sleep 0.1; done # sustained
Any timeouts could potentially destroy all that you are attempting to build, and that would be sad.
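If scrolling dig output is hard to eyeball, here is a slightly stricter variant of the same burst test that counts empty answers, so a flaky resolver shows up as a number instead of a wall of text. This is a sketch; the hostname assumes my okd cluster domain, so substitute your own.

```shell
# Count empty/failed answers across 50 rapid queries; anything
# above zero means your resolver is struggling under burst load.
fail=0
for i in $(seq 1 50); do
  [ -n "$(dig +short test.apps.okd.vv-int.io 2>/dev/null)" ] || fail=$((fail + 1))
done
echo "failures: $fail"
```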
Assuming you are good here, we will talk about the VMs, only from a DNS/DHCP perspective. These are the names I chose for each in my environment.
vv-bootstrap-okd.vv-int.io A 172.26.7.53 # disposed of after the install
vv-master-01-okd.vv-int.io A 172.26.7.51 # It's good to be king
vv-worker-01-okd.vv-int.io A 172.26.7.52 # backbone
According to some documentation (and random AI results), these do not strictly need to be in DNS for OpenShift to "do its thing." However, if you ever want to log into them by name resolution… yeah, nah, put them in DNS.
OKD wants a live ISO or PXE install of CoreOS, and I did not feel like setting up dependable PXE (maybe later). So I chose DHCP/MAC reservations: I applied the reservations on my router and set the DNS.
So now we are here:
vv-bootstrap-okd.vv-int.io A 172.26.7.53 BC:24:11:EF:E5:39
vv-master-01-okd.vv-int.io A 172.26.7.51 BC:24:11:93:A4:65
vv-worker-01-okd.vv-int.io A 172.26.7.52 BC:24:11:25:29:A6
This is the most flexible approach to me. I set the DNS, I bind the MAC to that IP, and nothing is embedded in the ISO beyond the kernel args, which lets me reuse the embedded-kernel-argument ISOs for multiple installs. The VM gets its hostname from DNS. We will get to those ISOs shortly.
OK, now choose a Linux VM on your network that can talk to all this stuff, and make yourself an install directory. I used the nginx/HAProxy VM, but it could be any VM.
Make yourself a directory.
mkdir -p /root/okd/installer # whatever works for you
cd /root/okd/installer # you know the drill
Now let's get the things: installer, client, the verified CoreOS download location, and a little cleanup.
# Installer
curl -L \
https://github.com/okd-project/okd/releases/download/4.20.0-okd-scos.2/openshift-install-linux-4.20.0-okd-scos.2.tar.gz \
-o openshift-install-linux-4.20.0-okd-scos.2.tar.gz
tar -xvf openshift-install-linux-4.20.0-okd-scos.2.tar.gz
chmod +x openshift-install
# Client
curl -L https://github.com/okd-project/okd/releases/download/4.20.0-okd-scos.2/openshift-client-linux-4.20.0-okd-scos.2.tar.gz \
-o openshift-client-linux-4.20.0-okd-scos.2.tar.gz
tar -xvf openshift-client-linux-4.20.0-okd-scos.2.tar.gz
chmod +x oc kubectl
# Let's see what we have
ls -la
# verify the installer version
./openshift-install version
# Get exact CoreOS version download link
./openshift-install coreos print-stream-json | \
jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'
# Clean it up
mv openshift-client-linux-4.20.0-okd-scos.2.tar.gz openshift-install-linux-4.20.0-okd-scos.2.tar.gz ../
What you should be seeing:

OK, as derived from the installer, the ISO link is:
curl -O https://rhcos.mirror.openshift.com/art/storage/prod/streams/c10s/builds/10.0.20250628-0/x86_64/scos-10.0.20250628-0-live-iso.x86_64.iso
# go get that tasty iso
What we should have now:

This is where I pause to talk about the install process before we get into the VM/ISO prep.
What we will be doing next, in this order:
Note: when you run the installer without passing a YAML file, it presents a ton of different provider options. I found none of them dependable for my install; you can just create your own YAML and go.

The following is necessary to build the install-config.yaml, which is core to this install.
On my first successful install of OKD, I used a pull secret from the Red Hat Network. I noticed it also had Quay creds in there. When I finally got things running, I saw a ton of red and no hats. It turned out OKD was trying to pull from the Red Hat registry; not fatal, you just remove those entries. I want to do this install as clean as possible, so I am scoping the secret to OKD.
OKD uses Quay as its upstream image registry. Authentication is optional but recommended to avoid image pull rate limits during installation.
Use a Quay robot account:
About how it should look:

Download it to your installer machine in the install directory, then run the following against it. It outputs the secret in a format that works for copying and pasting into install-config.yaml. I was beating my head against this for a while before I found a format the installer would take. These instructions would be the same for straight OpenShift, except you would get the pull secret directly from the Red Hat site. I noticed that Quay gives you a .yml, while RHN gave me a .txt file, so I will include commands for both.
# give me the format now
grep '\.dockerconfigjson:' cvolcko-okd-installer-secret.yml \
| awk '{print $2}' \
| base64 -d
This will give you valid JSON format for your install-config.yaml. Copy and paste that somewhere, getting ready to create the install-config.yaml.
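Before pasting, it's worth a quick jq sanity check that what you decoded is actually valid JSON with the expected shape. A sketch: the sample secret written below is a stand-in for your real decoded output.

```shell
# Stand-in pull secret with the structure OKD expects; replace the
# contents with your real decoded secret before checking it.
cat > /tmp/pull-secret-check.json <<'EOF'
{"auths":{"quay.io":{"auth":"REDACTED","email":""}}}
EOF
# jq -e exits non-zero if the JSON is malformed or the path is missing
jq -e '.auths."quay.io".auth' /tmp/pull-secret-check.json >/dev/null \
  && echo "pull secret JSON OK"
```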

# give me the format now
jq . pull-secret.txt | sed 's/^/ /'

# give me my server ssh key please
ssh-keygen -t ed25519 -f ~/.ssh/okd_id_ed25519
cat ~/.ssh/okd_id_ed25519.pub

Copy the whole line and save it. Nothing fancy; the root@vv-ha-01 part is just a comment field and isn't necessary.
The SSH key provided in install-config.yaml allows the installer and the administrator to securely access cluster nodes for debugging during and after installation. It is not used by Kubernetes itself and is only injected for human access.
I know we have not even gotten to the VMs yet, and it's deliberate. I feel that if everything is rock solid and done in this order, the rest will be obvious and sane. Honestly, the VMs are the last consideration on the list. Now let's get to creating the legendary install-config.yaml!!!
Note: once you create ignition files, the installer will delete your install-config.yaml. It was another stimulating find, but it is security-conscious. Also, the ignition/manifest/install metadata is only good for roughly 24 hours (I can't find an exact number). Say you have your templates all nice and snapshotted and you are cruising through multiple install attempts just to get a repeatable MOP: it may work for four attempts with the same ignition files, but at some point you will be burning time figuring out where the SSL errors came from. If you decide to do another deploy off your templates the next day, just delete all the files created by the installer and start over.
# From the installer dir, delete all files created by the installer
rm -rf auth tls metadata.json bootstrap.ign master.ign worker.ign manifests openshift *.ign *.json
Ask me how I know?
I suggest creating an install-config-backup directory, working on the YAML there, and copying it into the installer dir each time you run the installer.
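A minimal sketch of that workflow (the paths mirror my layout; adjust to taste). The demo below runs in a scratch directory so nothing real gets touched; swap in /root/okd for the real thing. The point is simple: the master copy lives in install-config-backup, and you copy it in fresh before every run, since the installer eats the one in the working dir.

```shell
# Scratch base dir standing in for /root/okd
base=$(mktemp -d)
mkdir -p "$base/install-config-backup" "$base/installer"
# the master copy you actually edit (placeholder content for the demo)
echo "apiVersion: v1" > "$base/install-config-backup/install-config.yaml"
# before each install attempt: restore a fresh copy into the installer dir
cp "$base/install-config-backup/install-config.yaml" "$base/installer/"
ls "$base/installer"
```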
First I will show the commented version of install-config.yaml so we can understand what's happening here. This version is purely for documentation and is fairly self-explanatory; I keep a saved copy of it to reference.
##### EXAMPLE ONLY ######
apiVersion: v1
# Installer API version (always v1 for OKD / OpenShift 4.x)

baseDomain: vv-int.io
# Existing DNS domain.
# The installer builds cluster names as:
#   <cluster-name>.<baseDomain>
# Example:
#   api.okd.vv-int.io
#   *.apps.okd.vv-int.io

metadata:
  name: okd
  # Cluster name (subdomain under baseDomain; to me this is the magic,
  # because it's the only thing that really needs to flip for additional installs)

compute:
- name: worker
  replicas: 1
  # Workers run the router, web console, and workloads
  platform: {}

controlPlane:
  name: master
  replicas: 1
  # Control-plane nodes run the API and etcd
  platform: {}

platform:
  none: {}
  # Bare metal / UPI install (no cloud provisioning)

networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16

pullSecret: |
  {
    "auths": {
      "quay.io": {
        "auth": "REDACTED",
        "email": ""
      }
    }
  }
# Registry auth for pulling OKD images from quay.io

sshKey: |
  ssh-ed25519 AAAAC3... root@vm-ha-01
# SSH access as 'core' for debugging
Ok, so now a clean version of install-config.yaml for my environment; adjust for yours.
apiVersion: v1
baseDomain: vv-int.io
metadata:
  name: okd
compute:
- name: worker
  replicas: 1
  platform: {}
controlPlane:
  name: master
  replicas: 1
  platform: {}
platform:
  none: {}
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
pullSecret: |
  {
    "auths": {
      "quay.io": {
        "auth": "YOURPULLSECRET",
        "email": ""
      }
    }
  }
sshKey: |
  ssh-ed25519 YOURSSHKEYHERE root@vm-name-01
Now this is what your dir should look like.

Ok, now we will create the manifests. This is also a validation test: if your YAML is broken, this step should expose it. Manifests are the installer-generated Kubernetes resources used during the bootstrap phase.
Let's create some manifests:
./openshift-install create manifests
This is what you should be seeing now. If you get an error at this phase, that's actually good, because it is most likely a simple formatting error. The installer creates two directories: manifests/ and openshift/.

Ok, we got those tasty manifests. Now let's create the ignitions; there will be three: bootstrap, master, and worker 🚀🚀🚀 Just one more warning: have your install-config.yaml backed up, because this step consumes the file and deletes it.
./openshift-install create ignition-configs
You will see your ignition files created and an auth/ directory added. The auth/ directory holds the initial admin credentials required to access and manage the cluster during and immediately after installation.

Ok, this is where we talk about next steps; we are close to VM config.
Make sure you are in the installer directory (where the ignition files live).
We will serve these files from a web server (in my case, a temporary Python server run directly in the installer directory).
# FW Rules
firewall-cmd --add-port=8080/tcp
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
# run the server
python3 -m http.server 8080

Test these from a VM that is not the installer machine, for extra sanity
curl -I http://172.26.7.50:8080/bootstrap.ign
curl -I http://172.26.7.50:8080/master.ign
curl -I http://172.26.7.50:8080/worker.ign

Ok that looks good 👍
Kernel arguments provide the live installer with the target disk and ignition URL so the node can self-install and configure on first boot. There are two approaches: type the arguments into the boot console every time, or embed them in the ISO.
After suffering through typing these lines into a console repeatedly, my only suggestion is to embed them; I will walk you through it next. Get one character wrong and you might as well rebuild the VM, and snapshotting and templating around that usually spirals into a net loss of time.
I will be installing coreos-installer on my AlmaLinux 9 HAProxy/installer VM
dnf install -y epel-release
dnf install -y coreos-installer
Ok, remember that ISO we downloaded way back when? We are now going to embed the kernel args using coreos-installer. Make a new directory in the installer directory, move the ISO into it, and let’s work there.
# make a new dir in the installer dir and move the iso into it
mkdir coreos
mv ./scos-10.0.20250628-0-live-iso.x86_64.iso coreos/
cd coreos

So now we will embed the kernel arguments and end up with three ISOs that we will attach to properly configured Proxmox VMs. I call them base templates, though I don't have them flagged as templates; I just clone them per install attempt. Below is the config that worked for me. The VMs get their hostnames through the MAC DHCP reservations and DNS. Adjust the ignition file URLs to your environment.
# bootstrap
coreos-installer iso customize \
--live-karg-append coreos.inst.install_dev=/dev/sda \
--live-karg-append coreos.inst.ignition_url=http://172.26.7.50:8080/bootstrap.ign \
scos-10.0.20250628-0-live-iso.x86_64.iso \
-o scos-bootstrap-okd.iso
# master
coreos-installer iso customize \
--live-karg-append coreos.inst.install_dev=/dev/sda \
--live-karg-append coreos.inst.ignition_url=http://172.26.7.50:8080/master.ign \
scos-10.0.20250628-0-live-iso.x86_64.iso \
-o scos-master-okd.iso
# worker
coreos-installer iso customize \
--live-karg-append coreos.inst.install_dev=/dev/sda \
--live-karg-append coreos.inst.ignition_url=http://172.26.7.50:8080/worker.ign \
scos-10.0.20250628-0-live-iso.x86_64.iso \
-o scos-worker-okd.iso
Once this is done, you will have three ISOs in the directory: scos-bootstrap-okd.iso, scos-master-okd.iso, and scos-worker-okd.iso.
Upload them to your Proxmox server, and finally, after years of prep, we can get to VM creation.

You will see below that I have three pre-seeded VM templates. Here is my process.

Remember these?
######################################### The Sauce
vv-bootstrap-okd.vv-int.io A 172.26.7.53 BC:24:11:EF:E5:39
vv-master-01-okd.vv-int.io A 172.26.7.51 BC:24:11:93:A4:65
vv-worker-01-okd.vv-int.io A 172.26.7.52 BC:24:11:25:29:A6
So what I am going to do is show you the VM config that matters. The horizontal gallery below should show you all the important VM config.
VM Config should be as follows:
- Memory: 16 GB minimum (all templates)
- Disk: 120 GB (all templates)
- CPU: 1/4 (1 socket, 4 cores) **TYPE HOST**
- Proper embedded ISO for the machine you are about to deploy
- Proper boot order **##Extremely important: DISK --> then ISO**
- Proper MAC added for the VM you are deploying
- I clone from the template VMs and snapshot them in a powered-off state
ONE MORE TIME: make sure the MAC addresses on the NICs are changed. If DHCP and DNS are working correctly, this is how the VM gets its hostname. If the VM boots with the wrong MAC, even after ignition runs, the console will still show localhost. Forget DNS for a second: if the MACs are not right, the VMs won't get the right IPs that correspond to the HAProxy config. So yeah, do the work.
At this point:
- We have cloned the VMs
- We have verified that they all have the proper ISO
- We have verified that they all have the **proper MAC** assigned at the NIC
- I snapshot these VMs in a never-booted, powered-off state, as it allows for easy recovery
At this point, one last check before we power these on.
Test DNS
nslookup test.apps.okd.vv-int.io # tests wildcard
dig test.apps.okd.vv-int.io # get a real look
dig api.okd.vv-int.io +short # validate API endpoints
dig api-int.okd.vv-int.io +short # validate API endpoints
dig oauth-openshift.apps.okd.vv-int.io +short # probably overkill but also not
dig console-openshift-console.apps.okd.vv-int.io +short # probably overkill but also not
dig does-not-exist.apps.okd.vv-int.io +short # testing full
Push the DNS server
for i in {1..50}; do dig test.apps.okd.vv-int.io +short; done # burst test
for i in {1..200}; do dig test.apps.okd.vv-int.io +short; sleep 0.1; done # sustained
Test the LB
nc -vz api.okd.vv-int.io 6443
nc -vz api.okd.vv-int.io 22623
Verify you are serving ignition files
curl -I http://172.26.7.50:8080/bootstrap.ign
curl -I http://172.26.7.50:8080/master.ign
curl -I http://172.26.7.50:8080/worker.ign
Let's talk about the install. The installer is not bad in the sense that, if you do all your testing, everything should just work. However, a few things are definite blockers. Because of all the "things" going on, and the timing differences between services due to hardware, speed, etc., it is difficult to find a single source of truth that it is doing what it should be doing. Here are the issues that hit me on every single deploy. In the end, because of the default logging capabilities, you end up watching CPU usage and knowing it is dead, even though proving that is not exactly straightforward.
About the built-in openshift-install logging: there is no way I can trust it to tell me what's broken. I need to see and verify. The only thing I found the installer debugging good for was AFTER you flip HAProxy over to the master VM following a successful bootstrap; it safely gives you a hard gate for when the bootstrap can be powered off.
First, here are the installer logging commands; I will show you where I used them as a gate.
./openshift-install wait-for bootstrap-complete --log-level=info
tail -f .openshift_install.log
I will walk through this install again with you, show you the signals at each step, and give the commands that show pass/fail. I will also include the repeatable issues I resolved.
Here are my rough timings for this run. They could probably shrink going forward, but they are a decent indicator; also, these VMs are running on SATA SSDs.
#Bootstrap
9:15 Booted
9:21 journalctl -fu bootkube # Machine config good API ready
#Master
9:28 booted master #two boots
9:32 master reboot
9:50 API Z ready
9:58 Bootstrap safe to remove
#Worker
10:13 Boot worker #
10:16 Worker Reboots
10:22 Approve CSR from OC
10:25 worker shows OC ready
10:35 Running UI
# make sure it's doing things and not throwing a ton of errors
# ssh to the bootstrap node
# when the API is ready/ok, it's time to boot the master
journalctl -fu bootkube # Should see action
crictl ps | grep kube-apiserver # If it makes you feel good
curl -k https://127.0.0.1:6443/readyz # This is the signal that you are good to boot master
curl -k https://127.0.0.1:6443/healthz # This is the signal that you are good to boot master
The master VM will boot, install to disk, and reboot (note the two boots in my timings). Once it is back up, verify from the installer VM:
# On installer VM
export KUBECONFIG=/root/okd/installer/auth/kubeconfig
oc get nodes
oc get csr
curl -k https://127.0.0.1:6443/readyz
curl -k https://127.0.0.1:6443/healthz
If those all look good, it's time to flip HAProxy: point the 6443 and 22623 backends at the master IP.
nano /etc/haproxy/haproxy.cfg
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
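Since both backends that need flipping (6443 and 22623) point at the bootstrap IP, a single substitution does the move. This is a sketch using my lab IPs, shown against a scratch copy; run the same sed against /etc/haproxy/haproxy.cfg for real, then check and reload as above.

```shell
# Scratch copy standing in for /etc/haproxy/haproxy.cfg
cfg=$(mktemp)
printf '    server master 172.26.7.53:6443 check\n    server master 172.26.7.53:22623 check\n' > "$cfg"
# flip bootstrap (.53) -> master (.51) on both backends at once
sed -i 's/172\.26\.7\.53/172.26.7.51/g' "$cfg"
grep '172.26.7.51' "$cfg"
```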
# Wait about 5 minutes
oc get nodes
oc get csr
curl -k https://api.okd.vv-int.io:6443/readyz
curl -k https://api.okd.vv-int.io:6443/healthz
Ok, let's boot the worker.
# every time, you will see Pending CSRs
oc get nodes
oc get csr
# approve them all, get it done
oc get csr -o name | xargs oc adm certificate approve
# ssh to the worker node and look for bad things
journalctl -u kubelet -f
curl -k https://api.okd.vv-int.io:6443/readyz
curl -k https://api.okd.vv-int.io:6443/healthz
Ingress always got stuck for me, apparently because there is only one worker: the router deployment carries a master nodeSelector, so even with a worker defined, it tries to schedule on the master. The patch below removes that selector so the router can land on the worker.
oc get pods -n openshift-ingress
oc patch deployment router-default -n openshift-ingress --type=json \
-p='[{"op":"remove","path":"/spec/template/spec/nodeSelector/node-role.kubernetes.io~1master"}]'
oc get pods -n openshift-ingress --show-labels
oc delete pod -n openshift-ingress router-default-6b4b5fd48-kgtzk
Now set up local auth.
oc create secret generic htpasswd-secret \
--from-file=htpasswd=./users.htpasswd \
-n openshift-config
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
EOF
oc get co authentication -w
oc adm policy add-cluster-role-to-user cluster-admin admin
oc logout
oc login https://api.okd.vv-int.io:6443 -u admin -p 'YourStrongPassword'
https://console-openshift-console.apps.okd.vv-int.io