Below are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 in a non-HA configuration on OCI (Oracle Cloud Infrastructure).
1) Provision 3 OL8 instances from the OCI Cloud portal - 1 for the Operator node, 1 for the Control Plane node, and 1 for the Worker node. You can have more worker nodes as well if you would like. The latest OL8 instances come with the UEK7 kernel. The default OCI user is opc.
2) For the opc user, enable passwordless SSH from the Operator node to the Control and Worker nodes and to itself.
To do this, run the steps below.
Generate a public key on the Operator node by running the following command.
# ssh-keygen -t rsa
The above command generates /home/opc/.ssh/id_rsa.pub, which is the public key file.
Copy the contents of the /home/opc/.ssh/id_rsa.pub key on the Operator node and append it to the end of the /home/opc/.ssh/authorized_keys file on the Operator, Control, and Worker nodes.
3) Verify that passwordless SSH works from the Operator node to itself and to the Control and Worker nodes using the ssh command.
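The verification can be scripted instead of checked host by host. The sketch below, under the assumption that the node names from this walkthrough (rhck-opr, rhck-ctrl, rhck-wrkr) are resolvable from the Operator node, uses BatchMode so ssh fails immediately instead of prompting for a password:

```shell
#!/bin/bash
# Sketch: verify passwordless SSH from the Operator node to every node,
# including itself. The host names are this walkthrough's examples;
# replace them with your own.
NODES="rhck-opr rhck-ctrl rhck-wrkr"

# Build the non-interactive test command for one host.
# BatchMode=yes makes ssh fail instead of prompting for a password.
ssh_check_cmd() {
    echo "ssh -o BatchMode=yes -o ConnectTimeout=5 opc@$1 true"
}

for node in $NODES; do
    if $(ssh_check_cmd "$node") 2>/dev/null; then
        echo "$node: passwordless SSH OK"
    else
        echo "$node: passwordless SSH FAILED"
    fi
done
```

If any node reports FAILED, re-check that the Operator node's public key was appended to that node's authorized_keys file before continuing.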
4) On the Operator node, Control node, and all Worker nodes, install the oracle-olcne-release-el8 package:
# sudo dnf -y install oracle-olcne-release-el8
Last metadata expiration check: 1:27:12 ago on Wed 26 Feb 2025 03:41:24 AM GMT.
Dependencies resolved.
===========================================================================================
 Package                   Architecture   Version      Repository          Size
===========================================================================================
Installing:
 oracle-ocne-release-el8   x86_64         1.0-12.el8   ol8_baseos_latest   16 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 20 k
Downloading Packages:
oracle-ocne-release-el8-1.0-12.el8.x86_64.rpm               214 kB/s |  16 kB   00:00
-------------------------------------------------------------------------------------------
Total                                                       209 kB/s |  16 kB   00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing         :                                            1/1
  Installing        : oracle-ocne-release-el8-1.0-12.el8.x86_64  1/1
  Running scriptlet : oracle-ocne-release-el8-1.0-12.el8.x86_64  1/1
  Verifying         : oracle-ocne-release-el8-1.0-12.el8.x86_64  1/1

Installed:
  oracle-ocne-release-el8-1.0-12.el8.x86_64

Complete!
5) On the Operator, Control, and Worker nodes, back up the /etc/yum.repos.d/oracle-ocne-ol8.repo file and update it to change the ol8 developer repo name from ol8_developer_olcne to ol8_developer. To do this, run the command below.
# sudo sed -i 's/ol8_developer_olcne/ol8_developer/g' /etc/yum.repos.d/oracle-ocne-ol8.repo
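To see what the substitution does before touching the real repo file, it can be tried against a scratch copy. The repo stanza below is a minimal stand-in for illustration, not the full contents of oracle-ocne-ol8.repo:

```shell
#!/bin/bash
# Sketch: the same sed substitution as above, demonstrated on a scratch
# file so nothing system-wide is touched.
tmprepo=$(mktemp)
cat > "$tmprepo" <<'EOF'
[ol8_developer_olcne]
name=Oracle Linux 8 Development Packages ($basearch)
enabled=1
EOF

# Same substitution as in the step above, against the scratch file:
sed -i 's/ol8_developer_olcne/ol8_developer/g' "$tmprepo"

grep '^\[' "$tmprepo"    # section header is now [ol8_developer]
rm -f "$tmprepo"
```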
6) On the Operator, Control, and Worker nodes, enable the OLCNE 1.9 and other OL8 and kernel yum repositories:
# sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
7) On the Operator, Control, and Worker nodes, disable the old OCNE repos:
# sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_UEKR6
8) On the Operator, Control, and Worker nodes, verify that the OCNE 1.9 repo and the other repos enabled in step 6 above are enabled:
# sudo dnf repolist enabled
Repository ol8_developer is listed more than once in the configuration
repo id                          repo name
ol8_MySQL84                      MySQL 8.4 Server Community for Oracle Linux 8 (x86_64)
ol8_MySQL84_tools_community      MySQL 8.4 Tools Community for Oracle Linux 8 (x86_64)
ol8_MySQL_connectors_community   MySQL Connectors Community for Oracle Linux 8 (x86_64)
ol8_UEKR7                        Latest Unbreakable Enterprise Kernel Release 7 for Oracle Linux 8 (x86_64)
ol8_addons                       Oracle Linux 8 Addons (x86_64)
ol8_appstream                    Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest                Oracle Linux 8 BaseOS Latest (x86_64)
ol8_ksplice                      Ksplice for Oracle Linux 8 (x86_64)
ol8_kvm_appstream                Oracle Linux 8 KVM Application Stream (x86_64)
ol8_oci_included                 Oracle Software for OCI users on Oracle Linux 8 (x86_64)
ol8_olcne19                      Oracle Cloud Native Environment version 1.9 (x86_64)
9) On the Operator node, install the olcnectl software package:
# sudo dnf -y install olcnectl
Repository ol8_developer is listed more than once in the configuration
Oracle Linux 8 BaseOS Latest (x86_64)                       215 kB/s | 4.3 kB   00:00
Oracle Linux 8 Application Stream (x86_64)                  379 kB/s | 4.5 kB   00:00
Oracle Linux 8 Addons (x86_64)                              286 kB/s | 3.5 kB   00:00
Oracle Cloud Native Environment version 1.9 (x86_64)        736 kB/s |  89 kB   00:00
Latest Unbreakable Enterprise Kernel Release 7 for Oracle   269 kB/s | 3.5 kB   00:00
Oracle Linux 8 KVM Application Stream (x86_64)              8.4 MB/s | 1.6 MB   00:00
Dependencies resolved.
===========================================================================================
 Package     Architecture   Version       Repository    Size
===========================================================================================
Installing:
 olcnectl    x86_64         1.9.2-3.el8   ol8_olcne19   4.8 M

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 4.8 M
Installed size: 15 M
Downloading Packages:
olcnectl-1.9.2-3.el8.x86_64.rpm                              21 MB/s | 4.8 MB   00:00
-------------------------------------------------------------------------------------------
Total                                                        20 MB/s | 4.8 MB   00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing  :                              1/1
  Installing : olcnectl-1.9.2-3.el8.x86_64  1/1
  Verifying  : olcnectl-1.9.2-3.el8.x86_64  1/1

Installed:
  olcnectl-1.9.2-3.el8.x86_64

Complete!
10) Run the olcnectl provision command to create the OCNE Kubernetes environment.
In the command below, replace the value of the --api-server flag with the Operator node name, --control-plane-nodes with the control plane node names, and --worker-nodes with the worker node names. For --environment-name, give the desired OCNE environment name; for --name, give a Kubernetes cluster name of your choice.
# olcnectl provision \
  --api-server rhck-opr \
  --control-plane-nodes rhck-ctrl \
  --worker-nodes rhck-wrkr \
  --environment-name cne-rhck-env \
  --name cne-rhck-nonha-cluster \
  --yes
Below is the console output of a successful run of the provision command, for reference.
# olcnectl provision \
> --api-server rhck-opr \
> --control-plane-nodes rhck-ctrl \
> --worker-nodes rhck-wrkr \
> --environment-name cne-rhck-env \
> --name cne-rhck-nonha-cluster \
> --yes
INFO[26/02/25 05:34:51] Generating certificate authority
INFO[26/02/25 05:34:51] Generating certificate for rhck-opr
INFO[26/02/25 05:34:51] Generating certificate for rhck-ctrl
INFO[26/02/25 05:34:52] Generating certificate for rhck-wrkr
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-opr
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.key" to "/etc/olcne/certificates/node.key" on rhck-opr
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-ctrl
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.key" to "/etc/olcne/certificates/node.key" on rhck-ctrl
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-wrkr
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.key" to "/etc/olcne/certificates/node.key" on rhck-wrkr
INFO[26/02/25 05:34:53] Apply api-server configuration on rhck-opr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Install API Server
  Add firewall port 8091/tcp
INFO[26/02/25 05:34:53] Apply control-plane configuration on rhck-ctrl:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
  Add interface cni0 to trusted zone
  Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
INFO[26/02/25 05:34:53] Apply worker configuration on rhck-wrkr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
  Add interface cni0 to trusted zone
  Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
Environment cne-rhck-env created.
Modules created successfully.
Modules installed successfully.
INFO[26/02/25 05:49:19] Kubeconfig for instance "cne-rhck-nonha-cluster" in environment "cne-rhck-env" written to kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster
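If you expect to rerun provisioning with different node or cluster names, the command can be parameterized in a small wrapper. This is only a sketch; the node and cluster names are the examples from this walkthrough, and the script prints the assembled command for review rather than running it:

```shell
#!/bin/bash
# Sketch: the provision step above with all names gathered in one place.
# Replace these example values with your own node and cluster names.
API_SERVER=rhck-opr
CONTROL_NODES=rhck-ctrl
WORKER_NODES=rhck-wrkr
ENV_NAME=cne-rhck-env
CLUSTER_NAME=cne-rhck-nonha-cluster

# Assemble the full command line; printing it first lets you review
# it before running.
provision_cmd() {
    echo olcnectl provision \
        --api-server "$API_SERVER" \
        --control-plane-nodes "$CONTROL_NODES" \
        --worker-nodes "$WORKER_NODES" \
        --environment-name "$ENV_NAME" \
        --name "$CLUSTER_NAME" \
        --yes
}

echo "Would run: $(provision_cmd)"
# To actually provision, execute it:  eval "$(provision_cmd)"
```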
11) Update the OCNE config so that olcnectl commands can be run without the --api-server argument. To do this, run the command below on the Operator node.
In the command below, replace the --api-server node name with the Operator node name, and --environment-name with the OCNE environment name given in the provision command in step 10 above.
olcnectl module instances \
  --api-server rhck-opr:8091 \
  --environment-name cne-rhck-env \
  --update-config
Now rerun the olcnectl module instances command without the --api-server argument, as follows. This command lists the Control and Worker nodes and the Kubernetes cluster name.
olcnectl module instances --environment-name cne-rhck-env
# olcnectl module instances --environment-name cne-rhck-env
INSTANCE                 MODULE       STATE
rhck-ctrl:8090           node         installed
rhck-wrkr:8090           node         installed
cne-rhck-nonha-cluster   kubernetes   installed
12) Set up the kubectl environment on the Control node to run kubectl commands for Kubernetes operations. To do this, run the commands below on the Control node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
13) Validate that kubectl works on the Control node, that the Kubernetes nodes are Ready, and that the pods are in the Running state. To do this, run the kubectl commands below.
kubectl get nodes
kubectl get pods -A
Below are sample outputs for reference.
# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
rhck-ctrl   Ready    control-plane   11m   v1.29.9+3.el8
rhck-wrkr   Ready    <none>          10m   v1.29.9+3.el8
# kubectl get pods -A
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-5859f68d4-2z6vq                       1/1     Running   0          11m
kube-system            coredns-5859f68d4-lqxxk                       1/1     Running   0          11m
kube-system            etcd-rhck-ctrl                                1/1     Running   0          11m
kube-system            kube-apiserver-rhck-ctrl                      1/1     Running   0          11m
kube-system            kube-controller-manager-rhck-ctrl             1/1     Running   0          11m
kube-system            kube-flannel-ds-gz548                         1/1     Running   0          8m49s
kube-system            kube-flannel-ds-rmpdt                         1/1     Running   0          8m49s
kube-system            kube-proxy-ffnzs                              1/1     Running   0          10m
kube-system            kube-proxy-n7kxf                              1/1     Running   0          11m
kube-system            kube-scheduler-rhck-ctrl                      1/1     Running   0          11m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-fnjtf         1/1     Running   0          8m48s
ocne-modules           verrazzano-module-operator-6cb74478bf-xv8z2   1/1     Running   0          8m48s
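The two manual checks above can also be scripted. The helpers below are a sketch: they read the --no-headers form of the kubectl output on stdin and count entries that are not healthy, with field positions matching the sample outputs in this step. The helper names and invocations are illustrative, not part of OCNE:

```shell
#!/bin/bash
# Sketch: scripted versions of the manual readiness checks.
# Hypothetical usage on the Control node:
#   kubectl get nodes --no-headers | count_not_ready
#   kubectl get pods -A --no-headers | count_not_running

# Column 2 of `kubectl get nodes` output is STATUS.
count_not_ready() {
    awk '$2 != "Ready" { bad++ } END { print bad + 0 }'
}

# Column 4 of `kubectl get pods -A` output is STATUS; pods that have
# Completed are also considered healthy.
count_not_running() {
    awk '$4 != "Running" && $4 != "Completed" { bad++ } END { print bad + 0 }'
}
```

A count of 0 from both helpers corresponds to the all-Ready, all-Running state shown in the sample outputs above.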
Now you have an installed OCNE Kubernetes environment ready to go.
- - -
Keywords added for search:
OCNE installation