Welcome to tarbots.com

This site provides articles on everyday commands, procedures, and scripts for DevOps and infrastructure technologies. They will be very useful for System Admins, DevOps, CloudOps, Network Admins, and any teams or individuals working on infrastructure technologies.






Recent Articles

OCNE: How To Check Exact OCNE Version and Kubernetes Version In Oracle Cloud Native Environment 1.x Release?

Following are the steps to check the exact OCNE version and Kubernetes version in an Oracle Cloud Native Environment 1.x release.

On the OCNE Operator node, you can list the OCNE and Kubernetes packages that are installed.

To do this, run the command below:

#rpm -qa --last | egrep -i "olcne|kube"

olcne-api-server-1.9.2-3.el8.x86_64           Wed 26 Feb 2025 03:48:15 PM GMT

olcne-utils-1.9.2-3.el8.x86_64                Wed 26 Feb 2025 03:48:14 PM GMT

kubectl-1.29.9-3.el8.x86_64                   Wed 26 Feb 2025 03:48:10 PM GMT

olcne-selinux-1.0.0-9.el8.x86_64              Wed 26 Feb 2025 03:47:53 PM GMT

olcnectl-1.9.2-3.el8.x86_64                   Wed 26 Feb 2025 03:46:14 PM GMT


You can also list the Kubernetes and OCNE package versions on the Control node using the same command.

To gather the Kubernetes version of the Kubernetes nodes, run the kubectl command below on the Control node where you have set up the kubectl environment:

#kubectl get nodes -owide


NAME         STATUS   ROLES           AGE   VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION                   CONTAINER-RUNTIME

ol8-19ctrl   Ready    control-plane   16m   v1.29.9+3.el8   10.0.1.30     <none>        Oracle Linux Server 8.10   5.15.0-304.171.4.el8uek.x86_64   cri-o://1.29.1

ol8-19wrkr   Ready    <none>          15m   v1.29.9+3.el8   10.0.1.73     <none>        Oracle Linux Server 8.10   5.15.0-304.171.4.el8uek.x86_64   cri-o://1.29.1
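If you need the bare version number in a script, it can be pulled out of the rpm package name with plain shell parameter expansion. Below is a minimal sketch; the package name is copied from the output above:

```shell
# Extract the OCNE version from an rpm package name such as olcnectl-1.9.2-3.el8.x86_64.
pkg="olcnectl-1.9.2-3.el8.x86_64"
ver="${pkg#olcnectl-}"   # drop the package-name prefix -> 1.9.2-3.el8.x86_64
ver="${ver%%-*}"         # drop the release and arch suffix -> 1.9.2
echo "$ver"
```

The same pattern works for the kubectl package by swapping the prefix.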


OCNE: Install Oracle Cloud Native Environment (OCNE) 1.9 Non HA on OCI (Oracle Cloud Infrastructure)

Below are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 non-HA on OCI (Oracle Cloud Infrastructure).

1) Provision 3 OL8 instances from the OCI Cloud portal: 1 for the Operator node, 1 for the Control node, and 1 for the Worker node. You can have more worker nodes as well if you would like. The latest OL8 instances come with the UEK7 kernel. The default OCI user is opc.

2) For the opc user, enable passwordless SSH from the Operator node to the Control and Worker nodes and to itself.

To do this, run the steps below.

Generate a public key on the Operator node by running the command below.
# ssh-keygen -t rsa
The above command generates /home/opc/.ssh/id_rsa.pub, which is the public key file.

Copy the contents of the /home/opc/.ssh/id_rsa.pub key on the Operator node and append it to the end of the /home/opc/.ssh/authorized_keys file on the Operator, Control, and Worker nodes.

3) Verify that passwordless SSH works from the Operator node to itself and to the Control and Worker nodes using the ssh command.
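If ssh-copy-id is available on the Operator node, the copy-and-append in step 2 can be done in one command per node. Below is a dry-run sketch; the hostnames are placeholders for your actual node names:

```shell
# Append the Operator's ~/.ssh/id_rsa.pub to authorized_keys on each node.
# The leading 'echo' makes this a dry run; drop it to actually copy the key.
for node in operator-node control-node worker-node; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "opc@${node}"
done
```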

4) On the Operator node, the Control node, and all Worker nodes, install the oracle-olcne-release-el8 package:

#sudo dnf -y install oracle-olcne-release-el8

sudo dnf -y install oracle-olcne-release-el8
Last metadata expiration check: 1:27:12 ago on Wed 26 Feb 2025 03:41:24 AM GMT.
Dependencies resolved.
===========================================================================================
 Package                       Architecture Version          Repository               Size
===========================================================================================
Installing:
 oracle-ocne-release-el8       x86_64       1.0-12.el8       ol8_baseos_latest        16 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 20 k
Downloading Packages:
oracle-ocne-release-el8-1.0-12.el8.x86_64.rpm              214 kB/s |  16 kB     00:00    
-------------------------------------------------------------------------------------------
Total                                                      209 kB/s |  16 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Running scriptlet: oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Verifying        : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 

Installed:
  oracle-ocne-release-el8-1.0-12.el8.x86_64                                                

Complete!

5) On the Operator, Control, and Worker nodes, back up the /etc/yum.repos.d/oracle-ocne-ol8.repo file, then update it to change the OL8 developer repo name from ol8_developer_olcne to ol8_developer. To do this, run the command below.
# sudo sed -i 's/ol8_developer_olcne/ol8_developer/g' /etc/yum.repos.d/oracle-ocne-ol8.repo

6) On the Operator, Control, and Worker nodes, enable the OLCNE 1.9 repository and the other OL8 and kernel yum repositories:

# sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7


7) On the Operator, Control, and Worker nodes, disable the old OCNE repos:

sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_UEKR6


8) On the Operator, Control, and Worker nodes, verify that the OCNE 1.9 repo and the other repos enabled in steps (6) and (7) above are enabled:

#sudo dnf repolist enabled

sudo dnf repolist enabled
Repository ol8_developer is listed more than once in the configuration
repo id                        repo name
ol8_MySQL84                    MySQL 8.4 Server Community for Oracle Linux 8 (x86_64)
ol8_MySQL84_tools_community    MySQL 8.4 Tools Community for Oracle Linux 8 (x86_64)
ol8_MySQL_connectors_community MySQL Connectors Community for Oracle Linux 8 (x86_64)
ol8_UEKR7                      Latest Unbreakable Enterprise Kernel Release 7 for Oracle Linux 8 (x86_64)
ol8_addons                     Oracle Linux 8 Addons (x86_64)
ol8_appstream                  Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest              Oracle Linux 8 BaseOS Latest (x86_64)
ol8_ksplice                    Ksplice for Oracle Linux 8 (x86_64)
ol8_kvm_appstream              Oracle Linux 8 KVM Application Stream (x86_64)
ol8_oci_included               Oracle Software for OCI users on Oracle Linux 8 (x86_64)
ol8_olcne19                    Oracle Cloud Native Environment version 1.9 (x86_64)


9) On the Operator node, install the olcnectl software package:

# sudo dnf -y install olcnectl

sudo dnf -y install olcnectl
Repository ol8_developer is listed more than once in the configuration
Oracle Linux 8 BaseOS Latest (x86_64)                      215 kB/s | 4.3 kB     00:00    
Oracle Linux 8 Application Stream (x86_64)                 379 kB/s | 4.5 kB     00:00    
Oracle Linux 8 Addons (x86_64)                             286 kB/s | 3.5 kB     00:00    
Oracle Cloud Native Environment version 1.9 (x86_64)       736 kB/s |  89 kB     00:00    
Latest Unbreakable Enterprise Kernel Release 7 for Oracle  269 kB/s | 3.5 kB     00:00    
Oracle Linux 8 KVM Application Stream (x86_64)             8.4 MB/s | 1.6 MB     00:00    
Dependencies resolved.
===========================================================================================
 Package             Architecture      Version                Repository              Size
===========================================================================================
Installing:
 olcnectl            x86_64            1.9.2-3.el8            ol8_olcne19            4.8 M

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 4.8 M
Installed size: 15 M
Downloading Packages:
olcnectl-1.9.2-3.el8.x86_64.rpm                             21 MB/s | 4.8 MB     00:00    
-------------------------------------------------------------------------------------------
Total                                                       20 MB/s | 4.8 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : olcnectl-1.9.2-3.el8.x86_64                                       1/1 
  Verifying        : olcnectl-1.9.2-3.el8.x86_64                                       1/1 

Installed:
  olcnectl-1.9.2-3.el8.x86_64                                                              

Complete!
[opc@rhck-opr yum.repos.d]$ 



10) Run the olcnectl provision command to create the OCNE Kubernetes environment.

In the command below, replace the value of the --api-server flag with the Operator node name, --control-plane-nodes with the Control node names, and --worker-nodes with the Worker node names. For --environment-name, give the desired OCNE environment name, and for --name, give a Kubernetes cluster name of your choice.

olcnectl provision \
--api-server rhck-opr \
--control-plane-nodes rhck-ctrl \
--worker-nodes rhck-wrkr \
--environment-name cne-rhck-env \
--name cne-rhck-nonha-cluster \
--yes

Below is the console output of a successful run of the provision command, for reference.

#olcnectl provision \
> --api-server rhck-opr \
> --control-plane-nodes rhck-ctrl \
> --worker-nodes rhck-wrkr \
> --environment-name cne-rhck-env \
> --name cne-rhck-nonha-cluster \
> --yes
INFO[26/02/25 05:34:51] Generating certificate authority             
INFO[26/02/25 05:34:51] Generating certificate for rhck-opr          
INFO[26/02/25 05:34:51] Generating certificate for rhck-ctrl         
INFO[26/02/25 05:34:52] Generating certificate for rhck-wrkr         
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.key" to "/etc/olcne/certificates/node.key" on rhck-opr 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.key" to "/etc/olcne/certificates/node.key" on rhck-ctrl 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.key" to "/etc/olcne/certificates/node.key" on rhck-wrkr 
INFO[26/02/25 05:34:53] Apply api-server configuration on rhck-opr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Install API Server
    Add firewall port 8091/tcp
 
INFO[26/02/25 05:34:53] Apply control-plane configuration on rhck-ctrl:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
INFO[26/02/25 05:34:53] Apply worker configuration on rhck-wrkr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
Environment cne-rhck-env created.
Modules created successfully.
Modules installed successfully.
INFO[26/02/25 05:49:19] Kubeconfig for instance "cne-rhck-nonha-cluster" in environment "cne-rhck-env" written to kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster


11) Update the OCNE config so that olcnectl commands can be run without the --api-server argument. To do this, run the command below on the Operator node.

In the command below, replace the --api-server node name with the Operator node name, and --environment-name with the OCNE environment name given in the provision command in step (10) above.

olcnectl module instances \
--api-server rhck-opr:8091 \
--environment-name cne-rhck-env \
--update-config

Now rerun the olcnectl module instances command without the --api-server argument, as follows. This command lists the Control and Worker nodes and the Kubernetes cluster name.
olcnectl module instances --environment-name cne-rhck-env

#olcnectl module instances --environment-name cne-rhck-env
INSTANCE               MODULE     STATE    
rhck-ctrl:8090         node       installed
rhck-wrkr:8090         node       installed
cne-rhck-nonha-cluster kubernetes installed


12) Set up the kubectl environment on the Control node to run kubectl commands for Kubernetes operations. To do this, run the commands below on the Control node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

13) Validate that kubectl is working on the Control node, that the Kubernetes nodes are Ready, and that the pods are in Running state. To do this, run the kubectl commands below.
kubectl get nodes
kubectl get pods -A

Below are sample outputs for reference.

# kubectl get nodes

NAME        STATUS   ROLES           AGE   VERSION
rhck-ctrl   Ready    control-plane   11m   v1.29.9+3.el8
rhck-wrkr   Ready    <none>          10m   v1.29.9+3.el8

# kubectl get pods -A

NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-5859f68d4-2z6vq                       1/1     Running   0          11m
kube-system            coredns-5859f68d4-lqxxk                       1/1     Running   0          11m
kube-system            etcd-rhck-ctrl                                1/1     Running   0          11m
kube-system            kube-apiserver-rhck-ctrl                      1/1     Running   0          11m
kube-system            kube-controller-manager-rhck-ctrl             1/1     Running   0          11m
kube-system            kube-flannel-ds-gz548                         1/1     Running   0          8m49s
kube-system            kube-flannel-ds-rmpdt                         1/1     Running   0          8m49s
kube-system            kube-proxy-ffnzs                              1/1     Running   0          10m
kube-system            kube-proxy-n7kxf                              1/1     Running   0          11m
kube-system            kube-scheduler-rhck-ctrl                      1/1     Running   0          11m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-fnjtf         1/1     Running   0          8m48s
ocne-modules           verrazzano-module-operator-6cb74478bf-xv8z2   1/1     Running   0          8m48s
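As a quick scripted health check, you can count pods that are not in Running state with awk. Below is a sketch over a captured two-line sample; on the Control node you would pipe the live output of "kubectl get pods -A --no-headers" into the same awk filter:

```shell
# Count pods whose STATUS column (field 4) is not "Running".
# The sample stands in for live 'kubectl get pods -A --no-headers' output.
sample='kube-system   coredns-5859f68d4-2z6vq   1/1   Running   0   11m
kube-system   etcd-rhck-ctrl            1/1   Running   0   11m'
not_running=$(echo "$sample" | awk '$4 != "Running"' | wc -l | tr -d ' ')
echo "$not_running"   # 0 means every sampled pod is Running
```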

Now you have an installed OCNE Kubernetes environment ready to go.

- - -

ORACLE LINUX 8: How to Uninstall UEK Kernel and Make Redhat Compatible Kernel (RHCK) Default Boot Kernel?

Following are the steps to uninstall the UEK kernel and make the Red Hat Compatible Kernel (RHCK) the default boot kernel in Oracle Linux 8 (OL8).

1) List the currently installed kernels using the "rpm -qa" command as shown below.

As you can see, both the Oracle UEK7 kernel 5.15.x and the RHCK kernel 4.18.x are installed.

# rpm -qa | egrep -i kernel

kernel-tools-libs-4.18.0-553.34.1.el8_10.x86_64

kernel-core-4.18.0-553.34.1.el8_10.x86_64

kernel-uek-core-5.15.0-304.171.4.el8uek.x86_64

kernel-headers-4.18.0-553.34.1.el8_10.x86_64

kernel-tools-4.18.0-553.34.1.el8_10.x86_64

kernel-uek-5.15.0-304.171.4.el8uek.x86_64

kernel-uek-modules-5.15.0-304.171.4.el8uek.x86_64

kernel-modules-4.18.0-553.34.1.el8_10.x86_64

kernel-devel-4.18.0-553.34.1.el8_10.x86_64

kernel-uek-devel-5.15.0-304.171.4.el8uek.x86_64

kernel-4.18.0-553.34.1.el8_10.x86_64

2) List the grub order of the kernels using the grubby command as shown below. The output numbers the kernels starting at 0 and incrementing. As you can see in the output below, boot order 0 is the UEK kernel 5.15.x and boot order 1 is the RHCK kernel 4.18.x.

#sudo grubby --info=ALL | grep title | nl -v 0


0  title="Oracle Linux Server (5.15.0-304.171.4.el8uek.x86_64 with Unbreakable Enterprise Kernel) 8.10"

1  title="Oracle Linux Server (4.18.0-553.34.1.el8_10.x86_64) 8.10"

title="Oracle Linux Server 8 (0-rescue-b5bf925c42f4075a28da8441ac55fcdf) "

3) List the default boot kernel using the grubby command as follows. As you can see, the default boot kernel is the UEK7 kernel 5.15.x.

#sudo grubby --default-kernel

/boot/vmlinuz-5.15.0-304.171.4.el8uek.x86_64

4) Change the default kernel to the RHCK kernel using the grub2-set-default command and the order number corresponding to the RHCK kernel from step (2) above. In this case the RHCK kernel boot order is 1.

# sudo grub2-set-default 1

5) Verify that the default kernel has changed to the RHCK kernel.

sudo grubby --default-kernel

/boot/vmlinuz-4.18.0-553.34.1.el8_10.x86_64

6) Now reboot the node using the reboot command.

#sudo reboot -n


7) After the reboot, log back into the node and check the default kernel as follows. You should now see the RHCK kernel as the default kernel.

#sudo grubby --default-kernel

/boot/vmlinuz-4.18.0-553.34.1.el8_10.x86_64

8) Check that the currently active kernel is the RHCK kernel using the uname command as follows:

#uname -r

4.18.0-553.34.1.el8_10.x86_64

9) Now remove the kernel-uek* packages to uninstall the UEK kernel as follows:

#sudo dnf -y remove kernel-uek*

10) Back up the /etc/sysconfig/kernel file, then update it to change the DEFAULTKERNEL line below

From

DEFAULTKERNEL=kernel-core

To

DEFAULTKERNEL=kernel
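The backup and edit in this step can be done with cp and sed. Below is a sketch run against a scratch copy under /tmp; on the real node, point the same commands at /etc/sysconfig/kernel with sudo:

```shell
# Scratch copy standing in for /etc/sysconfig/kernel.
printf 'DEFAULTKERNEL=kernel-core\nUPDATEDEFAULT=yes\n' > /tmp/kernel.sysconfig
cp /tmp/kernel.sysconfig /tmp/kernel.sysconfig.bak             # backup first
sed -i 's/^DEFAULTKERNEL=kernel-core$/DEFAULTKERNEL=kernel/' /tmp/kernel.sysconfig
grep '^DEFAULTKERNEL' /tmp/kernel.sysconfig                    # DEFAULTKERNEL=kernel
```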

11) Reboot the VM again and log back in.


- - - 


LINUX: How to Connect to Remote Host via Alias Name Using SSH Config File

Following are the steps for connecting to a remote host via an alias name using the SSH config file.

All the steps below have to be executed on the Linux host from which you want to connect to the remote server using an alias name.

NOTE: The steps below can also be applied in the macOS terminal on a MacBook.

1) Create a config file inside the .ssh folder of the user's home directory.

touch ~/.ssh/config
2) Edit the ~/.ssh/config file as follows:
Host <alias name>
    HostName <IP/hostname>
    User <username>
    ProxyCommand nc -X connect -x <proxy> %h %p
    ServerAliveInterval <interval>
  • <alias name> with the alias name you want to use for the server you want to connect to, e.g. testmachine
  • <IP/hostname> with the hostname/IP of the remote server, e.g. 10.10.10.10
  • <username> with the username on the remote server, e.g. appuser
  • <proxy> with the proxy details, if any. If there is no proxy, you can remove this line.
  • <interval> with the ServerAliveInterval in seconds. You can skip this setting as well.

There are other SSH settings you can place inside the config file as well. The above are just a few examples.

3) Now connect to the remote server using the alias name as follows, replacing <alias name> with the alias name you set in the config file.
ssh <alias name>
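Putting the template together, a filled-in ~/.ssh/config entry might look like this; all values are hypothetical examples:

```
Host testmachine
    HostName 10.10.10.10
    User appuser
    ServerAliveInterval 60
```

With this entry in place, running "ssh testmachine" connects as appuser to 10.10.10.10.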