# Exchanging messages using port forwarding

## Prerequisite

Before you start, you need to have access to a running Kubernetes cluster environment. A Minikube instance running on your laptop will do fine.

### Start minikube with a dedicated profile

```shell
minikube start --profile tutorialtester
minikube profile tutorialtester
```

```
* [tutorialtester] minikube v1.32.0 on Fedora 40
* Automatically selected the docker driver. Other choices: kvm2, qemu2, ssh
* Using Docker driver with root privileges
* Starting control plane node tutorialtester in cluster tutorialtester
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=15900MB) ...
* Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "tutorialtester" cluster and "default" namespace by default
* minikube profile was successfully set to tutorialtester
```

### Enable nginx and SSL passthrough for minikube

```shell
minikube addons enable ingress
minikube kubectl -- patch deployment -n ingress-nginx ingress-nginx-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value":"--enable-ssl-passthrough"}]'
```

```
* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
* Verifying ingress addon...
* The 'ingress' addon is enabled
deployment.apps/ingress-nginx-controller patched
```
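
To double-check that the patch landed, you can print the controller's arguments. A quick sketch (the grep pattern is just the flag added above):

```shell
# Optional check: the patched container args should now include the flag.
kubectl get deployment ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | grep -- '--enable-ssl-passthrough'
```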

### Make sure the domain of your cluster is resolvable

If you are running your cluster locally, you might not be able to resolve the URLs to IPs out of the box. Follow this guide to configure your setup.

This tutorial will follow the simple /etc/hosts approach, but feel free to use the one that is most appropriate for your setup.
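
For reference, a minimal /etc/hosts sketch. The hostname below is a hypothetical placeholder, not something created by this tutorial; substitute whatever name your setup actually exposes:

```shell
# Hypothetical example: map a cluster hostname to the minikube IP.
# 'my-broker.example.local' is a placeholder for your actual hostname.
echo "$(minikube ip --profile tutorialtester) my-broker.example.local" | sudo tee -a /etc/hosts
```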

## Deploy the operator

Create the namespace:

```shell
kubectl create namespace send-receive-project
kubectl config set-context --current --namespace=send-receive-project
```

```
namespace/send-receive-project created
Context "tutorialtester" modified.
```

Go to the root of the operator repo and install it:

```shell
./deploy/install_opr.sh
```

```
Deploying operator to watch single namespace
Client Version: 4.15.0-0.okd-2024-01-27-070424
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Kubernetes Version: v1.28.3
customresourcedefinition.apiextensions.k8s.io/activemqartemises.broker.amq.io created
customresourcedefinition.apiextensions.k8s.io/activemqartemisaddresses.broker.amq.io created
customresourcedefinition.apiextensions.k8s.io/activemqartemisscaledowns.broker.amq.io created
customresourcedefinition.apiextensions.k8s.io/activemqartemissecurities.broker.amq.io created
serviceaccount/activemq-artemis-controller-manager created
role.rbac.authorization.k8s.io/activemq-artemis-operator-role created
rolebinding.rbac.authorization.k8s.io/activemq-artemis-operator-rolebinding created
role.rbac.authorization.k8s.io/activemq-artemis-leader-election-role created
rolebinding.rbac.authorization.k8s.io/activemq-artemis-leader-election-rolebinding created
deployment.apps/activemq-artemis-controller-manager created
```

Wait for the Operator to start (status: `running`):

```shell
kubectl wait pod --all --for=condition=Ready --namespace=send-receive-project --timeout=600s
```

```
pod/activemq-artemis-controller-manager-55b8c479df-2nxqd condition met
```
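
If the wait times out, the operator logs are the first place to look. For example:

```shell
# Not needed on the happy path: tail the operator logs to diagnose a stuck start.
kubectl logs deployment/activemq-artemis-controller-manager -n send-receive-project --tail=20
```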

## Deploying the ActiveMQ Artemis Broker

For this tutorial we need to:

- have a broker that is able to listen on any network interface. For that we set up an acceptor that listens on every interface on port `62626`.
- have addresses to exchange messages on. These are configured by the broker properties: one called `APP.JOBS` that is of type `ANYCAST` (with a matching queue) and one called `APP.COMMANDS` that is of type `MULTICAST`.

```shell
kubectl apply -f - <<EOF
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: send-receive
  namespace: send-receive-project
spec:
  acceptors:
    - bindToAllInterfaces: true
      name: acceptall
      port: 62626
  brokerProperties:
    - addressConfigurations."APP.JOBS".routingTypes=ANYCAST
    - addressConfigurations."APP.JOBS".queueConfigs."APP.JOBS".routingType=ANYCAST
    - addressConfigurations."APP.COMMANDS".routingTypes=MULTICAST
EOF
```

```
activemqartemis.broker.amq.io/send-receive created
```

Wait for the Broker to be ready:

```shell
kubectl wait ActiveMQArtemis send-receive --for=condition=Ready --namespace=send-receive-project --timeout=240s
```

```
activemqartemis.broker.amq.io/send-receive condition met
```
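
Optionally, you can inspect the custom resource's status conditions to see what the operator has validated; a quick sketch:

```shell
# Optional: print each status condition of the send-receive custom resource.
kubectl get activemqartemis send-receive -n send-receive-project \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```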

## Exchanging messages between a producer and a consumer

Download ActiveMQ Artemis (this tutorial uses the 2.36.0 release), decompress the tarball and change into the directory containing the `artemis` executable:

```shell
wget --quiet https://archive.apache.org/dist/activemq/activemq-artemis/2.36.0/apache-artemis-2.36.0-bin.tar.gz
tar -zxf apache-artemis-2.36.0-bin.tar.gz apache-artemis-2.36.0/
cd apache-artemis-2.36.0/bin
```

```shell
# First we need to start port forwarding
kubectl port-forward send-receive-ss-0 62626 -n send-receive-project &

# Then produce and consume some messages
./artemis producer --destination APP.JOBS --url tcp://localhost:62626
./artemis consumer --destination APP.JOBS --url tcp://localhost:62626

# Finally we need to kill the port forwarding
pkill -9 kubectl
```

```
Forwarding from 127.0.0.1:62626 -> 62626
Forwarding from [::1]:62626 -> 62626
Connection brokerURL = tcp://localhost:62626
Handling connection for 62626
Handling connection for 62626
Producer ActiveMQQueue[APP.JOBS], thread=0 Started to calculate elapsed time ...
Producer ActiveMQQueue[APP.JOBS], thread=0 Produced: 1000 messages
Producer ActiveMQQueue[APP.JOBS], thread=0 Elapsed time in second : 8 s
Producer ActiveMQQueue[APP.JOBS], thread=0 Elapsed time in milli second : 8766 milli seconds
Connection brokerURL = tcp://localhost:62626
Consumer:: filter = null
Handling connection for 62626
Handling connection for 62626
Consumer ActiveMQQueue[APP.JOBS], thread=0 wait until 1000 messages are consumed
Received 1000
Consumer ActiveMQQueue[APP.JOBS], thread=0 Consumed: 1000 messages
Consumer ActiveMQQueue[APP.JOBS], thread=0 Elapsed time in second : 0 s
Consumer ActiveMQQueue[APP.JOBS], thread=0 Elapsed time in milli second : 68 milli seconds
Consumer ActiveMQQueue[APP.JOBS], thread=0 Consumed: 1000 messages
Consumer ActiveMQQueue[APP.JOBS], thread=0 Consumer thread finished
```
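
The run above exercises the ANYCAST queue `APP.JOBS`. To see the MULTICAST behavior of `APP.COMMANDS`, where every subscriber receives its own copy of each message, a sketch along the same lines (start the consumers before producing, with port forwarding active):

```shell
# Sketch: two subscribers on the MULTICAST address each receive all the messages.
kubectl port-forward send-receive-ss-0 62626 -n send-receive-project &
./artemis consumer --destination topic://APP.COMMANDS --url tcp://localhost:62626 &
./artemis consumer --destination topic://APP.COMMANDS --url tcp://localhost:62626 &
sleep 5  # give the subscribers time to attach before producing
./artemis producer --destination topic://APP.COMMANDS --url tcp://localhost:62626
# Each consumer exits once it has received the default 1000 messages.
pkill -9 kubectl
```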

## Cleanup

To leave a pristine environment after executing this tutorial, you can simply delete the minikube cluster.

```shell
minikube delete --profile tutorialtester
```

```
* Deleting "tutorialtester" in docker ...
* Deleting container "tutorialtester" ...
* Removing /home/tlavocat/.minikube/machines/tutorialtester ...
* Removed all traces of the "tutorialtester" cluster.
```