Creating Operator-based broker deployments

Deploying a basic broker instance

The following procedure shows how to use a Custom Resource (CR) instance to create a basic broker deployment.

  • You cannot create more than one broker deployment in a given Kubernetes project by deploying multiple Custom Resource (CR) instances. However, when you have created a broker deployment in a project, you can deploy multiple CR instances for addresses.

  • You must have already installed the ArtemisCloud Operator.

    • To use the Kubernetes command-line interface (CLI) to install the ArtemisCloud Operator, see Installing the Operator.


When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR instance to deploy a basic broker in your project.

  1. Start configuring a Custom Resource (CR) instance for the broker deployment.

    1. Using the Kubernetes command-line interface:

      1. Switch to the namespace that you are using for your project:

        $ kubectl config set-context $(kubectl config current-context) --namespace=<project-name>
      2. Open the sample CR file called broker_activemqartemis_cr.yaml that is included in the deploy/crs directory of the Operator installation archive that you downloaded and extracted. For a basic broker deployment, the configuration might resemble that shown below. This configuration is the default content of the broker_activemqartemis_cr.yaml sample CR.

    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
      version: 7.7.0
      deploymentPlan:
        size: 2

    Observe that the sample CR uses a naming convention of ex-aao. This naming convention denotes that the CR is an example resource for the ArtemisCloud (based on the ActiveMQ Artemis project) Operator. When you deploy this sample CR, the resulting StatefulSet uses the name ex-aao-ss. Furthermore, broker Pods in the deployment are named after the StatefulSet, for example, ex-aao-ss-0, ex-aao-ss-1, and so on. The application name in the CR appears in the deployment as a label on the StatefulSet. You might use this label in a Pod selector, for example.
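    For example, assuming the label key is application (as in the sample CR), you could select the broker Pods by that label:

```shell
# List only the Pods that carry the application label from the sample CR.
# The label key "application" is an assumption based on the CR above.
kubectl get pods -l application=ex-aao-app
```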

  2. The size value specifies the number of brokers to deploy. The default value of 2 specifies a clustered broker deployment of two brokers. However, to deploy a single broker instance, change the value to 1.
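    For a single-broker deployment, the relevant CR fragment would look like the following sketch (the surrounding fields are unchanged from the sample CR):

```yaml
spec:
  deploymentPlan:
    size: 1  # one broker Pod; the default of 2 creates a two-broker cluster
```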

  3. The image value specifies the container image used to launch the broker. Ensure that this value specifies the latest version of the ActiveMQ Artemis broker container image in the Red Hat Ecosystem Catalog.

    In the preceding step, the image attribute specifies a floating image tag (that is, ) rather than a full image tag (for example, -5). When you specify a floating tag, your deployment uses the latest image available in the image stream. In addition, when you specify a floating tag such as this, if the imagePullPolicy attribute in your StatefulSet is set to Always, your deployment automatically pulls and uses new micro image versions (for example, -6, -7, and so on) as they become available.
  4. Deploy the CR instance.

    1. Save the CR file.

    2. Switch to the namespace in which you are creating the broker deployment.

      $ kubectl config set-context $(kubectl config current-context) --namespace=<project-name>
    3. Create the CR.

      $ kubectl create -f <path/to/custom-resource-instance>.yaml
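    To confirm that the Operator has picked up the CR, you can query the custom resource and watch the broker Pods start (the resource type name activemqartemis is an assumption based on the kind field in the sample CR):

```shell
# Check that the CR exists in the namespace.
kubectl get activemqartemis
# Watch the Operator create the ex-aao-ss StatefulSet's Pods.
kubectl get pods -w
```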
  5. In the Kubernetes web console, you see a new StatefulSet called ex-aao-ss.

    1. Click the ex-aao-ss StatefulSet. You see that there is one Pod, corresponding to the single broker that you defined in the CR.

    2. Within the StatefulSet, click the Pod link. The status of the Pod is shown as Running. Click the logs link in the top right corner to see the broker's output.

  6. To test that the broker is running normally, access a shell on the broker Pod to send some test messages.

    1. Using the Kubernetes web console:

      1. Click Pods in the left menu.

      2. Click the ex-aao-ss Pod.

      3. In the top right-hand corner, click the link to exec into the Pod.

    2. Using the Kubernetes command-line interface:

      1. Get the Pod names and internal IP addresses for your project.

        $ kubectl get pods -o wide
        NAME                          STATUS   IP
        amq-broker-operator-54d996c   Running
        ex-aao-ss-0                   Running
      2. Access the shell for the broker Pod.

        $ kubectl exec --stdin --tty ex-aao-ss-0 -- /bin/bash
  7. From the shell, use the artemis command to send some test messages. Specify the internal IP address of the broker Pod in the URL. For example:

    sh-4.2$ ./amq-broker/bin/artemis producer --url tcp:// --destination queue://demoQueue

    The preceding command automatically creates a queue called demoQueue on the broker and sends a default quantity of 1000 messages to the queue.

    You should see output that resembles the following:

    Connection brokerURL = tcp://
    Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time ...
    Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 3 s
    Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 3492 milli seconds
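    To verify delivery end to end, you can also drain the queue again with the artemis consumer command (a sketch; it uses the same elided broker URL and the demoQueue created by the producer):

```shell
# Consume the test messages that the producer sent to demoQueue.
# The broker IP is elided here, as in the producer example above.
sh-4.2$ ./amq-broker/bin/artemis consumer --url tcp:// --destination queue://demoQueue
```

    The consumer output reports how many messages it received from the queue.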

Deploying clustered brokers

If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing.

The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on-demand message load balancing, meaning that brokers forward messages only to other brokers that have matching consumers.
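Under the covers, on-demand load balancing corresponds to the ON_DEMAND message-load-balancing policy in the broker's cluster-connection configuration. An illustrative broker.xml fragment (the Operator generates this configuration for you; the connection name here is hypothetical):

```xml
<cluster-connection name="my-cluster">
  <!-- Forward messages only to brokers that have matching consumers. -->
  <message-load-balancing>ON_DEMAND</message-load-balancing>
</cluster-connection>
```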

  1. Open the CR file that you used for your basic broker deployment.

  2. For a clustered deployment, ensure that the value of deploymentPlan.size is 2 or greater. For example:

    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
      version: 7.7.0
      deploymentPlan:
        size: 4
  3. Save the modified CR file.

  4. Switch to your project's namespace:

    $ kubectl config set-context $(kubectl config current-context) --namespace=<project-name>
  5. At the command line, apply the change:

    $ kubectl apply -f <path/to/custom-resource-instance>.yaml

    In the Kubernetes web console, additional broker Pods start in your project, according to the number specified in your CR. By default, the brokers running in the project are clustered.
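    You can also confirm the scale-up from the command line (the StatefulSet name ex-aao-ss follows from the CR name, as described earlier):

```shell
# Check that the StatefulSet now reports the new replica count.
kubectl get statefulset ex-aao-ss
# List the broker Pods; with size: 4 you should see ex-aao-ss-0 through ex-aao-ss-3.
kubectl get pods
```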

  6. Open the Logs tab of each Pod. The logs show that each broker has established a cluster connection bridge to the other brokers. Specifically, the log output includes a line like the following:

    targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88

Applying Custom Resource changes to running broker deployments

The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments:

  • You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size.

  • The value of the deploymentPlan.size attribute in your CR overrides any change you make to the size of your broker deployment via the kubectl scale command. For example, suppose you use kubectl scale to change the size of a deployment from three brokers to two, but the value of deploymentPlan.size in your CR is still 3. In this case, Kubernetes initially scales the deployment down to two brokers. However, when the scaledown operation is complete, the Operator restores the deployment to three brokers, as specified in the CR.
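    For example, the following sketch shows a manual scale operation that the Operator subsequently reverts (the StatefulSet name follows the ex-aao-ss convention from the sample CR):

```shell
# Manually scale the StatefulSet down from three brokers to two.
kubectl scale statefulset ex-aao-ss --replicas=2
# Once scaledown completes, the Operator restores the replica count
# to the deploymentPlan.size value (3) that is still set in the CR.
```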

  • As described in Deploying the Operator using the CLI, if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision Persistent Volumes (PVs) for the ArtemisCloud Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, the ArtemisCloud Operator does not release Persistent Volume Claims (PVCs) for any broker Pods that are still in the deployment when you remove it. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Releasing volumes in the Kubernetes documentation.
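    A minimal sketch of cleaning up by hand, assuming the usual StatefulSet PVC naming convention of <volume-name>-<statefulset-name>-<ordinal>; the exact claim names in your project may differ:

```shell
# List the claims that remain after the CR was deleted.
kubectl get pvc
# Delete a leftover claim so that its Persistent Volume can be reclaimed.
kubectl delete pvc <pvc-name>
```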

  • During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one. Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker.

  • All CR changes (apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console) cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time.