Create Compute System Users


IBM PowerVM preparation

A user with at least read-only (hmcviewer) access to the HMC is required for the operation of the BVQ PowerVM Scanner.

(warning) Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".

(warning) Please open the user properties dialogue and select "Allow remote access via the web".

Screenshots: Add User dialogue / User Properties dialogue
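(info) The user can also be created on the HMC command line. The following is a minimal sketch, assuming SSH access to the HMC and the mkhmcusr/lshmcusr syntax of current HMC releases; exact options may differ per HMC version, and the "Allow remote access via the web" setting still has to be enabled in the user properties dialogue:

# Create a local HMC user "bvq" with the read-only hmcviewer task role
# (the command prompts for the new password)
mkhmcusr -t local -u bvq -a hmcviewer -d "BVQ PowerVM Scanner user"

# Verify the user and its task role
lshmcusr --filter "names=bvq"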






Enable performance data collection

BVQ can only collect performance statistics if "Data Collection" is enabled on the managed systems and LPARs.

  • For better HMC performance, we recommend changing the Performance Data Storage value to "1".
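
(info) Data collection can also be checked and enabled from the HMC command line. A minimal sketch, assuming the lslparutil/chlparutil commands of current HMC releases; the managed system name Power-System-01 is a placeholder:

# Show the current utilization data collection settings of all managed systems
lslparutil -r config

# Enable data collection for one managed system with a 60-second sample rate
# (a sample rate of 0 disables collection)
chlparutil -r config -m Power-System-01 -s 60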
     

BVQ Scanner configuration

To configure a PowerVM scanner in BVQ, the following information is required:

  • IP address or hostname of the HMC
  • Username and password of the HMC user created for BVQ

(warning) Starting with BVQ 2023.H1: Redundant HMCs managing the same systems must be configured in the same PowerVM scanner. Otherwise, the managed systems will appear twice in BVQ. Define the most powerful HMC first, because the order of HMCs determines the order in which they are scanned by BVQ. Additional HMCs managing other systems should be configured in an additional PowerVM scanner.

Up to BVQ 2022.H2: Typically, two redundant HMCs manage the same IBM Power systems. Please ensure that a scanner is created for only one of these HMCs to avoid duplicate systems in BVQ.






Kubernetes preparation

Kubernetes (k8s) clusters are scanned via two different methods:

Kubernetes API Server

To gain access to the k8s API server, the following preparations must be made:

  1. Create a CustomResourceDefinition (CRD) to set up a k8s cluster as master grouping object (MGO) definition for BVQ
  2. Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster
  3. Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster
  4. Create a ServiceAccount for authentication
  5. Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole

(info) Use kubectl apply -f <file> to create the expected objects. You can edit and use the all-in-one preparation YAML file to set up all requirements in one step; a verification sketch is shown below.
(make sure all five objects are created properly - sometimes the MasterGroupingObject creation fails due to the delayed creation of the CustomResourceDefinition)
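
(info) The individual preparation files can be applied and verified with standard kubectl commands, for example (object names as defined in the manifests below):

# Apply the preparation files (or apply the all-in-one YAML file instead)
kubectl apply -f mgo-crd.yaml
kubectl apply -f mgo-instance.yaml
kubectl apply -f cluster-role-bvqscan.yaml
kubectl apply -f bvq-serviceaccount.yaml
kubectl apply -f cluster-role-binding-bvqscan-sa.yaml

# Verify that all five objects were created
kubectl get crd mastergroupingobjects.bvq.sva
kubectl get mgo
kubectl get clusterrole bvq-scanner-rl
kubectl get serviceaccount bvqscan -n default
kubectl get clusterrolebinding bvq-scanner-sa-bnd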

CustomResourceDefinition

Create a CustomResourceDefinition (CRD) to set up a k8s cluster as master grouping object (MGO) definition for BVQ

mgo-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mastergroupingobjects.bvq.sva
spec:
  group: bvq.sva
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                clusterName:
                  type: string
                  description: Cluster-Name
                customer:
                  type: string
                  description: Customer-Name
                location:
                  type: string
                  description: Location of the cluster
                dc:
                  type: string
                  description: Datacenter-Name
                contact:
                  type: string
                  description: Customer-Contact-Name
                email:
                  type: string
                  description: E-Mail-Address of the Contact
                phone:
                  type: string
                  description: Phone-Number of the Contact
  scope: Cluster
  names:
    plural: mastergroupingobjects
    singular: mastergroupingobject
    kind: MasterGroupingObject
    shortNames:
    - mgo

MasterGroupingObject

Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster

(info) Edit/adjust the values for clusterName, customer, location, dc, contact, email & phone to match your environment

(warning) IMPORTANT: clusterName will represent the name of the k8s cluster within BVQ, so choose a meaningful name (for example: Prod-Cluster-01)

mgo-instance.yaml
apiVersion: bvq.sva/v1
kind: MasterGroupingObject
metadata:
  name: bvq-mgo-k8s
  labels:
    bvq: mgo
spec:
  clusterName: Prod-Cluster-01
  customer: Customer Inc.
  location: Berlin, Germany
  dc: Example-DC-01
  contact: Max Mustermann
  email: max.mustermann@customer.de
  phone: +49-171-1234-56789

ClusterRole

Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster

(info) Read-only permissions (get, list, watch) are required
(info) apiGroups may be set to a wildcard ('*') to grant access to all API groups; otherwise, the apiGroups given in the example must be set

cluster-role-bvqscan.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bvq-scanner-rl
rules:
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - ''
      - apiextensions.k8s.io
      - apps
      - batch
      - bvq.sva
      - networking.k8s.io
      - storage.k8s.io
      - discovery.k8s.io
      - scheduling.k8s.io
    resources:
      - '*'

ServiceAccount

Create a ServiceAccount for authentication

(info) The token created for this ServiceAccount is needed to set up a BVQ scanner configuration for the k8s cluster
(info) namespace may be adjusted to another Kubernetes namespace. Remember to also edit the namespace set in the ClusterRoleBinding

(warning) IMPORTANT: With k8s version 1.24, the LegacyServiceAccountTokenNoAutoGeneration feature gate is beta and enabled by default (see here), which means that token Secrets are no longer created automatically for ServiceAccounts. Use this guide to create a non-expiring token (recommended); a minimal sketch follows the manifest below.

bvq-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bvqscan
  namespace: default
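
(info) On Kubernetes 1.24 and later, a long-lived token for this ServiceAccount can be created by adding a Secret of type kubernetes.io/service-account-token that references the ServiceAccount. The following is a minimal sketch; the Secret name bvqscan-token is only an example:

bvqscan-token-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: bvqscan-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: bvqscan
type: kubernetes.io/service-account-token

After applying the file, the token can be read with kubectl get secret bvqscan-token -n default -o jsonpath='{.data.token}' | base64 -d and used in the BVQ scanner configuration.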

ClusterRoleBinding

Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole

cluster-role-binding-bvqscan-sa.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bvq-scanner-sa-bnd
subjects:
- kind: ServiceAccount
  name: bvqscan
  namespace: default
roleRef:
  kind: ClusterRole
  name: bvq-scanner-rl
  apiGroup: rbac.authorization.k8s.io

BVQ Prometheus Server

To get performance and topology data, a custom bvq-prometheus stack must be deployed in the k8s cluster via helm. This helm chart will install a bvq-prometheus server as a Deployment with an 8 GB persistent volume (configurable via values.yaml) and bvq-prometheus-node-exporters as a DaemonSet (helm dependency).

See values.yaml and other configuration files in the bvq-prometheus-helm.zip file for further information about the bvq-prometheus configuration.

Execute the following steps to deploy the bvq-prometheus helm chart to the k8s cluster:

  • Create a namespace (e.g. bvq-prometheus) for the prometheus stack:
    kubectl create namespace bvq-prometheus 
  • Unzip helm files → bvq-prometheus-helm.zip
  • For external communication, an ingress for the bvq-prometheus server is needed. Edit prometheus.ingress.hosts in values.yaml to set a proper ingress.
  • Run helm dependency build / helm dependency update 
  • Install the helm chart via helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./ 

    ▶ helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./
    NAME: bvq-prometheus
    LAST DEPLOYED: Thu Dec 15 11:00:08 2022
    NAMESPACE: bvq-prometheus
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  • Check the installation with kubectl get pods -n bvq-prometheus - a pod called bvq-prometheus-* and a set of bvq-prometheus-bvq-node-exporter-* pods should be in the Running state (a reachability check for the Prometheus server is sketched below)

    ▶ kubectl get pods -n bvq-prometheus
    NAME                                     READY   STATUS    RESTARTS   AGE
    bvq-prometheus-5b8cd79d79-r587m          1/1     Running   0          64s
    bvq-prometheus-bvq-node-exporter-jz46z   1/1     Running   0          2s
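
(info) Before configuring the scanner, you can check that the bvq-prometheus server answers requests. A minimal sketch via port-forwarding; the Service name bvq-prometheus and port 9090 are assumptions and may differ depending on values.yaml:

# List the services created by the helm chart
kubectl get svc -n bvq-prometheus

# Forward the Prometheus HTTP port to the local machine (service name assumed)
kubectl port-forward -n bvq-prometheus svc/bvq-prometheus 9090:9090

# In a second shell: Prometheus should report that it is ready
curl http://localhost:9090/-/ready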

Gather information for BVQ Scanner configuration

BVQ scanners need the following information to be configured for each k8s cluster (the kubectl sketch below the list shows how to gather it):

  • API server IP address or DNS name (FQDN) - Default TCP port: 6443
  • API Token of the bvqscan ServiceAccount
  • Prometheus URL or IP (if NodePort service is used)
  • Prometheus user & password (optional, if BasicAuth of Prometheus is used)

Preparation for the BVQ Server

For BVQ Servers that gather information from Kubernetes clusters, a correct DNS configuration is important.
Make sure that the BVQ Server and the Kubernetes clusters are in the same domain and have the same DNS server configured.
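
(info) A quick way to verify the DNS configuration is to resolve the relevant names from the BVQ Server; the host names below are placeholders:

# On the BVQ Server: both names must resolve via the configured DNS server
nslookup k8s-api.example.local
nslookup bvq-prometheus.example.local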

