
IBM PowerVM

The BVQ PowerVM Scanner requires a user with at least read-only (hmcviewer) access to the HMC.

(warning) Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".

(warning) Please open the user properties dialogue and select "Allow remote access via the web".
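
The user can also be created on the HMC command line. The following is a minimal sketch only - option names can differ between HMC versions, and the web-access setting from the dialogue above still has to be enabled in the GUI:

Code Block
languagebash
# Sketch only: create a read-only HMC user for BVQ (flags assumed, verify on your HMC version)
mkhmcusr -u bvq -a hmcviewer -d "BVQ PowerVM scanner user"
# Verify the user and its task role
lshmcusr --filter "names=bvq"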


Enable performance data collection

BVQ can only collect performance statistics if "Data Collection" is enabled on the managed systems and all LPARs.

Adjust System settings

Enable "Performance Monitoring Data Collection for Managed Servers"

image-20250102-112435.png

To lower the system load and storage capacity usage on the HMC, we recommend reducing the number of days to store performance data (under "Performance Data Storage") to the minimum value of "1". BVQ takes over the role of storing a longer history of this data.

Adjust LPAR settings

Switch on "Enable Performance Information Collection" on all LPARs

image-20250102-112640.png

You can easily check for each LPAR whether "Performance Information Collection" is enabled. Run the following script on the HMC to show all LPARs without enabled collection:

Code Block
languagebash
bvq@hmc3:~> for SYS in $(lssyscfg -r sys -F name); do lssyscfg -r lpar -m $SYS -F name,allow_perf_collection ; done | grep ",0"
rju_viot,0
IOS74DHP,0
HABS74D,0
HAQS74D,0
HAMS74D,0
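
If the script lists LPARs with collection disabled, the setting can also be enabled from the HMC command line. A hedged sketch - the attribute name is taken from the lssyscfg output above, the placeholders are examples:

Code Block
languagebash
# Enable performance information collection for a single LPAR (replace the placeholders)
chsyscfg -r lpar -m <managed-system> -i "name=<lpar-name>,allow_perf_collection=1"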


BVQ Scanner configuration

To configure a PowerVM scanner in BVQ, the following information is required:

  • IP address or hostname of the HMC

  • Username and password of the HMC user created for BVQ

(warning) Starting with BVQ 2023.H1: Redundant HMCs managing the same systems must be configured in the same PowerVM scanner; otherwise, the managed systems will appear twice in BVQ. Define the most powerful HMC first, because the order of HMCs determines the order in which BVQ scans them. HMCs managing other systems should be configured in a separate PowerVM scanner.

Up to BVQ 2022.H2: Typically, two redundant HMCs manage the same IBM Power systems. Please ensure that a scanner is created for only one of the redundant HMCs to avoid duplicates in BVQ.
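
Before creating the scanner, a simple way to confirm that the bvq user and its password work is an SSH login to the HMC (hmc01 is only an example hostname):

Code Block
languagebash
# Should prompt for the password of the bvq user and print the HMC version information
ssh bvq@hmc01 lshmc -V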

...


VMware vSphere

The BVQ VMware Scanner requires a user with at least read-only access to the VMware vCenter system. The read-only permission for this user must be defined at the vCenter level; permissions granted at a lower level (e.g. Datacenter, Cluster, ...) will lead to scan errors.

(warning) Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".

Create or select the right user role

  • Go to user roles

    37a21f30-4306-457d-a5ee-da5fadc7a10b.png

  • Duplicate (1) the read-only role, save it as BVQ-Read-only (2), and add the following privileges (3):
    Datastore - Browse datastore
    Profile-driven storage - View
    Storage views - View

    2e66a16c-165c-40b5-ae43-26a1f8df51c5.png

Create the BVQ User for the vCenter

  • Create the bvq user with the role "BVQ-Read-only"
    Create it as a vsphere.local or as an AD user - please remember to enter it accordingly in the scanner dialog later

    7e7310bb-161e-4487-a046-688a8e237b62.png

  • Add the user to the vCenter
    Add the user to the vCenter (4) and do not forget to propagate the permission to all children (a CLI sketch follows after this list)

    ac90aa5c-013c-4d98-9af3-a35320af63aa.jpg
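
The steps above use the vSphere Client. Roughly the same can be done with the govc CLI - a sketch only, assuming govc is installed and GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD point to the vCenter with an administrative user; the privilege IDs correspond to the privileges listed above:

Code Block
languagebash
# Create the role with the additional privileges (vCenter adds the basic System.* read privileges automatically)
govc role.create BVQ-Read-only Datastore.Browse StorageProfile.View StorageViews.View

# Assign the role to the bvq user at the vCenter root object and propagate it to all children
govc permissions.set -principal 'bvq@vsphere.local' -role BVQ-Read-only -propagate=true /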

Add the right vCenter Statistics

  • Interval duration has to be 5 minutes

  • Level 2 is sufficient for standard VMware
    Level 3 should be used for vSAN

    62b727fd-e8d6-45a5-ac50-1cb2357a931b.jpg


    High vCenter CPU usage during BVQ Performance scan

    During the BVQ performance scan of a vCenter server, its CPU usage will increase. Please monitor the vCenter server utilization, depending on the workload, to avoid performance degradation.

Gather information for BVQ Scanner configuration

BVQ scanners need the following information to be configured for each vCenter system:

  • vCenter IP address or hostname

  • vCenter user domain

  • User ID and password of the bvq user

Preparation for the BVQ Server

For BVQ Servers that gather information from NetApp systems and vCenters, the correct DNS configuration is important.
Make sure that the BVQ Server, the NetApp systems and the vCenters are in the same domain and have the same DNS server configured.

This is required to match the DNS names of the NFS datastores to the corresponding IP addresses of the NFS file shares on the NetApp systems.
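
A quick way to check this from the BVQ Server is a DNS lookup of an NFS datastore name (the hostname below is only an example); the lookup should succeed and return the address of the NetApp file share:

Code Block
languagebash
# Run on the BVQ Server; replace the name with the DNS name of one of your NFS datastores
nslookup nfs-datastore01.example.com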


Kubernetes

Kubernetes (k8s) clusters are scanned via two different methods:

  • Kubernetes API Server for topology information

  • BVQ Prometheus Server for topology & performance information


Kubernetes API Server

To gain access to the k8s API server the following preparations must be made:

  1. Create a Compute_layer#CustomResourceDefinition (CRD) to set up a k8s cluster as a master grouping object (MGO) definition for BVQ

  2. Create a Compute_layer#MasterGroupingObject instance (bound to the CRD) for the k8s cluster

  3. Create a Compute_layer#ClusterRole to get read-only (get, list, watch) access to the k8s cluster

  4. Create a Compute_layer#ServiceAccount for authentication

  5. Create a Compute_layer#ClusterRoleBinding to bind the ServiceAccount to the ClusterRole

ℹ Use kubectl apply -f <file> to create the expected objects. You can edit and use the all-in-one preparation YAML file to set up all requirements in one step. (Make sure all 5 objects are created properly - sometimes the Compute_layer#MasterGroupingObject creation fails due to the delayed creation of the Compute_layer#CustomResourceDefinition.)
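
A possible order when applying the files individually (file names as used in the sections below); the kubectl wait step avoids the CRD timing issue mentioned above:

Code Block
languagebash
kubectl apply -f mgo-crd.yaml
# Wait until the CRD is established before creating the MGO instance
kubectl wait --for=condition=Established --timeout=60s crd/mastergroupingobjects.bvq.sva
kubectl apply -f mgo-instance.yaml
kubectl apply -f cluster-role-bvqscan.yaml
kubectl apply -f bvq-serviceaccount.yaml
kubectl apply -f cluster-role-binding-bvqscan-sa.yaml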


CustomResourceDefinition

Create a CustomResourceDefinition (CRD) to set up a k8s cluster as a master grouping object (MGO) definition for BVQ

mgo-crd.yaml 

Code Block
languageyml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mastergroupingobjects.bvq.sva
spec:
  group: bvq.sva
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                clusterName:
                  type: string
                  description: Cluster-Name
                customer:
                  type: string
                  description: Customer-Name
                location:
                  type: string
                  description: Location where the Cluster is located at
                dc:
                  type: string
                  description: Datacenter-Name
                contact:
                  type: string
                  description: Customer-Contact-Name
                email:
                  type: string
                  description: E-Mail-Address of the Contact
                phone:
                  type: string
                  description: Phone-Number of the Contact
  scope: Cluster
  names:
    plural: mastergroupingobjects
    singular: mastergroupingobject
    kind: MasterGroupingObject
    shortNames:
    - mgo

MasterGroupingObject

Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster

ℹ Edit/adjust the values for clusterName, customer, location, dc, contact, email & phone to match the required information

IMPORTANT: clusterName will represent the name of the k8s cluster within BVQ, so choose a meaningful name (for example: Prod-Cluster-01)

mgo-instance.yaml

Code Block
languageyml
apiVersion: bvq.sva/v1
kind: MasterGroupingObject
metadata:
  name: bvq-mgo-k8s
  labels:
    bvq: mgo
spec:
  clusterName: Prod-Cluster-01
  customer: Customer Inc.
  location: Berlin, Germany
  dc: Example-DC-01
  contact: Max Mustermann
  email: max.mustermann@customer.de
  phone: +49-171-1234-56789
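
To confirm that the CRD and the instance were created correctly, you can list the object via the short name defined in the CRD:

Code Block
languagebash
# Should list the MGO instance created above
kubectl get mgo
kubectl describe mgo bvq-mgo-k8s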

ClusterRole

Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster

ℹ Read-only permissions (get, list, watch) are required

apiGroups may be set to a wildcard ('*') to grant access to all API groups; otherwise, the apiGroups given in the example must be set

cluster-role-bvqscan.yaml 

Code Block
languageyml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bvq-scanner-rl
rules:
  - verbs:
      - get
      - watch
      - list
    apiGroups:
      - ''
      - apiextensions.k8s.io
      - apps
      - batch
      - bvq.sva
      - networking.k8s.io
      - storage.k8s.io
      - discovery.k8s.io
      - scheduling.k8s.io
    resources:
      - '*'

ServiceAccount

Create a ServiceAccount for authentication

ℹ The Token created for this ServiceAccount is needed to set up a BVQ scanner config for the k8s cluster 

namespace may be adjusted to another Kubernetes namespace. Remember to edit the namespace set in the Compute_layer#ClusterRoleBinding

IMPORTANT: With k8s version 1.24, the LegacyServiceAccountTokenNoAutoGeneration feature gate is beta and enabled by default (see here). Use this guide to create a non-expiring token (recommended)

bvq-serviceaccount.yaml

Code Block
languageyml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bvqscan
  namespace: default
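
The guide linked above is the authoritative reference; as a minimal sketch, the standard upstream approach for a non-expiring token is a Secret of type kubernetes.io/service-account-token bound to the ServiceAccount (the Secret name bvqscan-token is only an example):

Code Block
languagebash
# Create a long-lived token Secret for the bvqscan ServiceAccount (k8s >= 1.24)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: bvqscan-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: bvqscan
type: kubernetes.io/service-account-token
EOF

# Read the token - this value is later entered in the BVQ scanner configuration
kubectl get secret bvqscan-token -n default -o jsonpath='{.data.token}' | base64 -d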

ClusterRoleBinding

Create a ClusterRoleBinding to bind the Compute_layer#ServiceAccount to the Compute_layer#ClusterRole

cluster-role-binding-bvqscan-sa.yaml

Code Block
languageyml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bvq-scanner-sa-bnd
subjects:
- kind: ServiceAccount
  name: bvqscan
  namespace: default
roleRef:
  kind: ClusterRole
  name: bvq-scanner-rl
  apiGroup: rbac.authorization.k8s.io
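
After the binding is applied, you can verify the effective permissions of the ServiceAccount with kubectl auth can-i:

Code Block
languagebash
# Should return "yes" for read verbs ...
kubectl auth can-i list pods --as=system:serviceaccount:default:bvqscan
kubectl auth can-i get nodes --as=system:serviceaccount:default:bvqscan
# ... and "no" for anything beyond read-only access
kubectl auth can-i delete pods --as=system:serviceaccount:default:bvqscan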

BVQ Prometheus Server

To get performance and topology data, a custom bvq-prometheus stack must be deployed in the k8s cluster via Helm. This Helm chart installs a bvq-prometheus server as a Deployment with an 8 GB persistent volume (configurable via values.yaml) and bvq-prometheus-node-exporters as a DaemonSet (Helm dependency).

See values.yaml and other configuration files in the bvq-prometheus-helm.zip file for further information about the bvq-prometheus configuration.

Execute the following steps to deploy the bvq-prometheus helm chart to the k8s cluster:

  • Create a namespace (e.g. bvq-prometheus) for the prometheus stack:
    kubectl create namespace bvq-prometheus 

  • Unzip helm files → bvq-prometheus-helm.zip

  • For external communication, an Ingress for the bvq-prometheus server is needed. Edit prometheus.ingress.hosts in values.yaml to set a proper Ingress.

  • Run helm dependency build / helm dependency update 

  • Install the helm chart via helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./ 

    helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./

Code Block
languagebash
▶ helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./
NAME: bvq-prometheus
LAST DEPLOYED: Thu Dec 15 11:00:08 2022
NAMESPACE: bvq-prometheus
STATUS: deployed
REVISION: 1
TEST SUITE: None

  • Check the installation with kubectl get pods -n bvq-prometheus - a pod called bvq-prometheus-* and a set of bvq-prometheus-bvq-node-exporter-* pods should be in the Running state

    kubectl get pods -n bvq-prometheus

Code Block
languagebash
▶ kubectl get pods -n bvq-prometheus
NAME                                     READY   STATUS    RESTARTS   AGE
bvq-prometheus-5b8cd79d79-r587m          1/1     Running   0          64s
bvq-prometheus-bvq-node-exporter-jz46z   1/1     Running   0          2s
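
To check that the Prometheus server itself is up, its health endpoint can be queried, e.g. via a port-forward. This assumes the chart creates a Deployment named bvq-prometheus listening on the default port 9090:

Code Block
languagebash
# Forward the Prometheus port to the local machine and query the health endpoint
kubectl port-forward -n bvq-prometheus deploy/bvq-prometheus 9090:9090 &
sleep 2
curl -s http://localhost:9090/-/healthy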

Gather information for BVQ Scanner configuration

BVQ scanners need the following information to be configured for each k8s cluster:

  • API server IP address or DNS name (FQDN) - Default TCP port: 6443

  • API Token of the bvqscan Compute_layer#ServiceAccount

  • Prometheus URL or IP (if NodePort service is used)

  • Prometheus user & password (optional, if BasicAuth of Prometheus is used)
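
A quick way to verify the gathered values before configuring the scanner is to call the API server with the ServiceAccount token (the placeholders are examples; -k skips certificate verification for this test only):

Code Block
languagebash
TOKEN='<token of the bvqscan ServiceAccount>'
# Should return the Kubernetes version information as JSON
curl -k -H "Authorization: Bearer ${TOKEN}" https://<api-server>:6443/version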

Preparation for the BVQ Server

For BVQ Servers that gather information from Kubernetes clusters, the correct DNS configuration is important.
Make sure that the BVQ Server and the Kubernetes clusters are in the same domain and have the same DNS server configured.