
Introduction

...

Create Compute System Users


IBM PowerVM preparation

A user is required for the operation of the BVQ PowerVM Scanner. This user should have at least read-only (hmcviewer) access to the HMC.

(warning) Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".

(warning) Please open the user properties dialogue and select "Allow remote access via the web".
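
For scripted setups, the user can also be created on the HMC command line. A minimal sketch, assuming SSH access to the HMC; the exact syntax may vary between HMC versions, and the "Allow remote access via the web" property still needs to be set in the user properties dialogue:

# Create a read-only HMC user for BVQ (run on the HMC command line).
# "hmcviewer" is the built-in read-only task role mentioned above.
mkhmcusr -u bvq -a hmcviewer -d "BVQ PowerVM Scanner user"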

[Screenshot: Add User dialogue]

[Screenshot: User Properties dialogue]


Enable performance data collection

BVQ can only collect performance statistics if "Data Collection" is enabled on the managed systems and LPARs.
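
Data collection can be enabled per managed system in the HMC GUI or on the HMC command line. A hedged sketch, assuming SSH access to the HMC; <managed-system> is a placeholder, and the available sample rates may vary by HMC version:

# Enable utilization data collection with a 300-second (5-minute) sample rate.
chlparutil -r config -s 300 -m <managed-system>

# Verify the current data collection settings.
lslparutil -r config -m <managed-system>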

BVQ Scanner configuration

To configure a PowerVM scanner in BVQ the following information is required:

  • IP address or hostname of the HMC
  • User and password of the HMC user for BVQ

(warning) Typically, two redundant HMCs manage the same IBM Power systems. Please make sure that a scanner is created for only one of these HMCs to avoid duplicate data in BVQ.




OS Agent for AIX & Linux preparation

AIX and Linux are the first BVQ platforms where data is not pulled from the systems by a BVQ scanner. Instead, data is sent (pushed) from the OS on the LPARs to the BVQ Server by a BVQ OS Agent using SCP. This means an SSH server on the BVQ Server receives data from the OS instances. Once an AIX or Linux BVQ Scanner is configured, this SSH server is started and listens on port 2222.

Important: Please ensure that port 2222 is not blocked by a firewall!
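
Whether the port is reachable can be checked from any LPAR before installing the agent. A minimal sketch; <bvq-server> is a placeholder for the BVQ Server's hostname or IP address:

# Test whether the BVQ Server's SSH receiver on port 2222 is reachable.
nc -zv <bvq-server> 2222

# On AIX, where nc may not be installed, a telnet test works as well:
telnet <bvq-server> 2222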

BVQ Scanner configuration

To configure an AIX or Linux BVQ scanner the following information is required:

  • NAME - Name of the AIX or Linux scanner
  • INSTANCE GROUP NAME - Select a name used to group together all AIX or Linux instances (= partitions) that run the BVQ OS Agent for AIX or Linux
  • USERNAME - This user authorizes the SCP transfer from the AIX or Linux instances to the BVQ Server. It is configured during the installation process

OS Agent installation

The BVQ Agent for AIX or Linux RPM installation package is automatically generated once a new BVQ AIX or Linux scanner configuration is created in the BVQ WebUI. After the "Save" button is pressed, the RPM package is generated and can be downloaded directly. Further installation instructions can be found on the scanner configuration page and in the readme included in the RPM download package.
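
The download is then installed like any other RPM package. A minimal sketch; the package file name below is illustrative, the actual name is shown with the WebUI download:

# Install or upgrade the BVQ OS Agent RPM on the AIX or Linux instance.
rpm -ivh bvq-os-agent.rpm    # fresh installation
rpm -Uvh bvq-os-agent.rpm    # upgrade of an existing installation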

OS User requirements

OS    | User  | Group  | Restrictions
AIX   | root  | system | none
AIX   | other | system | No stats for FC adapters
AIX   | other | staff  | No stats for FC adapters and LV, VG objects
Linux | root  | root   | none (other uid/gid not supported)

Alternatively, the BVQ AIX agent can be rolled out automatically to many systems using an AIX NIM server. The download package for AIX includes a script that helps with configuring the NIM server.

Important: It is essential that the BVQ Server and AIX/Linux clocks are in sync. Please ensure that NTP is configured and active on all monitored systems and on the BVQ Server!

The OS Agent cannot be installed or upgraded as long as NTP is not configured!
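
Whether NTP is active can be verified directly on the monitored systems. A minimal sketch, assuming the usual services (xntpd on AIX, a systemd-based Linux):

# AIX: check that the NTP daemon is active.
lssrc -s xntpd

# Linux (systemd-based): check the time synchronisation status.
timedatectl status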




VMware vSphere preparation

A user is required for the operation of the BVQ VMware Scanner. This user should have at least read-only access to the VMware vCenter system. The read-only permission for the user must be defined at the vCenter level; permissions at a lower level (e.g. Datacenter, Cluster, ...) will lead to scan errors.

(warning) Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".

Create or select the right user role

  • Go to user roles

    [Screenshot: user roles]

  • Duplicate (1) the read-only role, store it as BVQ-Read-only (2), and add the following privileges (3) - a scripted alternative is sketched after this list:
    Datastore - Browse datastore
    Profile-driven storage - View
    Storage views - View

    [Screenshot]
  • Create the BVQ user for the vCenter

    • Create the bvq user with the role "BVQ-Read-only"
      Create it as a vsphere.local or AD user - please remember to enter it accordingly in the scanner dialogue later

      [Screenshot]
    • Add the user to the vCenter
      Add the user to the vCenter (4) and do not forget to define the permission for all children

      [Screenshot]

    Add the right vCenter statistics

    • The interval duration has to be 5 minutes
    • Level 2 is sufficient for standard VMware environments
      Level 3 should be used for vSAN

      [Screenshot]
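
    For scripted environments, the role and permission steps above can also be done with the open-source govc CLI. A hedged sketch, not part of the original instructions: the privilege IDs below are the assumed API names of the privileges listed above, vcenter.example.com is a placeholder, and the permission should propagate to all children as required above:

    # Connection settings for govc (placeholders).
    export GOVC_URL=https://vcenter.example.com
    export GOVC_USERNAME=administrator@vsphere.local
    export GOVC_PASSWORD='...'

    # Create the BVQ-Read-only role: the ReadOnly system privileges
    # plus the three additional privileges required by BVQ.
    govc role.create BVQ-Read-only \
      System.Anonymous System.Read System.View \
      Datastore.Browse StorageProfile.View StorageViews.View

    # Assign the role to the bvq user at the vCenter (root) level.
    govc permissions.set -principal bvq@vsphere.local -role BVQ-Read-only /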


    Gather information for BVQ Scanner configuration

    BVQ scanners need the following information to be configured for each vCenter system:

    • vCenter IP address or hostname
    • vCenter user domain
    • User ID and password of the bvq user


    Preparation for the BVQ Server

    For BVQ Servers which gather information from NetApp systems and vCenters, the correct DNS configuration is important.
    Make sure that the BVQ Server, the NetApp systems, and the vCenters are in the same domain and have the same DNS server configured.

    This is required to match the DNS names of the NFS datastores to the corresponding IP addresses of the NFS file shares on the NetApp systems.
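
    A quick way to confirm this from the BVQ Server; the hostname below is a placeholder for an NFS datastore's DNS name:

    # Resolve the NFS datastore's DNS name from the BVQ Server.
    # The returned address must match the NFS file share's IP on the NetApp system.
    nslookup nfs-ds01.example.com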







    Kubernetes preparation

    Kubernetes (k8s) clusters are scanned via two different methods:

    Kubernetes API Server

    To gain access to the k8s API server, the following preparations must be made:

    1. Create a CustomResourceDefinition (CRD) to set up a k8s cluster as master grouping object (MGO) definition for BVQ
    2. Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster
    3. Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster
    4. Create a ServiceAccount for authentication
    5. Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole

    (info) Use kubectl apply -f  to create the expected objects. You can edit & use the all-in-one preparation YAML file to set up all requirements in one step.
    (Make sure all 5 objects are created properly - sometimes the MasterGroupingObject instance creation fails due to the delayed creation of the CRD.)
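
    A minimal sketch of this step; the file name of the all-in-one YAML is a placeholder, and the object names match the manifests below:

    # Apply the all-in-one preparation file (placeholder file name).
    kubectl apply -f bvq-k8s-preparation.yaml

    # Verify that all 5 objects exist.
    kubectl get crd mastergroupingobjects.bvq.sva
    kubectl get mastergroupingobject bvq-mgo-k8s
    kubectl get clusterrole bvq-scanner-rl
    kubectl get serviceaccount bvqscan -n default
    kubectl get clusterrolebinding bvq-scanner-sa-bnd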

    CustomResourceDefinition

    Create a CustomResourceDefinition (CRD) to set up a k8s cluster as master grouping object (MGO) definition for BVQ

    mgo-crd.yaml:
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: mastergroupingobjects.bvq.sva
    spec:
      group: bvq.sva
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    clusterName:
                      type: string
                      description: Cluster-Name
                    customer:
                      type: string
                      description: Customer-Name
                    location:
                      type: string
                      description: Location where the Cluster is located at
                    dc:
                      type: string
                      description: Datacenter-Name
                    contact:
                      type: string
                      description: Customer-Contact-Name
                    email:
                      type: string
                      description: E-Mail-Address of the Contact
                    phone:
                      type: string
                      description: Phone-Number of the Contact
      scope: Cluster
      names:
        plural: mastergroupingobjects
        singular: mastergroupingobject
        kind: MasterGroupingObject
        shortNames:
        - mgo

    MasterGroupingObject

    Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster

    (info) Edit/adjust the values for clusterName, customer, location, dc, contact, email & phone to the required information

    (warning) IMPORTANT: clusterName will represent the name of the k8s cluster within BVQ, so choose a meaningful name (example: Prod-Cluster-01)

    mgo-instance.yaml:
    apiVersion: bvq.sva/v1
    kind: MasterGroupingObject
    metadata:
      name: bvq-mgo-k8s
      labels:
        bvq: mgo
    spec:
      clusterName: Prod-Cluster-01
      customer: Customer Inc.
      location: Berlin, Germany
      dc: Example-DC-01
      contact: Max Mustermann
      email: max.mustermann@customer.de
      phone: +49-171-1234-56789

    ClusterRole

    Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster

    (info) Read-only permissions (get, list, watch) are required
    (info) apiGroups may be set to a wildcard ('*') to get access to all API groups; otherwise, the apiGroups given in the example must be set

    cluster-role-bvqscan.yaml:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: bvq-scanner-rl
    rules:
      - verbs:
          - get
          - watch
          - list
        apiGroups:
          - ''
          - apiextensions.k8s.io
          - apps
          - batch
          - bvq.sva
          - networking.k8s.io
          - storage.k8s.io
          - discovery.k8s.io
          - scheduling.k8s.io
        resources:
          - '*'

    ServiceAccount

    Create a ServiceAccount for authentication

    (info) The token created for this ServiceAccount is needed to set up a BVQ scanner config for the k8s cluster
    (info) namespace may be adjusted to another Kubernetes namespace. Remember to edit the namespace set in the ClusterRoleBinding accordingly

    (warning) IMPORTANT: With k8s version 1.24, the LegacyServiceAccountTokenNoAutoGeneration feature gate is beta and enabled by default (see here). Use this guide to create a non-expiring token (recommended); a sketch of the usual approach follows the manifest below

    bvq-serviceaccount.yaml:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bvqscan
      namespace: default
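
    A minimal sketch of the usual upstream approach for a long-lived token, assuming the bvqscan ServiceAccount in the default namespace as above; the Secret name is a placeholder:

    # Create a long-lived token Secret bound to the bvqscan ServiceAccount.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: bvqscan-token
      namespace: default
      annotations:
        kubernetes.io/service-account.name: bvqscan
    type: kubernetes.io/service-account-token
    EOF

    # Read the token for the BVQ scanner configuration.
    kubectl get secret bvqscan-token -n default -o jsonpath='{.data.token}' | base64 -d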

    ClusterRoleBinding

    Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole

    cluster-role-binding-bvqscan-sa.yaml:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: bvq-scanner-sa-bnd
    subjects:
    - kind: ServiceAccount
      name: bvqscan
      namespace: default
    roleRef:
      kind: ClusterRole
      name: bvq-scanner-rl
      apiGroup: rbac.authorization.k8s.io
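
    Whether the binding works as intended can be checked with kubectl's impersonation support; a minimal sketch:

    # Read access should be granted ...
    kubectl auth can-i list pods --as=system:serviceaccount:default:bvqscan    # expected: yes
    # ... but write access should not.
    kubectl auth can-i delete pods --as=system:serviceaccount:default:bvqscan  # expected: no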

    BVQ Prometheus Server

    To get performance and topology data, a custom bvq-prometheus stack must be deployed in the k8s cluster via Helm. This Helm chart installs a bvq-prometheus server as a Deployment with an 8 GB persistent volume (configurable via values.yaml) and bvq-prometheus-node-exporters as a DaemonSet (Helm dependency).

    See values.yaml and other configuration files in the bvq-prometheus-helm.zip file for further information about the bvq-prometheus configuration.

    Execute the following steps to deploy the bvq-prometheus helm chart to the k8s cluster:

    • Create a namespace (e.g. bvq-prometheus) for the prometheus stack:
      kubectl create namespace bvq-prometheus 
    • Unzip helm files → bvq-prometheus-helm.zip
    • For external communication, an ingress for the bvq-prometheus server is needed. Edit prometheus.ingress.hosts in values.yaml to set a proper ingress.
    • Run helm dependency build / helm dependency update 
    • Install the helm chart via helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./ 

      Example output:
      ▶ helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./
      NAME: bvq-prometheus
      LAST DEPLOYED: Thu Dec 15 11:00:08 2022
      NAMESPACE: bvq-prometheus
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
    • Check the installation with kubectl get pods -n bvq-prometheus  - a pod called bvq-prometheus-* and a set of bvq-prometheus-bvq-node-exporter-* pods should be in Running state

      Example output:
      ▶ kubectl get pods -n bvq-prometheus
      NAME                                     READY   STATUS    RESTARTS   AGE
      bvq-prometheus-5b8cd79d79-r587m          1/1     Running   0          64s
      bvq-prometheus-bvq-node-exporter-jz46z   1/1     Running   0          2s

    Gather information for BVQ Scanner configuration

    BVQ scanners need the following information to be configured for each k8s cluster:

    • Kubernetes API server URL (IP address or hostname)
    • ServiceAccount token (see the ServiceAccount section above)
    • Ingress hostname of the bvq-prometheus server

    Preparation for the BVQ Server

    For BVQ Servers which gather information from Kubernetes clusters, the correct DNS configuration is important.
    Make sure that the BVQ Server and the Kubernetes clusters are in the same domain and have the same DNS server configured.