Add platform specific scanners

Compute layer

VMware vSphere

  1. Select VMware vSphere and fill out the mandatory fields

    1. The username has to be in the form "user@vcenter"

    2. Do not forget to check the event scan if you want to see VMware event data in BVQ.

    3. Do not change the defaults unless there is an important reason to do so.

    4. You cannot open the SDK URL https://servername/sdk in a web browser.
      The service behind /sdk is not an HTML web server that can serve web pages; it only answers API requests.

  2. After setting up the VMware scanner, use the save button to finish your work.

  3. The Storage Adapter Performance Scan scans all paths from each SCSI disk to the SAN and generates a lot of data.
    Switch it on only when you need this data.
    It can also be enabled for just a few hours when you need to analyze the round-robin behavior of storage paths.
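
A quick way to confirm that the vCenter SDK endpoint is reachable from the BVQ server is a command-line request; a minimal sketch, where vcenter.example.org is a placeholder for your vCenter hostname (the endpoint answers API requests rather than browser page loads, so only the HTTP status code is of interest):

```shell
# Placeholder hostname; -k skips certificate validation for self-signed certs.
# Prints only the HTTP status code; any code at all shows the endpoint is reachable.
curl -k -s -o /dev/null -w "%{http_code}\n" https://vcenter.example.org/sdk
```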

IBM PowerVM

A single PowerVM scanner can administer either one or two HMCs.

Redundant HMCs managing the same systems must be configured in the same PowerVM scanner; otherwise, the managed systems will appear twice in BVQ. Additional HMCs managing other systems should be configured in a separate PowerVM scanner to avoid long scan times.

Please define the most powerful HMC first, because the order of the HMCs defines the order in which they are scanned by BVQ. The order can be changed at any time using drag & drop.

In case the first HMC fails or cannot be scanned for any reason, BVQ continues with the other HMC in the list (if there is one).

AIX

The AIX OS agent sends its data to the BVQ server using scp. The ssh server on the BVQ server listens on port 2222. Please define an inbound rule on the BVQ server that allows access to this port!

Configuring an NTP server is mandatory on the OS instances and highly recommended on the BVQ server!
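
Whether the port is open can be checked from an AIX instance before installing the agent; a minimal sketch, using the placeholder names bvquser and bvqserver.example.org for the scanner user and the BVQ server:

```shell
# Placeholder user and hostname. If the port is open, ssh reaches the key
# exchange (or a login prompt); a timeout suggests a missing firewall rule.
ssh -p 2222 -o ConnectTimeout=5 bvquser@bvqserver.example.org exit
```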

First time install

Fill in the mandatory fields of the AIX scanner configuration:

  • Name: Name of the AIX scanner

  • Instance group name: All AIX instances sending their data to this AIX scanner will be grouped together. The number of instances per instance group should not exceed 50.

  • Username: This user will be used by the AIX instance to send the data to BVQ. It will be configured during the installation process.

Set up the AIX instances

  1. Download BVQ AIX Agent and copy it to each AIX instance. Unpack bvq_aix_agent_[scanner name].tar.gz:

gunzip -c bvq_aix_agent_[scanner name].tar.gz | tar -xvf -
  2. Install or upgrade the BVQ AIX agent. Install:

rpm -i bvq-aix-agent-[version].ppc.rpm

Upgrade:

rpm -U bvq-aix-agent-[version].ppc.rpm
  3. Make bvq_autoconf.sh executable and run it:

chmod 755 bvq_autoconf.sh; ./bvq_autoconf.sh

Alternatively, the BVQ AIX agent can be rolled out automatically to many systems using an AIX NIM server. To add the RPM package as an lpp_source to the NIM repository and to configure bvq_autoconf.sh as a post-install script, use the bvq_nim_autoconf.sh script:

./bvq_nim_autoconf.sh

Disable/Enable BVQ Agent on an AIX instance

Disable/stop the BVQ AIX agent:

/opt/sva/bvq/os-agent/bin/manage_aix_agent.sh disable

Enable/start the BVQ AIX agent:

/opt/sva/bvq/os-agent/bin/manage_aix_agent.sh enable

Linux

The Linux OS agent sends its data to the BVQ server using scp. The ssh server on the BVQ server listens on port 2222.

Please define an inbound rule on the BVQ server that allows access to this port!

Configuring an NTP server is mandatory on the OS instances and highly recommended on the BVQ server!

First time install

Fill in the mandatory fields of the Linux scanner configuration:

  • Name: Name of the Linux scanner

  • Instance group name: All Linux instances sending their data to this Linux scanner will be grouped together. The number of instances per instance group should not exceed 50.

  • Username: This user will be used by the Linux instance to send the data to BVQ. It will be configured during the installation process.

Set up the Linux instances

 

Download BVQ Linux Agent and copy it to each Linux instance.

Unpack bvq_linux_agent_[scanner name].tar.gz:

gunzip -c bvq_linux_agent_[scanner name].tar.gz | tar -xvf -

Install BVQ Linux agent

For ppc64:

rpm -i bvq-linux-agent-[version].ppc64le.rpm

For RHEL8 / RHEL9 on x86:

yum install bvq-linux-agent-[version].x86_64.rpm

For SLES12 / SLES15 on x86:

zypper install bvq-linux-agent-[version].x86_64.rpm

Upgrade BVQ Linux agent

rpm -U bvq-linux-agent-[version].[architecture].rpm

Make bvq_autoconf.sh executable and run it:

chmod 755 bvq_autoconf.sh; ./bvq_autoconf.sh

Install BVQ Linux agent using NIM

On ppc64 architecture, the BVQ Linux agent can be rolled out automatically to many systems using a NIM server.

To add the RPM package as an lpp_source to the NIM repository and to configure bvq_autoconf.sh as a post-install script, use the bvq_nim_autoconf.sh script:

./bvq_nim_autoconf.sh

Disable/Enable BVQ Agent on a Linux instance

Disable/stop the BVQ Linux agent:

/opt/sva/bvq/os-agent/bin/manage_linux_agent.sh disable

Enable/start the BVQ Linux agent:

/opt/sva/bvq/os-agent/bin/manage_linux_agent.sh enable

Kubernetes

Kubernetes (k8s) clusters are scanned by BVQ via two different methods:

  • Kubernetes API Server for topology information

  • Prometheus Server for topology & performance information

Preparation

  • Start your preparations by downloading the setup files from your BVQ server:

    • On your BVQ server, go to "Scanners"

    • Create a new scanner configuration

    • Select "Kubernetes" as scanner type

    • Go to the "Instructions" tab and click the button "Download setup files"

DNS server configuration

Correct DNS configuration is crucial for BVQ servers gathering information from Kubernetes clusters. Ensure that both the BVQ server and Kubernetes clusters are in the same domain and have the same DNS server configured.
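
The DNS requirement can be verified from the BVQ server by resolving the cluster's API server name; a sketch using the placeholder name from the scanner configuration examples:

```shell
# Should resolve to the cluster API server's address; a failure here means
# the BVQ server is not using the same DNS configuration as the cluster.
nslookup prod-cluster-api-server.company.org
```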

Helm Chart: bvq-prometheus

To gain access to all information used by BVQ, it is necessary to install the bvq-prometheus Helm chart. The chart can be downloaded from the scanner page. The following Kubernetes resources will be created during the setup:

  • CustomResourceDefinition for the BVQ master grouping object (MGO)

  • Master grouping object (MGO) to represent the Kubernetes Cluster within BVQ

  • Corresponding ClusterRole & ClusterRoleBinding for the ServiceAccount to get the authorizations needed

  • Prometheus server with a custom configuration to collect performance data, which is scanned by BVQ

  • Prometheus Node Exporter to expose a wide variety of hardware- and kernel-related metrics

Usage:

  1. Download and extract bvq-prometheus-helm.zip

  2. Install/Upgrade:

helm install -n bvq-prometheus --create-namespace -f values.yaml bvq-prometheus ./
helm upgrade -n bvq-prometheus --create-namespace -f values.yaml bvq-prometheus ./

  3. Uninstall:

helm uninstall bvq-prometheus -n bvq-prometheus
kubectl delete namespace bvq-prometheus
kubectl delete crd mastergroupingobjects.bvq.sva

OpenShift

As OpenShift requires some different parameters, a few adjustments have to be made in the values.yaml file:

  1. openshift.enabled has to be true

  2. Ensure that prometheus-node-exporter.serviceAccount.name is set

  3. All fsGroup, runAsGroup and runAsUser parameters have to be set to null
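
Applied to values.yaml, the three adjustments above could look like the following sketch; the exact nesting of the fsGroup/runAsGroup/runAsUser keys depends on the chart version, and the ServiceAccount name shown is a placeholder:

```yaml
openshift:
  enabled: true                  # 1. required on OpenShift

prometheus-node-exporter:
  serviceAccount:
    name: bvq-node-exporter      # 2. placeholder; make sure a name is set

server:                          # 3. exact key location depends on the chart
  securityContext:
    fsGroup: null                # let OpenShift assign IDs from the
    runAsGroup: null             # namespace's allowed range
    runAsUser: null
```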

For more information on how to configure and adjust the Helm chart, please see the README.md inside the Helm chart.

Gathering information for BVQ scanners

BVQ scanners need the following information to be configured for each k8s cluster:

  • API server IP address or DNS name (FQDN) – Default TCP port: 6443

  • API Token of the bvqscan ServiceAccount

  • Prometheus URL or IP (if NodePort service is used)

  • Prometheus user and password (optional, if basic auth of Prometheus is used)
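
The token and the Prometheus port can be collected with kubectl; a sketch, assuming the chart was installed into the bvq-prometheus namespace and that the bvqscan ServiceAccount lives there (check the downloaded setup files for the authoritative steps; BVQ may require a long-lived token rather than the short-lived one shown here):

```shell
# Short-lived token for the bvqscan ServiceAccount (Kubernetes >= 1.24).
kubectl -n bvq-prometheus create token bvqscan

# If Prometheus is exposed as a NodePort service, read the assigned port
# from the service list.
kubectl -n bvq-prometheus get svc
```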

Scanner configuration

Fill in the mandatory fields of the Kubernetes scanner configuration:

  • Name: Name of the Kubernetes scanner (e.g. Prod-Cluster-01)

  • Kubernetes API: FQDN or IP address for the k8s API server (e.g. https://prod-cluster-api-server.company.org:6443 – Remember to adjust protocol or port if needed)

  • Authentication token: Token of the ServiceAccount user created during preparation

  • Prometheus: FQDN or IP address for the BVQ-Prometheus server (e.g. https://prod-cluster-bvq-prometheus.company.org – Remember to adjust protocol or port if needed)

  • Prometheus user / Prometheus password: Optional: Set username and password if Basic auth for Prometheus is set
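
Before saving, the Prometheus address can be checked from the BVQ server with a request against a standard Prometheus API endpoint; the URL reuses the example above, and the -u option only applies when basic auth is configured:

```shell
# /api/v1/status/buildinfo is a standard Prometheus HTTP API endpoint.
# Add -u user:password if basic auth is enabled.
curl -s https://prod-cluster-bvq-prometheus.company.org/api/v1/status/buildinfo
```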

 

Network layer

Brocade FC

The Brocade REST interface is only available for Fabric OS 8.2.1 and higher. Visit Network layer for more information.

  1. Select Brocade FC and fill out the mandatory fields

    1. Do not change the defaults unless there is an important reason to do so.

    2. Select a switch (or director) in your SAN that has the highest firmware level and the most performant hardware (core directors preferred)
      and insert its address in the corresponding fields

    3. Insert the user credentials you prepared in Step 1

    4. Click "Update switches" below "Discovered switches" → this will take a while, please be patient
      BVQ will discover the IP addresses of all connected Switches and will ask every discovered Switch for additional connected Switches until no more Switches are found.
      All discovered Switches are Part of a (virtual) Fabric and all discovered Fabrics are part of a SAN, that is scanned completely by this BVQ Scanner instance.

    5. It is essential that the principal switch of each fabric is found in this discovery, in order to identify all the fabrics inside the SAN.

    6. Please review the list of discovered switches for completeness

    7. It is recommended to have the same user credentials on all switches. If that is not possible, you can change the user credentials of each switch found in the first discovery. Please click "Update switches" again afterwards if the list of switches is not as complete as expected.

  2. After setting up this scanner, click the "Save configuration" button to finish your work.

Cisco MDS

  1. Select CISCO MDS and fill out the mandatory fields

  2. After setting up the scanner, use the save button to finish your work.

 

Storage layer

IBM SVC

  1. Select IBM SVC and fill out the mandatory fields

  2. After setting up the scanner, use the save button to finish your work.

NetApp ONTAP

  1. Select NetApp ONTAP and fill out the mandatory fields

  2. After setting up the scanner, use the save button to finish your work.

Dell PowerStore

  1. Select Dell PowerStore and fill out the mandatory fields

  2. After setting up the scanner, use the save button to finish your work.

PureStorage FlashArray

A BVQ scanner configuration for PureStorage FlashArrays supports either a single array or two arrays that are configured as an ActiveCluster.

Fill in the mandatory fields:

  1. Configuration name: Name of the FlashArray scanner

  2. Cluster name: Name of the array or cluster

  3. Click on Add Array and enter the hostname or IP address of the first FlashArray and the API token to access it. If applicable, repeat this step for the second FlashArray.

All other information is optional.
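
Reachability of a FlashArray's REST interface can be checked with a simple request; a sketch with a placeholder hostname, assuming the array exposes the FlashArray REST /api/api_version endpoint (which lists supported API versions without requiring the API token):

```shell
# Placeholder hostname; -k skips certificate validation.
# Returns the list of REST API versions supported by the array.
curl -k https://flasharray.example.org/api/api_version
```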

Dell Unity

  1. Select Dell Unity and fill out the mandatory fields

  2. After setting up the scanner, use the save button to finish your work.