Compute_layer
A user with at least read-only (hmcviewer) access to the HMC is required for the operation of the BVQ PowerVM Scanner.
Please create this user before configuring the BVQ Scanner. We recommend naming the user "bvq".
Please open the user properties dialogue and select "Allow remote access via the web".
(Screenshots: Add User dialogue | User Properties dialogue)
Enable performance data collection
BVQ can only collect performance statistics if "Data Collection" is enabled on the managed systems and all LPARs.
Adjust System settings
Enable "Performance Monitoring Data Collection for Managed Servers"
To lower the system load and storage capacity usage on the HMC, we recommend reducing the number of days to store performance data (under "Performance Data Storage") to its minimum value of "1". BVQ takes over the role of storing a longer history of that data.
Adjust LPAR settings
Switch on "Enable Performance Information Collection" on all LPARs
You can easily check the state of the "Performance Collection" enablement per LPAR. Run the following script on the HMC to show all LPARs without enabled collection:
bvq@hmc3:~> for SYS in $(lssyscfg -r sys -F name); do lssyscfg -r lpar -m $SYS -F name,allow_perf_collection; done | grep ",0"
rju_viot,0
IOS74DHP,0
HABS74D,0
HAQS74D,0
HAMS74D,0
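LPARs reported by this check can also be switched on from the HMC command line instead of the GUI. A loop of the following shape should work; the attribute name allow_perf_collection matches the lssyscfg output above, but please verify the chsyscfg syntax against your HMC release, and note that a user with more than hmcviewer rights is needed for changes:

```shell
# Sketch: enable performance collection on all LPARs of all managed systems.
# Requires an HMC user allowed to run chsyscfg (hmcviewer is read-only).
for SYS in $(lssyscfg -r sys -F name); do
  for LPAR in $(lssyscfg -r lpar -m $SYS -F name); do
    chsyscfg -r lpar -m $SYS -i "name=$LPAR,allow_perf_collection=1"
  done
done
```

Re-run the check script afterwards; it should return no lines.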
BVQ Scanner configuration
To configure a PowerVM scanner in BVQ the following information is required:
- IP address or hostname of the HMC
- User and password of the HMC user for BVQ
Starting with BVQ 2023.H1: Redundant HMCs managing the same systems must be configured in the same PowerVM scanner. Otherwise, the managed systems will appear twice in BVQ. Define the most powerful HMC first, because the order of HMCs determines the order in which they are scanned by BVQ. Additional HMCs managing other systems should be configured in an additional PowerVM scanner.
Up to BVQ 2022.H2: Typically, two redundant HMCs manage the same IBM Power systems. Please ensure that only one scanner is created for one of the HMCs to avoid duplication in BVQ.
AIX and Linux are the first BVQ platforms where data is not pulled from the systems by the BVQ scanner. Instead, data is pushed from the OS on the LPARs to the BVQ Server by a BVQ OS Agent using SCP/SFTP. This means an SSH server on the BVQ Server receives data from the OS instances. Once an AIX or Linux BVQ Scanner is configured, this SSH server is started automatically and listens on port 2222.
Important
Please ensure that port 2222 is not blocked by a firewall!
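A quick reachability check can be run from any monitored LPAR; the hostname below is a placeholder for your BVQ Server:

```shell
# Succeeds only if the BVQ Server's SSH receiver on port 2222 is reachable
# from this LPAR (no firewall in between).
nc -vz bvqserver.example.com 2222
```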
BVQ Scanner configuration
To configure an AIX or Linux BVQ scanner the following information is required:
- NAME - Name of the AIX or Linux scanner
- INSTANCE GROUP NAME - Select a name which is used to group all AIX or Linux Instances (=partitions) together that are running the BVQ OS Agent for AIX or Linux. The number of instances per instance group should not exceed 50.
- USERNAME - This user authorizes the SCP/SFTP transfer from the AIX or Linux Instances to the BVQ Server. It will be configured during the installation process.
- SSH PUBLIC KEY - Optional. Leave empty if you want to use the default ssh key-pair included in bvq_agent.tar.gz. If you want to use a different ssh key-pair type, then enter the content of the public ssh key file here and adjust bvq_config.sh on the OS agent accordingly.
Supported key types are:
- ssh-rsa (default)
- ecdsa-sha2
- rsa-sha2-256
- ssh-ed25519
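If you prefer your own key pair over the default one, it can be generated like this; the file name and comment are illustrative, and the public half goes into the SSH PUBLIC KEY field while bvq_config.sh on the OS agent must be adjusted to the matching private key:

```shell
# Generate a dedicated ed25519 key pair for the BVQ transfer user
# (file name "bvq_agent_key" is an example, not a required name).
ssh-keygen -t ed25519 -N "" -f ./bvq_agent_key -C "bvq-os-agent"
# The contents of this file are pasted into the SSH PUBLIC KEY field:
cat ./bvq_agent_key.pub
```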
OS Agent installation
The BVQ Agent for AIX or Linux RPM installation package is generated automatically once a new BVQ AIX or Linux scanner configuration is created in the BVQ WebUI. After the "Save" button is pressed, the RPM package is generated and can be downloaded directly. Further installation instructions can be found on the scanner configuration page or in the readme included in the RPM download package.
OS User requirements
OS | user | group | Restrictions |
---|---|---|---|
AIX | root | system | none |
AIX | other | system | No stats for FC adapters |
AIX | other | staff | No stats for FC adapters and LV, VG objects |
Linux | root | root | none (other uid / gid not supported) |
Alternatively, the BVQ AIX agent can be rolled out automatically to many systems using an AIX NIM server. The download package for AIX includes a script that helps configure the NIM server.
Important!
It is essential that BVQ Server and AIX/Linux clocks are in sync. Please ensure that NTP is configured and active on all monitored systems and the BVQ Server!
The OS Agent cannot be installed or upgraded as long as NTP is not configured!
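Time synchronization can be verified before the agent installation; the exact commands depend on the distribution and NTP implementation in use, so treat the following as examples:

```shell
# Linux (systemd-based): "yes" means the clock is NTP-synchronized
timedatectl show -p NTPSynchronized    # chrony users can also run: chronyc tracking
# AIX: verify that the NTP daemon is active
lssrc -s xntpd
```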
A user with at least read-only access to the VMware vCenter system is required for the operation of the BVQ VMware Scanner. The read-only permission for the user must be defined at the vCenter level; permissions at a lower level (e.g. Datacenter, Cluster, ...) will lead to scan errors.
Please create this user before configuring the BVQ Scanner. We recommend naming the user: bvq
Create or select the right user role
Go to user roles
Duplicate (1) the read-only role and store it as BVQ-Read-only (2) and add the following privileges (3)
Datastore - browse datastore
Profile driven storage - view
Storage views - view
Create the BVQ User for the vCenter
Create the bvq user with the role "BVQ-read-only"
Create it as a vsphere.local or AD user; please remember to enter it correctly into the scanner dialog later.
Add the user to the vCenter (4) and do not forget to define it for all children.
Add the right vCenter Statistics
- Interval duration has to be 5 minutes
- Level 2 is sufficient for standard VMware
- Level 3 should be used for VSANs
High vCenter CPU usage during BVQ Performance scan
During the BVQ performance scan of a vCenter server the CPU usage of the vCenter server will increase. Please monitor the vCenter server utilization depending on the workload to avoid performance degradation.
Gather information for BVQ Scanner configuration
BVQ scanners need the following information to be configured for each vCenter system:
- vCenter IP address or hostname
- vCenter user domain
- vCenter ID and password of the bvq user
Preparation for the BVQ Server
For BVQ Servers which are gathering information from NetApps and vCenters, the correct DNS configuration is important.
Make sure that the BVQ Server, NetApp systems and vCenters are in the same domain and have the same DNS server configured.
This is required to match the DNS names of the NFS Datastores to the corresponding IP addresses of the NFS file shares on NetApp systems.
Kubernetes (k8s) clusters are scanned via two different methods:
- Kubernetes API Server for topology information
- BVQ Prometheus Server for topology & performance information
Kubernetes API Server
To gain access to the k8s API server the following preparations must be made:
- Create a CustomResourceDefinition (CRD) to set up a k8s cluster as a master grouping object (MGO) definition for BVQ
- Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster
- Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster
- Create a ServiceAccount for authentication
- Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole
Use kubectl apply -f to create the expected objects. You can edit and use the all-in-one preparation YAML file to set up all requirements in one step.
(Make sure all 5 objects are created properly - sometimes the MasterGroupingObject creation fails due to the delayed creation of the CustomResourceDefinition.)
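Assuming the combined preparation file is saved as bvq-preparation.yaml (the file name is illustrative), the apply-and-verify step could look like this:

```shell
kubectl apply -f bvq-preparation.yaml
# Verify that all five objects exist; re-apply if the MasterGroupingObject
# failed because the CRD had not been established yet.
kubectl get crd | grep -i mastergrouping
kubectl get clusterrole,clusterrolebinding | grep -i bvq
kubectl get serviceaccount --all-namespaces | grep -i bvq
```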
CustomResourceDefinition
Create a CustomResourceDefinition (CRD) to set up a k8s cluster as master grouping object (MGO) definition for BVQ
MasterGroupingObject
Create a MasterGroupingObject instance (bound to the CRD) for the k8s cluster
Edit/adjust the values for clusterName, customer, location, dc, contact, email & phone to the required information.
IMPORTANT: clusterName will represent the name of the k8s cluster within BVQ, so choose a meaningful name (for example: Prod-Cluster-01).
ClusterRole
Create a ClusterRole to get read-only (get, list, watch) access to the k8s cluster
Read-only permissions (get, list, watch) are required.
apiGroups may be applied via a wildcard ('*') to get access to all API groups; otherwise the apiGroups given in the example must be set.
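A ClusterRole of this shape satisfies the requirement; the object name is illustrative, and the preparation YAML shipped with BVQ remains authoritative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bvq-readonly            # illustrative name; use the one from your preparation YAML
rules:
  - apiGroups: ["*"]            # wildcard: all API groups; restrict if required
    resources: ["*"]
    verbs: ["get", "list", "watch"]   # read-only access only
```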
ServiceAccount
Create a ServiceAccount for authentication
The token created for this ServiceAccount is needed to set up a BVQ scanner config for the k8s cluster.
The namespace may be adjusted to another Kubernetes namespace. Remember to also edit the namespace set in the ClusterRoleBinding.
IMPORTANT: With k8s version 1.24 the LegacyServiceAccountTokenNoAutoGeneration feature gate is beta and enabled by default. Use this guide to create a non-expiring token (recommended).
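On clusters where tokens are no longer auto-generated, the standard Kubernetes way to obtain a non-expiring token is a Secret of type kubernetes.io/service-account-token bound to the ServiceAccount. A sketch, assuming the ServiceAccount name bvqscan (mentioned in the scanner configuration below) and an illustrative namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bvqscan                 # ServiceAccount name used by the BVQ scanner
  namespace: default            # illustrative; adjust as noted above
---
# This Secret makes Kubernetes generate a non-expiring token for bvqscan.
apiVersion: v1
kind: Secret
metadata:
  name: bvqscan-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: bvqscan
type: kubernetes.io/service-account-token
```

The token can then be read with: kubectl get secret bvqscan-token -n default -o jsonpath='{.data.token}' | base64 -d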
ClusterRoleBinding
Create a ClusterRoleBinding to bind the ServiceAccount to the ClusterRole
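A minimal ClusterRoleBinding sketch; the role and subject names are illustrative and must match the ClusterRole and ServiceAccount actually created from your preparation YAML:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bvq-readonly-binding    # illustrative name
subjects:
  - kind: ServiceAccount
    name: bvqscan               # the BVQ scan ServiceAccount
    namespace: default          # must match the ServiceAccount's namespace
roleRef:
  kind: ClusterRole
  name: bvq-readonly            # must match the ClusterRole name
  apiGroup: rbac.authorization.k8s.io
```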
BVQ Prometheus Server
To get performance and topology data, a custom bvq-prometheus stack must be deployed in the k8s cluster via helm. This helm chart will install a bvq-prometheus server as a Deployment with an 8 GB persistent volume (configurable via values.yaml) and bvq-prometheus-node-exporters as a DaemonSet (helm dependency).
See values.yaml and the other configuration files in the bvq-prometheus-helm.zip file for further information about the bvq-prometheus configuration.
Execute the following steps to deploy the bvq-prometheus helm chart to the k8s cluster:
- Create a namespace (e.g. bvq-prometheus) for the prometheus stack: kubectl create namespace bvq-prometheus
- Unzip the helm files → bvq-prometheus-helm.zip
- For external communication an ingress for the bvq-prometheus server is needed. Edit prometheus.ingress.hosts in values.yaml to set a proper ingress.
- Run helm dependency build / helm dependency update
- Install the helm chart via helm install -n bvq-prometheus -f values.yaml bvq-prometheus ./
- Check the installation with kubectl get pods -n bvq-prometheus - a pod called bvq-prometheus-* and a set of bvq-prometheus-bvq-node-exporter-* pods should be in running state
Gather information for BVQ Scanner configuration
BVQ scanners need the following information to be configured for each k8s cluster:
- API server IP address or DNS name (FQDN) - Default TCP port: 6443
- API Token of the bvqscan ServiceAccount
- Prometheus URL or IP (if NodePort service is used)
- Prometheus user & password (optional, if BasicAuth of Prometheus is used)
Preparation for the BVQ Server
For BVQ Servers which are gathering information from Kubernetes clusters, the correct DNS configuration is important.
Make sure that the BVQ Server & Kubernetes clusters are in the same domain and have the same DNS server configured.