
Topology Persist fails due to a missing location code of a volume in the VIOS DB.
This problem often occurs after rebooting the VIOS and/or upgrading VIO Servers.

Note:
TopoPersist will fail as soon as it hits the first issue.
This means there may be more VIOS and more disks with the same issue that need to be fixed before the data can be persisted successfully!

Applies to

All BVQ Versions and PVM Systems

Procedure

The error message in the current_TopoPersist.log will look similar to the following. The log can be found on the BVQ Scanner or under:
C:\ProgramData\SVA\BVQ\bvq-server\data\scanner-data\POWERVM\<scanner_name>\logs\done

Code Block
LC is missing and volume is NOT an iSCSI volume
Code Block
2024-07-10T15:02:30,770 ERROR [PersistExecutor_1]: Error during command execution [HMC] [TopoPersist] (BaseJobExecutor)
de.sva.bvq.exception.BvqScanException: Cannot persist physical volume 01MkMwMzMDUyYzDEwMUzMDADAwMDAMTwMyNTM4jAyMAxNjAzNDMzMzQW52bWU= because LC is missing and volume is NOT an iSCSI volume
	at de.sva.bvq.persister.powervm.commands.PersistPhysicalVolumesCommand.detectLocationCode(PersistPhysicalVolumesCommand.java:89) ~[bvq-powervm-persist-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.powervm.commands.PersistPhysicalVolumesCommand.mapToPhysicalVolumeToVirtualIoServer(PersistPhysicalVolumesCommand.java:67) ~[bvq-powervm-persist-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.powervm.commands.PersistPhysicalVolumesCommand.executeCommand(PersistPhysicalVolumesCommand.java:46) ~[bvq-powervm-persist-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.powervm.commands.AbstractPersistPowerVmCommand.execute(AbstractPersistPowerVmCommand.java:51) ~[bvq-powervm-persist-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.powervm.commands.AbstractPersistPowerVmCommand.execute(AbstractPersistPowerVmCommand.java:29) ~[bvq-powervm-persist-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.jobs.BaseJobExecutor.executeCommand(BaseJobExecutor.java:280) ~[bvq-one-persister-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.jobs.BaseJobExecutor.executeCommands(BaseJobExecutor.java:300) ~[bvq-one-persister-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.jobs.BaseJobExecutor.executeJobInternal(BaseJobExecutor.java:173) ~[bvq-one-persister-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.jobs.AbstractTrackingProgressJobExecutor.workOnQueue(AbstractTrackingProgressJobExecutor.java:120) ~[bvq-one-persister-2023.H2.5.jar!/:?]
	at de.sva.bvq.persister.jobs.AbstractTrackingProgressJobExecutor.lambda$executeJob$0(AbstractTrackingProgressJobExecutor.java:90) ~[bvq-one-persister-2023.H2.5.jar!/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:840) [?:?]
2024-07-10T15:02:30,770 ERROR [PersistExecutor_1]: Command [de.sva.bvq.jobexecution.commands.CommandExecution@3f6ad68c] failed, failing job-execution [Fe0HMC30] [TopoPersist] (BaseJobExecutor)

Solution

Hint: The TopoPersist will fail at the first occurrence of the event: Cannot persist physical volume 01MkMwMzMDUyYzDEwMUzMDADAwMDAMTwMyNTM4jAyMAxNjAzNDMzMzQW52bWU= because LC is missing and volume is NOT an iSCSI volume
In multi-VIOS environments this can mean that more than one disk with a missing location code exists in the environment related to the reporting HMC. Therefore you need to check further.
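
To quickly pull the reported uniqueDeviceID out of the log (for example to use it as a search string in the steps further below), a small helper script can be used. This is only a convenience sketch and not part of BVQ; it assumes nothing more than the error line format shown above.

Code Block
#!/usr/bin/env python3
# Sketch: extract the uniqueDeviceID(s) reported by
# "Cannot persist physical volume ... because LC is missing" errors
# from a current_TopoPersist.log. Assumes only the error line format shown above.
import re
import sys

PATTERN = re.compile(r"Cannot persist physical volume (\S+) because LC is missing")

log_path = sys.argv[1] if len(sys.argv) > 1 else "current_TopoPersist.log"
with open(log_path, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match:
            print("Affected uniqueDeviceID:", match.group(1))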

Further Checks: 

First we need the topology data. To find the problematic volumes and to fix the problem, please analyze the latest topology file, which can be found here:

C:\ProgramData\SVA\BVQ\bvq-server\data\scanner-data\POWERVM\<scanner_name>\topology\error\

or collect a full support package, which also contains the topology data.
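
The steps below describe the manual analysis of this file in an editor. If you prefer to list all affected volumes at once, a small script can search the JSON programmatically. This is only a sketch and not part of BVQ: it relies on the field names shown in the steps below ("locationCode", "uniqueDeviceID", "volumeName", "partitionName") and does not assume any particular nesting; it simply walks the whole JSON tree.

Code Block
#!/usr/bin/env python3
# Sketch: list all physical volumes with "locationCode": null in a BVQ PowerVM
# topology file (de.sva.bvq.powervm.model.VirtualIOServer.json).
# The field names are taken from this article; the JSON nesting is not assumed,
# the whole tree is walked recursively.
import json
import sys


def walk(node, vios_name=None, hits=None):
    """Visit every JSON object and collect those where locationCode is null."""
    if hits is None:
        hits = []
    if isinstance(node, dict):
        # Remember the closest enclosing VIOS name, if this object carries one.
        vios_name = node.get("partitionName", vios_name)
        if "locationCode" in node and node["locationCode"] is None:
            hits.append((vios_name, node.get("volumeName"), node.get("uniqueDeviceID")))
        for value in node.values():
            walk(value, vios_name, hits)
    elif isinstance(node, list):
        for item in node:
            walk(item, vios_name, hits)
    return hits


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "de.sva.bvq.powervm.model.VirtualIOServer.json"
    with open(path, encoding="utf-8") as fh:
        data = json.load(fh)
    for vios, volume, udid in walk(data):
        print(f"VIOS={vios}  volume={volume}  uniqueDeviceID={udid}")

After the disks have been fixed (steps below) and the next scan has run, the same check can be repeated on the newest topology file to confirm that no volumes with a missing location code remain.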

Steps:

  1. Unzip the topology file and open the file de.sva.bvq.powervm.model.VirtualIOServer.json.
    This file shows the VIOS information collected from the HMC.

  2. We recommend converting the JSON into a pretty print format, e.g. with Notepad++ and a JSON plugin or the editor of your choice, as this is far more human-readable than the single-line JSON provided by the HMC API call.

  3. Search for the following statement: "locationCode": null

  4. Depending on how many findings you have for this statement, you will need to repeat this and the following steps multiple times.

  5. Scroll up in the JSON file until you find the name of the VIOS to which the specific disk belongs.

  6. Go to the system and execute the script provided by IBM (see step 10 below).

    2024-09-30 14_48_51-Bearbeiten - PowerVM Topology Scan fails due to missing location code in VIOS DB.png


  7. Fold all text blocks in the editor.

    2024-09-30 14_49_55-Find.png

  8. There is only one foldable text block left.
    Search for the uniqueDeviceID which was already displayed in the log above:
    "Cannot persist physical volume 01MkMwMzMDUyYzDEwMUzMDADAwMDAMTwMyNTM4jAyMAxNjAzNDMzMzQW52bWU="

    2024-09-30 14_51_04-Bearbeiten - PowerVM Topology Scan fails due to missing location code in VIOS DB.png

  9. You have found the correct volume: the volume information is now unfolded, as is the information of the corresponding VIOS.
    You can see:
    "uniqueDeviceID" - of the volume
    "volumeName" - of the volume
    "locationCode" - internal disk location → null is the problem
    "partitionName" - affected VIOS system

    2024-09-30 14_50_44-.png

  10. Log on to the affected VIOS system and run cfgmgr to re-check the configuration of the VIOS.
    Then execute the script (attached file: cleanup_cmdb_with_logging.sh) on the VIOS.
    (warning) Important: These are very profound changes! Only execute this script on HA-tested systems!

  11. Check with "lscfg -vl hdiskX" whether the volume is now showing a location code.
    Repeat the procedure for all affected VIOS.
    As soon as all hdisks are fixed, the topo persist should succeed.