Copyright (C) 2010 Fidelity National Information Services, Inc.
April 26, 2010
Revision History

Revision 1.3    26 April 2010
Revision 1.2    14 June 2006
Revision 1.1    03 May 2006
Revision 1.0    15 November 2005
GT.M Group, Fidelity National Information Services, Inc., 2 West Liberty Boulevard, Suite 300, Malvern, PA 19355, United States of America
GT.M Support: +1 (610) 578-4226 / Switchboard: +1 (610) 296-8877 / Fax: +1 (484) 595-5101 / http://www.fis-gtm.com / gtmsupport@fnis.com
Follow these steps if you simply want to upgrade your database as expeditiously as possible. In this context, field test releases of V5 (e.g., V5.0-FT01) are considered V4 releases. For details, go to Background. There are two approaches: the Traditional Upgrade and the In-place Upgrade.
The block header in GT.M V5 databases occupies 16 bytes vs. 8 bytes [UNIX] / 7 bytes [OpenVMS] on prior releases of GT.M through V5.0-FT02. Thus, for a given block size, the maximum record size in GT.M V5.0-000 is less than with prior releases of GT.M.
The advantage of the traditional database migration technique is that database files created with GT.M V5 have a maximum file size limit of 134,217,728 (128M) blocks, which means that the maximum database file size using the popular 4KB block size is 512GB, and the largest possible database file size is a little less than 8TB. With GT.M V4 database files converted to V5, the maximum number of blocks is 64M, which means that with a 4KB block size, the maximum database file size is 256GB, and the largest possible database file size is a little less than 4TB.
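The size limits above follow directly from the block-count limits. A quick sketch checks the arithmetic (the 64KB maximum block size is inferred from the "a little less than 8TB / 4TB" figures quoted above):

```python
# Maximum database file sizes implied by the block-count limits.
def max_db_bytes(max_blocks, block_size):
    """Upper bound on a database file's size, in bytes."""
    return max_blocks * block_size

M = 1024 * 1024
GB = 1024 ** 3
TB = 1024 ** 4

# Database files created with V5: 128M blocks.
assert max_db_bytes(128 * M, 4096) == 512 * GB        # 4KB blocks -> 512GB
assert max_db_bytes(128 * M, 64 * 1024) == 8 * TB     # 64KB blocks -> ~8TB

# V4 database files upgraded to V5: 64M blocks.
assert max_db_bytes(64 * M, 4096) == 256 * GB         # 4KB blocks -> 256GB
assert max_db_bytes(64 * M, 64 * 1024) == 4 * TB      # 64KB blocks -> ~4TB
```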
Using DSE, increase the Reserved Bytes parameter in each database file header by 8 (UNIX) or 9 (OpenVMS).
Run dbcertify scan on each V4 database file to determine whether there are any records that are too large to fit in a V5 database with the same block size. If there are, a Traditional Upgrade will be required for this database file.
Bring down all V4 GT.M processes and execute mupip rundown -file on each database file to ensure that there are no processes accessing the database files. Change the file permissions to read-only.
For every database region that has GT.M replication turned on, note down the Region Seqno (DSE DUMP -FILE for each database region displays this value). If upgrading to a version prior to GT.M V5.1-000, also note down the Resync Seqno (displayed alongside the Region Seqno) for each database region.
Use V4 mupip extract -format=binary to extract data in each database file. Archive V4 database files (and journal files, if you normally archive journal files).
Make copies of all global directory files. Use V5 GDE to open and exit each global directory to update it to V5 format. (This step is not needed when upgrading from V5 field test releases.)
For those database files identified by the dbcertify scan as containing records too large to fit in a V5 database file with the same block size, use V5 GDE to increase the block size; since the initial allocation and extension sizes are specified in blocks, also consider whether they should be reduced.
Use V5 mupip create to create new database files.
Use V5 mupip load to load the data extracted from V4 databases into the new V5 database files.
Use V5 mupip set -replication=on to enable GT.M replication on all appropriate database files. Set the Region Seqno for each replicated database region to the value noted down earlier from the corresponding region (use the DSE CHANGE -FILE -REG_SEQNO=<noted-seqno-in-hexadecimal> command for this purpose). If, as a result of the upgrade, the set of database regions with replication turned on changes, there will no longer be a one-to-one correspondence between the pre-upgrade and post-upgrade replicated database regions. In this case, compute the maximum (across all replicated database regions) of the noted Region Seqno values, and set the Region Seqno of ALL the post-upgrade replicated database regions to that maximum. If this replication instance will be brought up as a secondary after the upgrade, note that the receiver server should be started with the -updateresync qualifier. If upgrading to a version prior to GT.M V5.1-000, set the Resync Seqno for each replicated database region to the value noted down earlier from the corresponding region (use the DSE CHANGE -FILE -RESYNC_SEQNO=<noted-seqno-in-hexadecimal> command for this purpose). As with the Region Seqno, if there is no one-to-one correspondence between the pre-upgrade and post-upgrade replicated database regions, compute the maximum of the noted Resync Seqno values and set the Resync Seqno of ALL replicated database regions to that value.
Recompile routines and rebuild shared libraries (can be done in parallel with the above).
Use GT.M V5 normally.
The advantage of an in-place upgrade is that in a single site implementation of GT.M, the time that applications are unavailable is minimized (tens of seconds to minutes in the typical case).
The following steps can be taken while GT.M V4 applications are normally active.
Using DSE, increase the Reserved Bytes parameter in each database file header by 8 (UNIX) or 9 (OpenVMS).
Run dbcertify scan on each V4 database file to determine whether there are any records that are too large to fit in a V5 database with the same block size.
Run V5CBSU as a V4 process (this is an M program that will compile and run on all V4 versions of GT.M). Optionally, run V4 mupip reorg before V5CBSU - this may reduce the time taken by V5CBSU in the event that the output of the preceding step is large.
Take a backup of all the database files, in the event that a backup is not planned for when the application is down.
Bring down all V4 GT.M processes and execute mupip rundown -file on each database file to ensure that there are no processes accessing the database files.
Take a backup of all the database files. Alternatively, use mupip journal -recover -forward to apply journal files to backups of database files taken above. Note that this may involve multiple generations of journal files for each database file. Archive journal files, if the policy is to archive journal files.
Use dbcertify certify to certify all the database files as being ready to be upgraded to V5 format.
Use V5 mupip upgrade to upgrade the database file header from V4 format to V5 format.
Recompile routines and rebuild shared libraries (can be done in parallel with the above).
Make copies of all global directory files. Use V5 GDE to open and exit each global directory to update it to V5 format. (This step is not needed when upgrading from V5 field test releases.)
Run dse maps -restore_all on the upgraded database. This will rebuild all the maps and reflect the blocks used in the database file.
Resume normal GT.M applications with V5.
While GT.M V5 applications are in normal use, run V5 mupip reorg -upgrade to convert any V4 format blocks that may be read but may not be updated (e.g., history records) to V5 format.
At the heart of GT.M's transaction management capability is the "transaction number" (TN). Each index or data block has a "block transaction number" (BTN) field. At each database update, the BTN field of all the altered blocks has the "current transaction number" (CTN) recorded within it. When a process references a block, by comparing the BTN with the BTN the last time it referenced that block, the process can determine whether the block has changed or not. Having a monotonically increasing CTN for the database, and being able to compare this with the BTNs is therefore crucial to being able to maintain consistency and structural integrity of the database during normal operation with hundreds or thousands of concurrent processes simultaneously updating the database.
For all GT.M releases through the V4 series[1], the TN was an unsigned 32-bit integer, which meant that the maximum transaction number was 4,294,967,295. Before the maximum transaction number was reached, a database had to have all of its BTNs reset with a mupip integ -tn_reset command (see Chapter 6 of the GT.M Administration and Operations manual), a procedure that requires stand-alone access to the database, and which can take several hours on a large production database. GT.M was not very user friendly in that while a MUPIP INTEG would generate a warning when the CTN was within approximately 300 million of the limit (i.e., more than 93% of the TN range was used), normal operation would not generate any warning. Causing the transaction number to wrap around could damage the database.
Furthermore, as computer systems have become faster, production databases at large financial institutions can require a TN RESET every few months. Unless logical dual site operation of an application is implemented, a TN RESET requires that the GT.M-based application not be available for those few hours.
Effective GT.M V5.0-000, the TN becomes an unsigned 64-bit number with a maximum value of 18,446,744,073,709,551,615. This means that if a financial institution needs an annual TN reset with GT.M V4, it would need a TN reset every 4,294,967,295 years with GT.M V5. A transaction processing application that runs flat out at a non-stop rate of 1,000,000 updates per second would need a TN reset approximately every 584,554 years. Thus, a 64-bit TN simplifies the operational management of a GT.M application.
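The arithmetic behind these figures is easy to check (a sketch; the exact year count depends on the length of year assumed, and the text's figures are rounded):

```python
# TN range before and after the 64-bit change, and how long the
# 64-bit range lasts at a sustained update rate.
MAX_TN_V4 = 2**32 - 1
MAX_TN_V5 = 2**64 - 1

assert MAX_TN_V4 == 4_294_967_295
assert MAX_TN_V5 == 18_446_744_073_709_551_615

# Ratio of the two ranges: an annual V4 reset becomes a V5 reset
# roughly every 4.3 billion years (exactly 2**32 + 1 times as long).
assert MAX_TN_V5 // MAX_TN_V4 == 2**32 + 1

# At a non-stop 1,000,000 updates per second (Gregorian year assumed):
updates_per_year = 1_000_000 * 31_556_952
years = MAX_TN_V5 / updates_per_year
assert 584_000 < years < 585_000   # ~584,500 years, as quoted above
```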
Since V4 databases only reserve 32 bits of space for TNs, the GT.M V5 database format is different from the V4 format (hence the need to change the major software version number from V4 to V5, as in V4.4-004 to V5.0-000), and a database upgrade is required when migrating from GT.M V4 to GT.M V5. In V5.0-000, the block header requires 16 bytes, vs. 8 bytes in all prior releases through V5.0-FT02. This means that any prior database file that has even a single block containing a single record larger than the block size minus sixteen bytes will require a database file with a larger block size in V5.0-000.
The traditional way to migrate a database is to perform a MUPIP EXTRACT with GT.M V4, and a MUPIP LOAD of the extracted data with GT.M V5. However, this approach requires that the database file be unavailable for the time that it takes to extract and load - potentially several hours - during which the application is not available, unless the application is deployed in a logical dual site configuration.
GT.M V5.0 therefore provides an alternative mechanism to upgrade the database, one that allows most operations to be performed while the database is in use, and reduces the required time that the database is unavailable to - in the typical case - seconds to minutes.
Also, for those who wish to try GT.M V5 before they switch to it, GT.M V5.0-000 has the ability to run with a database format that can almost instantaneously be switched to a V4 format, albeit with some impact on performance when using this feature.
In order to accommodate 64-bit TNs, there are changes to the database file header, as well as to each database index and data block. In order to facilitate a database migration approach that minimizes the amount of time that an application must be unavailable, there are additional fields to support an in-place upgrade.
A further enhancement effective V5.0-000 is that the maximum database file size is 134,217,728 (128M) blocks, whereas in all prior releases, through GT.M V5.0-FT02, the maximum database file size is 67,108,864 (64M) blocks. V4 databases that are upgraded to V5 retain the maximum database file size limit of 64M blocks (which means that with the popular 4KB block size, a single database file can grow to a maximum size of 256GB).
There are a number of changes to the database file header, and these are reflected in the output of the dse dump -fileheader command, as described below:
All transaction number fields (including CTN) are increased in size from 32 bits to 64 bits.
Master Bitmap Size - since the master bitmap size is different for databases created with V5, compared to those upgraded from V4, the master bitmap size is made explicit and visible.
Blocks to Upgrade - in order to support an incremental upgrade, there is a count of the blocks in the database that are still in V4 format.
Desired DB format - for an option to "try before you switch", as well as an incremental database downgrade for those wishing to migrate a database from V5 to V4, there is a field that specifies the format of database blocks that are written out to the database file.
Certified for Upgrade to - in order to accomplish the incremental upgrade, a database must be "pre-certified" using a new dbcertify utility that certifies that a database is ready for an incremental upgrade.
"Hard Stop" transaction number - even if a database is in V5 format with a maximum transaction number of 264 - 1, for users who wish to preserve the ability to downgrade to V4, where the maximum transaction number cannot exceed 232-1, there is a "hard stop" TN. In order to support journal recovery and rollback for databases in V4 format, this needs to be 232-1-128M, or 4,160,749,567 (0xF7FFffff). For a database in V5 format, this needs to be 264-1-256M or 18,446,744,073,441,116,159 (0xFFFFffffEFFFffff)
"Warning" transaction number - when the transaction number crosses this limit, GT.M will generate a warning, and update the warning transaction number to a new value. The new warning TN will be 256M (128M for a database with a Desired db format of V4) greater than the current warning TN unless the current warning TN is within 256M (128M for a Desired db format of V4) of the Hard Stop TN, in which case the new Warning TN will be halfway between the old Warning TN and the Hard Stop TN (so that the frequency of warnings will increase as the Hard Stop TN is approached).
The layout of the database file header has also changed, but this is not of any consequence to users since the only way to interact with the database file header is via the dse change -fileheader and mupip set commands.
Every V5 format database and index block requires a 64-bit BTN. Furthermore, since a database can have both V4 and V5 format blocks within it, there is a need to distinguish between V4 format blocks and V5 format blocks.
The V4 block header is 8 bytes on UNIX, 7 bytes on OpenVMS, and is laid out as follows:
2 bytes - bytes currently in use in the block
1 byte - block level
4 bytes - BTN
In V5, the block header is increased to 16 bytes, and is laid out as follows:
2 bytes - block version number
2 bytes - bytes currently in use in the block
1 byte - block level
8 bytes - BTN
If the value in the first two bytes is 0 through 6, GT.M V5 knows that the block is a V5 block and the field is a block version number; if it is 7 or greater, GT.M V5 knows that it is a V4 block and the field is the number of bytes currently in use in the block. This feature - a true hack - permits a GT.M database to contain both V4 format blocks as well as V5 format blocks, and facilitates the incremental upgrading of V4 databases to V5 with minimal unavailability of applications.
One result of the fact that GT.M V5 has a 16-byte block header, whereas GT.M V4 has a 7 or 8 byte block header is that it is possible to have a record in a V4 database that will not fit in a V5 database with the same block size. The largest record that can fit in a V4 GT.M database on x86 GNU/Linux with a 4KB block size is 4088 bytes, whereas the largest record that can fit in a V5 GT.M database on the same platform is 4080 bytes. If there are any records whose size exceeds blocksize-16, then a new V5 database will need to be created with a larger block size. [2]
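The two-byte discriminator and its record-size consequence can be illustrated with a small sketch (little-endian byte order is assumed here for illustration; the actual on-disk layout is platform dependent):

```python
import struct

# First two bytes of a block header: 0 through 6 means a V5 block
# version number; 7 or greater means a V4 "bytes in use" field (a V4
# block always uses at least its own header, so this field is never
# below 7).
def block_format(header_bytes):
    """Classify a block by its first two header bytes (little-endian assumed)."""
    (first_field,) = struct.unpack_from("<H", header_bytes)
    return "V5" if first_field <= 6 else "V4"

# A V5 block whose version field is, say, 1:
assert block_format(struct.pack("<H", 1)) == "V5"
# A V4 block with 4088 bytes in use:
assert block_format(struct.pack("<H", 4088)) == "V4"

# Record-size consequence of the larger header (4KB blocks, x86 Linux):
BLOCK_SIZE = 4096
V4_HEADER, V5_HEADER = 8, 16
assert BLOCK_SIZE - V4_HEADER == 4088   # largest record in a V4 block
assert BLOCK_SIZE - V5_HEADER == 4080   # largest record in a V5 block
```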
Since GT.M V5 adds support for names longer than 8 characters (see GT.M Long Names Technical Bulletin), as well as optional changes to the collation of null subscripts (see GT.M Null Subscripts Technical Bulletin), the V5 global directory format is different from a V4 global directory format, and must be upgraded.
Please refer to the V5.0-000D Release Notes to upgrade the global directory.
The global directory in use at the time of database upgrade MUST map globals to databases in a way that exactly matches the globals actually resident in those databases. Some sites have more than one global directory, some with reduced or changed mappings; for example, a database may contain more than one global while the global directory maps only a single global to it. This situation can cause the database upgrade procedure to fail, because database certification was not correctly processed. A sign of this issue is MUPIP REORG UPGRADE or a GT.M process failing with the DYNUPGRDFAIL message (block has insufficient room for expansion) after the V5 upgrade.
Journal files are specific to each GT.M version, and a journal file generated by one GT.M version is not compatible with another GT.M version. In particular, GT.M V4.4-004 journal files can only be processed by V4.4-004 and cannot be processed by V5.0-000. This means that a successful database backup as of the point of conversion is strongly recommended.
An online backup that allows the application to continue operating will not suffice for such a backup, since a GT.M mupip backup command creates a snapshot of the database as of the instant that the command is issued, and not as of the time that the command completes.
A GT.M application that is configured for logical dual site operation can be upgraded from V4 to V5 while maintaining continuity of application availability. Please follow the procedures in the Database Replication chapter of the GT.M Administration and Operations Guide. Note that V4 journal files and mupip backup -bytestream files are not compatible with V5.
The traditional approach to upgrading a database file is to extract it on the lower GT.M version with a mupip extract and to load it on the higher GT.M version with mupip load. Both BINARY format (mupip extract -format=binary) as well as the default ZWR format will work; however, the former is likely to be faster.
Since the V4 database files are not modified with this approach, they can simply be archived, and a separate backup is not needed. The mupip extract files can also serve as an alternative backup.
The benefits of a traditional approach are its simplicity and the fact that a database created with GT.M V5.0-000 can grow to a maximum size of 134,217,728 (128M) blocks, whereas a database created with a GT.M V4 release and upgraded to V5 retains a maximum size of 67,108,864 (64M) blocks.
Since a database extract and reload can take hours for large database files, GT.M V5.0-000 provides a method to upgrade database files that, at least in the typical case, requires the application to be down for only seconds to minutes. The in-place upgrade rests on the ability of GT.M V5.0-000 to read database blocks in either V4 or V5 format: whenever a block is updated, it is written in V5 format, so over time the blocks are converted from V4 format to V5 format. Conceptually, this is a three-step process:
Test and Certify: Test whether a database qualifies for an in-place upgrade, and certify it if it is. Except for a small part at the end, which in the typical case is expected to take only seconds to minutes, this step can be performed while the application is operating normally.
Upgrade Database File Header: Upgrade the database file header from V4 format to V5 format. This requires stand-alone access of at most a few seconds.
Use Applications Normally: Use the application normally with GT.M V5: When a database block is read in from disk to the global buffer cache, it is converted from V4 format to V5 format. Any V4 format blocks that are updated are automatically converted to V5 format. Since some database blocks may never be updated under normal operation (e.g., history records), a mupip reorg -upgrade command can be executed in the background to update all V4 format blocks to V5 format blocks.
The critical requirement for being able to upgrade a database from V4 format to V5 format, whether in place or traditional, is that there must be no blocks whose records occupy more than blocksize-16 bytes. This is because database blocks in the global buffer cache are always in V5 format - when GT.M V5 reads a V4 format block into the global buffer cache, it converts it to V5 format. When writing out blocks, they can be written out in V5 format or converted to V4 format. If a database has no blocks with records occupying more than blocksize-16 bytes, upgrading to V5 is a snap. If it does have such blocks, they will need to be dealt with.
There are two types of blocks whose records occupy more than blocksize-16 bytes - those with one record and those with more than one record. The latter can be made suitable for upgrading to V5 format blocks simply by splitting the block, but the former cannot: if a database contains even one block with one record which is larger than blocksize-16 bytes, that database cannot be upgraded in place from V4 to V5. Such a database file will require a new database to be created in V5 with a larger block size, following which the data must be extracted in V4 and loaded in V5.
The dbcertify program is used to scan a V4 database and identify blocks with more than blocksize-16 bytes occupied. The testing and certification process has several steps:
Increase the Reserved Bytes parameter in the database file header by 8 (UNIX/Linux) or 9 (OpenVMS). Note that the Reserved Bytes field is typically zero, so the new values will typically be 8 (UNIX) or 9 (OpenVMS). However, your database files may not be typical, so always check the value of this field with dse dump -fileheader and increment the value accordingly. This will ensure that no new records are created with a record size greater than blocksize-16, even though existing records will remain unaltered. You can run a mupip reorg on the database after setting Reserved Bytes to the new value; this will reduce the number of blocks that dbcertify will need to deal with in the next step.
Verify that the Maximum Record Size parameter is less than or equal to blocksize-16, which it must be in V5. If it is not, you must determine whether your application requires it to be greater than blocksize-16, in which case a database upgrade is not possible: you will need to create a new database in V5 with a larger block size, into which you can load data extracted from the V4 database. If the application permits the Maximum Record Size parameter to be reduced to be less than or equal to blocksize-16, then reduce it accordingly. Once the Maximum Record Size parameter is less than or equal to blocksize-16, dbcertify can be used.
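The readiness criteria in the two steps above amount to a simple predicate. The following is a hypothetical helper, not part of any GT.M utility; its inputs correspond to fields reported by dse dump -fileheader and to the dbcertify scan results:

```python
# Decide whether a V4 database file can be upgraded in place, per the
# criteria above. Hypothetical illustration only.
def in_place_upgrade_ok(block_size, max_record_size,
                        largest_single_record, reserved_bytes,
                        openvms=False):
    needed_reserve = 9 if openvms else 8
    return (reserved_bytes >= needed_reserve              # Reserved Bytes bumped
            and max_record_size <= block_size - 16        # fits a V5 block header
            # Multi-record blocks can be split, but a single record larger
            # than blocksize-16 can never fit a V5 block of this size:
            and largest_single_record <= block_size - 16)

# 4KB blocks, nothing oversized, Reserved Bytes already raised to 8:
assert in_place_upgrade_ok(4096, 4080, 4000, 8)
# A single 4088-byte record cannot fit a V5 block of the same size:
assert not in_place_upgrade_ok(4096, 4080, 4088, 8)
# Reserved Bytes not yet raised:
assert not in_place_upgrade_ok(4096, 4080, 4000, 0)
```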
The dbcertify program has two phases, a scan phase and a certify phase. Fidelity strongly recommends a backup before the certify phase, refer to the Tip in the Journal Files and Backups section for a suggestion on how to obtain a fast backup.
The scan phase runs concurrently with normal operation of GT.M V4. It makes no modifications to the database, but instead produces a report of blocks that are too large to be used by GT.M V5.
The certify phase requires stand-alone access to the database. An M program is provided with GT.M V5.0-000, V5CBSU, which can run concurrently with normal operation of GT.M V4 applications, and which can take the output of the dbcertify scan phase and help reduce the amount of work the certify phase must perform. This reduces the amount of time that dbcertify requires stand-alone access to the database.
Once the certify phase of dbcertify completes, mupip upgrade can be used to upgrade the database file header.
dbcertify assumes that there is no structural damage to the database. Therefore, Fidelity strongly recommends that a mupip integ (without the -fast switch) be performed on the database or a backup copy of the database as soon as possible prior to running dbcertify, and not to start dbcertify if there are any unusual events or unusual messages in the operator log after the mupip integ operation (or the backup operation that created the backup on which mupip integ was run).
The dbcertify certify and mupip upgrade steps are not journaled and are not interruptible. In the event of a hardware or software malfunction during dbcertify or mupip upgrade, the V4 database will need to be restored from a backup.
Although dbcertify and V5CBSU are written to have minimal impact on running applications, the fact that they perform IO means that the impact will not be zero. Whether or not the impact is noticeable depends on the extent to which IO is limiting throughput on your system.
Once the certify phase of dbcertify has completed, the database file header can be upgraded from V4 to V5 with mupip upgrade, an operation that should take a fraction of a second on even the largest databases, since it only affects the file header. Once the database file header has been upgraded, the database can be used by GT.M V5 processes.
A GT.M V5 database has a switch in the file header that specifies whether blocks that are updated are written to the database in V4 format or V5 format. Mupip upgrade will set the switch to cause GT.M V5 processes to write updated blocks in V5 format. To cause GT.M V5 processes to write updated blocks in V4 format, an option which preserves the ability to revert to GT.M V4 on short notice, use a subsequent mupip set -version=v4 command.
Using GT.M V4 blocks in GT.M V5 will incur a small performance penalty, which may or may not be noticeable depending on what is limiting the throughput of your system.
Mupip upgrade will reduce the Reserved Bytes field in the database file header by 8 bytes (UNIX) or 9 bytes (OpenVMS).
Once the database file header has been upgraded, the database can be used normally with GT.M V5 processes. With the default database file header upgrade, any database blocks that are updated are written in V5 format, so that over time an increasing percentage of the blocks will be in V5 format.
Since there may be blocks in V4 format that are never updated (e.g., history records), and as long as they are in V4 format, every read will incur at least a slight performance penalty, mupip reorg -upgrade can be used in the background to convert these blocks to V5 format.
A database file used with the MM access method (supported on OpenVMS only) must either have its access method changed to Buffered Global (BG), in order to use the in-place upgrade method that minimizes downtime, or be upgraded with the traditional method.
Changing the access method from MM to BG requires stand-alone access and is accomplished by the mupip set command.
It is feasible to revert from GT.M V5 to V4. The database must first be downgraded to V4 format before it can be accessed by a GT.M V4 release. This is accomplished with the steps below, assuming that the CTN of the database is less than 4,294,967,295 (0xFFFFFFFF). If the CTN exceeds this number, a mupip integ -tn_reset must first be performed, or the database must be extracted using V5 mupip extract (with the default ZWR format) and loaded with V4 mupip load into a database created with V4 mupip create.
The following steps can take place during normal operation of GT.M V5 processes.
Use mupip set -version=v4 so that any blocks that are updated are written in V4 format.
Use mupip reorg -downgrade to convert all blocks from V5 format to V4 format.
Bring down all V5 GT.M processes and execute mupip rundown -file on each database file to ensure that there are no processes accessing the database files.
Use mupip downgrade to change the database file header from V5 to V4. Note that this will increase the Reserved Bytes field by 8 (UNIX) or 9 (OpenVMS).
Restore the copies of all the saved V4 global directory files.
Use GT.M V4 normally.
dbcertify is distributed with GT.M V5.0-000. Since it is intended to run against V4 databases, concurrently with GT.M V4 processes, it is not installed by the normal GT.M installation script. Instead:
On OpenVMS, dbcertify and v5cbsu are in a VMSINSTAL save set named GTDC50000. The DCL command: DBCERTIFY :== "$..[..]DBCERTIFY.EXE" should be used to make DBCERTIFY an executable command. The GTMDCDEFINE.COM command procedure contains this command.
Please note that GTCD5000 is a different GT.M component, unrelated to dbcertify; do not confuse it with the GTDC50000 save set.
On UNIX, dbcertify and v5cbsu are packaged as a separate tar file dbcertify_V50000_<osname>_<platform>_pro.tar.
Syntax is:
dbcertify scan [-outfile=fn] [-report_only] <db-region-name>
dbcertify certify <scan-phase-outfile>
The DBCERTIFY utility certifies a V4 database as being ready for upgrade to V5. This is a 2-phase process, consisting of a scan phase and a certify phase.
dbcertify scan [-o[utfile]=filename] [-r[eport_only]] region_name
The scan phase identifies the blocks that need to be split as well as records and blocks with single records that are too long to be split. The scan phase can run concurrently with normal application operation of GT.M V4. The region must map to a V4 format database file (V5.0-FT01/V5.0-FT02 databases are V4 format databases).
Normally, dbcertify scan will produce an output file that will be used by dbcertify certify, and optionally by the program V5CBSU. The use of the optional -report_only qualifier will suppress the production of the output file. dbcertify scan can be run as often as needed.
The Reserved Bytes file-header setting must be at least 8 on UNIX and 9 on OpenVMS. The Maximum Record Size of the database must be at least 16 bytes less than the database block size. If the Maximum Record Size is manually reduced with DSE, an error is raised for any record whose length exceeds the new Maximum Record Size, which prevents successful completion of the scan phase.[3]
Scan phase results:
Reports on how many blocks need to be split and if any records exist which are longer than blocksize-16 bytes.
If no errors and if REPORT_ONLY was not specified, generates an output file containing information for the certify phase. If a filename is not specified, the default filename is <dbfilename>.dbcertscan for UNIX and <dbfilename>_DBCERTSCAN for OpenVMS.
The V5CBSU utility can optionally be run after the scan phase and before the certify phase in order to reduce the standalone run time of the certify phase.
The certify phase of DBCERTIFY runs standalone and performs block splits on the blocks identified by the scan phase. If there are no errors, the database is marked as being certified ready for an upgrade to V5 format. This certification is required before MUPIP UPGRADE will operate on a database.
Before running DBCERTIFY, it is recommended that an integrity check be run on the databases (not with the -fast option) or their immediate backups. If the certify phase is separated from the scan phase by any appreciable amount of time, the integrity checks should be re-run, because dbcertify is written to ASSUME that the database's integrity is sound. Specifically, the final certify phase will not run on a database whose kill-in-progress indicator is on. If the database has been moved or recreated since the scan phase was run, the scan phase must be re-run; the database information is stored in the scan phase output file.
dbcertify certify must be run in standalone mode.
Certify phase results:
All "too large" blocks are appropriately split.
Replication and/or journaling are turned off.
The database is marked as certified.
This command requires that $gtm_dist on UNIX or GTM$DIST on OpenVMS be set to the directory of the current GT.M V4 installation.
The dbcertify program along with the V5CBSU utility should be run on the V4 system.
On OpenVMS, gtmdcbuild.com (created during the install) will build v5cbsu.exe if it was not built during the install, and gtmdcdefine.com (also created during the install) will define the symbols DBCERTIFY and V5CBSU. The V5CBSU executable must be built before existing databases can be prepared for upgrade to the new GT.M version.
V5CBSU is a normal M routine for processing the output of dbcertify, and can be run with any V4 GT.M version. It is packaged and distributed with DBCERTIFY.
V5CBSU optionally runs between the SCAN and CERTIFY phases of DBCERTIFY and reduces the work dbcertify -certify must perform in standalone mode. It runs concurrently with GT.M V4 applications and splits level 0 global variable tree blocks identified in the report from the scan phase of DBCERTIFY.
V5CBSU splits as many blocks as it can and, when it completes, reports statistics on how successful it was. It rewrites the scan phase output file to exclude the blocks it successfully processed.
V5CBSU's input is the output file from dbcertify -scan.
The database associated with the file must still exist and be accessible to the utility.
Syntax on UNIX:
$gtm_dist/mumps -run ^v5cbsu scan-phase-output-file-name-path
Syntax on OpenVMS:
Note that V5CBSU.m is packaged as part of DBCERTIFY, but to use it on OpenVMS you need an executable image. When the DBCERTIFY kit (that is, GTDCnnnnn.A) is installed, it creates two command procedure files, GTMDCBUILD.COM and GTMDCDEFINE.COM. During a standard install, V5CBSU.EXE is created using GTMDCBUILD.COM. Set up the DCL command: V5CBSU :== "$..[..]V5CBSU.EXE" where ..[..] represents the absolute path containing the executable. GTMDCDEFINE.COM creates DCL symbols for V5CBSU and DBCERTIFY.
For GT.M V5.3-003 through V5.4-000A, V5CBSU.m uses the utility routines %DH and %EXP, whose M routines are _DH.m and _EXP.m. Use MUMPS (V4 GT.M) to compile V5CBSU.m, _DH.m, and _EXP.m, then use LINK V5CBSU.OBJ,_DH.OBJ,_EXP.OBJ to create a V5CBSU.EXE executable. |
V5CBSU scan-phase-output-file-name
If MUPIP REORG has been run with Reserved Bytes set to at least 8 for UNIX or 9 for OpenVMS, before a dbcertify -scan, the V5CBSU utility has little or nothing to do. |
V5CBSU operates as follows:
It renames the input scan file (created by DBCERTIFY SCAN) with an _orig suffix (for example, mumps.dbcertscan to mumps.dbcertscan_orig) as part of its processing. If a file with the _orig suffix already exists, it is deleted before the input scan file is renamed.
It writes bypassed records to the output scan file, so that DBCERTIFY CERTIFY can later upgrade any of those blocks that require adjustment to the new requirements.
Invoking the entryref dump^V5CBSU (instead of the usual ^V5CBSU) allows for a diagnostic mode of operation where V5CBSU dumps the contents of the scan file. It does not touch either the database or the scan file. This is usually needed for debugging purposes.
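The rename-then-rewrite handling of the scan file described above can be sketched in a few lines. This is an illustrative Python sketch only, assuming the _orig-suffix behavior documented above; the helper name and filenames are hypothetical and not part of GT.M:

```python
import os

def rotate_scan_file(scan_file):
    """Sketch of V5CBSU's scan-file handling: preserve the original
    DBCERTIFY SCAN output under an _orig suffix before the utility
    rewrites the scan file. Hypothetical helper, not a GT.M API."""
    orig = scan_file + "_orig"
    if os.path.exists(orig):
        os.remove(orig)        # a stale _orig copy is deleted first
    os.rename(scan_file, orig)  # keep the input scan file as *_orig
    return orig                 # V5CBSU then writes a new scan_file
```

For example, `rotate_scan_file("mumps.dbcertscan")` leaves the original contents in mumps.dbcertscan_orig, matching the example above.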
DSE incorporates new values, fields, and qualifiers to support 64-bit transaction numbers. The block and file header output are modified and restructured to accommodate new data elements.
The dse dump command displays the database blocks, records, or file headers.
-B[LOCK]=block_number
Displays the block transaction number as a 64-bit value and outputs the version number of the block format in the block header.
Example:

DSE> dump -block=2

V4 format output:

Block 2   Size 13   Level 0   TN 1
Rec:1  Blk 2  Off 8  Size B  Cmpc 0  Ptr 4  Key ^a
      8 : | B 0 0 0 61 0 0 4 0 0 0 |
          | . . . . a . . . . . .  |

V5 format output:

Block 2   Size 1B   Level 0   TN 1   V5
Rec:1  Blk 2  Off 10  Size B  Cmpc 0  Ptr 4  Key ^a
     10 : | B 0 0 0 61 0 0 4 0 0 0 |
          | . . . . a . . . . . .  |
-F[ILEHEADER]
Outputs new fields that are associated with V5, as noted below. Existing transaction number fields that were formerly 32 bits and which are 64 bits in V5, are now displayed as 64-bit fields. Note that the sequence numbers used in replication have always been 64-bit fields whose length is unchanged in V5.
V5.0-000 adds fields to the file header in conjunction with the use of helper processes on the secondary. This adds the qualifier -updproc to the dump -fileheader command to dump those additional fields. Refer to the GT.M Update Helper Processes Technical Bulletin for details. |
Data Elements |
Description |
Master Bitmap Size |
The number of 512-byte disk blocks in the master bitmap. This is 32 for a V4 database upgraded to V5, or 64 for a database created in V5. |
Blocks to Upgrade |
The number of database blocks, including local bit maps, which are in V4 format. Except for a database in transition from V4 format to V5 format, this will normally be zero. |
Desired DB Format |
The format to use when writing database blocks back out to disk. Values are "V4" and "V5". |
Certified for Upgrade to |
The format, if any, for which the database has been certified. The only value is "V5". |
Maximum TN |
The "hard-stop" TN for this region. |
Maximum TN Warn |
TN at which the next tn_reset warning will be sent to the operator log/console. |
Example:

DSE> dump -fileheader

V4 format output:

File                    /gtm/v44004/mumps.dat
Region                  DEFAULT
Date/Time               06 MAY 2005 14:33:42 [$H = 60026,52422]
Access method           BG
Global Buffers          1024
Reserved Bytes          0
Block size (in bytes)   1024
Maximum record size     256
Starting VBN            49
Maximum key size        64
Total blocks            0x00000065
Null subscripts         FALSE
Free blocks             0x00000062
Last Record Backup      0x00000001
Extension Count         100
Last Database Bckup     0x00000001
Number of local maps    1
Last Bytestream Bckup   0x00000001
Lock space              0x00000028
In critical section     0x00000000
Timers pending          0
Cache freeze id         0x00000000
Flush timer             00:00:01:00
Freeze match            0x00000000
Flush trigger           960
Current transaction     0x00000001
No. of writes/flush     7
Create in progress      FALSE
Modified cache blocks   0
Reference count         1
Wait Disk               0
Journal State           DISABLED
Mutex Hard Spin Count   128
Mutex Sleep Spin Count  128
Mutex Spin Sleep Time   2048
KILLs in progress       0
Replication State       OFF
Region Seqno            0x0000000000000001
Resync Seqno            0x0000000000000001
Resync transaction      0x00000001

V5 format output:

File                      /gtm/v50000/mumps.dat
Region                    DEFAULT
Date/Time                 06 MAY 2005 14:33:47 [$H = 60026,52427]
Access method             BG
Global Buffers            1024
Reserved Bytes            0
Block size (in bytes)     1024
Maximum record size       256
Starting VBN              129
Maximum key size          64
Total blocks              0x00000065
Null subscripts           NEVER
Free blocks               0x00000062
Standard Null Collation   FALSE
Free space                0x00006000
Last Record Backup        0x0000000000000001
Extension Count           100
Last Database Backup      0x0000000000000001
Number of local maps      1
Last Bytestream Backup    0x0000000000000001
Lock space                0x00000028
In critical section       0x00000000
Timers pending            0
Cache freeze id           0x00000000
Flush timer               00:00:01:00
Freeze match              0x00000000
Flush trigger             960
Current transaction       0x0000000000000001
No. of writes/flush       7
Maximum TN                0xFFFFFFFFDFFFFFFF
Certified for Upgrade to  V5
Maximum TN Warn           0xFFFFFFFF5FFFFFFF
Desired DB Format         V5
Master Bitmap Size        64
Blocks to Upgrade         0x00000000
Create in progress        FALSE
Modified cache blocks     0
Reference count           1
Wait Disk                 0
Journal State             DISABLED
Mutex Hard Spin Count     128
Mutex Sleep Spin Count    128
Mutex Spin Sleep Time     2048
KILLs in progress         0
Replication State         OFF
Region Seqno              0x0000000000000001
Resync Seqno              0x0000000000000001
Resync trans              0x0000000000000001
-BL[OCK]=block_number -TN[=transaction_number]
The change command with the -block and -tn qualifiers changes the transaction number for the given block. It accepts a 64-bit (16 hex digit) value when not running in compatibility mode; in compatibility mode, the value is limited to a 32-bit (8 hex digit) value. When a change command does not include -TN=, DSE sets the transaction number to the current transaction number. Note that manipulating the block transaction number affects whether mupip backup -bytestream backs up the block. The -TN qualifier is incompatible with all qualifiers except -block, -bsiz, and -level.
Example:

V4 format:
DSE> change -block=3 -tn=FFFFFFFF

V5 format:
DSE> change -block=3 -tn=FFFFFFFFFFFFFFFF
-FI[LEHEADER]
The dse change -fileheader command enables modification of specific fields in the file header. The new qualifiers, blks_to_upgrade, cert_db_ver, db_write_fmt, mbm_size, max_tn, and warn_max_tn, allow modification of the corresponding new file header fields: blocks to upgrade, certified for upgrade to, desired DB format, master bitmap size, maximum TN, and maximum TN warn.
File header field |
Description |
blks_to_upgrade |
Modifies the blocks to upgrade field in the file-header. Note: unless a database is damaged, e.g., in recovering from a system crash when journaling is not in use, it should never be necessary under normal operation to modify this field. If the current value of blks_to_upgrade in the file-header is suspect, a mupip integ of the database (not a mupip integ -fast) will repair the counter so long as there are no other integrity errors in the database. The value is an integer from 0 to the number of blocks in the database. |
cert_db_ver |
Modifies the certified for upgrade to version. The only accepted value is V5. |
db_write_fmt |
Modifies the desired db format for database blocks written out (to disk). Accepted values are V4 and V5. |
mbm_size |
Modifies the master bitmap size field in the file-header. Note: it should not normally be necessary to modify this field. Do not modify it except when so instructed by GT.M Support. |
max_tn |
Modifies the maximum tn field. Accepted values are:
For db_write_fmt set to V5: from the current tn up to a maximum of (2**64 - 256M).
For db_write_fmt set to V4: from the current tn up to a maximum of (2**32 - 128M).
The value is specified in hex. |
warn_max_tn |
Modifies the maximum tn warn field in the file-header. This value can be set in the range from the current tn up to max_tn. Note that this field is updated automatically to the next warning point whenever a warning is issued or when max_tn is set. The value is specified in hex. |
Modify these fields only when so instructed by GT.M Support. |
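As a quick arithmetic check of the max_tn limits stated in the table above (assuming, as is conventional, that M denotes 2**20), the maximum hex values implied by those formulas are:

```python
# Upper bounds for max_tn as stated in the table above (M = 2**20).
M = 2 ** 20

V5_MAX_TN = 2 ** 64 - 256 * M   # with db_write_fmt set to V5
V4_MAX_TN = 2 ** 32 - 128 * M   # with db_write_fmt set to V4

# DSE takes these values in hex:
print(format(V5_MAX_TN, "016X"))   # FFFFFFFFF0000000
print(format(V4_MAX_TN, "08X"))    # F8000000
```

These values follow from the formulas as written; the actual hard-stop TN shown by dump -fileheader for a given region is what DSE reports for that region.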
The following qualifiers allow 64-bit (16 hex digit) values when not running in compatibility mode. In compatibility mode, the values are limited to 32-bit.
-CU[RRENT_TN]=transaction_number
Changes the current transaction number for the current region. This qualifier has implications for mupip backup -bytestream. Raising -current_tn can correct "block transaction number too large" errors.
-B[YTESTREAM]=transaction_number
Changes the transaction number in the file header of the last -bytestream backup to the value specified. Use this qualifier only in conjunction with the -fileheader qualifier. For compatibility with prior versions, this can still be specified as -b_incremental.
-B_R[ECORD]=transaction_number
Changes the transaction number in the file header of the last -record backup to the value specified. Use this qualifier only in conjunction with the -fileheader qualifier.
-D[ATABASE]=transaction_number
Changes the transaction number in the file header of the last -database backup to the value specified. Use this qualifier only in conjunction with the -fileheader qualifier. For compatibility with prior versions, this can still be specified as -b_comprehensive.
The dse eval command displays a number in both hexadecimal and decimal formats; use it to translate a hexadecimal number to decimal and vice versa. The -decimal and -hexadecimal qualifiers specify the input base for the number. The -number qualifier is required. dse eval now supports 64-bit values for all these qualifiers.
V4 format:
DSE> eval -h -n=FFEEFFDD
Hex: FFEEFFDD Dec: 4293853149
DSE> eval -d -n=4293853149
Hex: FFEEFFDD Dec: 4293853149

V5 format:
DSE> eval -h -n=FFFFEEEEDDDDFFFF
Hex: FFFFEEEEDDDDFFFF Dec: 18446725308424781823
DSE> eval -d -n=18446725308424781823
Hex: FFFFEEEEDDDDFFFF Dec: 18446725308424781823
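The translations dse eval performs are plain base conversions and can be reproduced in a few lines of Python (the helper names here are illustrative, not part of GT.M):

```python
def eval_hex(h):
    """Translate a hexadecimal string to decimal, as dse eval -h does."""
    n = int(h, 16)
    return "Hex: %X Dec: %d" % (n, n)

def eval_dec(d):
    """Translate a decimal value to hexadecimal, as dse eval -d does."""
    n = int(d)
    return "Hex: %X Dec: %d" % (n, n)

# The 64-bit value from the V5 example above:
print(eval_hex("FFFFEEEEDDDDFFFF"))
# Hex: FFFFEEEEDDDDFFFF Dec: 18446725308424781823
```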
Mupip processing includes changes to several of its operations such as backup, restore, create, integ, load, set, journal show and journal extract options.
Mupip backup now supports 64-bit transaction number qualifiers.
-T[RANSACTION]=transaction-number
Specifies a starting transaction, which causes backup -bytestream to copy all blocks that have been changed by the specified transaction and all subsequent transactions. Transaction numbers are now 16 digit hexadecimal numbers.
GT.M V5 mupip backup will not work with a V4 database. If doing a bytestream backup over a TCP connection, the receiving and sending systems must both be running the same version of GT.M with the same database file header version. |
Mupip restore integrates one or more backup -bytestream files into a corresponding database. The transaction number in the first bytestream backup file must be one more than the current transaction number of the database, and the first transaction number of each file in the sequence of files must be one more than that in its predecessor. Otherwise, mupip restore will terminate with an error and refuse to perform the restore. mupip restore now supports 64-bit transaction number qualifiers, when running with a database with a V5 header.
Mupip restore cannot make use of a V4 format backup (bytestream or database). |
Files created with V5.0-000 mupip create now support 128M GDS data and index blocks instead of the 64M in V4, as a result of the larger master bitmap.
V4 databases upgraded to V5 retain a maximum database file-size limit of 64M blocks. |
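The file-size consequences of these block-count limits can be checked arithmetically. This is a sketch of the arithmetic only; the 4KB figure is the popular block size and the 64KB figure is the maximum GDS block size assumed for the "largest possible" cases:

```python
# Maximum database file sizes implied by the block-count limits.
BLOCKS_V5 = 128 * 2 ** 20   # files created by V5 mupip create (128M blocks)
BLOCKS_V4 = 64 * 2 ** 20    # V4 files upgraded to V5 (64M blocks)

GB, TB = 2 ** 30, 2 ** 40
print(BLOCKS_V5 * 4096 // GB)    # 512 GB with 4KB blocks
print(BLOCKS_V4 * 4096 // GB)    # 256 GB with 4KB blocks
print(BLOCKS_V5 * 65536 // TB)   # 8 TB with 64KB blocks (assumed maximum)
print(BLOCKS_V4 * 65536 // TB)   # 4 TB with 64KB blocks (assumed maximum)
```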
Mupip integ keeps track of the number of blocks that do not have the current block version during a non-fast integ (default or full) and matches this value against the blocks to upgrade counter in the file-header. It issues an error if the values are unmatched and corrects the count in the file header if there are no other integrity errors.
The -FO[RMAT]=BINARY qualifier of mupip load is enhanced to read both V4 format and V5 format binary extracts.
The -version qualifier sets the desired database block format version in the file header for any new blocks being written. Mupip upgrade and reorg -upgrade set this field to V5 while reorg -downgrade sets it to V4. In order to set the version to V4, the current tn of the database must be within the range of a 32-bit maximum.
On V5:
MUPIP> set -version=V4 -file mumps.dat
Database file mumps.dat now has desired DB format V4
MUPIP> set -version=V5 -file mumps.dat
Database file mumps.dat now has desired DB format V5
Mupip journal -show=header displays the begin and end transaction numbers as 64-bit values.
On V4:
MUPIP> journal -show=header -forward mumps.mjl
Begin Transaction 308 [0x00000134]
End Transaction 308 [0x00000134]

On V5:
MUPIP> journal -show=header -forward mumps.mjl
Begin Transaction 1 [0x0000000000000001]
End Transaction 1001 [0x00000000000003E9]
Transaction numbers within a journal file extract are now 64-bit integers with a range from 0 to 18,446,744,073,709,551,615.
This maximum "number" exceeds the precision that GT.M carries for a numeric value (18 digits). Use of this value in an arithmetic expression will result in loss of precision so it should only ever be treated as a string value in any M program designed to process mupip journal extracts. As a practical matter, it is unlikely that a database file will have a transaction number that exceeds 18 digits. |
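The digit counts behind this caution can be illustrated briefly. Python is used here only as an analogy for limited numeric precision (its floats, like GT.M's 18-digit numbers, cannot represent the full 64-bit range exactly); the M code itself is not shown:

```python
# The largest 64-bit transaction number has 20 decimal digits, which
# exceeds the 18 significant digits GT.M carries for numeric values.
max_tn = 2 ** 64 - 1          # 18446744073709551615
print(len(str(max_tn)))       # 20 digits

# Analogous precision loss with a limited-precision numeric type:
# the value survives as a string but not as a number.
assert str(max_tn) == "18446744073709551615"   # exact as a string
assert float(max_tn) != max_tn                 # inexact as a number
```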
Mupip upgrade/downgrade upgrades or downgrades the file-header. File-header fields like current transaction, maximum tn and others that contain transaction numbers will have their size increased from 4 bytes to 8 bytes on upgrade, and correspondingly reduced on downgrade.
UPGRADE
The upgrade command resets the various trace counters and changes the database format to V5. This change does not upgrade the individual database blocks but sets the database format flag to V5. It also initializes a counter of the current blocks that are still in V4 format. This counter is decremented each time an existing V4 format block is converted to V5 format. When the counter is 0, the entire database has been converted.
MUPIP> upgrade mumps.dat
You must have a backup before you proceed!!
An abnormal termination will damage the database during the operation !!
Are you ready to continue the operation [y/n] ? y
%GTM-I-TEXT, Mupip upgrade started
%GTM-I-MUINFOUINT4, Old file header size : 24576 [0x00006000]
%GTM-I-MUINFOUINT8, Old file length : 2180608 [0x0000000000214600]
%GTM-I-MUINFOUINT4, Old file start_vbn : 49 [0x00000031]
%GTM-I-MUINFOUINT4, Old file gds blk_size : 1024 [0x00000400]
%GTM-I-MUINFOUINT4, Old file total_blks : 2105 [0x00000839]
%GTM-S-MUPGRDSUCC, Database file mumps.dat successfully upgraded to GT.M V5 Linux x86
DOWNGRADE
The downgrade changes the file header to V4 format. It sets the transaction number below the maximum 32-bit TN value and removes the database certification.
A database that was created with V5.0-000 or which has standard null collation set cannot be downgraded to V4. A mupip extract in ZWR format on V5 followed by a mupip load on V4 is required to effect the downgrade.
MUPIP> downgrade mumps.dat
You must have a backup before you proceed!!
An abnormal termination will damage the database during the operation !!
Are you ready to continue the operation [y/n] ? y
%GTM-I-TEXT, Mupip downgrade started
%GTM-I-MUINFOUINT4, Old file header size : 8192 [0x00002000]
%GTM-I-MUINFOUINT8, Old file length : 2180608 [0x0000000000214600]
%GTM-I-MUINFOUINT4, Old file start_vbn : 49 [0x00000031]
%GTM-I-MUINFOUINT4, Old file gds blk_size : 1024 [0x00000400]
%GTM-I-MUINFOUINT4, Old file total_blks : 2105 [0x00000839]
%GTM-S-MUPGRDSUCC, Database file mumps.dat successfully downgraded to GT.M V4
REORG -upgrade
Mupip reorg -upgrade upgrades the GDS blocks that are still in V4 format, after completion of mupip upgrade. This variant of the mupip reorg command runs through the entire database (or until blocks-to-convert = 0) checking the block format of each block. If the block format is V4, it is updated to V5 format and rewritten.
A block is considered to be too long to be upgraded from V4 format to V5 format if the records occupy more than blocksize-16 bytes. The presence of even one such block will prevent a database from being upgraded.
After a database has been certified, such "too long" blocks can occur if the Reserved Bytes field is reduced with a dse change -fileheader command, or if a mupip journal -recover or mupip journal -rollback command changes the state of the database to before the dbcertify certify operation.
This condition can be detected by mupip reorg -upgrade or during normal V5.0-000 operation when blocks are upgraded from V4 to V5 format. The only way to recover from this condition is to downgrade the database to V4 format and re-run both phases of dbcertify.
This condition can be avoided by not changing Reserved Bytes between the dbcertify steps and the mupip upgrade and, in the event a database has a mupip journal -recover or mupip journal -rollback performed on it, by repeating the dbcertify steps before the mupip upgrade.
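The block-header arithmetic behind these constraints can be stated directly, using the figures from this bulletin (16-byte V5 header vs. 8-byte UNIX / 7-byte OpenVMS V4 headers). The helper function is an illustrative sketch, not a GT.M API:

```python
# Why Reserved Bytes must grow by 8 (UNIX) or 9 (OpenVMS), and when a
# V4 block is "too long" to upgrade. Constants from this bulletin.
V5_BLOCK_HEADER = 16
V4_BLOCK_HEADER_UNIX = 8
V4_BLOCK_HEADER_VMS = 7

print(V5_BLOCK_HEADER - V4_BLOCK_HEADER_UNIX)  # 8 extra bytes on UNIX
print(V5_BLOCK_HEADER - V4_BLOCK_HEADER_VMS)   # 9 extra bytes on OpenVMS

def upgradable(record_bytes, block_size):
    """A V4 block can be upgraded in place only if its records occupy
    no more than block_size - 16 bytes (hypothetical helper)."""
    return record_bytes <= block_size - V5_BLOCK_HEADER
```

For example, with a 1024-byte block size, a block whose records occupy 1008 bytes or fewer can be upgraded; one byte more and it is "too long".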
-nosafejnl
If before image journaling is active, mupip reorg -upgrade will generate before image records for these block changes even though there is no change to the data in the block. This is to ensure that backwards recovery can recover the database correctly, in the event of a system crash or power outage. In the event that a system has a battery-backed IO subsystem, or a SAN, it is unlikely that there will be incomplete writes to the journal files. As long as any pending write to the journal file is completed in the event of a system crash or power outage, the before image records are not required to recover the database. In the event your hardware guarantees that there will not be an incomplete IO write operation, you can reduce IO load on your system by suppressing the generation of these before images with the use of the -nosafejnl option. If your hardware does not provide such a guarantee, then Fidelity strongly recommends the use of the default behavior, which is to generate the before image records.
MUPIP> reorg -upgrade -reg DEFAULT
Region DEFAULT : MUPIP REORG UPGRADE started
Region DEFAULT : Desired DB Format remains at V5 after MUPIP REORG UPGRADE
Region DEFAULT : Started processing from block number [0x00000000]
Region DEFAULT : Stopped processing at block number [0x00000800]
Region DEFAULT : Statistics : Blocks Read From Disk (Bitmap) : 0x00000005
Region DEFAULT : Statistics : Blocks Skipped (Free/Recycled) : 0x00000000
Region DEFAULT : Statistics : Blocks Read From Disk (Non-Bitmap) : 0x000007FC
Region DEFAULT : Statistics : Blocks Skipped (new fmt in disk) : 0x00000017
Region DEFAULT : Statistics : Blocks Skipped (new fmt in cache) : 0x00000000
Region DEFAULT : Statistics : Blocks Converted (Bitmap) : 0x00000003
Region DEFAULT : Statistics : Blocks Converted (Non-Bitmap) : 0x000007E6
Region DEFAULT : Total Blocks = [0x00000839] : Free Blocks = [0x00000012] : Blocks to upgrade = [0x00000000]
Region DEFAULT : MUPIP REORG UPGRADE finished
REORG -downgrade
A reorg -downgrade changes the database blocks to V4 format. The transaction number fields are reduced to 32 bits. If the current transaction number is too large to fit in 32 bits, a mupip integ -tn_reset must be performed before the mupip reorg -downgrade. The blks_to_upgrd counter increases as blocks are converted, signifying a database being downgraded.
MUPIP> reorg -downgrade -reg DEFAULT
Region DEFAULT : MUPIP REORG DOWNGRADE started
Region DEFAULT : Desired DB Format set to V4 by MUPIP REORG DOWNGRADE
Region DEFAULT : Started processing from block number [0x00000000]
Region DEFAULT : Stopped processing at block number [0x00000839]
Region DEFAULT : Statistics : Blocks Read From Disk (Bitmap) : 0x00000005
Region DEFAULT : Statistics : Blocks Skipped (Free/Recycled) : 0x00000012
Region DEFAULT : Statistics : Blocks Read From Disk (Non-Bitmap) : 0x00000822
Region DEFAULT : Statistics : Blocks Skipped (new fmt in disk) : 0x00000000
Region DEFAULT : Statistics : Blocks Skipped (new fmt in cache) : 0x00000000
Region DEFAULT : Statistics : Blocks Converted (Bitmap) : 0x00000005
Region DEFAULT : Statistics : Blocks Converted (Non-Bitmap) : 0x00000822
Region DEFAULT : Total Blocks = [0x00000839] : Free Blocks = [0x00000012] : Blocks to upgrade = [0x00000827]
Region DEFAULT : MUPIP REORG DOWNGRADE finished
All messages that include a transaction number will now output the numbers as 64-bit values. These are BOVTNGTEOVTN, DBBMLCORRUPT, DBFILEXT, DBTN, DBWCVERIFYEND, DBWCVERIFYSTART, DLCKAVOIDANCE, DUPTN, NLDBTNNOMATCH, JNLTNOUTOFSEQ, JNLTPNEST, MUJNINFO, NOTREPLICATED, TPRESTART, TRNARNDTNHI, and WCBLOCKED.
DBBTUFIXED |
The blocks-to-upgrade file-header field has been changed to the correct value |
Severity: |
Info |
MUPIP Informational: |
MUPIP INTEG has corrected the blocks-to-upgrade field. |
Action: |
Report this to the group responsible for database integrity at your operation. |
DBBTUWRNG |
The blocks-to-upgrade file-header field is incorrect. Expected xxxx, found yyyy |
Severity: |
Warning |
MUPIP INTEG Error: |
The "Blocks to Upgrade" counter was found to be incorrect by MUPIP INTEG (this is only checked for non-FAST integs). |
Action: |
If there are no other integrity errors, MUPIP INTEG will repair the counter. If there are other integrity errors, fix those errors first, then rerun MUPIP INTEG which will repair the counter if it is still found to be in error. Although this error is not indicative of any specific kind of database damage it does represent an out-of-design condition (except following a system crash in which before image journaling was not in use) that GT.M Support would like to know about. |
DBCBADFILE |
Source file xxx does not appear to have been generated by DBCERTIFY SCAN - rerun SCAN or specify correct file. |
Severity: |
Error |
DBCERTIFY/V5CBSU Error: |
V5CBSU and DBCERTIFY CERTIFY require the output file from DBCERTIFY SCAN. The file that was specified is not in the correct format. |
Action: |
Specify the file created by DBCERTIFY SCAN. Rerun DBCERTIFY SCAN if needed. |
DBCCMDFAIL |
Executed command failed with return code xxxx yyyy which executed yyyy yyyy |
Severity: |
Error |
DBCERTIFY Error: |
During processing, DBCERTIFY attempts to execute certain DSE and/or MUPIP commands in temporary command scripts that DBCERTIFY creates. The specified command failed to execute. |
Action: |
The action to take depends on the code returned by the attempt and on any associated messages created on either the console or the operator log. Common causes are that $gtm_dist (UNIX) or GTM$DIST (OpenVMS) does not properly point to the current GT.M V4 version, or that DBCERTIFY has no access, or access to the wrong global directory, for the commands it is executing. |
DBCDBCERTIFIED |
Database xxx has been certified for use with xxxx |
Severity: |
Info |
DBCERTIFY Message: |
DBCERTIFY CERTIFY has successfully completed and marked the database as certified for use by the specified GT.M version. |
Action: |
Either keep running the GT.M V4 version or proceed immediately to GT.M V5 MUPIP UPGRADE, at the user's discretion. |
DBCINTEGERR |
Encountered integrity error in database xxxx |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY discovered what appears to be an integrity error while processing the specified database. This error is accompanied by a secondary message explaining the error. |
Action: |
Run a MUPIP INTEG (not FAST integ) on the database in question; fix damage, then re-run the phase reporting the error. If the integrity error persists, contact GT.M Support. |
DBCKILLIP |
Cannot proceed with kill-in-progress indicator set for database xxx |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY discovered that the kill in progress indicator was on for the specified database. DBCERTIFY will not process a database with this indicator on. |
Action: |
Run a MUPIP INTEG (FAST integ is OK) on the database in question; correct errors, then re-run the phase reporting the error. If the error persists, contact GT.M Support. |
DBCMODBLK2BIG |
Block 0xaaa has been modified since DBCERTIFY SCAN but is still too large or has an earlier TN than in DBCERTIFY SCAN - Rerun scan |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY reports this error when the block it is processing has a different TN than it did in the scan phase yet the block is still too large. |
Action: |
This condition indicates that something has been done to the database since the scan phase was run - either it was restored from an earlier backup or the reserved bytes value was (even temporarily) reduced. DBCERTIFY SCAN must be rerun. |
DBCNOEXTND |
Unable to extend database xxx |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY attempted to use MUPIP EXTEND to extend the database but the attempt failed. |
Action: |
Examine the accompanying messages from the MUPIP EXTEND attempt to see why the extend failed. Some common causes for this are that $gtm_dist on UNIX or GTM$DIST on OpenVMS did not properly point to the currently installed V4 distribution, or there was insufficient disk space to perform the expansion. |
DBCNOFINISH |
DBCERTIFY unable to finish all requested actions. |
Severity: |
Error |
DBCERTIFY Error: |
This indicates DBCERTIFY encountered an error, which prevented the requested action from completing. The action has partially completed. |
Action: |
Review the accompanying message(s) for additional information to identify the cause. |
DBCNOTSAMEDB |
Database has been moved or restored since DBCERTIFY SCAN - Rerun scan |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY has noted that the unique database identifiers for the database have changed since DBCERTIFY SCAN was run. |
Action: |
The database must not have been moved, restored, or recovered since DBCERTIFY SCAN was run. DBCERTIFY SCAN must be rerun. |
DBCREC2BIG |
Record with key xxx of length yyy in block 0xaaa is greater than the maximum length yyy in database xxx |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY has identified a record with the given key in the given block with a length that exceeds the maximum length allowed in the given database. |
Action: |
This is typically due to the user reducing the maximum record length to meet the DBCERTIFY requirements without verifying that no records exist that exceed that length. The solution is either to delete or otherwise restructure the record, or to MUPIP extract/load into a database with a larger blocksize. |
DBCSCNNOTCMPLT |
Specified DBCERTIFY SCAN output file is not complete - Rerun scan |
Severity: |
Error |
DBCERTIFY Error: |
DBCERTIFY CERTIFY has noted that the header of the scan phase output is not filled in indicating that the scan phase did not complete normally. |
Action: |
Rerun DBCERTIFY SCAN to produce a complete output file for the certify phase to process. |
DBDSRDFMTCHNG |
Database file xxx, Desired DB Format set to yyy by zzz with pid ppp [0xppp] at transaction number [0xttt]. |
Severity: |
Info |
MUPIP Informational: |
The desired database block format has been changed to version yyy for database file xxx by the zzz command with process number ppp at transaction number ttt. |
DBMAXREC2BIG |
Maximum record size (xxx) is too large for this block size (yyy) - Maximum is zzz |
Severity: |
Error |
DBCERTIFY/MUPIP Error: |
DBCERTIFY and MUPIP UPGRADE report this error when the maximum record size is too close to the database blocksize and does not allow room for the expanded V5 block header. |
Action: |
Reduce the maximum record size, or mupip extract/load into a database with a larger blocksize. Note that if the maximum record size is reduced with DSE, records that exceed the reduced size may still exist in the database, which is then an integrity error. DBCERTIFY SCAN will find these blocks and report on them if they exist. |
DBMINRESBYTES |
Minimum RESERVED BYTES value required for certification/upgrade is xxx - Currently is yyy |
Severity: |
Error |
DBCERTIFY/MUPIP Error: |
DBCERTIFY and MUPIP UPGRADE report this error when the reserved bytes field of the database file header (as shown by DSE DUMP -FILEHEADER) is not at a sufficient value for the GT.M V5 upgrade. |
Action: |
Increase the reserved bytes value with either MUPIP or DSE so that the value is at least 8 bytes for UNIX and 9 bytes for OpenVMS. Note that the reserved bytes value is reduced by the above amounts by MUPIP UPGRADE. |
DBVERPERFWARN1 |
Performance warning: Database aaaa is running in compatibility mode that degrades performance. Run MUPIP REORG UPGRADE for best overall performance. |
Severity: |
Warning |
Run Time Warning: |
This is a warning that the database is currently in compatibility (downgrade) mode. This mode causes all modified GDS blocks to be reformatted (to the downgraded database format) before they are flushed to the database file on disk. This is a very large performance hit. |
Action: |
As the message indicates, run MUPIP REORG UPGRADE as soon as possible to move away from compatibility mode. This command can be run without taking the database offline. Once that completes successfully, the database is fully upgraded and there is no reformatting overhead anymore while flushing modified blocks to disk. |
DBVERPERFWARN2 |
Performance warning: Database aaaa is not fully upgraded. Run MUPIP REORG UPGRADE for best overall performance. |
Severity: |
Warning |
Run Time Warning: |
This is a performance warning message that indicates the database is not yet fully upgraded i.e. there are still blocks in the database file that need to be upgraded. Staying in this mode causes some inefficiencies which include (but are not limited to) reading blocks from disk. |
Action: |
As the message indicates, run MUPIP REORG UPGRADE as soon as possible. This command can be run without taking the database offline. Once it completes successfully, the database file is fully upgraded. |
DYNUPGRDFAIL |
Unable to dynamically upgrade block 0xaaa in database yyy due to lack of free space in block |
Severity: |
Error |
Runtime Error: |
There was not enough free space in the block to convert it in place to the current format during normal database access. This indicates that the DBCERTIFY database certification procedure was not properly carried out. |
Action: |
Either mark the block free (making appropriate index changes) or downgrade the database and re-run DBCERTIFY (both phases). |
MMNODYNDWNGRD
Unable to use dynamic downgrade with MM access method for region xxx. Use BG access method for downgrade
Severity:
Error
MUPIP/Runtime Error:
An attempt was made to use MM mode on a database that has not completed being downgraded. MM mode is only supported on fully downgraded or fully upgraded databases.
Action:
Use MUPIP SET FILE or MUPIP SET REGION with the ACCESS_METHOD parameter to set the access mode to BG. Then complete the file downgrade using MUPIP REORG DOWNGRADE or file upgrade using MUPIP REORG UPGRADE. Finally, set the access mode back to MM using the MUPIP SET FILE or MUPIP SET REGION command again.
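The sequence above can be sketched as follows for the downgrade case; the region name DEFAULT is an assumption.

```shell
# Sketch of the BG -> downgrade -> MM sequence (region name is hypothetical).
mupip set -access_method=BG -region DEFAULT   # switch the region to BG
mupip reorg -downgrade -region DEFAULT        # complete the file downgrade
mupip set -access_method=MM -region DEFAULT   # restore the MM access method
```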
MMNODYNUPGRD
Unable to use MM access method for region yyy until all database blocks are upgraded
Severity:
Error
MUPIP/Runtime Error:
An attempt was made to use MM mode on a database that has not completed being upgraded. MM mode is only supported on fully upgraded databases.
Action:
Use MUPIP SET FILE or MUPIP SET REGION with the ACCESS_METHOD parameter to set the access mode to BG. Then complete the file upgrade using MUPIP REORG UPGRADE. Finally, set the access mode back to MM using the MUPIP SET FILE or MUPIP SET REGION command again.
MUDWNGRDNOTPOS
Start VBN value is [xxx] while downgraded GT.M version can support only [yyy]. Downgrade not possible
Severity:
Error
MUPIP Error:
Older versions of GT.M require the first GDS block to be at Virtual Block Number yyy, but it is at VBN xxx. This likely means the file was originally created with a newer version of GT.M and thus cannot be downgraded.
Action:
To use the data with an older version of GT.M, extract it in ZWR format with the current version and load it into a database created with the older version.
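The extract-and-load path above can be sketched as follows; the file name olddata.zwr is an assumption.

```shell
# Sketch: carry the data to an older GT.M version via a ZWR extract.
# With the current (newer) GT.M version:
mupip extract -format=ZWR olddata.zwr

# Then, with the older GT.M version and a freshly created database:
mupip load -format=ZWR olddata.zwr
```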
MUDWNGRDNRDY
Database xxx is not ready to downgrade - still yyy database blocks to downgrade
Severity:
Error
MUPIP Error:
A MUPIP DOWNGRADE was attempted when the file-header blks_to_upgrd counter was not equal to the database used block count. This means that not all database blocks have been converted to V4 format.
Action:
Before the database file-header can be downgraded, all of the blocks in the database must be downgraded to V4 format. This is normally accomplished with MUPIP REORG DOWNGRADE. If this fails to set the counter correctly, run MUPIP INTEG (not FAST) on the region; it will compute and set the correct counter.
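The recovery steps above can be sketched as follows; the region name DEFAULT is an assumption.

```shell
# Sketch (region name is hypothetical): downgrade the remaining blocks, then,
# if the blks_to_upgrd counter is still wrong, let a full (non-FAST) INTEG
# recompute and set it.
mupip reorg -downgrade -region DEFAULT
mupip integ -region DEFAULT          # note: without -fast
```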
MUDWNGRDTN
Transaction number 0xaaa in database xxx is too big for MUPIP [REORG] DOWNGRADE. Renew database with MUPIP INTEG TN_RESET
Severity:
Error
MUPIP Error:
A MUPIP DOWNGRADE or MUPIP REORG DOWNGRADE was attempted when the database transaction number was greater than 4,026,531,839 (the TN_RESET warning limit for V4 databases).
Action:
Before the database can be downgraded, the transaction number must be reset with the MUPIP INTEG TN_RESET command. This requires standalone access to the database and may take a significant amount of time.
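The reset described above might look like this sketch; the database file name mumps.dat is an assumption, and TN_RESET requires standalone access.

```shell
# Sketch (file name is hypothetical): ensure no processes are attached,
# then reset the transaction numbers. This may take a significant time.
mupip rundown -file mumps.dat
mupip integ -tn_reset -file mumps.dat
```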
MUPGRDSUCC
Database file xxx successfully yyy to zzz
Severity:
Success
MUPIP Informational:
The database file header for xxx has been upgraded or downgraded to the version zzz format.
MUREUPDWNGRDEND
Region xxxx : MUPIP REORG UPGRADE/DOWNGRADE finished by pid aaaa [0xbbbb] at transaction number [0xcccc].
Severity:
Info
MUPIP Informational:
This is an informational message printed by MUPIP REORG UPGRADE or DOWNGRADE when the reorg has successfully completed its upgrade or downgrade, respectively.
Action:
None necessary.
MUUPGRDNRDY
Database xxx has not been certified as being ready to upgrade to yyy format
Severity:
Error
MUPIP Error:
The named database file is in an older format than is in use by this GT.M version and has not been certified as ready for use by this GT.M version.
Action:
Run DBCERTIFY to certify the database as being ready for upgrade.
TNTOOLARGE
Database file xxx has reached the transaction number limit (0xaaa). Renew database with MUPIP INTEG TN_RESET
Severity:
Error
Run Time Information:
This indicates that GT.M detected that the transaction numbers in the named database have reached the maximum number. There are 0xFFFFFFFF ([2**32 - 1] or 4,294,967,295 decimal) possible transaction numbers for V4 databases or 0xFFFFFFFFFFFFFFFF ([2**64 - 1] or 18,446,744,073,709,551,615 decimal) for V5 databases. Note that the actual maximum TN is less than this theoretical limit; DSE DUMP FILEHEADER shows the limit. The actual limit reflects some overhead used, for example, during a TN_RESET operation.
Action:
Use MUPIP INTEG with the qualifier TN_RESET to reset the transaction numbers in the database. If the database is in the V4 format, consider converting it to the V5 format. The database cannot otherwise be used until the condition is removed, either by a TN_RESET or, for a V4 database, by changing the output mode to V5 with MUPIP SET VERSION.
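The limits quoted above can be checked with a little arithmetic; note also that the 4,026,531,839 TN_RESET warning threshold cited under MUDWNGRDTN is simply 0xEFFFFFFF.

```python
# Transaction-number limits quoted in this section.
V4_MAX_TN = 0xFFFFFFFF                  # 2**32 - 1
V5_MAX_TN = 0xFFFFFFFFFFFFFFFF          # 2**64 - 1
V4_TN_RESET_WARN = 4_026_531_839        # threshold cited for MUPIP [REORG] DOWNGRADE

print(V4_MAX_TN)               # 4294967295
print(V5_MAX_TN)               # 18446744073709551615
print(hex(V4_TN_RESET_WARN))   # 0xefffffff
```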
TNWARN
Database file xxx has 0xaaa more transactions to go before reaching the transaction number limit (0xbbb). Renew database with MUPIP INTEG TN_RESET
Severity:
Warning
Run Time Information:
This indicates that GT.M detected that the transaction numbers in the named database are approaching the maximum number. There are 0xFFFFFFFF ([2**32 - 1] or 4,294,967,295 decimal) possible transaction numbers for V4 databases or 0xFFFFFFFFFFFFFFFF ([2**64 - 1] or 18,446,744,073,709,551,615 decimal) for V5 databases. This message is sent to the operator log periodically at decreasing intervals as the transaction number approaches the maximum. Note that the actual maximum TN is less than this theoretical limit; DSE DUMP FILEHEADER shows the limit. The actual limit reflects some overhead used, for example, during a TN_RESET operation.
Action:
Use MUPIP INTEG with the qualifier TN_RESET to reset the transaction numbers in the database. If the database is in the V4 format, consider converting it to the V5 format. |
Command Syntax: UNIX syntax (i.e., lowercase text and "-" for flags/qualifiers) is used throughout this document. OpenVMS accepts both lowercase and uppercase text; flags/qualifiers should be preceded with "/".
Reference Number: The reference numbers used to track software enhancements and customer support requests appear in parentheses ( ).
Platform Identifier: If a new feature or software enhancement does not apply to all platforms, the relevant platform appears in brackets [ ].
[1] All GT.M releases prior to V5.0-000, including V5 field test releases such as V5.0-FT01, have V4 database formats. V5 field test releases, however, have a V5 global directory.
[2] Since the largest permissible database block size is 65,024 bytes, the largest record that can be stored in a V4 database is 65,016 bytes, and the largest record that can be stored in a V5 database is 65,008 bytes, which leaves open the theoretical possibility of a database that cannot be upgraded to V5. Since Fidelity is not aware of any customer who uses a block size of 65,024 bytes, this remains just a theoretical issue.
[3] If you reduce the maximum record size in the database file with DSE, remember to make a change to the global directory with GDE so that any new database files have the new value.
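The record-size figures in note [2] follow directly from the block-header sizes stated earlier (8 bytes for a UNIX V4 block header, 16 bytes for a V5 block header):

```python
MAX_BLOCK_SIZE = 65024   # largest permissible GT.M block size, in bytes
V4_HEADER = 8            # V4 block header on UNIX
V5_HEADER = 16           # V5 block header

print(MAX_BLOCK_SIZE - V4_HEADER)   # 65016, largest record in a V4 database
print(MAX_BLOCK_SIZE - V5_HEADER)   # 65008, largest record in a V5 database
```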