Current version: MT 3.1 build 250714_2101
The RED Migration Tooling allows you to migrate metadata repositories from WhereScape RED 8.6 and 9.0 to WhereScape RED 10.4+. The following sections provide information on how to install and use the RED Migration Tooling.
Prerequisites
License
The RED Migration Tooling requires a license with 'Custom Target' enabled, because the tooling uses the Custom Target database type when loading into the destination PostgreSQL RED metadata database. If you are on a traditional SQL Server, Oracle or Teradata target, your license may need to be temporarily upgraded by adding a Custom Target to support the migration.
| License Fields | Values | Migration Requirements |
|---|---|---|
| Licensed Metadata Database Type(s) | SQL Server, Oracle, Teradata | One or more of SQL Server, Oracle or Teradata |
| Licensed Target Database Type(s) | SQL Server, Oracle, Teradata, Custom | 'Custom' at a minimum |
| Licensed Custom Target Database Type | Any | Any Custom Target Type* |
* Custom Target Type
Licensed Custom Target Database Type is the label your license gives to your Custom Target; it will be used for PostgreSQL targets during migration. This is just a display label for the underlying Custom target type. The important differentiator is that it is not one of the in-built target types (SQL Server, Oracle or Teradata) and can therefore be used for any other template-enabled target platform.
Source Metadata
Source Metadata:
- A RED Metadata Repository on SQL Server, Oracle or Teradata which has a RED version of 8.6 or higher
- An ODBC DSN and a user with at least select permissions to the source metadata database.
Data Warehouse:
- Any data warehouse platform where the RED 10 version of the WhereScape Target Enablement Pack is available*
* EPs released prior to the Migration Tooling require template updates
If a compatible EP version is not yet available, the execute script function in both the PowerShell and Python templates will need updating to support the legacy script output protocols from RED 8/9. This is explained in the post migration section.
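As background, the legacy RED 8/9 protocol conventionally reports a status code on the first line of script output and a message on the second. A minimal sketch of how an updated execute-script function might interpret such output follows; the exact codes and the fallback value used for malformed output are assumptions, so verify against your EP's documentation:

```python
def parse_legacy_output(output: str):
    """Split legacy-style script output into (status, message).

    Assumes the first line carries a numeric status code and the
    second a human-readable message. Malformed output is treated as
    a failure (-2) here, which is an assumption, not the EP spec.
    """
    lines = output.strip().splitlines()
    status = int(lines[0]) if lines and lines[0].lstrip("-").isdigit() else -2
    message = lines[1] if len(lines) > 1 else ""
    return status, message

status, message = parse_legacy_output("1\nLoad completed. 42 rows processed.")
```

An updated template function would branch on the returned status to decide whether the scheduled task succeeded or failed.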
Destination Metadata
Destination Metadata:
- RED 10.4.0.3 or higher
- An empty PostgreSQL database to become your migrated metadata repository.
- An ODBC DSN and a user with full permissions on the database (see setting up database and users in PostgreSQL for RED).
- Compatible PostgreSQL command line tools.
Data Warehouse:
- A downloaded and unzipped RED 10 WhereScape Target Enablement Pack for your data warehouse platform.
- All prerequisites as outlined in your WhereScape Target Enablement Pack's install guide.
- All Data Warehouse Source ODBC Drivers, DSNs and any required platform-specific native tooling are installed and configured (if you are running RED 10 side by side with your Source RED 8.6+ repository then these drivers, DSNs and tooling should already exist).
Migration Tooling
Migration Tooling Metadata:
- An empty PostgreSQL database to house your Migration Tooling; this database can be removed once migration is complete.
- An ODBC DSN and a user with full permissions on the database (see setting up database and users in PostgreSQL for RED).
- Compatible PostgreSQL command line tools.
Tooling:
- A downloaded and unzipped Migration Tooling Enablement Pack.
- RED 10.4.0.3 or higher installed.
- A valid RED license installed with a Custom Target enabled.
How the Migration Tooling Works
The RED Migration Tooling is provided as an Enablement Pack which is installed, using the RED Setup Wizard, to a dedicated PostgreSQL database. Once installed you will have a RED metadata repository plus the Migration Tooling Enablement Pack, which provides a set of scripts and jobs to transfer RED metadata from a SQL Server, Oracle or Teradata Source to a PostgreSQL Destination and then reconfigure the Destination to suit RED 10 and the Azkaban Scheduler.
General Migration Process
Wherever possible, the RED Migration Tooling retains the existing Scripts and Procedures as is rather than regenerating them in RED 10.
Objects associated with Script-based or Procedure-based processing in the Source Metadata Repository are not regenerated or recompiled in the Destination Metadata. Instead, it is assumed that the RED 10 Target Enablement Pack provides a suitable Action Processing Script template that generates appropriate code to handle the legacy script output protocols and the parameters used in procedures.
In RED 10, all Scheduling Actions for an Object are performed through an Action Processing Script which is built for, and associated with, each table. The RED Migration Tooling generates this script for each object that requires one, or assigns a generic script where appropriate. This generation process can take minutes to hours depending on the size of the metadata repository, machine resources and database performance.
Migrated Object Types
Not all object types from earlier versions of RED are available in RED 10, so it is important to understand what will and won't be migrated. Refer to the following table for more details:
| Object Type(s) | Migrated | Post Migration Notes |
|---|---|---|
| Connections | Yes | MSAS connections should be manually removed after migration. |
| MSAS, Cubes, Cube Dims, Tabular Cubes | No | Analysis Services object types are not migrated since RED 10 does not support them yet. |
| Aggregate Dimension Join Table, Aggregate Fact Join Table, Aggregate Join Table, Fact Rollup, Fact Work Table, Permanent Stage, Model View, Fact View | Yes | These legacy object sub-types are migrated but assigned a new Custom Object Type in RED 10 of the same name. Objects of these types should be checked carefully in the Destination metadata. |
| All Other Object Types | Yes | All other object types not mentioned in the rows above are migrated as is. |
| Object Versions | No | Previous object versions are not migrated. |
| WhereScape Callable Procedures* | No | Since the inbuilt WhereScape Callable Routines are compiled on SQL Server, Oracle or Teradata they cannot be migrated.* |
| Non-Script-Based Loads | Yes | Non-script-based loads (ODBC, DB Link, SSIS and some File Load types) are migrated; however, these load types will require a load script to be generated and therefore need thorough testing post migration. Any Load which was already script-based should function as is, provided the appropriate table-level Action Processing Script has been generated. |
| Non-Script-Based Exports | Yes | Non-script-based Exports will require an Export script to be generated and therefore need thorough testing post migration. Any Export which was already script-based should function as is, provided the appropriate Export-level Action Processing Script has been generated. |
| Parameters | Yes | Parameters are migrated; however, if you were on traditional RED 8/9 SQL Server, Oracle or Teradata targets, check that your RED 10 EP has a solution to synchronize parameters between the old and new repositories. Additionally, review stand-alone Scripts and Procedures that used parameters. |
* WS Callable Routines
Any Procedures/Blocks or Scripts which called these callable routines will continue to run, but the outcomes will be applied to the original Source Metadata Repository and, depending on the procedure being called, may have no effect. Only the WhereScape Parameter Functions will still be of use as is post migration.
Most use cases outside of Parameter read/writes will involve a customized script or procedure; these should be reviewed to find the RED 10 equivalent and adjusted after migration, including any Jobs they were part of.
Note: Target Enablement Packs will handle legacy procedures that include the WhereScape Parameter read/write functions by synchronizing the dss_parameter table in the Target with the same table in the PostgreSQL metadata repository. In this way most procedures will continue to function as is after migration.
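The synchronization described in the note is essentially an upsert of parameter rows from one database into the other. The sketch below illustrates the idea using sqlite3 in-memory databases as stand-ins for the two repositories; the dss_parameter column names follow RED convention but should be treated as illustrative, and the real EP logic will differ:

```python
import sqlite3

def sync_parameters(src, dst):
    """Upsert every parameter row from src into dst, keyed on name.

    Column names follow the dss_parameter convention; treat the
    schema and the sync direction as illustrative only.
    """
    for name, value in src.execute(
            "SELECT dss_parameter_name, dss_parameter_value FROM dss_parameter"):
        dst.execute(
            "INSERT INTO dss_parameter (dss_parameter_name, dss_parameter_value) "
            "VALUES (?, ?) "
            "ON CONFLICT(dss_parameter_name) "
            "DO UPDATE SET dss_parameter_value = excluded.dss_parameter_value",
            (name, value))
    dst.commit()

# In-memory databases stand in for the PostgreSQL metadata and the target.
meta = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (meta, target):
    db.execute("CREATE TABLE dss_parameter ("
               "dss_parameter_name TEXT PRIMARY KEY, "
               "dss_parameter_value TEXT)")
meta.execute("INSERT INTO dss_parameter VALUES ('LOAD_DATE', '2025-01-01')")
target.execute("INSERT INTO dss_parameter VALUES ('LOAD_DATE', '1900-01-01')")
sync_parameters(meta, target)
```

After the sync, the stale value in the target copy of dss_parameter has been replaced by the metadata repository's value.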
The Migration Tooling requires the following named connections in RED. You should set these up during the initial run of the RED Setup Wizard as outlined in the next section; they are listed here for clarity only.
| Connection Name | Type | Database Type | Target Storage Location | Notes |
|---|---|---|---|---|
| Target | Target | "Any Custom Type" | red | Refers to your Destination RED metadata database on PostgreSQL |
| Reports | Target | "Any Custom Type" | red | Refers to your Migration Tooling metadata database on PostgreSQL |
| Source | Source | SQL Server, Oracle or Teradata | n/a | Refers to your Source Metadata that will be migrated by the tooling |
Installing the Migration Tool
Check Prerequisites
Check that you have met the prerequisites to begin, here is a quick checklist:
- Destination database created on PostgreSQL with a working ODBC DSN configured for it.
- Migration Tooling database created on PostgreSQL with a working ODBC DSN configured for it.
Run the RED Setup Wizard
- Launch the RED Setup Wizard (RedSetupWizard.exe) from the RED installation directory
- Select Create a new repository.
- Configure the metadata database, this will be your Migration Tooling metadata.
- Select the directory that contains the unzipped RED Migration Tooling. Click Next.
- Review the components that will be installed. Click Next.
Create Target Connections
You must create two PostgreSQL connections with the following characteristics:
| Connection Name | Database Type | Target Storage Location | Notes |
|---|---|---|---|
| Target | Custom* | red | Refers to your Destination RED metadata database on PostgreSQL |
| Reports | Custom* | red | Refers to your Migration Tooling metadata database on PostgreSQL |
* Custom will be your licensed Custom Database Target type, which might have a label other than 'Custom' in the UI. For these two connections we cannot use the inbuilt SQL Server, Oracle or Teradata target types.
Adding the 'Target' Destination Metadata Connection
The connection named 'Target' will be your PostgreSQL connection to your database to house the migrated RED metadata repository.
- On the Connection Name field, enter 'Target' as the name.
- On the Data Source Name field, select the DSN for the destination metadata repository.
- On the Target Storage Locations field, enter 'red'.
- Complete the other fields with the appropriate data, then click Validate to check your configurations.
- Once you validate your configurations click Add.
- On the Add Targets screen you will see the connection you just added. Click Add another target to add the Reports connection.
Adding the 'Reports' Migration Tooling Metadata Connection
The connection named 'Reports' will be your PostgreSQL connection to your Migration Tooling metadata repository, which allows us to add targets to the tooling metadata database for reporting.
- On the Connection Name field, enter 'Reports' as the connection name.
- On the Data Source Name field, select the DSN for the Migration Tooling metadata repository.
- On the Target Storage Locations field, enter 'red' as the storage location.
- Complete the other fields with the appropriate data, then click Validate to check your configurations.
- Once you validate your configurations click Add.
- On the Add Targets screen you will see the two connections you just added. Click Next to continue and add the source connection.
Create the Source Metadata Connection
- On the Add ODBC Sources screen, configure a connection with 'Source' as the connection name and select the DSN relating to your existing RED 8.6 or 9.0 metadata repository.
- Click Validate to check your connection and then click Add.
- The Add ODBC Sources screen will show the connection you just added. Click Next to continue or click Add another source to add more sources.
Review and Finalize the Install
On the Summary screen, review that your configurations are correct. Click Previous to make changes or Install to continue.
Once the installation finishes, click Finish to close the installer and launch RED.
Review your login settings and click Connect.
Launch RED as Admin
The RED Setup Wizard runs with elevated privileges, so when RED is launched from the final page it also starts with the same elevation. If you start the RED Migration Tooling manually, run med.exe as Admin, as one of the scripts in the Migration Tooling relies on this elevation.
Migration Preparation
Migration Preparation Script
When WhereScape RED starts for the first time, after the installation steps described in the previous section, the script that prepares the Migration Tooling executes automatically.
The Migration Preparation Script will prompt for two items:
- Source Repository Database Type - either SQL Server, Oracle or Teradata
- Target Database Enablement Pack - this is the location of the unpacked RED 10 compatible Target Enablement Pack for your licensed target
If you get failures in the Reports pane after opening WhereScape RED, then one or more of the preparation steps in the host script named '1_prepare_migration' did not succeed. For troubleshooting, see the section which details each of the scripts: 1_prepare_migration.
Take note of the failure message, correct the issue if you can, then rerun the script. On subsequent runs you may see additional failures because an earlier run has already applied a change; in general, rerunning this script will not cause issues and such failures can be dismissed.
Manual Steps and checks after RED Migration Tooling starts
Check Connections
For each connection Target, Source and Reports:
- Open the connection and click the 'Derive' button to ensure the server and port fields are up to date.
- Browse the connection to ensure the credentials are working (note that the Target connection will not have any objects to display yet).
- If you are using a remote PostgreSQL instance, check that the extended property SERVERSIDE_COPY on your Target and Reports connections is set to FALSE; this is the new default in MT 3.1+.
Review Parameters
These parameters are added by the start-up script. You should not need to change anything here, but it is useful to know that these parameters drive many of the scripts executed during the migration process:
Setup the Migration Tooling Azkaban Scheduler on Windows
Windows Scheduler Installation
We'll need a Windows Scheduler installed to perform the migration tasks. Follow the Windows Scheduler Installation instructions to install a WhereScape RED Scheduler for the RED Migration Tooling metadata.
When asked for a Scheduler Metadata database use the RED Migration Tooling metadata database.
When asked for a RED Metadata database also use the RED Migration Tooling metadata database.
Remember your Profile Encryption Secret for later entry into the Scheduler Profile Maintenance wizard in the RED UI.
If you install the Migration Tooling Scheduler with a separate service user, you may need to run the script 'wsl_mt_initialize_scheduler_user' to accept the EULA for that user. Find this script under Host Scripts in RED and run it via the Scheduler.
Configure the Scheduler Credentials in RED
After installing the Scheduler, enter your scheduler credentials into the Configuration page of the Scheduler tab in RED, then save your Profile again to ensure your credentials are preserved between RED sessions.
Configure the Scheduler Profile
Before running any jobs, you must first set up the Scheduler Profile, which adds the encrypted connection credential rows for the connections in RED. This makes those credentials available to scheduled jobs. To do this, run the script 'wsl_scheduler_profile_maintenance' found under 'Host Scripts' in the object tree in RED.
Use the same Profile Encryption Secret which you entered during the Scheduler installation.
Running the Migration Jobs
Run the migration Jobs one at a time. Before running a job, check whether it requires other jobs to be run first.
The following sections describe the jobs and any requirements they may have.
1_Source_Reports
This job is optional and can be run at any time. It runs a set of queries against the source repository providing various object counts in the source. You can view the results by clicking Display Data on the View object in the UI. There is a corresponding Validation Report which compares the same report run against the destination repository; this can be populated by running the corresponding Load table after completing the migration.
2_Migrate_Current_Objects
This job must be run to migrate to RED 10. Depending on repository size and performance, this job typically finishes within 10 to 30 minutes. If there are any failures in Job 2, view the failure reason and restart the job at the point of failure directly from the Azkaban Scheduler Dashboard by rerunning the failed execution.
Job: '2_Migrate_Current_Objects' is intended for SQL and Teradata source repositories.
Job: '2_Migrate_Current_Objects_Oracle' is intended for Oracle source repositories only.
Ensure you only run one of these jobs, depending on your source metadata repository type.
3_Prepare_Target_Repository
Job 2 should be completed successfully before continuing with Job 3. If there are any failures in Job 3, you can complete the job manually from the RED UI by running the scripts in the order outlined in the Migration Scripts Explained section.
After Job 3 has completed, or you have run the scripts manually, please log in to the migrated Destination repository and allow the RED 10 Target Enablement Pack post install process to complete. This is also a good point to check the connections and save a RED Profile for your migrated Destination metadata repository.
EP Install Required
Before continuing to Job 4, please log in to the Destination Repository to allow the Target Enablement Pack to complete its configuration.
Additionally apply any template changes if required as described here: 'Review Action Script Templates'
4_Set_Storage_Templates
Job 4 applies the default templates which were set up by the RED 10 Target Enablement Pack, which is why it is important to have completed that install process by logging in to the Destination. This step can be re-run if it was completed too early, or the individual scripts can be run from the Migration Tooling RED UI.
5_Generate_Windows_Action_Scripts
This job generates Windows Action Scripts for all objects. It runs a single script that can also be run from the RED UI, see the script details for the scripts prefixed with 'c' in the following section. Running this script is optional.
6_Generate_Linux_Action_Scripts
This job generates Linux Action Scripts for all objects. It runs a single script that can also be run from the RED UI, see the script details for the scripts prefixed with 'c' in the following section. Running this script is optional.
7_Generate_Load_Scripts
Generates Load routines for Load objects without an associated script. It can also be run from the RED UI, before running this job see the script details for the scripts prefixed with 'c'.
Repeating or Restarting the Migration
To repeat the migration process a second time you do not need to reinstall the Migration Tooling, you can simply follow these steps:
- Drop and recreate the Destination PostgreSQL database.
- Run script '2_target_repo_creation' to recreate the Destination metadata repository.
- Run the jobs again in the order specified.
If you are also upgrading the tooling please follow the upgrade process in the release notes pertaining to your version.
Tooling Load tables should not be recreated
Since the tooling spans many supported versions of RED, the Load tables in the tooling may not have newer metadata columns for some tables. Therefore the only supported way to recreate the tooling's Load tables is to follow the steps above, so that the metadata creation process creates the correct metadata tables for your target RED version.
Migration Scripts Explained
These are the Migration Tooling Scripts, each script can be run from the RED UI or via the indicated Scheduled Job. If you choose to run these scripts manually, please follow the order carefully as listed here.
All the scripts, except for 1 and 2, can be rerun at any time if required to address failures or if job 2_Migrate_Current_Objects has been completely rerun.
Auto-run at startup
1_prepare_migration
- Sets up required parameters for the tooling.
- Deploys RED Applications containing the RED Objects and Jobs for the Migration Tooling
If you have not set up the required connections, the Results pane will display a failure message. Expand the Connections node in the left tree and add or amend connections as required before rerunning the script.
Rerun this script after alterations
2_target_repo_creation
- Creates the RED metadata in the Destination PostgreSQL database.
Run only after Job '2_Migrate_Current_Objects' scripts
The following 'b' scripts are all included in job 3_Prepare_Target_Repository
b1_upgrade_obj_subtypes
- Updates the migrated objects subtype keys to be compatible with RED 10.
b2_job_metadata_updates
- Updates schedules and job states to be compatible with RED 10.
- Caps Job max thread count setting to 10 as a starting point for post migration scheduler tuning.
b3_storage_metadata_updates
- Renames the existing 'Repository' connection to 'Old RED Repository Connection'.
- Creates a Target Location to represent legacy 'local' storage of migrated objects, named 'WSL_MIGRATED_LOCAL_STORAGE' on connection 'Old RED Repository Connection', then associates local storage objects to this target. The target name can be changed as required after the migration.
- Creates a new connection named 'Repository' for the new PostgreSQL Metadata connection.
b4_reset_identity_sequences
- Source metadata keys have been migrated to PostgreSQL as is, so the sequences in PostgreSQL need updating to reflect the migrated data. This script performs those updates.
- Note: if you have to rerun any of the load objects within Job '2_Migrate_Current_Objects' then this script should be rerun to reflect the new data.
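The reset itself boils down to pointing each serial sequence at the highest key currently in its table. A sketch of generating the required PostgreSQL statements follows; the 'my_meta_table'/'my_key' names are placeholders, not the actual RED metadata schema:

```python
def sequence_reset_sql(table: str, key_column: str) -> str:
    """Build a PostgreSQL statement that advances a table's serial
    sequence to the highest key value currently in the table.
    COALESCE handles the empty-table case by falling back to 1."""
    return (
        f"SELECT setval(pg_get_serial_sequence('{table}', '{key_column}'), "
        f"(SELECT COALESCE(MAX({key_column}), 1) FROM {table}));"
    )

# Placeholder table/key pairs; substitute the real metadata tables.
for tbl, col in [("my_meta_table", "my_key"),
                 ("my_other_table", "other_key")]:
    print(sequence_reset_sql(tbl, col))
```

Running the generated statements against the Destination database leaves each sequence ready to issue the next free key.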
b5_target_ep_installation
- This step installs the Target Enablement Pack from the path given during the migration startup script; this path is stored in, and retrieved from, the parameter 'red10EPLocation'.
Caution: if the path to the EP is a network path, the user running the scheduler service may not have access to it, which can fail the job. To resolve this, you can run this script manually from RED, or copy the EP to a local directory the Scheduler service can access, adjust the parameter 'red10EPLocation' to match, then restart the job from the failed step in the Azkaban Dashboard by rerunning the failed execution.
b6_import_sch_integration_scripts
- The RED 10 scheduler integration scripts are removed when migrating the script tables, so they need to be imported again. They are imported from the current version of RED running the Migration Tooling.
b7_set_default_action_scripts
RED 10 requires each object which is processed via the Scheduler to have an Action Processing Script. For large migrated repositories, generating an individual script for every object can take a very long time and can increase the metadata footprint substantially. Where objects have simple scheduling requirements it is more efficient to share a generic action script; this script determines those candidate objects and assigns a generic script where possible.
- This script is driven by the two parameters 'red10DefaultLinuxActionScriptName' and 'red10DefaultWindowsActionScriptName'.
- This script runs a set of queries to determine candidate objects for a generic action script and then uses the parameters above to assign a default generic action script to those objects.
- The Migration Tooling provides two sample generic action processing scripts which are copied across to the Destination metadata and then used in these assignments.
- The sample generic action processing scripts are 'wsl_mt_py_action_script' for Python and 'wsl_mt_ps_action_script' for PowerShell.
- If you have your own generic action processing script in the target to assign, you can set it in these parameters.
- To disable this process and have all object's action scripts regenerated later, you can remove the script names from these parameters.
Note: The sample generic action processing scripts provided in the Migration Tooling are not target specific and may need to be tweaked to work in some environments. After migration these scripts should be tested and adjusted as required. In some cases the Target EP may provide a target specific generic action processing script which you can deploy instead.
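The assignment step this script performs can be sketched roughly as follows. The 'has_custom_actions' field and the decision rule are stand-ins for the tooling's real candidate queries, and the default script name mirrors the 'red10DefaultWindowsActionScriptName' parameter:

```python
def assign_action_scripts(objects, default_script):
    """Assign the generic action script to objects with simple
    scheduling needs; flag the rest for individual generation.

    The 'has_custom_actions' test is a placeholder for the real
    candidate queries run by the tooling."""
    assignments, to_generate = {}, []
    for obj in objects:
        if obj.get("has_custom_actions"):
            to_generate.append(obj["name"])
        else:
            assignments[obj["name"]] = default_script
    return assignments, to_generate

assigned, pending = assign_action_scripts(
    [{"name": "stage_customer"},
     {"name": "fact_sales", "has_custom_actions": True}],
    "wsl_mt_ps_action_script")
```

Objects left in the pending list would then be picked up by the later per-object generation jobs.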
Support for legacy scripts
The sample action processing scripts will support the legacy script output protocol if the Extended Property 'LEGACY_SCRIPT_SUPPORT' is set on the Target Connection or Table, or a Parameter of the same name is set. Both of these settings are set to TRUE as part of the migration process. See the 'Review Action Script Templates' section on manually updating earlier RED 10 templates to enable this feature.
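The check across the three possible scopes can be sketched as follows; how each scope is read and how values are normalized is illustrative, so confirm against your template's actual logic:

```python
def legacy_support_enabled(table_props: dict, conn_props: dict,
                           parameters: dict) -> bool:
    """Legacy script support is on if LEGACY_SCRIPT_SUPPORT is TRUE
    at any scope: table extended property, connection extended
    property, or repository parameter. (Illustrative only.)"""
    return any(
        str(scope.get("LEGACY_SCRIPT_SUPPORT", "")).upper() == "TRUE"
        for scope in (table_props, conn_props, parameters)
    )

# Migration sets the flag to TRUE, so migrated objects resolve to True.
enabled = legacy_support_enabled({}, {"LEGACY_SCRIPT_SUPPORT": "TRUE"}, {})
```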
b8_apply_legacy_obj_subtypes
- OPTIONAL: Creates custom Object Types for legacy object subtypes which don't exist in RED 10, then finds and updates any of those object types to the new custom types.
EP Install Required
Before continuing to Job 4, or manually running the following 'c' scripts, please log in to the Destination Repository to allow the Target Enablement Pack to complete its configuration.
c1_set_storage_templates
- Associates objects with the default storage templates of the target connection they are associated with.
- This script will not overwrite existing storage template associations for an object, it only adds missing templates.
- Included in Job '4_Set_Storage_Templates'
If you find you were missing some default templates on one of your Destination repository Target connections, you can rerun this script to have them applied to the associated objects.
c2_generate_windows_action_scripts and c3_generate_linux_action_scripts
Depending on your Destination Repository's scheduling platforms you can run either or both of these scripts. Since this process can take a long time, it is best to run the script only for the platform you require first; you can always come back and run these scripts again later.
- Each script will find any object which does not already have an assigned Action Processing Script and generate the script using the associated target connection's default templates.
- Runs in batches: due to the potentially large number of objects to process, the RedCli commands are run in batches and each batch is assessed for failures before continuing with the next batch.
- When running from the UI you can monitor the progress and time remaining by clicking on the command window.
- When run through the scheduler, the same progress messages are sent to the audit log and can be viewed by refreshing the audit log in the RED Scheduler tab.
- It is common to get failures in these scripts the first few times you run them, especially with large repositories with many target connections; this is usually due to missing default templates on a target connection.
- Troubleshooting: Look at the RedCli logs in C:\ProgramData\WhereScape\Work, these will give you the most detail for any given failure. You can also see what commands were sent in each batch by looking for the batch json file in the work directory of the script.
- If you do get failures, then after correcting the underlying configuration problem, subsequent reruns will only pick up failed items or items not yet generated.
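The batch-and-rerun behaviour described above can be sketched as a small driver. The `execute` callable stands in for invoking a RedCli command; nothing here is the actual tooling code:

```python
def run_in_batches(commands, execute, batch_size=50):
    """Run commands in fixed-size batches, collecting failures so a
    later rerun can pick up only the failed items. Mirrors the
    batching behaviour described above; not the actual tooling code."""
    failed = []
    for start in range(0, len(commands), batch_size):
        batch = commands[start:start + batch_size]
        failed.extend(cmd for cmd in batch if not execute(cmd))
        print(f"processed {min(start + batch_size, len(commands))}"
              f"/{len(commands)}; failures so far: {len(failed)}")
    return failed

# Usage sketch: rerun only what failed once configuration is fixed.
# failures = run_in_batches(redcli_commands, run_redcli)
# failures = run_in_batches(failures, run_redcli)
```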
c4_set_load_templates
- Sets the load template for each Load table which has no existing load script or load template associated.
- These defaults are taken from the Load table's associated Source connection's default load script template.
It is important to go through your source connections and make sure they have suitable load script templates assigned prior to running this script. If you have SSIS or DBLink loads these will require specific templates set.
c5_generate_load_scripts
Similar to the c2 and c3 scripts, this script runs RedCli commands in batches and progress can be viewed in the cmd window or scheduler audit trail. Some failures can be expected for the first few runs until all the configurations have been resolved.
- Finds any Load table objects without an associated Load script and attempts to generate a load script for it using the template assigned in the c4 script.
- Troubleshooting: Look at the RedCli logs in C:\ProgramData\WhereScape\Work, these will give you the most detail for any given failure. You can also see what commands were sent in each batch by looking for the batch json file in the work directory of the script.
- If you do get failures, then after correcting the underlying configuration problem, subsequent reruns will only pick up failed items or items not yet generated.
Post Migration
Installing a Scheduler
You should follow the Scheduler Installation section of the RED User Guide to install a scheduler for your Destination Repository. Since the scheduler in RED 10 is Java based, the memory requirements will be greater than those of the RED 8/9 scheduler. There are a few important things to consider when building out the scheduler infrastructure, and some experimentation and tuning will be required to get the optimum throughput for your workloads.
Considerations:
- Review and adjust job thread counts: The migration process capped the total thread count in jobs to 10 since this is the typical parallelism which is possible on an average machine due to memory consumption. You can begin to set this higher as your infrastructure allows.
- Review Jobs with Child Jobs: In RED 10.4.0.3 a child job's tasks are run directly from the original job they reference. If you change a child job's tasks, this may invalidate the parent job's references to it, which can only be fixed by re-publishing the parent job. Improved child job support is coming soon, but until then use job nesting sparingly to avoid synchronization issues.
- If you were previously running a RED 8/9 Linux scheduler, install an Azkaban Executor on the same machine and Linux user. If the previous workloads cannot be handled on the same machine with the Azkaban Executor installed, scale out horizontally with more Azkaban Executor machines.
Review and Refactor
Template Generated Scripts
Some earlier versions of enablement packs, particularly Snowflake PowerShell Load templates, have specific code in them that calls the SQL Server metadata. These scripts need to be identified and either have transformations applied or be regenerated using RED 10 templates.
To identify calls to WslMetadataServiceDLL in scripts which point to SQL Server metadata you can run this query. Post migration these scripts will need an Update SQL applied (shown later) or the script regenerated using RED 10 templates.
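Conceptually, the query is a pattern match over the stored script lines. The sketch below demonstrates the idea with sqlite3 as a stand-in; the ws_scr_header/ws_scr_line table and column names are assumptions, so adapt them to your repository's actual script metadata tables:

```python
import sqlite3

# Hypothetical script metadata layout (header + line tables); the
# real RED table and column names may differ by version.
FIND_DLL_CALLS = """
SELECT DISTINCT h.sh_name
FROM ws_scr_header h
JOIN ws_scr_line l ON l.sl_obj_key = h.sh_obj_key
WHERE l.sl_line LIKE '%WslMetadataServiceDLL%'
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ws_scr_header (sh_obj_key INTEGER, sh_name TEXT)")
db.execute("CREATE TABLE ws_scr_line (sl_obj_key INTEGER, sl_line TEXT)")
db.executemany("INSERT INTO ws_scr_header VALUES (?, ?)",
               [(1, "load_customer"), (2, "load_product")])
db.executemany("INSERT INTO ws_scr_line VALUES (?, ?)",
               [(1, "$dll = 'WslMetadataServiceDLL'"),
                (2, "Write-Output ok")])
hits = [name for (name,) in db.execute(FIND_DLL_CALLS)]
```

Any script names returned are candidates for an Update SQL or regeneration from RED 10 templates.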
Stand-alone Script and Procedure Objects
The term "stand-alone" in this context means Script and Procedure objects that are executed independently of a Table, View or Export object. These are essentially user-created routines which are not generated by RED, and so some refactoring post migration may be required.
Scripts
Review any stand-alone scripts, such as High Water Mark scripts; these may have specific code in them that calls the old metadata repository directly and/or the legacy callable routines. Additionally, the script output protocol of the Azkaban Scheduler is different, and the script may need to be updated to conform. During refactoring, consider adopting the new feature that allows scripts to be associated with Database/ODBC and Extensible Sources instead of Windows or Linux connections, allowing secure access to that connection's credentials and settings.
SQL Blocks
SQL Blocks may have specific code in them that calls the old metadata repository directly and/or the legacy callable routines. Additionally, if the associated connection was the old metadata connection, it will still be the old metadata connection post migration and so may no longer make sense.
Procedures and Functions
Review stand-alone procedures. If these operated on the old metadata connection they will continue to do so; review them to see if they still work as expected and, if required, refactor them to operate on the PostgreSQL metadata instead.
Review Action Script Templates
If the only RED 10 WhereScape Target Enablement Pack available to you was released prior to this version of the Migration Tooling, it will be missing the code that handles the RED 8/9 legacy script output protocol, which is required after a migration to avoid having to rebuild every script. If this is the case, you can update your Action Processing Script Template's execute script function directly. Do this prior to the action script generation tasks in Jobs 5 and 6, or the scripts beginning with 'c<n>_' if running the tasks manually.
You can replace the functions in each of your PowerShell and Python templates depending on what your enablement pack provides. After making your changes, test by manually regenerating the Action Processing Script on a few of your objects in the Destination Repo prior to running the batch generation jobs. The updated functions are as follows:
PowerShell
Replace ExecuteScript function in template wsl_common_pscript_utility_action
Python
Replace ExecuteScript function in template wsl_common_pyscript_utility_action