Current version: MT 3.1 build 250714_2101

Release notes

The RED Migration Tooling allows you to migrate metadata repositories from WhereScape RED 8.6 and 9.0 to WhereScape RED 10.4+. The following sections provide information on how to install and use the RED Migration Tooling. 

Prerequisites

License

The RED Migration Tooling requires a 'Custom Target' enabled license. This is because the tooling uses the Custom Target database type for loading into the destination PostgreSQL RED metadata database. For customers on a traditional SQL Server, Oracle or Teradata target, your license may need to be temporarily upgraded by adding a Custom Target to support the migration.

License Field | Value | Migration Requirement
Licensed Metadata Database Type(s) | SQL Server, Oracle, Teradata | One or more of SQL Server, Oracle or Teradata
Licensed Target Database Type(s) | SQL Server, Oracle, Teradata, Custom | 'Custom' at a minimum
Licensed Custom Target Database Type | Any | Any Custom Target Type*


* The Licensed Custom Target Database Type is the label your license gives to your Custom Target; it will be used for PostgreSQL targets during migration. This is just a display label for the underlying Custom target type. The important differentiator is that it is not one of the in-built target types (SQL Server, Oracle or Teradata), so it can be used for any other template-enabled target platform.

Source Metadata

Source Metadata: 

Data Warehouse:

To support legacy script output protocols from RED 8/9, the execute script function in both the PowerShell and Python templates will need updating if a compatible EP version is not yet available. This is explained in the Post Migration section.


Destination Metadata

Destination Metadata: 

Data Warehouse:

Migration Tooling

Migration Tooling Metadata: 

Tooling:

How the Migration Tooling Works

The RED Migration Tooling is provided as an Enablement Pack which is installed, using the RED Setup Wizard, to a dedicated PostgreSQL database. Once installed, you will have a RED metadata repository plus the Migration Tooling Enablement Pack, which provides a set of scripts and jobs to transfer RED metadata from a Source of SQL Server, Oracle or Teradata to a Destination of PostgreSQL, and then reconfigure the Destination to suit RED 10 and the Azkaban Scheduler.

General Migration Process

The RED Migration Tooling will, wherever possible, retain the existing Scripts and Procedures as is rather than regenerating them in RED 10.

Objects associated with Script-based or Procedure-based processing in the Source Metadata Repository will not be regenerated or recompiled in the Destination Metadata. Instead, it is assumed that the RED 10 Target Enablement Pack will provide a suitable Action Processing Script template that generates appropriate code to deal with the legacy script output protocols and with parameters in procedures.

In RED 10, all Scheduling Actions for an Object are performed through an Action Processing Script which is built for, and associated with, each table. The RED Migration Tooling will generate this script for each object that requires one, or assign a generic script where appropriate. This generation process can take minutes to hours depending on the size of the metadata repository, machine resources and database performance.
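
To gauge how long this might take, you can count the objects in your source repository up front. The following sizing query is a sketch only: the metadata table and column names (ws_obj_object, ws_obj_type and their columns) are assumptions based on the standard RED 8/9 SQL Server schema and may differ between versions.

-- RED 8/9 SQL Server Query (run pre migration to gauge repository size)
-- Table/column names are assumptions; adjust to your metadata schema version
SELECT ot.ot_description AS object_type, COUNT(*) AS object_count
FROM dbo.ws_obj_object oo
JOIN dbo.ws_obj_type ot ON ot.ot_type_key = oo.oo_type_key
GROUP BY ot.ot_description
ORDER BY object_count DESC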

Migrated Object Types

Not all object types from earlier versions of RED are available in RED 10, so it is important to understand what will and won't be migrated. Refer to the following table for more details:

Object Type(s) | Migrated | Post Migration Notes
Connections | Yes | All connections are migrated. MSAS connections should be manually removed after migration.
MSAS, Cubes, Cube Dims, Tabular Cubes | No | Analysis Services object types are not migrated since RED 10 does not support them yet.
Aggregate Dimension Join Table, Aggregate Fact Join Table, Aggregate Join Table, Fact Rollup, Fact Work Table, Permanent Stage, Model View, Fact View | Yes | These legacy object sub-types are migrated but assigned a new Custom Object Type of the same name in RED 10. Objects of these types should be checked carefully in the Destination metadata.
All Other Object Types | Yes | All other object types not mentioned in the rows above are migrated as is.
Object Versions | No | Previous object versions are not migrated; see the reasons below this table.
WhereScape Callable Procedures* | No | Since the inbuilt WhereScape Callable Routines are compiled on SQL Server, Oracle or Teradata, they cannot be migrated.*
Non-Script-Based Loads | Yes | Non-script-based loads such as ODBC, DB Link, SSIS and some File Load types are migrated; however, these load types will require a load script to be generated and will therefore need thorough testing post migration. Any load which was already script-based should function as is, provided the appropriate table-level Action Processing Script has been generated.
Non-Script-Based Exports | Yes | Non-script-based exports will require an export script to be generated and will therefore need thorough testing post migration. Any export which was already script-based should function as is, provided the appropriate export-level Action Processing Script has been generated.
Parameters | Partially | Parameters are migrated; however, if you were on traditional RED 8/9 SQL Server, Oracle or Teradata targets, check that your RED 10 EP has a solution to synchronize parameters between the old and new repositories. Additionally, you will need to review stand-alone scripts and procedures that used parameters.

Previous object versions are not migrated for a few reasons:

  • Restoring to a version predating the migration would leave your object in an unusable state.
  • The size of the versioning tables in legacy repositories adds unnecessary delay to the migration.
  • It is better to start versioning again from scratch in the migrated repository.

* Any Procedures/Blocks or Scripts which previously called these callable routines will continue to work, but the outcomes will be applied to the original Source Metadata Repository and, depending on the procedure being called, may have no effect. Only the WhereScape Parameter Functions will still be of use as is post migration. Most use cases, outside of Parameter read/writes, will involve a customized script or procedure; these should be reviewed to find the RED 10 equivalent and adjusted after migration, including any Jobs they were part of.

Note: Target Enablement Packs will handle legacy procedures that include the WhereScape Parameter read/write functions by synchronizing the dss_parameter table in the Target with the same table in the PostgreSQL metadata repository. In this way most procedures will continue to function as is after migration. A spot-check query for the migrated parameters is sketched below.
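
As a quick post-migration spot check on parameter synchronization, you can list the parameters on both sides and compare. A minimal sketch, assuming the standard dss_parameter name/value columns:

-- RED 10 PostgreSQL Query (run post migration)
SELECT dss_parameter_name, dss_parameter_value
FROM red.dss_parameter
ORDER BY dss_parameter_name

-- RED 8/9 SQL Server Query (the equivalent list in the source)
SELECT dss_parameter_name, dss_parameter_value
FROM dbo.dss_parameter
ORDER BY dss_parameter_name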


The Migration Tooling requires the following named connections in RED:

You should set these up during the initial run of the RED Setup Wizard as outlined in the next section; they are listed here for clarity only.


Connection Name | Type | Database Type | Target Storage Location | Notes
Target | Target | "Any Custom Type" | red | Refers to your Destination RED metadata database on PostgreSQL
Reports | Target | "Any Custom Type" | red | Refers to your Migration Tooling metadata database on PostgreSQL
Source | Source | SQL Server, Oracle or Teradata | n/a | Refers to your Source Metadata that will be migrated by the tooling

Installing the Migration Tool

Check Prerequisites

Check that you have met the prerequisites before you begin; here is a quick checklist:

Run the RED Setup Wizard

  1. Launch the RED Setup Wizard (RedSetupWizard.exe) from the RED installation directory.
  2. Select Create a new repository.
  3. Configure the metadata database; this will be your Migration Tooling metadata.
  4. Select the directory that contains the unzipped RED Migration Tooling. Click Next.
  5. Review the components that will be installed. Click Next.

Create Target Connections

You must create two PostgreSQL connections with the following characteristics:

Connection Name | Database Type | Target Storage Location | Notes
Target | Custom* | red | Refers to your Destination RED metadata database on PostgreSQL
Reports | Custom* | red | Refers to your Migration Tooling metadata database on PostgreSQL


* 'Custom' will be your licensed Custom Database Target type, which might have a different label in the UI than 'Custom'. The key point is that these two connections cannot use the inbuilt SQL Server, Oracle or Teradata target types.

Adding the 'Target' Destination Metadata Connection

The connection named 'Target' will be your PostgreSQL connection to the database that will house the migrated RED metadata repository.

  1. In the Connection Name field, enter 'Target' as the name.
  2. In the Data Source Name field, select the connection to the destination metadata repository.
  3. In the Target Storage Locations field, enter 'red'.
  4. Complete the other fields with the appropriate data, then click Validate to check your configuration.
  5. Once your configuration validates, click Add.
  6. On the Add Targets screen you will see the connection you just added. Click Add another target to add the Reports connection.

Adding the 'Reports' Migration Tooling Metadata Connection

The connection named 'Reports' will be your PostgreSQL connection to the Migration Tooling metadata repository; it allows the tooling to add targets to its metadata database for reporting.

  1. In the Connection Name field, enter 'Reports' as the connection name.
  2. In the Data Source Name field, select the connection to the Migration Tooling metadata repository.
  3. In the Target Storage Locations field, enter 'red' as the storage location.
  4. Complete the other fields with the appropriate data, then click Validate to check your configuration.
  5. Once your configuration validates, click Add.
  6. On the Add Targets screen you will see the two connections you just added. Click Next to continue and add the source connection.

Create the Source Metadata Connection

  1. On the Add ODBC Sources screen, configure a connection with 'Source' as the connection name and select the DSN relating to your existing RED 8.6 or 9.0 metadata repository.
  2. Click Validate to check your connection, then click Add.
  3. The Add ODBC Sources screen will show the connection you just added. Click Next to continue, or click Add another source to add more sources.


Review and Finalize the Install

On the Summary screen, review that your configuration is correct. You can click Previous to make changes, or click Install to continue.


Once the installation finishes, click Finish to close the installer and launch RED.

Review your login settings and click Connect.

The RED Setup Wizard runs with elevated privileges, so when RED is launched from the final page it also starts with the same elevation. If you start the RED Migration Tooling manually, run med.exe as Administrator, because one of the scripts in the Migration Tooling relies on this elevation.

Migration Preparation

Migration Preparation Script

When WhereScape RED starts for the first time after the installation steps described in the previous section, the script that prepares the Migration Tooling executes automatically.

The Migration Preparation Script will prompt for two items:

  1. Source Repository Database Type - either SQL Server, Oracle or Teradata
  2. Target Database Enablement Pack - this is the location of the unpacked RED 10 compatible Target Enablement Pack for your licensed target


If you get failures in the Reports pane after opening WhereScape RED, then one or more of the preparation steps in the host script named '1_prepare_migration' did not succeed. For troubleshooting, see the section that details each of the scripts: 1_prepare_migration

Take note of the failure message and see if you can correct the issue, then rerun the script. On subsequent runs you may get additional failures because an earlier run has already applied a change; in general, rerunning this script will not cause issues and such failures can be dismissed.

Manual Steps and checks after RED Migration Tooling starts

Check Connections

For each connection Target, Source and Reports:

  1. Open the connection and click the 'Derive' button to ensure the server and port fields are up to date.
  2. Browse the connection to ensure the credentials are working (note that the Target connection will not have any objects to display yet).
  3. If you are using a remote PostgreSQL instance, check that the extended property SERVERSIDE_COPY on your Target and Reports connections is set to FALSE; this is the new default in MT 3.1+.

Review Parameters

These parameters are added by the start-up script. You should not need to change anything here, but it is useful to know that these parameters drive many of the scripts executed during the migration process:

Setup the Migration Tooling Azkaban Scheduler on Windows

Windows Scheduler Installation

We'll need a Windows Scheduler installed to perform the migration tasks. Follow the Windows Scheduler Installation instructions to install a WhereScape RED Scheduler for the RED Migration Tooling metadata.

When asked for a Scheduler Metadata database use the RED Migration Tooling metadata database.

When asked for a RED Metadata database also use the RED Migration Tooling metadata database.

Remember your Profile Encryption Secret for later entry into the Scheduler Profile Maintenance wizard in the RED UI.

If you install the Migration Tooling Scheduler with a separate service user, you may need to run the script 'wsl_mt_initialize_scheduler_user' to accept the EULA for that user. Find this script under Host Scripts in RED and run it via the Scheduler.

Configure the Scheduler Credentials in RED

After installing the Scheduler, enter your scheduler credentials on the Configuration page of the Scheduler tab in RED, then Save your Profile again to ensure your credentials are preserved between RED sessions.

Configure the Scheduler Profile

Before running any jobs, you must first set up the Scheduler Profile, which adds the encrypted connection credential rows for the connections in RED. This makes those credentials available to scheduled jobs. To do this, run the script 'wsl_scheduler_profile_maintenance' found under 'Host Scripts' in the object tree in RED.

Use the same Profile Encryption Secret which you entered during the Scheduler installation.

Running the Migration Jobs

Run the migration Jobs one at a time. Before running a job check if it requires other jobs to be run first.

The following sections describe the jobs and any requirements they may have.

1_Source_Reports

This job is optional and can be run at any time. It runs a set of queries against the source repository, providing various object counts in the source. You can view the results by clicking Display Data on the View object in the UI. There is a corresponding Validation Report which compares the same report run against the destination repository; it can be populated by running the corresponding load table after completing the migration.
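
If you want to reproduce the comparison by hand after the migration, a per-type object count on the destination gives the same shape of data. This is a sketch only; ws_obj_object/ws_obj_type and their columns are assumptions based on the standard RED metadata schema, and the tooling's own reports should be preferred:

-- RED 10 PostgreSQL Query (run post migration)
SELECT ot.ot_description AS object_type, COUNT(*) AS destination_count
FROM red.ws_obj_object oo
JOIN red.ws_obj_type ot ON ot.ot_type_key = oo.oo_type_key
GROUP BY ot.ot_description
ORDER BY object_type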

2_Migrate_Current_Objects

This job must be run to migrate to RED 10. Depending on repository size and performance, it will typically finish within 10 to 30 minutes. If there are any failures in Job 2, view the failure reason and restart the job at the point of failure directly from the Azkaban Scheduler Dashboard by rerunning the failed execution.

Job: '2_Migrate_Current_Objects' is intended for SQL Server and Teradata source repositories.

Job: '2_Migrate_Current_Objects_Oracle' is intended for Oracle source repositories only. 

Ensure you only run one of these jobs, depending on your source metadata repository type.

3_Prepare_Target_Repository

Job 2 should be completed successfully before continuing with Job 3. If there are any failures in Job 3, you can complete the job manually from the RED UI by running the scripts in the order outlined in the Migration Scripts Explained section.

After Job 3 has completed, or you have run the scripts manually, please log in to the migrated Destination repository and allow the RED 10 Target Enablement Pack post install process to complete. This is also a good point to check the connections and save a RED Profile for your migrated Destination metadata repository.

Before continuing to Job 4, please log in to the Destination Repository to allow the Target Enablement Pack to complete its configuration.
Additionally, apply any template changes if required, as described in 'Review Action Script Templates'.

4_Set_Storage_Templates

Job 4 applies the default templates which were set up by the RED 10 Target Enablement Pack, which is why it is important to have completed that install process by logging in to the Destination. This step can be re-run if it was completed too early, or the individual scripts can be run from the Migration Tooling RED UI.

5_Generate_Windows_Action_Scripts

This job generates Windows Action Scripts for all objects. It runs a single script that can also be run from the RED UI, see the script details for the scripts prefixed with 'c' in the following section. Running this script is optional.

6_Generate_Linux_Action_Scripts

This job generates Linux Action Scripts for all objects. It runs a single script that can also be run from the RED UI, see the script details for the scripts prefixed with 'c' in the following section. Running this script is optional.

7_Generate_Load_Scripts

Generates Load routines for Load objects without an associated script. It can also be run from the RED UI; before running this job, see the script details for the scripts prefixed with 'c'.

Repeating or Restarting the Migration 

To repeat the migration process a second time, you do not need to reinstall the Migration Tooling; simply follow these steps:

  1. Drop and recreate the Destination PostgreSQL database (a minimal sketch follows this list).
  2. Run script '2_target_repo_creation' to recreate the Destination metadata repository.
  3. Run the jobs again in the order specified.
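
Step 1 is plain PostgreSQL DDL. A minimal sketch, using a hypothetical database name (substitute your own database name and ownership settings):

-- Hypothetical name: adjust 'red_destination' to match your environment
DROP DATABASE IF EXISTS red_destination;
CREATE DATABASE red_destination;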

If you are also upgrading the tooling please follow the upgrade process in the release notes pertaining to your version.

Since the tooling spans many supported versions of RED, the Load tables in the tooling may not have newer metadata columns for some tables. Therefore the only supported way to recreate the tooling's load tables is to follow the steps above, so that the metadata creation process creates the correct metadata tables for your target RED version.


Migration Scripts Explained

These are the Migration Tooling Scripts, each script can be run from the RED UI or via the indicated Scheduled Job. If you choose to run these scripts manually, please follow the order carefully as listed here. 

 

All the scripts, except for 1 and 2, can be rerun at any time if required, to address failures or if the job 2_Migrate_Current_Objects has been completely rerun.

Auto-run at startup

1_prepare_migration

If you have not set up the required connections, the Results pane will display a failure message similar to the image shown below. Please expand the Connections node in the left tree and add or amend connections as required before rerunning the script.

If you do add or adjust connections at this point, ensure you 'Save your Connection Profile' and restart RED so that the in-memory profile of connection credentials is up to date, then re-run this script manually to ensure that the RED Applications containing the RED Objects and Jobs for the Migration Tooling are deployed correctly.

2_target_repo_creation 

Run this script only after the scripts in Job '2_Migrate_Current_Objects' have completed.

The following 'b' scripts are all included in job 3_Prepare_Target_Repository

b1_upgrade_obj_subtypes

b2_job_metadata_updates

b3_storage_metadata_updates

b4_reset_identity_sequences

b5_target_ep_installation

b6_import_sch_integration_scripts

b7_set_default_action_scripts

RED 10 requires each object which is processed via the Scheduler to have an Action Processing Script. For large migrated repositories, generating an individual script for every object can take a very long time and can increase the metadata footprint substantially. Where possible it is more efficient to use a generic action script for objects with simple scheduling requirements; this script determines those candidate objects and assigns a generic script where possible.

Note: The sample generic action processing scripts provided in the Migration Tooling are not target-specific and may need to be tweaked to work in some environments. After migration these scripts should be tested and adjusted as required. In some cases the Target EP may provide a target-specific generic action processing script which you can deploy instead.

The sample action processing scripts will support the legacy script output protocol if the Extended Property 'LEGACY_SCRIPT_SUPPORT' is set to TRUE on the Target Connection or Table, or a Parameter of the same name is set to TRUE. Both of these settings are set to TRUE as part of the migration process; a quick check of the parameter is sketched below. Note the 'Review Action Script Templates' section on manually updating earlier RED 10 templates to enable this feature.
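
To confirm the parameter half of this flag in the Destination repository, a minimal sketch assuming the standard dss_parameter layout (the extended property is checked separately on the connection or table):

-- RED 10 PostgreSQL Query: should return TRUE after migration
SELECT dss_parameter_name, dss_parameter_value
FROM red.dss_parameter
WHERE dss_parameter_name = 'LEGACY_SCRIPT_SUPPORT'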


b8_apply_legacy_obj_subtypes


Before continuing to Job 4, or manually running the following 'c' scripts, please log in to the Destination Repository to allow the Target Enablement Pack to complete its configuration.

c1_set_storage_templates

c2_generate_windows_action_scripts and c3_generate_linux_action_scripts

Depending on your Destination Repository's scheduling platforms, you can run either or both of these scripts. It is best to run only the platform you require first, since this process can take a long time; you can always come back and run these scripts again at a later date.

c4_set_load_templates

c5_generate_load_scripts

Similar to the c2 and c3 scripts, this script runs RedCli commands in batches and progress can be viewed in the cmd window or scheduler audit trail. Some failures can be expected for the first few runs until all the configurations have been resolved.


Post Migration

Installing a Scheduler

You should follow the Scheduler Installation section of the RED User Guide to install a scheduler for your Destination Repository. Since the scheduler in RED 10 is Java-based, its memory requirements are greater than those of the RED 8/9 scheduler. There are a few important things to consider when building out the scheduler infrastructure, and some experimentation and tuning will be required to get the optimum throughput for your workloads.

Considerations:

Review and Refactor

Template Generated Scripts

Some earlier versions of enablement packs, particularly Snowflake PowerShell Load templates, contain code that calls the SQL Server metadata directly. These scripts must be identified and either have transformations applied or be regenerated using RED 10 templates.

To identify calls to the WslMetadataServiceClient in scripts which point to SQL Server metadata, you can run the queries below. Post migration, these scripts will need an update SQL applied (shown later) or the script regenerated using RED 10 templates.

-- RED 10 PostgreSQL Query (run post migration)
SELECT DISTINCT sl_obj_key AS key, sh_name AS name
FROM red.ws_scr_header JOIN red.ws_scr_line ON sl_obj_key = sh_obj_key
WHERE 
	UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%dbo%')
 	OR 
	UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%SQLServer%')

-- RED 8/9 SQL Server Query (run pre migration to analyze the source metadata)
SELECT DISTINCT sl_obj_key AS 'key', sh_name AS 'name'
FROM dbo.ws_scr_header JOIN dbo.ws_scr_line ON sl_obj_key = sh_obj_key
WHERE 
	UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%dbo%')
 	OR 
	UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%SQLServer%')

-- RED 10 PostgreSQL update to correct calls to the WslMetadataServiceClient in Scripts
UPDATE red.ws_scr_line 
SET sl_line = REGEXP_REPLACE(
                  REGEXP_REPLACE(
                      sl_line,
                      '(WslMetadataServiceClient.*)SqlServer',
                      '\1PostgreSQL',
                      'ig'
                  ),
                  '(WslMetadataServiceClient.*)dbo'
                  ,'\1red'
                  ,'ig'
              );

-- RED 10 PostgreSQL update to correct calls to the WslMetadataServiceClient in Templates
UPDATE red.ws_tem_line 
SET tl_line = REGEXP_REPLACE(
                  REGEXP_REPLACE(
                      tl_line,
                      '(WslMetadataServiceClient.*)SqlServer',
                      '\1PostgreSQL',
                      'ig'
                  ),
                  '(WslMetadataServiceClient.*)dbo'
                  ,'\1red'
                  ,'ig'
              );
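
After applying the updates, rerunning the detection query above should return no rows. A compact recheck using the same tables and patterns:

-- RED 10 PostgreSQL Query: both counts should be 0 once the updates have been applied
SELECT COUNT(*) AS remaining_script_refs
FROM red.ws_scr_line
WHERE UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%dbo%')
   OR UPPER(sl_line) LIKE UPPER('%WslMetadataServiceClient%SQLServer%');

SELECT COUNT(*) AS remaining_template_refs
FROM red.ws_tem_line
WHERE UPPER(tl_line) LIKE UPPER('%WslMetadataServiceClient%dbo%')
   OR UPPER(tl_line) LIKE UPPER('%WslMetadataServiceClient%SQLServer%');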


Stand-alone Script and Procedure Objects

The term "stand-alone" in this context means script and procedure objects that are executed independently to a Table, View or Export object. These are essentially user created routines which are not generated by RED and so some refactoring post-migration may be required.

Scripts

Review any stand-alone scripts, such as High Water Mark scripts; these may contain code that calls the old metadata repository directly and/or the legacy callable routines. Additionally, the script output protocol of the Azkaban Scheduler is different, so the script may need to be updated to conform. During refactoring, consider adopting the new feature that allows scripts to be associated with Database/ODBC and Extensible Sources instead of Windows or Linux connections, allowing secure access to that connection's credentials and settings.

SQL Blocks

SQL Blocks may contain code that calls the old metadata repository directly and/or the legacy callable routines. Additionally, if the associated connection was the old metadata connection, it will still point to the old metadata post migration and so may no longer make sense.

Procedures and Functions

Review stand-alone procedures. If these operated on the old metadata connection, they will continue to operate on the old metadata connection; review them to check they still work as expected and, if required, refactor them to operate on the PostgreSQL metadata instead. A query to help locate candidates is sketched below.
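
To help locate candidates, you can search procedure lines for references to the legacy SQL Server metadata schema, mirroring the script query in the previous section. This is a sketch only: the ws_pro_header/ws_pro_line names are assumptions patterned on ws_scr_header/ws_scr_line and may differ in your RED version.

-- RED 10 PostgreSQL Query (run post migration)
-- ws_pro_header/ws_pro_line are assumed names patterned on the script tables
SELECT DISTINCT pl_obj_key AS key, ph_name AS name
FROM red.ws_pro_header JOIN red.ws_pro_line ON pl_obj_key = ph_obj_key
WHERE UPPER(pl_line) LIKE UPPER('%dbo.ws%')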

Review Action Script Templates

If the only RED 10 WhereScape Target Enablement Pack available to you was released prior to this version of the Migration Tooling, it will be missing the code that deals with the RED 8/9 legacy script output protocol, which is required after a migration to avoid having to rebuild every script. If this is the case, you can update your Action Processing Script Template's execute script function directly. This should be done before the action script generation tasks in Jobs 5 and 6, or before running the scripts beginning with 'c<n>_' if performing the tasks manually.


You can replace the functions in each of your PowerShell and Python templates, depending on what your enablement pack provides. After making your changes, test by manually regenerating the Action Processing Script on a few of your objects in the Destination Repo before running the batch generation jobs. The updated functions are as follows:

PowerShell

Replace ExecuteScript function in template wsl_common_pscript_utility_action

function ExecuteScript($name){
    $prefix = [string]::Format("WSL_SCRIPT_{0}_",$name)
    $command = [Environment]::GetEnvironmentVariable("${prefix}COMMAND")
    if ([string]::IsNullOrEmpty($command) -or "$command" -like "*${PSCommandPath}*") {
      # Abort if there is no command or the command contains this action script
      throw [string]::Format("No Script or SQL Block found for routine {0} of $OBJECT$",$name)
    }
    else {
      # Copy across any routine specific env vars
      Get-ChildItem Env:${prefix}* | ForEach-Object {
        $unprefixedvar = $_.Name -replace $prefix, 'WSL_'
        [Environment]::SetEnvironmentVariable($unprefixedvar,$_.Value)
      }
      # Ensure the environment var WSL_WORKDIR is set and accessible, defaults to current run directory when not
      if ( -not ( (Test-Path env:WSL_WORKDIR) -and (Test-Path "${env:WSL_WORKDIR}") ) ) {
        [Environment]::SetEnvironmentVariable('WSL_WORKDIR',$PSScriptRoot)
      }
    }
    if ( Test-Path "$command" ) {
      # We have an SQL Block file
      $sqlBlockFile = Get-Item "$command"
      if ($sqlBlockFile.Extension -eq '.sql') {
        $block = Get-Content $sqlBlockFile -Raw
        $result = ExecuteSQLBlock $block
        if ( ($result -ne $null) -and ($result[1] -ne $null) -and ($result[1] -ne -1) ) {
          WriteAudit "Rows affected:$($result[1])"
        }
      }
      else {
        throw [string]::Format("SQL Block file had unexpected file extension: {0}",$command)
      }
    }
    else {
        $legacyOutputProtocol = $false
        if ( ('$WSL_EXP_LEGACY_SCRIPT_SUPPORT$' -eq 'TRUE') -or ('$PLEGACY_SCRIPT_SUPPORT$' -eq 'TRUE') ) {
          $scriptStdOut = & cmd.exe /c ${env:WSL_COMMAND}
          if ( $scriptStdOut -ne $null ) {
              $stdOutLines = $scriptStdOut -split '\r\n|\n'
              if ( $stdOutLines[0] -in ('1','-1','-2','-3') ) {
                  WriteAudit -message 'Parsing legacy script output protocol' -type "detail"
                  $legacyOutputProtocol = $true
                  if ($stdOutLines[0] -in ('-2','-3')) {
                      WriteAudit -message $stdOutLines[1] -statusCode "E"
                  }
                  elseif ($stdOutLines[0] -in ('-1')) {
                      WriteAudit -message $stdOutLines[1] -statusCode "W"
                  }
                  else {
                      WriteAudit -message $stdOutLines[1]
                  }
                  for ($i = 2; $i -lt $stdOutLines.Length; $i++){
                      WriteAudit -message $stdOutLines[$i]
                  }
              } 
              else {
                # We couldn't detect legacy output protocol so assume new protocol and pass stdout through
                WriteAudit 'Using new script output protocol' -type "detail"
                $stdOutLines | Write-Host
              }
          }
        }
        else {
          & cmd.exe /c ${env:WSL_COMMAND}
        }
        if ( $LASTEXITCODE -ne 0 -or ( $legacyOutputProtocol -and $stdOutLines[0] -in ('-2','-3') ) ) {
          if ( $LASTEXITCODE -ne 0 ) {
            $exitCode = $LASTEXITCODE
          }
          else {
            $exitCode = $scriptStdOut[0]
          }
          throw [string]::Format("Script execution failed with exit code: {0}. Check both audit and detail logs.",$exitCode)
        }
    }
}


Python

Replace ExecuteScript function in template wsl_common_pyscript_utility_action

def ExecuteScript(name):
    env = dict(os.environ)
    # Environment variables specific to the script (e.g. WORKDIR, which comes
    # from the script's connection) are stored prefixed. We copy such variables
    # to their unprefixed name.
    prefix = 'WSL_SCRIPT_{}_'.format(name)
    command = os.getenv(prefix + 'COMMAND')
    if ( not command ) or ( sys.argv[0] in command ):
        raise Exception("No Script or SQL Block found for routine {}".format(name))
    write_detail("Executing command: {}".format(command))
    for var in os.environ:
        if var.startswith(prefix):
            unprefixedvar = 'WSL_' + var[len(prefix):]
            #write_detail("Overriding environment: {} -> {}".format(var, unprefixedvar))
            env[unprefixedvar] = os.environ[var]
    # Ensure our work directory is valid and default to script root if not
    env['WSL_WORKDIR'] = os.getenv('WSL_WORKDIR','Work_Directory_Not_Set')
    if not os.path.exists(env['WSL_WORKDIR']):
        # default to script root
        env['WSL_WORKDIR'] = os.path.dirname(sys.argv[0])
        write_detail("Overriding environment: {} -> {}".format('WSL_WORKDIR', env['WSL_WORKDIR']))
    if os.path.exists(command) and os.path.splitext(command)[1] == '.sql':
        # We have an sql block not a script
        with open(command, 'r', encoding='utf-8') as f:
            block = f.read()
            result = ExecuteSQLBlock(block)
        if result == True:
            write_detail("Executed SQL Block")        
    else:
        legacy_script = False
        if '$WSL_EXP_LEGACY_SCRIPT_SUPPORT$' == 'TRUE' or '$PLEGACY_SCRIPT_SUPPORT$' == 'TRUE':
            # Parse output for LEGACY_SCRIPT_SUPPORT if the matching extended property or parameter is TRUE
            result = subprocess.run(command, shell=True, env=env, capture_output=True, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32
            if result.stdout:
                stdout_lines = result.stdout.splitlines()
                if stdout_lines[0] in ['1','-1','-2','-3']:
                    legacy_script = True
                    write_detail("Parsing legacy script output protocol.")
                    # We have legacy script output protocol
                    legacy_returncode = stdout_lines[0]
                    if legacy_returncode in ['-2','-3']:
                        # error
                        return_code = 2
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','E')
                    elif legacy_returncode == '-1':
                        # success with warning
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','W')
                    elif legacy_returncode == '1':
                        # success
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','I')
                    for line in stdout_lines[2:]:
                        write_audit(line)
                else:
                    write_detail("Using new script output protocol")
                    # First line didn't conform to legacy script output protocol 
                    # so assume we have new output protocol and just pass stdout through
                    for line in stdout_lines:
                      print(line, flush=True)
        else:
            # Assume that we can just pass all the output from the script as our output
            # and the return code indicates success/failure
            result = subprocess.run(command, shell=True, env=env, stderr=subprocess.PIPE, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32   
        if result.stderr != "":
            write_audit(str(result.stderr),'detail','E')
        if ( (result.stderr != "" and not legacy_script) or ( str(return_code) != "0" ) ):
            # Finally signal a failure if one occurred.
            raise Exception("Script execution failed with exit code: {}. Check both audit and detail logs.".format(return_code))