...

In RED 10, all Scheduling Actions for an Object are performed through an Action Processing Script that is built for and associated with each table. The RED Migration Tooling generates this script for each object that requires one, or assigns a generic script where appropriate. This generation process can take minutes to hours depending on the size of the metadata repository, machine resources and database performance.
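
The sample action processing scripts resolve the command to run for each routine from environment variables (the ExecuteScript sample later on this page shows this in full). The sketch below is only a minimal illustration of that idea; it assumes the WSL_SCRIPT_<ROUTINE>_COMMAND convention used by the samples and, purely for illustration, takes the routine name from the command line rather than however the real action scripts receive it.

Code Block
languagepython
titleMinimal action command lookup (illustrative sketch)
collapsetrue
import os
import subprocess
import sys

def run_routine(name):
    # The sample action scripts expose the command for each routine through a
    # prefixed environment variable (see ExecuteScript later on this page).
    command = os.getenv('WSL_SCRIPT_{}_COMMAND'.format(name))
    if not command:
        raise SystemExit("No command found for routine {}".format(name))
    # Run the resolved command and propagate its exit code.
    result = subprocess.run(command, shell=True)
    return result.returncode

if __name__ == '__main__':
    # Assumption for illustration only: the routine name arrives as the first
    # command-line argument; the real action scripts resolve this differently.
    sys.exit(run_routine(sys.argv[1]))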

...

  1. In the Connection Name field, enter 'Reports' as the connection name.
  2. In the Data Source Name field, select the connection to the destination metadata repository.
  3. In the Target Storage Locations field, enter 'red' as the storage location.
  4. Complete the other fields with the appropriate data, then click Validate to check your configuration.
     
  5. Once you have validated your configuration, click Add.
  6. On the Add Targets screen you will see the two connections you just added. Click Next to continue and add the source connection.

Create the Source Metadata Connection

...

Tip

When asked for a Scheduler Metadata database use the RED Migration Tooling metadata database.

When asked for a RED Metadata database also use the RED Migration Tooling metadata database.

Remember your Profile Encryption Secret for later entry into the Scheduler Profile Maintenance wizard in the RED UI.

If you install the Migration Tooling Scheduler with a separate service user, you may need to run the script 'wsl_mt_initialize_scheduler_user' to accept the EULA for that user. Find this script under Host Scripts in RED and run it via the Scheduler to accept the EULA for the Scheduler user.

Configure the Scheduler Credentials in RED

...

Generates Load routines for Load objects without an associated script. It can also be run from the RED UI. Before running this job, see the script details for the scripts prefixed with 'c'.

Repeating or Restarting the Migration 

To repeat the migration process you do not need to reinstall the Migration Tooling; simply follow these steps:

  1. Drop and recreate the Destination PostgreSQL database (see the sketch after these steps).

...

  1. Run script '2_target_repo_creation'

...

  1. This recreates the Destination metadata repository.
  2. Then run the jobs again in the order specified.
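
If you prefer to script step 1, a minimal sketch using psycopg2 is shown below. The connection string and database name are placeholders to replace with your own values; running the equivalent DROP DATABASE and CREATE DATABASE statements from psql or pgAdmin achieves the same result.

Code Block
languagepython
titleDrop and recreate the Destination database (illustrative sketch)
collapsetrue
import psycopg2

# Placeholder connection string and database name - substitute your own values.
# Any active connections to the destination database must be closed first.
ADMIN_DSN = "host=localhost dbname=postgres user=postgres password=postgres"
DEST_DB = "red_destination"

conn = psycopg2.connect(ADMIN_DSN)
conn.autocommit = True  # DROP/CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute('DROP DATABASE IF EXISTS {}'.format(DEST_DB))
    cur.execute('CREATE DATABASE {}'.format(DEST_DB))
conn.close()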

If you are also upgrading the tooling, please follow the upgrade process in the release notes for your version.

Note
titleTooling Load tables should not be recreated

Since the tooling spans many supported versions of RED, its Load tables may not have newer metadata columns for some tables. Therefore the only supported way to recreate the tooling's Load tables is to follow the steps above, so that the metadata creation process creates the correct metadata tables for your target RED version.


Migration Scripts Explained

...

  • This script is driven by the two parameters 'red10DefaultLinuxActionScriptName' and 'red10DefaultWindowsActionScriptName'.
  • This script runs a set of queries to determine candidate objects for a generic action script, then uses the parameters above to assign a default generic action script to those objects (illustrated in the sketch after this list).
  • The Migration Tooling provides two sample generic action processing scripts which are copied across to the Destination metadata and then used in these assignments.
  • The sample generic action processing scripts are 'wsl_mt_py_action_script' for Python and 'wsl_mt_ps_action_script' for PowerShell.
  • If you have your own generic action processing script in the target to assign, you can set it in these parameters.
  • To disable this process and have all objects' action scripts regenerated later, you can remove the script names from these parameters.
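
The sketch below illustrates the assignment decision only. It uses an in-memory list in place of the metadata queries the real script runs, and the object names and parameter values shown are examples.

Code Block
languagepython
titleDefault action script assignment (illustrative sketch)
collapsetrue
# Illustrative only: the real script queries the RED metadata repository to
# find candidate objects; a plain list stands in for those query results here.
parameters = {
    'red10DefaultLinuxActionScriptName': 'wsl_mt_py_action_script',
    'red10DefaultWindowsActionScriptName': 'wsl_mt_ps_action_script',
}

candidate_objects = [
    {'name': 'load_customer', 'target_os': 'Linux', 'action_script': None},
    {'name': 'stage_customer', 'target_os': 'Windows', 'action_script': None},
]

for obj in candidate_objects:
    if obj['action_script']:
        continue  # the object already has its own action script
    key = ('red10DefaultLinuxActionScriptName' if obj['target_os'] == 'Linux'
           else 'red10DefaultWindowsActionScriptName')
    default_script = parameters.get(key)
    if default_script:
        # Assign the generic script named in the parameter.
        obj['action_script'] = default_script
    # If the parameter is blank, the object is left for later regeneration.

print(candidate_objects)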

...

Code Block
languagepython
titleExecuteScript (Python)
collapsetrue
def ExecuteScript(name):
    env = dict(os.environ)
    # Environment variables specific to the script (e.g. WORKDIR, which comes
    # from the script's connection) are stored prefixed. We copy such variables
    # to their unprefixed name.
    prefix = 'WSL_SCRIPT_{}_'.format(name)
    command = os.getenv(prefix + 'COMMAND')
    if ( not command ) or ( sys.argv[0] in command ):
        raise Exception("No Script or SQL Block found for routine {}".format(name))
    write_detail("Executing command: {}".format(command))
    for var in os.environ:
        if var.startswith(prefix):
            unprefixedvar = 'WSL_' + var[len(prefix):]
            #write_detail("Overriding environment: {} -> {}".format(var, unprefixedvar))
            env[unprefixedvar] = os.environ[var]
    # Ensure our work directory is valid and default to script root if not
    env['WSL_WORKDIR'] = os.getenv('WSL_WORKDIR','Work_Directory_Not_Set')
    if not os.path.exists(env['WSL_WORKDIR']):
        # default to script root
        env['WSL_WORKDIR'] = os.path.dirname(sys.argv[0])
        write_detail("Overriding environment: {} -> {}".format('WSL_WORKDIR', env['WSL_WORKDIR']))
    if os.path.exists(command) and os.path.splitext(command)[1] == '.sql':
        # We have an sql block not a script
        with open(command, 'r', encoding='utf-8') as f:
            block = f.read()
            result = ExecuteSQLBlock(block)
        if result == True:
            write_detail("Executed SQL Block")        
    else:
        legacy_script = False
        if '$WSL_EXP_LEGACY_SCRIPT_SUPPORT$' == 'TRUE' or '$PLEGACY_SCRIPT_SUPPORT$' == 'TRUE':
            # Parse output for LEGACY_SCRIPT_SUPPORT if the matching extended property or parameter is TRUE
            result = subprocess.run(command, shell=True, env=env, capture_output=True, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32
            if result.stdout:
                stdout_lines = result.stdout.splitlines()
                if stdout_lines[0] in ['1','-1','-2','-3']:
                    legacy_script = True
                    write_detail("Parsing legacy script output protocol.")
                    # We have legacy script output protocol
                    legacy_returncode = stdout_lines[0]
                    if legacy_returncode in ['-2','-3']:
                        # error
                        return_code = 2
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','E')
                    elif legacy_returncode == '-1':
                        # success with warning
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','W')
                    elif legacy_returncode == '1':
                        # success
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','I')
                    for line in stdout_lines[2:]:
                        write_audit(line)
                else:
                    write_detail("Using new script output protocol")
                    # First line didn't conform to legacy script output protocol 
                    # so assume we have new output protocol and just pass stdout through
                    for line in stdout_lines:
                        print(line, flush=True)
        else:
            # Assume that we can just pass all the output from the script as our output
            # and the return code indicates success/failure
            result = subprocess.run(command, shell=True, env=env, stderr=subprocess.PIPE, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32   
        if result.stderr != "":
            write_audit(str(result.stderr),'detail','E')
        if ( (result.stderr != "" and not legacy_script) or ( str(return_code) != "0" ) ):
            # Finally signal a failure if one occured.
            raise Exception("Script execution failed with exit code: {}. Check both audit and detail logs.".format(return_code))

...