...

Object Type(s) | Migrated | Post Migration Notes
Connections | (tick) | All connections are migrated; MSAS connections should be manually removed after migration.
MSAS, Cubes, Cube Dims, Tabular Cubes | (error) | Analysis Services Object Types are not migrated since RED 10 does not support them yet.
Aggregate Dimension Join Table, Aggregate Fact Join Table, Aggregate Join Table, Fact Rollup, Fact Work Table, Permanent Stage, Model View, Fact View | (tick) | These legacy object sub-types are migrated but assigned a new Custom Object Type of the same name in RED 10. Objects of these types should be checked carefully in the Destination metadata.

All Other Object Types | (tick) | All other object types not mentioned in the rows above are migrated as is.
Object Versions | (error) | Previous object versions are not migrated. There are a few reasons for this:

  • Restoring to a version predating the migration would leave your object in an unusable state.
  • The size of the versioning tables in legacy repositories adds unnecessary delay to the migration.
  • It is better to start versioning again from scratch in the migrated repository.
WhereScape Callable Procedures* | (error) | Since the inbuilt WhereScape Callable Routines are compiled on either SQL Server, Oracle, or Teradata, they cannot be migrated.*

Non-Script-Based Loads | (tick) | Non-script-based loads such as ODBC, DB Link, SSIS, and some File Load types are migrated; however, these load types will require a load script to be generated and will therefore need thorough testing post migration. Any load which was already script-based should function as is, provided the appropriate table-level Action Processing Script has been generated.

Non-Script-Based Exports | (tick) | Non-script-based exports will require an export script to be generated and will therefore need thorough testing post migration. Any export which was already script-based should function as is, provided the appropriate export-level Action Processing Script has been generated.

Parameters | (warning) | Parameters are migrated; however, if you were on traditional RED 8/9 SQL Server, Oracle, or Teradata targets, you should check that your RED 10 Enablement Pack has a solution to synchronize parameters between the old and new repositories. Additionally, you will need to review Stand-alone Scripts and procedures that used parameters.


Info: * WS Callable Routines

Any Procedures/Blocks or Scripts which previously called these callable routines will continue to work, but the outcomes will be applied to the original Source Metadata Repository and, depending on the procedure being called, may have no effect. Only the WhereScape Parameter functions will still be of use as is post migration.

Most use cases outside of Parameter read/writes will involve a customized script or procedure; these should be reviewed to find the RED 10 equivalent and adjusted after migration, including any Jobs they were part of.

Note: Target Enablement Packs will handle legacy procedures that include the WhereScape Parameter read/write functions by synchronizing the dss_parameter table in the Target with the same table in the PostgreSQL metadata repository. In this way most procedures will continue to function as is after migration.
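
As an illustration only, the kind of synchronization described in the note above can be sketched as a small routine that copies parameter name/value pairs from the legacy target's dss_parameter table into the same table in the PostgreSQL metadata repository. This is not the Enablement Pack implementation: the pyodbc/psycopg2 drivers, the connection strings, the column names (dss_parameter_name, dss_parameter_value) and the unique constraint assumed by the upsert are illustrative assumptions that should be verified against your own repositories.

Code Block: Parameter synchronization sketch (Python)
# Illustrative sketch only -- not the Enablement Pack implementation.
# Assumes dss_parameter exposes dss_parameter_name and dss_parameter_value
# columns in both repositories (verify against your environment).
import pyodbc
import psycopg2

def sync_parameters(legacy_dsn, pg_dsn):
    src = pyodbc.connect(legacy_dsn)      # legacy SQL Server/Oracle/Teradata target
    dst = psycopg2.connect(pg_dsn)        # RED 10 PostgreSQL metadata repository
    try:
        rows = src.cursor().execute(
            "SELECT dss_parameter_name, dss_parameter_value FROM dss_parameter"
        ).fetchall()
        with dst, dst.cursor() as cur:
            for name, value in rows:
                # Upsert each parameter; assumes a unique constraint on dss_parameter_name.
                cur.execute(
                    """
                    INSERT INTO dss_parameter (dss_parameter_name, dss_parameter_value)
                    VALUES (%s, %s)
                    ON CONFLICT (dss_parameter_name)
                    DO UPDATE SET dss_parameter_value = EXCLUDED.dss_parameter_value
                    """,
                    (name, value),
                )
    finally:
        src.close()
        dst.close()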

...

  • Review and adjust job thread counts: The migration process capped the total thread count in jobs at 10, since this is the typical level of parallelism an average machine can sustain given memory consumption. You can set this higher as your infrastructure allows.
  • Review Jobs with Child Jobs: In RED 10.4.0.3, a child job's tasks are run directly from the original job they reference. If you change a child job's tasks, this may invalidate the parent job's references to it, which can only be fixed by re-publishing the parent job. Improved child job support is coming soon, but until then use job nesting sparingly to avoid synchronization issues.
  • If you were previously running a RED 8/9 Linux scheduler, you should install an Azkaban Executor on the same machine and Linux user. If the previous workloads cannot be handled on the same machine with the Azkaban Executor installed, then you should scale out horizontally with more Azkaban Executor machines.

Review and Refactor

Stand-alone Script and Procedure Objects

The term "stand-alone" in this context means script and procedure objects that are executed independently of a Table, View, or Export object. These are essentially user-created routines which are not generated by RED, so some refactoring may be required post migration.

...

Code Block: ExecuteScript (Python)
import os
import subprocess
import sys

# Note: write_detail, write_audit and ExecuteSQLBlock are assumed to be
# helper functions defined elsewhere in the host script.
def ExecuteScript(name):
    env = dict(os.environ)
    # Environment variables specific to the script (e.g. WORKDIR, which comes
    # from the script's connection) are stored prefixed. We copy such variables
    # to their unprefixed name.
    prefix = 'WSL_SCRIPT_{}_'.format(name)
    command = os.getenv(prefix + 'COMMAND')
    if ( not command ) or ( sys.argv[0] in command ):
        raise Exception("No Script or SQL Block found for routine {}".format(name))
    write_detail("Executing command: {}".format(command))
    for var in os.environ:
        if var.startswith(prefix):
            unprefixedvar = 'WSL_' + var[len(prefix):]
            #write_detail("Overriding environment: {} -> {}".format(var, unprefixedvar))
            env[unprefixedvar] = os.environ[var]
    # Ensure our work directory is valid and default to script root if not
    env['WSL_WORKDIR'] = os.getenv('WSL_WORKDIR','Work_Directory_Not_Set')
    if not os.path.exists(env['WSL_WORKDIR']):
        # default to script root
        env['WSL_WORKDIR'] = os.path.dirname(sys.argv[0])
        write_detail("Overriding environment: {} -> {}".format('WSL_WORKDIR', env['WSL_WORKDIR']))
    if os.path.exists(command) and os.path.splitext(command)[1] == '.sql':
        # We have an sql block not a script
        with open(command) as f:
            block = f.read()
            result = ExecuteSQLBlock(block)
        if result == True:
            write_detail("Executed SQL Block")        
    else:
        legacy_script = False
        if '$WSL_EXP_LEGACY_SCRIPT_SUPPORT$' == 'TRUE' or '$PLEGACY_SCRIPT_SUPPORT$' == 'TRUE':
            # Parse output for LEGACY_SCRIPT_SUPPORT if the matching extended property or parameter is TRUE
            result = subprocess.run(command, shell=True, env=env, capture_output=True, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32
            if result.stdout:
                stdout_lines = result.stdout.splitlines()
                if stdout_lines[0] in ['1','-1','-2','-3']:
                    legacy_script = True
                    write_detail("Parsing legacy script output protocol.")
                    # We have legacy script output protocol
                    legacy_returncode = stdout_lines[0]
                    if legacy_returncode in ['-2','-3']:
                        # error
                        return_code = 2
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','E')
                    elif legacy_returncode == '-1':
                        # success with warning
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','W')
                    elif legacy_returncode == '1':
                        # success
                        return_code = 0
                        if stdout_lines[1]:
                            write_audit(stdout_lines[1],'audit','I')
                    for line in stdout_lines[2:]:
                        write_audit(line)
                else:
                    write_detail("Using new script output protocol")
                    # First line didn't conform to legacy script output protocol 
                    # so assume we have new output protocol and just pass stdout through
                    for line in stdout_lines:
                        print(line, flush=True)
        else:
            # Assume that we can just pass all the output from the script as our output
            # and the return code indicates success/failure
            result = subprocess.run(command, shell=True, env=env, stderr=subprocess.PIPE, text=True)
            return_code = result.returncode if result.returncode < 2**31 else result.returncode - 2**32   
        if result.stderr != "":
            write_audit(str(result.stderr),'detail','E')
        if ( (result.stderr != "" and not legacy_script) or ( str(return_code) != "0" ) ):
            # Finally signal a failure if one occurred.
            raise Exception("Script execution failed with exit code: {}. Check both audit and detail logs.".format(return_code))
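
For comparison, a stand-alone script that still relies on the legacy output protocol parsed above would look roughly like the following sketch. The status codes and line layout are taken from the parsing logic in ExecuteScript (only honoured when LEGACY_SCRIPT_SUPPORT is TRUE); the script body itself is a hypothetical placeholder.

Code Block: Legacy script output protocol sketch (Python)
# Minimal sketch of a stand-alone script emitting the legacy output protocol:
# first line is the status code (1 = success, -1 = success with warning,
# -2/-3 = error), second line is the summary message, and any further lines
# are written to the audit log by ExecuteScript.
import sys

def main():
    try:
        rows_loaded = 42  # placeholder for the script's actual work
        print(1)                                               # status code
        print("Load completed: {} rows.".format(rows_loaded))  # summary message
        print("Further detail lines are written to the audit log.")
        return 0
    except Exception as e:
        print(-2)                                              # error status
        print("Load failed: {}".format(e))
        return 1

if __name__ == '__main__':
    sys.exit(main())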

...