Wednesday, September 12, 2018

RedPrairie/JDA Application Monitor

Overview

One thing that all large implementations have in common is a set of tasks that must be performed periodically.  Some of these tasks exist by design; others compensate for data corruption.  We often cannot get a fix from the vendor, or the fix may require an upgrade that is not feasible.

In previous versions we had Schedule Agents that could do this.  In newer versions we have a slightly more elaborate construct called jobs.

Our Approach

At Oracular we employ a slightly more sophisticated approach to this problem.  We implement the concept of an application monitor that detects abnormalities and fixes them.  While it executes using the provided constructs of schedule agents or jobs, it presents a more abstract view of the problem, so the overall solution is easier to manage.

We view the problem of an application monitor as follows:

  • Detect issues of a certain type.  The detection returns several rows, where each row represents one instance of the issue.  It can take the form of a MOCA command or a SQL statement.
  • For each of the rows returned above, execute MOCA code to fix that situation.
  • Optionally log to dlytrn.
  • Optionally raise EMS alerts.
  • Provide a front-end so that advanced end users can manipulate these monitors or add new ones.
  • Run these as a single job so that the overall job schedule is not adversely impacted.
  • Publish progress to the job monitoring framework developed by Oracular.

Our Solution

We define these monitors in a new table:
  • Monitor Id: A unique name for each monitor
  • Description: Detailed information about the monitor
  • Sort#: Controls the sequence in which the monitors run
  • Time Since Last Run: Seconds since the last run.  The monitor job could be running every minute, but one of the monitors may be defined to run only every 10 minutes.
  • Enabled?: 1/0 to enable the monitor
  • Detect Command: 4000-byte field providing MOCA or SQL to detect an abnormality.  Returns n rows.
  • Fix Command: 4000-byte field providing MOCA to fix one row.  Executes for each row returned by the detect command.
  • Log Dlytrn?: 1/0 to log a dlytrn entry for each row returned by the detect command
  • Raise EMS on Fail?: 1/0.  If enabled, raise an EMS alert if the fix command fails.
The solution is in the form of a single MOCA command called "run ossi monitor" that is scheduled using MOCA jobs.  We define this job to execute every minute; to control individual monitors that we may want to run less frequently, we use the "Time Since Last Run" value.  Users can maintain this information themselves using a simple front-end application:



All WMS implementations accumulate some of these monitors over time to handle typical issues.  For example:

  • We often have orphan invmov records.  We can detect such records and then delete them.
  • Sometimes we have pckmov records that do not have corresponding pckwrk records.  We can detect the condition and kill such pckmov records (a detect/fix sketch follows this list).
  • It is common to have situations where rescod is not cleared.  We can detect the condition and run "deallocate resource location" for these locations.
  • It is often a requirement to reprocess downloads in EERR status.
  • We can create a monitor to detect some abnormal condition and send an email to a set of users with results of the query.  This solution is easier than creating an EMS alert since it does not require a rollout process and the email can include detailed results. 
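To make the detect/fix split concrete, here is a hedged sketch of the two commands for the orphan pckmov example above.  In the monitor definition these would be stored as MOCA (SQL wrapped in square brackets); the linking column (cmbcod) is an assumption about the schema and may instead be wrkref against pckwrk_dtl in newer versions.

-- Detect command: return one row per pckmov record that has no matching pckwrk record
select m.cmbcod, m.wh_id
  from pckmov m
 where not exists (select 1
                     from pckwrk w
                    where w.cmbcod = m.cmbcod
                      and w.wh_id  = m.wh_id)

-- Fix command: executed once for each row returned by the detect command;
-- the detect columns are available as @variables
delete from pckmov
 where cmbcod = @cmbcod
   and wh_id  = @wh_id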

Conclusion

Some sort of application monitor is a common requirement in any large system and RedPrairie/JDA WMS is no exception.  The design pattern above can be applied generally to any large system.  It looks at the problem abstractly and provides a solution that is simple to implement, maintain, and manage.

Managing Long Running Jobs In RedPrairie/JDA

Overview

We often have jobs that perform long-running tasks.  There is no easy way to see the work they are performing at a given time.  The only option is to look at the MOCA Console or a similar view, which provides a very low-level window into the MOCA command or SQL that is being executed.  Historical information is not easy to view either.

While this solution is described in the context of RedPrairie/JDA, the concept is universal and can be used in any system where long-running operations are implemented.

Our Solution

At Oracular we have developed a universal approach to this problem which we apply in all of our solutions, including RedPrairie/JDA jobs.  We view any long-running operation as follows:
  • Each instance of execution is a job
  • The job executes one or more modules
  • The module executes one or more actions
  • Start and stop of each job, module, and action is recorded with a timestamp in a separate commit context so that it is visible outside the transaction right away.
  • In case of an error the error information is recorded as well.
  • All of this information is kept in a database table so it can be referenced later.
  • DDAs are available so that end users can view this information.
A table called usr_ossi_job_log records this information.  It has the following structure:
  • Job#: A new number for every job
  • Module#: A new number for every module within the job
  • Action#: A new number for every action within the module
  • Who: A string identifying the user or other context executing the job
  • Job Id: Job name
  • Module Id: Module name
  • Action Id: Action name
  • Context Data: Additional data
  • Started On: Start date/time
  • Ended On: End date/time
  • Error Code: Error code, if an error occurs
  • Error Message: Error message, if an error occurs

So as a job progresses it keeps updating this table in a separate commit context.  This allows users to know exactly how far along this instance of the job is, and because we have historical data we can reasonably predict when the job will end.
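As a minimal sketch of how the historical rows can be used, the query below computes the typical duration of each module of a job.  The physical column names (job_id, module_id, action_id, started_on, ended_on) are assumptions based on the table description above.

-- Average duration of each module of a job, based on prior executions
-- (assumes DATE columns; job name is illustrative)
select module_id,
       count(*)                                              as executions,
       round(avg((ended_on - started_on) * 24 * 60 * 60), 1) as avg_seconds
  from usr_ossi_job_log
 where job_id    = 'MY-NIGHTLY-JOB'
   and module_id <> '%'               -- skip the job-level row (module is the literal %)
   and action_id =  '%'               -- module-level rows only (action is the literal %)
   and ended_on is not null
 group by module_id
 order by avg_seconds desc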

Making It Happen!

To make it all come together we have created a set of MOCA commands and MOCA functions that allow our code to be simple and at the same time provide information to this table.  A typical such job will be structured as follows:
/*
 * Adds a row to the table with a new job#, job-id, additional information, 
 * and who information.  The start date/time is logged as well.  Module 
 * and action are set to % (literal).  Commits this data in a separate 
 * commit context.
 */
publish data
where uc_ossi_job_seq = ossi__register_job ( 'JOB-ID', 'SOME-INFO', 
                        'WHO-INFO' )
|
try
{
    /* 
     * Some step(s) during the job.
     * Adds a row to the table for the job#, gets a new module#, and 
     * returns it.  The row has the module name and additional 
     * information as needed.  Action is set to % (literal).
     * The additional information can come in handy, for example, to 
     * log progress when we are processing several rows of a domain, 
     * like processing orders in a wave.  You could log something 
     * like 1/n there so that you can see how far along the job is.
     * This is committed in a separate commit context.
     */
    publish data
    where uc_ossi_module_seq = ossi__register_module ( @uc_ossi_job_seq, 
                               'MODULE NAME', 'MORE DATA' )
    |
    try
    {
        /* 
         * This module will have 1 or more actions, each coded like this.
         * We do not have to use the action concept - it becomes useful 
         * if we can divide a module further.  For example, the module 
         * could be an order and the action could be a pick.  The idea 
         * is the same as for modules - here we give the action name and 
         * additional information.  The additional information can be 
         * used to provide specific details and also 1/n type of data.
         * This is committed in a separate commit context.
         */
        publish data 
        where uc_ossi_action_seq = ossi__register_action ( 
                                   @uc_ossi_job_seq, @uc_ossi_module_seq, 
                                   'ACTION NAME', 'MORE DATA' )
        |
        try
        {
            /* MOCA commands that do the work */
        }
        finally
        {
            /*
             * uc_ossi_job_seq, uc_ossi_module_seq, and uc_ossi_action_seq are 
             * in scope, so the corresponding action row is updated with the 
             * end timestamp and error information.
             * This is committed in a separate commit context.
             */
            complete ossi job log
            where uc_ossi_err_code  = @?
            and   uc_ossi_err_descr = @!
        }
    }
    finally
    {
        /*
         * uc_ossi_job_seq and uc_ossi_module_seq are in scope, so the 
         * corresponding module row is updated with the end timestamp 
         * and error information.
         * This is committed in a separate commit context.
         */
        complete ossi job log
        where uc_ossi_err_code  = @?
        and   uc_ossi_err_descr = @!
    }
}
finally
{
    /*
     * Updates the row in the table for the job itself (added when the 
     * job was registered).  Sets the end time and also the error 
     * information if applicable.
     * This is committed in a separate commit context.
     */
    complete ossi job
    where uc_ossi_err_code  = @?
    and   uc_ossi_err_descr = @!
} 

So as this job progresses through its modules and actions, we publish its progress to this table - so we know exactly how far along it is.  We also know historically how long the job has taken and how long the various modules and actions have taken.

The data structure is generic and will work for any type of long running operation.

End User View

Users can view all of the jobs in a simple-to-use front-end.  The first screen provides a dropdown to see all the jobs defined in the system:

Then it shows a view that indicates how many times the job has run and the times of the latest execution:
By going to the Job Log Detail Display, we can see the progress of the latest execution.  Here the users can see the progress of the currently executing job as well.
To see previous executions, go to Job Executions tab:

Summary

Users can view all of the jobs using this approach.  The data provided by this approach is extremely valuable for support.  We can detect if a job is stuck and also monitor the progress of long-running operations.  We can easily predict when a job will end and can objectively determine whether performance has deteriorated.  In the case of a performance problem, we can pinpoint the exact operation that is the culprit.

Wednesday, September 5, 2018

RedPrairie/JDA RF Solution for the 21st Century

Overview

All RedPrairie WMS implementations need an RF solution - a solution for the physical devices that are used on the floor to perform most warehouse functions.  Man-hours spent on these devices far exceed those spent on GUI screens.  Yet the solution for these devices looks the same as it did a decade ago.  Internally the infrastructure evolved to a Java-based telnet emulation, but from the point of view of an end user it has not improved much.

At Oracular we have been working with RedPrairie systems for a long time and we understand the pains and opportunities in such implementations.  Improving the RF solution is part of that.

We have identified the following areas as opportunities that we have addressed in our new solution.
  1. Connection Setup
  2. Touch Support
  3. Screen Layout
  4. Additional Information
  5. Voice Support

Solution Overview

Our solution is an Android app that is an evolution of our earlier solution.  While that solution was geared toward ad-hoc use, our vision here is to provide a solution for the warehouse floor.  As several customers are now choosing the Android platform, the solution is a natural fit.

Whereas the standard RP solution is a thin telnet emulation, our solution builds on top of that.  All of the delivered and custom RF screens work without any tweaks, but at the same time the solution can be enhanced by providing additional contextual information to the user.

To make this happen, our solution makes a direct connection to the MOCA server so that it can access it, and it also keeps a telnet window open.  It also utilizes advanced screen scraping and keyboard manipulation to streamline data management.

Salient Features

Following is a list of some key features that we have incorporated in our solution.

Connection Setup

A typical warehouse may have hundreds of these devices and the typical users are not technically savvy, so maintaining these becomes a chore.  Each device needs some setup for the following reasons:
  • Define environments that it can connect to.
  • Identify it individually to the application so that the DEVCOD variable is set properly
Environment management can become a major headache for system administrators.  Updating the information on several devices is time consuming.  Using the same device against multiple environments or re-purposing a device is not straightforward either.

Our solution solves this problem by defining a single URL for all connection details, which remains the same for all devices.  All of the changes are then made in that single location.

Profile
First we have a concept of profiles.  These are maintained in a simple XML file and point to the various MOCA environments that the devices may connect to:
<ossi_rf_emul_profiles>
  <profile name="env_1">
    <app_url>http://10.10.10.10:8110/service</app_url>
    <prompt_user_id_first>1</prompt_user_id_first>
    <short_name>AC</short_name>
  </profile>
  <profile name="env_2">
    <app_url>http://10.10.10.20:8110/service</app_url>
    <prompt_user_id_first>1</prompt_user_id_first>
    <short_name>AD</short_name>
  </profile>
</ossi_rf_emul_profiles>
This defines the following:
  • name (arbitrary value): A name for this environment
  • app_url (MOCA URL): URL that connects to this environment
  • short_name (arbitrary value): Abbreviated name of the environment to show on the emulator
  • prompt_user_id_first (0 or 1): Ask for username and password after connection

Profile_dtl
In the same location, we have a profile detail XML file for each of the profiles defined above.  The XML file is named profile_dtl_name.xml.  For example:
<ossi_rf_emul_profile>
  <setup name="env1">
    <mtf_url>10.10.10.10:8112</mtf_url>
    <short_name>AC</short_name>
    <wh_id>WMD1</wh_id>
  </setup>
</ossi_rf_emul_profile>
This defines the following:
  • name (arbitrary value): A name for this environment
  • mtf_url (MTF URL): Host and port of the MTF telnet connection
  • short_name (arbitrary value): Abbreviated name of the environment to show on the emulator
  • wh_id (warehouse): As defined in the wh table


When our emulator starts, the user selects the appropriate profile and then the profile detail.  With this input we know the application URL, the MTF URL, and the warehouse id.  The application then prompts for the username and password for the application URL.  This allows the emulator to connect directly to the MOCA server and at the same time open a telnet connection to the MTF server.

Device Identification
In order to identify each physical device to the application server, RedPrairie has used various techniques over time, including:
  • Requiring fixed IP addresses and using last three digits
  • Utilizing "answer back" and then responding with a value from the device.
Both can work but are not easy to set up.  Fixed IP addresses are not suitable at all these days.  Answer back works but requires per-device configuration.

Our solution is to support these legacy concepts and extend them further to make things simpler.  Each physical device provides several useful identifiers that can be utilized, for example:
  • IP Address (if fixed IP address)
  • Bluetooth Id
  • MAC Address
  • Serial Number
MAC addresses and serial numbers are especially interesting since they are often known from the product packaging and are already set.  In RedPrairie we already have the concept of defining each device as an "RF Terminal" using "RF Terminal Maintenance".

As you can see here, rather than utilizing "answer back" we define one of the following attributes for each device - and while defining them we can use wild cards:
  • IP Address
  • Bluetooth Id
  • Mac Address
  • Serial Number
So in our solution when the "RF Terminal Id" prompt shows up on the emulator, we automatically key in the corresponding terminal id.


Touch Support

Some hardware devices are quite compact, with keys that are quite small, so typing can be tricky.  Our emulator provides touch support.  This comes in handy in situations like:
  • Selecting menu options
  • "Pressing" Function keys

Screen Layout Enhancements

Fifteen years ago, defining an RF screen in terms of lines and columns of characters made sense, but today these devices have high-resolution screens - so in reality a lot of usable real estate is wasted.  Our solution maximizes the real estate in several ways:
  • The function key palette is displayed on the left side - making it easier to manipulate the function keys.  Refer to the images displayed above to see the function key palette.
  • Soft keyboard support.  Refer to the image above to see how the soft keyboard appears and disappears.
  • Automatic zoom in and zoom out to maximize the use of real estate.  Refer to the image above to see how the screen automatically zooms in and out based on the soft keyboard.
  • The top of the screen provides connection details like environment, user, etc.

  • The bottom section of the screen can provide additional, contextually sensitive information (described below)

Additional Information

RedPrairie standard RF screens often need to be complemented with additional information.  They do not provide a simple way of doing that, so sometimes users switch to an alternate GUI front-end or rely on reports.  Our approach is to utilize the full capabilities of the devices and allow users to extend the view with additional information based on the context.  For instance, if a pick directs you to a location, we could show the inventory in the location along with its attributes like lot and origin.  We can show multiple pages, and the information provided can be different for different users.  The information can come from external sources as well, e.g. the ERP or external websites.

This is defined by users themselves and is not a development task.

LES Command Maintenance
We use the les_cmd table to store the commands that are used for this purpose.  This allows us to make changes without needing "mbuild" and thus avoid rollout requirements.  When we execute these commands we always execute the highest level commands - thus providing a mechanism for overriding them as well.

If we want to show a URL then the command returns a special variable called "url":


We have two types of commands - ones that show data (as shown above) and others that detect the context.  The latter always return one row and one column with a value of 1 or 0:
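As a hedged sketch of both kinds (in les_cmd these would be stored as MOCA, i.e. SQL wrapped in [ ]): a data command that shows the inventory in the location on the current screen, and a condition command that returns 1 or 0.  The scraped variable (@stoloc) and the day-shift check are purely illustrative; the inventory join follows the standard invlod/invsub/invdtl structure.

-- A "data" command: show the inventory in the location the RF screen points at
select d.prtnum, d.lotnum, d.orgcod, d.untqty
  from invlod l
  join invsub s on s.lodnum = l.lodnum
  join invdtl d on d.subnum = s.subnum
 where l.stoloc = @stoloc

-- A "condition" command: return one row, one column, with 1 or 0
-- (here a simple day-shift check; a real one might check the user's role)
select case when to_char(sysdate, 'HH24') between '06' and '18'
            then 1 else 0 end as enabled
  from dual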

Additional Information Definition
Additional RF form information is defined as a collection.  Several pages may be defined in one collection.  The collection also describes how it fetches the context from the RF form that is running.  The screenshot below indicates that "wrkref" is scraped from the screen to establish context.  Then three pages will be displayed below the form for additional information.  Each "page" has an associated command which returns the data.


Setups
We have defined the various page groupings above.  To use them in a specific context we use the concept of "setups".  Several setups can be defined for an RF form.  Each setup has an associated "condition command" and "Additional Information Id".  The first setup whose condition command returns true will be used.  This concept can be used to display different types of additional information on the same RF form.  For example, we could show different data based on the type of user or other context information.  A supervisor could, for example, see the inventory levels in the system while doing counting but other users would not.


How it all comes together
When a user goes into an RF form, we check whether the form has a setup that is active based on the defined pre-condition.  If there is one, we then find the various pages and the associated commands.  We utilize the bottom section of the RF form to show this data, where the pages are displayed as tabs and the data is displayed in a grid.

Voice Support

If devices support voice, the RF emulator can utilize it as well.  It can speak the form name and the field name.


Conclusion

We have been working with RedPrairie WMS for a long time and understand the intricacies of the architecture.  We also understand the deployment framework and use cases.  This allows us to appreciate the gaps, and this solution is based on years of experience deploying these systems in various industries and environments.  Once deployed, users will appreciate the new capabilities, which result in ease of use and improvements in quality and throughput.  System support also becomes easier.

If you want additional information about this solution or any other Oracular solutions or initiatives contact me at saad.ahmad@oracular.com.  Visit us at http://www.oracular.com


Sunday, June 10, 2018

Querying RedPrairie WMS Data for Adhoc Analysis

Overview

RedPrairie/JDA instances can have over 1000 tables and over 5000 distinct field names.  Exact counts vary based on the options that are installed and whether there are customizations.

A typical RedPrairie installation is in a single schema in Oracle or a single database in SQL Server.  The data is very easy to query, and since the software is implemented with sound use of database transactions, querying from another tool is just fine!

In this blog I will provide some techniques for getting information about the data dictionary so that you may use other tools to create reports or views.

RedPrairie Data Dictionary

Oracle and SQL Server have had constructs and data stores for detailed information about tables and columns for some time, but most software packages do not utilize them.  RedPrairie has been putting such information into the database for quite a while.
  • Table Information (Legacy SQLServer): ::fn_listextendedproperty
    Example: select value cmnt from ::fn_listextendedproperty('MS_Description', 'user', 'dbo', 'table', @table_name, NULL, NULL)
  • Table Information (Legacy Oracle): user_tab_comments
    Example: select comments cmnt from user_tab_comments where table_name = @table_name
  • Table Information (New): les_mls_cat (tbl_table)
    Example: select * from les_mls_cat where mls_id = 'tbl_' || @table
  • Column - Short Name: les_mls_cat
    Example: select * from les_mls_cat where mls_id = @column
  • Column - Comment (SQLServer): les_mls_cat
    Example: select value cmnt from ::fn_listextendedproperty('MS_Description', 'user', 'dbo', 'table', @table_name, 'column', @column)
  • Column - Comment (Oracle): user_col_comments
    Example: select comments cmnt from user_col_comments where column_name = @column_name and table_name = @table_name
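Building on the Oracle dictionary views above, here is a hedged sketch that lists every column of a table together with its comment.  The views (user_tab_columns, user_col_comments) are standard Oracle; the table name is just an example.

-- All columns of a table with their data types and comments (Oracle)
select c.column_name,
       c.data_type,
       cc.comments as column_comment
  from user_tab_columns c
  left join user_col_comments cc
         on cc.table_name  = c.table_name
        and cc.column_name = c.column_name
 where c.table_name = upper('rcvinv')
 order by c.column_id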

RedPrairie Data Dictionary via MOCA

This information can be retrieved by executing MOCA commands as well.  If you have access to the instance via MOCA, this can be simpler.  You can use a MOCA client or Server Command Operations.

Run "Server Command Operations".  It shows an application like


Some commands you can run here to get information are:
  • list user tables: Table list along with details
    Example: list user tables
  • list table columns: Columns in a table, short name, and comment
    Example: list table columns where table_name = 'rcvinv'


Typical Queries

The RedPrairie database is well normalized and easy to query.  You can refer to this blog entry to see the tables that correspond to some functional areas.  Once you know a table, you can use the techniques above to see its columns.  To join tables, refer to each table's primary key.  Most of the time, related tables have exactly the same column names or very similar ones.

Receiving (Inbound)

  • Supplier List (supmst, adrmst): select * from supmst join adrmst on adrmst.adr_id = supmst.adr_id
  • Master Receipt / Receive Truck (rcvtrk, trlr, locmst): Every rcvtrk is a location. rcvtrk has trlr_id to join to trlr.
  • List of receipts (rcvtrk, rcvinv, rcvlin): from rcvtrk join rcvinv on rcvinv.trknum = rcvtrk.trknum and rcvinv.wh_id = rcvtrk.wh_id join rcvlin on rcvlin.wh_id = rcvinv.wh_id and rcvlin.client_id = rcvinv.client_id and rcvlin.invnum = rcvinv.invnum and rcvlin.supnum = rcvinv.supnum
  • Inventory for receipt (rcvlin, invdtl): from rcvlin join invdtl on invdtl.rcvkey = rcvlin.rcvkey (a runnable sketch follows this list)
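As a runnable illustration of the joins above, the hedged sketch below lists the inventory identified against each receipt line for one warehouse.  The selected columns are illustrative; the join columns come straight from the rows above.

-- Inventory received against each receipt line
select r.trknum, i.invnum, l.rcvkey, d.prtnum, d.untqty
  from rcvtrk r
  join rcvinv i on i.trknum = r.trknum
               and i.wh_id  = r.wh_id
  join rcvlin l on l.wh_id     = i.wh_id
               and l.client_id = i.client_id
               and l.invnum    = i.invnum
               and l.supnum    = i.supnum
  join invdtl d on d.rcvkey = l.rcvkey
 where r.wh_id = 'WMD1'   -- illustrative warehouse id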

Shipping (Outbound)

  • Customer List (cstmst, adrmst): select * from cstmst join adrmst on adrmst.adr_id = cstmst.adr_id
  • Shipping Structure (shipment, shipment_line, ord_line): One shipment has several lines. Each line points to an order line.
  • Inventory for shipment (shipment_line, invdtl): from shipment_line join invdtl on invdtl.ship_line_id = shipment_line.ship_line_id

Inventory

  • Inventory List (locmst, invlod, invsub, invdtl): Actual item details are in invdtl. invsub is a group of details like a case. invlod is like a pallet. locmst is the actual storage location, or for an ASN it is the truck.

History

  • History Data (dlytrn): Any activity will log something in dlytrn - for example receiving, inventory movements, picking, and shipping.  While the transaction tables hold current data, dlytrn can be queried for historical data (a sample query follows).
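As a hedged example of mining dlytrn, the query below summarizes activity per user per day over the last month.  The column names (trndte, usr_id, oprcod, trnqty) and the warehouse value are assumptions about the standard dlytrn layout.

-- Units handled per user, per day, per operation over the last 30 days (Oracle)
select trunc(trndte) as trn_day,
       usr_id,
       oprcod,
       sum(trnqty)   as units
  from dlytrn
 where wh_id  =  'WMD1'                 -- illustrative warehouse id
   and trndte >= trunc(sysdate) - 30
 group by trunc(trndte), usr_id, oprcod
 order by trn_day, usr_id, oprcod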

Complex Examples

  • Historical report of supplier performance, i.e. errors in receipts:
    1. If we change status while receiving, rcvlin will have rows with the bad status.
    2. If we received in the expected status and then changed status, use dlytrn to query for an entry for the same load after inventory creation with the changed status.

Tuesday, March 13, 2018

Is Archiving the way to deal with old data in 2018?

Overview

JDA/RedPrairie WMS requires customers to have a data archival strategy.  The approach that they push, and that most customers end up adopting, is akin to using a sledgehammer to drive a nail.

The JDA approach is quite interesting and would probably be an appropriate mechanism if the year were 1995.  Their approach is to archive the data to an environment that is identical from the point of view of software but holds older data.  This is a very interesting paradigm because all of the reports, queries, use cases, etc. that we may have will work in exactly the same way in the archive environment - all we have to validate is that the data from the current system is properly moved to the archive instance.

Issues

So what is the issue with the above approach?

  • We are adding another environment to support.  So let's say we have 5 production environments (for sets of warehouses) - we really need 5 additional environments for archive.  Then to support testing and development we will have tens of additional environments.  If we have related environments like EMS, parcel, report writer, etc., those may be needed as well.
  • These additional environments need to be part of the software development life cycle.  All rollouts need to go to these environments as well.
  • We create "cut off" points, i.e. some queries would need to go to multiple environments.
  • Some advanced use cases like "recalling" and "returns" may need overly complex solutions where we may need to query archive and online instances.
  • An additional point of failure for daily maintenance.

What are my options?

So do we have any options?  First we really need to check the calendar: what we considered "too much data" in 1995 would be considered a below-average database size today.  Second, we need to understand that the number of rows in a table does not cause an issue - it is the access path that matters.  Third, today's databases have advanced features like partitioning that render such complex strategies obsolete.  In this article I will focus on solutions available in Oracle, but note that SQL Server has similar concepts and features.

First let us firmly establish the problem we are trying to solve.  The size of a table is not a measure of performance - it is always the access path - so what we are really trying to address is data management, not performance.  Another "problem" that the archiving solution supposedly addresses is that queries against the archive instance will not adversely impact the online system's performance.

Data Management - Partitioning

The most promising solution to the data management problem is to utilize the advanced database feature of partitioning and let the data stay in the online system.  This implies that your single online instance will have all of the data all the time.  Dispatched shipments will simply be in a different partition.

The ideal partitioning solution is to create the main partition by range (or, in newer versions, interval partitions) and then a sub-partition by warehouse.  That results in manageable data in each partition.  This partitioning strategy also gives the database good information to parallelize queries.
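As a hedged sketch (Oracle 11g+ syntax), the DDL below shows the shape of such a strategy for a dlytrn-style table: interval partitioning by month on the date column with list sub-partitions by warehouse.  The table and column definitions are illustrative; in practice the existing table would be converted in place, for example with DBMS_REDEFINITION.

-- Monthly interval partitions on the transaction date, list sub-partitions by warehouse
create table dlytrn_part (
  trndte  date         not null,
  wh_id   varchar2(32) not null,
  oprcod  varchar2(32),
  trnqty  number
)
partition by range (trndte) interval (numtoyminterval(1, 'MONTH'))
subpartition by list (wh_id)
subpartition template (
  subpartition sp_wmd1  values ('WMD1'),
  subpartition sp_wmd2  values ('WMD2'),
  subpartition sp_other values (default)
)
(
  partition p_initial values less than (date '2018-01-01')
);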

Data Isolation - Reporting Database

The other problem, providing an isolated environment for queries, is easily addressed by utilizing the concept of a standby database, "data guard", or reporting database.  Please refer to the database documentation, but in short it provides a completely isolated database instance that is "almost" up to date and can be used for reporting.  The setup is a low-level DBA activity and requires little application support.

But this may not be needed for the majority of scenarios.  Such a database should be reserved for true research activities where we will not be using the RedPrairie/JDA client to access the tables.  All use cases where we need to access the data through the application should be handled by connecting to the online instance and letting the database handle the large amounts of data.

How should I partition the data?

To partition effectively we need a good date column on the table, and for the warehouse sub-partition the warehouse id should be in the table.  Unfortunately some tables in JDA WMS do not have these columns, so small modifications are needed to make it all come together.

In the following section I have listed the tables and suitable partitioning keys.  The following enhancements should be done to make it all work (a sketch of enhancement #1 follows the list):
  1. When a "shipping trailer" dispatches, update the following tables with uc_trndte set to the dispatch date/time.  If a table does not have a warehouse id, we also set the uc_wh_id column.
    • trlr
    • car_move
    • stop
    • shipment
    • shipment_line
    • ord
    • ord_note
    • ord_line
    • ord_line_note
    • oub_serv_ord
    • oub_serv_ord_line
    • pckwrk_hdr
    • pckwrk_dtl
    • pckmov
    • invlod
    • invsub
    • invdtl
    • locmst
    • manfst
    • shp_dst_loc
    • sp_hand_ref
  2. When a "receiving trailer" dispatches, update the following tables with uc_trndte set to the dispatch date/time.  If a table does not have a warehouse id, we also set the uc_wh_id column.
    • trlr
    • rcvtrk
    • rcvinv
    • rcvlin
    • locmst
    • inb_serv_rcvinv
    • inb_serv_rcvlin
  3. As a general rule - use base 36 sequences
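As a hedged sketch of enhancement #1 for one of the listed tables, the update below stamps uc_trndte on the orders attached to a dispatching carrier move.  The join path (stop to shipment to shipment_line to ord) and the bind variable are assumptions for illustration; similar updates would be applied to the other tables in list #1.

-- Stamp the partition key on orders when their trailer/carrier move dispatches
update ord o
   set o.uc_trndte = sysdate
 where exists (select 1
                 from shipment s
                 join shipment_line sl on sl.ship_id = s.ship_id
                 join stop st          on st.stop_id = s.stop_id
                where st.car_move_id = :dispatched_car_move_id   -- illustrative bind
                  and sl.ordnum    = o.ordnum
                  and sl.client_id = o.client_id
                  and sl.wh_id     = o.wh_id)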
So the overall partitioning strategy ends up as follows (not including work order tables):
Transaction History Tables (no separate online-data concept)
  • dlytrn: partition by trndte (month), sub-partition by wh_id (list)
  • trlract: partition by trndte (month), sub-partition by wh_id (list)
  • ordact: partition by trndte (month), sub-partition by wh_id (list)
  • invact: partition by trndte (month), sub-partition by wh_id (list)
  • cnthst: partition by adddte (month), sub-partition by wh_id (list)
  • wrkhst: partition by issdte (month), sub-partition by wh_id (list)
  • canpck: partition by candte (month), sub-partition by wh_id (list)
  • pko_act: partition by ins_dt (month), sub-partition by wh_id (list)
  • pko_act_dtl: partition by ins_dt (month), no sub-partition
Workflow History Tables (no separate online-data concept)
  • cnfrm_bck_serv: partition by moddte (month), sub-partition by wh_id (list)
  • cnfrm_inv_serv: partition by adddte (month), sub-partition by wh_id (list)
  • cnfrm_noninv_serv: partition by adddte (month), sub-partition by wh_id (list)
Order Tables (online data: uc_trndte is null; requires modification #1)
  • trlr: partition by uc_trndte (month), sub-partition by stoloc_wh_id
  • car_move: partition by uc_trndte (month), sub-partition by wh_id
  • stop: partition by uc_trndte (month), sub-partition by uc_wh_id
  • shipment: partition by uc_trndte (month), sub-partition by wh_id
  • shipment_line: partition by uc_trndte (month), sub-partition by wh_id
  • ord: partition by uc_trndte (month), sub-partition by wh_id
  • ord_note: partition by uc_trndte (month), sub-partition by wh_id
  • ord_line: partition by uc_trndte (month), sub-partition by wh_id
  • ord_line_note: partition by uc_trndte (month), sub-partition by wh_id
  • oub_serv_ord: partition by uc_trndte (month), sub-partition by wh_id
  • oub_serv_ord_line: partition by uc_trndte (month), sub-partition by wh_id
  • pckwrk_hdr: partition by uc_trndte (month), sub-partition by wh_id
  • pckwrk_dtl: partition by uc_trndte (month), sub-partition by wh_id
  • pckmov: partition by uc_trndte (month), sub-partition by wh_id
  • invlod: partition by uc_trndte (month), sub-partition by wh_id
  • invsub: partition by uc_trndte (month), sub-partition by uc_wh_id
  • invdtl: partition by uc_trndte (month), sub-partition by uc_wh_id
  • locmst: partition by uc_trndte (month), sub-partition by uc_wh_id
  • manfst: partition by uc_trndte (month), sub-partition by uc_wh_id
  • shp_dst_loc: partition by uc_trndte (month), sub-partition by uc_wh_id
  • sp_hand_ref: partition by uc_trndte (month), sub-partition by uc_wh_id
Receiving Tables (online data: uc_trndte is null; requires modification #2)
  • trlr: partition by uc_trndte (month), sub-partition by stoloc_wh_id
  • rcvtrk: partition by uc_trndte (month), sub-partition by wh_id
  • rcvinv: partition by uc_trndte (month), sub-partition by wh_id
  • rcvlin: partition by uc_trndte (month), sub-partition by stoloc_wh_id
  • locmst: partition by uc_trndte (month), sub-partition by wh_id
  • inb_serv_rcvinv: partition by uc_trndte (month), sub-partition by wh_id
  • inb_serv_rcvlin: partition by uc_trndte (month), sub-partition by wh_id
Job/Task History Tables
  • job_definition_exec: partition by start_dte (month)
  • task_definition_exec: partition by start_dte (month)

Conclusion

If you have been live for some time, chances are you have experienced your share of archiving blues.  Because of the overly complex archive solution, people often resent even going to the archive instance.  Volumes of priceless data in dlytrn are completely ignored; rather than valuing that data we consider it a burden.  Using the simple strategies described here we can bring our JDA WMS systems into the 21st century.  We can treat dlytrn as a valuable data source which we choose not to purge, without worrying about it growing to hundreds of millions of rows.  We can utilize simple techniques like standby databases and mine through that data to see what gems may be there - or, if we do not have the time or budget right now, at least we can leave the data there rather than losing it.  Also, by keeping it all in one system where historical data is easily accessible, we do not have to worry about untimely exceptions.  If we have to print a packing slip from 5 years ago - no problem; we simply go to "Report Operations" and print it.