Tuesday, December 18, 2012

Control Adhoc access to production environments

Overview

    As much as we may want to disallow all adhoc access to production environments, it is often a requirement in order to provide adequate support - especially when there is a problem. That is why tools like WinMSQL or Server Command Operations are provided by RedPrairie. On the other hand, the mere existence of such access is perceived negatively by auditors and system administrators.

    In this blog, I will list some features that could make providing such access manageable. All of these features are available in the latest version of Oracular MOCA Client which is available at http://autoupdate.oracular.com/mocaclient

Essential Features

  • Access Control
    • RedPrairie "Server Command Operations" application essentially provides the same capabilities as any other command level access tool like winmsql. But access to this can be controlled via RedPrairie option access control. The option name is "SALSrvCmdOpr" and only users who have access to this can access the "Server Command Operations". Every external tool like winmsql should respect this access as well. Both Oracular MOCA client and latest version of RedPrairie winMSQL respect this access control.
  • Log commands to sys_audit table
    • RedPrairie provides a table called "sys_audit" that is intended to give auditors a record of access to the system. Oracular MOCA Client allows the sysadmin to force everything that is executed to be recorded in this table.
  • Force a specific version
    • We are bound to find issues in any product and some of those issues can be critical for a production environment. A system administrator should be able to force inbound connections to be at a certain version level.
      Oracular MOCA Client allows for this control: the administrator can set a minimum version level on the server, and if the user's Oracular MOCA Client is not at that level, access is not granted.
  • Force auto-commit to be off
    • All MOCA clients offer a setting where the user can turn auto-commit off. This implies that after running a command, the user needs to manually execute a commit. This can be a life saver if someone makes a mistake! The issue is that auto-commit is often left on.
      Oracular MOCA Client allows the server administrator to force this setting to be off. If the setting is made on the server, the client will have auto-commit off and the user is not able to change it either. This can have an adverse side-effect where someone may forget to commit their changes. To address this, Oracular MOCA Client respects another setting that allows the administrator to set the maximum time for an uncommitted transaction. If the user does not commit within that time, the transaction is automatically rolled back.
  • Only allow read-only access
    • Oracular MOCA Client also allows for a setting where changes are simply not allowed. In this case every command is automatically rolled back. Oracular MOCA Client executes the command as follows:
         try
         {
            entered command
         }
         finally
         {
            [rollback] catch(@?)
         }
         
      If the command has an embedded commit, which is generally a bad idea, that will still happen but otherwise any changes are immediately rolled back.
  • Detect if the user executed a DML or DDL
    • The system administrator may want to fine-tune some controls for DML (data manipulation) commands like insert or update, and DDL (data definition) commands like drop table, create table, etc. Oracular MOCA Client allows the sysadmin to maintain a file on the server where she can provide regular expressions that define what counts as DDL and what counts as DML. She could, for example, include commands like "sl_change" and "create record" (a hypothetical example of such a file is sketched at the end of this list).
      Along with this, Oracular MOCA Client then provides some additional features:
    • Confirm each DML
      • This is an integer; the user will be asked to confirm that many times, with the default option being No each time.
    • Confirm each DDL
      • This is an integer; the user will be asked to confirm that many times, with the default option being No each time.
    • Additional documentation for DML and DDL
      • The sysadmin can define prompts which the user must respond to before executing the DML or DDL. These can include items like "Case#" and "Description". This information is logged in the sys_audit table along with the executed command.
    • New roles for allowing DML or DDL
      • Sysadmin can utilize roles to control which users can execute DMLs and which users can run DDLs.
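      As referenced above, a hypothetical example of the DML/DDL pattern file might look like the following (the actual file name, location, and format are whatever the administrator sets up in Oracular MOCA Client; the patterns here are only illustrative):
         # classify anything matching these regular expressions as DML
         DML   ^\s*sl_change\b
         DML   ^\s*create record\b
         DML   \[\s*(insert|update|delete)\b
         # classify anything matching these as DDL
         DDL   \[\s*(create|drop|alter)\s+table\b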

Conclusion

    The features described above offer a good starting point for providing controlled access to production instances. Sysadmins can also use other techniques, such as granting full access to named users for short periods of time and then revoking it.
    The bottom line is that access to production environments should be managed conservatively. It is better to provide too little rather than too much. When access must be provided, Oracular MOCA Client offers features to manage it so that mistakes can be avoided. Oracular MOCA Client also has several features for development environments which I will cover in a separate post.

Wednesday, November 7, 2012

RedPrairie Integrator Tips

The RedPrairie Integrator tool (also called SeamLES) allows the RP toolset to communicate with any external system. Following are some tips that may come in handy:

  • Some common tips - including some performance tips
    • DO NOT use IFD Field algorithms to fetch event data.
      • Please, please, please do not do it. It gives me heartburn every time I see a result IFD algorithm pulling event data. For example, if the EO has a field like "stoloc" and you need its arecod, people will add arecod as a field in the result IFD with a field-level algorithm that pulls the arecod with a single select. In the worst cases I have seen tens of such algorithms with single selects. It creates a performance nightmare. In such cases you should add the fields to the corresponding EO instead.
        So the general rule should be that you must not select event-related data in an IFD field algorithm. It is ok to fetch constants like policies, code descriptions, UOM conversions, etc., but not this.
        Also, when looking at performance, review these IFD field algorithms; even if you are using them correctly (e.g. for policies), the sheer number of these calls can have an adverse impact.
        Imagine a full truck-load in SHIP_LOAD. You may have thousands of segments - what if you had these selects in some of those fields? You can easily imagine the stress on the database. By employing a caching strategy, or by pushing the data to a parent IFD segment or the EO segment, you could really improve performance.
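        To make the anti-pattern concrete, a field-level algorithm of this kind typically boils down to a single select per segment, something along these lines (the locmst column names are shown only for illustration):
           /* result IFD field algorithm for "arecod" - fires once per segment */
           [select arecod
              from locmst
             where stoloc = @stoloc
               and wh_id  = @wh_id] catch(-1403)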
    • Use "Synchronous except send" rather than "Asynchronous" for result IFDs going to external systems.
      • Typical flow of an event that creates asynchronous result IFD is:
          Event Status    Description
          EC              Event Output Created
          IC              Result IFD Created
          SC              Result IFD Sent
        When the result IFD is ASYNC, the user gets the prompt back when the event is in EC. Then the slIFDGen (A) task runs on an interval and works on these events one at a time to create result IFDs from them. Then another task takes events in IC and sends the transactions to external systems.
        The slIFDGen (A) task can be multi-threaded, but an even better option is to simply use "Synchronous except send" rather than Asynchronous. This implies that when the event is logged it will create the EO and also create the result IFDs. When the user gets the prompt back, the event will be in IC, not EC. It may take slightly more time but that overhead will not be perceivable by the user. The advantage is that there is no single task responsible for moving so many events from EC to IC.
    • Use "Synchronous" for result IFDs going to DCS.
      • You should never use synchronous communication for transactions going to external systems because that tightly couples the two systems and a transaction will fail if the communication link is down.
        But for transactions going to DCS, the work is done by processing algorithms and there is no communication link. For these result IFDs:
        1. Always set result IFD as synchronous
        2. Set "Always persist Event output" and "Always persist interface document" checkboxes to off
        3. Set "Persist event data" on event to on. You may turn it off if performance gain is significant. Keeping it on provides event argument data in event query
        This will greatly improve the performance because integrator will not write event output data and result IFD data to the database. That is a significant chunk of I/O that is saved resulting in significant improvement. Cost is that in event query you will see the event and its arguments but no EO or result IFD data.
    • Separate identification and processing for inbound data
      • When we get downloads sent to integrator, we can configure the slInAdapter task to either process all the way (download goes from NEW to EDONE or EERR) or we can break it up such that slInAdapter only identifies the download and slIFDEvt will log the events from it. In this case download will go from NEW to ID. slIFDEvt will take it to EDONE or EERR.
        This can help in performance. Furthermore we can start multiple instances of slIFDEvt which further improves the performance.
    • A separate sender slIFDGen for each external system
      • It is possible to have a single sender slIFDGen to send data to all systems. But it is better to have a separate sender for each external system.
    • Use CSV data layout for result IFDs if using XML, processing algorithm, or table based
      • The data layout on result IFD defines how data is stored in sl_ifd_data_dtl table. As you can see there is no option there for XML - the reason is that XML is a communication detail. This implies that the result IFD layout could be whatever you want in such cases.
        So in such cases I recommend always using CSV rather than FIXED BYTE because it provides you the flexibility to change the lengths of the fields more easily.

  • IFD Field Class: EO Expression on child segment
    • This is a lesser-known (and lightly documented) feature which comes in handy. Let's say that you have event output data that looks like
      ORDER: ordnum=A
         ORDER_LINE: ordlin=001, prtnum=A1, qty = 10
         ORDER_LINE: ordlin=002, prtnum=A2, qty = 20
         ORDER_LINE: ordlin=003, prtnum=A3, qty = 30
         
      Now let's say that your result IFD segment, which is at the ORDER level, needs to access some data from the ORDER_LINE level; that is when this feature comes in. Let's say your result IFD segment at the order level needs the first part number, or the total ordered quantity. In these cases you can add a field to the result IFD segment with a class of "EO Expression on Child Segment". Then you can put something like:
        <<EO_SEGMENT>> [SQL Group expression]
      So to get first part number, you will say:
        <<ORDER_LINE>> min(prtnum)
      to get total ordered quantity, you will say:
        <<ORDER_LINE>> sum(qty)
      to get number of lines, you will say:
        <<ORDER_LINE>> count(*)
      The EO segment can be the immediate child, or a grandchild, and so on; it is not restricted to the immediate child. There are other classes like "EO Expression Look up first" and "EO Expression Look down first" that work for a subset of the use cases, but those classes do not respect the parent-child relationship, and they cannot apply group functions either.
      Often people do not use this option and end up changing the event output to provide additional data elements. Changing the event output should be the last resort.

  • Integration by reading foreign databases
    • Often when we are dealing with a custom system to integrate with, the integration discussion rapidly moves towards the other system creating files in a certain format and RP consuming them. Or using a queuing mechanism. There is another option.
      RP interfaces to Oracle Apps, JDE, and Great Plains use a strategy where no download is sent to RP; instead RP peeks into the host system's database. This strategy should always be considered when interfacing with a custom system. For the overall project it saves a lot of time because the work of creating intermediate files is taken off the table. The project then becomes:
        Infrastructure needed
        If RP is running on SQLServer, use linked servers to connect to the host system database (it could be any major database). If RP is running on Oracle, use the Oracle heterogeneous connectivity option to create a database link to the foreign database. You could also employ any other mechanism offered by the host system, such as web services.
        Host to RP
        1. Create a new system in integrator. This new system represents the host system.
        2. Create a notification table on the host system. The host will populate it to indicate that an event occurred. It will create a single row per event and put in some key data that allows you to pull more information. This table will also indicate which RP event it maps to, e.g. ORDER_MANAGEMENT or PART_MANAGEMENT (a hypothetical sketch of such a table follows this list).
        3. Create alternate event expressions in RP, mapping the event arguments to the column names in this queue table.
        4. Create a retrieve method implementation for the new system. Use the standard event output but change the retrieve method implementation for the new system to pull the extra data, such as order header, address, and line data.
        5. The standard result IFD to DCS should work. Since you mapped to the standard EO, the result IFD should just work.
        6. Configure the standard table-to-event adapter.
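        Regarding the notification table in step 2 above, a hypothetical sketch (all names, columns, and types are made up for illustration) could look like this on an Oracle host:
           create table host_rp_event_queue
           ( evt_seq    number        not null,   -- unique id assigned by the host
             rp_event   varchar2(30)  not null,   -- e.g. ORDER_MANAGEMENT or PART_MANAGEMENT
             key_value  varchar2(60)  not null,   -- key data the retrieve method uses to pull the rest, e.g. an order number
             proc_flg   varchar2(1)   default 'N',-- set once the event has been logged in RP
             adddte     date          default sysdate )
        A single row per event, carrying just enough key data for the retrieve method implementation to pull the rest, keeps the contract between the two systems very small.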
        RP to host
        1. No new system to create, same system as above would work
        2. Configure the ODBC communication method for this system
        3. Create new tables on host where you will populate all of the data
        4. Create custom result IFD in RP and send to this new system
        5. The result IFD should be based on standard EO if possible. Custom EO is ok as well. The result IFD will have a special field in each segment called "TABLE_NAME". This tells where this data goes. Fields within the segment correspond to columns in the table.
      Using this strategy shaves weeks off the overall project. I have used it successfully in several projects; not to mention that the designs for JDE, Oracle Apps, and Great Plains also use it.
      So be creative when discussing integration possibilities.

  • Restrictions on MOCA based Retrieve Method Implementations
    • When creating a retrieve method implementation, you can use SQL or MOCA. With SQL you use from and where clause and with MOCA you can put in a MOCA command snippet.
      With MOCA there are some restrictions.
      1. You cannot use expressions when putting in EO field column data. With SQL, for example, you could say prtnum||'X', but something like that will not work for MOCA based retrieve methods. Similarly, in SQL you have some bind variables available like :i_c_absent_ind_char, etc. These cannot be used in MOCA based RMs. This is important to note because the client applications automatically add these for missing fields and then you get a "No rows affected" error and it is hard to find out why. There is no net loss of functionality because you can put all of these expressions in the MOCA snippet quite easily with a "publish data".
      2. Order by and group by are not respected for MOCA based RMs. For "order by" there is no loss of functionality because you can use the sort MOCA command. Grouping is more difficult - you could use groovy.
      MOCA based RMs are a good idea though. They improve code reuse. They also allow for creating solutions like calling external web services, or an RM that can update data as well (voice interfaces use the latter).
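      For restriction 1, the expressions can simply live inside the MOCA snippet itself; a rough sketch (table and column names are only illustrative):
         [select ol.*
            from ord_line ol
           where ol.ordnum = @ordnum
             and ol.wh_id  = @wh_id]
         |
         /* publish the computed values that a SQL based RM would have used
            expressions or bind variables for (values shown are illustrative) */
         publish data
          where prtnum_x            = @prtnum || 'X'
            and i_c_absent_ind_char = ' '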

  • About Integrator Caching
    • Integrator has a lot of meta-data to support the integration flow. The events, event outputs, Retrieve Methods, Algorithms, Result IFDs etc. are all in the database.
      If you are analyzing performance, you need to realize that, regardless of how much data is there, all of it is cached after the first access. So it is not worth your while to worry about these SQLs. Just make sure that the sl_cache_parm table has large numbers in cache_siz. We expect all of the data to be cached, so increasing the numbers is ok.
      Often people trace a slow transaction and then go off on a tangent about the hundreds of SQLs that are pulling data from integrator meta-data tables. If folks who are not familiar with integrator get involved (e.g. DBAs), the discussion goes further off track because they keep coming back to these SQLs. You need to understand that these SQLs make no difference over the long run. For example, if you are downloading an order - after the first download, every other download will not read this meta-data.
      Integration performance issues are always in one of the following places:
      • Bad SQLs in retrieve method implementations
      • Expensive SQLs in blocking algorithms, processing algorithms, field level algorithms etc.
      So focus on these places when troubleshooting performance. Regarding integrator meta-data, the only issue could be that you have to increase cache_siz.
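      If you want to review the current cache settings, something along these lines is enough (the exact column list of sl_cache_parm may vary by version):
         [select * from sl_cache_parm order by cache_siz]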

  • Create files from communication methods that do not use files
    • Let's say you are communicating with another system over sockets or over JMS. In such cases, what if the message is corrupted over the queue or the socket; how would you troubleshoot? Similarly, if data is sent to you, how do you see in raw format what was sent before it is processed?
      People generally consider the communication method a sacred entity that no one should touch - in fact, creating custom communication methods is quite easy and I always create a custom one to support the following ideas:
      • Create environment variables UC_SEND_OVER_QUEUE and UC_CREATE_FILE_FOR_MSG
      • The idea is that if UC_SEND_OVER_QUEUE is F, I do not call the components that push the message to the queue. This implies that even if the development environment is not connected to the queue, my events will still go to SC.
        UC_CREATE_FILE_FOR_MSG, if set to T, will write a file in a pre-defined location with the full message that the system was about to push to the queue.
        These two help a lot because I can always see on my own system what the message will look like. It helps in pinpointing where the exact issue is.
        To implement this you just have to modify the post algorithm of the communication method (and then create a custom communication method).
      • Use this for inbound data as well
      • For data coming into integrator, you cannot make the change in the communication method; instead, override the "sl_handle inbound_data" command. In this case use the UC_CREATE_FILE_FOR_MSG environment variable to see if the functionality is enabled (you can also use the communication method attributes). At this point you have "dwnld_info" available along with "data". dwnld_info may have interesting information for creating a file name (e.g. for JMS it has a signature of the message); data is the raw data.
        So you can create the file first and then call the "^sl_handle inbound_data" to call core functionality.
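        A minimal sketch of such an override, assuming the UC_CREATE_FILE_FOR_MSG variable, a hypothetical dump directory, and textual message data (real code would need better file naming and error handling):
           /* override of sl_handle inbound_data */
           if (@@UC_CREATE_FILE_FOR_MSG = 'T')
           {
               /* "data" is the raw message available on the stack at this point */
               [[ new File("/tmp/ifd_dumps/inbound_" + System.currentTimeMillis() + ".dat").text = data ]]
           }
           |
           ^sl_handle inbound_data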

  • More to come later!

Saturday, October 27, 2012

Twitter's role in optimizing IT

What if you could get a tweet that your house temperature has fallen below a critical level; or your basement could report that the humidity is too high; or your oven could tweet that it has been on for too long. The possibilities are endless - can this happen?
This is not a new domain of solutions. Many of these systems already have a mechanism of texting you or calling you. Software systems have been paging for some time now. RedPrairie EMS system can email you, page you, or even ping you on Yahoo IM - but that is the exact problem, there is no consistency in these solutions. Now that everyone has adopted twitter, could we finally have a consistent mechanism for raising alerts?
What if twitter could offer following relaxation to their terms or offer new concepts:
  • Allow programs to create entities that can tweet. So for example when an order is created on amazon, that order could then start tweeting its progress.
  • Programmatically add me as "follower" of this new order
  • Programmatically remove these users after some time.
  • Provide security concept so that if someone looks at my profile, they cannot see this class of "users". Some of the information in these tweets may be confidential - like my address
If something like this could be agreed upon, then we could have home appliances with a twitter symbol on them. Open the manual and it will have a serial number that you can "follow". Since wifi is so ubiquitous now these appliances could simply use that to send important alerts in a consistent manner. I could get tweets from my furnace.
In addition to that orbitz or Delta Airlines would stop calling me or sending me text messages, instead I could simply "follow" the flight number and the flight could be sending tweets to send updates.
As I said - it is not a new concept, but what twitter offers is a way for everyone to talk the same language - if twitter wants to play this game. Twitter's technology allows this - but will their imagination allow it? What do you think?

Thursday, October 25, 2012

RedPrairie WM New Features (Technical)

This is a list of technical enhancements over the last few releases. This list is not complete and if I am wrong on exact versions, please let me know.
  1. Groovy
    • 2007
      This has been around for a while but it needs to be emphasized. Please use it when appropriate - it will make your life much easier. It allows you to create high quality solutions with much less effort, for example:
      • Calling external web services
      • Calling 3rd party java applications, e.g. sftp, svn etc.
      • Represent complex logic
      Groovy should be a key component of your overall solution. If you are not seeing any groovy in your code, make sure you have the right folks in your team. If you are doing in-house development, then invest in some training.
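      To give a feel for it, a groovy block inside any MOCA command is just a pair of double square brackets, and anything on the java classpath (sftp clients, svn libraries, etc.) can be called from within it; a trivial sketch:
         [[
            // plain java/groovy runs here; calls into 3rd party libraries follow the same pattern
            def stamp = new java.text.SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date())
            println "generated stamp " + stamp
         ]]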
  2. RF Forms in Java
    • 2009
      Definitely a good direction to obsolete the old style RF forms in favor of Java. Now you have options to extend base forms rather than the old way of copying code and changing it.
      I would like to see something like DDA work on RF so that people can develop simple forms without writing any code, but we are not there yet. Overall a good feature though.
  3. Jasper Reports
    • 2011.2
      It is part of 2011.2 and it looks promising.
  4. HTTP Protocol used by MOCA
    • 2010+
      Previously MOCA used a proprietary protocol for traffic between clients and servers. Now MOCA uses the HTTP protocol. This is definitely a good direction. The connections are backwards compatible: new versions can open a Classic Port that older clients can connect to, and new clients can connect to older servers by using the "host:port" syntax.
  5. Query level hooks in MOCA
    • 2011
      So you find a badly written query that is deep within the standard code, which is in C or Java. Previously the only option you had was to override the whole component; now it is possible to install a hook that rewrites that query.
      The performance is impressive. I ran a simple "select * from dual" 10,000 times where the hook scanned an array in Java to find the mapped rewritten query. The net overhead was about 2 seconds over the 10,000 executions. Exact times will vary but it is an option in such cases.
      Do not consider this an option for enhancements; only use it when the bug is a bad query deep within standard code that is in Java or C.
  6. Clustering enhancements
    • 2011
      Earlier versions of MOCA were not cluster-aware. If you were using an active-active cluster, you needed to actively manage what runs on each node. Now you have a setup that is controlled by MOCA.
      If you are using active-passive configuration, then you can probably keep using the OS level options but for active-active this will be very helpful.
  7. Active X completely removed
    • 2008
      All Active X components are removed from the GUI and Integrator Explorer is no longer supported. There are ways to get the older Integrator Explorer to work with newer versions, but don't go there. Use the new screens that are part of the DLx Client.
Also see RedPrairie Functional Enhancements

RedPrairie WM New Features (Functional)

In this post I will list some of the new features, from a functional perspective, that I like. This is by no means a complete list and if I am wrong on exact versions, please let me know.
  1. Allocation Rules on sales order and work order
    • 2011
      This is by far the best feature that RedPrairie has incorporated over the last few releases. Previously RP had fields like "lotnum", "revlvl", etc. on ord_line. Now that is replaced by a rule, which is an expression. So you can put in a complete expression like "lotnum in this list and lotnum not in this list" etc. You can combine it with any other inventory attribute.
      They have a concept of "simple rules" and "complex rules". Simple rules are where you simply have an "and" condition for various fields like "lotnum = X and orgcod = Y". This is a direct replacement for old functionality - but what is really interesting is the complex conditions. In complex conditions you can use parenthesis and mix ors and ands.
      As of 2012.1, one feature that is still missing is controlling the order in which allocation considers all inventory satisfied by the rule, e.g. if your rule is "orgcod in india/china/us" but you want to sort by US first - at this time that is not possible without a small customization.
      This could definitely obsolete some of the complicated mods that you may have. For many, this alone may be a good reason to upgrade.
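      To give a feel for the simple versus complex rules described above, the expressions might look like this (values are only illustrative):
         simple rule :  lotnum = 'LOT001' and orgcod = 'US'
         complex rule:  (orgcod = 'INDIA' or orgcod = 'CHINA') and lotnum != 'LOT999'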
  2. Several custom fields on inventory tables
    • 2011
      Some folks incorrectly consider this a replacement for the RedPrairie custom field concept - it is not that. There are several fields on the invdtl table and, along with them, corresponding fields on prtmst. The idea is that if you want to track more than the typical lotnum, orgcod, revlvl, invsts for inventory, you have a lot of options.
      At the prtmst level you give a description for each of these fields and indicate whether it is required for inventory. Then the attribute is tracked for inventory. This, coupled with the Allocation Rules enhancement described above, provides a lot of flexibility in terms of allocations.
      This will most likely be used heavily. Almost all ERPs send information like plant code which, until now, was either not stored in invdtl or was put in fields like revlvl. Now you have a meaningful place for this data.
  3. Allocate by asset type
    • 2011
      You can create an order for a certain asset type.
  4. Trailer Loading is a true movement
    • 2011
      Previously trailer loading was a single move so it did not capture the LM data properly. Now it is a true movement where you can stack pallets on your gun and then deposit to the trailer.
  5. Various Pick Enhancements
    • 2012
    • Users can sign on to multiple list picks at the same time.
    • Cartonized picks can be part of the list
    • You can suspend a list pick and then resume it later
    • You can end a list pick in the middle - a new list pick will be generated for the rest
    • Resize a carton during cartonized picking. While picking, say Pick Exception and then enter the exception. It will re-cartonize and give you a new pick.
  6. Cycle Counting Optimization
    • 2012
      In previous versions, if you had partial cases then in a summary count you had to count in eaches. Now you can say 10 full cases plus some eaches.
      In a detail count, old versions required you to specify quantities in eaches; now you can use higher level UOMs.
  7. Box in Box (cartonize cases)
    • 2008
      Earlier versions allowed each picks to be cartonized, now you can cartonize case picks as well. So a large case will have smaller cases.
  8. Follow the leader while staging
    • 2009
      When you are putting inventory away in staging, if the staging location becomes full you can simply override to another staging location. The remaining picks will be changed to go to the new staging location.
      Flags on aremst are used to set up this functionality.
  9. Pick UOM selection while setting up allocation search path
    • 2009
      You will allocate each picks from a location, but you can force the picker to pick inner packs before eaches.
Also see RedPrairie Technical Enhancements

Tuesday, October 23, 2012

Top Ten Common Mistakes

I initially wanted to title this post as "Top 10 reasons to show your RedPrairie consultant the door" - but that would have been too cold. But these are mistakes I have seen over and over again so wanted to mention them here.
  • 10. Overuse of policies (as opposed to custom columns)
    • RedPrairie custom column architecture is really sophisticated. I have not seen such a concept in any other system where you can simply add custom columns and they appear on forms. The exact steps are so simple:
      1. Add a column to the database. Optionally add column comment
      2. Add an MLS id for the column
      3. Add help for the column
      4. Add custom column to les_cust_fld
        • That is it; now the field will be maintainable on the forms where it makes sense and code will actually be readable. You will be able to press F1 on the field and get help, and "list table columns" will show exactly what the field is for.
  • 9. Not utilizing groovy
    • Groovy is part of the product for several releases now so it should be used when appropriate. Basically if you are about to use "session variables" stop! Probably it is time to use groovy. The overall solution is greatly simplified by groovy but I often see code with 100s of lines of code including session variables that is hard to follow. So use it when it is appropriate.
  • 8. Still not using SQL-92 syntax for SQL joins
    • It is not such a bad thing to use the older join syntax where you join two tables in the where clause, but it is time to switch to SQL-92 syntax. It is database independent and much easier to read. When doing outer joins it is much better than using database-dependent syntax.
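      For example, the same two-table join in both styles (table and column names are only illustrative):
         /* old style */
         [select o.ordnum, ol.prtnum
            from ord o, ord_line ol
           where ol.ordnum    = o.ordnum
             and ol.client_id = o.client_id
             and ol.wh_id     = o.wh_id]

         /* SQL-92 style - switch "join" to "left outer join" when needed,
            with no database-specific (+) or *= syntax */
         [select o.ordnum, ol.prtnum
            from ord o
            join ord_line ol
              on ol.ordnum    = o.ordnum
             and ol.client_id = o.client_id
             and ol.wh_id     = o.wh_id]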
  • 7. Not knowing the difference between "|" and ";"
    • This one really bugs me. I often see code where "|" has been used simply to run a MOCA command "after" another command - not realizing what it is actually doing. Understanding difference between "|" and ";" is very important. Also knowing when to use "{}" (command grouping) is critical as well.
      If you want to publish some data and then execute two commands for that data, don't do
         publish data
         |
         do command 1
         |
         do command 2
         
      Instead do:
         publish data
         |
         {
            do command 1
            ;
            do command 2
         }
         
      This will ensure that "do command 1" and "do command 2" can access published columns from the above command but do not step over each other.
  • 6. Standard GUI form is customized without changing name
    • When this happens, you will not know what has hit you until the upgrade. For example, let's say someone customized standard GUI application xyz which, when it was delivered, was at version 10. The person customizing it keeps it as xyz and increases the version to 11.
      Later on you received a hotfix or an upgrade where standard product now has it at version 20, guess what will show up when the form is launched?
  • 5. Not following naming conventions
    • Custom columns should have UC_ or VC_ prefix. Custom tables should have USR_ or VAR_ prefix (UC_ or VC_ can be tolerated as well). Custom policy nodes should have USR or VAR prefix (UC or VC can be tolerated as well). Custom integrator transactions should have UC or VC prefix (USR or VAR can be tolerated as well).
  • 4. Increase length of a standard column
    • A lot of RedPrairie code is still written in "C" where the buffers are defined with static lengths. Do not even think about increasing the length of a standard column. It may work in some cases, but when it does not work it will be really hard to troubleshoot and could lead to corrupted data or crashes.
  • 3. Not using full key in join - and justifying it!
    • I understand that mistakes happen, but what really bothers me is when a mistake is justified. I often see code where someone joins without client_id or wh_id. When this is pointed out, they say "it is ok because they have a single warehouse or a single client".
      If you hear this justification, it may be time to make some decisions!
  • 2. Using catch(@?) everywhere!
    • Search through your LES folder and look for "catch(@?)". If you see more than a few references (beyond ones that are well justified by comments), you have been hit by extremely bad code. There may be good reasons to use it sometimes, but in many projects I see it all over the place.
      If you must catch an exception, catch exactly what you are looking for, for example catch(-1403,510) or catch(-1). Catching @? implies that even errors like syntax errors or database crashes will be caught and you could get unintended side-effects. So don't do it!
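      For example, tolerate only the "no rows found" case and handle it explicitly, so anything unexpected still raises (table and column names are illustrative):
         [select ordnum from ord where ordnum = @ordnum] catch(-1403)
         |
         if (@? = -1403)
         {
             /* the one condition we expected; handle it deliberately */
             publish data where ordnum_found = 0
         }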
  • 1. Don't worry - data will be archived by then!
    • This one really bothers me because by the time the customer realizes the issue, the consultant is long gone and the problems become hard to manage. The idea of an archive system is that the same exact use cases that work in the online system for completed shipments should work in archive. So I should be able to print the same type of paperwork or see the same analysis reports. A customer should not have to list out the exact use cases for archive; if the archive system is designed properly they would not have any issues. In one case I noticed that someone decided to have wh_id as part of a PK in the archive system when it was not so in the online system - that naturally implies that the same reports will not work in archive.
      I am not a big fan of how archive system concept is designed but that is something we have to live with.

Monday, October 22, 2012

MOCA Client Comparison

Thanks to the open nature of the RedPrairie™ MOCA™ framework, we have a few options when it comes to the clients that can access the MOCA server. I wanted to compare the clients to see how they differ from each other. If there are features that I have missed, let me know so that I can add them here as well.
Feature                      | RedPrairie WinMSQL 2012            | Lexedit                                       | Oracular MOCA Client
Version                      | 2012.1.4                           | 2.8.4.1                                       | 2012.1.0
Technology                   | Microsoft                          | Microsoft                                     | Pure Java
Auto Complete SQL            | No                                 | Yes                                           | Yes
Auto Complete MOCA Commands  | No                                 | Yes                                           | Yes
Security - restrict access   | Yes                                | No                                            | Yes
Trace Viewer                 | Much better than earlier versions  | Very nice; can serve as default trace viewer  | Very nice trace viewer
Explain SQL                  | No                                 | Yes                                           | No
Load CSV files on client     | No                                 | Yes                                           | Yes
mload server files           | No                                 | No                                            | Yes
Edit Server Files            | No                                 | No                                            | Yes; role-based security to disable for some users
Search Source code           | No                                 | No                                            | Yes; searches files and code in database tables
Compare two environments     | No                                 | No                                            | Yes
Automatic Updates            | No                                 | Yes                                           | Yes
Cost                         | Comes with RedPrairie client       | Free - requires registration                  | Free - no registration
Company                      | RedPrairie (www.redprairie.com)    | Orbital Services (www.lexedit.com)            | Oracular (www.oracular.com)
Download From                | n/a                                | www.lexedit.com                               | autoupdate.oracular.com/mocaclient
Here are the details for each feature:
Technology: I do not see this as a really distinguishing feature unless you want to run Oracular MOCA Client on a Mac or Unix.
Security - Restrict Access: This is an important feature. All of these tools basically provide the same capability as "Server Command Operations", so if someone does not have that access, they should not be able to gain it via these tools either. Specifically, the menu option SALSrvCmdOpr controls access to the standard "Server Command Operations" screen which allows a user to execute any MOCA command. Oracular MOCA Client checks whether the user who is attempting to connect has this permission; if not, the user is not allowed access to Oracular MOCA Client either.
Edit Server Files: Often the system administrators do not like to give access to the server to edit source code. Oracular MOCA Client allows the developer to navigate to the files using the MOCA protocol and update the source code. This is really nice for such situations.
mload server files: Same idea as above - if you do not have command line access to the db/data directory, Oracular MOCA Client allows you to navigate to the files, right click on them, and load them via mload.
Search Source Code: In Oracular MOCA Client you can put in a regular expression; it will search through the source code tree on the server and find what you are looking for. It can also create a tree that tells what calls what. It can also search through algorithms and DDAs.
Compare two environments: Oracular MOCA Client allows you to point to two environments and compare the data.
Some distinguishing features of each product are as follows:
RedPrairie WinMSQL
  • It has come a long way since the earlier versions that really prompted folks to develop Lexedit and the Oracular MOCA Client
  • New version offers syntax highlighting via a setup file
  • New version offers multiple connections
Lexedit
This is the first "reaction" to how awful winMSQL really was and it has been constantly improving. Some interesting features include:
  • Very nice overall feel - looks very mature
  • Tips
  • Integrates standard Windows concepts like a customizable toolbar
  • Elaborate language setup - it views MSQL as one of many languages it can support
  • 80 column guide is interesting for writing code that is easily viewable on old terminals
Oracular MOCA Client
This is more of a full development environment (IDE) rather than simply a replacement for RedPrairie WinMSQL:
  • Integrates with Oracular's change management
  • Web Clipboard (experimental). The idea is that you copy text to the cloud and paste from the cloud on another machine.
  • Robot development via groovy
  • Repository of commonly used scripts
  • Save current state in a named file

Sunday, October 21, 2012

How many RedPrairie environments should we have?

    This is a common issue that we have to deal with for any project. There are no absolutely right answers but there are some wrong ones:
    • Having a single production environment
    • Having multiple environments for WMS-online but a single archive environment.
    • Having multiple environments for WMS-online but a single parcel environment
    So as long as you have environments separate from production, that is better than nothing. A good start may be:
    • Development
    • QA
    • Production
    And do not take the secondary environments like archive and parcel lightly. You should have a parallel environment for these as well.
    There is a need for additional environments, but before I discuss that I want to dispel one myth: that a RedPrairie environment is hard to set up! Given that you have an installation, for example a dev environment, setting up an additional environment from the same code-base is trivial. The exact steps are:
    1. Copy the database to the new environment
    2. That's it - because the code will already be the same as the other environment
    With that myth busted, now we can consider some additional environments which will come in handy:
      Environment | Use code from    | Purpose
      Test        | Development      | The development database is generally not a good place for testing code, even for the developers. This environment gives them a place with good data to test with. If you have only a few developers you probably do not need this.
      Train       | QA or Production | Before go-live you probably want this based off QA, but after that, to train the workforce, you want it based on production source code. This environment will typically have pre-configured exercises and data.
      Debug       | Depends          | The important part is that the production database is exported and imported to another place nightly. This helps in diagnosing hard to reproduce issues. Then, based on specific needs, you may point it at development code, QA code, or production code.
    Conclusion
    Development, test, and production is a good start, but if we set up the database free of environment-specific entries and understand that new environments can be created very easily from existing ones, then these other environments can come in handy. These additional environments do not need to be permanent; rather, you can bring them online on an as-needed basis.

    Using a version control system to manage RedPrairie™ source code


    RedPrairie Architecture Overview

    RedPrairie provides a sophisticated mechanism that allows the users to make enhancements to the system such that the changes are preserved on upgrades and can take advantage of the software updates and upgrades.  Such an offering is quite unique in the space of such systems.  This open architecture empowers the RedPrairie user community to create enhancements themselves and they often do that because the risk on upgrades is minimal. 

    Managing the Customizations

    Even though the architecture supports creating customizations, it does not provide a mechanism for managing them.  The “usrint” folder becomes hard to maintain when there is no control over the components.  Oftentimes the only reliable way of moving a change from test to production is to copy everything under the LES folder.
    The problem is compounded for the customizations that are done using data.  This includes customizations like database changes, client side modifications, policy changes, and integrator changes.  While moving the LES folder is relatively easy, taking the data from the database becomes messy. 

    Oracular Approach

    We view the problem as two-fold:
    1. Manage the customizations
    2. Build packages for the customizations
    Manage the customizations
    We consider code and critical data such as policies, integrator mappings, and client side configurations to be part of the same set of “customizations”.  Our methodology dictates that the same principles should apply to all of these - just because a change is realized through code in a text editor in one case and through integrator explorer in another should not imply that the customizations are to be managed differently.  The following concepts are central to our methodology:
    • Customizations (code and data) must be controlled through a version control system.
    • Issues that lead to customizations must be controlled through an issue tracking system.
    • Every customization should be made only as a result of a specific issue.
    • Every object (code or data) should be checked in to the version control system such that the revision ties to a specific issue.
    This is an industry-accepted approach – we have extended it to the RedPrairie code base in an open manner so that the RedPrairie user community can take advantage of that.
    Build packages for the customizations
    Once we have put appropriate discipline and procedures in place for managing the customizations it becomes possible to build packages (commonly referred to as rollouts) for the customizations.  Since every affected object is tied to an issue, we can build a package for a set of issues.  This greatly simplifies the overall task of deploying customizations to the various environments.

    Oracular Solution

    Our approach is based on industry recommended practices and is not unique in principle.  We have adapted  this approach to a solution so that RedPrairie customers can take advantage of the solution without significant investment.  Our solution is made up of the following components:
    SVN Version Control System
    We have created an SVN repository that can be used by the RedPrairie user community to manage their customizations.  This will empower them to take advantage of these concepts with minimum overhead.  Our version control repository is provided through secure “https” protocol along with appropriate authentication model to secure the investment of our clients.  



    Bugzilla Issue Tracking System
    We have deployed “bugzilla” – a leading open source issue tracking system for managing the issues.  Just like SVN, this is provided to the users over secure “https” protocol along with an appropriate authentication model that is consistent with the SVN model.  


    Integrated Issue Tracking and Version Control
    We have created hooks in SVN that tie the individual revisions in the version control system to issues in bugzilla.  This implies that we can look at the affected objects for any issue and similarly we can link to the issue information when viewing the revision log of an object.  Our hooks ensure that every check-in is done against valid issue # so that our data is consistent for creating packages.


    (Figure: affected objects in the repository, viewed from Bugzilla)

    (Figure: a repository item linking back to an issue in Bugzilla)
    Create packages
    Once the issues are tied to the affected revisions, we are able to create RedPrairie compatible rollouts from this data.  We allow the user to select a set of issue numbers or change-sets so that she can then create a rollout package from that.  When creating the rollouts we save the information about the issues that were added to that package.  We also tag the objects with the rollout name in the version control repository so that we can reproduce the rollout at any time – an essential component for a complete system.

    Conclusion

    Our solution allows the RedPrairie user community to take advantage of the open architecture so that they can create enhancements or customizations.  By having the objects in a version control repository along with suitable change control it enforces a methodology for the developers without compromising their productivity.  The productivity gain by following this approach is quite significant.  For more information about this solution and pricing please contact Saad Ahmad (saad.ahmad@oracular.com)

    Copyright Notice
    • RedPrairie is a registered trademark of RedPrairie (http://www.redprairie.com)
    • Oracular is a registered trademark of Oracular (http://www.oracular.com)
    • This document contains information about the RedPrairie software.  It is assumed that recipient has legal access to the said software.

    Managing RedPrairie™ data for the version control system

    This is a continuation of the article that discusses the idea of using a version control system for RedPrairie source code.

    Why worry about the data?

    RedPrairie™ solutions are made up of the following components:
    File Based Objects
    • 3GL source code files written in C, Java, and .NET
    • MOCA based source code files (mcmd and mtrg files)
    • Shell scripts
    Data Based Objects
    • Integrator Transaction Mappings
    • Data Driven Applications (DDAs)
    • GUI and RF configuration settings (les_var tables)
    • Policy settings
    • Message Catalog
    • Help Messages
    • GUI Profiles and Criteria
    • Grid configurations
    • WMS Specific entities like reason codes, order types, etc.
    The file based objects can be managed by operating on the objects directly via traditional tools like text editors and RedPrairie™ Server Command Maintenance application.  Since the objects are file system objects it is possible to control the content in a version control system or migrate individual objects.
    The objects that are kept in the database pose a challenge.  The RedPrairie™ solution provides applications to manipulate this data.  Such applications include Integrator Explorer (this has become part of the core RedPrairie™ client in newer versions), the “Ctrl-Shift-F5” interface, “Message Catalog Maintenance”, the “Profile and Criteria” buttons, etc.  These applications are quite user friendly, but that is what makes it challenging - users can just click and change the object, but later on when we need to migrate the changes to a different environment it becomes a challenge to figure out what was changed and why.

    Oracular Solution

    We take a holistic approach to this problem.  We consider all objects that need to be controlled via the version control system to be the same.  In our view, just because one object is maintained via a text editor and another through a GUI application does not imply that we need to handle them differently at the very basic level.
    Our solution approaches the problem with the following philosophy:
    • We need to have a version control system that tracks the various objects.
    • We need to have an issue tracking system that tracks the reasons why objects need to be changed.
    • The version control system and the issue tracking system need to be integrated.
    • All objects, whether file or data, need to be tracked through the issue tracking system.
    Our philosophy with respect to the data objects, where RedPrairie™ provides a GUI application, is as follows:
    • A specific named RedPrairie™ user will be assigned to a set of issues in the issue tracking system.
    • The user will indicate which issue she will be working on.  She may only work on a single issue at one time.
    • After that, she may navigate through the RedPrairie™ client and make the various changes.
    • As the changes are being made, our solution will track the objects that were changed in a database table.
    • After completing the changes, she will mark the issue as complete.  This will export all of the objects in to their corresponding directories under $LESDIR/db/data folder.  The solution will also optionally check in the objects in to the version control system.  At this time we support integration with SVN repositories.

    Developer Experience

    The developer’s experience will be as follows:
    1. The developer gets an assignment.  The assignment tells her the issue number in the issue management system.

    We use the bugzilla issue tracking system but our approach is compatible with any such system.

    2. She signs on to the “Issue Assignment” application within RedPrairie™.
      a. She adds an entry there and specifies the issue number.

    3. Now she navigates to the various applications and modifies the data from those applications.
      a. For example Integrator maintenance, “Ctrl-Shift-F5”, DDA, Code Master Maintenance, Custom Fields, etc.
      b. We provide an extensive set of these hooks and our solution can be extended to include additional objects and even custom objects.
    4. At any time anyone can audit to see what objects have been modified by that issue.

    5. When she is sure that the change is complete, she signs on to the “Issue Assignment” application again and presses the “Complete” button.

    The system then prompts the user whether the data should also be checked in to the version control system.  Our solution is integrated with SVN repositories.  Version control integration is not required to export the data to files:

      a. The system will go through all of the objects that she touched.
         i.   For add and change actions, export the data that she touched into the corresponding files.
         ii.  For delete, if the list command does not return any rows then delete the file; otherwise handle it like add/change.  Note that even though she deleted an entry, the entry as a whole may still be controlled.  For example, she could have removed a form override for les_var_config.
         iii. The system will then add all of the objects to version control and commit them with a comment that links them to the issue.
      b. Once all of the objects have been checked in to the version control system, the issue will be marked as complete.
      c. If version control integration was being used, the data will show up under the corresponding issue as well:

    These objects are policies that needed to be changed for this change request.

    Conclusion

    Managing the RedPrairie™ data is one of the most challenging aspects of change management.  Once the users see how easy it is to change various complex features, e.g. screen features, policies, integrations, etc. they are inclined to make such changes.  Once the changes are complete, they want to see the same changes in the other environments and that becomes a significant challenge because the data is scattered in various database tables.  Copying the whole database is not an option because that will end up copying transactional data as well. 
    Our solution provides the control while keeping the flexibility that the RedPrairie™ solution provides.  Our solution empowers the users to make the changes in the environment that they are familiar with while keeping track of their changes at the lowest level.  We can then control the changes and migrate them easily to other environments.

    Copyright Notice
    • RedPrairie is a registered trademark of RedPrairie (http://www.redprairie.com)
    • Oracular is a registered trademark of Oracular (http://www.oracular.com)
    • This document contains information about the RedPrairie software.  It is assumed that recipient has legal access to the said software.


    Improved Archiving for RedPrairie

    Overview of the RedPrairie standard solution

    An archive instance is a complete RedPrairie installation.  The data from the primary instance is copied to the archive instance using MOCA remote execution capability.  “Database Archive Maintenance” application is used to define this.  This setup is done on the primary instance.
    RedPrairie uses a policy on the primary instance to define the location of the archive instance.

    •  Use “INSTALLED” to indicate if the system is installed.
    •  Use “REMOTE-HOST” to specify the host and port of the archive environment.
    •  Use “LOCAL-ARCHIVE” to archive data to tables in the same instance.  Use “TABLE-PREFIX” to determine the table name in that case.  This is typically not used because in this case the existing applications cannot be used against the archive instance.

    The setup is generally as follows:
    1. Specify the main table that is being archived.  This is the actual table name.
    2. Specify the duplicate action.  This tells us what to do in case of a duplicate:
       a. Do we ignore the error and continue? (i.e. we do not error out and we do not replace the value - typical for data that is only added)
       b. Do we error out and rollback?
       c. Do we replace the current data and continue?
    3. After this, we specify the “List Command”.  This command will list the rows from the primary data source that need to be archived.  The filter will ensure that we select the proper candidates; for example check statuses and date ranges.
    4. Use “Post Archive Command” to specify the work that needs to be done once this object has been processed; for example we may purge the rows that were archived.
    5. Use “Details” to specify dependent data that needs to be archived.  This also specifies a list command.  The “@” parameters refer to the variables that are returned by the row specified above in the “List Command”.
    The above data is stored in the archdr and arcdtl tables.
    The data is processed as follows:
    1. Fetch the rows from the main table using the “List Command”.  This would return multiple rows.
    2. For each row returned:
       a. Move the row to the archive environment.
       b. For all “Detail” objects:
          i.  Execute the “List Command”.  This utilizes the context for the list command from the main table specified above.
          ii. For each row, move the row to the archive environment.

    Issues with the standard solution

    The above approach is quite sophisticated and works in some cases.  We often see the following types of issues:
    1. It processes one row at a time.  This implies that if we are required to move 200,000 rows (a typical number for some of the large tables), we will require 600,000 independent operations (200,000 to read the data, 200,000 to move the data, and 200,000 to delete the data from the source).
    2. The logic to move the dependent data uses the same row-by-row approach.  For each row in the header table, the data for the detail table is moved one row at a time.
    3. The solution does not provide visibility into the archive process.  The user cannot find out how archiving ran, how many rows were processed, or what the status was.
    4. If archiving fails for a period, or if it is turned on some time after go-live, archiving some of the larger tables like dlytrn becomes extremely difficult.

    Archiving++ by Oracular

    Our solution keeps the best of the RedPrairie solution and focuses on optimizing it for high-volume scenarios.  The key part of our approach is to use database capabilities to transfer the data.  While moving a million rows one row at a time is a significant task, doing it with set-based database techniques is fast and straightforward.  Our solution uses the following techniques:
    • We will use database features to connect multiple database instances.  For example, on Oracle we will use “Database Links” and for SQLServer we will use “Linked Servers”.  Our preferred approach is to co-locate the archive database in the same instance as the online database - this improves the performance even further.
    • We will use the “insert into <target> select * from <source>” technique for moving the data (see the sketch after this list).
    • We will use a “delete” with a where clause rather than a delete that processes the data one row at a time.
    • For updating the archive instance, we will do it in two steps:
      • Delete the target rows
      • Insert the new data
    • We will use correlated queries between the two instances.
    • We will have generic, central components to do the work for any table.
    • We will utilize the information in the database data dictionary to streamline the process of moving the data from the primary instance to the archive instance.
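    As a minimal sketch of the core data-movement technique, the statements below move aged rows over an Oracle database link and then purge them from the source in a single set-based pass.  The link name (ARCDB), the table (dlytrn), the date column, and the 90-day filter are illustrative placeholders rather than part of the product setup:

        -- Assumes an Oracle database link named ARCDB pointing at the archive instance;
        -- the table, column, and filter below are placeholders for illustration only.
        insert into dlytrn@arcdb
        select *
          from dlytrn
         where trndte < sysdate - 90;

        -- Once the insert succeeds, the same filter drives a set-based purge of the source.
        delete from dlytrn
         where trndte < sysdate - 90;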
    Our solution will employ the same database objects and applications as the RedPrairie standard solution.  This will allow the users to easily switch from the RedPrairie jobs to the jobs that we provide.

    Implementation Details

    All of archiving can be a single job or a small set of jobs
    Our solution allows for setting up a single job that runs a set of pre-defined archives, as opposed to running each archive one at a time.

    Archive Job Number (uc_ossi_arcnum)
    Our solution introduces a new concept of an “Archive Job Number”.  Whenever archiving starts, we will set up a new archive job number, and all of the data archived in that run will be mapped to that archive job number.

    uc_ossi_arcnum Column
    All of the tables that are to be archived using this approach will need a new column called “uc_ossi_arcnum”.  This will contain the “Archive Job Number” of the last job that archived this specific row.
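    As a minimal sketch of this one-time preparation (using shipment as an illustrative table name; the datatype and the optional index are assumptions, not prescribed by the solution):

        alter table shipment add uc_ossi_arcnum number(12);

        -- An index on the new column helps the transfer and purge steps find marked rows quickly.
        create index shipment_arcnum_idx on shipment (uc_ossi_arcnum);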

    uc_ossi_job_log Table
    We will use this table to log the progress of an archive job.  It will provide visibility to the progress of the archive process.  It can be used to compare the performance of archiving over time.  This table will have the following columns.
    Column Name            Type                Comments
    uc_ossi_job_seq        Number(12) – PK 1   A unique number for every execution of the job.
    uc_ossi_job_log_seq    Number(12) – PK 2   A job can have several steps; this identifies the step.
    uc_ossi_job_id         Varchar2(200)       The name of the job.  For archiving it is the job name as defined in RedPrairie schedule agent operations.
    uc_ossi_job_num        Varchar2(200)       A number used to identify an instance of the job.  For archiving it stores the archive job number, i.e. the uc_ossi_arcnum value.
    uc_ossi_job_module     Varchar2(200)       Job module.  For archiving it stores the archive name (as defined in the archdr table).
    uc_ossi_job_action     Varchar2(200)       Job action for a given job module.  For archiving it stores the name of the table being processed, prefixed with “MARK-” or “ARC-” based on the type of work being performed.
    uc_ossi_job_start_dt   Date/Time           Start date/time of the operation.
    uc_ossi_job_end_dt     Date/Time           End date/time of the operation.
    uc_ossi_job_cmpl_cd    Varchar2(20)        Completion status: SUCC or FAIL.
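    A possible Oracle DDL rendering of the column list above (the constraint name and the not-null choices are assumptions for illustration):

        create table uc_ossi_job_log
        (
            uc_ossi_job_seq       number(12)     not null,  -- unique number for every job execution
            uc_ossi_job_log_seq   number(12)     not null,  -- step number within the job
            uc_ossi_job_id        varchar2(200),            -- job name from schedule agent operations
            uc_ossi_job_num       varchar2(200),            -- job instance; the archive job number
            uc_ossi_job_module    varchar2(200),            -- archive name (from archdr)
            uc_ossi_job_action    varchar2(200),            -- table being processed, prefixed MARK- or ARC-
            uc_ossi_job_start_dt  date,                     -- start date/time of the operation
            uc_ossi_job_end_dt    date,                     -- end date/time of the operation
            uc_ossi_job_cmpl_cd   varchar2(20),             -- completion status: SUCC or FAIL
            constraint uc_ossi_job_log_pk primary key (uc_ossi_job_seq, uc_ossi_job_log_seq)
        );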
                                   
    This table closely follows the Oracle concepts of “client_info”, “module”, and “action” that are used to monitor the sessions.  We have put the information in a de-normalized fashion to simplify the query capabilities.   For a typical run the data would look like:

    Seq   Log_seq   Id      Num   Module      Action              Start_dt          End_Dt            Cmpl_Cd
    10    1         JOB01   72    %           %                   1/1/09 00:00:00   1/1/09 00:10:00   SUCC
    10    2         JOB01   72    ARCHIVE01   %                   1/1/09 00:00:00   1/1/09 00:08:00   SUCC
    10    3         JOB01   72    ARCHIVE01   MARK-SHIPMENT       1/1/09 00:00:00   1/1/09 00:01:00   SUCC
    10    4         JOB01   72    ARCHIVE01   MARK-SHIPMENT_INS   1/1/09 00:01:00   1/1/09 00:02:00   SUCC
    10    5         JOB01   72    ARCHIVE01   MARK-MANFST         1/1/09 00:02:00   1/1/09 00:03:00   SUCC
    10    6         JOB01   72    ARCHIVE01   MARK-PCKWRK         1/1/09 00:03:00   1/1/09 00:04:00   SUCC
    10    7         JOB01   72    ARCHIVE01   ARC-SHIPMENT        1/1/09 00:04:00   1/1/09 00:05:00   SUCC
    10    8         JOB01   72    ARCHIVE01   ARC-SHIPMENT_INS    1/1/09 00:05:00   1/1/09 00:06:00   SUCC
    10    9         JOB01   72    ARCHIVE01   ARC-MANFST          1/1/09 00:06:00   1/1/09 00:07:00   SUCC
    10    10        JOB01   72    ARCHIVE01   ARC-PCKWRK          1/1/09 00:07:00   1/1/09 00:08:00   SUCC
    10    11        JOB01   72    ARCHIVE02   %                   1/1/09 00:08:00   1/1/09 00:10:00   SUCC
    10    12        JOB01   72    ARCHIVE02   MARK-BOMHDR         1/1/09 00:08:00   1/1/09 00:09:00   SUCC
    10    13        JOB01   72    ARCHIVE02   ARC-BOMHDR          1/1/09 00:09:00   1/1/09 00:10:00   SUCC

    We will commit the entries in this table right away (without impacting the active transaction).  This will allow the users to query the table to see the current status of the activity.  It has summary rows for the whole job and the whole archive.  The user will be able to use this to compare the performance over time.
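    One possible way to commit these log rows right away without impacting the active transaction on Oracle is an autonomous-transaction procedure.  The procedure name and signature below are illustrative only and are not part of the standard product:

        create or replace procedure uc_ossi_log_job_step
        (
            p_job_seq      in number,
            p_job_log_seq  in number,
            p_job_id       in varchar2,
            p_job_num      in varchar2,
            p_job_module   in varchar2,
            p_job_action   in varchar2,
            p_cmpl_cd      in varchar2
        )
        as
            pragma autonomous_transaction;   -- lets us commit without touching the caller's transaction
        begin
            insert into uc_ossi_job_log
                   (uc_ossi_job_seq, uc_ossi_job_log_seq, uc_ossi_job_id, uc_ossi_job_num,
                    uc_ossi_job_module, uc_ossi_job_action, uc_ossi_job_start_dt, uc_ossi_job_cmpl_cd)
            values (p_job_seq, p_job_log_seq, p_job_id, p_job_num,
                    p_job_module, p_job_action, sysdate, p_cmpl_cd);
            commit;   -- commits only this autonomous transaction
        end;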

    If we encounter an error, we will update the corresponding row in the above table with “FAIL”.  The error details will be available in the uc_ossi_job_err_log table.  That table will refer to the above table so we will know exactly which step failed and how.

    The users can use the API for the above tables in their own jobs so that the information from those jobs is also available in this fashion.

    Application to View the job history
    This DDA will allow the user to query the data from the uc_ossi_job_log table.  The user will be able to look at the errors for a given job from here as well.

    Policy for storing archive parameters
    The standard RedPrairie implementation uses hard-coded values within the “list command” to specify the appropriate filter criteria.  Our solution will use the following policy structure.

    Policy Code = USR-OSSI-ARC.  Warehouse Id will always be “----“
                    Policy Variable = MISCELLANEOUS
                                    Policy Value = DBLINK
                                                    Set rtstr1 value to the name of the database link

                    Policy Variable = DEFAULT
                                    Policy Value = Arbitrary Text, e.g. NUM_DAYS_FOR_SHIPMENTS
                                                    Set rtstr1, rtnum1, etc.
                    Policy variable = DEFAULT-Specific Archive (from archdr table)
                                    Policy Value = Arbitrary Text, e.g. NUM_DAYS_FOR_SHIPMENTS
                                                    Set rtstr1, rtnum1, etc.

    Before executing the archiving logic for a specific archive, we will fetch data from this policy and publish the various parameters onto the stack.  For example, for the case above, a variable called “ARC_PARM_NUM_DAYS_FOR_SHIPMENTS” will be available on the stack.  The various “List Commands” can then use this variable.

    The policy variable of a specific archive will override the default value.
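    As an illustration, the rows below would configure the database link name, a default parameter, and an archive-specific override.  The poldat column names follow the standard policy table layout but are assumptions here, as is the archive name “ARCHIVE01”; additional required columns may apply depending on the RedPrairie version:

        -- Database link used by improved archiving
        insert into poldat (polcod, polvar, polval, srtseq, rtstr1, wh_id)
        values ('USR-OSSI-ARC', 'MISCELLANEOUS', 'DBLINK', 0, 'ARCDB', '----');

        -- Default parameter, published to the stack as ARC_PARM_NUM_DAYS_FOR_SHIPMENTS
        insert into poldat (polcod, polvar, polval, srtseq, rtnum1, wh_id)
        values ('USR-OSSI-ARC', 'DEFAULT', 'NUM_DAYS_FOR_SHIPMENTS', 0, 90, '----');

        -- Override the default for one specific archive defined in archdr
        insert into poldat (polcod, polvar, polval, srtseq, rtnum1, wh_id)
        values ('USR-OSSI-ARC', 'DEFAULT-ARCHIVE01', 'NUM_DAYS_FOR_SHIPMENTS', 0, 180, '----');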

    Archiving Logic
    1.       Generate a new archive job number.
    2.       For each of the archives that needs to be executed:
    a.       Verify that the “uc_ossi_arcnum” column exists on the main table.  If not, raise an error.
    b.      Use the “List Command” to define a command that marks the rows that need to be archived.  This command uses an “update” statement with a “where” clause – which is much faster than selecting the data and updating by primary key.  We will stamp the rows with the archive job number generated above.
    c.       For each object in the “Details”:
    i.        Verify that the “uc_ossi_arcnum” column exists on this table.  If not, raise an error.
    ii.       Use the “List Command” to mark the rows (same as 2.b above).  The command uses a correlated query against the main table, with suitable join criteria, to accomplish this.  We will not scan the primary table rows one at a time – a single update accomplishes this.
    d.      By this time we will have marked the rows in the primary table and all of the dependent tables with the archive job number.  Now we go through all of these tables and transfer them to the archive instance (see the SQL sketch after this list).
    We will use database links (on Oracle) or Linked Servers (on SQLServer) to move this data to the archive instance.  We will handle the various “Duplicate Actions” as follows:
    Duplicate Action: <blank>
    Logic: Do not archive.

    Duplicate Action: Error and Rollback
    Logic: Check whether any of the rows marked with the current archive number already exist on the archive instance.  If so, raise an error and do not proceed.  The check is done using “select from <primary table> where uc_ossi_arcnum is current archive# and exists ( select 1 from <archive table> where <primary key join>)” syntax.

    Duplicate Action: Ignore and Continue
    Logic: Add data on the archive instance for the current archive job.  We will only add the data which does not already exist on the archive instance.  We will do this using “insert into <archive table> ( column list ) select column list from <primary table> where uc_ossi_arcnum is current archive# and not exists ( select 1 from <archive table> where <primary key join>)” syntax.

    Duplicate Action: Replace and Continue
    Logic: Add the data on the archive instance for the current archive job.  If the data already exists on the archive instance then replace it.  In database operations an update is more expensive than a delete plus an insert, so we will first delete such rows in the archive instance and then add them again, using correlated delete and insert statements.

    e.      Now run the “Post Archive” command.  This command typically has a purge command that purges the main table and the dependent data.  If that is the goal, then this command does not need to be used and instead the “purge_flg” may be checked.
    f.        If “purge_flg” was checked, then delete the rows from the tables that were identified earlier (Step 2.d).
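    A condensed SQL sketch of steps 2.b through 2.f, using illustrative names only: shipment as the main table, shipment_line as a detail table, ship_id as the joining key, ARCDB as the database link, and 72 as the archive job number generated in step 1:

        -- Step 2.b: mark candidate rows in the main table with the archive job number
        update shipment
           set uc_ossi_arcnum = 72
         where shpsts = 'C'
           and adddte < sysdate - 90;       -- the filter comes from the archive's "List Command"

        -- Step 2.c: mark dependent rows with a single correlated update (no row-by-row scan)
        update shipment_line dtl
           set uc_ossi_arcnum = 72
         where exists (select 1
                         from shipment hdr
                        where hdr.ship_id = dtl.ship_id
                          and hdr.uc_ossi_arcnum = 72);

        -- Step 2.d: transfer marked rows over the database link ("Ignore and Continue" shown)
        insert into shipment@arcdb
        select *
          from shipment pri
         where pri.uc_ossi_arcnum = 72
           and not exists (select 1
                             from shipment@arcdb arc
                            where arc.ship_id = pri.ship_id);

        -- Step 2.f: purge the marked rows from the primary instance once the transfer succeeds
        delete from shipment
         where uc_ossi_arcnum = 72;

    The detail tables are transferred and purged with the same pattern, and for “Replace and Continue” a correlated delete on the archive instance precedes the insert.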

    The users will still use the “Database Archive Maintenance” application.  The setup will be as follows:


    Archive Table
        Name of the database table being archived.
        Compared to the standard RedPrairie solution: Same.

    Maximum Rows
        Our approach is to move the data in a single transaction.  Since we are using set-based database operations, we believe this setting is not needed.
        Compared to the standard RedPrairie solution: Our solution works as if this was set to 0.

    Duplicate Action
        The RedPrairie-provided values of “Error and Rollback”, “Ignore and Continue”, and “Replace and Continue” will be respected.  If it is left blank then we will not archive the data at all; this can be used in conjunction with “Purge” to simply purge the data.
        Compared to the standard RedPrairie solution: Generally the same.  The handling of a blank entry is different.

    Status File
        We provide a more elaborate monitoring environment, so this field is not used.
        Compared to the standard RedPrairie solution: Does not apply to our solution.

    List Command
        The command will be used to mark the rows rather than list them.  We will allow a blank in this field when specifying the “Details”.  If it is blank then we will assume that the detail table is related to the main table on the main table’s primary key and that the column names for the main table’s primary key exist in the detail table as well.
        Compared to the standard RedPrairie solution: Our solution uses the field differently.

    Purge?
        The data from the main table and the detail tables is purged.
    Custom Columns
    Use Improved Archiving?
        A flag called uc_ossi_arc_flg will be added to the archdr table.  The archives that use this technique should set this flag to True.

    Job Set
        A new text field called uc_ossi_job_set will be added to the archdr table.  Several archives can specify the same value here; a single job may then be scheduled to run all of those archives.  See Step 2 in “Archiving Logic” above.
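    A possible DDL sketch for these control columns (the datatypes and the numeric flag convention are assumptions for illustration):

        alter table archdr add uc_ossi_arc_flg number(1) default 0;   -- 1 = use improved archiving
        alter table archdr add uc_ossi_job_set varchar2(200);         -- groups archives into one job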


    Standard Command Override
    RedPrairie uses the “archive production data” command in order to archive the data.  We will override this command to look at the “uc_ossi_arc_flg” column for the passed-in archive.  If the archive is flagged, we will use the improved archiving technique described here; otherwise it will default to the standard RedPrairie logic.

    Summary

    Archiving and purging are critical for every RedPrairie installation.  Some of the RedPrairie tables grow rapidly, and if the data is not archived properly this can lead to significant performance degradation.  Because the data is moved with set-based database operations rather than row by row, we expect a manifold performance improvement over the standard approach.
    Customers will find the improved job visibility useful as well.  The users will be able to see details about each job execution, and the archived rows are stamped with the specific job that moved them, which helps during troubleshooting.

    Copyright Notice
    ·          RedPrairie is a registered trademark of RedPrairie (http://www.redprairie.com)
    ·          Oracular is a registered trademark of Oracular (http://www.oracular.com)
    ·          This document contains information about the RedPrairie software.  It is assumed that the recipient has legal access to the said software.