
Tracking Partial Progress in IFS Data Migration Manager Jobs

  • Martin Surasky
  • Sep 29
  • 10 min read

How were things before?

A long time ago, in a galaxy far away, we had IFS APPS 10. Back then, IFS could be licensed with what was affectionately known as the "MTK", the acronym people used more often than not when referring to the "Migration Toolkit".


The Migration Toolkit was meant to be a one-stop shop for your end-to-end migrations. There was a clear separation between using the MTK and plain "Migration Jobs". The terms were confusing (as many things in IFS were) because both names seemed to suggest a similar purpose. Yet, those with some exposure to both tools quickly realized that where the Migration Toolkit shined (delivering end-to-end migrations), the "Migration Jobs" seemed not to deliver, and where they did deliver (targeted migration of a single IFS view), the MTK was overkill.


For quite some time, that was the lay of the land: use Migration Jobs when you need to migrate only a couple of views, especially if you need to customize the APIs used for migration (not just plain Create or Update, but calling custom methods each time you process a row), and use the MTK when your migration involves your whole Legacy estate (especially the initial migration of data when you end your journey with your Legacy ERP and move into IFS).


Fast forward to IFS Cloud and the rules have changed. The MTK is no longer an option (that is not strictly 100% true, as I have heard Astra Canyon still licenses their version of the MTK, but for the sake of focusing on what IFS provides we will say it is). What IFS offers these days is a tool meant to fully replace the MTK; it was initially called Smart Data Manager and is now known as Data Migration Manager (the word "Migration" came back to the product name, which I think was necessary to clarify its purpose again). Here is the thing: the Data Migration Manager (or DMM, because we all like to use as many acronyms as we can) is a young product compared to other IFS offerings and still shows some lack of maturity in terms of functionality and design. Whether those shortcomings will eventually be addressed or we will have to learn to live with them, no one knows. Time will tell.


To be fair, the MTK was never good at dealing with some things. I could name a list of things that bothered me back then, but in hindsight the MTK was far more robust than the DMM if your intention is to do lift-and-shift migrations (in other words, take your data from a Legacy system and adjust it to be consumed by IFS). One area where the MTK was particularly weak was reporting progress on migrations running in the foreground. No progress bar, no percentage, nothing... just wait (sometimes hours) and pray to the gods of ERP migrations that all is going well.


Of course, there was an alternative (as there always seems to be if you look hard enough), which was to run the migration job in the background and open a screen called Container Processing Rates. On that screen you could see the total row count of your container and the number of rows processed so far. You could use the IFS front end or query the table holding that information yourself via Oracle SQL; how you chose to get the information was up to you, but the option was good enough for most of us running migrations with containers taking multiple hours to complete.


How about now?

Unfortunately, IFS has not invested a lot of R&D budget in improving time and partial-progress reporting in Data Migration Manager. If anything, things are worse than they used to be. Why? Well, the thing that used to bother us, foreground activities freezing IFS for as long as the activity is being performed, is still the case. Even worse, the workaround we used to have, the "Container Processing Rates" screen, is gone. There is no form that lets you see progress "in flight". You would think that, as time passes, the new incarnation of IFS would offer an improvement over what we had at our disposal before, but when it comes to MTK vs. DMM processing-rate reporting, that is certainly not the case.


There is hope

One thing that remains available to us in DMM is the ability to run our migrations as Background Jobs. To work around the conundrum we are in these days, we can start by running our migrations as Background Jobs and obtaining information about them either via the Background Jobs form or through some Oracle SQL queries.


Running a migration job in the background is as easy as selecting Background when the fly-out form shows up. That fly-out appears when you request any of the major activities that DMM performs, among them:

  • Loading Legacy Data
  • Moving data sitting in the Legacy Data Container into the Input Container
  • Moving data from the Input Container into the Output Container
  • Running Validations (Meta Data or Basic Data)
  • Deploying into the IFS tables (with or without commit)

 

For example, this is what the Legacy Data fly-out form looks like...

Run as Background Switch on the Load File From Client fly-out

Some forms offer even more options for execution, such as Scheduled At, Interval Of, Daily At, and Weekly On. Frankly, I have not had a case where I found those modes of execution useful, but I can see how you might find good uses for them.


Once we execute a migration task as a background job, we can inspect it in several different ways. One is directly via the Background Jobs form. An easy way to find the jobs that belong to DMM is to use this combination of filters:

  • Created By: [MY_USER_GOES_HERE]

  • Description:

    • LIKE 'Loading of File%'

    • LIKE 'Add Data to Input Container%'

    • Equal to 'Transfer Data From Input Container to Output Container'

    • Equal to 'Validate Basic Data of Output Container'

    • etc.

  • Posted: [USE_A_DATE_RANGE_HERE]


With these three filters you should be able to locate the migration task easily.

Obtaining my Migration Jobs using the right set of filters

I have noticed that the Background Jobs screen can be a bit tedious to work with for a few reasons: it takes some time to refresh (on systems with a substantial number of jobs or where jobs are not purged very often) and it is difficult to find the specific job I want (when there are lots of migration jobs). Also, there is more search flexibility when going directly against the TRANSACTION_SYS_LOCAL_TAB table (the table that supports this form) via Oracle SQL. For example, here is a query I would use to obtain a similar list.


Obtaining the same information but using Oracle SQL
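In case the screenshot is hard to read, here is a minimal sketch of that kind of query. The table name and the ARGUMENTS_STRING, STARTED and EXECUTED columns are the ones discussed in this post; the other column names (ID, DESCRIPTION, STATE, POSTED, POSTED_BY) are how I remember them on my system, so treat them as assumptions and verify them against your own TRANSACTION_SYS_LOCAL_TAB.


-- Sketch: locate my DMM background jobs directly in Oracle SQL.
-- Column names other than ARGUMENTS_STRING, STARTED and EXECUTED are assumptions.
SELECT id,
       description,
       state,
       posted,
       started,
       executed
  FROM transaction_sys_local_tab
 WHERE posted_by = 'MY_USER_GOES_HERE'
   AND posted >= TRUNC(SYSDATE)                               -- today's jobs only
   AND (   description LIKE 'Loading of File%'
        OR description LIKE 'Add Data to Input Container%'
        OR description = 'Transfer Data From Input Container to Output Container'
        OR description = 'Validate Basic Data of Output Container')
   AND arguments_string LIKE '%CUSTOMER_INFO%'                -- narrow down to one container
 ORDER BY posted DESC;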

Notice a few things about this search criteria: I'm going directly into the ARGUMENTS_STRING field to look for migration jobs that are specifically related to the CUSTOMER_INFO container. This is yet another way of narrowing down what you want to search for.


Searching in this manner is way more flexible and responsive than using the IFS UI.


So far, what I have shown you doesn't deal with our root issue: the inability to anticipate progress. Having said that, if you are attentive to detail you may have noticed that the last column in my query is EXECUTED – STARTED. The idea is that, if I subtract the start time from the time the job finished executing (two DATE columns in the TRANSACTION_SYS_LOCAL_TAB table), I get a NUMBER. That number represents the difference in days (including fractional days for hours, minutes and seconds).


For example:

SELECT TO_DATE('2025-09-27 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
     - TO_DATE('2025-09-26 20:00:00', 'YYYY-MM-DD HH24:MI:SS') AS diff
  FROM dual;

This would return 0.75 — meaning three-quarters of a day (18 hours).


So the result is always a numeric value in days.


So EXECUTED – STARTED yields the numeric representation of the duration. I can turn that number into a readable string using the following formula:


TRUNC(((EXECUTED - STARTED) * 86400) / 3600) || ':' ||
LPAD(TRUNC(MOD(((EXECUTED - STARTED) * 86400), 3600) / 60), 2, '0') || ':' ||
LPAD(MOD(((EXECUTED - STARTED) * 86400), 60), 2, '0') AS hh_mm_ss

The formula is simpler than it seems. As I said, the number represents the difference in days (including fractional days for hours, minutes and seconds). Multiplying it by 86400 (the number of seconds in a day) turns the expression into seconds, and dividing by 3600 (the number of seconds in an hour) turns it into hours; the TRUNC function (which truncates decimals) then keeps only the whole hours, which we place in the first part of the string (this normally stays below 24, unless you are running migration jobs that take more than a day, which in most cases is very unlikely). We then use similar logic with MOD to obtain the minutes and seconds. As a result, a number such as 0.00118055555555555 turns into 0:01:42.


Now that we can calculate the duration in Oracle SQL, we can add it to the query from before. Here is how it looks now (with formatted date-times and some results from my system):


Inspecting my Migration Background Jobs and the execution time. Time is in a friendly format
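For reference, a minimal sketch of the query at this stage could look like the one below. The duration expression is exactly the formula explained above; the EXECUTED IS NOT NULL filter (to keep only finished jobs) and the POSTED_BY and POSTED column names are my assumptions to verify in your environment.


-- Sketch: my DMM jobs with their duration pretty-printed as H:MM:SS.
SELECT id,
       description,
       TO_CHAR(started,  'YYYY-MM-DD HH24:MI:SS') AS started,
       TO_CHAR(executed, 'YYYY-MM-DD HH24:MI:SS') AS executed,
       TRUNC(((executed - started) * 86400) / 3600) || ':' ||
       LPAD(TRUNC(MOD((executed - started) * 86400, 3600) / 60), 2, '0') || ':' ||
       LPAD(TRUNC(MOD((executed - started) * 86400, 60)), 2, '0') AS hh_mm_ss
  FROM transaction_sys_local_tab
 WHERE posted_by = 'MY_USER_GOES_HERE'
   AND arguments_string LIKE '%CUSTOMER_INFO%'
   AND executed IS NOT NULL                                   -- only finished jobs have a duration
 ORDER BY posted DESC;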

Baselining

What we want to do next (to anticipate the percentage complete) is to start collecting these durations as a baseline. In my own migration projects I use something like this:


Baselining in these types of projects is key

Baselining is critical because it allows us to anticipate the execution time of all our containers. IFS migrations normally follow a "rinse and repeat" approach: we migrate, we collect issues, we fix them and we go through it all over again with cleaner data. This means that for most of a migration project we migrate the containers many times. Armed with our baseline, we can now anticipate how long each phase of our migration will take for each of our containers. I personally like to track the data you see here, but you could build something similar adjusted to your own liking. The important thing is that we now have a solid understanding of where our pain points are.


This example comes from a small data set. Notice that, with the exception of PART_CATALOG and INVENTORY_PART, all the other containers are very small. It is no surprise, then, that most of the execution times are measured in seconds, with these two containers being the exceptions. Notice also that column B holds the row counts. This is critical, because these baselines obviously only make sense for as long as my data sets remain more or less the same. If, for any reason, my migration project requirements change and a different set of parts is required that expands the rows in my Legacy container from 15k to 30k, I can't just assume the baseline will double as well, because IFS run time does not necessarily grow linearly with the rows processed. In other words, doubling your container's rows may only add a few minutes and not necessarily multiply your execution time by two.


Another issue related to baselining is that, if you will be deploying to different environments, you have to be aware that processing times can (and do) vary. For example, in my current project the Production environment has more processing power allocated and, as a result, my baseline for column H ("To IFS Tables") is reduced by about 30% when migrating there.


That's great, but now what?

Having this baseline helps us mainly on two fronts. On one side, it tells us whether it makes sense in the future to run a migration step for a particular container in the foreground or the background. My personal rule of thumb is that if something takes less than 30-45 seconds, I'm OK with my IFS screen being frozen while I wait. If it takes a couple of minutes or more, I will certainly move it to a Background Job.

This baseline also helps us with the problem that started this blog post: the percentage left to finish. Let's take something that we know takes a while, in this case the 22:20 needed to perform Basic Data validations against the INVENTORY_PART container. We can reverse the logic we used to convert a numeric representation into a pretty-printed string, like this...


Converting our string representation of time back to a numeric
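A minimal sketch of that reverse conversion, using SUBSTR to parse the HH24:MI:SS string (the screenshot may do it slightly differently; take this as one possible way):


-- Sketch: turn the 00:22:20 baseline string back into a fraction of a day,
-- the same unit produced by EXECUTED - STARTED or SYSDATE - STARTED.
SELECT (  TO_NUMBER(SUBSTR('00:22:20', 1, 2)) * 3600          -- hours to seconds
        + TO_NUMBER(SUBSTR('00:22:20', 4, 2)) * 60            -- minutes to seconds
        + TO_NUMBER(SUBSTR('00:22:20', 7, 2))                 -- seconds
       ) / 86400 AS baseline_in_days
  FROM dual;

For 00:22:20 this returns roughly 0.0155 (1,340 seconds out of the 86,400 in a day).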

… we end up with a number that we can compare against our progress so far, using a slightly different expression: SYSDATE - STARTED instead of EXECUTED - STARTED. Whereas EXECUTED - STARTED gave us the total execution time for the baseline, SYSDATE - STARTED tells us how long our current job has been executing. We can then divide these two numbers to get a percentage.

 

The next time we run this activity, we can run this query:


How much left we have in order to complete the time (approximately)
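Here is a minimal sketch of what that query could look like. The &duration substitution variable is the one discussed below (I enter 00:22:20 when prompted); the POSTED_BY column name and the assumption that EXECUTED stays NULL while a job is still running are mine to verify against your system.


-- Sketch: approximate percentage complete for a job that is still running,
-- comparing elapsed time (SYSDATE - STARTED) against the baseline duration.
-- In SQL*Plus you can use &&duration so the client only prompts once.
SELECT id,
       description,
       TO_CHAR(started, 'YYYY-MM-DD HH24:MI:SS') AS started,
       ROUND( ((SYSDATE - started) * 86400)
              / (  TO_NUMBER(SUBSTR('&duration', 1, 2)) * 3600
                 + TO_NUMBER(SUBSTR('&duration', 4, 2)) * 60
                 + TO_NUMBER(SUBSTR('&duration', 7, 2)) ) * 100, 1) AS pct_complete_approx
  FROM transaction_sys_local_tab
 WHERE posted_by = 'MY_USER_GOES_HERE'
   AND arguments_string LIKE '%INVENTORY_PART%'
   AND executed IS NULL                                       -- job is still executing
 ORDER BY started DESC;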

Of course, this script comes with a big disclaimer: your percentages are an approximation. You cannot know for sure how long your run will be, but having a baseline to compare against is the closest thing you have to a real "progress bar". One more thing before I go: notice that in the query above I use a substitution variable (&duration). Substitution variables in Oracle aren't part of SQL itself; they are a feature of Oracle client tools like SQL*Plus, PL/SQL Developer and SQL Developer. They let you insert values dynamically into your SQL statements at runtime, almost like a "find and replace". Upon running my SQL statement, the client prompted me for a value, and in this case I entered 00:22:20 (which is the total duration of this task according to my baseline).


In Closing

That's all for now. You now have a few scripts you can add to your arsenal. The rest is homework: go run your jobs, find out how long they take, and make sure to keep a record of that information in an Excel file like the one I showed. From now on you will always know how much time is left and whether it makes sense to wait for a task to finish or to go get yourself a coffee and come back later 😊


In the next blog entry in this series I plan to cover how to implement "push" notifications via events and event actions, and how to report the run time inside those push notifications. I will also show you how I used Tasker to play a special sound on my smartphone when my jobs have ended (so I don't need to wait for them at my computer).

As with many things in IFS, there are many different ways to solve the same problem. If you have had to deal with similar challenges in your migrations and used a different approach, what did it look like? Feel free to comment here and let readers know other smart ways to track progress in IFS migration jobs.


Thanks for reading!


 
 
 
