Maximo Automation Scripting Practices

I hesitate to call this post a “Best Practices” post, because who am I to prescribe a best practice? I won’t call it that. Instead, this is a list of practices that I think are sane and maintainable over the long term.

I am currently on a project overhauling a Maximo instance that has gotten to the point where it is hard to maintain and debug, and there are bugs aplenty. It is a spiderweb of duplicative workflows and automation-script spaghetti code. This instance reveals some dangers of “low code” platforms: people seem to get the idea that they do not have to be concerned with clean programming practices or long-term maintainability.

Write good clean code in Automation Scripts

This should be obvious, but it apparently is not. Just because we are using a script interpreter built into a web application doesn’t mean we should be lazy about it. Properly comment and structure your code.

Use good script and launchpoint names

Something that I like to do is name my scripts for the object and method they are working on: ASSET.SAVE, for example, or WORKORDER.WORKTYPE.VALIDATE. This makes it easy to find and understand what these scripts are for. Name the launchpoint the same, because you will need to migrate it as well, and matching names will make that easier on you.

# Script: ASSET.SAVE
# Launchpoint: ASSET.SAVE

from psdi.mbo import MboConstants

def setLocation():
    # NOACCESSCHECK bypasses field-level security when setting the value
    mbo.setValue("LOCATION", "FOO", MboConstants.NOACCESSCHECK)

def setSomethingElse():
    mbo.setValue("SOMETHINGELSE", "FOO", MboConstants.NOACCESSCHECK)

setLocation()
setSomethingElse()

The example above shows a pattern I’ve been following. Keeping all logic related to asset.save in one script, instead of spreading it across many scripts with their own unique names, is much easier to maintain and debug. I have seen examples on my current project where seemingly no pattern was followed, and finding all the bits and pieces of the business logic is more of a scavenger hunt than it should be.

Using Python function declarations (def myFunction():), I can use good program structure to keep each piece of logic clean and easy to modify, or to turn on or off. If you are just testing something, or the customer asks you to disable a behavior, it’s as simple as removing or commenting out the function call at the bottom of the script.
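For example, if the customer asks to temporarily turn off the location default in the script above, the change is a single commented-out call at the bottom of ASSET.SAVE:

# setLocation()  # disabled at customer request; uncomment to re-enable
setSomethingElse()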

No matter what it is, even if it is the only thing the script does at the moment, just put it under a function declaration. It could help you in the future.

The Variables tab is pointless and bad

For reasons that I do not understand, there is a tab in the automation script editor called Variables. It is useless, and actually makes things much, much harder to read and understand. Don’t use it. Python already has variables, and the MBO framework is available to you. Why use anything else? Keep things clean and easy to read.
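Instead of wiring values through the Variables tab, just read and write them directly against the MBO in the script itself. A minimal sketch (the attribute names here are hypothetical):

from psdi.mbo import MboConstants

# Read the value straight off the MBO instead of a Variables-tab binding
status = mbo.getString("STATUS")
if status == "APPR":
    mbo.setValue("DESCRIPTION", "Approved work", MboConstants.NOACCESSCHECK)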

Code reuse? Code reuse!!

You can and should employ code reuse in your automation scripts. Seasoned programmers understand this concept well: there are bits of code you may use in more than one place, and it’s often appropriate to put these somewhere they can be called by other scripts or classes.

In Maximo these are referred to as LIBRARY scripts. IBM has a page on it here.

From their examples, they have a script called MULTIPLY.

z=x*y

They then use this script in another script. I’ll add more comments than their example has.

# here is an example of debug logging
service.log("I want to multiply 2 numbers and log the result")

# import the HashMap from the java.util library.
# we will use a hashmap to store the parameters we
# pass to our MULTIPLY script

from java.util import HashMap
a=3
b=2

# declare our hashmap and put the values in it
ctx = HashMap()
ctx.put("x",a)
ctx.put("y",b)

# and here is where we call our library script
# service.invokeScript("MYSCRIPT",params)

service.invokeScript("MULTIPLY",ctx)
service.log("the result is "+str(ctx.get("z")))

We name our library scripts clearly, like LIBRARY.CALCULATETHEVOLUMEOFTHEUNIVERSE.

UPDATE: Nov 2024

IBM has added the ability to invoke individual functions within library scripts, which makes them a lot more useful. To invoke a function within a library script, the library script must have the “Allow Invoking Script Functions” option selected. Then, to use the functions in your library, the syntax is as follows:

service.invokeScript("LIBRARY.SCRIPT.NAME","functionName",params)

When used appropriately, code reuse means you’ll have less code to maintain. That means less work. We all love less work, right?

Read Bruno’s site

Develop in Maximo for long enough and you’ll wind up on Bruno’s site. Bruno has a wealth of information available and the developers on my team use his site frequently. A link particularly relevant to this post is his automation scripting quick reference. I always have this open if I am writing automation scripts. I do not have a good memory.

Conclusion

And that’s it! There’s not much to it, I just wanted to put this out there. If you are just getting started using Maximo, or are new to using automation scripting, I hope something here gives you a head start. If you are used to Java customization then this will all be easy to pick up, and you probably didn’t need to read this anyway. For those that are coming to Maximo from a different background, I hope this helps you avoid future headaches.

BTW: If you are looking at using a Workflow for something, ask “hey can this just be an escalation or automation script instead?”. Asking that question often could save you a LOT of future headaches. :-)

Seemingly Silly SQL

Recently I had to fix a report that was broken by some changes we made while implementing a new financial tracking process for our customer. A previous development team had implemented work-order-related cost tracking without using Maximo’s out-of-box financial tracking functionality. They created a table which stored certain dollar amounts associated with their process, with one row per dollar amount entered. The customer wanted to use more of the out-of-box financial tracking capabilities and expand their usage of the application. As a part of these changes, we stopped using this custom table, but preserved the capability to enter a sequence of dollar amounts (pending a larger process overhaul) by using attributes on the work order table itself.

This change had the side effect of breaking a report that used the custom table. The top portion of the report was trivial to change to pull its values from the new places, but the bottom portion had a table which displayed rows from the custom table we no longer use. We did not want to change the structure or functionality of the report, so I had to find a way to output attributes that were now part of a single row/record as multiple rows, in order to populate the table on the report.

Take this list of attributes as an example:

wonum, fundingnum, fund0, fund1, fund2, fund3

To output this one row as four rows, I used UNION:

SELECT
    wonum,
    fundingnum,
    fund
FROM (
    SELECT
        wo.wonum,
        0 fundingnum,
        wo.fund0 fund
    FROM workorder wo
    WHERE wo.wonum = :wonum
      AND wo.siteid = :siteid

    UNION

    SELECT
        wo.wonum,
        1 fundingnum,
        wo.fund1 fund
    FROM workorder wo
    WHERE wo.wonum = :wonum
      AND wo.siteid = :siteid

    UNION

    SELECT
        wo.wonum,
        2 fundingnum,
        wo.fund2 fund
    FROM workorder wo
    WHERE wo.wonum = :wonum
      AND wo.siteid = :siteid

    UNION

    SELECT
        wo.wonum,
        3 fundingnum,
        wo.fund3 fund
    FROM workorder wo
    WHERE wo.wonum = :wonum
      AND wo.siteid = :siteid
)
ORDER BY fundingnum ASC;

It sure looks silly, but it outputs the data in rows to populate the table on the report properly, and Oracle’s explain plan put the cost at 40. Great!

Then I got curious: how bad would the cost get if I put the WHERE clause on the outer SELECT instead of on the individual SELECT statements?

SELECT
    wonum,
    fundingnum,
    fund
FROM (
    SELECT
        wo.wonum,
        wo.siteid,
        0 fundingnum,
        wo.fund0 fund
    FROM workorder wo

    UNION

    SELECT
        wo.wonum,
        wo.siteid,
        1 fundingnum,
        wo.fund1 fund
    FROM workorder wo

    UNION

    SELECT
        wo.wonum,
        wo.siteid,
        2 fundingnum,
        wo.fund2 fund
    FROM workorder wo

    UNION

    SELECT
        wo.wonum,
        wo.siteid,
        3 fundingnum,
        wo.fund3 fund
    FROM workorder wo
)
WHERE wonum = :wonum
  AND siteid = :siteid
ORDER BY fundingnum ASC;

This increased the query cost dramatically: the explain plan put it somewhere in the millions, while the cost of the other version was just 40. The reason is that in the expensive query each SELECT pulls in the entire workorder table, and each UNION then de-duplicates the results. The query has to do this four times before it can finally filter the results down to the one record needed. In the performant query, each SELECT returns only one row, and since the row returned by each SELECT is different, the UNIONs have essentially nothing to de-duplicate; plus the dataset is already much smaller. (UNION ALL would at least have skipped the de-duplication work, but dragging four copies of the whole table through the query before filtering would still be far more expensive than filtering early.)

The lesson here is that it matters where you put your WHERE clauses in queries. You have to keep in mind the size of the datasets you are pulling in, and how much work every operation is going to have to do.

Also, you aren’t always going to have ideal data to work with and… sometimes… you may have to do silly things like balloon one row into four…

I did put in a comment saying something to the effect of /* I know this looks dumb but...*/

Installing an SSD in a PowerMac G4!

All computers must use an SSD now. It’s the law.

Even an Apple PowerMac G4 Quicksilver from 2002!

OWC, of course, sells a kit for this upgrade. Like many of us they don’t seem to want to let old Macs die!

SSD kit from OWC

Installation of the kit couldn’t be any simpler. It just replaces the stock HDD using the existing PATA ribbon cable. OWC includes a 2.5” to 3.5” mounting plate so that everything is nice and secure.

SSD kit installed

I wanted to restore the software on this Mac to what it left the factory with 21 years ago. Fortunately, Macintosh Garden has images of the original software restore discs. I downloaded these and burned them onto CD-Rs. Oh yes, some of us still burn discs.

The restore disc didn’t recognize the SSD at first because it was unformatted, and there wasn’t an option to open up Drive Setup to format the drive. I had a copy of Mac OS 9 Lives, so I booted off of that and was able to format the SSD. Then I was able to restore using the restoration media.

It was fun seeing the first boot experience on this PowerMac, but after that I decided to install the pinnacle of PowerPC operating systems… Mac OS X Tiger.

The Tiger installation went just fine, no surprises and it runs great on the PowerMac G4.

This computer will mainly be used for old games, or for the odd time when you need an older version of Mac OS.

Please don’t throw your old computers in the trash. Some people like to install new technology in them and use them even if they’re over 20 years old!

Using OpenCore Legacy Patcher on my 2012 Mac Pro

I have a deep interest in using computers well beyond their expected shelf life. I am writing this in 2023 and have a 2012 iMac that had been in regular use up until I won this 2012 Mac Pro on eBay for the paltry sum of $147 USD. CPU performance gains were starting to plateau around the time these computers were made, and only fairly recently have newer designs such as Apple Silicon and AMD’s chiplet-based CPUs begun to push the envelope again, particularly in consumer hardware. Yes, the new Apple Silicon chips and AMD’s Threadripper thrash the Intel Xeons in this Mac Pro, but it still holds its own against a multitude of newer CPUs, and I think this is very cool.

To run a currently supported version of macOS on an older Mac you have to use OpenCore Legacy Patcher. This provides the environment to allow an unaltered installation of macOS, but first things first…

Do not even attempt to install a version newer than macOS High Sierra unless you have a Metal compatible graphics card.

I learned this the hard way. If you attempt to use OCLP to install a version of macOS that requires a Metal-capable GPU, and you do not have one, you will end up with a blank screen. You won’t even be able to get into the normal Mac boot picker screen, Internet Recovery, anything. You will be terrified that you broke your Mac and will be frantically searching internet forums while drinking yourself into a stupor.

Don’t do this.

I also recommend that if you have a working install of a currently supported macOS on your stock system, you preserve that drive. On the Mac Pro you can easily pull that drive out and keep it on hand in case things go wrong. Once you go down the path of OCLP you will not be able to boot from the stock card. You will not have a boot menu. Getting back to normal will be a pain in the ass.

After learning my lesson, I grabbed the cheapest, lowest-power GPU I could find locally, which was an AMD RX 550 4GB. I chose it because I read it was supported and I didn’t have to worry about power requirements. I do not have high demands for graphics, so I hoped this card would suit me for a while.

The RX 550 allowed me to get macOS Monterey installed and booting; however, hardware acceleration did not work. The model I have is a PowerColor Radeon RX 550 4GB.

Of course I found this bit of documentation after having bought the card, which says to avoid PowerColor branded cards.

Another lesson: OpenCore is a project for hackintoshes that also happens to have a patching tool for legacy Macs. Most things are documented, but I found that I had to look at the hackintosh material too for a fuller picture. Fools rush in, and I am a fool.

The good news is that at this point I knew I hadn’t ruined anything; the Mac was booting. Now I just needed to settle the graphics issue once and for all…

Read the GPU Buyers Guide. Make your life easy.

And read it carefully. There are a number of gotchas such as brand and power consumption. Some cards have issues with macOS even if the chipset is supported (like the PowerColor cards), and some more powerful cards require more power than experts in the community think the Mac Pro motherboard can supply.

For funky cards, you can apparently flash a new vBIOS onto them, or spoof the device ID so that macOS thinks it is a different card. Or, you can just sell or return your non-macOS-friendly card and get a different one. I chose the latter option.

I am also not interested in modifying the power supply on my Mac Pro. If I needed modern high end graphics performance, maybe I would, or maybe I’d just buy or build a newer computer. This is, at the time of writing this, an 11 year old computer. Expectations should be managed.

Apple has an old support page mentioning specific card makes and models that work in the Mac Pro. I purchased a card from this list, the SAPPHIRE Radeon PULSE RX 580 8GB GDDR5.

This card requires an 8-pin power connector, so an adapter cable is required. I used this one from Amazon.

Image of a Mac Pro with a Sapphire Radeon RX 580 installed

After installing a proper graphics card I was able to boot into macOS Monterey with full graphics acceleration! Now to upgrade everything else I can!

Image of an About This Mac screen showing dual Xeon CPUs, 64GB of RAM and a Radeon RX 580 running macOS Monterey

Maximo 7.6.1 OOB Help Settings

Here are the Out-Of-Box online help settings for IBM Maximo 7.6.1

PROPNAME                     PROPVALUE
mxe.help.host                www.ibm.com
mxe.help.maximohelplink      com.ibm.mam.doc,welcome.html
mxe.help.path                /support/knowledgecenter/
mxe.help.port                80
mxe.help.protocol            http
mxe.help.useKCHelp           1
mxe.help.viewsearchtiplink   com.ibm.mbs.doc,mbs_common/c_advanced_search_tips.html
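If an environment’s help links have been changed and you need to set these back, they live in the maxpropvalue table; a sketch for one property (the same pattern applies to the rest):

UPDATE maximo.maxpropvalue
SET propvalue = 'www.ibm.com'
WHERE propname = 'mxe.help.host';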

This site: Hexo on Vercel

UPDATE Nov 2024: I have since moved this site to DigitalOcean. Nothing wrong with Vercel, but DigitalOcean provides more general cloud services and also has the ability to monitor a git repository and automatically build a Hexo site.

This is a static website built using hexo.io. It is deployed on Vercel. I write entirely in VS Code, in Markdown. Vercel monitors a git repository for changes, and when it detects changes it runs hexo generate to build the new site. I basically manage zero infrastructure and have a simple static site. No big CMS solution, no tons of code that has to run in your browser. This is ideal for me, as someone who fondly remembers an internet made of personal sites built with very simple HTML markup.

The workflow with Hexo is quite simple:

hexo new draft "This very post"

This creates a Markdown file under /source/_drafts called This-very-post.md. Open this in your favored editor and write blog posts as comfortably as you would write code.

hexo publish This-very-post

When you are done writing your post, it’s easy to publish. This will add a timestamp to the post’s front matter (the bit at the top) and move the file to /source/_posts.
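The front matter is just a small block at the top of the Markdown file; after publishing it looks something like this (values are illustrative):

---
title: This very post
date: 2023-06-01 10:30:00
tags:
---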

hexo generate

The generate command will build the actual static website files. To preview your site just do

hexo server

…and your site will be at http://localhost:4000

It is not complicated to update posts after they have been published. Simply edit the Markdown in /source/_posts. You can even adjust the timestamps or anything else in the front matter. The hexo generate command will have no problems updating your blog.

If for some reason you want or need to build your site from a clean starting point, just run…

hexo clean

… all of your post data will still be there. Hexo will just purge its generated content so you can do a clean build.

Hexo supports plugins as well. I use hexo-generator-feed to publish the site’s RSS feed. Because Hexo is a Node.js application, you can install plugins with npm.
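Installing the feed plugin, for example, is a single command:

npm install hexo-generator-feed --save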

Configuring various options for your site is simple too. Hexo reads your settings in _config.yml. The developers of Hexo, plugins, and themes will provide guidance on what to put in your _config.yml to set them up.
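For example, hexo-generator-feed reads a feed section in _config.yml; a minimal sketch (option names per the plugin’s README, values illustrative):

feed:
  type: atom
  path: atom.xml
  limit: 20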

I really enjoy blogging this way. I don’t have to log in to some website or app and suffer some half-baked editor, or deal with awful Markdown support and rendering. Best of all, I have all my content saved automatically as source code. My content is not stored in any proprietary format, just Markdown.

For fun, to generate my site and send it up to git I have a little script called yeet.sh. Nothing fancy at all but it saves typing.

#!/bin/bash
echo "Yeeting the blog"
hexo generate && git add * && git commit -m "update" && git push
echo "The blog has been yote"

The internet is for information, not tons of JavaScript and trackers and ads or obtrusive UI. That’s why I like this. If you visit this site, all you get is the content. There’s nothing else here, and the effort for me is minimal!

Maximo database refreshes

While developing for Maximo you will sometimes need to refresh your development environment from production, either to have fresher data to work with or to return your development environment to the production baseline if something has gone awry.

There are some configuration items I have updated in the past before starting the app back up and disabling Admin Mode.

If you have maxmessages that reference your environment name, you should update these.

UPDATE maximo.maxmessages
SET value = 'DEV - ' || value
WHERE msggroup = 'system'
AND msgkey = 'example';

Disable External Systems so that you can control which ones start processing in your development environment. Maybe you have some that are only intended for production.

UPDATE maximo.maxextsystem
SET enabled = 0;

Update URLs for your BIRT, Cognos, hostname, webapp, etc.

-- BIRT
UPDATE maximo.maxpropvalue
SET propvalue = 'https://mydevenv.com'
WHERE propname = 'mxe.report.birt.viewerurl'
;

-- COGNOS
UPDATE maximo.maxpropvalue
SET propvalue = 'https://mydevenv.com'
WHERE propname = 'mxe.report.cognos.serverURL'
;

-- webappurl
UPDATE maximo.maxpropvalue
SET propvalue = 'https://mydevenv.com'
WHERE propname = 'mxe.int.webappurl'
;

-- hostname
UPDATE maximo.maxpropvalue
SET propvalue = 'https://mydevenv.com'
WHERE propname = 'mxe.hostname'
;

--help host

UPDATE maximo.maxpropvalue
SET propvalue = 'https://mydevenv.com'
WHERE propname = 'mxe.help.host'
;

Update your DOCLINKS settings to point to the correct URL

UPDATE maximo.maxpropvalue
SET propvalue = 'C:\mydoclinks=https://mydevenv.com'
WHERE propname = 'mxe.doclink.path01'
;

These are just a few things you may want to update when doing a refresh. There may be other things specific to your environment, such as setting the status or permissions of certain users and ensuring certain crontask instances are configured properly for the target environment.
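For example, if only certain cron tasks should run in development, something along these lines works (a sketch; the exception list is illustrative):

UPDATE maximo.crontaskinstance
SET active = 0
WHERE crontaskname NOT IN ('ESCALATION');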

WebSphere manage profiles interactive tool

I recently got a tip from IBM support about a friendlier tool for creating WebSphere profiles. Previously I had used the graphical WebSphere Toolkit to create profiles on a new install, but on the latest Windows Server 2019 images we were issued, this tool would no longer work; it would crash with a Java error. IBM support discouraged use of this tool and instead said to use the manageprofiles.bat command line tool. This tool is fine, but it is tedious to use because you have to enter so many command line parameters.
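To give a sense of the tedium, a typical profile-creation call looks something like this (a sketch; the paths and names are illustrative):

manageprofiles.bat -create -profileName AppSrv01 ^
  -profilePath C:\IBM\WebSphere\AppServer\profiles\AppSrv01 ^
  -templatePath C:\IBM\WebSphere\AppServer\profileTemplates\default ^
  -nodeName myNode01 -cellName myCell01 -hostName myhost.example.com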

IBM has a better tool called Manage Profiles Interactive. This tool provides a nice menu in the command prompt and steps you through the required and optional parameters to get your profiles created. I found this to be very easy to use and to document for the next person who has to do a server install from scratch in our environment.

Manage Profiles Interactive

I definitely recommend this tool!

Maximo custom cron

If you need to create a custom cron task for IBM Maximo using Java, here is some code to get you started.

import psdi.server.SimpleCronTask;
import psdi.server.CrontaskParamInfo;
import psdi.util.MXException;
import java.rmi.RemoteException;

public class MyCustomCron extends SimpleCronTask {

    // You must extend SimpleCronTask and must override
    // cronAction()

    // Your main processing goes in cronAction()

    @Override
    public void cronAction() {
        // Do stuff here

        // How to access a parameter if you are using them
        try {
            String param = getParamAsString("Param 1");
        } catch (MXException | RemoteException e) {
            // log or handle the failure appropriately
        }
    }

    // If you want to use parameters, you override getParameters()

    @Override
    public CrontaskParamInfo[] getParameters() throws MXException, RemoteException {
        CrontaskParamInfo[] parameters = new CrontaskParamInfo[2];
        parameters[0] = new CrontaskParamInfo();
        parameters[0].setName("Param 1");
        parameters[0].setDefault("Default Value");
        parameters[1] = new CrontaskParamInfo(); // each slot needs its own instance
        parameters[1].setName("Param 2");
        parameters[1].setDefault("Default Value");

        return parameters;
    }

}



Experimenting with the vagrant-qemu-provider

Today I came across a qemu provider for Vagrant. I have a PC set up with Proxmox; however, I kind of don’t want to have this machine hooked up and running all of the time. It is very underutilized. I only use it when I have the time or interest to experiment.

Setup is easy:

brew install qemu
vagrant plugin install vagrant-qemu

And then we can try using a provided example:

vagrant init ppggff/centos-7-aarch64-2009-4K
vagrant up --provider qemu

Vagrant will start setting things up and you will be prompted for a username and password for SMB.

At this point you will probably get an error if you are on macOS complaining about an authentication failure for SMB. All you need to do is set up SMB File Sharing for macOS if you haven’t done so already. HashiCorp has a note on it here.

Once the qemu VM has been started up by Vagrant, you can connect to it with vagrant ssh.

Your home directory on your host will be mapped to /vagrant in the VM.
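From there it’s the standard Vagrant lifecycle:

vagrant ssh      # connect to the VM
vagrant halt     # shut it down
vagrant destroy  # remove it entirely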

This solution seems best for me right now on the M1 Mac. Qemu is very powerful and should even allow me to run operating systems built for architectures other than ARM through Vagrant.