Apple DuoDisk Troubleshooting

A friend of mine very generously gave me his childhood Apple IIe for the price of shipping. I have been really enjoying exploring it, and will certainly write something more about it later. Today I want to share a troubleshooting story regarding the Apple DuoDisk…

The DuoDisk stopped reading reliably, and would sometimes get into a state where the spindle motor would keep running even while in the BASIC prompt. I had not serviced the drives since I got it, so I decided to open it up and start some diagnostics.

  • I cleaned the heads. Did not resolve the issue.
  • I pulled all of the chips from their sockets and sprayed the sockets down with contact cleaner and reseated the chips. Did not resolve the issue.
  • I did capacitance and discharge tests on all of the electrolytic capacitors, and one of them seemed like it was going bad so I replaced it. Did not resolve the issue.

I was getting a little dismayed at this point. I do not have an oscilloscope, so testing the chips to see if one was bad would be like playing whack-a-mole. Fortunately I found the Apple service manual online. The manual has a note to technicians that certain revisions of the PCB have a risk of erasing data if a disk is in the drive when the computer is powered off. Ah ha. But it didn’t really click with me at the time. The note said that, to resolve the issue, two small ceramic capacitors should be removed.

  • Followed tech note and removed the two capacitors. Did not resolve the issue.

I then realized that I had been testing with three disks. ProDOS, the ProDOS Users disk, and my ADTPro disk. It was refusing to boot these, so on a hunch, I tried my Apple Writer II disk.

  • It worked.

This was encouraging. I bootstrapped ADTPro over the cassette port and created a new ProDOS disk. Then I used it to check the contents of the ProDOS Users disk and the ADTPro disk. The data was still on those disks, and I have a theory about what happened. My DuoDisk was indeed affected by the erase bug, which must have struck these disks while the head was over the boot sector. The bulk of the data was still there, but the boot sector was nuked, so the Apple II couldn’t boot from them. I thought I had the issue resolved, but…

The drive was still behaving somewhat erratically. I turned my attention to the I/O card, and ran the same troubleshooting steps that I ran on the DuoDisk PCB. I discovered that the DB-19 connector was getting a little worn out, and the wires were moving out of the connector a little too much. Apple did not put any strain relief whatsoever on these connections. I pushed the wires in as far as they would go and used superglue to hold them in place and give them some rigidity. I also wrapped some electrical tape around the cable where the wires were separated from the ribbon cable. This improved stability by a lot. But I still had a problem.

If I left the connector unmounted from the Apple II’s chassis and plugged the DuoDisk in, it worked perfectly. I could shake the cable around and it wouldn’t bother it at all. If I mounted the connector to the chassis, the drive would not work properly. I thought “is this some weird grounding issue?”. I grabbed my multimeter and verified that there was no such issue. What I think is going on is that when the connector is mounted to the chassis I just can’t get it seated firmly enough for it to work, given how worn out the connector is.

My temporary solution is to have the connector coming out over the top of the Apple II. The connector is too wide to fit through the slots on the back.

I’ve been using the DuoDisk all day writing disks and it’s been working perfectly. Two unrelated faults were causing chaos!

More Everything Forever

The Book

In his book More Everything Forever, Adam Becker provides a compelling argument, backed by interviews, documented legal proceedings, and other publicly available information, that the current A.I. mania is driven by a cult and the real-world impacts of this cult’s delusions are worse (and more real) than the apocalyptic futures they imagine.

The cult he writes of is not a singular entity but a group of organizations founded by like-minded people. They share similar beliefs and motivations, but you can’t point to a single organization such as Scientology. The groups the author focuses the most on are the Rationalists and the Effective Altruists.

I will try to summarize what these individuals believe…

If we do not create artificial super intelligence (Gen A.I.) and colonize the universe then humanity is doomed. The artificial super intelligence must be developed and trained to work for our benefit at all costs. Human suffering that exists today is outweighed by the future flourishing of human civilization in space, and therefore there should be few to no limits on what we do to achieve it.

Sounds like something out of Asimov, right?

Most of the individuals mentioned in the book are very wealthy, or otherwise have stakes in the A.I. industry. Think people such as Sam Altman, Elon Musk, and formerly wealthy Sam Bankman-Fried. They all share some combination of beliefs which circulate in the Rationalist and Effective Altruist communities. While many of these beliefs sound positive or beneficial on the surface, Becker outlines how they are rooted in the idea that humanity should exist everywhere, forever, and that there should be no limits to our consumption even if it means eating the entire universe.

Many of the ideas generated by these communities are unfathomably nutty sci-fi fever dreams. I will not summarize them here, because you should buy and read the book.

My Thoughts

As a technology person I am used to these hype cycles. Not long ago it was the blockchain. We were all supposed to become knowledgeable of and find ways to use this highly inefficient transaction registry to somehow make everything better, as promised by its hype men and believed by corporate managers and executives. That technology found niches in which it is useful (I assume in fintech, but I don’t work in that realm) but it didn’t change much. The hype bubble burst and made some people a lot of money before it did.

Now it’s all about A.I. Companies are concerned with keeping pace with technology to ensure they stay competitive and relevant. I can’t blame them for making sure their technology people are aware of these things, especially when the A.I. hype men are promising more “productivity” at lower cost and higher profits. At least I imagine this is what your average corporate boardroom types are thinking. Some of them may believe in the nuttier stuff explained in Becker’s book, but I hope that is not a large percentage.

What we currently refer to as A.I. are actually Large Language Models (LLMs). These are essentially enormous statistical models of words and how they relate to each other as used in language. Companies like OpenAI feed their models truly massive amounts of text scraped from internet sources, books, etc. It is impressive how much data they process, how much smart (human) engineering and labor they expend, and how accurately the LLMs can produce sequences of words that make sense in English.

My technical experience with this sort of technology is limited. I have experimented with Natural Language Processing (NLP) and have learned about related concepts. There is definite utility in this technology for augmenting human capabilities, but I have not seen anything yet that makes me believe it can replace them. LLMs can be fantastic accelerators for things such as language translation, parsing and summarizing large amounts of research text and data, or helping with reconstructions of very ancient texts. If models are trained for specific purposes they could be very useful to us indeed, but I do not see much real benefit in the “generalist” models that operate on a poorly defined context.

I have tried LLM-based tools for my work and found the results completely underwhelming. The time it takes to explain a complex problem to a chatbot is much better spent figuring it out with your own brain. You will do a better job, believe in yourself. The LLM-based chatbots are nothing more than very adept bullshit generators. There are really only two things I have used one of these tools for with any level of success.

  • Spitting out a RegEx string
  • Sorting a list of lines in a text file by specific criteria

Both of these things are hated, tedious tasks of mine. But the thing is…they aren’t THAT hard and don’t take THAT much time to do myself. It is certainly not worth the massive expenditure of money, electricity, water, computing power, etc. spent on training these models. Every example I have seen of generative A.I. is like this. It’s garbage that pales in comparison to something created by a human being while absorbing resources better spent on human beings.

The utility of this technology compared to its environmental cost is highly questionable. If you ask the Rationalists and Effective Altruists, you may hear something like “Humanity must develop artificial super intelligence and use it to colonize the universe or we will go extinct. The harms caused today are outweighed by the future perpetual survival of humanity.”. Then they may provide astronomically large numbers (of human beings in the future), and remote probabilities to back up their claims. Becker refutes these claims in his book using interviews with actual scientists working on a broad range of related topics. In essence, what the Rationalists and Effective Altruists promise is and perhaps always will be science fiction.

The utility of this technology isn’t my biggest concern. In any field, we try things and see what works. What bothers me is that behind the chatbots and image generators there is a movement led by billionaires who want to be space-imperialists. Instead of pouring their time and money into things that would actually help people here and now, they are working towards an impossible and frankly undesirable sci-fi future where all nature is dominated by tech, and all power is in their hands.

I agree with Becker’s conclusions at the end of the book, wholeheartedly, and I strongly recommend everyone read it.

Enabling telnetd on Raspbian

I am setting up a Raspberry Pi to be a remote server (you know, a mainframe) for ancient computers that don’t have the CPU power to use SSH. I ran into an issue where after installing telnetd the service could not be found and would not start. It seems that the Debian team has done the sane thing and made sure this insecure service couldn’t be enabled by accident.

I found the solution on this forum post and I am putting the relevant bits here for future reference.

Edit /etc/inetd.conf and uncomment line 23 (the line number is just a coincidence).

#:STANDARD: These are standard services.
telnet stream tcp nowait root /usr/sbin/tcpd /usr/sbin/telnetd
Restart the inetd service

sudo systemctl restart inetutils-inetd.service
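
To confirm it worked, check that inetd is now listening on the telnet port (and, if you have a telnet client installed, try connecting):

sudo ss -tlnp | grep :23
telnet localhost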

Expanding this Site

I recently read Cory Doctorow’s post about using a blog as a personal Memex. I have been using this site in this way, but for a very limited scope. It has served as a repository for technical information helpful to my job, and in some cases I have shared links to specific posts with coworkers. Having a sort of “public notebook” is useful for such things. What this site has not been is a commonplace book.

Doctorow’s analogy to a commonplace book got me thinking… I am interested in so many things, but I don’t put my thoughts or discoveries anywhere except in my head or in poorly organized notes scattered around the digital and physical realms. In some cases I store things in git repositories, but those are things that have immediate utility; they are not ideas or the seeds of ideas.

I want to think about what to post here other than my own technical notes or little projects I want to share. My interests aren’t limited to technology, so why should my blog be? I have exactly zero readers, so it doesn’t matter much, but hey…I like to think about things. Perhaps I will take one of Doctorow’s habits and write about what I read.

Smart HDMI Switch

I wanted an HDMI switch that worked with Apple HomeKit so that I could include it in Scenes. Manually switching things is a drag and I wanted to keep the number of remote controllers very low.

I had a StarTech 4-port HDMI KVM that I wasn’t using anymore. It was perfect for this project because it had a large case that was easy to fit a dev board in, and it used momentary-contact buttons that would be easy to replace with MCU control.

Because this had to be HomeKit compatible I needed built-in wifi support, and the current chip of choice for that kind of thing is the ESP32. I was new to this platform and wanted to get this done quickly, so I chose the Arduino Nano 32 board.

Then I had to decide/research how to write the code for it. I found the excellent HomeSpan library. This made it incredibly easy to get things working. The library handles all of the wifi and HomeKit setup for you so that you can focus on the actual functionality. I completed this project in a few hours, so many kudos to the HomeSpan team!

The wiring was very simple too. The KVM had a couple pads available to pull the +5V and GND connections from, then I removed the original switches and soldered the GND connections and switch pin connections that were needed.

All my code had to do was to set the pin states needed for each input. It works perfectly, and it is VERY fast.
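
The pattern looks roughly like this. This is a simplified sketch, not the exact code from the repo linked below; the GPIO numbers are placeholders, and each KVM input gets a HomeKit switch that pulses the corresponding button line.

#include "HomeSpan.h"

// One HomeKit switch per KVM input. Turning the switch on pulses that
// input's button line low, mimicking a press of the original momentary button.
// Wi-Fi credentials and the HomeKit pairing code are handled by HomeSpan itself.
struct InputButton : Service::Switch {

  int pin;
  SpanCharacteristic *power;

  InputButton(int buttonPin) : Service::Switch() {
    pin = buttonPin;
    power = new Characteristic::On();
    pinMode(pin, OUTPUT);
    digitalWrite(pin, HIGH);      // idle high, like an unpressed button
  }

  boolean update() {
    if (power->getNewVal()) {
      digitalWrite(pin, LOW);     // "press" the input-select button
      delay(100);
      digitalWrite(pin, HIGH);    // and release it
    }
    return (true);
  }
};

void setup() {
  Serial.begin(115200);
  homeSpan.begin(Category::Switches, "HDMI KVM");

  new SpanAccessory();
    new Service::AccessoryInformation();
      new Characteristic::Identify();
    new InputButton(16);          // placeholder GPIOs wired to the
    new InputButton(17);          // KVM's input-select button pads
}

void loop() {
  homeSpan.poll();
}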

I have shared the Arduino sketch on GitHub, and you can find it here.

I have plans to make more HomeKit accessories and I’m excited to explore more smart home platforms.

Avoiding List Tab Download Crashes in Maximo

Recently we observed JVM crashes on one of our UI servers that we traced back to the XLS emitter, triggered by downloads from the list tab. A user was apparently trying to download more data from the list tab than the emitter could handle.

Investigation showed that in this environment, the system property webclient.maxdownloadrows had not been set.

We set this property to 1000 to limit the size of the recordset and the issue was resolved.

Maximo Installation Command Reference

Personal reference for useful commands.

Bypass verification checks for IBM Installation Manager

set BYPASS_PRS = True

Run Maximo install on DB

Param Ref:
-s = Schema Name
-t = Tablespace Name
-imaxdemo = Tells installer to populate the database with demo data from /SMP/maximo/tools/maximo/en/maxdemo.ora

/SMP/maximo/tools/maximo/maxinst -sMAXIMO -tMAXIMO -imaxdemo

Start DB config

/SMP/maximo/tools/maximo/configdb.bat

Run DB integrity checker

/SMP/maximo/tools/maximo/integrityui.bat

Build Maximo EAR

/SMP/maximo/deployment/buildmaximoearwas8.bat

Encrypt Maximo properties

/SMP/maximo/tools/maximo/encryptproperties.bat

Manage Windows services for WebSphere

/AppServer/bin/wasservicehelper.bat

Command line WebSphere node sync

/AppServer/profiles/AppSrv01/bin/syncnode.bat HOSTNAME -username WASADMIN -password WASADMINPASSWORD

Maximo Automation Scripting Practices

I hesitate to call this post a “Best Practices” post, because who am I to prescribe a best practice? I won’t call it that. Instead this is a list of practices that I think are sane and maintainable over the long term.

I am currently on a project overhauling a Maximo instance that has gotten to the point where it is hard to maintain and debug, and there are bugs aplenty. It is a spiderweb of duplicative workflows and automation script spaghetti code. This instance reveals some of the dangers of “low code” platforms: people seem to get the idea that they do not have to be concerned with clean programming practices or long-term maintainability.

Write good clean code in Automation Scripts

This should be obvious, but it apparently is not. Just because we are using a script interpreter built into a web application doesn’t mean we should be lazy about it. Properly comment and structure your code.

Use good script and launchpoint names

Something that I like to do is name my scripts for the object and method they are working on: ASSET.SAVE, for example, or WORKORDER.WORKTYPE.VALIDATE. This makes it easy to find and understand what these scripts are for. Name the launchpoint the same, because you will need to migrate it as well, and matching names will make that easier on you.

# Script: ASSET.SAVE
# Launchpoint: ASSET.SAVE
# (the implicit variable "mbo" is the ASSET record the launchpoint fired on)

from psdi.mbo import MboConstants

def setLocation():
    mbo.setValue("LOCATION", "FOO", MboConstants.NOACCESSCHECK)

def setSomethingElse():
    mbo.setValue("SOMETHINGELSE", "FOO", MboConstants.NOACCESSCHECK)

setLocation()
setSomethingElse()

The example above shows a pattern I’ve been following. Keeping all logic related to asset.save in one script, instead of many scripts with their own unique names, is much easier to maintain and debug. I have seen examples on my current project where seemingly no pattern was followed, and finding all the bits and pieces of their business logic is more of a scavenger hunt than it should be.

Using Python function declarations (def myFunction():) gives the script structure and keeps the different pieces of logic easy to modify, or to turn on and off. If you are just testing something, or the customer asks you to disable a behavior, it’s as simple as removing or commenting out the function calls at the bottom of the script.

No matter what it is, even if it is the only thing the script does at the moment, just put it under a function declaration. It could help you in the future.
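
For example, if the customer asks you to turn off the location logic in the ASSET.SAVE script above, only the calls at the bottom need to change:

# setLocation()    # disabled per customer request
setSomethingElse()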

The Variables tab is pointless and bad

For reasons that I do not understand, there is a tab in the automation script editor called Variables. It is useless, and actually makes things much much harder to read and understand. Don’t use it. Python already has variables, and the MBO framework is available to you. Why use anything else? Keep things clean and easy to read.

Code reuse? Code reuse!!

You can and should employ code reuse in your automation scripts. Seasoned programmers understand this concept well: there are bits of code you may use in more than one place, and it’s often appropriate to put them somewhere they can be called by other scripts or classes.

In Maximo these are referred to as LIBRARY scripts. IBM has a page on it here.

From their examples, they have a script called MULTIPLY.

z=x*y

They then use this script in another script. I’ll add more comments than their example has.

# here is an example of debug logging
service.log("I want to multiply 2 numbers and log the result")

# import the HashMap from the java.util library.
# we will use a hashmap to store the parameters we
# pass to our MULTIPLY script

from java.util import HashMap
a=3
b=2

# declare our hashmap and put the values in it
ctx = HashMap()
ctx.put("x",a)
ctx.put("y",b)

# and here is where we call our library script
# service.invokeScript("MYSCRIPT",params)

service.invokeScript("MULTIPLY",ctx)
service.log("the result is "+str(ctx.get("z")))

Name your library scripts clearly, for example LIBRARY.CALCULATETHEVOLUMEOFTHEUNIVERSE.

UPDATE: Nov 2024

IBM has added the ability to invoke individual functions within library scripts, which makes them a lot more useful. To invoke a function within a library script the library script must have the “Allow Invoking Script Functions” option selected. Then to use the functions in your library the syntax is as follows.

service.invokeScript("LIBRARY.SCRIPT.NAME","functionName",params)
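
For illustration, here is a rough sketch of how that might look. The script and function names are made up, and my assumption is that the params object you pass as the third argument is what the invoked function receives as its argument, so double-check IBM’s documentation for the exact calling convention.

# Script: LIBRARY.MATHUTILS (with "Allow Invoking Script Functions" checked)
# Assumption: the object passed as the third argument to invokeScript
# arrives here as the function's argument.

def multiply(ctx):
    ctx.put("z", ctx.get("x") * ctx.get("y"))

def square(ctx):
    ctx.put("z", ctx.get("x") * ctx.get("x"))

# and then, in the calling script:

from java.util import HashMap

ctx = HashMap()
ctx.put("x", 3)
ctx.put("y", 2)

# call just the multiply function inside the library script
service.invokeScript("LIBRARY.MATHUTILS", "multiply", ctx)
service.log("multiply gave " + str(ctx.get("z")))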

When used appropriately, code reuse means you’ll have less code to maintain. That means less work. We all love less work, right?

Read Bruno’s site

Develop in Maximo for long enough and you’ll wind up on Bruno’s site. Bruno has a wealth of information available and the developers on my team use his site frequently. A link particularly relevant to this post is his automation scripting quick reference. I always have this open if I am writing automation scripts. I do not have a good memory.

Conclusion

And that’s it! There’s not much to it, I just wanted to put this out there. If you are just getting started using Maximo, or are new to using automation scripting, I hope something here gives you a head start. If you are used to Java customization then this will all be easy to pick up, and you probably didn’t need to read this anyway. For those that are coming to Maximo from a different background, I hope this helps you avoid future headaches.

BTW: If you are looking at using a Workflow for something, ask “hey can this just be an escalation or automation script instead?”. Asking that question often could save you a LOT of future headaches. :-)

Seemingly Silly SQL

Recently I had to fix a report that was broken by some changes we made while implementing a new financial tracking process for our customer. A previous development team had implemented work order related cost tracking without using Maximo’s out-of-box financial tracking functionality. They created a table which stored certain dollar amounts associated with their process, with one row per dollar amount entered. The customer wanted to use more of the out-of-box financial tracking capabilities and expand their usage of the application. As part of these changes, we stopped using this custom table, but preserved the capability to enter a sequence of dollar amounts (pending a larger process overhaul) by using attributes on the work order table itself.

This change had the side effect of breaking a report that used the custom table. The top portion of the report was trivial to change to pull its values from the new locations, but the bottom portion had a table which displayed rows from the custom table we no longer use. We did not want to change the structure or functionality of the report, so I had to find a way to output attributes that now live in a single row/record as multiple rows, in order to populate the table on the report.

Take this list of attributes as an example

wonum, fundingnum, fund0, fund1, fund2, fund3

To output this one row as four rows, I used UNION.

SELECT 
wonum,
fundingnum,
fund
FROM (
SELECT
wo.wonum,
0 fundingnum,
wo.fund0 fund
FROM workorder wo
WHERE
wo.wonum = :wonum
AND wo.siteid = :siteid

UNION

SELECT
wo.wonum,
1 fundingnum,
wo.fund1 fund
FROM workorder wo
WHERE
wo.wonum = :wonum
AND wo.siteid = :siteid

UNION

SELECT
wo.wonum,
2 fundingnum,
wo.fund2 fund
FROM workorder wo
WHERE
wo.wonum = :wonum
AND wo.siteid = :siteid

UNION

SELECT
wo.wonum,
3 fundingnum,
wo.fund3 fund
FROM workorder wo
WHERE
wo.wonum = :wonum
AND wo.siteid = :siteid

) ORDER BY fundingnum ASC;

It sure looks silly, but it outputs the data in rows that populate the table on the report properly, and Oracle’s explain plan said the cost was 40. Great!
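
If you want to compare plans yourself, Oracle’s DBMS_XPLAN makes it easy: run EXPLAIN PLAN FOR against whichever version of the query you are testing (a trivial stand-in query is shown here), then display the plan.

EXPLAIN PLAN FOR
SELECT wonum, siteid FROM workorder WHERE wonum = :wonum AND siteid = :siteid;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);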

Then I got curious: how bad would the cost get if I put the WHERE clause on the outer SELECT instead of on the individual SELECT statements?

SELECT 
wonum,
fundingnum,
fund
FROM (
SELECT
wo.wonum,
wo.siteid,
0 fundingnum,
wo.fund0 fund
FROM workorder wo

UNION

SELECT
wo.wonum,
wo.siteid,
1 fundingnum,
wo.fund1 fund
FROM workorder wo

UNION

SELECT
wo.wonum,
wo.siteid,
2 fundingnum,
wo.fund2 fund
FROM workorder wo

UNION

SELECT
wo.wonum,
wo.siteid,
3 fundingnum,
wo.fund3 fund
FROM workorder wo
)
WHERE wonum = :wonum
AND siteid = :siteid
ORDER BY fundingnum ASC;

This increased the query cost dramatically. The cost according to the explain plan was somewhere in the millions, while the cost of the other version was just 40. The reason is that in the expensive query each SELECT pulls in the entire workorder table, and each UNION has to de-duplicate the combined results. The query does this four times over before it can finally filter down to the one record needed. In the performant query, each SELECT returns only one row, and since each SELECT returns a different row, the UNION has essentially nothing to de-duplicate, and the dataset is already tiny. (The de-duplication is wasted work either way, since these four SELECTs can never return the same row, so UNION ALL would have skipped that step; the real cost driver is scanning the whole table four times before filtering.)

The lesson here is that it matters where you put your WHERE clauses in queries. You have to keep in mind the size of the datasets you are pulling in, and how much work every operation is going to have to do.

Also, you aren’t always going to have ideal data to work with and….sometimes…you may have to do silly things like balloon one row into four…

I did put in a comment saying something to the effect of /* I know this looks dumb but...*/

Installing an SSD in a PowerMac G4!

All computers must use an SSD now. It’s the law.

Even an Apple PowerMac G4 Quicksilver from 2002!

OWC, of course, sells a kit for this upgrade. Like many of us they don’t seem to want to let old Macs die!

SSD kit from OWC

Installation of the kit couldn’t be any simpler. It just replaces the stock HDD using the existing PATA ribbon cable. OWC includes a 2.5” to 3.5” mounting plate so that everything is nice and secure.

SSD kit installed

I wanted to restore the software on this Mac to what it left the factory with 21 years ago. Fortunately, Macintosh Garden has images of the original software restore discs. I downloaded these and burned them onto CD-Rs. Oh yes, some of us still burn discs.

The restore disc didn’t recognize the SSD at first because it was unformatted, and there wasn’t an option to open Drive Setup to format the drive. I had a copy of Mac OS 9 Lives, so I booted off of that and was able to format the SSD. Then I was able to restore using the restoration media.

It was fun seeing the first boot experience on this PowerMac, but after that I decided to install the pinnacle of PowerPC operating systems… Mac OS X Tiger.

The Tiger installation went just fine, no surprises and it runs great on the PowerMac G4.

This computer will mainly be used for old games, or for the odd time when you need an older version of Mac OS.

Please don’t throw your old computers in the trash. Some people like to install new technology in them and use them even if they’re over 20 years old!