Thursday, November 12, 2020

On Broken Systems...

 I've been doing a lot of scripting for the last couple of weeks on Second Life (in violation of my SurveyWalrus, don't tell him!) One of the things that surprises me about it is just how many of the APIs are unreliable. That is, the statement executes, no error occurs, but it doesn't work every time.

Actually, this is crazy common, especially when dealing with hardware in the real world. Nothing works quite as documented, and often not as you expected it to, either. Surprisingly for this blog, I'm not criticizing it; I find it rather endearing. (When it's HARDWARE. Hardware is hard; SL has no excuse ;) )

So what do you do when you execute the command, and nothing happens? Well, there are a few things that tend to help out in that case.

First and foremost, you need to ensure that the command actually happened, and that it happened the way you expected it to. There's little point going any further if you can't prove this. This means instrumenting your code, which I've covered before. It probably also means probing the hardware to make sure what was supposed to trigger the effect actually happened. Oscilloscopes have come down a lot in price over the decades, and even a cheap pocket one from Alibaba is a better diagnostic tool than poking the circuit with a wet finger.
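To make "instrumenting your code" concrete, here's a rough sketch in Python of the kind of wrapper I mean -- log every call, what it was asked to do, what it returned, and how long it took. The `set_color` function is a made-up stand-in for whatever unreliable command you're probing:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(message)s")

def instrumented(fn):
    """Log every call: arguments, result, and elapsed time."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.monotonic() - start) * 1000
        logging.debug("%s%r -> %r in %.2f ms", fn.__name__, args, result, elapsed_ms)
        return result
    return wrapper

# 'set_color' stands in for whatever unreliable command you're chasing.
@instrumented
def set_color(channel, value):
    return True  # pretend the device accepted the command
```

The point isn't the logging library -- it's that every suspect call leaves a timestamped trail you can compare against what the hardware actually did.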

(Disclaimer: don't poke circuits with wet fingers. It's not good for the circuit, it's not good for your finger, it likely won't tell you anything and it looks silly.)

After you have verified that the command is happening the way you expected, just take a moment and double-check the documentation matches your expectation. This step can save you hours of effort. Of course, about one in five times the documentation is also wrong. Optimist.

Okay! So your code is working! The command matches the documentation! And it doesn't work. Now what? Debugging starts. That's right, you don't only have to debug the code you write, you will probably (always) have to debug code you didn't write and hardware you didn't create. And this part you usually can't fix! Man, computers are great!

More common than outright failure is inconsistent operation. That is, sometimes it works and sometimes it doesn't. This was the case with the SL APIs. It's very rare that something which is not actually defective is truly random -- and if it is, you can't work with it anyway. You need to start in on the scientific method and work out which conditions it works in, and which conditions it doesn't. At the over-simplified level, that's:

1) Create a theory
2) Devise a way to test your theory
3) Execute your test and record the results
4) Revise the theory based on the new information
5) Repeat at 2
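Step 3 is where most people get sloppy, so here's a minimal sketch of what "execute your test and record the results" can look like in Python. The `fake_command` and its delay conditions are invented for illustration -- substitute whatever flaky call and whatever conditions your current theory points at:

```python
import random

def run_trials(command, conditions, trials=100):
    """Run `command` under each condition `trials` times; tally success rates.

    The resulting table is the raw data you revise your theory against.
    """
    results = {}
    for cond in conditions:
        successes = sum(1 for _ in range(trials) if command(cond))
        results[cond] = successes / trials
    return results

# A fake flaky command: reliable only once the delay is long enough.
def fake_command(delay_ms):
    return random.random() < (0.5 if delay_ms < 20 else 0.99)

rates = run_trials(fake_command, conditions=[0, 10, 20, 50], trials=200)
```

Even a crude table like this beats "I ran it a few times and it seemed better" -- numbers you wrote down are numbers you can compare next iteration.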

You'll notice there's no exit condition. Usually, you stop when you get it reliable enough, and that depends on your needs. Or you stop when the hardware guy finally gets tired of your questions and adds some resistors to the circuit, suddenly stabilizing it. ;)

But to get you started - most of the time I've run into inconsistent operation, it has been timing. From the Atari Jaguar to cheap LCD panels to, yes, Second Life, it's really common for both APIs and hardware to drop commands if they come packed too close together. So, spacing them out is a good first test.
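The spacing test itself is almost embarrassingly simple -- wrap the sends in a loop with a delay and see if the drop rate changes. A sketch (again, `send` is whatever your real command is; the gap value is a starting guess, not a magic number):

```python
import time

def send_spaced(send, commands, gap_s=0.05):
    """Send each command with a fixed gap between writes, so a device
    that drops back-to-back commands gets time to settle."""
    for cmd in commands:
        send(cmd)
        time.sleep(gap_s)

sent = []
send_spaced(sent.append, ["reset", "mode 3", "go"], gap_s=0.001)
```

If the problem vanishes with a big gap, binary-search the gap downward until you find the real tolerance.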

What if it is truly random (or you just can't narrow it down any further), and you have no choice but to use it? Well, first, push back really hard, because you really don't want to have to support your code on this broken system for the next 10 years, do you? It's not over when you push it to GitLab!

But if you really can't, well, you need to improve the odds of success. Can you safely execute the command twice? Safely means that it's okay if the command works once and fails once, and still okay if it executes twice. Apparently the NES needs to do this workaround on the controller port if making heavy use of the sample channel, for instance. Alternately, is there a way to VERIFY the command, and repeat it if it failed? Is there another way to accomplish the same thing, and bypass the broken command? Even if it's slower, that is probably better than unreliable.
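The verify-and-retry pattern is worth having in your pocket, so here's a hedged sketch of it -- the `flaky_send` simulation below is invented just to show the shape:

```python
def command_with_verify(send, verify, retries=3):
    """Issue a command, check it actually took effect, retry if not.

    Returns True if `verify()` ever reports success, False if we gave up.
    """
    for _ in range(retries):
        send()
        if verify():
            return True
    return False

# Simulate a command that only takes effect on the third attempt.
state = {"attempts": 0, "done": False}
def flaky_send():
    state["attempts"] += 1
    if state["attempts"] >= 3:
        state["done"] = True

ok = command_with_verify(flaky_send, verify=lambda: state["done"])
```

Note the design constraint from above baked into the signature: this is only safe if sending the command more than once is harmless.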

In the case of outright failure, you really have two possible causes: either the device/API is actually broken, or you are commanding it wrong. In the former case, you probably can't do very much about it -- and if you are still reading here, you probably can't prove it either. So you need to figure out what you are doing wrong.

Unfortunately, this one is much harder to advise - it all comes down to experience. Think about similar APIs you have implemented, and compare to the information you have. Does it make sense to try a different byte order? What about bit order? Is there an off-by-one error? (This is common in software and hardware both!) Make sure, if you are working with hardware, that it is safe to send bad data on purpose. Set up an isolated test bed, and try different things. Use the scientific method again, and you might just figure it out!
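Byte order and bit order are cheap theories to test because the transforms are tiny. Two helpers you might throw into your isolated test bed (nothing device-specific here, just the raw transforms):

```python
def bit_reverse(byte):
    """Mirror the bit order within a byte: 0b00000001 -> 0b10000000."""
    out = 0
    for i in range(8):
        if byte & (1 << i):
            out |= 1 << (7 - i)
    return out

def byte_swap16(word):
    """Swap the two bytes of a 16-bit value: 0x1234 -> 0x3412."""
    return ((word & 0xFF) << 8) | (word >> 8)
```

Run your known-bad command through each transform (and both together) and see if the device suddenly starts listening.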

Then you can enjoy a coffee and go tease the hardware guy that the software guy found their bug. That's always fun. ;)

Monday, October 12, 2020

VGMComp2 - Looking Back

Many years ago, I undertook a project to come up with a simple compression format for music files on the TI-99/4A. My goals were simple, and somewhat selfish. There was a music format called VGM that supported the TI's sound chip, and music from platforms that used the same chip, like the Sega Master System, was easily obtainable. However, the files recorded every write to the sound chip along with timing information, and tended to be very large.

I built a system that stripped out the channel-specific data and moved all timing to separate streams - thus this four-channel sound chip now had 12 streams of data: tone, volume, and timing for each channel. With all the streams looking the same, I implemented a combination of RLE and string compression and got them down to a reasonable size. There were a number of hacks for special cases I noticed, but ultimately it was working well enough to release. It was, in fact, used in a number of games and even a demo for the TI, so it was a success.
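For anyone unfamiliar with RLE, the core idea is just collapsing runs of repeated values into (count, value) pairs. A toy sketch -- this is the general technique, not VGMComp's actual format, which layers string compression and special cases on top:

```python
def rle_encode(stream):
    """Collapse runs of repeated values into (count, value) pairs."""
    out = []
    i = 0
    while i < len(stream):
        run = 1
        while i + run < len(stream) and stream[i + run] == stream[i] and run < 255:
            run += 1
        out.append((run, stream[i]))
        i += run
    return out

def rle_decode(pairs):
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

data = [9, 9, 9, 9, 4, 4, 7]
encoded = rle_encode(data)  # [(4, 9), (2, 4), (1, 7)]
```

Sound data is full of held notes and repeated volumes, which is exactly the kind of redundancy RLE eats for breakfast -- that's why making all 12 streams "look the same" mattered.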

But it always bothered me. Why did I need the hacks? Why did it use so much CPU time? Could I do better? I spent a fair bit of time, on and off, coming up with ways to improve it. And finally, I convinced myself that I could. The new scheme was similar, but reduced the four time streams to just one, and changed out one of the lesser-used compression flags for a different idea. My thinking was that even if all else was equal, going from 12 streams down to 9 would buy me 25% CPU back.

But it didn't. In fact, the new playback code barely performed as well as the old. Even after recoding it in assembly, and heavy optimization, it was still reporting only about 10% better CPU usage than the old one. It took a lot of debugging to understand why, and what I finally realized was that the old format was simply better at determining when NO work was needed - it simply checked the four time streams. The new format needed to check the timestream and the four volume channels. This means that the best case (no work at all) was slightly faster on the old player than the new one. But the new one was markedly better in the worst case (all channels need work), just because the actual work per channel was simplified some.

Compression itself didn't really give me the wins I hoped for either. After creating specific test cases and walking through each decompress case (and so debugging them), compression was better, but not amazingly so. The best cases, true, were about 25% smaller than the old compressor, but the worst cases were pretty much on par, and that only with the most rigorous searches.

What I finally had to admit to myself, in both cases, was that the years of hacks and tricks and outright robberies in the original compressor had created something that was pretty hard to beat. But, it was also impossible to maintain, rather locked in the features it could support, and most importantly, I did beat it. Maybe not by much, but 10% on a slow computer is not a bad win.

And that, really, was something else I had to admit to myself. The TI is a slow computer. Even back in the day it was not terribly speedy. I tend to forget sometimes, working on my 3GHz computer, that the 3MHz clock of the TI is a thousand times slower than my modern PC. And that's ignoring all the speedups that modern computers enjoy. (It's kind of a shame how much of that power modern OSes steal, but I guess that's a different rant.) Anyway, the point is that even writing all 8 registers on the sound chip every frame takes almost 1% of the system's CPU. And that's just writing the same value to all of them. That I can decompress and play back complex music in an average of 10-20% CPU is maybe not as awful as I felt when I first realized it.
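A quick back-of-envelope lands right around that "almost 1%" figure. The per-write cycle count below is an assumed round number, not a measurement (the real cost depends on the instruction used and the console's wait states), but it shows the shape of the math:

```python
# Back-of-envelope check of the "almost 1% just to write the registers" claim.
# CYCLES_PER_WRITE is an assumed figure, not measured from real hardware.
CPU_HZ = 3_000_000        # TI-99/4A clock, roughly 3MHz
FRAMES_PER_SEC = 60
WRITES_PER_FRAME = 8      # all 8 sound chip registers, every frame
CYCLES_PER_WRITE = 60     # assumption: write instruction plus wait states

cycles_per_sec = FRAMES_PER_SEC * WRITES_PER_FRAME * CYCLES_PER_WRITE
cpu_fraction = cycles_per_sec / CPU_HZ  # 0.0096 -- just under 1%
```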

There's of course another advantage to this new version. It was a goal to also support the second sound chip used in the ColecoVision Phoenix - the AY-3-8910. Borrowed from the MSX to make porting games from it simpler, this became a standard of sorts in the Coleco SGM add-on from OpCode, and so supporting it, at least in a casual manner, seemed worthwhile. This goal expanded when a member of the TI community announced that he'd be resurrecting the SID Blaster - a SID add-on card for the TI-99/4A. So, I made the toolchain support both of these chips -- although I cheated. A lot.

In the case of the AY, it wasn't so bad. I just ignored the envelope generator and treated it like another SN with a limited noise channel but better frequency range. The SID was trickier. I did the same abuse - ignoring the envelope generator and treating it like another SN, but with only three channels. Unfortunately, on the SID the envelope generator is also what sets the volume, so some trickery was required. Fortunately for me, the trickery appeared to work. ;)

I have to admit that I'm not convinced that using both chips together will be acceptable, performance-wise. 20% doesn't sound bad -- but that's on average. If both chips experience a full load on the same frame, it could be more than double that. On the other hand, if you can get away with running the tunes at 30Hz and alternating the sound chips, that would be fine. That's likely what I'd do.

Anyway, there was yet one more goal, and that was a robust set of tools to surround the new players. In the end, I created nearly 50 separate tools. And being very silly, many of them look Windows-specific (but they are all just console apps and will port trivially, someday). But we have player libraries for the ColecoVision and the TI, a reference player for the PC, a dozen sample applications, 10 audio conversion tools (including from complex sources such as MOD and YM2612), and over 20 simple tools for shaping and manipulating the intermediate sound data. I have no doubt it's very intimidating, but short of tracking the data yourself (which, frankly, is a better route than converting), I believe there's no better toolset for getting a tune playing on this hardware.

Of course, if you can track it yourself, you can still use this toolset to get from tracker to hardware. ;)

I do intend to use this going forward, of course. The first user will probably be Super Space Acer, as that's near the top of my list (Classic99 is ahead of it). Though that game is nearly done, it will benefit from the improvements, and I need to finish it and port it around. With luck, once people have a chance to figure out the new process, they'll use it as well. I'll have to do some videos.

Anyway, the toolset is up at GitHub, and eventually on my website too, once I get that updated.

(BTW: I very, very, very rarely log into the GitHub website. Using the ticket system and sending me notes there is all well and good, but generally I just push my project and move on. That's why I use Git in the first place, because SIMPLE. My point is - expect turnaround times to be really slow if that's how you reach out to me. I'm not ignoring you. I just haven't seen it yet. I say this because, logging in to get the URL there, I noticed some stuff waiting for me. ;) )

Sunday, May 24, 2020

Comdex 1999 by CryoModem - review of an early 80s text file

(This text file was written sometime in the 80's. I don't know who originally wrote it, but I've always enjoyed it. Now in 2020, I look back to see what was right and what was so very wrong...)


LAS VEGAS: The fall 1999 Comdex was, as always, a bit disappointing. The star of
the show was clearly the Yamagazi RoomTemp CryoModem I'm using to transmit this
story. Yamagazi claims it blitzes out data faster than the speed of light, which
means that this report may have made it back to the office before the show even
took place.

(By 2000, consumer dial-up modems had peaked at 56 kilobit/s, with hardware compression on top of that to allow simple data to flow faster. DSL was on the rise, although speeds topped out around 1 megabit both ways for commercial-grade lines, and cable modems, which promised faster speeds, were starting to roll out but were not yet a sure thing. BBSs and the concept of dialing a dedicated service were all but finished, and the internet, while still young, had cemented itself as a necessary service.

In case it's not obvious, this paragraph is the whole premise of the article - that faster-than-light data transmission caused the report to travel back in time to the 80s.)

This year, the computer industry's companion show -Legaldex- nearly outdrew the
hardware and software exhibits. With so many pending lawsuits and so much money
at stake, it's really no surprise that Legaldex sprawled into 11 hotel lobbies,
two parking lots, and a hallway at the Liberace museum. The Apple booths alone
commanded more than 30,000 square feet of space.

(Lawsuits over software and hardware ownership and patents have never really gone away. Sun Microsystems was fresh in people's memories for suing over Java, but Apple still rings with an air of possibility. In fact, in 1999 Apple sued a PC manufacturer called eMachines, claiming their PCs looked too much like an iMac. eMachines' eOne was taken off the market as a result.

At the real Comdex 1999 - both Sun and Microsoft mentioned their mutual lawsuit in their keynotes - Gates with a joke and McNealy directly.)

In what has turned out to be an annual tradition, IBM once again trotted out a
new graphics standard, the 3DGA. Compatible with the MGA, MCGA, HGA, EGA, VGA,
boards, this new standard heralds a "bold new era of channel profitability," according
to IBM president and owner: "Now at last serious business users can have their fancy 3-D
graphs float in space."

(This didn't happen, at least with cards. After SVGA, popular naming just sort of faded out, though there was briefly a UVGA. Of course, someone has to name everything, so the /resolutions/ still got names. By 2000 most machines were still 800x600 or lower, with 1024x768 possible but not fully supported by monitors. So based on that, the naming would have given us: CGA, QVGA, VGA, SVGA, UVGA and XGA. We didn't have widescreen yet, or at least not commonly. 3D displays were a long way off, with glasses or headsets still required most of the time even today. Glasses-free 3D displays appeared around 2010, but didn't gain popularity.

Perhaps more importantly, despite creating the PC market and defining an architecture which survived and eventually defeated all comers, flourishing for decades to come, IBM stopped being the driving force in the market in the 90's, eventually leaving it altogether in 2005 - though that's after this article. They had a brief resurgence in popularity in the late 90's with the Thinkpad laptop series, which was indeed a very good machine, but clone manufacturers dominated the desktop market and video card innovation was owned by dedicated video card manufacturers. In fact, the term "GPU" was coined by NVidia in 1999, and so would likely have been the graphical focus. 3DFX was the major leader at the time.

At the real Comdex 1999, 3DFX unveiled the Voodoo4 3D card and announced the Voodoo5. The Voodoo4 would come with 32MB RAM and support AGP and PCI for $180. The Voodoo5 would come with 64MB or 128MB RAM, and cost $230-$600.)

Big Blue also displayed yet another new keyboard. The 143-key model sports 6 randomly
scattered Ctrl keys, three more function keys, and an entire pad of SysRq keys
(though IBM did not announce why anyone needs even one). To counter IBM's new Blu
architecture, AST/Quadram/Hyundai announced Blubus-Plus, with an additional data
line and slightly more shielding. Blubus throws off so much RF interference that
airborne users can make their planes bank left and right by leaning on the
cursor arrow keys.

("Big Blue" was IBM's nickname, a reference to their logo.

Keyboards didn't change much after the 101-key keyboard, although 104 keys became the standard after the addition of two Windows keys and a Menu key. Many keyboards also added media keys - some only a few and some a lot, but play/pause, next, previous, stop, volume up, volume down, and mute became relatively standard. SysRq seems to have gone away, though we still have pause/break doing nothing most of the time...

New bus architectures did of course happen. By 2000 VESA had come and gone, and PCI was the dominant interface, though most motherboards still had an ISA slot or two for compatibility. PCI was released in '91 and caught on around '95, when Windows 95 introduced proper operating system support for it. For performance graphics, AGP was released in 1997.)

The fastest selling product at the show was IBM's just-released TBR (Technical
Bus Reference) manual, a fat compendium of IBM BIOS and chip-level errors that
the industry has had to accept as standards.

(Since IBM was no longer in charge, this didn't happen. Intel and Microsoft own the definition these days... although since IBM was still making machines in 1999, that may not have been the case then. I'm actually not sure!)

In response to the new line of IBM 240MHz machines, Compaq/Dell announced a
242MHz screamer, which it claims "makes the IBM box look like it's playing dead"
At the other end of the spectrum, we counted 35 manufacturers still selling
replacment motherboards for the original PC-1, switchable between 4.77MHz and

(In the mid-80s, when an 8MHz machine was considered serious and 25MHz insane, the idea of even a 180MHz upgrade board blew minds. Clock speeds in 1999 started in January at 450MHz and reached 600MHz by the end of the year. Intel's Celeron was 400MHz, and AMD countered with their 450MHz K6-III offering.

This was the era of the Pentium III, and there were no replacement motherboards for the PC-XT. In addition, the turbo switch disappeared during the 90's and computers just ran as fast as they could. This is also largely attributable to Windows 95 - a fixed feature set operating system meant that software could reliably use system timers, rather than CPU speed, to set their timing. The "Compaq/Dell" pairing is interesting, though in reality it was HP, not Dell, that acquired Compaq in 2002. Overclocking was big around the late 90s, though, so the 242MHz "screamer" could have just been overclocked.)

Sponsors of next year's millennial Comdex are planning to call Comdex 2000
"Finally, the year of the LAN." Other vendors are proposing that Comdex 2000 be
dubbed "The Year of the Home Application," in an effort to prod the industry
into producing at least one product that could justify buying a computer for
home use.

(Hard to address such a tongue-in-cheek comment, but LANs were pretty established for businesses during the 90s, and the rising popularity of the internet was beginning to introduce them into the home - although it would really take the cable modem's victory and the spread of WiFi to fully integrate them years later.

As for the product... that's an ongoing thing. Arguably, though, the internet was the killer app that put a PC in every home. 20 years later, today, that's starting to fade a bit. Cell phones and tablets are replacing the general purpose PC for internet access.

As always, though, gaming also drives the PC market, as it did back then. 3D gaming was becoming big, with 3DFX and NVidia pushing what the graphics card could do. AMD (via its acquisition of ATI) eventually replaced 3DFX as the big competitor.

At the real Comdex 1999, one author came away feeling that Sony's 64MB Memory Stick was the killer hardware of the show... perhaps it would have been if USB memory sticks hadn't followed on quickly, being cheaper and more compatible with the hardware that was out there.)

Ever-youthful Bill Gates' keynote address, "OS/9: The One You've Really, Really
Been Waiting For," blunted criticism that this newest version was still too hard
to use, too slow, and too memory-hungry: "Even though no third-party vendors
have taken advantage of the advanced capabilities of the seven previous
editions, dozens of developers are porting their applications over. And it will
run just fine on any system with 30 megabytes of RAM, although you may need a
bit more for your data."

(Bill Gates was only 44 in 1999. ;)

OS/2 failed out of the gate, and although IBM continued with OS/2 Warp in '94 and released the final version (Warp Server for e-Business) in '99, it only lasted a couple more years before being abandoned. Microsoft refocused on Windows with the release of Windows 95 and gained unassailable dominance for decades.

The criticisms remain, of course.

The porting comment was perhaps less of an issue; the one thing that Windows did rather well was backwards compatibility. From Win95 onwards, applications generally "just worked".

30MB of RAM was laughable in the early 80s, but by 2000 systems came standard with 32MB-128MB of RAM, and virtual memory was standard, so the numbers, while big, were acceptable by then.

At the real Comdex 1999, a release date for Windows 2000 RC3 was announced and met with some doubt. Application compatibility was causing some delay. Windows 2000 was the first version to unify the NT and 95 kernels, so compatibility with both lines was important.

Bill Gates DID give a keynote, where he focused on Windows 2000, and tried to introduce the concept of the "personal web". It was seen overall as more of the same, so the article probably described it well enough.

Speaking of operating systems, there was a mini-expo for the Linux community at the same time. One writer noted: "while it's much nicer than last year's laughable bargain basement affair, its small size gives ample evidence of the lengths Linux must go to enter the mainstream." However, Corel also showed "Corel Linux", a Debian distribution meant to be easy to set up.

Sun Microsystems also had a keynote, taking on Microsoft (with whom they were involved in an anti-trust suit), and pushed StarOffice heavily. CEO Scott McNealy's position was that software should be free (presumably hardware is the model?)

BeOS also had a booth that was well received, but there are few details beyond that. BeOS had its first x86 release in '98, but the company was sold in 2001 and the OS faded rapidly. A free reimplementation named Haiku was released in 2009 and was still active in 2018.)

In the word processing arena, MicroPro, Microsoft, and WordPerfect have packed
even more features into their bloated programs. MicroPro has purchased so many
third-party utilities that WordStar Professional Classic 7.3 is now delivered on
73 disks. WordPerfect has streamlined its 16-volume manual.

(WordStar was abandoned in the early 90s as MicroPro failed to jump on the Windows bandwagon early enough. WordPerfect was also late coming to Windows, but held on due to a large install base. However, they were sold to Novell in '94, then Corel in '96. The 32-bit version of WordPerfect for Windows 95 was plagued with release issues and the final working version was rather late. By this time, Microsoft Office, and in particular Word, was gaining rapid market share. However, WordPerfect still survives today in 2020. I don't have a manual from 2000, but the WordPerfect manual today is only 282 pages long.)

Finally, Lotus announced its 1-2-3 WZ 50-dimension spreadsheet, a "quantum leap"
above its previous 1-2-3 VZ 5th-dimension version. Although users have been
demanding this added power, say market analysts, they're still not sure what to
do with more than three dimensions. When pressed for a delivery date, Lotus
officials would only say "Sometime in the first quarter of the coming millennium."
We can hardly wait.

(Lotus 1-2-3 was overtaken by Excel in the 90's for pretty much the same reason as the word processors - it failed to take the Windows update seriously in time. However, it did survive past the 1999 date here, eventually being discontinued in 2013.

As far as I can tell, 3 dimensions is as far as spreadsheets went, and even that is through multiple tabs rather than a single 3D sheet.)

... I spent WAY too long on this...

Sunday, March 22, 2020

Software is not a Science

I've been coming to a realization that one of the great problems with software development is the fervent belief that it can be simplified, that it is a science that can be nailed down to a fixed set of guidelines and will thereafter magically be perfect.

It's not. It's simply not. Like it or not, writing software is a creative act. You can no more reduce writing software to a set of fixed answers than you can create a checklist for drawing artwork.

Let's think about this.

First off, creating software requires the development of a unique solution to the problem from a set of incomplete tools which need to be assembled into a final system. It's a lot like building with Lego -- but you don't have the advanced set with all the fancy pieces. You have 2x2s and 2x4s and a couple of 2x8s.

It gets better. Normally the problem isn't even that well defined. You have a Lego set but you don't have the instruction sheet and the box is torn so you only have half the picture.

Of course, we do have the continued advance of software development - new systems, new languages, new processes. These are all great and fancy things. These are Space Lego, and Harry Potter Lego. They let you more easily create the new worlds you are imagining. But they don't remove the creative element - you don't automatically get Hogwarts, you have to build it. That's how you bought Castle Grayskull, but not Hogwarts.

Most of the development processes that we invent fall into two categories.

The good ones aim towards giving us the instruction sheet. A set of processes that advance us towards our goal - with the understanding that it will help us build THIS product. We can use bits and pieces of one instruction sheet - creatively - to help build other products. But naturally you need to intelligently apply your creativity.

The bad processes aim towards removing the random creativity element, assuming that all problems can be solved in a fixed, predictable manner. Problem A is given to developer B who applies solution C, which takes time D. Neat, tidy, and guaranteed quality output. About as likely as Gingerbread men inviting you to tea and gumdrops, but we invest thousands and thousands of dollars in pursuit of this goal.

Software needs to be run more like art. Unfortunately, I don't have insight from professional artists on how it runs and whether it works for them. I'll need to do some research. :)