Tuesday, September 13, 2022

Cool Herders Graphics Experiments

Way back in the day, I created a silly little demo for the Sega Dreamcast using the then-new KOS development kit. My buddy Binky came up with the design and all the artwork, and we put the original demo together in like two weeks.

Let's zoom in a little on what's going on there...

We've got some nice overlaps going on: boxes overlapping the ground, trees overlapping the ground and the boxes (both above them and beside them). There are probably lots of ways to do this, but I'm going to tell you how I did it. It's actually pretty basic.

The Dreamcast was the first 3D hardware I worked with, and knowing that, I made all the graphics out of polygons. Each tile/sprite is just two triangles using flat projection, but with the Z axis tilted slightly. The top of the tile is closer to the screen than the bottom. As a result, for free, we get nice overlaps of the sprites and the tiles. It's as if they are actually (slightly) standing up!
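A minimal sketch of the tilted-quad trick (this is my reconstruction, not the original KOS code - the tile size and tilt amount are illustrative values):

```cpp
#include <array>

// Every tile is one quad (two triangles), but the top edge of the quad sits
// slightly closer to the camera than the bottom edge.  With a depth buffer,
// a tall sprite standing in row r then naturally overlaps the ground and
// objects in the rows above it - the "slightly standing up" effect.
struct Vertex { float x, y, z; };           // z: smaller = closer to the camera

constexpr float TILE = 48.0f;               // tile size in pixels (assumed)
constexpr float TILT = 0.5f;                // how far the top edge leans forward

// Corners of the tile at grid cell (col, row); lower rows are closer overall.
std::array<Vertex, 4> tileQuad(int col, int row) {
    float x = col * TILE, y = row * TILE;
    float zBase = -static_cast<float>(row); // each row down is a step closer
    return {{
        {x,        y,        zBase - TILT}, // top-left  (leans toward camera)
        {x + TILE, y,        zBase - TILT}, // top-right (leans toward camera)
        {x,        y + TILE, zBase},        // bottom-left
        {x + TILE, y + TILE, zBase},        // bottom-right
    }};
}
```

The key property: the top of a tile in row r ends up closer to the camera than the bottom of the tile in row r-1, so anything standing in a lower row draws over the geometry behind it for free.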

This caused some people to make funny faces at me, but nobody ever suggested a simpler idea. (Some people suggested multiple textures for different situations, but why?? Extra work, extra video RAM, extra processing... for what?)

So this was on the Sega Dreamcast. We had 8MB of video RAM... so we ended up deciding on a basic system with 3 pages of tiles (originally the three had meaning, but in the end only the third page is special). Full 24-bit color (I think it was actually 16-bit on the DC), and lots of wasted space. Each page has 4 pages of animation that play continuously, and none of this causes the Dreamcast to even notice that you are there. ;)

Of course, some levels have more graphics than others, but I'm sticking with the NZ stage for now.

So this worked fine, and we released a working game and had some fun with it.

After a while, I started working on porting and updating the game, and we landed on the Nintendo DS. Going from 8MB of video RAM to 768KB (with a few limitations on how to use that memory) meant the wasteful old layout had to be revised. However, the DS's 3D hardware was still up to the task of all those tiles, so I was able to use the same actual layout. We opted to just zoom the screen in and add a radar, so that the original 640x480 resolution was still honored on the smaller 256x192 display. (This also helped the 3D system, as I only render the polygons that are actually in the viewing rectangle.)

Eventually, I wrote a project that parsed the Dreamcast tiles and condensed the texture pages, removing duplicate tiles and making a lookup table that could be used when the code asked for one of the original tiles. We reduced the background color depth to 8 bit as well, which still looked pretty good!

(Still a bit of wasted space, but it was the closest power of two size that fit all the levels. Incidentally, this tight packing is why the game has trouble on DS emulators. They have an off-by-one bug in the texture mapping for flat projection that causes the textures to pull one more pixel than the hardware does. After years and years this has never been fixed, except in DrasticDS for Android.)

With that, the final game pretty much kept the original graphics on a much smaller system. The DS version also added special attacks and improved the story.

(All that said, the DS was actually powerful enough to render the whole screen, as this very early test shot shows!)

Recently, I got a challenge and decided to see if I could get this going on the Game Boy Advance. Now we're talking considerably more restrictions: only 64KB of video RAM available for tiles, oh, and it's TILE based, not 3D.

I reduced the color depth to 16 colors for this... and after the first pass, I was able to use a different palette for the destructibles compared to the ground tiles, giving me 32 colors overall (well, really 30 cause of the way the GBA does graphics).
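For anyone curious how the GBA pulls that off: each 4bpp background map entry carries a palette-bank number in its top four bits, so different layers (or even individual tiles) can use different 16-color palettes. Color 0 of every bank is transparent, which is where "really 30" comes from - 15 usable colors per bank. This sketch builds a text-mode screen entry; the bit layout is the real GBA one, but the function name is mine.

```cpp
#include <cstdint>

// GBA text-mode background screen entry:
//   bits 0-9  : tile (character) index
//   bit  10   : horizontal flip
//   bit  11   : vertical flip
//   bits 12-15: palette bank (16 banks of 16 colors; color 0 is transparent)
uint16_t screenEntry(unsigned tileIndex, bool hflip, bool vflip, unsigned palBank) {
    return static_cast<uint16_t>((tileIndex & 0x3FF)
                                 | (hflip ? 1u << 10 : 0u)
                                 | (vflip ? 1u << 11 : 0u)
                                 | ((palBank & 0xF) << 12));
}
```

So the ground tiles can point at palette bank 0 and the destructibles at bank 1, with no extra tile data at all.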

Originally, I was considering hand-calculating the tiles, but I didn't think the CPU would be up to the task. It might have, but then I remembered, oh yeah! There are two layers.

So originally, I thought that would do it. I'll put the top halves on layer 2, and the bottom halves on layer 1. I can put the sprites in the middle.

But, then I realized... oh, wait, the destructible objects (crates, etc) need to go on top of the ground, but under the tops. So fine, three layers then. No problem.

That /almost/ worked. I didn't grab a screenshot, unfortunately, but where the leaves overlap on the left, the overlap was deleting the previous tile and causing corruption (only on specific animation frames!!). But this was easy to resolve - I used the fourth and final layer and alternated the top layer between even and odd.
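The alternation might be as simple as this (the exact parity rule - column parity here - is my assumption; the post doesn't spell it out):

```cpp
// When two "top half" tiles land on the same layer, the later one overwrites
// the earlier one's map entry and corrupts the overlap.  Sending neighbours
// to alternating layers means adjacent tops never fight over the same cell.
// Layer numbers here are illustrative: both sit above the sprite layer.
int topLayerFor(int col) {
    return (col % 2 == 0) ? 2 : 3;
}
```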


Check out the layers!

(The four separate layers)

(Combined layers 1 and 2, and 3 and 4).

To get here, I have a new script that processes the original Dreamcast graphics, converting the 48x48-pixel blocks (!!) into the 8x8 character tiles that the GBA works with (a 6x6 grid of them per block). The lookup process is just fast enough on the GBA, but will probably need a little bit of optimization.
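The slicing step could be sketched like this (assumed code, not the actual script - it just shows the 48-to-8 arithmetic):

```cpp
#include <array>
#include <cstdint>
#include <vector>

constexpr int BLOCK = 48;                 // Dreamcast block size in pixels
constexpr int CHAR  = 8;                  // GBA character tile size
constexpr int GRID  = BLOCK / CHAR;       // 6x6 character tiles per block

using CharTile = std::array<uint8_t, CHAR * CHAR>;

// `block` is BLOCK*BLOCK pixels, row-major; returns the 36 character tiles
// in reading order (left to right, top to bottom).
std::vector<CharTile> sliceBlock(const std::vector<uint8_t>& block) {
    std::vector<CharTile> tiles;
    for (int ty = 0; ty < GRID; ++ty)
        for (int tx = 0; tx < GRID; ++tx) {
            CharTile t{};
            for (int y = 0; y < CHAR; ++y)
                for (int x = 0; x < CHAR; ++x)
                    t[y * CHAR + x] = block[(ty * CHAR + y) * BLOCK + tx * CHAR + x];
            tiles.push_back(t);
        }
    return tiles;
}
```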

The new tool also looks for duplicate tiles, and so it can actually find more duplicate graphics (at 8x8) than the DS version did (at 48x48, though the DS version removed empty space).

In the end, all 6 levels, with /almost/ all the graphics, fit in about 350KB, and a single level's tiles fit in a single character set on the GBA (32k or less). I have to say 'almost' because the Toy Factory has so much animation that I had to remove a few tiles. Fortunately, no animation was removed - just the paint on the factory floor (there was a whole tile page of arrows), and three of the five colors of gift boxes.

Anyway... that's all I had to write tonight. I was just pleased to see it come together on such a different system. We'll have to see where it goes in the end!

Thursday, July 28, 2022

Necrobiotics My Spinnerets...

If you don't like spiders, don't read this. ;)

Lots of buzz over the relatively recent release that some researchers are using dead spiders as grippers. There's literally nothing groundbreaking here - they inflate the spiders' legs, causing them to extend, then release the pressure and they contract. That's how spiders walk in the first place.

They spend some time trying to justify it - oh, it's biodegradable, they can sometimes lift more mass than the mass of the dead spider, etc etc. They even invented a term - necrobiotics.

But it's all kind of bunk, isn't it? First off, there isn't a massive issue with grippers filling up landfills as disposable parts. So who cares that they are biodegradable? In fact, let's talk about that part!

I didn't see any mention of the number of useful cycles, but a typical gripper is going to be rated in the tens of thousands, if not hundreds of thousands, of cycles. The lifespan of a typical gripper will be measured in years.

The spider starts decaying the moment they kill it (and yes, they kill the spiders they use for this experiment). The window during which it's going to be useful, before the bladder is damaged or the creature simply becomes too dry to operate - that's going to be measured in hours. Which means at least every day, to keep your machine operating, you need to capture a spider, kill it without damaging it, carefully inject the actuator into the correct bladder, seal the hole, and then put your machine back together. Maybe you can get quick at that, but it seems like a lot of effort!

They talk about using them for pick and place machines, which need to rapidly and accurately pick up, position and place thousands of parts an hour. It's hard to believe that using dead spiders is going to revolutionize the already-very-simple-and-reliable grippers that are used for this. (Suction, if I understand correctly...) And there's no indication about how long the extremely tiny and fragile hairs used to actually make things stick to their feet will last without life to renew them.

That last point got me thinking. Clearly, the answer is not zombie spiders, but borg spiders. If we can implant a small computer that is capable of controlling the spider, we may have something. No, not as a pick and place, that's stupid. But for other things. Reconnaissance, for instance. I dunno what else. Targeted pest control, maybe. ;)

My thinking there is that the computer is able to drive the spider, as well as receive sensor feedback. But most importantly, it's able to turn off the interface and restore the spider to natural operation. The advantage of this is that, presumably, with appropriate rest breaks, the spider will naturally feed itself, and its biological processes will naturally repair the normal wear and tear of operation.

Morally, of course, this falls on the dark side of science. One might imagine that during periods of computer control, the spider would be experiencing a living hell. We might well be able to determine whether or not spiders have any sentience - any sense of self. If they did, I can only imagine the horrors they would try to cope with when the computer turns off.

I'm (fortunately) far too busy to create borg arachnids. Partially because spiders creep me right the hell out. And besides, for a pick and place, clearly ants are a far better choice. ;)

Wednesday, February 23, 2022

You're Doing it Wrong - Allocations

One of the things that frequently amuses me/drives me into a rage is the modern mentality that any old design pattern is bad, so don't learn it. Then code breaks, we old timers nod and say "yes, that sounds about right", and we hear back "why is programming so hard? I'm going to fix it so it's easy!"

The article I'm reading right now is about a fellow who documents his battle to understand a kernel level crash that ultimately boiled down to a race causing a use-after-free. For those who don't know, this is exactly what it sounds like. You allocated some memory, then you freed it, then you accessed that pointer after the free. This is very bad and causes everything from security exploits to system crashes, because after a free the system is allowed to do anything it likes with that memory location, including re-allocating it, or marking it illegal because the address space is needed elsewhere.

Use-after-free is prevented by having a clear memory ownership policy. This is a design concept wherein you have a set policy that defines unambiguously who owns a block of allocated memory at any given time. Some systems may have multiple owners - in this case you need a management system such as smart pointers. Some systems may pass ownership from one function to the next - in this case it's usually wise for the code which is losing ownership to forget that pointer as soon as possible. For instance, you can null it out - then you /can't/ accidentally use it. There is literally no reason to remember an address once it has been freed - null that pointer.
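Both halves of that policy can be made mechanical. A sketch (the helper names here are mine, not from any particular codebase):

```cpp
#include <cstdlib>
#include <memory>

// C-style: null the pointer the moment you free it.  Any later misuse
// becomes an immediate, debuggable null-pointer fault instead of a silent
// use-after-free on recycled memory.
template <typename T>
void freeAndNull(T*& p) {
    std::free(p);
    p = nullptr;   // no reason to remember a freed address
}

// C++-style: single ownership enforced by the type system.  Moving a
// std::unique_ptr empties the source, so the old owner *can't* quietly keep
// using the object - the handoff is visible right in the code.
struct Packet { int id; };

void consume(std::unique_ptr<Packet> p) {
    // ... this function is now the sole owner; the packet is destroyed
    // automatically when p goes out of scope.
}
```

Either way, the point is the same: the policy lives in the code, not in someone's memory of who was supposed to call free.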

This also helps a lot with memory /leaks/. If you know who owns a block of memory, then that implies quite strongly who is responsible for /deleting/ that block when it's no longer required.

This is all a design concept - nothing happens in code until you decide it. And once decided, no exceptions. Exceptions cause bugs. And besides, if you need exceptions, that means your design doesn't fit the application and needs to be rethought. My personal observation is that my design isn't right till the third time I implement it. Less than that, and I get nervous. It's like having your code build and run the first time - you should be thinking 'Oh no. What did I miss?' ;)

Stable, reliable code beats buggy code that's 0.001% faster on Tuesdays. EVERY. SINGLE. TIME. You'll thank yourself when those good night sleeps start taking the place of early morning panic calls and late night debug sessions.

I do agree with how the fellow ended it, though. I'm not naming him cause my rant isn't really his fault, and this ended up a little off topic, but kudos, bud.

"Just go to sleep, because everything is broken anyways :)"