Tuesday, November 7, 2017

Investigation continues...

I went through and verified the pinout of the cart, burned a new AVR, and dropped it on the board to test, but it's still just as flaky. A couple of things were interesting. One is that when it partially works, it seems to get most of the bytes right (because the program selection is mostly correct). That suggests either I'm lucky or the address code is working. (I suspect the latter - if I were lucky, I wouldn't be having an issue.)

The logic analyzer capture was all over the place, to the point where I couldn't make sense of what I was seeing. I also determined that my console's power switch is wearing out... so I need to make that repair before I can continue anyway.

One thing that has crossed my mind is that, with the new compiler, the code may be running too fast. I check the MDIR and MODE pins immediately after detecting GSEL -- maybe that's too soon? When I get the logic analyzer properly on the bus, I'll recheck that theory.
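If it is a settling-time problem, the test is easy enough to sketch. This is a minimal illustration of the theory, assuming hypothetical pin assignments (GSEL on PB0, MODE on PB1, MDIR on PB2) - not the actual cart firmware:

    #include <avr/io.h>
    #include <util/delay_basic.h>
    #include <stdint.h>

    int main(void)
    {
        DDRB = 0;                            // all bus-facing pins as inputs
        for (;;) {
            while (!(PINB & _BV(PB0))) { }   // wait for GSEL to assert
            _delay_loop_1(3);                // ~9 cycles of settle time first
            uint8_t mode = PINB & _BV(PB1);  // then sample MODE...
            uint8_t mdir = PINB & _BV(PB2);  // ...and MDIR
            (void)mode; (void)mdir;          // dispatch the GROM access here
        }
    }

If the old compiler's slower code was effectively providing that delay for free, a few cycles of deliberate settle time would explain (and fix) the regression.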

This shouldn't even be an issue, but I am leaning back towards the ROM-based cart.... I just want this part to work. ;) In the meantime, though, I re-laid-out the cart to use the ATTINY861 instead - it's a 20-pin part instead of a 14-pin one. It doesn't look as much like a GROM, but it has enough pins to bring out the ISP header. I also positioned it so that, in theory, the ZIF socket will fit on the board while it's in the console. I haven't ordered this board or the new parts yet, though...

I should probably test programming one of the 128MB flash chips as well -- if my new programmer can pull that off, maybe I should stop wasting my time on the GROM emulation. (But darn it, this part was supposed to "just work". ;) )


Saturday, November 4, 2017

Didn't Work First Try

I never should expect that it will, but I'm a bit disappointed and frustrated at this point.

I had two things to do today - the first was to port my GROM simulation code to the ATTINY84 that I selected for GROM boot. I thought it would be nice to have that ready for startup, and I could use the EEPROM for high-score saving.

That port went reasonably well, and for the most part it did what I expected. Then I disabled the reset pin and discovered that I can't seem to get high-voltage serial programming - which is needed to reprogram the chip once the reset pin is gone - to work. I spent several hours on this, including research and repeated rewiring. I seem to recall having this issue before and eventually deciding that keeping a reset pin was a good idea, but I need every I/O an ATTINY84 has. I think I may nudge up to the next physical size.

Then I went to build the cartridge prototype, and pretty much everything on that PCB was problematic.

First I discovered that I hadn't actually bought all the ICs I needed to assemble it. I was able to raid the parts bin, but it was frustrating. Then, of course, the ATTINY footprint was too close to the IDE port to allow a ZIF socket (I always forget how big they are), and I forgot that the holes for its pins need to be larger.

I soldered leads onto the ZIF socket to get it on the board, and forgot that you need to do that while it's OPEN, not closed, because otherwise the contacts don't always open up for you. ;)

Finally, I got it all together and plugged it into the TI. It didn't come up reliably - it's very flaky - but at this point I think I'm done for the day. When it did come up, the data was correct: I got as far as seeing the menu entries and even getting EasyBug to start booting, so the basics are working. Either there's some timing issue, or I configured one of the I/Os wrong, or (worst case) the IDE circuitry is interfering.

Because I was too dumb to test it before I put the IDE circuitry on.

So I think the next step is to take Jim up on his layout offer while choosing a new chip for this approach, and we'll see which layout works first. ;)


I also ran a few surveys to see what people thought about dither patterns. My first favorite was shot down pretty hard, but in the second pass there are two contenders neck and neck, one of which IS my favorite. Sadly, there are only about a dozen votes, which speaks volumes about the potential market for this thing. Oh well. ;)

So it looks like we'll be using an ordered dither, but to get around the terribly washed-out effect I was getting, I reintroduced a tiny bit of error distribution. It makes a difference in the color yet preserves the more regular patterns of the ordered dither, and I'm pleased that people like it. It's really down to whether I'll use the 2x2 or the 4x4 pattern at this point. (When I re-encode, I'll probably just do the top four so I can swap patterns at the last minute without worrying that the frame numbers might change...)
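For the curious, the hybrid is conceptually simple. Here's a minimal sketch of the idea - not the actual encoder - using a standard 4x4 Bayer matrix; the single-pixel carry and the 25% error fraction are illustrative assumptions:

    #include <algorithm>
    #include <cstdint>

    static const int BAYER4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    // Quantize one 0-255 gray pixel to black or white; reset `carry` to
    // zero at the start of each row.
    uint8_t ditherPixel(int x, int y, int gray, int &carry)
    {
        int v = std::clamp(gray + carry, 0, 255);
        int threshold = (BAYER4[y & 3][x & 3] * 255 + 8) / 16;  // scale 0..15 up
        uint8_t out = (v > threshold) ? 255 : 0;
        carry = (v - out) / 4;  // feed a quarter of the error to the next pixel
        return out;
    }

The ordered threshold keeps the regular pattern; the small carry nudges the average level back toward the true color, which is what fixes the washed-out look.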

You can see the current challenge video here (if you can get past YouTube's scaling...): https://www.youtube.com/watch?v=NdO-lokD7DM

Thursday, October 5, 2017

I think it'll work...

I have laid out an IDE interface with an UberGROM powered by an ATTINY84 - that'll give me 8k of GROM on the cart, which is plenty to bootstrap and maybe even run the game.

I'm expecting to need 32k to run, but I guess we'll see.

Anyway, I've still got a long way to go... hopefully using this will let me work out the kinks and prove out both the hardware and the software. For my first pass, I'm just going to load EasyBug into the GROM so that I can poke at the CF card manually.

I was silly and laid out the PCB before getting the TI cart outline, so I made it way too tall. Still, I think there's a good chance it'll fit. One more night should be enough to get me there... then I can order some and we'll see.

Monday, October 2, 2017

Dragon's Lair Initiated

Honestly, it was initiated a long time ago - I have been building the tools for this project literally for decades, and I got the license six months ago (the delay since then is my fault too, but it's not too late yet!)

I posted the proof of concept on YouTube back in May 2015: https://www.youtube.com/watch?v=iOFPusM2dtM

I haven't had a lot of free time to work on it since then, but since the software was proven feasible (and all the video encoded and spliced out), I worked loosely on getting the license. I finally got hold of someone at ReadySoft who okayed me, and I paid for the license, giving me a year to get the thing done and out there.

Of course, in typical me fashion, I worked on other things. Sigh.

But this is now top priority with six months to go, and the last unknown is the hardware.

I actually thought I had this more or less nailed down... I found some nice 128MB flash chips in China, and they're still available. With those, some voltage conversion, and a dumb and simple banking scheme, I should have enough memory to store the 16,384 8k banks that I'll need. And this is still true! But the bummer with this scheme is that I've never really figured out how I was going to program the damn chips.

As I sat down and started sketching out the board, I began considering alternatives again, and found myself wondering: "why not CompactFlash?" It's 5V compatible, comes in huge sizes for low-ish cost (better than the flash chips, anyway), and is easy to program.

I spent some time studying the datasheets and couldn't see a reason why not. The worst-case minimum cycle time at the slowest speed is 600ns; the TI memory cycle is 667ns. So why not? I set out to find out.

There are a few why-nots... one is that the card automatically goes to sleep, and is rated to take 20ms to wake up for a read. That's a pretty long time, and my video playback needs the fastest cycle time possible. But if I can wake it up at the start of a scene and keep it awake until the scene ends, that should be okay. I can read multiple sectors in one command. Feasible!
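For reference, the "multiple sectors in one command" part is standard ATA: load the sector count and LBA into the task-file registers, then issue a single READ SECTORS. Here's a hedged sketch of the shape of it - the base address, the 8-bit transfer mode, and the bare-bones polling are all assumptions, not the actual cart design:

    #include <cstdint>

    volatile uint8_t *const CF = (volatile uint8_t *)0x6000;  // hypothetical mapping
    enum { DATA = 0, SECCNT = 2, LBA0 = 3, LBA1 = 4, LBA2 = 5, DEV = 6, CMDSTAT = 7 };

    void readSectors(uint32_t lba, uint8_t count, uint8_t *dst)
    {
        CF[DEV]     = 0xE0 | ((lba >> 24) & 0x0F);  // LBA mode, master device
        CF[SECCNT]  = count;                        // sectors for this command
        CF[LBA0]    = lba & 0xFF;
        CF[LBA1]    = (lba >> 8) & 0xFF;
        CF[LBA2]    = (lba >> 16) & 0xFF;
        CF[CMDSTAT] = 0x20;                         // READ SECTORS
        for (uint8_t s = 0; s < count; ++s) {
            while (!(CF[CMDSTAT] & 0x08)) { }       // wait for DRQ (a real driver
            for (int i = 0; i < 512; ++i)           // would also check BSY/ERR)
                *dst++ = CF[DATA];
        }
    }

Only the first access after a doze eats the 20ms wake-up, so as long as a whole scene streams inside one (or a few) of these commands, the card shouldn't fall asleep mid-frame.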

Of course, then I needed to check sector size versus data size. My video frames, which interleave audio, are 7686 bytes in size. Can you see what's coming? That doesn't line up nicely with 512-byte sectors - in fact, it's 6 bytes over the size of 15 sectors (15 x 512 = 7680). That's pretty annoying. I haven't done this yet, but I'm going to try to hide those six extra bytes by duplicating bytes in the color table. ;)
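(A quick sanity check of that math:)

    #include <cstdio>

    int main()
    {
        const int frame = 7686, sector = 512;
        printf("%d full sectors = %d bytes; %d bytes left over\n",
               frame / sector, (frame / sector) * sector, frame % sector);
        // prints: 15 full sectors = 7680 bytes; 6 bytes left over
    }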

I then got obsessed with using memory mode rather than IDE mode. I bought some CF-to-IDE adapters and worked out how to mod them so they don't come up in IDE mode. I then tried to map it all onto the RAM chip in a Mini Memory cartridge, so it would appear in that address space.

My first attempt came up but didn't return anything... I realized that I'd attached to the RAM chip's (disconnected) CS. I fixed that, and now the console won't even start up with the cartridge inserted. Well, that figures.

At this point, troubleshooting with the Mini Memory in the way feels very complicated... so I'm going to just lay out a PCB with an UberGROM (so I can still use EasyBug) and let the CF interface have the entire ROM space - that will give me a lot of flexibility in how I talk to it. It will also let me add circuitry so that the card responds only to even addresses, so I don't need to worry about whether double accesses break it (something I couldn't guarantee with the Mini Memory). The downside: it will cost me time, maybe a lot, to get the PCBs made. Chicago's in two weeks? Yeah, that's not going to work.

I didn't need to present anyway. ;) I guess I can announce it and show a mockup on the PC. Nobody really did much with it at FestWest.


Saturday, September 23, 2017

Why So Serious?

I haven't posted here for a while... the main reason for that is I've just been busy on non-code-related projects, so I didn't have much to say here. Sure, I could post the odd rant like I used to, but let's face it, those were for me. And I only posted the stuff that was tame enough to not scare the children.

I've been programming for a long time. By long I mean "damn long". It's true I'm not one of those old mainframe-y types (stop fixing my spelling, Blogger. If I don't want to use a hyphen, that's my right!)... but I started with TI BASIC back in 1983, and barely a day has gone by that I haven't done SOME kind of code.

To me, of course, 1983 must have only been like 17 years ago, cause time stopped at 2000, didn't it? But when you do the actual math and realize it's actually been 34 years, then yeah, I'm going to start accepting the scale of "damn long".

No, I'm serious. It's a weird phenomenon. To me (and I've seen posts from others with a similar viewpoint), 30 years ago still means 1970. So it's weird. Time stopped in 2000.

There /is/ a thought that's sticking in my craw and needs to be written out, but in truth I haven't figured out how to say it nicely enough yet... let alone what solutions to offer. So there's no article there.

But... you can expect this blog to slowly re-awaken. The non-computer stuff got done, and my next task is the development of the Dragon's Lair cartridge for the TI-99/4A. This is a full-motion video and audio production of the classic laserdisc game, running on authentic hardware. The proofs of concept were good; now I need to actually build the hardware. I'm hoping to have something by Christmas, but watch this space, since it's as good a space as any. (If it'll stop being picky about my spelling...)

For after that, I have a huge list of things I want/need/obsess to do, and rather than be picky, I decided to crowdsource the decision. I considered Kickstarter and such, but if you give me money, then I'm obligated. So instead I went with SurveyMonkey. If you have 3 minutes (the average completion time), do me a favor and go click on what looks most interesting to you. I think I get up to 100 responses for free or something.

https://www.surveymonkey.com/r/8CJDR5C

Thanks for that! Or if you don't, thanks for nothing! Either way you get some kind of thanks!



Wednesday, April 5, 2017

Sleeps Don't Scale

We've all done it - something races something else, we don't care if we lose the race, so we just throw a sleep in there and call it done for the day.

Unfortunately, as a system grows, all those buried sleeps start adding up against you. A large complex system simply cannot sustain sleeps, and there are three main reasons why not.

First: a sleep is usually a workaround for a race condition. As your system grows more complicated, the timing of that race condition will change - usually the window grows larger. In most cases, however, the duration of the sleep itself stays the same. A 10ms sleep that worked fine when you only had 5 threads is suddenly right on the edge when you have 20. It may break every time when you have a hundred (eek! But you know it happens.) Good luck finding the sleep that is suddenly causing your issues - you forgot about it months ago, because everything was working fine.

Second, and strongly related to the first point: the sleeps will destabilize your code base. Little races that you didn't even know about will come up, because some other function that you had forgotten about and assumed was quick suddenly sits around for a while, giving your new code a chance to run - and then preempting it in the middle. You know, right when you had sort of assumed your main loop was idle.

Finally, they kill performance. As your system scales up, all those sleeps start to add up. Take a simple web service, for instance. It receives a message on a socket, passes it to a waiting worker, and gets the response back. Now imagine you had some dumb little race condition and threw in a 10ms sleep. When you're handling 1 request a second, who cares about a 10ms sleep? Nobody! But what happens when your service gets popular and needs to process 1000 packets a second? It's impossible - you'd need at least 10 seconds of sleep time to process each second's worth of traffic.

So what do you do? First off, you don't let them in there in the first place.

Oh sure, there are cases where you can let it slide. A standalone task running on its own thread with no interdependencies with the rest of your mainline code, sure, let it sleep.

Sleep might also be helpful for releasing your timeslice when you're done working, but a timer or signal would be better, since you'll get more predictable wakeup times. For instance, if you want to process once every 10 milliseconds, you could do your task, then sleep for 10ms, then wake again. That will work. But if your work takes 1ms, the actual cycle time of that loop becomes 11ms. If you use a timer, you can absorb that work time within your 10ms cycle with minimal extra effort.
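A minimal sketch of the difference, in garden-variety C++ for illustration: sleeping for a fixed duration drifts by the work time every cycle, while sleeping until an absolute deadline does not:

    #include <chrono>
    #include <thread>

    void doWork() { /* pretend this takes ~1ms */ }

    void fixedRateLoop()
    {
        using namespace std::chrono;
        auto next = steady_clock::now();
        for (;;) {
            doWork();
            next += milliseconds(10);             // absolute deadline, so the
            std::this_thread::sleep_until(next);  // 1ms of work doesn't stretch
        }                                         // the 10ms cycle to 11ms
    }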

But in your mainline code? Anything that processes on behalf of another system? Really bad idea. If you are sleeping just because you need to wait for something else to finish, you are far better off finding out what that other thing is and synchronizing with it than blindly sleeping. For long sequential tasks, a state machine approach might be better: each cycle, you check whether it's time to work - if yes, do the work; if no, don't. This lets you interleave the operation of multiple clients through that state machine, rather than blocking on the completion of each individual task.
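Here's a sketch of that state-machine shape, with hypothetical names: each client records when it's next due, and one loop services whoever is ready, so nobody ever blocks:

    #include <chrono>
    #include <vector>

    struct Client {
        enum class State { Waiting, Working, Done };
        State state = State::Waiting;
        std::chrono::steady_clock::time_point due;  // when it's time to work
    };

    void pump(std::vector<Client> &clients)
    {
        auto now = std::chrono::steady_clock::now();
        for (auto &c : clients) {
            if (c.state == Client::State::Waiting && now >= c.due) {
                c.state = Client::State::Working;
                // ...do one slice of this client's work, then mark it Done
                // or set a new `due` and return it to Waiting...
            }
        }
    }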

And I'm bored. No punchline today. ;)

Monday, February 13, 2017

Object Oriented Programming - How We Got Here

While reading "The Mythical Man-Month" (which I didn't know was about programming), I was struck by the number of valid points made by this book - written in 1975 to help manage software projects - that are still ignored or blatantly flouted today.

I have a bigger project where I hope to distill the most important points I see from that. Later.

But the book I picked up is a later edition with a 1986 addendum titled "No Silver Bullet". In that essay is a section entitled "Object-Oriented Programming - Will a Brass Bullet Do?", and in that section is a single paragraph that enlightened me entirely as to how we got where we are with Object-Oriented Programming.

I'd like to comment on that paragraph.

Now, this was written in the infancy of the very concept of Object-Oriented Programming, and it muses about why the concept had not yet caught on as well as the author thought it should. So he attempts to describe some of the goals of Object-Oriented Programming. And it's in this one paragraph summarizing "one view" that I can see the germs of thought that have today mutated into rampaging plagues across most of software development. Seriously, I used to wonder how we got here.

One view of object-oriented programming is that it is a discipline that enforces modularity and clean interfaces. A second view emphasizes encapsulation, the fact that one cannot see, much less design, the inner structure of the pieces. Another view emphasizes inheritance, with its concomitant hierarchical structure of classes, with virtual functions. Yet another view emphasizes strong abstract data-typing, with its assurance that a particular data-type will be manipulated only by operations proper to it.

Every one of those features is actually pretty good in its original intent - that is, when it is used where it makes sense. The problem is that in many programs these guidelines have been mutated into absolute laws. You absolutely may not access data inside another class. You don't need to see how a class was written, let alone have the right to modify it. Everything is inherited from something else - whether it makes any sense or not (the number of times I've had a basic data type with an inheritance chain six or more classes deep is no longer amusing to me, but rather depressing). And I've literally worked on a project where I was not allowed to store public data in a central database because the database, which already existed in the software, didn't support strong data typing. That was the reason.

The point of Object Oriented Programming was to make it faster and easier to develop pieces of software and bring those pieces together.

Modularity exists so that a component can be developed and tested in isolation. It makes no sense whatsoever to make a class modular if you still need other classes to make it work. That's not modular anymore, and you should probably consider whether those classes should be merged into one object rather than left an incestuous mess. And for what it's worth, bool is not a modular class. Don't wrap bool.

Encapsulation is a tricky one to grasp. It's stated so plainly - one cannot see the inner structure of the pieces. But good encapsulation requires two things: a good design, and enough runtime to prove that the design actually is good. If you enforce encapsulation to the point of "nobody looks at the code and therefore nobody can change the code" from day one, all that will happen is that you end up with workarounds for missing, obtuse, or broken functionality. Worse, you'll probably try to code for every conceivable case, most of which aren't what people actually want to use, in hopes that no changes will be needed. The project will be more complicated and less stable. I've seen people enforce this rule to the point where they do it with their own objects. Encapsulation is for stable code, not development code. And you don't need to encapsulate bool. Don't wrap bool.

Inheritance is one of the most powerful features of Object-Oriented Programming and, frankly, one of the few features I actually really like. But you inherit where it makes sense. In most cases your inheritance chain should not be any more complicated than the example in most textbooks -- a base class extended to one level. In rare cases you may need two levels for certain objects (but certainly not all of them), and in equally rare cases it may make sense to use multiple inheritance (but again, certainly not everywhere). Good planning goes a long way here. Going nuts with inheritance leads to complicated, incestuous code that is difficult to debug, difficult to modify (without breaking something else), difficult to implement, and difficult to document. It also performs poorly in many cases, and even where it doesn't, it makes the code's behavior harder to predict. You don't need to start with basic classes like a wrapper around bool and inherit from there. Don't wrap bool.

Strong abstract data-typing was meant to get away from the admittedly sloppy practice of casting objects in C and hoping you got it right. This feature alone is a good reason to port C code to C++, even if nothing else changes (you'll be surprised where you screwed up but it worked anyway ;) ). But it doesn't mean you need to wrap every type of data you use in a custom object just so the data-typing will protect your function calls. (In fact, passing different types of data around is often a better job for classes sharing a common base class and utilizing inheritance...) Simply put, if you have several true-or-false items, you don't need to wrap bool in different classes to make sure you pass the right kind of bool to the right function. A bool is a bool. Don't wrap bool.
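A tiny illustration of that porting payoff - hypothetical types, but the mechanics are real. C accepts the implicit void* conversion silently; C++ rejects it and forces the cast (and therefore the potential screwup) into view:

    struct Packet { int length; unsigned char body[64]; };

    void handle(void *p)
    {
        // Packet *pkt = p;                      // legal C; a hard error in C++
        Packet *pkt = static_cast<Packet *>(p);  // C++ makes you say it out loud,
        (void)pkt;                               // so a wrong cast stands out
    }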

That's all I really wanted to say. I learned a bit about when modern programming missed that left turn at Albuquerque. It was roughly thirty-one years ago. We have GPS now; let's figure out where North is and start getting back on track.