Monday, October 2, 2017

Honestly it was initiated a long time ago - I have been building the tools for this project literally for decades, and got the license six months ago (that's a badness on my part too, but it's not too late yet!)
I posted the proof of concept on YouTube back in May 2015: https://www.youtube.com/watch?v=iOFPusM2dtM
I didn't have a lot of free time to work on it since then, but since the software was proven feasible (and all the video encoded and spliced out), I worked loosely on getting the license. I finally got ahold of someone at ReadySoft who okayed it, and I paid for the license, giving me a year to get the thing done and out there.
Of course, in typical me fashion, I worked on other things. Sigh.
But this is now top priority with six months to go, and the last unknown is the hardware.
I actually thought I had this more or less nailed down... I found some nice 128MB flash chips in China, and they're still available. With that, some voltage conversion, and a dumb and simple banking scheme, I should have enough memory to store the 16 thousand 8k banks that I'll need. And this is still true! But the bummer with this scheme is that I've never really known how I was going to program the damn chips.
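For the curious, here's a minimal sketch of what I mean by a dumb and simple banking scheme - the latch addresses and names here are hypothetical, not the real design. 128MB divided into 8K banks works out to 16,384 banks, so the bank latch needs 14 bits:

#include <cstdint>

// Hypothetical memory-mapped latch ports and an 8K read window.
volatile uint8_t *const LATCH_LO = reinterpret_cast<uint8_t*>(0x5FF0);
volatile uint8_t *const LATCH_HI = reinterpret_cast<uint8_t*>(0x5FF2);
volatile uint8_t *const WINDOW   = reinterpret_cast<uint8_t*>(0x6000);

// Select one of 16,384 banks: 128MB / 8K = 2^27 / 2^13 = 2^14 banks.
void select_bank(uint16_t bank) {
    *LATCH_LO = bank & 0xFF;         // low 8 bits of the bank number
    *LATCH_HI = (bank >> 8) & 0x3F;  // high 6 bits (14 bits total)
}

// Read one byte from a flat 128MB offset through the banked window.
uint8_t read_flash(uint32_t flat) {
    select_bank(flat >> 13);         // which 8K bank
    return WINDOW[flat & 0x1FFF];    // offset within the bank
}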
As I sat down and started sketching out the board, I began considering alternatives again, and I found myself wondering "why not compact flash?" It's 5v compatible, comes in huge sizes for low-ish cost (better than the flash chips anyway), and is easy to program.
I spent some time studying the datasheets and couldn't see a why not. The worst case minimum cycle time at the slowest speed is 600ns. The TI memory cycle is 667ns. So why not? I embarked to find out.
There are a few why nots... one is that the card automatically goes to sleep, and is rated to take 20ms to wake up for a read. That's a pretty long time, and my video playback needs the fastest cycle time possible. But if I can wake it up for a scene and keep it awake until the scene ends, then that should be okay. I can read multiple sectors in one command. Feasible!
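To make that concrete, here's a rough sketch of the read flow I'm picturing - not working TI code. The base address and the 8-bit transfer mode are assumptions, and the register offsets are just the standard ATA task file; where those actually land in memory depends on the adapter wiring:

#include <cstdint>

// Hypothetical base address where the card's task file shows up.
volatile uint8_t *const CF = reinterpret_cast<uint8_t*>(0x6000);

// Standard ATA task-file register offsets and status bits.
enum : uint8_t { DATA = 0, COUNT = 2, LBA0 = 3, LBA1 = 4, LBA2 = 5,
                 DRIVE = 6, CMD = 7, STATUS = 7 };
enum : uint8_t { BSY = 0x80, DRQ = 0x08 };

// Read `count` 512-byte sectors starting at `lba` with one command.
void read_sectors(uint32_t lba, uint8_t count, uint8_t *dst) {
    while (CF[STATUS] & BSY) {}          // first poll absorbs the ~20ms wake-up
    CF[COUNT] = count;                   // multiple sectors per command
    CF[LBA0]  = lba & 0xFF;
    CF[LBA1]  = (lba >> 8) & 0xFF;
    CF[LBA2]  = (lba >> 16) & 0xFF;
    CF[DRIVE] = 0xE0 | ((lba >> 24) & 0x0F);  // LBA mode, device 0
    CF[CMD]   = 0x20;                    // ATA READ SECTOR(S)
    for (unsigned s = 0; s < count; ++s) {
        while ((CF[STATUS] & (BSY | DRQ)) != DRQ) {}  // wait for data ready
        for (int i = 0; i < 512; ++i)
            *dst++ = CF[DATA];           // assumes 8-bit transfer mode is set
    }
}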
Of course, then I needed to check sector size versus data size. My video frames, which interleave audio, are 7686 bytes in size. Can you see what's coming? That doesn't line up nicely with 512 byte sectors. In fact, it's 6 bytes over the size of 15 sectors. That's pretty annoying. I haven't done this yet, but I'm going to try and hide those six missing bytes by duplicating bytes in the color table. ;)
I then got obsessed with using memory mode, and not IDE mode. I bought some CF to IDE adapters and worked out how to mod them so they don't come up in IDE mode. I then tried wiring it all down onto the RAM chip on the mini-memory cartridge, mapping the card into that space.
My first attempt came up but didn't return anything... I realized that I'd attached to the RAM chip's (disconnected) CS. I fixed that, and now the console won't even start up with it inserted. Well, that figures.
At this point, troubleshooting with the MiniMemory in the way feels very complicated... so I'm going to just lay out a PCB with an uberGROM (so I can still use EasyBug) and let the CF interface have the entire ROM space - that will give me a lot of flexibility in how I talk to it. It will also let me add the circuitry so that it responds only to even addresses, so that I don't need to worry about whether double accesses break it (something I couldn't guarantee with MiniMemory). Downside, it will cost me time, maybe a lot, to get the PCBs made. Chicago's in two weeks? Yeah, that's not going to work.
I didn't need to present anyway. ;) I guess I can announce it and show a mockup on the PC. Nobody really did much with it at FestWest.
Saturday, September 23, 2017
Why So Serious?
I haven't posted here for a while... the main reason for that is I've just been busy on non-code-related projects, so I didn't have much to say here. Sure, I could post the odd rant like I used to, but let's face it, those were for me. And I only posted the stuff that was tame enough to not scare the children.
I've been programming for a long time. By long I mean "damn long". It's true I'm not one of those old mainframe-y types (stop fixing my spelling Blogger. If I don't want to use a hyphen, that's my right!)... but I started with TI BASIC back in 1983 and barely a day has gone by that I haven't done SOME kind of code.
To me, of course, 1983 must have only been like 17 years ago, cause time stopped at 2000, didn't it? But when you do the actual math and realize it's actually been 34 years, then yeah, I'm going to start accepting the scale of "damn long".
No, I'm serious. It's a weird phenomenon. To me (and I've seen posts from others with a similar viewpoint), 30 years ago still means 1970. So it's weird. Time stopped in 2000.
There /is/ a thought that's sticking in my craw and needs to be written out, but in truth I haven't figured out how to say it nicely enough yet... let alone what solutions to offer. So there's no article there.
But... you can expect this blog to slowly re-awaken. The non-computer stuff got done and my next task is the development of the Dragon's Lair cartridge for the TI-99/4A. This is a full-motion video and audio production of the classic laserdisc, running on authentic hardware. The proofs of concept were good, now I need to actually build the hardware. Hoping to have something by Christmas, but watch this space, since it's as good a space as any. (If it'll stop being picky on my spelling...)
For after that, I have a huge list of things I want/need/obsess to do, and rather than be picky, I decided to crowdsource the decision. I considered kickstarter and such, but if you give me money then I'm obligated. So instead I went for SurveyMonkey. If you have 3 minutes (the average completion time), do me a favor and go click what looks most interesting to you. I think I get up to 100 responses for free or something.
https://www.surveymonkey.com/r/8CJDR5C
Thanks for that! Or if you don't, thanks for nothing! Either way you get some kind of thanks!
Wednesday, April 5, 2017
Sleeps Don't Scale
We've all done it - something races something else, we don't care if we lose the race, so we just throw a sleep in there and call it done for the day.
Unfortunately, as a system grows, all those buried sleeps start adding up against you. A large complex system simply cannot sustain sleeps, and there are three main reasons why not.
First: a sleep is usually a workaround for a race condition. As your system grows more complicated, the timing of that race condition will change - usually the window grows larger. However, in most cases, the duration of the sleep itself remains the same. A 10ms sleep that worked fine when you only had 5 threads is suddenly right on the edge when you have 20. It may break every time when you have a hundred (eek! But you know it happens.) Good luck finding the sleep that is suddenly causing your issues - you forgot about it months ago, cause everything was working fine.
Secondly, and this is strongly related to the first point: the sleeps will destabilize your code base. Little races that you didn't even know about will come up, because some other function that you had forgotten about and assumed was quick suddenly sits around for a while, giving your new code a chance to run - and then it wakes up and pre-empts your new code right in the middle. You know, the code where you sort of assumed your main loop was idle.
Finally, they kill performance. As your system scales up, all those sleeps start to add up. Take the example of a simple web service, for instance. It receives a message on a socket, passes it to a waiting worker, and gets the response back. Now imagine you had some dumb little race condition and threw in a 10ms sleep. When you're receiving 1 response a second, who cares about a 10ms sleep? Nobody! What happens when your service gets popular and needs to process 1000 packets a second? It's impossible - you need at least 10 seconds of sleep time to process that one second's worth of traffic.
So what do you do? First off, you don't let them in there in the first place.
Oh sure, there are cases where you can let it slide. A standalone task running on its own thread with no interdependencies with the rest of your mainline code, sure, let it sleep.
Sleep might also be helpful for releasing your timeslice when you're done working, but a timer or signal would be better, since you will get more predictable wakeup times. For instance, if you want to process once every 10 milliseconds, you could do your task, then sleep for 10ms, then wake again. That will work. But if your work takes 1ms, then the actual cycle time of that loop becomes 11ms. If you use a timer, you could include that work time in your 10ms cycle with minimal extra effort.
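Here's a quick sketch of the difference in plain C++ - do_work() is just a stand-in for whatever your task is:

#include <chrono>
#include <thread>

using namespace std::chrono;

void do_work() { /* pretend this takes about 1ms */ }

// Sleep-for version: the real period is work time + 10ms, so the
// loop drifts to ~11ms cycles and slips further as the work grows.
void drifting_loop() {
    for (;;) {
        do_work();
        std::this_thread::sleep_for(milliseconds(10));
    }
}

// Timer-style version: sleeping until an absolute deadline folds
// the work time into the 10ms period, so the cadence stays fixed.
void fixed_rate_loop() {
    auto next = steady_clock::now();
    for (;;) {
        next += milliseconds(10);
        do_work();
        std::this_thread::sleep_until(next);
    }
}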
But in your mainline code? Anything that is processing on behalf of another system? Really bad idea. If you are sleeping just because you need to wait for something else to be done, you are far better off finding out what that other thing is and synchronizing with it than blindly sleeping. In long sequential tasks, a state machine approach might be better. Each cycle you can check if it's time to work - if yes, do the work. If no, don't. This allows you to interleave the operation of multiple clients through that state machine, rather than blocking on the completion of each individual task.
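A bare-bones sketch of that state machine idea - the states and the 10ms figure are invented for illustration:

#include <chrono>
#include <vector>

using Clock = std::chrono::steady_clock;

enum class Step { Idle, Working, Waiting, Done };

struct Client {
    Step step = Step::Idle;
    Clock::time_point ready_at{};  // when this client may act again
};

// One pass over all clients: each advances only if it's ready, so a
// slow client never blocks the others the way a sleep would.
void tick(std::vector<Client>& clients) {
    const auto now = Clock::now();
    for (auto& c : clients) {
        if (now < c.ready_at) continue;  // not time yet: skip it, don't sleep
        switch (c.step) {
        case Step::Idle:
            c.step = Step::Working;
            break;
        case Step::Working:
            // kick off the long operation, then wait without blocking
            c.ready_at = now + std::chrono::milliseconds(10);
            c.step = Step::Waiting;
            break;
        case Step::Waiting:
            c.step = Step::Done;         // the operation finished on a later pass
            break;
        case Step::Done:
            break;
        }
    }
}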
And I'm bored. No punchline today. ;)
Monday, February 13, 2017
Object Oriented Programming - How We Got Here
While reading "The Mythical Man Month" (which I didn't know was about programming), I was struck by the number of valid points that this book, written to help manage software products in 1975 -- which are still ignored or blatantly flaunted today.
I have a bigger project where I hope to distill the most important points I see from that. Later.
But the book I picked up is a later edition with a 1986 addendum titled "No Silver Bullet". And in this article is a section entitled "Object-Oriented Programming - Will a Brass Bullet Do?" And in that section, is a single paragraph that enlightened me entirely as to how we got where we are with Object Oriented Programming.
I'd like to comment on that paragraph.
Now this was written in the infancy of the very concept of Object Oriented Programming, and it's musing about why the concept had not yet caught on as well as the author thought it should. And so he attempts to describe some of the goals of Object Oriented Programming. And it's in this one paragraph summarizing "one view" that I can see the germs of thought that have today mutated into rampaging plagues across most of software development. Seriously, I used to wonder how we got here.
One view of object-oriented programming is that it is a discipline that enforces modularity and clean interfaces. A second view emphasizes encapsulation, the fact that one cannot see, much less design, the inner structure of the pieces. Another view emphasizes inheritance, with its concomitant hierarchical structure of classes, with virtual functions. Yet another view emphasizes strong abstract data-typing, with its assurance that a particular data-type will be manipulated only by operations proper to it.
Every one of those features is actually pretty good in the original intent - that is - that it is used where it makes sense. The problem is that these guidelines have been mutated in many programs into absolute laws. You absolutely may not access data inside another class. You don't need to see how a class was written, let alone have the right to modify it. Everything is inherited from something else - whether it makes any sense or not (the number of times I've had a basic data type with an inheritance chain six or more classes deep is no longer amusing to me, but rather depressing). And I've literally worked on a project where I was not allowed to store public data in a central database because the database, which existed in the software already, didn't support strong data typing. That was the reason.
The point of Object Oriented Programming was to make it faster and easier to develop pieces of software and bring those pieces together.
Modularity exists so that a component can be developed and tested in isolation. It makes no sense whatever to make a class modular if you still need other classes to make it work. That's not modular anymore, and you probably should consider whether those should be merged into one object, rather than an incestuous mess. And for what it's worth, bool is not a modular class. Don't wrap bool.
Encapsulation is a tricky one to grasp. It's stated so plainly - one cannot see the inner structure of the pieces. But good encapsulation requires two things: a good design and enough runtime to prove that the design actually is good. If you enforce encapsulation to the point of "nobody looks at the code and therefore nobody can change the code" from day one, all that will happen is you will end up with workarounds for missing, obtuse, or broken functionality. Worse, you'll probably try to code for every conceivable case, most of which aren't what people actually want to use, in hopes no changes will be needed. The project will be more complicated and less stable. I've seen people enforce this rule to the point where they are doing this with their own objects. Encapsulation is for stable code, not development code. And you don't need to encapsulate bool. Don't wrap bool.
Inheritance is one of the most powerful features of Object-Oriented Programming and frankly, one of the few features I actually really like. But you inherit where it makes sense. In most cases your inheritance chain should not be any more complicated than the example in most text books -- that being a base class extended to one level. In rare cases you may need two levels for certain objects (but certainly not all of them) and in equally rare cases it may make sense to have multiple inheritance (but certainly not all of them). Good planning goes a long way here. Going nuts with inheritance leads to complicated, incestuous code that is difficult to debug, difficult to modify (without breaking something else), difficult to implement and difficult to document. It's also poorly performing in many cases and in cases where it's not, harder to predict what the code will do. You don't need to start with basic classes like a wrapper around bool and inherit from there. Don't wrap bool.
Strong Abstract Data-Typing was meant to get away from the admittedly sloppy practice of casting objects in C and hoping you got it right. This feature alone is a good reason to port C code to C++, even if nothing else changes (you'll be surprised where you screwed up but it worked anyway ;) ). But it doesn't mean you need to wrap every type of data you want to use in a custom object just so the data-typing will protect your function calls. (In fact, in many cases passing different types of data around is a better job for classes with a common base class and utilizing inheritance...). But simply put, if you have several true or false items, you don't need to wrap bool in different classes to make sure you pass the right kind of bool to the right function. Bool is a bool. Don't wrap bool.
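If you're curious, the classic illustration of what that stronger typing catches - nothing project-specific here, just the textbook C-versus-C++ difference:

// In C the commented line compiles silently, because void* converts
// to any pointer type; C++ makes it a hard error until you cast.
#include <cstdlib>

int main() {
    // int *p = malloc(10 * sizeof(int));   // legal C, compile error in C++
    int *p = static_cast<int*>(std::malloc(10 * sizeof(int)));
    std::free(p);
    return 0;
}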
That's all I really wanted to say. I learned a bit about when modern programming missed the left turn in Albuquerque. It was roughly thirty-one years ago. We have GPS now, let's figure out where North is and start getting back on track.
Monday, December 19, 2016
C++ "MetaProgramming" and Why C++ Should Die
I saw a pretty awesome accomplishment on Twitter today - this fellow wrote a little raytracer that does all its calculations at compile time.
Pretty impressive, even if it takes a while. It's all done with templates and "metaprogramming", which is a fancy term used to excuse the complexity of programming both the computer AND the compiler.
No, I don't like it. And it's a great example of why C++ is done and needs to die.
I've loved C++ for a long time. I've encouraged friends learning in University without realizing the true pain they were experiencing. You see, I've been blissfully ignorant of how far things had gone for a long time. For the most part I ignored C++11 and C++14, until a recent project forced me into the deep end, and I got the O'Reilly book out and started reading.
I was pretty horrified, in general, but we'll focus on this particular aspect.
So the article above is about this fellow learning these new concepts with an ambitious and fairly impressive task, inspired more or less by this example:
template <int base, int power>
struct Pow {
    const static int result = Pow<base, power-1>::result * base;
};

template <int base>
struct Pow<base, 0> {
    const static int result = 1;
};

int main(int argc, char *argv[]) {
    return Pow<5, 2>::result;
}
If we look into the assembly file produced we just see the constant 25 being written to a register.
mov eax, 25
Your formatting is crap, Blogger.
Anyway, so what happens there is that main() invokes a template (Pow<5,2>), which generates a recursive chain of structures, each containing the next power, until the power is zero and the specialized template is invoked. The compiler runs through all this, and the final result is that single assembly instruction loading the single const value "25" (5^2 = 25).
Fans of this style of programming point at the amazing efficiency of this resulting code as a major win. It's so much faster, they will tell you, than running the code the old fashioned way. But I call bullshit. Because in "the old way" we wouldn't have done that anyway. Not if performance mattered. Do you know what we'd do? Hell, this goes all the way back to "C", not even one plus!
const static int result = 25;
"Oh! Oh! Oh!" cries the peanut gallery. "But the computer didn't calculate that for you!!"
Of course it did. We did it offline. Or we did it in a separate program. Or we used a calculator. Or we did it at startup and cached the value. Or in the worst case, maybe we used a code generator. (Deliberately ignoring the fact that this example is very simple and didn't need code at all).
But! Isn't using a code generator for the most complex cases exactly what we did here? It's just built into the language now, isn't it?
Well, yes and no.
Yes, you essentially used a code generator to calculate the problem and reduce the code to the important single constant. But this is about the most complicated, difficult to debug way of doing it that I could have imagined.
First of all, you just littered your namespace with three different Pow structures. You didn't need the other two, but the compiler did, and they exist. It was a lot more expensive for the compiler to instantiate all those structures and then decide what was really needed than just about any other technique would have been, which means your compile time goes up (substantially, in fact, depending on how many Pow's you need and how deep they have to go!) And suppose you typoed in the base,0 template? Well, then the error output is going to reflect the entire chain. In this case, it's a short chain of just three entities, and the error is a single line per entity, since it's very simple.
$ gcc test.cpp -otest
test.cpp:10: error: `into' does not name a type
test.cpp: In instantiation of `Pow<5, 1>':
test.cpp:4:   instantiated from `Pow<5, 2>'
test.cpp:15:   instantiated from here
test.cpp:4: error: `result' is not a member of `Pow<5, 0>'
But real life templates tend not to be so simple. And because of the nature of the templates, you can trigger errors simply by specifying the wrong type of argument to a parameter (for instance, forgetting to std::move can break some templates). The result can be pages of template chain errors, making troubleshooting difficult. And indeed, that is what our experimenter found:
This was the first time I had tinkered with metaprogramming so it was pretty hard at first, basically once one template evaluation fails you get a huge chain of thousands of failures.
The entire direction of the language's development seems to have shifted towards programming the compiler to generate constants for you at compile time. Precomputing constants is a good thing and we've often done it in the past, but it's not ALWAYS the right answer, and today it's being taken to ridiculous extremes. Some of the things I've seen attempted feel on the same level of complexity, to the point that the compile-time ray tracer isn't even that outrageous to me. And I don't believe that we should be doing that.
Why not?
Well, this impacts you in several ways:
-compile time is longer. How many of those complicated template chains result in the single constant that the example above shows? You've built the code hundreds of times and never changed that value, have you? Make it a damned constant and save the time.
-typos in the code are MUCH harder to understand. If you've done std or boost template programming, you already know what I mean. If you haven't yet, you will. If you're a god who never makes mistakes, go back to cartoon land. This costs time - a simple typo that may be as simple as a missing modifier goes from a 10 second change to a minute or more, just to determine what line the error actually occurred on. I know people who switch to a different, non-production compiler for testing, just because the error messages are less verbose (meaning an entire compile phase is wasted). This time adds up substantially.
-learning time is longer. If you're using your own complex template chains (in addition to std, boost, or other common ones), then you have a larger and deeper codebase for a new developer to have to come to terms with -- and the two issues above are not going to help with that. Since most developers on most projects are thrown in with little more than an incomplete wiki and a promise to get around to guidance, you'd think that simple, easy to follow code would have some value.
I'm reminded of an old quote by DadHacker (http://www.dadhacker.com/blog/):
The future of computing is its own past, mashed-up and remixed by young'uns who have yet to fear the dark corners, the places where us old farts went in with similar bushy-tailed attitudes and came out with ashen-faced, eyes barn-door wide and with fifty new words for "pucker." Heed us. The stove is hot if you touch it. The stove is not only hot, it will incinerate your soul. At some point you will want to make pancakes or wash dishes for a living rather than run another build or merge another check-in or fix another bug...
-Dadhacker