
> Test what you write as quickly as you can.

Until you run into that one problem where you can't test between the small steps, because you need the whole thing to be up (to some degree) and working for any test to work.

Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls working to do anything sensible, and if any of the required parts fails, the whole thing fails.

Another example would be refactoring a complex library into something more manageable. If you keep working in small, testable steps you can move only along the gradient, bordered by "can execute" and "doesn't execute". So you'll be able to reach only a local extremum. Now if you're in the fog and don't know where to go, that's fine. And for most development this is exactly how it happens. But sometimes you can see that summit on the other side of that rift and you know you have to take a leap to get over there.

I spent the past 3 weeks doing exactly that, refactoring a code base. I knew exactly where I wanted to go, but it meant working for about a week on code that wouldn't even compile, let alone be testable, because everything was moving around and getting reorganized. Now I'm enjoying the fruits of that week: a much cleaner codebase that's easier to work with. I even managed to eliminate some voodoo code nobody knew the purpose of, except that it made things work and things broke if you touched it.

> - Hold it in your head! Or you won't have a clue what all your code, together, is doing.

Or, sometimes it's important to get it out of your head: take a week or two off and look at it again with a fresh mind and from a different angle. Often problems only seem hard because you're approaching them from that one angle, and you're so stuck on wanting to get it done that you don't see the better way.

Instead, you should write code in a way that makes it easy to get back into.

> - To get started, ask yourself, what is the simplest thing that could possibly work?

And then ask yourself: what would it take to make this simple thing break? Make things as simple as necessary, but not simpler.



> Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls working to do anything sensible, and if any of the required parts fails, the whole thing fails.

So start with something simpler. Start by making a kernel that can run /bin/true, that never reclaims memory, that only boots on whichever VM you're using for testing. You absolutely can start with a kernel that's simple enough to write in a week, maybe even a day or hour, and work up from there. See http://www.hokstad.com/compiler for a good example of doing something in small pieces that you might think had to be written all at once before you could test any of it.

> I spent the past 3 weeks doing exactly that, refactoring a code base. I knew exactly where I wanted to go, but it meant working for about a week on code that wouldn't even compile, let alone be testable, because everything was moving around and getting reorganized. Now I'm enjoying the fruits of that week: a much cleaner codebase that's easier to work with. I even managed to eliminate some voodoo code nobody knew the purpose of, except that it made things work and things broke if you touched it.

Which is great until you put it back together and it doesn't work. Then what do you do? I've literally watched this happen at a previous job, and been called in to help fix it. It was a painful and terrifying experience that I never want to go through again.

In my experience, with a little more thought you can do these things while keeping everything working at the intermediate stages. It might mean writing a bit more code: shims, adapters, and scaffolds that you know you're going to delete in a couple of weeks. But it's absolutely worth it.


If you haven't seen it, you really need to read this post.[1] Starting simple and test-driving your way to an algorithm is not necessarily bad. However, realize that you are turning the search for the optimal solution into a walk through a search space where you have to assume mostly forward progress at all times. Not a safe assumption. At all.

And, because I love the solution, here is a link to my sudoku solver.[2] I will confess some more tests along the way would have been wise, though I was blessed with a problem I could just try out quickly. (That is, running the entire program is already fast. I'm not sure of the value of making the tiny parts run faster.)

[1] http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...

[2] http://taeric.github.io/Sudoku.html


I've seen it, but it's just utterly alien to my experience. Partly the problems I encounter professionally don't look much like Sudoku; partly the things that matter in a large codebase are different from the things that matter in a small example. But mostly I think people realise when they're not getting anywhere, and if they don't, others will point it out. That's partly why TDD tends to be found in the same places as agile with daily standups: you get that daily outside view that stops you just spinning your wheels the way that blogger did.


I have seen exactly this style of thinking happen in a large code base. Some of it was my own, sadly.

The odd thing to me is that you say this style of problem doesn't happen in a large code base. To me, this style of problem just happens many times over in a large codebase. That is, large problems are just made up of smaller problems. Have I ever used the DLX algorithm? No. Do I appreciate that it is a good way to look at a problem you are working on? Definitely. I wish I had more time to consider the implications there.

More subtly in your post, to me, is the idea that with the right people the problems don't happen. This leads me to this lovely piece.[1]

[1] http://www.ckwop.me.uk/Meditation-driven-development.html


I think as you get more experienced, what is considered a "small step" changes as you're able to keep more context in your head (subconsciously of course). For the complex library example, that would be keeping the new vs the old architecture at the forefront of your mind instead of switching all thought to "const? maybe? what does that mean in this context?" and "how can I get rid of this reinterpret_cast?"

Absolutely agree with "Instead, you should write code in a way that makes it easy to get back into" though!


> Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls working to do anything sensible, and if any of the required parts fails, the whole thing fails.

How do you test a kernel? Run it virtualised? I think DragonflyBSD, at least, supports that.


Kernel debugging has always been kind of a dark art. What helps greatly is being able to tap into the CPU using hardware debugging (JTAG or similar).

Today you can also exploit the busmaster DMA capabilities of IEEE 1394 (FireWire) to peek into system memory.

However, the most widely used method is still pushing debug messages directly into a UART output register, bypassing any driver infrastructure, and hooking up a serial terminal on the other end.



