A professor of mine who worked at Bell Labs once made the same point. "In the old days we had to think a lot about how our punch card program worked because we'd only find out if it worked the next day. Nowadays you guys just throw crap at the wall and see what sticks. Find the middle ground."
Same with the former French department-store programmer who taught us Pascal in high school - punch cards shaped that generation. That mindset still exists in industries with physical processes, but "measure twice, cut once" is wisdom lost to the desktop generation - experimenting can replace some thinking... but we certainly overestimate how much.
I agree there's been an overall shift in styles, but instant-feedback aspects of programming also have a pretty long pedigree, in the form of Lisp's REPL.
That's one of the less obvious (to me, at least) benefits of test driven development: When you're writing out your unit test, you're forced to think about how the implementation is going to work.
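To make that concrete, here's a minimal sketch of writing the test before the code. The function name `slugify` and its behavior are hypothetical, chosen only for illustration - the point is that deciding the assertions up front forces the design decisions.

```python
# A sketch of test-first design: writing the test before any implementation
# forces interface and edge-case decisions. `slugify` is a hypothetical
# example function, not anything from the discussion above.

def test_slugify():
    # Choosing these assertions up front *is* the design work:
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"   # decision: trim whitespace
    assert slugify("") == ""                   # decision: empty in, empty out

# A minimal implementation written only after the test pinned the behavior:
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()
```

By the time the test compiles in your head, you've already answered most of the "how is this going to work" questions.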
I had the opposite reaction: I wonder if tests make it easier for you to fix code without forcing you to develop a mental model of it, assuming you're working in an unfamiliar codebase. That seems like something of a hidden drawback.
That may be possible, but it isn't inevitable. I use tests to validate that my mental model is correct. When I'm doing something greenfield, you'll see my tests are full of rather stupid-looking assertions of really basic stuff, and the reason for that is that about 5% of the time, my really basic, so-simple-it-couldn't-be-wrong assumption is wrong.
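The kind of "stupid-looking" assertions described above might look like this - each checks something so basic it couldn't be wrong, and occasionally one is. The details are illustrative, not from the comment:

```python
# Trivial-looking assertions that validate a mental model. Roughly 5% of
# the time, one of these "obviously true" beliefs turns out to be false.
import json

def test_basics():
    assert len([]) == 0
    assert {"a": 1} == {"a": 1}
    # Round-trips are a classic home of so-simple-it-couldn't-be-wrong bugs:
    assert json.loads(json.dumps({"n": 1})) == {"n": 1}
    # ...and here is a basic assumption that is actually wrong: JSON object
    # keys are always strings, so an int key does not survive the round-trip.
    assert json.loads(json.dumps({1: "x"})) == {"1": "x"}  # not {1: "x"}

test_basics()
```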
If you're building something that's going to be used as a foundation by lots of other things, those 5% errors add up really fast.
It does help in some sense, but not always. I've found that if I write out test cases as if I'm preparing a test scenario document for someone, in plain English, it works. If I have to open vim and write test cases directly, I seem to slip into hack mode and write out only the most trivial cases, which makes development painfully slow. Test document + thinking/visualization works better for me.
I've found this as well. Just blindly writing test cases doesn't work so well unless you've already understood the higher-level operation of what you're trying to build, and it obviously tends to slow down development.
I don't think in code. My mental model is the process, not the series of functions and objects that make up the process. When the process breaks, I'll dig around to see what code makes up that step of the process.
How do you find the middle ground? My hypothesis is that one can write the tests first and use them as a compass. If debugging proceeds monotonically, you have thought enough. If you fix the bug revealed by test r, but later, when you fix the bug revealed by test s, test r starts failing again, that's a clue that you didn't think enough.
What the clue means will depend on the order of the tests. If the tests were written in order of increasing code coverage, it is probably a clue that the algorithm needs more thought, but it could be a clue that one hasn't thought enough about test coverage and has one's tests in an unhelpful order.
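The compass idea above can be sketched as a tiny test runner that remembers which tests passed on the previous run and flags any that regress. The test names and helper functions are hypothetical, just to make the r/s scenario executable:

```python
# A minimal sketch of "tests as a compass": run tests in a fixed order and
# flag any test that passed earlier but fails now - a non-monotonic fix,
# i.e. a clue you didn't think enough. Names here are illustrative.

def run_suite(tests, previously_passing):
    """Run (name, fn) pairs in order; return passing names and regressions."""
    passing, regressions = set(), set()
    for name, test in tests:
        try:
            test()
            passing.add(name)
        except AssertionError:
            if name in previously_passing:
                regressions.add(name)
    return passing, regressions

def ok():
    pass

def broken():
    assert False

# First run: test r passes, test s reveals a bug.
first_passing, _ = run_suite([("r", ok), ("s", broken)], set())
# The "fix" for s breaks r: second run shows r regressing.
second_passing, regressions = run_suite([("r", broken), ("s", ok)], first_passing)
# regressions is now {"r"} - debugging did not proceed monotonically.
```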