> Meanwhile, Go isn't a platform, nor is it trying to be. Its designers (Rob Pike and Ken Thompson) already made the platform, first, a long time ago. It was called Unix.
This is great, and I think many of Go's advocates - including devs at Google - underplay it. I don't understand why, particularly given the pedigree of Pike and Thompson.
My approach to building large systems in Go is based around processing pipelines, and tries to be as UNIX-y as possible. The interface types in the io package particularly fit the everything-is-a-file model, where processes can do one thing well while having their inputs and outputs connected to pipes, files, sockets, named sockets, devices, etc.
In short, building complex systems with Go components has made me a better UNIX programmer, a level that I could never quite reach in C due to all the distractions of memory management and unsafety.
I have been playing around with Go on Windows. I really like it (especially the built-in concurrency types), but after reading these comments I feel like I would be better off trying it on UNIX (my Mac will do). It probably does not help that I am still somewhat lost on UNIX, but could you give an example of what a 'processing pipeline' approach would look like? That sounds very much like functional programming to me; is that correct? Any advice on tackling UNIX and Go at the same time would be much appreciated!
A well-engineered UNIX-y "pipeline" is not unlike an impure functional program, yes. Well-behaved processes share no state (i.e. they don't contend over the same files), and messages and data passed over UNIX pipes are immutable, much like data in a functional program.
For example: one system I implemented needs to take a few hundred very large CSV files every day, aggregate and sort them, perform some complex processing on them, and output a result in a very different format to many different output files. It's an extremely complex system, and each component is in Go, performing a specific task, e.g.:
* combining the many source files
* cleaning the source files
* performing some aggregation on the stream
* splitting the stream into many parts that other components can read from in parallel ('named pipes' in UNIX make this very nice)
* splitting the stream into many output files
etc. Every component is dumb and does one thing, but a single controlling program is responsible for handling the command line arguments that describe the overall outcome, and for setting up the stdin and stdout streams of all of the components to create the final result.
There's a beautiful simplicity to systems implemented like this, and it means you can take advantage of existing tools like grep, awk, sed, sort, cut, etc. to do a lot of the heavy lifting more reliably and quickly than you could probably implement yourself, while still coding the overall system at a reasonably high level of abstraction. Go doesn't lead to this approach directly, but it's very pleasant working with it as a citizen of this wider environment.
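And delegating to those existing tools from Go is straightforward. A minimal sketch (the `externalSort` helper is my own illustrative name, and it assumes sort(1) is on the PATH): the Go side only wires up the streams, and the battle-tested external tool does the work.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// externalSort hands its input to the system's sort(1) over stdin and
// returns whatever the tool writes to stdout.
func externalSort(input string) (string, error) {
	cmd := exec.Command("sort")
	cmd.Stdin = strings.NewReader(input)
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	sorted, err := externalSort("pear\nfig\nkiwi\n")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(sorted)
}
```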