Sapir-Whorf with Programming Languages (nklein.com)
47 points by gnosis on March 27, 2011 | 26 comments


I've long thought that, at least metaphorically, Sapir-Whorf fits computer languages like a glove. This observation is hardly new ("You can write COBOL in any language"), and it's not particularly useful, save as shorthand for describing, say, potential hires. In my experience, most programmers write code as in the language they first really used, regardless of the language they're actually using, so it's often useful to ask someone with years of Java but some Python (for instance) to write something in Python and note whether they've bothered to learn the idiom. Hardly determinative, and of course there are those who are capable of really picking up new computer languages, semantics included, just as there are those with a gift for human languages, but it's something I keep an eye on.


  > most programmers write code as in the language they first really used, regardless of the language they're actually using
I doubt that this is true. I can't quite remember, but I'm pretty sure the first language I ever used was Color Basic. I've forgotten quite a bit of it, and it has pretty much no influence on my programming.

The first real programming I did was with structured imperative languages (Game Maker was the first thing I used for anything real). The code I wrote bore almost no resemblance to Color Basic, and while I was using these languages, solutions to problems using the tools of structured imperative programming came to mind much more readily than solutions using the style of unstructured Basic.

Later, I learned Haskell and functional programming. For the first while, everything I wrote was like a puzzle; I had to think for a long time to do things the functional way. But after having used it enough, while I'm in Haskell, functional solutions come just as readily as imperative solutions do while I'm in those languages, perhaps even more readily.

Of course, people who are experienced with one language and inexperienced in another will try to apply the idioms from the first language in the second whether they're applicable or not, but programmers do not always write code as in the first language they really used.


I may not have made myself clear. I think it is largely true that programmers start out writing COBOL in language X; some rapidly progress past that, others don't. It's the latter group that raises red flags for me, since it signals a lack of intellectual curiosity.

Again, it's not dispositive or anything; sometimes programmers, particularly younger ones, just haven't seen the strengths of the idioms in their other languages.


Actually, you can construct a language in which some kinds of actions can't be expressed at all.

There are languages that can express only polynomial-time computations, languages that won't let you create deadlocks, etc.

http://lambda-the-ultimate.org/classic/message5863.html

http://edwinb.wordpress.com/2008/04/03/correct-by-constructi...
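
To make the "can't express it at all" idea concrete, here's a minimal Haskell sketch (my own toy illustration, not taken from the linked articles): a function whose type promises purity simply cannot perform I/O, so a whole class of actions is ruled out by construction.

  module Main where

  -- The type Int -> Int promises a pure computation; a definition that
  -- tries to perform IO here, e.g.
  --   double x = do { putStrLn "hi"; return (x * 2) }
  -- is rejected by the compiler as a type error.
  double :: Int -> Int
  double x = x * 2

  -- Effects are only allowed once they are declared in the type:
  doubleAndLog :: Int -> IO Int
  doubleAndLog x = do
    putStrLn ("doubling " ++ show x)
    return (x * 2)

  main :: IO ()
  main = doubleAndLog (double 3) >>= print

The examples above go much further (polynomial-time bounds, deadlock freedom), but the mechanism is similar: the language makes the unwanted program unwritable rather than merely discouraged.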


> most programmers write code as in the language they first really used

That is why a lot of people who started with C, Java, and Python have difficulty writing Haskell or Erlang: they cannot easily use the same paradigms as before. They want to, but the language syntax won't let them, or the code produced is so nasty that it becomes obvious they are doing something wrong.

For example, initially I was disgusted with Erlang's syntax, but then I realized that the exotic syntax actually helps my brain register that "something different is going on; you can't just use old patterns and tricks." It basically works as a major mode switch.
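
As a small illustration of that mode switch (a toy Haskell example of my own, names made up): computing running totals. The imperative instinct is a loop that mutates an accumulator; the closest literal translation threads the state by hand, while the idiomatic version is a one-liner once folds and scans have sunk in.

  -- Closest literal translation of "loop with a mutable accumulator":
  runningTotalsLoop :: [Int] -> [Int]
  runningTotalsLoop = go 0
    where
      go _   []     = []
      go acc (y:ys) = let acc' = acc + y in acc' : go acc' ys

  -- The idiom the language nudges you toward:
  runningTotals :: [Int] -> [Int]
  runningTotals = tail . scanl (+) 0

  main :: IO ()
  main = print (runningTotals [1, 2, 3, 4])  -- prints [1,3,6,10]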


No one actually believes the Sapir-Whorf hypothesis anymore, do they?


I'm a grad student in cognitive psychology; my advisor works in the so-called "neo-Whorfian" literature.

What Sapir and Whorf were proposing is nowadays called "linguistic determinism" and is pretty much universally repudiated.

However, there is evidence for a less radical version called "linguistic relativity". For an example, see this paper: langcog.stanford.edu/papers/winawer2007.pdf


I haven't read the whole paper yet, but I find it interesting that the Russian language prescribes a distinction between light blue and dark blue, and that it just so happens that humans are more sensitive to blue than to red or green.

In fact, many expert Photoshop artists are aware of a trick whereby they can make small changes to the blue channel to get dramatic effects on the composite.


> humans are more sensitive to blue than to red or green

Without clarification, this is a nearly meaningless statement, but under the most plausible interpretations I can think of, it is wrong [in particular, we have few "short wavelength" S cones in the fovea, the very central part of the retina, and it is the signal from these S cones that allows us to distinguish yellow from blue; the S cones also contribute much less to the lightness response than M or L cones].

What human vision is primarily sensitive to is differences between a color and its surroundings, and the relative sensitivity to differences along the light/dark, red/green, and blue/yellow dimensions depends heavily on the size of the object, the prevalence of such differences in the rest of the scene, the state of adaptation of the eye, and so on. At small scales, we are most sensitive to lightness contrast, and two objects of similar lightness will tend to blend together at the edge even if they are of quite different hue.


It's also wrong: human photoreceptors are more sensitive to green wavelengths than to blue or red.

http://en.wikipedia.org/wiki/Green

"The sensitivity of the dark-adapted human eye is greatest at about 507 nm, a bluish-green color, while the light-adapted eye is most sensitive about 555 nm, a yellowish-green color"

It's why night-vision systems are geared to green, but displays that preserve dark-adapted vision are red.

http://en.wikipedia.org/wiki/Red

"Red light is also used to preserve night vision in low-light or night-time situations, as the rod cells in the human eye aren't sensitive to red."


This (wikipedia) article on distinguishing blue from green may be of interest to you: http://en.wikipedia.org/wiki/Distinguishing_blue_from_green_...


> and is pretty much universally repudiated.

Just curious, based on what?


Ever been unable to think of a word for what you wanted to say? Strictly speaking, linguistic determinism claims that this is impossible.


> Ever been unable to think of a word for what you wanted to say?

Wouldn't that be an "unknown unknown"? My mother tongue, Romanian, has a word, dor, whose exact meaning/translation I could never find in any English-language text. It doesn't describe unicorns in the heavens or any such non-existent thing; in fact, there is a similar term, saudade, in Portuguese (http://en.wikipedia.org/wiki/Saudade). But the thing is that to get its full meaning you need to see it used in a larger text, because words by themselves don't mean anything.


Under conditions where you know there is a word for a thing but can't recall it or conditions where you don't know if there is a name for such a thing?


Hi, a history of the Sapir-Whorf Hypothesis is here:

http://en.wikipedia.org/wiki/Linguistic_relativity


Not quite, and there has been some interesting experimental work lately... but no, not really. Here is a relevant link from Language Log, and there are many more where this comes from: http://languagelog.ldc.upenn.edu/nll/?p=2592

(edit for typo)


Weakened versions of it definitely hold true for programming languages. Try writing software to solve the same non-trivial problem in several different programming languages, and this will quickly become apparent.

I did this, over the past several months, in Python, Haskell, JavaScript, and C. The Python and JavaScript versions felt pretty similar, but Haskell and C felt unique. If I weren't so busy right now, I'd re-write it in Lisp. I'm pretty sure it would fit naturally with Lisp.


I think it's a bit silly not to acknowledge that a weak version of linguistic relativity holds. Anybody who's ever learned a new word can see intuitively that language has an effect on your conceptualization of the world.

This is an interesting read on the way that the so-called "Sapir-Whorf Hypothesis" has been used as a straw man argument over the years. There is also some fascinating material on Whorf's work: http://www.enformy.com/dma-Chap7.htm ("The Great Whorf Hypothesis Hoax")


They don't? That's news to me. Linguistic relativity, at least using a weaker "intuitive" understanding of the concept, seems like the most obvious thing in the world.


Yes! Making a new function in Obj-C is so unnecessarily complicated. I would bet that Obj-C functions are much longer on average than those in more functional languages. Though header files are great for documentation, most functions one writes should NOT be public, but small Lisp-like macros that should not be part of the object's header.

Yes, I know you can make 'private' functions by declaring them at the top of your implementation file. However, this seems like a hack, inasmuch as you never look at these declarations again until you write another private function; they're just there so your code compiles without warnings.

Here's hoping for Obj-C 3.


Since Objective-C is a superset of C, it also gets compiled in a top-down fashion [1]. If you don't want to add a method to your interface (public or private), just put it above everything else that will use it.

1. I believe there is something about only allowing for a single pass in the standard, but don't quote me on it.


Yeah, unfortunately you'll get a lot of warnings from Xcode if you do this.


No, you don't. You get warnings about not responding to the method only if you put the method implementation after the point where it is used. If it comes before, it works fine. Example implementation:

  @implementation SomeClass
  - (void) foo { [self baz]; } // a warning will occur; the compiler doesn't know about - baz yet.
  - (void) bar { [self foo]; } // a warning will not occur; the compiler knows about - foo already.
  - (void) baz { NSLog(@"bzz"); } // doesn't do anything special
  @end


It's been a while since I used either C++ or lisp, but for a function that's only used in the same class, isn't it just the function name, argument names and types, and type of return value? You only need to mess around with header files if it will be used elsewhere.

For lisp, you still need the function name and argument names (don't you? or is it common to access them as a list, with cddr etc?) - so it seems the saving is only in the type of return value and types of parameters. That is extra typing, but it doesn't seem much more - I guess it may make a difference at the margin. But if so, this is just dynamic typing. You still choose argument names and a function name.

Or is it more a cultural, idiomatic thing, of what's customary in a language? Which is the author's point, of Sapir-Whorf. I think of course you'll tend to do what's accepted in a language, what's easiest, what its strengths are. But it's also possible to cross-use idioms. It may be a little awkward, but if it really is right idiom for that specific task, it should help.

Or it might be another kind of cultural factor: that the author's C++ is work code, seen by other people (or more likely to be), and so function and argument names need to be fretted over, for others to understand (and also not criticize) them. I'm just guessing here. Why fret over C++ names, but not lisp names?


This came up once before, here's what I had to say: http://news.ycombinator.com/item?id=2144142



