These are both libraries that you can choose to use. Both Typed Clojure and Schema are more powerful than Java's type system. By powerful I mean you can declare types and constraints that aren't expressible in most type systems (e.g. that an object cannot be null, or that a Map or Array must have specific keys or typed values). Schema is run-time (we only leverage it in development and testing), while Typed Clojure is more akin to compile time.
I'm ignorant here. From what you say it sounds like Typed Clojure is traditionally what we think of as 'types', whereas schema is more along the line of asserts.
Yup; but asserts that a) describe an entire 'object' (usually a map or list), b) can be composed together, and c) can be used to generate e.g. documentation or validated forms.
> By powerful I mean you can declare types and constraints that aren't expressible in most type systems (e.g. that an object cannot be null, or that a Map or Array must have specific keys or typed values)
Hmm. Can it do anything better than that?
Java has optional nullity annotations that tools like FindBugs and (more usefully) IntelliJ can use to highlight nullity bugs. Kotlin has nullity integrated into the language's type system in a much better way. Stating that a Map or Array must have specific types in it is the whole purpose of generics, which Java has had since 1.5, no?
Don't get me wrong: the Java type system is not that strong and has some frustrating holes in it. But typed arrays and nullity tracking don't seem like some advanced Clojure-only tech to me.
If you're interested in learning about them, those two links will do a better job of explaining what both Typed Clojure and Schema are than I can across a few comments.
Wrt the typing of Maps and Arrays, what Clojure supports goes way beyond what Java's type system supports: you can specify that the first element of an array has to be of type X, the second of type Y, and the third either null or of type Z. For maps, you can specify that a key must be present in the map and have a particular type, or even a specific value, while other keys may be present (required or not), and you can compose any of the constraints mentioned on the values the keys may take.
This is different from generics in that generics constrain an entire collection to homogeneity.
You could create formal objects in your Java code to express similar constraints, but you're not achieving the same result: a Java object always has the property (even if it's null), while a map (or an array) with an optional member will only have it if it's present. With Java you'd need many classes to model all the specific combinations. Java's current type system is significantly less expressive and doesn't have the same power. I realize those terms are subjective: by expressiveness and power I mean whether you can declare an idea in your code, or whether you have to write the imperative logic to implement it (the former meeting my definition of expressive or powerful).
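To make the "many classes" point concrete, here's a hedged Java sketch: even a single map shape with one required, non-null member and one optional member needs a dedicated class (the names `Config`, `name`, and `timeout` are invented for illustration, and each additional combination of keys would need another class).

```java
import java.util.Optional;

// Hypothetical illustration: in Java, each combination of required and
// optional fields needs its own class, whereas Schema/core.typed describe
// the shape of a plain map directly.
final class Config {
    final String name;               // required, must not be null
    final Optional<Integer> timeout; // optional member

    Config(String name, Optional<Integer> timeout) {
        if (name == null) throw new IllegalArgumentException("name is required");
        this.name = name;
        this.timeout = timeout;
    }
}

public class ConfigDemo {
    public static void main(String[] args) {
        Config c = new Config("db", Optional.empty());
        System.out.println(c.name);                // prints "db"
        System.out.println(c.timeout.isPresent()); // prints "false"
    }
}
```

The null check has to be written imperatively in the constructor; it is not declared in the type the way a Schema or core.typed annotation would declare it.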
The Null annotations approach runs into problems quite fast, though. How about a list of values? How do you annotate that the values are not null? How about a list of lists?
Quickly you discover you need a type system. (I'm not familiar with Kotlin but I'm sure it's fine.)
The answer is that core.typed's type system is proper space-age tech, but it's also not exactly ready for production use.
I'd pay money ($5-ish) for a couple of other ways to use this:
* a bookmarklet that allowed me to select text on my blog or on one of my github pages and analyze it
* emacs integration
* a command line tool that worked like ispell/aspell to help analyze things I've already written
Relay is a simple and secure way for businesses and customers to instantly connect and share private communications.
We are looking for (multiple) full-stack engineers to join our team. We use Clojure (no previous FP experience required, learn it working with the team), Devops (Chef) and practice Agile (weekly sprints and pair-programming).
I lead a team of 4 at a company in the Philadelphia, PA suburbs. We'll be adding more members later this year and Clojure is our primary back-end language (we also use Ruby and lots of JavaScript). We have been quite happy with it for many of the same reasons Alex mentioned.
This. The biggest difference is in the two cultures: blocking is anathema to the Node.js community, who will literally reject libraries or code that block because blocking destroys the entire model. The JVM community, by contrast, does not value non-blocking code: most of the core (JDBC, networking in general, file-system operations) is written in a blocking style, and the community accepts this with the implicit assumption that threads will help assuage those issues.
Python, Ruby and Perl all have the same cultural tolerance for blocking code. The Node.js community has a complete lack of tolerance for blocking code.
I work with the JVM every day (Clojure) and wish it were different wrt the common use of non-blocking code, but it's going to be a long road to get there on the JVM.
Java Executors combined with Guava's ListenableFutures easily turn any blocking operation into an asynchronous one.
Netty's entire model is asynchronous, and Java 7 now has AsynchronousChannels for IO which, I assume, Netty will make use of.
All in all, the JVM has a much more solid and performant foundation than anything Node can provide. The whole difference will come down to a programming style preference. I am not entirely sure why Vert.x adopted the Node style rather than the proven servlet container, as I'm sure both styles provide comparable performance. I guess each may shine under different loads/usage patterns (my guess is that Vert.x/Node can squeeze more performance from a single thread, but servlets are more scalable).
Is that supposed to be bad? The programming style will be the same. If threads are done right, and the JVM can manage their affinity well (especially on NUMA architectures), it's best to use them and pass a relatively small amount of data between them; that way they can provide much better performance than accessing the same large piece of RAM from many threads (which is what happens if you simply replicate a single event-loop thread with asynchronous IO).
Maybe I am missing something, but how can you possibly have an async SQL driver without threads like this? This sounds like a case of your Node.js database driver hiding the exact same behaviour described here within C code.
If the wire protocol for the driver is published, then you can write a 100% async driver for it. I.e. no threads blocking, ever. In fact, I already did this for redis and vert.x (I will dig out the code for this some time).
If you are dealing with something where you don't know what the wire protocol is and you just have a blocking client library to play with (e.g. JDBC, which is by definition blocking; see the JDBC API), then you can't do much but wrap the blocking API in an async facade and limit the number of threads that block at any one time.
This is exactly what we do in vert.x. We accept the fact that many libraries in the Java world are blocking (e.g. JDBC), so we allow you to use them by running them on a worker.
This is one area where we differ from node.js. Node.js makes you run everything on an event loop. This is just silly for some things, e.g. long running computations (remember the Fibonacci number affair?), or calling blocking apis.
With vert.x you run "event-loopy" things on the event loop but you can run "non event-loopy" things on a worker. It's a hybrid.
A limited number of threads will not scale the way real async wake-on-data connections scale. If demand exceeds your thread pool, then for the use case where your web response builds on async backend requests, your site will be down.
(notice the Connect method on line 325 of binding.cc)
At some level a client-server database driver isn't all that different from any other network client; you send a request over a socket and wait for a result. There's no reason you have to block while waiting.
Moreover some databases (like Postgres) let you receive asynchronous notifications signaled by transactions on other connections; that's how trigger-based replication systems like Bucardo do their thing.
Because it would be based on asynchronous socket responses. So you wouldn't iterate like you currently do w/ a ResultSet, but rather have a simple "RowHandler" of sorts. However you still run into the trouble you do w/ Node if you decide to do a lot of blocking work in there instead of just sending the row to some ExecutorService thread to get worked on.
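A minimal sketch of what such a callback-based API could look like; `RowHandler` and `AsyncQuery` are invented names for illustration, not part of any real driver, and the canned rows stand in for rows decoded off a non-blocking socket.

```java
import java.util.List;
import java.util.Map;

// Hypothetical callback interface: rows are pushed to the caller as they
// arrive, instead of being pulled by iterating a ResultSet.
interface RowHandler {
    void onRow(Map<String, Object> row); // invoked once per row
    void onComplete();                   // invoked when the result stream ends
}

class AsyncQuery {
    // A real driver would decode rows off a non-blocking socket as bytes
    // arrive; canned rows stand in for that here to show the control flow.
    static void execute(String sql, List<Map<String, Object>> cannedRows,
                        RowHandler handler) {
        for (Map<String, Object> row : cannedRows) {
            handler.onRow(row); // keep this cheap, or hand off to an ExecutorService
        }
        handler.onComplete();
    }
}

public class RowHandlerDemo {
    public static void main(String[] args) {
        AsyncQuery.execute("select id from t",
            List.of(Map.of("id", 1), Map.of("id", 2)),
            new RowHandler() {
                public void onRow(Map<String, Object> row) {
                    System.out.println("row: " + row);
                }
                public void onComplete() {
                    System.out.println("done");
                }
            });
    }
}
```

As the comment above notes, the handler itself must stay non-blocking; heavy per-row work belongs on a separate executor.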
I am talking about the cultures surrounding these languages and frameworks. Node's community rejects blocking libraries. Java's does not. I've used the non-blocking frameworks in Perl (POE and my own), C (select, and some of the poll variants), Ruby (event machine) and they are fine if you can avoid blocking libraries -- in these communities it is generally acceptable to write blocking libraries. I don't see it as a technical hurdle, I see it as a cultural one.
You need to start backing up your claims with actual data. What networking libraries are blocking in Node.js that are not blocking in Twisted? Moreover, what can't you do with a Twisted Deferred that you can do in Node.js?
There are two main things that block in computing:
* I/O
* CPU
You better believe that Node blocks on CPU, so what I/O does Node not block on that Twisted does?
> I am talking about the cultures surrounding these languages and frameworks. Node's community rejects blocking libraries. Java's does not.
So what?
You get tons of Java libs to use (a majority blocking, plus lots of non-blocking ones) on one hand, or you restrict yourself to the fewer non-blocking libs of varying quality available for Node.
With Java you can also turn blocking libs into non-blocking ones with a wrapper and threads, whereas with Node.js if it's blocking you're screwed, because the JS engine is single-threaded.
We run a major site on Java, had some thread trouble years ago in the very beginning, works very well now when tuned. Threads work.
BUT: I assume people will move to backend services with REST, combining REST backend results into a page. This increases IO a lot and will kill your latency and default thread models if you write sync code. You'd need async IO and composable futures to manage latency and thread count. And if you do async backend REST, why not do async JDBC etc.? But there are no libraries.
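The composable-futures approach can be sketched with `CompletableFuture` (which arrived in Java 8, after this discussion's era); `fetchUser` and `fetchOrders` are invented stand-ins for async REST calls to two backend services.

```java
import java.util.concurrent.CompletableFuture;

public class ComposePage {
    // Stand-ins for async REST calls to two backend services.
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "alice");
    }
    static CompletableFuture<String> fetchOrders() {
        return CompletableFuture.supplyAsync(() -> "3 orders");
    }

    public static void main(String[] args) {
        // Both requests run concurrently; the page is assembled once both
        // complete, without tying up a thread per pending backend call.
        String page = fetchUser()
            .thenCombine(fetchOrders(), (user, orders) -> user + ": " + orders)
            .join();
        System.out.println(page); // prints "alice: 3 orders"
    }
}
```

With N backend calls composed this way, latency approaches that of the slowest call rather than the sum, which is the point about managing latency and thread count.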
>This. The biggest difference is in the two cultures: blocking is anathema to the Node.js community - they will literally reject libraries or code that blocks because it destroys the entire model
Really? So they reject any kind of library that does anything except call a callback? Because everything else, from calculating 2+2 to creating a template blocks. And it doesn't matter when it happens, when it happens it blocks.
I think the article is a great explanation. I'm not sure I (exactly) agree with the conclusion "dynamic vars also breaks referential transparency" - isn't it the combination of lexical closure (referring to symbols from outside the function) as well as mutable state that breaks referential transparency?
If I have a function that takes a java collection and returns the count, it has no referential transparency because the collection is mutable, not necessarily b/c of how it's operating on its arguments.
Of course this is one of the things I love about Clojure and Rich Hickey's use of immutability as the default behavior (as often as possible): much more of the Clojure I write has referential transparency; hardly any of the Java I wrote did.
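The point about mutable collections can be shown in a few lines of Java; `count` here is a hypothetical helper, not from any library.

```java
import java.util.ArrayList;
import java.util.List;

public class CountDemo {
    // Looks pure: it just returns the size of its argument.
    static int count(List<String> xs) {
        return xs.size();
    }

    public static void main(String[] args) {
        List<String> xs = new ArrayList<>();
        xs.add("a");
        System.out.println(count(xs)); // prints 1
        xs.add("b");                   // mutate the same collection
        System.out.println(count(xs)); // prints 2: same argument, different result
    }
}
```

Because the argument can be mutated between calls, `count` is not referentially transparent even though it performs no mutation itself; with Clojure's immutable collections the same function would be.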
Dynamically bound values can be thought of as invisible arguments, that are passed through every function up the call stack. A referentially-transparent function only depends on its explicit arguments, and will always return the same value when passed the same arguments. A function that depends on a dynamically-bound variable may return different values when called with the same arguments.
Example (Common Lisp syntax, not actually tested):
(defvar z 0)

(defun example (x y)
  (declare (special z))
  (+ x y z))

(example 1 2)
=> 3

(let ((z 10))
  (example 1 2))
=> 13
In this sample, the function example is not referentially transparent, because it yields different results when passed the same argument. Note that this happens without using closures or mutation. example does not close over the value of z in any way because it is dynamically scoped[1]. There is no mutation because z is re-bound, which is conceptually different from mutation. It's effectively creating a new binding with the same name and different scope; a function called within that scope will look for the value by name, and find the new binding. The original binding is untouched outside of this new scope.
[1] This is tautological; the term "closure" is defined to refer to lexical scoping, and was invented to describe it[2].
[2] Actually, now that I'm writing this, it occurred to me that your confusion entirely stems from subtleties in the definitions of lexical closure and bindings. A closure is not any function that refers to symbols outside its body. The term only refers to functions that use lexical scoping to do so, and therefore need to "close over" their surrounding data and carry it around with them. Functions referencing dynamically scoped variables do not need to carry their data around, because they look up the call stack every time.
To your point, it is "referring to symbols outside the function" that breaks referential transparency, but dynamic bindings do so in an orthogonal fashion from closures, and unlike closures do not require mutation to do so.
The function that creates the lexical closure can still have referential transparency. For example:
(defn make-adder [x y]
  (fn [z] (+ x y z)))

((make-adder 1 2) 3)
-----VS-------------
(def ^{:dynamic} x 1)
(def ^{:dynamic} y 2)

(defn make-adder []
  (fn [z] (+ x y z)))

((make-adder) 3)
In the first example, the closure returned is still a function of the arguments passed to make-adder, and it is easy to reason about. In the second example, the closure returned relies on the dynamic value of x and y at run time. I agree that if you close over a reference to a mutable value you can also break referential transparency, but that's the exception and not the rule with clojure since it is immutable by default. With dynamic vars you are almost guaranteed to break referential transparency.
One of the advantages I find in pair-programming is that the 'navigator' (the one who's not typing) will often catch this kind of thing as it is happening. "Hey, that's just a while loop, or better yet just use `Clear`". There are other (greater) advantages to pairing, but this example is something we typically avoid before it gets committed.
As nano-tech advances and replicators become more commonplace, hardware will face the same issues with patents and the seeming absurdity of patenting an 'obvious idea'.
> As nano-tech advances and replicators become more commonplace,
I think we should leave science fiction out of the legal system. It is already overburdened dealing with problems we have and doesn't need to deal with the problems we may someday have.
I'm at my 3rd organization with Clojure, started in late 2008 (I think). In the first I had used Jscheme as an extension and scripting language for one of our JVM based systems. I moved to Clojure for prototyping tasks, other developers working with me showed interest and we were able to use it for small development tools at first. Once those started to show productivity improvements we were able to leverage it for small and then larger projects. Part of getting it adopted was getting the other devs on the team interested and excited about it.
The second company was with a team that I joined who was specifically looking for Clojure developers. It ended up being 4 developers, only 2 of which really knew any Clojure previously. The other 2 were Ruby devs. We paired over the entire life of the project and it was a very effective way of transferring skill sets - for all involved. We all learned more and more quickly than I've experienced during any other equivalent time on a project.
At my current engagement, I'm the head of tech and was able to choose the technology. I have the permission of the rest of the executive team and given my activity with the local technology community I've had no difficulty finding good developers who are enthusiastic about using Clojure. This may change as we grow, but I know I've still got a pool to pull from.
Choosing a non-mainstream technology is often additional work for whoever is leading the decision in any organization. For me it's been worth it: we've gotten passionate people who are more engaged because we're using something they care about. The fallback is to just go with a more mainstream JVM language (Java) - that hasn't been and doesn't seem like it'll be an issue.
Do you specifically seek out these jobs so that you can use Clojure? I'm working with Clojure on a personal project for image-processing, and I'd like to find a job where I can do more with it than just late-night hackery.
In the 1st you could say I snuck JScheme in under the radar when I first introduced it.
For the 2nd I was actively recruited to the team. I personally believe that this was because I made a conscious effort in the fall of 2008 to start being much more active in the community at large. I started doing more visible things on the internet: a blog, putting up a sandbox on github. I also started speaking at local user groups (god bless them for listening to my first talks and my horrible, horrendous presentation skills).
The 3rd (current) is a startup, which I was also actively recruited into, joining the person who was at the time the first technologist at the company. They subsequently left about 7wks later for another startup. Taking on the role of head of technology, I became responsible. I mean that with all the gravity of what it implies: I chose Clojure and I am responsible for that - as part of such I must ensure that the organization can keep moving forward and has a plan if I am no longer part of it. To do that I've started a local Clojure group, we have meetings at our office, and I ensure that the developers working on my team get every single ounce of technical experience I can transfer to them.
By 'recruited into' I mean that not in the sense that either side used a recruiter. I mean I was approached through my network, because of the effort I put into building the network and the effort I put into building a personal brand (as slimy as that sounds, it is working).
Actively working at networking has been wonderful. I can't recommend it enough. I was involved in starting a "tech breakfast" meeting that takes place 1x a month. The local groups (esp. volunteering to speak), the breakfast sessions, and buying people lunch (an hour of interesting conversation is totally worth the $10, and every time it has come back to me) have gone a long way to building a local network.
Networking will help - you'll have access to more of the places that might be willing to use Clojure, and more places that would be willing to let you choose the stack.
Be happy to share more if you have more questions.
Could that be the basis for an IP based startup? Seems like a clever hack of the patent & legal system surrounding patents. Of course once there is 1 of these, there will be copycats - which could lead to a race to capture the largest share of patents. Patent wars indeed.
If the organization that asserted management of the pool did it as a non-profit or not-for-profit, then it could work. Fragmentation and wars occur for power and control.
https://github.com/Prismatic/schema
https://github.com/clojure/core.typed