Typically you're either deploying via a container, in which case there's no more overhead than any other container deployment, or you're deploying directly to some Linux machine, in which case all you need is a JVM - hardly an arcane ritual.
cljfmt is included with both Clojure-LSP and CIDER, so if you have either installed it should work out of the box.
With LSP mode the standard `lsp-format-region` and `lsp-format-buffer` commands should work, and on the CIDER side `cider-format-defun`, `cider-format-region` and `cider-format-buffer` should also invoke cljfmt.
I'll add a note to the cljfmt README about these commands, since your experience shows it might not be obvious that people likely already have access to cljfmt in Emacs as a result of using LSP or CIDER.
I'm not sure that's necessarily true... Customers have limited space for games; it's a lot easier to justify keeping a 23GB game around for occasional play than a 154GB one, so they likely lost some small fraction of the playerbase they could otherwise have retained.
"I’m not sure if anyone else feels this way, but with the introduction of generative AI, I don’t find coding fun anymore. It’s hard to motivate myself to code knowing that a model can do it much quicker. The joy of coding for me was literally the process of coding."
I experimented with GPT-5 recently and found its capabilities to be significantly inferior to those of a human, at least when it came to coding.
I was trying to give it an optimal environment, so I set it to work on a small JavaScript/HTML web application, and I divided the task into small steps, as I'd heard it did best under those circumstances.
I was impressed overall by how far the technology has come, but it produced a number of elementary errors, such as putting JavaScript outside the script tags. As the code grew, there was also no sense that it had a good idea of how to structure the codebase, even when I suggested it analyze and refactor.
So unless there are far more capable models out there, we're not at the stage where generative AI can match a human.
In general I find current models to have broad but shallow thinking. They can draw on many sources, which is extremely useful, but they seem to have problems reasoning things through in depth.
All this is to say that I don't find the joy of coding to have gone at all. In fact, there have been a number of really thorny problems I've had to deal with recently that I'd love to have side-stepped, but due to the current limitations of LLMs I had to solve them the old-fashioned way.
You are probably doing something that others have done frequently before.
I find that LLMs struggle constantly with languages for which the documentation is sparse or out of date. RAG, LoRA and multiple agents help, but they have their own issues as well.
I'll see if I can run the experiment again with Codex, if not on the exact same project then on a similar one. The advice I'm getting in the other comments is that Codex is closer to the state of the art.
As a quick check I asked Codex to look over the existing source code, generated via Copilot using the GPT-5 agent. I asked it to consider ways of refactoring, and then to implement them. Obviously a fairer test would be to start from scratch, but that would require more effort on my part.
The refactor didn't break anything, which is actually pretty impressive, and there are some improvements. However, if a human suggested this refactor I'd have a lot of notes. There are functions that are badly named or placed, a number of odd decisions, and it increases the code size by 40%. It certainly falls far short of what I'd expect from a capable coder.
> and found its capabilities to be significantly inferior to that of a human, at least when it came to coding.
I think we should step back and ask: do we really want that? What does that imply? Until recently nobody would use a tool and think, yuck, that was inferior to a human.
GPT-5 what? The GPT-5 models range from goofily stupid to brilliant. If you let it select the model automatically, which is the default, it will tend to lean towards the former.
The technology is progressing very fast, and that includes both the models and the tooling around it.
For example, Gemini 2.5 was considered a great model for coding when it launched. Now it is far inferior to Codex and Claude Code.
The GitHub Copilot tooling is (currently) mediocre. It's OK as a better autocomplete, but it can't really compete with Codex or Claude or even Jules (Gemini) when used as an agent.
Maybe; there are a few different things named "Codex" from OpenAI (yes, needlessly confusing). One "Codex" is a git-centric product; the other is the GPT-5-Codex agentic coding model. I recommend installing the Codex CLI if you're able to, and selecting the model via `/model`.
The models are one part of the story, but the software around them matters at least as much: what tools does the model have access to - bash, just file reading, or (as in your example!) just a cache of files visited by the IDE? How does the software decide what extra context to provide to the model? How does it record past learnings from conversations and failed test runs (if at all), and how are those fed back in? And of course, what are the system prompts?
None of this is about the model; it's all "plain old" software around it. Increasingly, that's where the quality differences lie.
I'm sorry to say it, but Copilot is just sort of shoddy in this regard. I like Claude, some people like Codex; there are a bunch of options.
But my main point is: it's probably not about the model, but about the products built on the models, which can vary wildly in quality.
In my experience with both Copilot and Claude, Claude makes subtler mistakes that are harder to spot, which also gobbles up time. Yes, giving it CLI access is pretty cool and helps with scaffolding things. But unless you know exactly what you want to write, and exactly how it should work, to the degree that you will notice the footguns it can add deep in your structures, I wouldn't recommend anyone use it to build something professional.
> This year, Clojure didn't make it into the named languages list on the Stack Overflow developer survey (1.2% in 2024).
Clojure is clearly a niche language, but Stack Overflow is also not a place that Clojure developers typically go, so Clojure usage there is going to be underreported.
> I do wish Clojure would adopt a bit more of an opinionated way of doing things and coalesce around some solid core/common libraries that the official docs could point to.
> Clojure is clearly a niche language, but Stack Overflow is also not a place that Clojure developers typically go, so Clojure usage there is going to be underreported.
It seems unclear to me why Clojure developers would not go to Stack Overflow, and especially unclear why they would avoid SO more than developers in other niche languages. When I learned Clojure, I spent a very long time on SO.
I suppose I’m just a little skeptical. I often hear similar sounding rationales - “oh don’t worry, <my favorite language/technology> is under-represented by the data”. Somehow every niche technology is underreported by the data! But to an outside observer, Clojure to me seems to be used very rarely in the types of engineering work I come in contact with, and 1% doesn’t seem that wrong to me.
OTOH, 1% of a large group is still quite a lot. How many programmers are there in the world? Google says an estimated 47 million. 1% of that is almost half a million people. If there are half a million Clojure programmers, Clojure is quite a successful technology! (Sadly, I doubt there are that many)...
Stack Overflow is one of those sites that benefit from a network effect. If there are few users of a particular technology on it, people are less likely to get questions answered and therefore less likely to interact with it again.
That said, it's always worth checking the numbers, so I took a look at the 2024 State of Clojure Survey. Around 18% of those surveyed used Stack Overflow, while the 2024 Python Developers Survey had at least 43% of respondents using Stack Overflow.
Now, you might well say that even so, Clojure is still a niche language - and I agree. But if Clojure developers use Stack Overflow at less than half the rate Python developers do (18% vs 43%), then SO-based figures undercount Clojure by a factor of roughly 43/18 ≈ 2.4. So instead of a 1.3% share, Clojure may have closer to a 3% share - if we assume that the Python community's usage numbers are more typical.
I’m not sure you can draw that conclusion. The Clojure survey asks where users went to interact with other people who use Clojure. Who interacts with people on SO? I’m sure the vast majority just read the answer and move on. It makes sense that a Slack server would be the #1 result.
The Python question is more broad: “Where do you typically learn about [python]?”
Posting a question on SO and having it answered is interacting with people. I'm unsure how you could interpret that any other way. And given that podcasts and YouTube were part of the answers, I think it's clear that passively listening to people counts as an interaction as well within the context of the question.
The Python question I'd say is more narrow, as it asks specifically about "new tools and technologies". What if I have a question about a tool I've been using for a while?
In any case, my point is not what market share Clojure actually has, but that there's reasonable doubt in using SO's developer survey as a basis for that answer. If a far smaller percentage of the Clojure community uses SO than is average for a language, then it's going to skew the results.
Thanks for responding, and, especially recognising the name, thanks for all your work on the Clojure ecosystem! To answer the question, for me personally it would largely be full-stack web and data science tooling, but that's just me. I was more so thinking out loud about the posted project and highlighting libraries that could be semi-official or strongly recommended by the community. The Clojure community offers many different libraries that, on the surface, are similar, even if each addresses a particular set of concerns. For a lowly idiot like me, without enough time to spend writing code in Clojure, I'd love to just be directed to the ones used by the experts that have solid backing and anticipated longevity - 'gold star' libraries.
Static typing is a useful constraint, but it's not the only constraint. Focusing too much on dynamic vs. static typing can make one miss the more general problem: we want code that's expressive enough to do what we want, while being constrained enough to not do what we don't.
Immutability, for example, is another great constraint that's not considered in the article, but should certainly be on your mind if you're deciding between, say, Rust and Java.
The article delves into some of the drawbacks of static typing: while it can be more expressive, it can also contain a lot of information that's useful for the compiler but decidedly less useful for a reader. The Rust example that loads a SQL resultset into a collection of structs is a standard problem when dealing with data that comes from outside your static type system.
The author's solution to this is the classic one: we just need a Sufficiently Smart Compiler™. Now, don't get me wrong; compilers have gotten a lot better, and Rust is the poster child of what a good compiler can accomplish. But it feels optimistic to believe that a future compiler will entirely solve the current drawbacks of static typing.
I was also slightly surprised when templates were suggested. Surely if you're aiming for rigor and correctness, you want to be dealing with properly typed data structures.
> we want code that's expressive enough to do what we want, while being constrained enough to not do what we don't
I don't think that's an ideal mental model. Code in any (useful) language can do what you want, and can not do what you don't want. The question is how far that code is from code that breaks those properties -- using a distance measure that takes into account likelihood of a given defect being written by a coder, passing code review, being missed in testing, etc. (Which is a key point -- the distance metric changes with your quality processes! The ideal language for a person writing on their own with maybe some unit testing is not the same as for a team with rigorous quality processes.) Static typing is not about making correct code better, it's about making incorrect code more likely to be detected earlier in the process (by you, not your customers).
I was being glib, so let me expand on what I said a little.
By 'constraint' I mean something the language disallows or at least discourages.
Constraints in software development are generally intended to eliminate certain classes of errors. Static typing, immutability, variable scoping, automatic memory management and encapsulation are all examples of constraints, and represent control that the language takes away from the developer (or at least hides behind 'unsafe' APIs).
By 'expressiveness' I mean a rough measurement of how concisely a language can implement functionality. I'm not talking code golf here; I mean more the size of the AST than the actual number of bytes in the source files.
Adding constraints to a language does not necessarily reduce its overall expressiveness, but static typing is one of those constraints that typically does have a negative effect on language expressiveness. Some will argue that static typing is worth it regardless, or that this isn't an inherent problem with static typing, but one that stems from inadequate compilers.
That is a pretty fair assessment, and I'll avoid the nominal v. structural subject, but in my experience the difference between static and dynamic typing comes down to metaprogramming. For instance, much of Python's success stems from its dynamic metaprogramming capabilities. By contrast Java's limitations wrt metaprogramming prevent it from competing in areas such as ML and data science / analytics.
One of the most untapped and misunderstood areas in language design is static metaprogramming. Perhaps this is what you meant by "inadequate compilers", but there is no reason why Java can't provide compile-time metaprogramming. With a comprehensive implementation it can compete directly with dynamic metaprogramming, with the benefits of static analysis etc., which is a game changer.
Everything has a cost. If you had to pick between "write 99% correct code in 1 week" vs "write 100% correct code in 1 year", you probably would pick the former, and just solve the 1% as you go. It's an absurd hypothetical, but illustrates that it's not just about correctness. Cost matters.
What often annoys me about proponents of static typing is that they sound like it doesn't have a cost. But it does.
1. It makes syntax more verbose, harder to see the "story" among the "metadata".
2. It makes code less composable, meaning that everything requires complex interfaces to support everything else.
3. It discourages reusing a few general types across the codebase, in favor of narrowly scoped, situational ones.
4. It optimizes for an "everything must be protected from everything" mentality, when in reality you only have like 2-5 possible data entry points into your system.
5. It makes tests more complex to write.
6. Compiled languages are less likely to give you a powerful/practical REPL in a live environment.
For some, this loses more than it gains.
Also, albeit I haven't seen this studied, the human factor probably plays a bigger role here than we realize. Too many road signs ironically make roads less safe due to distraction. When my code looks simple and small, my brain gets to focus better on "what can go wrong specifically here". When the language demands I spend my attention constructing types, adding more and more noise, it leaves me less energy and perspective for taking a step back and thinking "what's actually happening here".
Cost matters, but in my experience there's more to this story. It's more like this:
"write 99% correct code in 1 week and then try to fix it as you go, but your fixes often break existing things for which you didn't have proper tests for. It then takes you total of 2 years to finally reach 100% correct code."
Which one do you choose? It's actually not as simple as 1 year vs 2 years. For a lot of stuff 100% correctness is not critical. 99% correct code can still be a useful product to many, and to you it helps you to quickly validate your idea with users.
However, the difference between static and dynamic typing is not that drastic if you compare dynamic typing to an expressive statically typed language with good type inference. Comparing, for example, Python to C++ is not really fair, as there are too many other things that make C++ more verbose and harder to work with. But if we compare Python to, for example, F# or even modern C#, the difference is not that big. And dynamic typing has costs too, just different ones.
1. "Story" can be harder to understand without "metadata" due to ambiguity that missing information often creates. It's a delicate balance between too much "metadata" and too little.
2. Too much composability can lead to bugs where you compose the wrong things, or in the wrong way. Generic constraints on interfaces and other metaprogramming features allow flexible and safer composability, but require a bit more thought to create.
3. Reuse is similar: having no constraints on reuse doesn't protect you from reusing something in a corner case where it doesn't work.
4. (depends on how you design your types)
5. Dynamic languages require you to write more tests.
6. F# and C#, for example, both have a REPL.
A quality statically typed language is much harder to create and requires more features to be expressive, so there are fewer of them, or they have some warts, and they are harder to learn.
It's a game of tradeoffs, where a lot of choices depend on a specific use case.
Dynamic languages can execute code without type annotations, so you _can_ just dismiss types as redundant metadata. But I don’t think that’s wise. I find types really useful as a human reader of the code.
Whether you document them or not, types still exist, and you have to think about them.
Dynamic languages make it really hard to answer "what is this thing, and what can I do with it?". You have to resort to tracing through the callers to work out the union of all possible types that make it to that point. You can't just check the tests, because there's no guarantee they accurately reflect all callers. A simple type annotation gives you the answer directly, no need to play mental interpreter.
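A contrived TypeScript sketch of what I mean (the names are made up):

    // The annotation answers "what is this thing?" at the definition site;
    // without it, you'd be tracing every caller of renderBadge to find out.
    interface User {
      name: string;
      admin: boolean;
    }

    function renderBadge(user: User): string {
      return user.admin ? `${user.name} (admin)` : user.name;
    }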
I don't disagree; dynamic languages require better writing skills, so, for example, in the case of bilingual teams, metadata helps bridge the language barrier. However, if your team is good at expressing how/what/why[1] in your dynamic language, you will not have much issue answering what things are. Again, there are costs with either choice.
> Everything has a cost. If you had to pick between "write 99% correct code in 1 week" vs "write 100% correct code in 1 year", you probably would pick the former, and just solve the 1% as you go. It's an absurd hypothetical, but illustrates that it's not just about correctness. Cost matters.
I work on airplanes and cars. The cost of dead people is a lot higher than the cost of developer time. It’s interesting to ask how we can bring development costs down without compromising quality; in my world, it’s not at all interesting to talk about strategically reducing quality. We have the web for that.
> It’s interesting to ask how we can bring development costs down without compromising quality; in my world, it’s not at all interesting to talk about strategically reducing quality.
You have some level of quality y'all are used to, that was already achieved by compromise, and you'd like to stay there. How was that original standard established?
On an exponential graph of safety vs effort (where effort goes up a lot for small safety gains) you are willing to put in a lot more points of effort than general industry to achieve a few more points of safety.
> You have some level of quality y'all are used to, that was already achieved by compromise, and you'd like to stay there. How was that original standard established?
Safety-critical code for aviation co-evolved with the use of digital systems; the first few generations were directly inspired by the analog computers they replaced, and many early systems used analog computers as fallbacks on failures of the digital systems. These systems were low enough complexity that team sizes were small and quality was maintained mostly through discipline. As complexity went up, and team sizes went up, and criticality went up (losing those analog fallbacks), people died; so regulations and guidelines were written to try to capture best practices learned both within the domain, and from across the developing fields of software and systems engineering. Every once in a while a bunch more people would die, and we'd learn a bit more, and add more processes to control a new class of defect.

The big philosophical question is how much of a washout filter you apply to process accumulation. If you only ever add, you end up with mitigations for almost every class of defects we've discovered so far, but you also end up fossilized; if you allow processes to age out, you open yourself to making the same mistakes again. To make it a less trivial decision, the rest of software engineering has evolved (slowly, and with crazy priorities) at the same time, so some of the classes of defect that certain processes were put in to eliminate are now prevented in practice by more modern tooling and approaches. We now have lockstep processors, and MPUs, and verified compilers, and static analysis tools, and formal verification (within limited areas)... all of which add more process and time, but give the potential for removing previous processes that used humans instead of tooling to give equivalent assurances.
Thanks for writing this (just a generally interesting window into a rare industry). As you point out, you can't only ever add. If there was a study suggesting that static types don't add enough safety to justify tradeoffs, you might consider phasing them out. In your industry, they are currently acceptable, there's consensus on their value. You probably have to prioritize procedure over individual developers' clarity of perception (because people differ too much and stakes are too high). That's fair, but also a rare requirement. Stakes are usually lower.
> If there was a study suggesting that static types don't add enough safety to justify tradeoffs, you might consider phasing them out.
Perhaps. Speaking personally now (instead of trying to generalize for the industry), I feel like almost all of the success stories about increasing code quality per unit time have been stories about shifting defect detection and/or elimination left in the development process -- that is, towards more and deeper static analysis of both requirements and code. (The standout exception in my mind is the adoption of automatic HIL testing, which one can twist as moving testing activity left a bit, but which really stands alone as adding an activity that massively reduced manual testing effort.) The only thing that I can see removing static types is formal proofs over value sets (which, of course, can be construed as types) giving more confidence up front, at the cost of more developer effort to provide the proofs (and write the code in a way amenable to proving) than simple type proofs do.
The most important ingredient by far is competent people. Those people will then probably introduce some static analysis to find problems earlier and more easily. But static analysis can never fix the wrong architecture or the wrong vision.
In the industries I've worked, it's not a huge problem if you have a bug. It's a problem if you can't iterate quickly, try out different approaches quickly, bring results quickly. A few bugs are acceptable as long as they can be fixed.
I've even worked at a medical device startup for a bit, and it wasn't different, other than that at some point some ISO compliance work needs to happen. But the important thing is to get something off the ground in the first place.
> The most important ingredient by far is competent people.
Having competent people is a huge time (cost) savings. But if you don't have a process that avoids shipping software even when people make mistakes (or are just bad engineers), you don't have a process that maintains quality. A bad enough team with good processes will cause a project to fail by infinite delays, but that's a minor failure mode compared to shipping bad software. People are human, mostly, and if your quality process depends on competence (or worse, on perfection), you'll eventually slip.
Right, but I hope you also understand that nobody's arguing for removing static types in your situation. In a highly fluid, multiple deployment per day, low stakes environment, I'd rather push a fix than subject the entire development process to the extra overhead of static types. That's gotta be at least 80% of all software.
> So you're using proof on every line of code you produce?
No, except for trivially (the code is statically and strongly typed, which is a proof mechanism). The set of activities chosen to give confidence in defect rate is varied, but only a few of them would fit either a traditional or formal verification definition of a proof. See DO-178C for more.
Something that the type system should do is "make impossible states impossible", as Evan Czaplicki said (maybe others too).
We have started to use typed HTML templates in Ruby using Sorbet. It definitely prevents some production bugs (our old HAML templates would throw `nil` errors when first going into production).
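As a minimal TypeScript sketch of the "impossible states" idea (the names are made up):

    // A tagged union: a request can't be both loaded and failed, and data
    // can't exist before loading finishes - the compiler enforces this.
    type RequestState =
      | { kind: "loading" }
      | { kind: "loaded"; data: string }
      | { kind: "failed"; error: Error };

    function render(state: RequestState): string {
      switch (state.kind) {
        case "loading": return "Spinner...";
        case "loaded": return state.data;
        case "failed": return state.error.message;
      }
    }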
I have typically understood the "Sufficiently Smart Compiler" to be one that can arrive at the platonic performance ideal of some procedure, regardless of how the steps in that procedure are actually expressed (as long as they are technically correct). This is probably impossible.
What I'm proposing is quite a bit more reasonable—so reasonable that versions of it exist in various ecosystems. I just think they can be better and am essentially thinking out loud about how I'd like that to work.
I'm fully on board with improving compilers. My issue is that you compare the current state of (some) dynamically-typed languages with a hypothetical future state of statically-typed languages.
You use `req.cookies['token']` as an example of a subtle bug in JavaScript, but this kind of bug isn't necessarily inherent to dynamic typing in general. You could, for example, have a key lookup function that requires you to pass in a default value, or a callback to handle what occurs if the value is missing.
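    // hypothetical lookup helper: the callback decides what happens when the key is missing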
req.cookies.get('token', () => {
throw new AuthFailure("Missing token")
})
I agree with this. I value immutability much more than static types. I find it eliminates a much larger class of bugs without sacrificing expressiveness.
The price for not making a Turing-complete language is that you can't solve all possible problems. But you probably didn't want to solve all possible problems.
That's one of the insights in WUFFS. Yes, most problems cannot be solved with WUFFS, but we often don't want to solve those problems, so that's fine. WUFFS code, even written by an incompetent noob, categorically does not have most of the notorious problems from systems languages, yet in the hands of an expert it's as fast or faster. It has a very limited purpose, but... why aren't we making more of these special-purpose languages, with their excellent safety and performance, rather than building so many Swiss Army Chainsaw languages which are more dangerous but slower?
DSLs serve an important purpose, but the entire thread is about general-purpose Turing-complete languages. DSLs fail very quickly as soon as you need to leave the domain, which can easily result in needing many, many DSLs for a given project; other forms of complexity and sources of bugs easily arise from such an approach (and that's assuming you can just cobble the DSLs together), not least that proprietary DSLs and languages quickly become difficult to hire for and maintain long term. And DSLs are more specifically limited by the domains they can address. WUFFS is a notable exception that proves the rule, just because format parsing is very well studied and has very sharp edges on the things you need to support / accomplish.
To me the solution seems like it's adding complexity that could cause more issues further down the line.
The specific problems in the example could be solved by changing how the data is represented. Consider the following alternative representation, written in edn (a sketch; the exact names and the custom tag are illustrative):
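    ;; A sketch from memory - the names and the custom #my/interval tag are
    ;; illustrative, not from the original article. The region is written once
    ;; for the whole group, buckets are keyed by name in a map, and the
    ;; interval uses a reader tag.
    {:region "eu-west-1"
     :buckets {"logs"    {:retention #my/interval "30d"}
               "backups" {:retention #my/interval "90d"}}}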
This prevents issues where the region is mistyped for a single bucket, makes the interval more readable by using a custom tag, and as a bonus prevents duplicate bucket names via the use of a map.
Obviously this doesn't prevent all errors, but it does prevent the specific errors that the RCL example solves, all without introducing a Turing-complete language.
> The specific problems in the example could be solved by changing how the data is represented.
Finding the "right" representation for a given set of data is an interesting problem, but most (all) of the time the representation is specified by someone/something else.
In the past I've written a [preprocessor][1] that adds some power to the representation while avoiding general-purpose computation.