Scala is explicitly multiparadigm and offers a lot of advanced OOP features. It also had a Python-like (though reportedly better handled) 2 -> 3 transition, which deprecated some things, removed others, and added a bunch of new ones. Scala has always been complex, and right now it's also chaotic. It's a wonder the models can get that high a score with it, honestly.
Racket is a similarly large PL, with many abstractions built on the metaprogramming primitives it offers. Without looking at the generated code, it's hard to say anything, but I suspect the high score despite that might be because of the Scheme core of Racket: `racket/base` is a much smaller language than `racket`, so if the LLMs keep to it, it might narrow the solution space enough to show different results.
In general, I think you're half-right: the "solution space" size is a factor, but so is its shape - i.e., which features specifically are offered and how they interact. A more compact and cohesive language design should yield better results than a merely reduced surface area. C is not a huge language, but the features it offers don't lend themselves well to writing correct code. Elixir is both relatively small and strongly steers a programmer toward safer idioms. Racket is big, but the advanced features are opt-in, while the baseline (immutable bindings, pure functions, expressive contracts) is similar to Elixir. Python is both huge and complex; "there's one obvious way to do it" has always been a bit of a joke. Rust is incredibly complex - the idea is that the tooling should let you handle that complexity easily, but that requires agents; one-shotting solutions there won't work as well.
Drivers, kernels, firmware, low-level networking, the like. Also some higher-level infrastructure: compilers, interpreters, runtime systems (Qt/GLib-like code).
I'm not sure where the question comes from. The divide between systems and app programming is almost as old as coding itself; it's not some distinction without a difference - it's the difference between writing a TypeScript microservice for handling CRUD on some tables versus contributing to the TypeScript compiler, the Node runtime (e.g. libuv), or the PostgreSQL query planner.
Both kinds of programming are needed; both require specific (diverging in places) skills to do well. FWIW, I don't think systems programming is any safer (maybe a little bit) from AI than making apps, but the distinction between the two kinds of programming is real.
I'm not sure. You'd have to define "level of rigor". TypeScript has a vastly more expressive type system than C, for example, so given their respective prevalence in their domains, you could easily say that coding apps nowadays is actually more rigorous. There's Rust, but somehow people write lots of apps in it. And so on.
I don't think systems programming is inherently harder than writing apps. You deal with different sets of problems (users stubbornly misusing your UI vs. hardware vendors notoriously lying in the manuals; hundreds of dependencies vs. endemic NIH syndrome; etc.), but coding is, for the most part, the same thing everywhere. IME, the "level of rigor" (as in "kinds and pervasiveness of actions taken to ensure correctness") depends much more on actual people or organizations than on the domain.
See GToolkit[1] - Lepiter is a bit like that. It's too notebook-y for my taste, but it lets you write and format text and embed any widget. It also uses a native GUI and is not a repackaged browser.
As much as I love Common Lisp, it's dead. It has 2 orders of magnitude fewer packages in Quicklisp than Emacs has in MELPA - and Emacs is an editor, not a general-purpose programming language. SBCL has a handful of devs and moves very slowly; the other implementations don't move at all. Maybe LispWorks does, but that's expensive.
CL is also held together - and held back, hugely - by the standard that won't ever be updated. It's good to have it, but there are major omissions (code walkers, MOP) that won't ever be fixed.
As it is now, Elisp is more practical as a scripting language than CL, and the gap will only continue to grow. Right now, CL has an edge in parallelism - native threads and mutexes (with all the problems they entail) work there, while with Emacs, the only parallelism you can get is through separate processes. On the other hand, async/await-style concurrency works quite well in Emacs, while in CL you're basically a few macros away from raw promises; the only real asynchrony is through a pool of threads executing callbacks, and it doesn't play well with conditions, restarts, and some other advanced features (notably absent from Elisp).
I love CL, but right now it has aged considerably, lost many of its unique advantages, and has little chance of ever catching up. It's a shame, but using CL in 2026 is not a superpower anymore - it's just one of several similarly-valued propositions, competing with other dynamic languages, still providing a few unique advantages, but even those are being implemented in other languages fast.
> I just want a GUI that works like what they use.
TL;DR: Emacs is a GUI app and has lots of GUI-related functionality, but it tends to be slightly neglected by the majority of users. You can easily build your ideal GUI using the provided building blocks; the problem is, you have to build it, since most other users are not interested in doing so.
Both Emacs and Vim/Neovim have GUIs. I can't even run my Emacs in a terminal without `-q` (omit user config) - I never feel the need, and my config would be much more complex if I tried to make it terminal-friendly.
You don't need baroque keybinds, either. Both Emacs and Vim have always had a "Command Palette" - Alt+x in Emacs, : in Vim - and with a few plugins[1], you get fuzzy matching, inline docs, icons showing what type of command you're looking at, etc. Both editors also have GUI buttons and mode-specific menus (on top of generic ones), including context menus on click. This provides unmatched discoverability of available functions - you can then bind them to any key combination you find easy to remember. You don't have to, though, since with a few other plugins (Orderless), the frequently used commands bubble to the top of the list.
There are two things Emacs handles a bit poorly: mouse and popups. The former stems from existing users largely ignoring the issue, but the hooks for clicks and even gestures are there. The latter is an unfortunate consequence of wanting TUI to remain first class. There is functionality for creating Emacs "frames" (GUI windows) with given dimensions and a specified position, but it's basically GUI-only. Things like auto-completion popups default to something that can be emulated in the terminal, with frame/window-based implementations provided as extensions. That means that you can have a pop-up with method names starting with a few characters you typed, you can even have another pop-up beside that with docs for a given method, but you generally won't get that doc displayed as rendered markdown (you can't display headers with a bigger font in a terminal). It's 100% social and not a technical limitation - if you accept that you're only going to use Emacs in a GUI, you can get an IntelliJ-level of mouse and popup handling... Though it takes some effort.
That's the real problem, I think. You need to craft many of those features yourself out of available functionality. And it's not even a matter of some (even obscure) configuration, you will need to write your own Lisp to get the most out of Emacs. That's much more of a pain point and a respectable reason for not wanting to touch it. Technically, though, Emacs is not anti-GUI, and there are many packages that make Emacs pretty. Less so with mouse-friendliness, unfortunately, but you can configure it into something half-decent without much effort.
The only environment I know of that is (at least) equally powerful and flexible, but which handles GUI better is GToolkit[2] (VisualWorks was nice before the license change; now it's impossible to use) - a Smalltalk-derived system that uses the host OS (Linux/Windows/Mac) GUI directly through Rust bindings. A step down from there, but still respectable, is Pharo and the browser/Electron. Other than that, you have pre-written GUIs that you can't really change beyond what the developers planned.
> One thing I remember though, was that the multi-cursor+selection approach only really helps when you can see everything you're about to change on the screen. For large edits, most selections will be out of the scroll window and not really helping.
In Emacs, there's an `mc-hide-unmatched-lines` command that temporarily hides the lines between the ones with cursors. This makes multiple cursors usable with up to a screen-height number of items (being able to place cursors by searching for a regexp helps).
I agree, though - MCs are most useful for simple, localized edits. They're nice because they don't require you to mentally switch between interactive and batch editing modes, while still giving you some of the batch mode benefits. For larger or more complex edits, a batch-mode tool ("Search and replace", editable occur-mode in Emacs, or even shelling out to sed) is often a better choice.
OMRN is a regex compiler that leverages Common Lisp's compiler to produce optimized assembly to match the given regex. It's incredibly fast. It does omit some features to achieve that, though.
As a potential user (not the author), what jumps out at me about the two is:
OMRN: No lookaround, eager compilation, can output first match
RE#: No submatches, lazy compilation, must accumulate all matches
Both lookaround and submatch extraction are hard problems, but for practical purposes, the lack of lazy compilation feels like the most consequential, as it essentially disqualifies the engine from handling potentially adversarial REs (or I guess not, given the state limit - but then it's questionable whether it still counts as a full RE engine in such an application).
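To make "adversarial REs" concrete, here's a minimal sketch using Python's backtracking `re` engine as a stand-in for the general class (this is neither OMRN nor RE#): a nested quantifier takes exponential time on a non-matching input, while a DFA-based engine runs in linear time but may pay for it with a large, possibly eagerly-built, state machine - hence the appeal of lazy compilation and state limits.

```python
import re

# A classic "evil" regex: the nested quantifiers give a backtracking
# engine exponentially many ways to split the input into groups.
evil = re.compile(r"^(a+)+$")

def check(s: str) -> bool:
    return evil.match(s) is not None

matched = check("a" * 18)         # succeeds on the first path tried
rejected = check("a" * 18 + "b")  # fails, but only after exploring on
                                  # the order of 2^18 backtracking paths
```

Bumping the 18 up by a few characters makes the rejection take seconds, then minutes - which is why an engine exposed to untrusted patterns or inputs needs either a non-backtracking construction or hard limits.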
Everybody in sales in every software company in the world would be part of that community, I think. Some of the devs, too. Software was always marketed (and discussed with normal people) as something that could automate error-prone tasks, thereby eliminating the inevitable mistakes humans make when performing those tasks. Would Excel be the cornerstone of so many businesses if it sometimes gave the wrong value as a sum of a column?
That marketing (and the fact that, indeed, Excel can sum anything users throw at it without making mistakes) worked; now we have 3 generations of users who believe that once a computer "gets it" (i.e., the correct software is installed and properly configured), it will perform a task given to it correctly forever. The fact that it's almost true (true in the absence of bugs, with no changes to the setup, no updates, no hardware degradation, no cosmic rays flipping important bits, etc.) doesn't help - that parenthetical is hard to understand and often omitted when a developer talks to a non-developer.
We've always had software that wasn't as reliable as Excel - speech recognition and OCR come to mind. But in those cases, the errors are plainly visible - they cannot be "confidently wrong". Now we have LLMs that can be confidently wrong, and a vast number of users trained to think that software is either always right or, when it's wrong, it's immediately noticeable.
I don't think developers should bear the whole responsibility here - I think marketing had a much larger role in shaping users' minds. However, devs not clearly communicating the risks of bugs to users (for fear of scaring potential customers or out of laziness) over decades makes us partly responsible as well.
> Software was always marketed (and discussed with normal people) as something that could automate error-prone tasks, thereby eliminating the inevitable mistakes humans make when performing those tasks.
That's far from a community touting that computers don’t make mistakes.
> Would Excel be the cornerstone of so many businesses if it sometimes gave the wrong value as a sum of a column?
You mean like if it was running on a Pentium with the FDIV bug? :)
I agree there's a perception computer output is generally reliable, and that leaves users at the mercy of snake oil parrots that are generally unreliable and are sold without a warning. But I don't agree the cause is that touting.
Also, disable the formatting if stdout is not a terminal. That way, your colors and cursor movements won't be visible when piping to another program, and your tool will remain usable in apps that don't understand CSI sequences. Provide a command-line switch with more than two states: `ls` (and probably other GNU tools) has `--color=always|auto|never`, which covers most use cases.
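A minimal sketch of that three-state logic (the `use_color` helper and Python are just for illustration; `ls` does the same thing in C):

```python
import sys

def use_color(mode: str) -> bool:
    """Resolve a GNU-style --color=always|auto|never flag."""
    if mode == "always":
        return True
    if mode == "never":
        return False
    # "auto": emit color only when stdout is an interactive terminal,
    # so pipes and redirections get plain, parseable text.
    return sys.stdout.isatty()
```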
Also not mentioned in the article: there are a few syntaxes available for specifying things in control sequences, like
`\x1b[38;2;{r};{g};{b}m`
for specifying colors. There's a nice list here: https://gist.github.com/ConnerWill/d4b6c776b509add763e17f9f1... You can also cram as many control codes as you want into a control sequence, though it probably isn't useful in a modern context in 99.9% of cases.
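As a sketch of both points - the parameter syntax and cramming several codes into one sequence (the helper names here are mine, not from any library):

```python
# CSI (Control Sequence Introducer) is ESC followed by '['; SGR
# parameters are ';'-separated and terminated by the final byte 'm'.
CSI = "\x1b["

def sgr(*params: int) -> str:
    """Build a single SGR sequence from any number of codes."""
    return CSI + ";".join(str(p) for p in params) + "m"

def truecolor(r: int, g: int, b: int, text: str) -> str:
    # 38;2;r;g;b selects a 24-bit foreground color; 0 resets.
    return sgr(38, 2, r, g, b) + text + sgr(0)

# Several codes crammed into one sequence: bold (1) plus red fg (31).
bold_red_error = sgr(1, 31) + "error" + sgr(0)
```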
Lose. Evacuate the government. Then mount a guerrilla resistance, and wait for an opportunity. It'll come, most likely sooner rather than later.
Why is that unthinkable? I can understand people in the US being unable to process such a scenario, but here in Europe, there's not a single nation that wasn't off the map for some time.
I know why Ukrainians don't want that, but the demographic costs of tens to hundreds of thousands of "military age men" dying are so huge that any plausible alternative should be considered, even if it's very unpleasant.
> I know why Ukrainians don't want that, but the demographic costs of tens to hundreds of thousands of "military age men" dying are so huge that any plausible alternative should be considered, even if it's very unpleasant.
And you imagine they won’t die in your guerrilla war? Or the next invasion after an emboldened Russia regroups?
You're suggesting a decades long guerrilla movement under occupation will be better for the Ukrainian people than conscription during an existential defensive war?
In terms of the number of lives lost? Yes. Guerrilla resistance is a way of trading important advantages (like control of the territory or political legitimacy) for time and human lives. Guerrillas in a favorable environment tend to suffer much lower casualties per fighter per unit of time than trench warfare along a frontline.
It's a desperate measure, but so is snatching people from the street to bus them off to trenches.
Personally, I think people can live through almost any hell (and can make a comeback later) - unless they die, in which case they can't do anything anymore. Decades of hard times, in this view, are preferable to tens of thousands of excess deaths per year over a decade.
I understand why people are reluctant to consider this - I'm just trying to show that there are alternatives to the current situation; not strictly better, but at least presenting different trade-offs. In a situation of "existential defensive war," we should discuss all plausible options, even the most controversial ones.
Not necessarily. If Ukraine surrenders, Russia will disarm them. Then, when they revolt, Russia will be able to bomb them with impunity, because the resistance will not have the air defenses and manufacturing capacity that the Ukrainian military now has.
Not to mention that Russia will almost certainly commit genocide against, or at least severely oppress, the Ukrainians if they win.
EDIT: important to note that abandoning the trenches and the frontline does not mean surrendering, and I never said they should surrender! I suggested evacuating the govt and continuing the resistance with other means - I don't believe the actual surrender would do any good.
You're right - the risks are, of course, very significant. And we've been through that here in Poland, historically, like 3 times already. We've had quite a few failed uprisings, and we've had anti-communist guerrillas here for a while after WW2 - they were quickly (it still took 3-5 years, though!) dismantled, and most of them were killed. So the risks are real, and it is a "desperate measure".
On the other hand, it worked quite a few times: Cuba, Vietnam, and Afghanistan all proved that it's possible to win (or at least not lose) using guerrilla tactics. In the case of Ukraine, I think the circumstances would favor the resistance: Russia is already not doing well economically; the "severe oppression" of the Ukrainians (which I agree would follow) would cement support for the resistance, and it would cost Russia a lot; Russia has had air superiority since day one, and it hasn't really helped them much (it would be much more of a threat if Russia had US-level intelligence capabilities - but they demonstrably don't).
Yes, as long as it's possible, the conventional war should continue. At some point, though, the costs (all kinds of them) of continuing to fight in the field become so high that it's better to stop and switch to other ways of defending.
I'm not saying that moment is now - and it's not for me to dictate when it happens - I'm just trying to say that there are other ways of dealing with the aggressor that may (in favorable circumstances) lead to lower casualties without forgoing the hope of eventually winning. Which I wish Ukraine with all my heart, BTW.
The countries that got invaded by the US fought guerrilla wars because that was the only thing they could do. It wasn't some deliberate strategy to rope the US in.
And the only reason it worked out for them is that the US wasn't determined to create new states and had very low domestic support to begin with. That's not the case with Russia where this war is clearly a big deal to them.