Adding the agent (and maybe more importantly, the model that reviewed it) actually seems like a very useful signal to me. In fact, it really should become best practice for this type of workflow. Transparency is important, and some PMs may want to scrutinize these kinds of submissions more closely, or put them into a different pipeline, etc.
Ok, this sounds awesome, but do you miss the GUI integrations? Like being able to pop a document open in your editor from the desktop?
It just feels like it's hard to nail down your preferred workflow / setup ... but it's likely worth it if you're using it daily!
Are there any good visual or video demos of using this type of setup? I'm having trouble picturing what makes people really love this type of TUI-only workflow.
As an aside, it would be straightforward to make vim/neovim the editor that opens when you double click a text file on the desktop.
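On a Linux desktop with xdg-utils, for instance, that's a one-liner (a sketch, assuming your Neovim package installed an `nvim.desktop` entry; the exact entry name can vary by distro):

```shell
# Make Neovim the default handler for plain-text files opened
# from the file manager:
xdg-mime default nvim.desktop text/plain
```

Neovim will then open in whatever terminal the desktop entry specifies when you double-click a text file.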
This kind of setup is at its most powerful when you live on the command line, though. For instance, suppose you need to modify .py files across multiple projects that mention a certain variable, have a certain word in their name, and were modified within the last month.
That search is a bit easier in bash/zsh than it is in most IDEs, and the strength of vim/neovim is the shell integration.
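A minimal sketch of that search, assuming a `~/projects` root and using a hypothetical name fragment ("util") and variable (`max_retries`):

```shell
# Find .py files under ~/projects whose name contains "util",
# modified within the last 30 days, that mention "max_retries".
find ~/projects -name '*util*.py' -mtime -30 \
  -exec grep -l 'max_retries' {} +
```

From there you can feed the resulting file list straight into `nvim -p` or a `sed -i` pass, which is exactly the kind of chaining that's awkward in a GUI.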
It's not my kind of workflow but you can download a graphical client like Neovide, which I think has options for opening directly from your file browser.
I typically have a terminal-heavy workflow so it's very rare that I'm browsing to files from within my desktop, but if I am using Dolphin to look for a file I have an "Open terminal here" shortcut, and then I'll usually just run "nvim doc.md".
Why not give it a try? You'll likely find that there's an adjustment period and you can always switch back to your old editor if you don't like it. The beauty of it is that you can build it into whatever IDE you want instead of having useless features shoved into your IDE whether you use them or not.
I use Emacs and opening a new file is just pressing “C-x C-f” (find-file), typing the path (completion is available), and pressing enter. As for vim, I would spawn a new terminal (WM keybind, new tab, new pane with tmux), cd to the directory and open it with vim.
The nice thing is that I rely only on the keyboard, no need to point with the mouse. It may not be faster, but typing is sequential and there’s no context switching. So muscle memory helps a lot. Just like you don’t think about each character when you write, I don’t really think about the shortcuts and commands I use.
It's interesting that vim and emacs have this sort of cultural difference where emacs users tend to have one session always open, and vim users are more likely to directly launch a new session per file. I've largely adopted the emacs approach with my usage of neovim, though still use a mix. I have a Session.vim file that opens my windows/tabs/buffers I saved, including remote files using the scp://hostname/filepath syntax. Certain files I edit often enough that I just want them always open, and arranged a particular way. I do sometimes open a one-off separate session to quickly edit a config, though. I don't wanna mess up my muscle memory by introducing too many extra buffers or possibly messing up the order (although if I did do that I could just quit out and reopen the Session.vim file to get back to my saved arrangement).
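For anyone unfamiliar with the mechanics being described, a minimal sketch (the path is illustrative):

```vim
" Save the current windows/tabs/buffers (including scp:// buffers)
" to a session file:
:mksession! ~/Session.vim

" Restore it later, either from inside vim:
:source ~/Session.vim

" ...or from the shell:
"   nvim -S ~/Session.vim
```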
Another thing I picked up from my time with emacs was making keybinds to interact with the "other" window. One macro I use often will delete the second line of the file in my current window, save, change to the other window, delete second line, save, change back to original window. When activated from keybind it all happens approximately instantly. I also have some binds to jump to the top of the other window's file (without leaving my cursor stuck over there) and so on, letting me keep my cursor in the main area most of the time.
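A hedged sketch of what such a mapping might look like in vimscript (the `<leader>` keys and the line-2 specifics are illustrative, not the commenter's actual config):

```vim
" Delete line 2 and save in the current window, do the same in the
" other window, then return focus to where we started:
nnoremap <leader>d2 :2delete<CR>:write<CR><C-w>w:2delete<CR>:write<CR><C-w>p

" Scroll the other window to the top of its file without leaving
" the current window:
nnoremap <leader>gg <C-w>wgg<C-w>p
```

Because the whole sequence is a single mapping, it runs as one keystroke with no visible window-hopping.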
Vim's current directory is tied to the process, while each buffer in Emacs has its own default directory.
Also, buffer-local variables in vim come from different sources. In Emacs, a lot of state is tied to a major or minor mode; you only have to toggle modes to switch between keybinds, syntax, etc.
In Neovim, most non-toy language servers allow you to open the doc/definition in a popup/floating window, typically bound to `K`. Some language servers like rust-analyzer and gopls also support opening the docs in your browser.
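For reference, `K` maps to the built-in LSP hover in recent Neovim; an explicit mapping looks roughly like this (a sketch, assuming a language server is attached to the buffer):

```vim
" Show documentation for the symbol under the cursor in a
" floating window, via the attached language server:
nnoremap K <cmd>lua vim.lsp.buf.hover()<CR>
```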
That's a good point ... it's trivial to have an agent post something onto HN on your behalf, so even old accounts are not immune. It's just the nature of things now, until we get better technology to assign some sort of "ALIVENESS" attestation to folks without revealing identity too much.
Human author here. The fact that I don't know web design shouldn't detract from my expertise in operating systems. I wrote the software and the man page, and those are what really matter for security.
The web site is... let's say not in a million years what I would have imagined for a little CLI sandboxing tool. I literally laughed out loud when Claude pooped it out, but decided to keep it, partly ironically but also because I don't know how to design a landing page myself. I should say that I edited the content on the docs part of the site to remove any inaccuracies, so the content should be valid.
Nice tool, def gonna try it. I was looking for the source and it took a while before I found the github(0) link. As with a lot of software, I like to take a look at the source. Maybe you could make it more prominent on the website.
I think most people in this space are having the exact same set of dilemmas: you can EASILY have a flashy website, except that it's totally against the established norms for tools like the one you've written. A plain-text, bare-bones website is typically how a tool like this is presented, not a flashy promotional site that's visually appealing and has all the accessibility and proper UI/UX, etc.
We've truly entered a new, better era of the Internet (IMHO).
Also, thank you for this tool - it looks like a great piece of software!
I'm not a web UI guy either, and I am so, so happy to let an AI create a nice looking one for me. I did so just today, and man it was fast and good. I'll check it for accuracy someday...
I've been building my own tooling for similar sorts of things -- poorly, with scripts and podman / buildkit as well as LD_PRELOAD-related tools -- and I definitely clicked over to the HN comments without reading much of the content because I thought "AI slop tool". The site raised all my hackles, and I figured I'd never touch this thing; it'd be easier to write my own than review yet another AI slop tool written by someone who loves AI.
I'm glad I read the HN comments, now I'm excited to review the source.
I think, in the modern AI slop era, a web UI looks more legitimate when it appears a) hand-rolled and b) like not much time was spent on it at all. Which makes me a tad embarrassed as someone who used to sell fancy websites for a living.
Needs to? Is there some new law mandating all landing pages must contain exclusively handwritten text that people haven’t heard of?
To your actual point, the people that would take the landing page being written by an LLM negatively tend to be able to evaluate the project on its true merits, while another substantial portion of the demographic for this tool would actually take that (unfortunately, imo) as a positive signal.
Lastly, given the care taken for the docs, it’s pretty likely that any real issues with the language have been caught and changed.
No they don't. The text very clearly conveys what this project is about. Not everyone needs to cater to weirdos who are obsessed with policing how other people use LLMs.
The people who don't care about LLM slop being shoved down their throat at every turn are the "weirdos" here. The project might not be slop, but the website certainly is, and it's perfectly reasonable for people to stop reading immediately and decide that they don't care about what could be an otherwise useful project when they determine that the author didn't give enough of a shit to even write the text on the website themselves.
But there is an old-school README.md at the github homepage: https://github.com/stanford-scs/jai
The repository has an old-school ASCII INSTALL file.
If you don't like the vitepress site, just use github and read the human-written README and man page there. All the information you need to use the software is available without laying eyes on any AI slop. Of course, if you hate AI so much that you can't get past a vibe-coded landing page, you might not be the target audience for jai, because you probably aren't doing a lot of vibe coding. But maybe jai is still useful to you for grading programming assignments or running installer scripts.
Except that the "this was generated by an LLM" feeling you get from the front page would then make you automatically question whether the "decades of experience + stanford professor thing", as you put it, was true or just an LLM hallucination.
Author would, indeed, be wise to rewrite all the text appearing on the front page with text that he wrote himself.
Excellent point, though not everyone pays close enough attention to the domain shown in the browser (if they did, some of the more amateurish phishing attempts would fool a lot fewer people). But yes, anyone who notices the domain will have a clue to the truth.
To be less abstract, it was written by David Mazieres, who has been writing software and papers about user-level filesystems since at least 2000. He now runs the Stanford Secure Computer Systems group.
David has done some great work and some funny work. Sometimes both.
Doesn't detract from it. The jai tool is high-stakes; the static website isn't. The tool is designed to be used with LLM coding agents, so if anything it makes sense to vibecode the website, and even better if the author used jai to do it.
Find a local community church, public room, or public library and have them allow you to organize a handful of sessions where folks can bring in old devices and come up with a workflow that's efficient. Run it as a donation event where folks can donate money for a new hard drive, or to fund the service for other folks who can't afford it.
Maybe it's more about a rush to share how awesome it is that you compressed your time-to-release down to days instead of weeks or months. In reality that's a good thing, in the sense that you reach a failure state much FASTER, and failure states are good, because they mean you get to iterate and get past those failures FASTER.
I don't think people were ever releasing at this pace, so the failure states come fast and furious and there's just that much more visibility. I think the microslop Windows failures lately are just them being the same "them" they've always been... just MUCH faster. (They just need to stop monkeying with Windows and stop adding more features on top of an already shaky foundation.) Maybe we just need more stories like Anthropic working with Mozilla to squash 5x the bugs in a similar time frame first, AND THEN "vibe a browser together from nothing but specification files and an army of bots in a weekend".
Is this, or something like this, wildly dangerous to use? It sure is! Are experiments like this necessary to move forward? They sure are!
I guess when you consider the fact that many (most) of us are pulling solutions from the open Internet then this becomes maybe a little more palatable.
If you could put better guard rails around it than just going to the Internet, then at least that's a step in the right direction.