I think I'm in the middle. At first I was definitely against using any AI because I loved the craft, but over the past 12-18 months I've been using it more and more.
I still love to code just by hand for a fun afternoon. But in the long term, I think you're going to be left behind if you refuse to use AI at all.
For any given task, A.I. is now often doing in 5-10 minutes what would take me hours on my own (based on the last couple of weeks at least; I wasn't doing much agent-based A.I. coding before that).
The other day, while working on a feature, I was pretty much having a real-time conversation with my superiors: showing them updates just a couple of minutes after they suggested them, and getting feedback on how something should look.
That's something that would have taken me an hour or more each time they wanted a change or something new added.
Now, that cuts both ways: it started to seem like they were expecting that to be the new normal, and I started to feel like I had to keep it up or risk seeming like I wasn't actually working. It gets exhausting keeping up that pace, and I started worrying whenever anything did take me extra time.
> The other day, while working on a feature, I was pretty much having a real-time conversation with my superiors: showing them updates just a couple of minutes after they suggested them, and getting feedback on how something should look.
I did choose to do it, so it wasn't a nightmare. I wanted extra guidance on what to include, so I asked them (while they both weren't busy). They gave me feedback, I made the change in a minute or two with A.I. (when normally it would have taken me a lot longer), showed them, and asked 'how's this?' They'd say 'could you change it to be like this?' and so on, back and forth for about 45 minutes. It was about the equivalent of a call, except it was over Slack and I could provide something they could look at quickly.
What could be considered a nightmare, perhaps, is suddenly feeling like 'uh oh, is this going to be the new normal? Will I have to keep doing this all the time now, or else they'll think I'm not getting any work done?'
Yes, I absolutely agree. I have that feeling too, that we have to keep up that pace. But it's not realistic for everything to happen at that same speed.
I don't really know; the client I've been working with for the past 4.5 years only gave me access to agent-based A.I. two weeks ago, so this is all pretty new to me (it's a large corporation, and they didn't allow it until very recently).
I experimented with it a bit on my own personal projects a couple of weeks before that as well, but I obviously don't feel that same push when I'm doing my own projects (well, if I do, it's because I choose to).
This article perfectly captures the frustration of the "WebAssembly wall." Writing and maintaining the JS glue code—or relying on opaque generation tools—feels like a massive step backward when you just want to ship a performant module.
The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.
If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?
Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.
I’m wondering if the recent improvements in sending objects through sendMessage in V8 and Bun change the math here enough to be good enough.
sendMessage itself is frustratingly dumb. Your options are excessively bit-fiddly or obnoxiously slow. I think for data you absolutely know you’re sending over a port, there should be an arena allocator so you can do single-copy sends, versus whatever we have now (three copies? Four?). It’s enough to frustrate use of worker threads for offloading things from the event loop. It’s an IPC wall, not a WASM wall.
Instead of sending bytes you should transfer a page of memory, or several.
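A minimal sketch of what transferring (rather than copying) looks like today: the structured-clone algorithm already supports a transfer list, which moves the backing memory of an `ArrayBuffer` and detaches the source instead of copying it. The same transfer-list semantics apply to `worker.postMessage(value, transferList)`; `structuredClone` is used here only to keep the example single-threaded and self-contained.

```javascript
// Zero-copy handoff via a transfer list: the 64 KiB buffer's backing
// memory is moved, not duplicated, and the source buffer is detached.
const buf = new ArrayBuffer(64 * 1024);
new Uint8Array(buf)[0] = 42;

// With postMessage this would be: worker.postMessage(buf, [buf])
const moved = structuredClone(buf, { transfer: [buf] });

console.log(new Uint8Array(moved)[0]); // 42 — the data arrived intact
console.log(buf.byteLength);           // 0 — source detached, nothing was copied
```

Transfer avoids the copy, but only for the raw bytes; anything structured still has to be serialized into that buffer first, which is where the arena-allocator idea above would help.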
That live GPS jamming calculation using commercial flight NAC-P degradation is honestly brilliant. Such a clever use of existing public telemetry.
You mentioned compressing the FastAPI payloads by 90% to keep the browser from melting. I'm really curious about your approach there: did you just crank up gzip/brotli on the JSON responses, or did you have to switch to something like MessagePack, Protobuf, or a custom binary format to handle that volume of moving GeoJSON features?
Also, never apologize for the "movie hacker" UI. A project like this absolutely deserves that aesthetic. Awesome work!
This is an incredibly elegant hack. The finding that it only works with "circuit-sized" blocks of ~7 layers is fascinating. It really makes you wonder how much of a model's depth is just routing versus actual discrete processing units.
I spend a lot of time wrestling with smaller LLMs for strict data extraction and JSON formatting. Have you noticed if duplicating these specific middle layers boosts a particular type of capability?
For example, does the model become more obedient to system prompts/strict formatting, or is the performance bump purely in general reasoning and knowledge retrieval?