Most of modern medicine (by which I mean each discovery and invention in its own right) stands alongside electricity. Particularly vaccines.
AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock, but people would quickly switch back. You could not do the same with electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. Try to switch any of those off and the modern world (literally and figuratively) collapses. Turn off AI and there’d be a financial collapse, but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, which is still 99% of how people do things :) )
I think the Internet is the more apt analogy. But even with electricity, you could have taken it away within the first couple decades of its popularity and society would have shrugged it off. Once they got used to that telegraph thing, not so much.
Yeah, I agree, but AI isn’t there yet. It’s too early to call it one way or the other. There’s plenty else that’s as important as electricity in my view, and maybe AI will join those ranks in 15 years or so when it’s gone through the hype loop and when the economy has recovered from the now-basically-inevitable AI- and war-fueled turmoil of the next decade.
Sure, but compare this to "turn[ing] off" combustion engines a mere four years after commercial adoption rather than 162 years later (now). Back then, going back to horses wouldn't have been as big of a deal as it would be now.
That's primarily a function of the time for adoption, though, not the utility of the technology. In 20 years, people would not be able to so easily say that they could turn off AI with no impact.
That..what..no. The question was whether there are any comparable to electricity, of which I have put forth a number of examples. And also offered my opinion that it is too early to judge whether AI will be as significant or not.
There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.
I was with you until your comment about vibe coders. Microsoft paid for and brought this vibe coding hell upon themselves: GitHub Copilot, investment in/partnership with OpenAI, and everything else they’ve done to enshittify software and the internet.
If it brings them down, they’ve only themselves to blame. More likely it’ll just hasten the end of free public repos, which will be a shame, but we’ll find other ways to share code that aren’t reliant on one semi-benevolent megacorp.
I’m grateful for GitHub and their support for open source, but they’re not getting any sympathy for the AI mess they’re generating (and they’re contributing more to the mess than many other organisations, due to their size, position and product strategy).
They’re a big enough corporation that we can have nuanced feelings about them. Simultaneously grateful for one part of what they do, and unsympathetic for the consequences of a different part of what they do.
“Don’t pay attention to what Claude is doing, just spam your way through code and commands and hope nothing went wrong and you catch any code issues in review afterwards” is what this sounds like.
I will run parallel Claude sessions when I have a related cluster of bugs which can be fixed in parallel and all share similar context / mental state (yet are sufficiently distinct not to just do in one session with subagents).
Beyond that, parallel sessions to maybe explore some stuff but only one which is writing code or running commands that need checking (for trust / safety / security reasons).
Any waiting time is spent planning next steps (e.g. writing text files with prompts for future tasks) or reviewing what Claude previously did and writing up lists (usually long ones) of stuff to improve (sometimes with draft prompts, or notes of gotchas that Claude tripped up on the first time, which I can prompt around in future).
Spend time thinking, not just motoring your way through tokens.
I disagree. My workflow is built around reviewing what it produces and trying to build a process where it is effective to do that. I definitely can't and don't watch edits as they go by because it is too fast, but I want to easily review every line of code. If you're not "reviewing afterwards", then when would you be reviewing?
As far as planning the next steps, that's definitely valuable, and often I find myself spending many cycles working on a plan and then executing it, reviewing code as I go. I tend to have a plan-cycle and a code-cycle going on at the same time in different projects. They are reactive/reviewing in different ways.
The AI writing of the article made me give up halfway through. It’s a neat idea but the writing style of these AI models is brain-grating, especially when it’s the wrong style choice for this kind of technical report.
Also 90% of citations generated by AI are wrong or straight up don’t even exist. It’s got such a long way to go to be able to reliably write credible papers.
I think you missed the point. Yes it was meant to be humorous, and also to emphasise one of the reasons AI-generated citations are completely untrustworthy, especially with the growing number of AI-generated (junk) papers being published.
No, I had no intention of trying to offer a real source for the accuracy of AI generated citations. It is not hard to Google, search HN or even (ironically) use AI to search, to find numerous relatively recent studies discussing the problem or highlighting specific cases of respected journals/conferences publishing papers with junk citations.
It feels like allowing fake citations in the output from the AI means that you didn't do even the barest minimum of verification (i.e. telling the AI to verify each citation by having a second AI download the PDF matching that DOI and checking that it matches what the citation says).
Yeah, I tried building such a tool. The problem was twofold:
1) Automated fetching of papers is difficult. API approaches are limited and often require per-journal development; scraping approaches are largely blocked; and AI approaches require web fetch tools which are often blocked, and when they’re not, they consume a lot of credits/tokens very quickly.
2) AI generates so many hallucinated citations that it’s very hard to know what a given citation was even supposed to be. Sure, you can verify one link, but when you start trying to verify and correct 20 to 40 citations, you end up having to deal with hundreds or thousands of candidates just to get to a small number of accurate and relevant ones, which rapidly runs you out of credits/tokens on Claude, and API pricing is insane for this use-case. It’s not possible to just verify the link, as a “200 OK” status isn’t enough to be confident the paper actually exists and actually contains the content the AI was trying to cite. And if it requires human review anyway, the whole thing is pointless, because a human could more quickly search, read and create citations than the AI tool (bearing in mind most researchers aren’t starting from scratch - they build up a personal ‘database’ of useful papers relevant to their work, and having an AI search it doesn’t save any meaningful amount of work; so the focus has to be on discovering new citations).
All in all, AI is a very poor tool for this part of the problem, and the pricing for AI tools and/or APIs is high enough that it’s a barrier to this use case (partly due to tokens, and partly because the web search and web fetch tools are so relatively expensive).
Interesting, tools like Zotero seem to have sorted out the pdf fetching (and metadata + abstract fetching even without institutional access to the pdf). Did you try building the fetching on top of that?
I meant for point 1. Zotero will accept a DOI/arXiv link (among others) and download the public metadata (authors, journal, abstract) for you, so you don't need to build something for that end. AI cites a paper, copy the DOI into Zotero, analyze the info Zotero returns.
And here we see you’ve hit upon the Jevons paradox. The scope of work will grow to use more than it did before, now that human labour achieves more for the same money. Employment will ultimately go up, not down, over the long term - we are seeing a lot of short-term instability and noise, although much is said about AI without it yet showing up in the data, as per articles recently shared on HN about employment figures across the US and the world.
10 years from now, the people that stopped hiring novices and juniors are going to be deeply regretting their past decisions. The people that kept hiring are going to be working with their newly-promoted-to-senior colleagues and be making significantly more progress than those that didn’t keep hiring.
(IBM figured this out a couple of months ago, and explicitly announced tripling their hiring of juniors/grads in order to avoid ending up with a massive gap in the management/senior layers in future).
> “The companies three to five years from now that are going to be the most successful are those companies that doubled down on entry-level hiring in this environment,” Nickle LaMoreaux, IBM’s chief human resources officer, said this week.
Will they? Just because you developed them doesn't guarantee they will stay with you. It's always been the same issue tbh, but big companies could accept the risk because they pay the most competitive salaries anyway.
Except they won't. They will just hire those new people away from the firms that trained them. That's what happens now and there's no reason why it won't happen in the future.
This is why firms that do actual training have clauses written into the employment contract saying that if you receive x months of training from them, you have to work for them for at least y years; otherwise, if you leave, you have to pay them back the cost of training you (written as a dollar amount in the contract).
Companies that don't have that kind of clause in the contract are going to get screwed over when their newly trained employees get poached by other firms.
I started my career with a graduate program at a larger company. I stuck around for close to 5 years and would have liked to stay longer. My reason for leaving was the absence of career progression. For the first 3 years, the company had a great career progression path: clear outlines of what was needed for a promotion, fair and transparent pay, etc.
That changed, and despite hitting/exceeding my goals, I was denied a promotion twice with no good reason. My boss, who is fantastic, told me that he could not give me a good reason because he himself had not received one. So I left.
Generally speaking, my cohort of the program stayed with the company much longer than most employees. I don't think a single person left in the first 3 years. Attrition only started once there was a general shift in the company's culture and communication.
It might happen, but there are risks. The obvious one is that the existing employers will make an effort to keep the best (promotions and pay rises) so people hiring away from them will get the people they do not need to keep.
Those sorts of clauses are not legal everywhere. They would certainly be at least heavily restricted in the UK (on the other hand there are subsidies for some employer training and education here - which is why my daughter has an engineering degree without paying any fees). The author of the article is in Israel, and as an academic is in a different position to people in businesses.
It honestly seems a little control freakish to think this way. People leave companies and that’s a good thing, they explore the industry and generally become more capable. If you leave on good terms there’s nothing holding back a renewed relationship, now with the added benefit of new perspectives; maybe meeting at conferences or working on a project. My gut is telling me these companies don’t part on good terms with their employees.