
I have been putting my agents on their own, restricted OS-level user accounts for a while. It works really well for everything I do.

Admittedly, there’s a little more friction and agent confusion sometimes with this setup, but it’s worth the benefit of having zero worries about permissions and security.
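A minimal sketch of the idea in Python, assuming a dedicated `agent` user already exists and sudo is set up to allow it (the names here are placeholders, not my actual setup):

    import subprocess

    # One-time setup (as root), e.g.: useradd --create-home agent
    # The agent process then only sees what the 'agent' user can read.

    def run_as_agent(cmd: list[str]) -> subprocess.CompletedProcess:
        """Run a command under the restricted 'agent' account via sudo."""
        return subprocess.run(
            ["sudo", "-u", "agent", "--", *cmd],
            capture_output=True,
            text=True,
        )

    if __name__ == "__main__":
        result = run_as_agent(["id"])  # should report the uid/gid of 'agent'
        print(result.stdout)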


Haha, you can already see wheel reinventors in this thread starting to spin their reinvention wheels. Nice stuff; I run my agents in containers.

Huh? The first paragraph literally says they are using LLMs

> [ GENEVA, SWITZERLAND — March 28, 2026 ] — CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).


The site might have fixed it; to me it says "artificial intelligence" instead of LLM, still bad but not "steaming pile of poo on your bank statement" bad.


Does anyone know why they are using language models instead of a more purpose-built statistical model? My intuition is that a language model would either be overfit, or its training data would have a lot of noise unrelated to the application and significantly drive up costs.

It's not an LLM, it is a purpose built model. https://arxiv.org/html/2411.19506v1

5 years ago we would've called it a Machine Learning algorithm. 5 years before that, a Big Data algorithm.


We’ve been calling neural nets AI for decades.

> 5 years before that, a Big Data algorithm.

The DNN part? Absolutely not.

I don’t know why people feel the need for such revisionism but AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.


> AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.

When I was 13, having just started programming, I picked up a book on Artificial Intelligence from a "junk bin" at a book store. It must have been from the mid-80s, if not older.

It had an entire chapter on syllogisms[1] and how to implement a program to spit them out based on user input. As I recall, it basically amounted to some string extraction assuming the user followed a template, plus string concatenation to generate the result (roughly as sketched below). I distinctly recall not being impressed that such a trivial thing was part of a book on AI.

[1]: https://en.wikipedia.org/wiki/Syllogism
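In Python rather than whatever the book used, and with the template wording being my guess rather than the book's, it was roughly this shape:

    def syllogism(middle: str, major: str, minor: str) -> str:
        """Fill a fixed template: every M is P; S is an M; therefore S is P."""
        return (
            f"Every {middle} is {major}. "
            f"{minor} is a {middle}. "
            f"Therefore {minor} is {major}."
        )

    # "String extraction assuming the user followed a template":
    user_input = "man, mortal, Socrates"
    middle, major, minor = (part.strip() for part in user_input.split(","))
    print(syllogism(middle, major, minor))
    # Every man is mortal. Socrates is a man. Therefore Socrates is mortal.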


Eliza was 1960s.

In the 1990s I remember taking my friend's IRC chat history and running it through a Markov model to generate drivel, which was really entertaining.
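A minimal word-level sketch of that kind of Markov generator in Python (obviously not the original code):

    import random
    from collections import defaultdict

    def train(text: str, order: int = 2) -> dict:
        """Map each run of `order` words to the words seen right after it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain: dict, length: int = 30) -> str:
        """Random-walk the chain to produce plausible-sounding drivel."""
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(tuple(out[-len(state):]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # e.g.: print(generate(train(open("irc.log").read())))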


I hate that we're in this linguistic soup when it comes to algorithmic intelligence now.

This might be some journalistic confusion. If you go to the CERN documentation at https://twiki.cern.ch/twiki/bin/view/CMSPublic/AXOL1TL2025 it states

> The AXOL1TL V5 architecture comprises a VICReg-trained feature extractor stacked on top of a VAE.


… they’re not? Who said they are? The article even explicitly says they’re not?

For 40 minutes, the article claimed they used LLMs. They changed the wording twice: https://theopenreader.org/index.php?title=Journalism:CERN_Us... and https://theopenreader.org/index.php?title=Journalism%3ACERN_...

Huh? Their words are an accurate, if simplified, description of how they work.


The simplification is where it loses granularity. I could describe every human's life as they were born and then they died. That's 100% accurate, but there's just a little something lost by simplifying that much.


Wasn’t that film explicitly about climate change denial?


Well it was a metaphor I guess but I certainly took it that way.


It's still really the early 2000s! We have over 900 years left :)

---

On topic: discussions like these are as old as human discussion forums and communities. I think that the participants each grow and change on an individual level just as much as the community and platform do. I think humans have a hard time identifying how much of their feelings of nostalgia are based in reality.

Maybe the platform has not actually changed in the ways people fear, and instead, people's opinions on what is interesting, important, or valuable have changed?

Since this thread has been discussing politics-adjacent things, let's consider Senator John Fetterman from the United States. Mr. Fetterman is notably different today from when he first started his campaign, regarding what he believes is important and valuable. (Mr. Fetterman suffered a stroke, which is suspected to have brought about personality changes and shifts in political ideology.)

---

I think we, as individuals, should always be focusing our first line of questioning on how _we're_ changing, rather than trying to figure out how the world, or the zeitgeist, or Hacker News, etc. is changing.

Sometimes we outgrow things that we hold dear, and instead of accepting that it's not really the place for us anymore and moving on to a different environment, we try to shape our current environment around our new personality by instituting new rules or adding new features.


Genuinely curious: would you mind please explaining to me how your contributions are more productive than the person you are responding to (read: attacking)?

It reads like you are upset at the poster using "DEI" and projecting your own behaviors onto them ("tedious and unproductive political discourse", "immune from critique or any burden of evidence").


Fair enough.


I think it'd be good to keep in mind that Hacker News is mostly populated by a demographic commonly referred to as "Tech Bros" who, for the most part, are here as part of their journey in creating profitable businesses.


Thomas Midgley Jr. was very profitable when he introduced leaded petrol for cars, and it took 75-100 years until the «profit» was stopped. What did we learn?


Is that the definition of tech bros? I thought tech bros were people who shilled cryptocurrencies, NFTs and other grifts.


The definition of “Tech bros” is “tech people you don’t like”. There’s no agreed upon definition (just like how people disagree about what is/isn’t a “grift”) because it’s not meant to be descriptive, it’s a rhetorical device.


No, it's tech people you don't like for a specific set of reasons: it's mostly hubris and its implications like downplaying the damage the tech does to society and environment.


perceived downplaying of the damage. Popular soundbites (including "don't solve social problems with technology") have it generally backwards, and most people don't go beyond them.


No, this is too dismissive. There was a large shift in the culture of people over the last decade or so as the Bay Area money printers started printing faster than the finance firms were printing. E.g., tech money attracted a culture of people we'd normally label "finance bros": Patrick Bateman types, but without the explicit murder. Status, money, often born outstandingly privileged.

This is the tech bro people speak of. It is that psychopathic desire for status at all costs which, sadly, is learned, emulated, and exalted. Ironically, YC is the poster child for breeding this culture over the last 8 or so years, and the place where it is most often complained about, outside of Reddit of course.


That’s how you use the term because you don’t like those people.

I’ve heard people use the term to disparage Linus Torvalds and even Aaron Swartz because they didn’t like them.


Using tech bro on Torvalds is well outside the pattern of usage I’ve seen, which trends more towards GP’s definition, at least in the past 5 years.


Saying we don't like someone because we deem them to be a tech bro, is indeed a circular argument.

But saying we don't like someone that calls themself a tech bro? Well they had it coming.


For a company that had Sears’ positioning at the time? It wasn’t far off from that description.


But a lot of stuff had to be invented, and Bezos was the person to do it. Amazon sounded like an intense place to work in order to get to where it is.


They invented almost all of that a century earlier. Amazon improved on the warehouse management and, later, the delivery times, but that all came later. If Sears management had been earning their pay in the 90s instead of going AWOL on moving the catalog online, Amazon's rise would have been much harder, because Sears had a huge inventory and an unmatched local presence for returns, support, etc. Amazon was shipping at regular postal speeds then, too, so Sears could even have beaten them if they shipped from their warehouses.

This wasn’t uncommon back then: we had several clients in the 90s who just couldn’t wrap their heads around how quickly many of their customers would switch to email or online forms when it saved them a few days on the transaction.


Web search-based RAG is very different from having something embedded in a model's training data, though.


The ChatGPT website gives a similar answer. Are they running RAG, or just the model?

> Yes — I’m familiar with the “pelican riding a bicycle” SVG generation test.

> It’s become a kind of informal benchmark people use when evaluating whether an image-generation or SVG-generation model can: ...


Runnin’ confabulations:

>Yes — the “hamster driving a car” prompt is a well-known informal test …

>…that’s a well-known informal test people use…(a mole-rat holding or playing a guitar).

Try any plausible concept. Get sillier and it’s trained to talk about it being nonsense. The output still claims it’s a real test, just a real “nonsense” test.

