
Hackernews needs to nominate an elite crew of individuals who can tell when an article is AI slop and flag it.

We’ve had the ability to make water/wind-proof garments long before Gore-Tex. The crucial thing is that Gore-Tex is water vapor permeable. So it has a way better ability to shed excess heat without needing to take off a layer.

Traditional materials still have a place though. Materials science has not yet beaten feathers or wool, for the most part.


> Gore-Tex is water vapor permeable. So it has a way better ability to shed excess heat without needing to take off a layer.

It's a way to shed water: wearing waterproof, non-breathable layers is often worse than not wearing them, because the moisture your body releases gets trapped and soaks you from the inside as surely and rapidly as the rain would. (Maybe it's a bit warmer.)


> The crucial thing is that Gore-Tex is water vapor permeable.

While dry, or intermittently wetted (so it can still shed water). Numerous independent tests show that it doesn't breathe at all once the surface is fully wet. Also, Gore-Tex is no longer best-in-class amongst rain-shedding breathable fabrics; it simply has name recognition.

To be fair, few things do breathe once their surface wets... but wool's surface is so convoluted by the twisty, hydrophobic threads that it rarely gets fully wet on the surface.


My hope would be that this eventually pushes pip to adopt a similar feature set and similar performance improvements. It's always a better story when the built-in tool is adequate instead of having to pick something. And yes, uv is written in Rust, but it's pretty clear that Python could provide something within 2-5x of its speed.

The problem is funding.

There seems to be a pervasive belief that the Python tooling and interpreter suck and are slow because the maintainers don’t care, or aren’t capable.

The actual problem is that there isn’t enough money to develop all of these systems properly.

Google says that Astral had 15 team members. Of course, it’s so hard to make these projections. But it wouldn’t shock me if uv and ruff are each individually multi-million dollar pieces of software.

If you’d like to invest a million dollars to improve pip, or work for free for 3 years to do it yourself, I’m not sure if anyone would object.


pip isn't exactly a "built-in" tool, beyond the Python distribution having a stub module that downloads pip for you.

`ensurepip` does not "download pip for you". It bootstraps pip from a wheel included in a standard-library sub-folder (running pip's own code from within that wheel, using Python's built-in `zipimport` functionality).

That bootstrapping process just installs the wheel's contents; no Internet connection is required. (Pip does, of course, download a newer pip for you when you run its self-upgrade, since the standard-library wheel will usually be out of date.)
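You can see this from Python itself. A quick sketch (the `_bundled` directory is a CPython implementation detail, and some Linux distros strip or relocate it, so treat the path as illustrative):

```python
import ensurepip
import glob
import os

# Version of the pip wheel bundled with the standard library
print(ensurepip.version())

# The wheel itself ships inside ensurepip's package directory;
# this is internal layout, shown only to illustrate the point
bundled = os.path.join(os.path.dirname(ensurepip.__file__), "_bundled")
print(glob.glob(os.path.join(bundled, "*.whl")))
```

No network access happens in either call; `python -m ensurepip` installs from that local wheel.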


E2E encryption lets Meta turn down government subpoenas because they can say they truly don't have access to the unencrypted data.

I can't say I really mind this change by Meta that much overall though. Anyone who's serious about privacy probably knew better than to pick "Instagram chat" as their secure channel. And on the other hand having the chats available helps protect minors.


These AI-written articles carry all the features and appearance of a well-reasoned, logical article. But if you actually pause to think through what they're saying, the conclusions make no sense.

In this case, no, it's not the case that Go can't add a "try" keyword because its errors are unstructured and contain arbitrary strings. That's how Python works already. Go hasn't added try because they want to force errors to be handled explicitly and locally.
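For what it's worth, the Python comparison holds up in code: a Python exception's payload is typically nothing more structured than an arbitrary message string, and try/except works regardless:

```python
# A Python exception usually carries just an arbitrary message string,
# yet try/except handles it without any "typed error set".
try:
    int("not a number")
except ValueError as exc:
    print(type(exc).__name__)  # ValueError
    print(exc.args)            # a tuple holding an arbitrary message string
```

So unstructured error payloads clearly aren't what blocks a try-like construct.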


It is simpler than that. Go hasn't added "try" because, much like generics for a long time, nobody has figured out how to do it sensibly yet. Every proposal, and there have been many, has had gaping holes. Some of the proposals got as far as being implemented in a trying-it-out capacity, but even those fell apart under scrutiny once people started using them in the real world.

Once someone figures it out, it will come. The Go team has expressed wanting it.


> nobody has figured out how to do it sensibly yet.

In general or specifically in Go?


The `try` syntax mentioned in the article doesn't actually make things less explicit in terms of error handling. Zig has `try` and its error handling is still very much explicit. Rust has `?`, same story.


I just read the article and I didn't come away with that rationale. Now, this isn't to say that I agree with the author. I don't see why Go would *have* to add typed error sets to have a try keyword.

Yes, mimicking Zig's error-handling mechanics in Go is very much impossible at this point, but I don't see why we can't have a flavor of said mechanics.


What led you to believe this is an AI written article?


It's quite clear that these companies do make money on each marginal token. They've said this directly and analysts agree [1]. It's less clear that the margins are high enough to pay off the up-front cost of training each model.

[1] https://epochai.substack.com/p/can-ai-companies-become-profi...


It’s not clear at all because model training upfront costs and how you depreciate them are big unknowns, even for deprecated models. See my last comment for a bit more detail.


They are obviously losing money on training. I think they are selling inference for less than what it costs to serve these tokens.

That really matters. If they are making a margin on inference they could conceivably break even no matter how expensive training is, provided they sign up enough paying customers.

If they lose money on every paying customer, then building great products that customers want to pay for will just make their financial situation worse.
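The break-even arithmetic is simple enough to sketch (every number below is made up, purely to illustrate the shape of the argument):

```python
# Hypothetical figures: if inference carries a positive per-token margin,
# training is a fixed cost that can be amortized over tokens served.
training_cost = 5e9            # made-up one-time training spend, USD
margin_per_m_tokens = 0.50     # made-up profit per million tokens served, USD

# Millions of tokens that must be served to recoup the training spend
tokens_to_break_even = training_cost / margin_per_m_tokens
print(f"{tokens_to_break_even:,.0f}M tokens to recoup training")
```

The point is only structural: with a positive margin the break-even volume is finite; with a negative margin no volume of customers ever recovers the fixed cost.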


"We lose money on each unit sold, but we make it up in volume"


By now, model lifetime inference compute is >10x model training compute, for mainstream models. Further amortized by things like base model reuse.


Those are not marginal costs.


> They've said this directly and analysts agree [1]

Chasing down a few sources in that article leads to articles like this one at the root of the claims [1], which is based entirely on information "according to a person with knowledge of the company’s financials". That doesn't exactly fill me with confidence.

[1] https://www.theinformation.com/articles/openai-getting-effic...


"according to a person with knowledge of the company’s financials" is how professional journalists tell you that someone who they judge to be credible has leaked information to them.

I wrote a guide to deciphering that kind of language a couple of years ago: https://simonwillison.net/2023/Nov/22/deciphering-clues/


Unfortunately, tech journalists' judgement of source credibility doesn't have a very good track record.


But there are companies that are only serving open-weight models via APIs (i.e., they are not doing any training), so they must be profitable? Here's one list of providers from OpenRouter serving LLama 3.3 70B: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...


It's also true that their inference costs are being heavily subsidized. For example, if you factor Oracle's debt into OpenAI's revenue, they would be incredibly far underwater on inference.


Sure, but if they stop training new models, the current models will be useless in a few years as our knowledge base evolves. They need to continually train new models to have a useful product.


Football is unique in that the way it’s presented makes it almost impossible to understand what’s going on. There are a million rules, which even die-hard fans don’t fully understand. And the broadcast doesn’t even attempt to explain, or even show, the offensive and defensive formations and plays being chosen.

It feels like what we’re shown on tv is a very narrow slice of what’s going on. We see the ball moving down the field but have no idea what the coach or quarterback is doing. Somehow it’s still an incredible watch though.


The plays belong to the individual teams, which is, I heard, why they don't broadcast full field views.

No idea if it's true or not


There are some recent experiments with consumer-facing full-field views (Prime Vision All-22). Those views were held closely for a long time, though.


What would the average software engineer pay for an AI coding subscription, compared to not having one at all? Running a survey on that question would give some interesting results.


I may be a bit of an anomaly since I don't really do personal projects outside of work, but if I'm spending my own money, then $0. If the company is buying it for me, whatever they are willing to pay; but at anything more than a couple hundred a month, I'd rather they just pay me more instead, or hire extra people.


I would pay at least $300/month just for hobby projects. The tools are absolutely amazing at the things I am worst at: getting a good overview of a new field/library/docs, writing boilerplate and first working examples, dealing with dependencies and configurations, etc. I would pay that even if they never improve and never help write any actual business logic or algorithms.

Simple queries like: "Find a good compression library that meets the following requirements: ..." and then "write a working example that takes this data, compresses it and writes it to output buffer" are worth multiple hours I would otherwise need to spend on it.
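For instance, the second query might come back as something like this (zlib is chosen arbitrarily here, just to make the example concrete):

```python
import zlib

data = b"some repetitive payload " * 200

# One-shot compression at the default-ish level into an output buffer
compressed = zlib.compress(data, level=6)

# Round-trip to verify the data survives
restored = zlib.decompress(compressed)
assert restored == data

print(f"{len(data)} bytes -> {len(compressed)} bytes")
```

Not deep work, but exactly the kind of boilerplate that eats an afternoon when done by hand.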

If I wanted to ship commercial software again I would pay much more.


I pay $20 for ChatGPT, I ask it to criticize my code and ideas. Sometimes it's useful, sometimes it says bullshit.

For a few months I used Gemini Pro; there was a period when it was better than OpenAI's model, but they did something and now it's worse (even though it answers faster), so I cancelled my Google One subscription.

I tried Claude Code over a few weekends. It can definitely do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful at all. And doing anything remotely complex involves so many twists that I find the net benefit negative. Also, a normal side effect of doing something yourself is learning, and here I feel like my skills are devolving.

I also occasionally use Cerebras for quick queries, it's ultrafast.

I also do a lot of ML so use Vast.ai, Simplepod, Runpod and others - sometimes I rent GPUs for a weekend, sometimes for a couple of months, I'm very happy with the results.


I pay $20/m for cursor. It allowed me to revamp my home lab in a weekend.


> It allowed me to revamp my home lab in a weekend.

So, what did you learn from that project?


I previously hand baked everything.

I learned how Ansible, Terraform, and Docker approach DevOps and infra.

I would not be able to hand-cook anything with these tools, but understanding the syntax was a non-goal (nor is it what interests me).


If Kubernetes didn't in any way reduce labor, then the 95% of large corporations that adopted it must all be idiots? I find that kinda hard to believe. It seems more likely that Kubernetes has been adopted alongside increased scale, such that sysadmin jobs have just moved up to new levels of complexity.

It seems like in the early 2000s every tiny company needed a sysadmin, to manage the physical hardware, manage the DB, custom deployment scripts. That particular job is just gone now.


Kubernetes enabled capabilities small companies couldn't dream of before.

I can implement zero-downtime upgrades easily with Kubernetes. No more late-day upgrades and late-night debug sessions because something went wrong; I can commit at any time of the day and be sure that the upgrade will work.

My infrastructure is self-healing. No more crashed app server.

Some engineering tasks are standardized and outsourced to a professional host by using managed services. I don't need to manage operating-system updates and some component updates (including Kubernetes itself).

My infrastructure can be easily scaled horizontally. Both up and down.

I can commit changes to git to apply them or I can easily revert them. I know the whole history perfectly well.

Before, I would have needed to reinvent half of Kubernetes to enable all of that. I guess big companies just did that; I never had the resources for it. So my deployments were not good. They didn't scale, they crashed, they required frequent manual interventions, and downtimes were frequent. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a slightly higher devops learning curve.


You’re absolutely right that sysadmin jobs moved up to new levels of complexity rather than disappeared. That’s exactly my point.

Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.

I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.

The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.


All this is driven by numbers. The bigger you are, the more money they give you to burn. No one is really working on solving problems; it's 99% managing complexity driven by shifting goalposts. No one really wants to build to solve a problem. It's a giant financial circle jerk: everybody wants to sell, rinse and repeat, the line must go up. No one says stop, because at 400 mph hitting the brakes will get you killed.


People really look through rose-colored glasses when they talk about the late '90s, the early 2000s, or whenever their "back then" was, and how everything was simpler.

Everything was for sure simpler, but also the requirements and expectations were much, much lower. Tech and complexity moved forward with goal posts also moving forward.

Just one example on reliability: I remember popular websites with many thousands, if not millions, of users putting up an "under maintenance" page whenever a major upgrade came through, sometimes closing shop for hours. If said maintenance went bad, come back tomorrow, because they weren't coming up.

Proper HA, backups, monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pet" infrastructure that is now trivialized by Kubernetes were sci-fi for most. Today people consider all of this and a lot more as table stakes.

It's easy to shit on cloud and kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert back 20-30 years, that isn't coming back.


> Everything was for sure simpler, but also the requirements and expectations were much, much lower.

This. In the early 2000s, almost every day after school (3 PM ET) Facebook.com was basically unusable. The request would either hang for minutes before responding at 1/10th of the broadband speed of the time, or it would just time out. And that was completely normal. Also...

- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields

- Between 8-11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected when using dial up Internet. And then you'd need to repeat the arduous sign in dance, waiting for that signature screech that tells you you're connected.

- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google using a computer in the library turning into a 2-5 minute ordeal.

But also, and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile and not knowing whether you'd be immediately deafened by loud, blaring background music with no visible way to stop it.


I worked at an ISP in 1999 and between 8-11 PM we would simply disconnect the longest connected user once the phone banks were full. Obviously we oversubscribed.


I wonder how universal these stages are. All I can say is that when I worked at a 15-person company, it was extremely clear to me that we needed more structure than "everyone reports to the CEO". We struggled to prioritize between different projects, milestones weren't clearly defined or owned, at times there would be long debates on product direction without a clear decision-maker, etc.

That's not to say the article is wrong, though. I think their advice to consider elevating a few engineers into informal tech leads is a great answer. We went with the path of hiring one dedicated "manager" of all engineers, and that worked pretty well too.


Depends team to team and founder to founder. I've seen early stage startups where most ICs were able to self manage, but others where some form of structure was needed. At the stage that you mentioned, it's natural for founders to end up hiring an Engineering Lead.

> consider elevating a few engineers into informal tech leads

It is potentially risky - I've seen plenty of talented engineers flounder because they were thrust into an ill-suited management role too soon, but I think if someone is motivated and eased into the role they tend to be superior to an outside hire.

