supriyo-biswas's comments

I don’t get the point of writing another geocoder when such programs already exist, e.g. https://pelias.io/

Most or all existing solutions are universal (not just reverse geocoding) and rely on a database. The purpose of this project is to make one thing super fast. The result is 100x–1000x the speed of Pelias and other universal tools like that.
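A minimal sketch of why a dedicated in-memory reverse geocoder can be so much faster than a database-backed universal one: every query is a single nearest-neighbour lookup over points loaded once into RAM, with no network round-trip or SQL layer. The sample cities and the plain linear scan below are my own illustration, not the project's actual index:

```python
import math

# Tiny made-up dataset; a real geocoder would load millions of points
# once at startup into a spatial index such as a k-d tree.
CITIES = {
    "Paris": (48.8566, 2.3522),
    "Berlin": (52.5200, 13.4050),
    "Madrid": (40.4168, -3.7038),
}

def reverse_geocode(lat, lon):
    # Euclidean distance on raw lat/lon is a rough proxy; a serious
    # implementation would use haversine or projected coordinates.
    return min(CITIES, key=lambda c: math.dist(CITIES[c], (lat, lon)))

print(reverse_geocode(48.85, 2.35))  # prints Paris
```

Even this naive O(n) scan avoids the per-query overhead that dominates a database-backed service; swapping in a k-d tree makes each lookup O(log n).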

I don't get the point in making other types of food when pizza already exists.

Eh, if it turns out to be too bad I guess I’ll just end up switching back to pipenv, which is the closest thing to uv (especially due to the automatic Python version management, but not as fast).

I would much rather use pipenv, if only it had the speed of uv.

Every interface Kenneth Reitz originally designed was fantastic to learn and use. I wish the influx of all these non-Pythonistas changing the language over the last 10 years or so would go back and learn from his work.


Pipenv is a pile of shite

so’s your face

Does pipenv download and install prebuilt interpreters when managing Python versions? Last I used it, it relied on pyenv to do a local build, which is incredibly finicky on heterogeneous fleets of computers.

Couldn't people just make pipenv fast? There are some new tools that could help with that..

I wish people would move on from this mindset. "Agentic" workflows such as those implemented by Claude Code or Cursor can definitely reason about code, and I've used them to successfully debug small issues occurring in my codebase.

We could argue that they only "predict the next word", but there's also other stuff going on in the other layers of their NNs that facilitates some sort of reasoning in the latent space.


I'll concede that you made a valid point:

> I've used them to successfully debug small issues occurring in my codebase.

Great! The pattern-recognition machine successfully identified a pattern.

But how do you know it won't still flag the repaired pattern after you've added a guard to prevent the behaviour (i.e. an invalid/out-of-bounds memory access guarded by a heavy assert on a sized object before even entering the function itself)?
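As a hypothetical illustration of that kind of guard (the names and code are mine, not from any real audit): the size check lives at the call boundary, so the suspicious-looking access inside the function is unreachable with bad input, yet a pure pattern-matcher may still flag it.

```python
def process(buf, i):
    # In isolation this looks like a classic out-of-bounds read
    # to anything matching on surface patterns...
    return buf[i] * 2

def safe_process(buf, i):
    # ...but this assert guards the size before process() is ever
    # entered, so the "bad" access cannot actually happen.
    assert 0 <= i < len(buf), "index out of bounds"
    return process(buf, i)

print(safe_process([1, 2, 3], 2))  # prints 6
```

Whether a tool flags `process` here depends on whether it can follow the interprocedural invariant, not on whether the pattern is in its training data.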

What about patterns that aren't in the training data because humans have a hard time identifying the bad pattern reliably?

The point I'm making is that it's autocomplete: if your case is well covered, it will show up whether you have guards or not (so: noise), and it will totally miss anything humans haven't identified before.

Does it work? Absolutely, but there's no reliability, and that's sort of inherent in the design.

For security auditing specifically, an unreliable tool isn't just unhelpful: it's actively dangerous, because false confidence is worse than understood ignorance.


Thank you for your contribution. Unfortunately I do not have sufficient expertise in LLM engineering to provide a useful comment, but this is the sort of research I'd like to see here instead of LLM-driven unemployment hype.

It seems that your entire profile is LLM generated comments, would appreciate it if you'd stop. Thanks.

I think you're too concerned with how others speak to believe that someone can actually write good, well-written comments. Obviously, in the age of LLMs, we might use one or two things to produce a better response. But that doesn't mean I don't think about what I wrote. Especially since English isn't my native language. Anyway, don't worry about me!

I’d really appreciate if you would avoid posting LLM generated comments here. Thanks.

Engineering Manager (as opposed to people who stick to programming, who are called Individual Contributors.)

Oh, how I hate these horrible job descriptions.

But thanks for the info.


Reading the article would have shown which one it is, “The agency is floating a set of rules that require companies to offer U.S.-based representatives”, where “representatives” refers to people.


> I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda.

I mean, tech in general has been covered negatively in the media since 2015, due to latent agendas: (a) supposed revenue loss caused by the existence of Google/FB etc., and (b) a desire to push neutral moderation stances toward whichever viewpoint suits the political party in question.

There is a solution, however: anyone hoping to roleplay with models submits an identity verification, an escrow amount, and a recorded statement acknowledging their risky use of the model. But I assume the market for this is not insignificant, and therefore companies hope to avoid such requirements. OpenAI has been moving in that direction, as seen during the 4o debacle.


But how would your solution have helped in this case?

The guy was probably a paying user, so Google would have already known who he is. He's also 36, so there's no excluding him based on age. And neither the escrow nor the statement really adds much, in my view.


If you have a good stdlib (which in my case would mean something like Java, for its extensive data structures), tradcoding is entirely possible.

