grogers's comments

Well, they also shadowed production traffic and fixed some bugs that were causing mismatched results. I'm not saying that stuff can't still slip through, but it's a good way to evaluate it against real data in a way you can't with test cases alone.

Parallel execution that auto-generates test cases from exceptions is very slick. That said, you still need humans in the loop, as sometimes the oracle is not THE oracle.

In this incarnation, the only one who "wants" red to win is the first column. Every other column will choose whatever color it wants to win, subject to the rules of the game.

It's a 2 step process:

1. Prepare - Collect promises from a majority of rows such that each of them promises not to accept any color sent by columns to the left of the proposing column. Any colors that were already accepted are sent in reply, along with the column they were accepted in (if different colors were accepted, only the rightmost column for a given row matters). The color to be propagated by the proposer is the one with the rightmost column number (or, if none were accepted by that particular majority, anything may be selected).

2. Accept - For every row, set the color to the one chosen in step 1 for the proposing column, subject to the promises made in step 1.

In this case, it's not shown well in the diagram, but by having a majority of rows promise for column 2, column 1's majority is already broken. Even if column 4 wanted red, since it received some already-accepted colors, it has to choose among them based on the rightmost column (blue in this case).
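To make the two phases concrete, here's a rough sketch in classic single-decree Paxos terms, where each "row" is an acceptor and each "column" a proposal number. (The names `Acceptor` and `propose` and the structure are my own illustration, not from the diagram.)

```python
class Acceptor:
    def __init__(self):
        self.promised = -1    # highest column this row has promised
        self.accepted = None  # (column, color) this row last accepted, or None

    def prepare(self, col):
        # Phase 1: promise to ignore columns to the left of `col`,
        # replying with whatever was already accepted.
        if col > self.promised:
            self.promised = col
            return True, self.accepted
        return False, None

    def accept(self, col, color):
        # Phase 2: set the color, subject to the promises made in phase 1.
        if col >= self.promised:
            self.promised = col
            self.accepted = (col, color)
            return True
        return False

def propose(rows, col, preferred_color):
    replies = [row.prepare(col) for row in rows]
    granted = [acc for ok, acc in replies if ok]
    if len(granted) <= len(rows) // 2:
        return None  # broken majority (e.g. column 1 after column 2 prepared)
    # If any row in the majority already accepted a color, we must carry
    # forward the one from the rightmost column; otherwise pick freely.
    prior = [acc for acc in granted if acc is not None]
    color = max(prior)[1] if prior else preferred_color
    votes = sum(row.accept(col, color) for row in rows)
    return color if votes > len(rows) // 2 else None
```

So after a majority accepts green in column 2, a later column 4 proposing red is forced to carry green forward, and column 1 can no longer assemble a majority at all.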


The final diagram is a bit confusing, so it's worth pointing out one additional thing. It appears that R5 could vote green in column 2 and have green be agreed by a majority, even though in column 4, we are committing to blue as the value. However, as part of allowing blue to be selected in column 3, R5 must have already promised NOT to accept green in column 2. A more complete final diagram would show X's in all the appropriate cells.

Well it is hedged with the word "fancy". I think a charitable reading is to understand the problem domain. If N is always small then trying to minimize the big-O is just showing off and likely counterproductive in many ways. If N is large, it might be a requirement.

Most people don't need an FFT-based algorithm for multiplying large numbers; Karatsuba's algorithm is fine. But in some domains the difference does matter.

Personally I usually see the opposite effect - people first reach for a too-naive approach and implement some O(n^2) algorithm where it wouldn't have even been more complex to implement something O(n) or O(n log n). And n is almost always small so it works fine, until it blows up spectacularly.
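A hypothetical illustration of the pattern (order-preserving dedup is just a stand-in example): the linear version is no more code than the quadratic one.

```python
def dedup_quadratic(items):
    out = []
    for x in items:
        if x not in out:   # list membership check is O(n) -> O(n^2) overall
            out.append(x)
    return out

def dedup_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # set membership check is O(1) amortized -> O(n) overall
            seen.add(x)
            out.append(x)
    return out
```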


> Personally I usually see the opposite effect - people first reach for a too-naive approach and implement some O(n^2) algorithm where it wouldn't have even been more complex to implement something O(n) or O(n log n). And n is almost always small so it works fine, until it blows up spectacularly.

Same. People solve in ways that are very obviously going to cause serious problems in only a few short weeks or months and it’s endlessly frustrating. If you’re building a prototype, fine, but if you’re building for production, very far from fine.

Most frustrating because often there’s next to no cost in selecting and implementing the correct architecture, domain model, data structure, or algorithm up front.


Even treating the process as read-only after forking is potentially fraught. What if a background thread is mutating some data structure? When the process forks, the data structure might be internally inconsistent because the work to finish the mutation may not have completed. Imagine locks held by various threads at the moment of the fork: those threads don't exist in the child, so trying to take those locks in the child might deadlock, or worse. There are tons of these types of gotchas.
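A hedged sketch of one mitigation for the lock case, using CPython's `os.register_at_fork` to quiesce a lock around fork() so the child never inherits it mid-held (POSIX-only; an illustration, not a complete fix, and it doesn't help with the half-finished-mutation problem in general):

```python
import os
import threading

lock = threading.Lock()

# Quiesce the lock around fork(): the parent takes it just before forking,
# so no background thread can hold it at the instant the child is created.
os.register_at_fork(
    before=lock.acquire,
    after_in_parent=lock.release,
    after_in_child=lock.release,   # child starts with the lock free
)

def worker():
    # Background thread constantly taking the lock (mutating state, etc.)
    for _ in range(10_000):
        with lock:
            pass

threading.Thread(target=worker, daemon=True).start()

pid = os.fork()
if pid == 0:
    # Child: only this thread exists. Without the handlers above, this
    # acquire could deadlock if the worker held the lock at fork time.
    with lock:
        pass
    os._exit(0)
else:
    _, status = os.waitpid(pid, 0)
    child_ok = os.waitstatus_to_exitcode(status) == 0
```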


Okay so just all the usual threading gotchas. Nothing specific to Python.

Conceptually, fork "just" noncooperatively preempts and kills all other threads. Use accordingly. Yes, it's a giant footgun, but then so is all low-level "unmanaged" concurrency.


It's not your main point, but I can't help but point out that artificial diamonds ARE diamonds. Cubic zirconia is a different mineral. Usually the distinction is "natural" vs "lab grown" diamonds.

When computers have super-human intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it comes from a machine or an organism. LLMs might not get us there, but some machine eventually will.


I agree, but as a nit, the industry uses "earth-mined" instead of "natural", presumably because it's more precise (and maybe less normative?).


Mined should be 'hand-picked' and lab-made could be 'hand-crafted'.


Well, unless intellect is immaterial.


Take a homily written by someone 2000 miles away and it will likely feel just as relevant to me. Most humans deal with similar issues.


If I'm not mistaken, all the pitfalls in the article have clang-tidy lints to catch them.


> You must implement a move constructor or a move assignment operator in order for std::move to do anything

Bit of a nitpick, but there are sometimes other functions with rvalue-reference overloads that move the contents out - think something like std::optional's `value() &&`. And you don't necessarily need to implement the move constructor/assignment yourself; typically the compiler-generated ones are what you want (i.e. the rule of 5 or the rule of 0).


> so clearly an LLM that does math well only does so by ignoring the majority of the space it is trained on

There are probably good reasons why LLMs are not the "ultimate solution", but this argument seems wrong. Humans have to ignore the majority of their "training dataset" in tons of situations, and we seem to do it just fine.


It isn't wrong. Just think about how weights are updated via (mini-)batches and how tokenization works, and you'll see that LLMs can't ignore poisoning/outliers the way humans do. A classic recent example is https://arxiv.org/abs/2510.07192; IMO this happens because the standard (non-robust) loss functions allow for anchor points.

