> So the author's point is that "other" can never appear in-between "parent before" and "child start".
But isn't that true for JavaScript too? So I don't really get the author's point... am I missing something, or did the author (or the author's LLM?) force a moot comparison to JavaScript?
Edit: after reading the examples twice I am 99.9% sure it's slop and flagged it.
In JS, there are microtasks and macrotasks. setTimeout creates macrotasks. `.then` (and therefore `await`) creates microtasks.
Microtasks get executed BEFORE macrotasks, but they still get executed AFTER the current call stack is completed.
From OP (and better illustrated by GP's example), Python's surprise is that `await` just runs the awaited coroutine on the current call stack. So `await` doesn't guarantee anything goes into a task queue (micro or macro) in Python.
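A minimal sketch of that claim (plain asyncio; the names are hypothetical, not from the article): a task scheduled *before* the `await` still can't run during it, because the awaited coroutine executes on the caller's own call stack:

```python
import asyncio

order = []

async def child():
    order.append("child start")

async def other():
    order.append("other")

async def parent():
    asyncio.create_task(other())   # scheduled, but can't run until we yield
    order.append("parent before")
    await child()                  # runs child inline; no yield to the loop
    order.append("parent after")
    await asyncio.sleep(0)         # a real yield point: now "other" runs

asyncio.run(parent())
print(order)
# -> ['parent before', 'child start', 'parent after', 'other']
```

Note that "other" only gets a chance at the explicit `asyncio.sleep(0)`, not at the `await child()`.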
That doesn't make sense. That would mean the awaiting function doesn't have access to the result of the Promise (since it can proceed before the Promise is fulfilled), which would break the entire point of promises.
Yep, it's more slop. We're getting these about daily now: articles with lots of comments that are clearly slop.
Half the article is paragraph headings, the other half is bullet points or numbered lists. If there was anything interesting in the prompt, it's been erased by an LLM that turned it into an infodump with no perspective and nothing to convey, and I have no way to tell what, if anything, might have been important to the author (besides blog clicks and maybe the title).
I really wish we could start recognizing these sooner. I think too many people skim and then go to the comments section, but I don't think we want HN to become a place filled with low-value articles just because they're good jumping-off points for comments.
I've been flagging them here and then heading over to kagi and marking as slop there. Makes me wish we had something similar here rather than just "flag".
And I know we aren't supposed to comment when we flag, but this feels different to me, like we've got to collectively learn to notice this better or we need better tools.
They are async across operations that do 'yield', i.e. when the function eventually runs an i/o operation or sleep or similar. Those are the points where the functions can be interleaved. Simply awaiting another function is _not_ one of those points: await here only means the called function might yield to the scheduler at some point in its execution (it doesn't have to!), not that the calling function will yield immediately.
Tasks are async funcs that have been spawned with asyncio.create_task or similar, which then schedules their execution. A timer of zero doesn't spawn anything, so the coroutine just executes in the same frame as the caller, so yes, it's essentially a no-op.
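A quick sketch of that distinction (hypothetical example, stdlib asyncio only): awaiting a bare coroutine runs it to completion in the caller's frame, while coroutines spawned as tasks interleave at their yield points:

```python
import asyncio

log = []

async def work(tag):
    log.append(f"{tag} start")
    await asyncio.sleep(0)    # a real yield point
    log.append(f"{tag} end")

async def main():
    # Awaiting the bare coroutine: it runs to completion before we continue.
    await work("direct")
    # Spawning tasks first: they interleave at their sleep(0) yield points.
    t1 = asyncio.create_task(work("a"))
    t2 = asyncio.create_task(work("b"))
    await t1
    await t2

asyncio.run(main())
print(log)
# -> ['direct start', 'direct end', 'a start', 'b start', 'a end', 'b end']
```

"direct" never interleaves with anything because nothing else was scheduled; "a" and "b" swap at each yield point.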
> To find the most informative examples, we separately cluster examples labeled clickbait and examples labeled benign, which yields some overlapping clusters
How can you get overlapping clusters if the two sets of labelled examples are disjoint?
The information you're seeking appears to be left out of the post. My best guess is that a separate embedding model, specifically tuned for document similarity, is used to generate the vectors, and then a clustering algorithm is chosen to create the clusters. They may also use PCA to reduce the embedding dimensions before clustering.
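For what it's worth, that guessed pipeline (embed, reduce, cluster) is only a few lines with scikit-learn. This is purely illustrative: the post names no model, so the embeddings below are random stand-ins:

```python
# Hypothetical sketch of the guessed pipeline: embed -> PCA -> cluster.
# The "embeddings" are random vectors standing in for a real embedding model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 384))   # 200 docs, 384-dim vectors

reduced = PCA(n_components=20).fit_transform(embeddings)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
print(labels.shape)
```

With real document embeddings, each cluster would group topically similar examples, which is presumably what "most informative examples" is drawing from.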
> How can you get overlapping clusters if the two sets of labelled examples are disjoint?
What's disjoint are the training labels and the classifier's output, not the values in the high-dimensional embedding space. For classification tasks, there can be neighboring items in the same cluster but separated by the hyperplane, and therefore placed in different classes despite the proximity.
If the diagram is representative of what is happening, it would seem that each cluster is represented as a hypersphere, possibly using the cluster centroid and max distance from the centroid to any cluster member as radius. Those hyperspheres can then overlap. Not sure if that is what is actually happening though.
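That reading can be sketched in a few lines (a hypothetical illustration of the centroid-plus-max-distance idea, not the post's actual method): two clusters "overlap" when the distance between centroids is less than the sum of their radii:

```python
import math

def centroid(points):
    dims = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dims))

def radius(points, c):
    # max distance from the centroid to any cluster member
    return max(math.dist(p, c) for p in points)

def spheres_overlap(points_a, points_b):
    ca, cb = centroid(points_a), centroid(points_b)
    return math.dist(ca, cb) < radius(points_a, ca) + radius(points_b, cb)

# Toy 2D "clusters": disjoint label sets whose bounding spheres still overlap.
clickbait = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
benign    = [(0.8, 0.5), (1.8, 0.5)]

print(spheres_overlap(clickbait, benign))   # True
```

The point sets share no members, yet their hyperspheres intersect, which would produce exactly the "overlapping clusters" the post mentions.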
They only did that for image generation. The more interesting part is that an LLM can approach or find the correct caption for an image, video or audio during test time with no training using only the score as a guide. It's essentially working blind almost like the game Marco Polo where the scorer is saying "warmer" or "colder" while the LLM is finding its way towards the goal. This is an example of emergent capabilities since there are no examples of this in the training data.
Actually, it's the name of the paper. And while the team also developed and released a system to elicit the behavior by doing what you described, it's entirely possible that the researchers thought the title to be the most important finding in their work.
In many cases the build output also has hardcoded paths unfortunately
so doing `brew install` inside a container with the proper volumes is not sufficient to fix the issue. Everything would have to run from within the container as well.
“Fill in the gaps by using context” is the hard part.
You can’t pre-bake the context into an LLM because it doesn’t exist yet. It gets created through the endless back-and-forth between programmers, designers, users etc.
But the end result should be a fully-specced design document. That might theoretically be recoverable from a complete program given a sufficiently powerful transformer.
Peter Naur would disagree with you. From "Programming as Theory Building":
> A very important consequence of the Theory Building View is that program revival, that is reestablishing the theory of a program merely from the documentation, is strictly impossible. Lest this consequence may seem unreasonable it may be noted that the need for revival of an entirely dead program probably will rarely arise, since it is hardly conceivable that the revival would be assigned to new programmers without at least some knowledge of the theory had by the original team. Even so the Theory Building View suggests strongly that program revival should only be attempted in exceptional situations and with full awareness that it is at best costly, and may lead to a revived theory that differs from the one originally had by the program authors and so may contain discrepancies with the program text.
The definition of theory used in the article:
> a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern.
And the main point on how this relates to programming:
- 1. The programmer having the theory of the program can explain how the solution relates to the affairs of the world that it helps to handle. Such an explanation will have to be concerned with the manner in which the affairs of the world, both in their overall characteristics and their details, are, in some sense, mapped into the program text and into any additional documentation.
- 2. The programmer having the theory of the program can explain why each part of the program is what it is, in other words is able to support the actual program text with a justification of some sort. The final basis of the justification is and must always remain the programmer's direct, intuitive knowledge or estimate.
- 3. The programmer having the theory of the program is able to respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner. Designing how a modification is best incorporated into an established program depends on the perception of the similarity of the new demand with the operational facilities already built into the program. The kind of similarity that has to be perceived is one between aspects of the world.
From my understanding, the big bang requires that the proto-universe was in a completely homogeneous state that was then pushed out of that equilibrium for some reason. But that reason doesn't require non-zero angular momentum. It only requires that the proto-universe was homogeneous and now the universe isn't. And that is what separates pre and post big bang. I could be wrong, I am not a cosmologist. Would be happy to hear from one though.
What causes a perfectly symmetric ball on top of a perfectly symmetric hill to roll down via one side? (Probably quantum randomness if everything else is perfectly symmetric)
If the base models already have the “reasoning” capability, as they claim, then it’s not surprising that they were able to get to SOTA using a relatively negligible amount of compute for RL fine-tuning.
I love this sort of “anti-hype” research. We need more of it.
I think this is a subtler point than it seems on first read, and it's muddled by the poorly chosen examples.
Here's a better illustration:
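The code for the illustration appears to have been lost in formatting, so here's a hypothetical reconstruction of the contrast the thread is discussing: with a direct `await`, "other" can never land between "parent before" and "child start"; wrap the child in a task and it can:

```python
import asyncio

log = []

async def other():
    log.append("other")

async def child():
    log.append("child start")

async def direct_await():
    asyncio.create_task(other())
    log.append("parent before")
    await child()              # runs child inline; "other" can't sneak in
    await asyncio.sleep(0)     # let the pending "other" task finish

async def task_await():
    asyncio.create_task(other())
    log.append("parent before")
    await asyncio.create_task(child())   # yields to the loop before child runs

asyncio.run(direct_await())
first = list(log)
log.clear()
asyncio.run(task_await())
second = list(log)

print(first)    # ['parent before', 'child start', 'other']
print(second)   # ['parent before', 'other', 'child start']
```

In the first run, `await child()` stays on the caller's stack, so the already-scheduled "other" task has to wait. In the second, awaiting a freshly created task suspends the parent, and "other" (queued first) runs before the child does.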