Hacker News | halayli's comments

Reading your parent comment and the responses, I feel you may be missing the point others are trying to make. There's much less technology, fewer components, and less material in headphones than in laptops. The circuitry in headphones is closer in complexity to a charger than to a laptop.

The cost of something doesn't always correlate with the technology, components, and material. A Hermès bag doesn't have a single circuit in it, unlike headphones and laptops. Yet it costs more.

> people are willing to pay more for better sound, better noise cancelling

> The cost of something doesn't always correlate with the technology, components, and material

Doesn't this actually contradict what you claimed originally, though?


This is a very reasonable comment. IMO it's a fallacy to take the age of an account into consideration, especially when the topic is subjective experience.

That doesn't sound accurate. The T in TPM stands for trusted; the whole standard is about verifying and establishing trust between entities. The standard is designed with the assumption that anyone can bring in their scope and probe the ports. This is one of several reasons why the standard defines endorsement keys (EK).

Actually, it is completely true. The TPM threat model has historically focused on software-based threats and physical attacks against the TPM chip itself - crucially NOT the communications between the chip and the CPU. In the over 20-year history of discrete TPMs, they have been largely vulnerable to interposer (MITM) attacks, and only within the last few years have vendors begun addressing it. Endorsement keys don't matter because the TPM still has to trust the PCR commands sent to it by the CPU. An interposer can replace tampered PCR values with trusted values and the TPM would have no idea.

It is correct: the measurement commands to the TPM are not encrypted, so with a MITM you can record the boot measurements, then reset and replay up to any step of the boot process. Secrets sealed to particular stages of boot are then exposed.

There is guidance on "active" attacks [1], which is to set up your TPM secrets so they additionally require a signature from a secret stored securely on the CPU. But that only addresses secret storage and does nothing about the compromised measurements. I also don't know what could provide the CPU-side secret on x86 processors besides... an embedded/firmware TPM.

[1] https://trustedcomputinggroup.org/wp-content/uploads/TCG_-CP...


This is not a special case. Everything you mentioned above can actually be achieved using the CLI. You can create listeners, configure pipelines, and sinks (granted, not ergonomically). Sinks can be HTTP POST, for example, and sources can be TCP listeners with protocols on top. You can also configure the buffering strategy for each pipeline.

I feel HN comments have been getting hijacked for a long time now by LLM agents. Always so early, very positive, and hard to spot. Some replace em dashes with --, some replace them with a single dash, some remove them altogether. I wonder how much time it takes from @dang and the other moderators helping to maintain this community.

Can you mention some specific examples? If you don't want to post them here, emailing hn@ycombinator.com would be good.

We recently promoted the no-generated-comments rule from case law [1] to the site guidelines [2], and we're banning accounts that break it.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[2] https://news.ycombinator.com/newsguidelines.html#generated


I will start sharing them moving forward. I've had several parent threads deleted in the past (rightfully so) because it turned out I was responding to AI. Thank you for staying on top of this.

The performance observation is real but the two approaches are not equivalent, and the article doesn't mention what you're actually trading away, which is the part that matters.

The C++11 thread-safety guarantee on static initialization is explicitly scoped to block-local statics. That's not an implementation detail; that's the guarantee.

The __cxa_guard_acquire/release machinery in the assembly is the standard fulfilling that contract. Move to a private static data member and you're outside that guarantee entirely. You've quietly handed that responsibility back to yourself.

Then there's the static initialization order fiasco, which is the whole reason the Meyers singleton with a local static became canonical. A block-local static initializes on first use: lazily, deterministically, thread-safely. A static data member initializes at startup in an order that is unspecified across translation units. If anything touches Instance() during its own static initialization from a different TU, you're in UB territory. The article doesn't mention this.

Real-world singleton designs also need deferred/configuration-driven initialization, optional instantiation, state recycling, and controlled teardown. A block-local static keeps those doors open. A static data member initializes unconditionally at startup: you've lost lazy init, you've lost the option to not initialize at all, and configuration-based instantiation becomes awkward by design.

Honestly, if you're bottlenecking on singleton access, that's design smell worth addressing, not the guard variable.


> Honestly, if you're bottlenecking on singleton access, that's design smell worth addressing, not the guard variable.

There's a large group of engineers who are totally unaware of Amdahl's law, and they are consequently obsessed with the performance implications of what are usually the least important parts of the codebase.

I learned that being in the opposite group became (or maybe has always been) somewhat unpopular, because it breaks many of the myths we have been taught for years, myths on top of which many people have built their careers. This article may or may not be an example of that. I'm not reading too much into it, but profiling and identifying the actual bottlenecks seems to be a scarce skill nowadays.


You leveled up past a point a surprising number of people get stuck on, essentially.

I feel like the mindset you are describing is kind of an intermediate senior level. Sadly, a lot of programmers can get stuck there for their whole career. Even worse when they get promoted to staff/principal level and start spreading dogma.

I 100 percent agree. If you can't show me a real-world performance difference, you are just spinning your wheels and wasting time.


Yes, I agree, and my experience is the same - there are just too many folks getting stuck in that mindset and never leaving it. Looking at its history, I think the software engineering domain has a lot of cargo cults, which is somewhat surprising given that people who are naturally attracted to it are supposed to be critical thinkers. It turns out that may not be true most of the time. I know I also ran afoul of that, but I learned my lesson.

On the flip side, it's easy to get a bit stuck down the road by the mere fact that you have a singleton. Maybe you have amazing performance and very carefully managed safety, but you still have a single object that is inherently shared by all users in the same process, and it's very, very easy to end up regretting the semantic results. Been there, done that.

Worse, while shipping Electron crap is the other extreme, not everything needs to be written to fit into 64 KB or a 16 ms rendering frame.

Many times taking a few extra ms, or God forbid 1s, is more than acceptable when there are humans in the loop.


Agreed. Strong emphasis on "profiling and identifying the actual bottleneck". Every benchmark will show a nested stack of performance offenders, but a solid interpretation requires a much deeper understanding of systems in general. My biggest aha moment years ago was realizing that removing the function I was trying to optimize would still produce a benchmark output showing top offenders. Without going into too many details, that minor perspective shift ended up paying dividends, as it helped me rebuild my understanding of what benchmarks actually tell us.

Yeah ... and so it happens that this particular function in the profile is just a symptom - a single observed data point of system behavior under a given workload - and not the root cause. Say a load instruction is burning 90% of the CPU cycles waiting on data from memory; the profile then gives you the wrong clue about which code is actually creating that memory bus contention.

I have to say that until I gained a pretty good understanding of CPU internals, the memory subsystem, the kernel, and the hardware in general, reading perf profiles was just a fun exercise that gave me almost no meaningful results.


Totally. I always found joy in solving critical performance problems because it naturally paves a path to peel back the layers and untangle the system interactions, which feeds my curiosity and is highly rewarding.

> Then there's the static initialization order fiasco

One of the reasons I hate constructors and destructors.

Explicit init()/deinit() functions are much better.


The fact that he calls the generated code good/bad without discussing the semantic differences suggests the original author doesn't really know what he is talking about. That seems problematic to me, as he is selling a C++ online course.

[dead]


Yes, definitely not dismissing the lock overhead, but I wanted to bring attention to the implicit false equivalence made in the post. That said, I'm surprised the lock check was showing up and not the logging/formatting functions.

[flagged]


A real human. Threads can exist before main() starts. For example, you can include another TU which happens to launch a thread and call Instance(). Singletons used to be a headache before C++11, and it was common (maybe it still is) to see macros in projects that expand to a singleton class definition to avoid common pitfalls.

In fact, Windows 10+ now uses a thread pool during process init well before main is reached.

https://web.archive.org/web/20200920132133/https://blogs.bla...


It's a bit contrived, but a global with a nontrivial constructor can spawn a thread that uses another global, and without synchronization the thread can see an uninitialized or partially initialized value.

No offense, but it looks like the reason behind the OAuth confusion is the author. I had to read halfway through to get to a definition, and it was a poor one. Sometimes certain topics are difficult to understand because the person who originally introduced them wasn't good at communicating the information.


What about using claude -p as an API interface?


This is the definition of the motivated reasoning fallacy. You want to believe what you want to believe.


My naive take is that we discovered it as a math tool first, but later rediscovered it in nature when we discovered the electromagnetic field.

The electromagnetic field is naturally a single complex-valued object (the Riemann-Silberstein vector, F = E + icB), and of course Maxwell's equations collapse into a single equation for this complex field. The symmetry group of electromagnetism, and more specifically the duality rotation between E and B, is U(1), which is also the unit circle in the complex plane.
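For concreteness, here's the collapse written out for free-space Maxwell in SI units (this is the standard textbook form):

```latex
% With F = E + icB, the two divergence equations combine:
%   div E = 0,  div B = 0        =>  div F = 0
% and the two curl equations combine:
%   curl E = -dB/dt,  curl B = (1/c^2) dE/dt
\nabla \cdot \mathbf{F} = 0, \qquad
i\,\frac{\partial \mathbf{F}}{\partial t} = c\, \nabla \times \mathbf{F},
\qquad
\mathbf{F} \mapsto e^{i\theta}\,\mathbf{F}
\quad \text{(duality rotation, the } U(1) \text{ symmetry)}
```

The last map is the E/B duality rotation: multiplying F by a unit complex number rotates E into cB and back, leaving the equations invariant.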

