I don't think you can steal Bitcoin with a quantum computer, because the blockchain only stores the 256-bit hash of the public key; you'd need to reverse that hash, which still costs 2^128 operations with Grover's algorithm.
You’re right that P2PKH addresses use the hashed public key, but there are other address types.
The very early days of Bitcoin had addresses created using the now-deprecated P2PK variant—Pay To Public Key. These outputs are simply encoded secp256k1 public keys, with no hashing.
There are still more than 1.5 million BTC stored in P2PK UTXOs as of this post, all of which are up for grabs to the first person who can derive the private keys from the known public keys (a job for Shor's algorithm, not Grover's).
I don't understand this: isn't the thing in the article only relevant to software simulation? In hardware, ordering is arbitrary, as in Verilog, or at least dependent on wire lengths that aren't specified in the HDL (unless you delay the effect to the next clock update, which it seems to me would work the same in all HDLs and targets).
And afaik HDLs are used almost exclusively for hardware synthesis; I've never seen software written in those languages.
So it doesn't seem important at all. In fact, for software simulation of hardware you'd want the simulation to randomly choose anything possible in hardware, so the Verilog approach seems correct.
It's important to have deterministic simulations and semantics that you can reliably reason about. Both VHDL and SystemVerilog offer this to some extent, but in the case of (System)Verilog the order of value updates is not as strictly enforced. In practice, this means that if you switch to another or a newer simulator, suddenly your testbenches will fail. The simulator vendors love this of course. This hidden cost is underestimated.
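The kind of order-dependence in question can be shown with a toy scheduler. This is a hypothetical illustration, not real HDL semantics: two "processes" fire on the same trigger, one reads a value the other writes in the same evaluation cycle, and what the reader observes depends purely on the order the simulator happens to pick, which is exactly the freedom (System)Verilog leaves open.

```python
def run(order):
    """Evaluate two same-trigger processes in the given order ('AB' or 'BA')."""
    sig = {"a": 0, "seen": None}
    def proc_a():            # writer process: drives a in this cycle
        sig["a"] = 1
    def proc_b():            # reader process: samples a in the same cycle
        sig["seen"] = sig["a"]
    procs = {"A": proc_a, "B": proc_b}
    for name in order:       # the simulator's chosen evaluation order
        procs[name]()
    return sig["seen"]
```

Here `run("AB")` returns 1 and `run("BA")` returns 0: same design, two legal simulation results.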
No sane hardware engineer would want randomness in their simulation unless they get to control it.
You are definitely correct, and that is how I am solving all the puzzles as well. However, I've encountered many people that couldn't wrap their head around even the mirror across the fold line logic. For such people, the techniques I described help to come up with puzzles that feel "hard" for them. Thanks for giving it a try.
Avoiding this generally needs to be the main consideration when writing prompts.
When appropriate, explicitly tell it to challenge your beliefs and assumptions. A few other things that help:
- Try not to reveal what you think the answer is when asking a question, and maybe don't reveal that you are involved.
- Hedge your questions, e.g. "Doing X is being considered. Is it a viable plan or a catastrophic mistake? Why?"
- Chastise the LLM if it's unnecessarily praising or agreeable.
- Ask multiple LLMs.
- Ask for a review, e.g. "Are you sure? What could possibly go wrong, and what are all the possible issues with this?"
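As a rough sketch of the hedging idea (a hypothetical helper, not any particular API), a proposal can be wrapped so the model can't tell which answer the asker is hoping for:

```python
def hedge(proposal: str) -> str:
    """Rephrase a proposal neutrally, hiding the asker's stance and involvement."""
    return (
        f"{proposal} is being considered. "
        "Is it a viable plan or a serious mistake? "
        "Give the strongest arguments for each side before concluding."
    )

print(hedge("Migrating our monolith to microservices"))
```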
Needs to? Is there some new law mandating all landing pages must contain exclusively handwritten text that people haven’t heard of?
To your actual point: the people who would take the landing page being LLM-written as a negative tend to be able to evaluate the project on its actual merits, while another substantial portion of this tool's audience would actually take it (unfortunately, imo) as a positive signal.
Lastly, given the care taken with the docs, it's pretty likely that any real issues with the language have been caught and fixed.
No they don't. The text very clearly conveys what this project is about. Not everyone needs to cater to weirdos who are obsessed with policing how other people use LLMs.
The people who don't care about LLM slop being shoved down their throat at every turn are the "weirdos" here. The project might not be slop, but the website certainly is, and it's perfectly reasonable for people to stop reading immediately and decide that they don't care about what could be an otherwise useful project when they determine that the author didn't give enough of a shit to even write the text on the website themselves.
But there is an old-school README.md at the GitHub homepage: https://github.com/stanford-scs/jai
The repository has an old-school ASCII INSTALL file.
If you don't like the vitepress site, just use GitHub and read the human-written README and man page there. All the information you need to use the software is available without laying eyes on any AI slop. Of course, if you hate AI so much that you can't get past a vibe-coded landing page, you might not be the target audience for jai, because you probably aren't doing a lot of vibe coding. But maybe jai is still useful to you for grading programming assignments or running installer scripts.
Except that the "this was generated by an LLM" feeling you get from the front page would then make you automatically question whether the "decades of experience + stanford professor thing", as you put it, was true or just an LLM hallucination.
The author would, indeed, be wise to replace all the text on the front page with text he wrote himself.
Excellent point, though not everyone pays close enough attention to the domain shown in the browser (if they did, some of the more amateurish phishing attempts would fool a lot fewer people). But yes, anyone who notices the domain will have a clue to the truth.
I think the issue in the article can be further mitigated with a better algorithm for determining road elevation profiles, one that accounts for the fact that roads are usually placed to minimize vertical displacement.
One can start by assuming the world is tiled into squares whose corners lie on the elevation measurement grid, and that the elevation within each square lies between the minimum and maximum of its corner values.
Now a road can be split into curve segments such that each segment lies in exactly one square. The profile of the road can then be determined by guessing the altitude of the midpoint of each segment and interpolating.
The altitudes should be guessed to approximately minimize the total road length, and I think good, fast algorithms are easy to find.
For example, the midpoint altitudes can be assigned with a greedy/lazy approach: once one is determined, for each neighbor pick the closest valid altitude, until all are assigned. To start, pick the maximum n such that the first n segments have a non-empty intersection of altitude intervals, and assign all of them the endpoint of that intersection closest to the next interval (or the middle of the intersection if there is no next interval).
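A minimal sketch of that greedy assignment, assuming each segment's altitude bounds are given as a (lo, hi) interval (the names here are illustrative, not from any existing codebase):

```python
def assign_altitudes(intervals):
    """Greedily assign one altitude per road segment.

    intervals: list of (lo, hi) altitude bounds, one per curve segment.
    Keeps consecutive altitudes as close as possible, approximately
    minimizing the vertical component of the road length.
    """
    if not intervals:
        return []
    # Find the longest prefix whose intervals all intersect.
    lo, hi = intervals[0]
    n = 1
    for nlo, nhi in intervals[1:]:
        if max(lo, nlo) > min(hi, nhi):
            break
        lo, hi = max(lo, nlo), min(hi, nhi)
        n += 1
    if n == len(intervals):
        start = (lo + hi) / 2          # no next interval: take the middle
    else:
        nlo, nhi = intervals[n]
        start = hi if nlo > hi else lo  # endpoint nearest the next interval
    alts = [start] * n
    # Lazily follow: clamp the previous altitude into each next interval.
    prev = start
    for slo, shi in intervals[n:]:
        prev = min(max(prev, slo), shi)
        alts.append(prev)
    return alts
```

For instance, `assign_altitudes([(0, 10), (5, 15), (20, 30), (22, 40)])` gives `[10, 10, 20, 22]`: the first two segments sit at the top of their shared interval because the road has to climb next.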
Alternatively, it can be formulated as a constraint problem with linear constraints and an objective function that depends on the interpolation. If a weighted sum of absolute values is chosen, it's a linear program; otherwise the objective function will have higher degree.
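Spelled out, the linear-program variant might look like this (with hypothetical per-segment bounds $\ell_i \le h_i \le u_i$ taken from the squares, weights $w_i$, and auxiliary variables $t_i$ linearizing the absolute values):

```latex
\begin{aligned}
\min_{h,\,t}\quad & \sum_i w_i\, t_i \\
\text{s.t.}\quad  & -t_i \le h_{i+1} - h_i \le t_i
                  && \text{(so } t_i \ge |h_{i+1}-h_i| \text{ at the optimum)} \\
                  & \ell_i \le h_i \le u_i
                  && \text{(altitude bounds from each square)}
\end{aligned}
```

Penalizing squared differences instead would give a quadratic program, which matches the "higher degree" remark.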
It seems like it would be possible to implement this in userspace using shared memory to store the data structures and using just one eventfd per thread to park/unpark (or a futex if not waiting for anything else), which should be fully correct and have similar or faster performance, at the cost of not being secure or robust against process crashes (which isn't a big problem for most Wine usage).
It seems that neither esync nor fsync does this though - why?
Claude thinks that "nobody was motivated enough to write and debug the complex shared-memory waiter-list logic when simpler (if less correct) approaches worked for 95% of games, and when correctness finally mattered enough, the kernel was the more natural place to put it". Is that true?
> It seems like it would be possible to implement this in userspace using shared memory
It is not. Perhaps this should be possible, but Linux doesn't provide userspace facilities that would be necessary to do this entirely in userspace.
This is not merely an API shim that allows Windows binaries to dynamically link and run. It's an effort to recreate the behavior of NT kernel synchronization and waiting semantics. To do this, Linux kernel synchronization primitives and scheduler APIs must be used. You can read the code[1] and observe that this is a compatibility adapter that relies heavily on Linux kernel primitives and their coordination with the kernel scheduler. No approach using purely userspace synchronization primitives can do this both efficiently and accurately.
The code doesn't really seem to use any kernel functionality other than spinlocks/mutexes and waiting and waking up tasks.
That same code should be portable to userspace by:
- Allocating everything into shared memory, where the shared memory fd replaces the ntsync device fd
- Using an index into a global table of object pointers instead of object fds
- Using futex-based mutexes instead of kernel spinlocks
- Using a futex-based parking/unparking system like parking_lot does
Obviously this breaks if the shared memory is corrupted or if you SIGKILL any process while it's touching it, but for Wine that seems acceptable. A kernel driver is clearly better for this reason, though.
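For what it's worth, the parking/unparking part can be sketched in a single process. This is a hypothetical illustration of the structure only: threading.Event stands in for futex wait/wake, and a plain dict stands in for the global object table that would live in shared memory, so it does not demonstrate cross-process operation.

```python
import threading
from collections import defaultdict

_table_lock = threading.Lock()
_parked = defaultdict(list)   # object index -> Events of parked waiters

def park(obj_index, predicate):
    """Block until predicate() holds, re-checking after each wakeup."""
    ev = threading.Event()
    while True:
        with _table_lock:
            if predicate():
                return
            _parked[obj_index].append(ev)
        ev.wait()    # a real implementation would futex-wait here
        ev.clear()

def unpark_all(obj_index):
    """Wake everyone parked on obj_index (like FUTEX_WAKE with INT_MAX)."""
    with _table_lock:
        waiters = _parked.pop(obj_index, [])
    for ev in waiters:
        ev.set()
```

Re-checking the predicate under the table lock after every wakeup is what makes the missed-wakeup races benign, at the cost of spurious wakeups.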
People such as Figura and Bertazi have been attempting to do what you propose for most of a decade now[1]. They've ended up with this, after two previous implementations running in Wine for many years. Their reasons are explained in their documentation[2]. Perhaps you know better. We all look forward to your work.
I don't know the technical details, but the kernel docs say "It exists because implementation in user-space, using existing tools, cannot match Windows performance while offering accurate semantics."
https://docs.kernel.org/userspace-api/ntsync.html
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should"
It's very cool, but it would only be useful in some marginal cases, specifically if you don't want to modify the programs significantly and the reduced reliability is worth either the limited performance upside of avoiding mm switches or the ability to do shared memory somewhat more easily.
Generally this problem would be better solved in either of these ways:
1. Recompile the modules as shared libraries (or statically link them together) and run them with a custom host program. This has less memory waste and faster startup.
2. Have processes that share memory via explicit shared memory mechanisms. This is more reliable.
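A minimal single-process sketch of option 2, using Python's multiprocessing.shared_memory as the explicit mechanism (a second process would attach by name exactly the way the "consumer" does here):

```python
from multiprocessing import shared_memory

# "Producer" creates a named region; a cooperating process would open it
# with SharedMemory(name=..., create=False) instead of sharing an address space.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[0] = 42                                    # producer writes
    peer = shared_memory.SharedMemory(name=shm.name)   # consumer attaches
    value = peer.buf[0]                                # consumer reads
    peer.close()
finally:
    shm.close()
    shm.unlink()   # creator removes the region when done
```

The explicit create/attach/unlink lifecycle is exactly what makes this more reliable: a crashed participant can't scribble over anything outside the named region.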
Thanks! The idea of launching additional components nearly "natively" from the shell was compelling to me early on, but I agree that shared libraries with a more opinionated "host program" is probably a more practical approach.
Explicit shared memory regions are definitely the standard for this sort of problem if you want isolated address spaces. One area I want to explore further is allocators that are aware of explicit shared memory regions, and perhaps ensuring that the regions get mmap'd to the same virtual address in all participants.
I think that's worse than reinstalling, because there could be a non-persistent exploit in the secure element that allows a malicious OS to fake attestation.