"now at OpenAI" were my original words - they did the equivalent of an acqui-hire and "protected" OpenClaw in a foundation.
In the context of the seemingly aggressive machinations of Anthropic, your hair-splitting, without clarifying beyond "OpenAI didn't buy OpenClaw", seems itself misleading and rather counter to helping the conversation progress.
I worked at a bank, on the backend doing architecture and security. I've posted this attestation here before, but the sheer volume of fraud and fraud attempts across the whole network is astonishing. Our device fingerprinting and no-jailbreak rules weren't even close to an attempt at control. It was defense, driven by network volume and hard losses.
Should we ever suffer a significant loss of customer identity data and/or funds, that risk was considered an existential threat for our customers and our institution.
I'm not coming to Google's defense, but fraud is a big, heavy, violent force in critical infrastructure.
And our phones are a compelling surface area for attacks and identity thefts.
I wish we had technical solutions that offered both. For example, a kernel like SeL4, which could directly run sandboxed applications, like banking apps. Apps run in this way could prove they are running in a sandbox.
Then also allow the kernel to run linux as a process, and run whatever you like there, however you want.
It's technically possible at the device level. The hard part seems to be UX. Do you show trusted and untrusted apps alongside one another? How do you teach users the difference?
My piano teacher was recently scammed. The attackers took all the money in her bank account. As far as I could tell, they did it by convincing her to install some Android app on her phone and then grant that app accessibility permissions. That let the app remotely control other apps. Then they simply swapped over to her banking app and transferred all the money out. It's tricky, because obviously we want 3rd party accessibility applications. But if those permissions allow applications to escape their sandbox, that's trouble.
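A toy model (plain Python, not real Android code; all the class and permission names are made up for illustration) of why that accessibility grant is so dangerous: each app normally only holds its own permissions, but an app that can drive other apps' UIs effectively inherits every permission on the device.

```python
# Toy model of per-app sandboxing vs. an "accessibility" grant.
# Hypothetical names throughout -- this is not the Android API.

class App:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class Device:
    def __init__(self):
        self.apps = {}
        self.accessibility_services = set()

    def install(self, app):
        self.apps[app.name] = app

    def grant_accessibility(self, app_name):
        # The step the scammers talked the victim through.
        self.accessibility_services.add(app_name)

    def effective_capabilities(self, app_name):
        caps = set(self.apps[app_name].capabilities)
        if app_name in self.accessibility_services:
            # An accessibility service can script any other app's UI,
            # so it can do anything any installed app can do.
            for other in self.apps.values():
                caps |= other.capabilities
        return caps

device = Device()
device.install(App("banking", {"transfer_funds"}))
device.install(App("scam_helper", {"draw_overlays"}))

before = device.effective_capabilities("scam_helper")
device.grant_accessibility("scam_helper")
after = device.effective_capabilities("scam_helper")
```

The point of the sketch: the sandbox boundary is only as strong as the weakest cross-app permission, and "accessibility" is deliberately a hole through all of them.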
(She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)
> (She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)
And this almost certainly means that the bank took a fraud-related monetary loss, because the regulatory framework that governs banks makes it difficult for them to refuse to return their customer's money on the grounds that it was actually your piano teacher's fault for being stupid with her bank app on her smartphone (also, even if it were legal to do so, doing this regularly would create a lot of bad press for the bank). And they're unlikely to recover the losses from the actual scammers.
Fraud losses are something that banks track internally and attempt to minimize when possible, so long as doing so doesn't trade off against other goals they have, such as maintaining regulatory compliance, or cost more money than the fraud does. This means that banks - really, any regulated financial institution at all that has a smartphone app - have a financial incentive to encourage Apple and Google to build functionality into their mass-market smartphone OSs that locks them down and makes it harder for attackers to scam ordinary, unsophisticated customers in this way. They have zero incentive to lobby to make smartphone platforms more open. And there are a lot more technically-unsophisticated users like your piano teacher than there are free-software enthusiasts who care about their smartphone OS provider not locking down the OS.
I think this is a bad thing, but then I'm personally a free-software-enthusiast, not a technically-unsophisticated smartphone user.
That's the cost of business for the bank using an app. If they don't like it, they can try a different business model, like payment cards. The cost of having an app should be borne by the bank who decided all its customers would have to have an app.
> And this almost certainly means that the bank took a fraud-related monetary loss, because the regulatory framework that governs banks makes it difficult for them to refuse to return their customer's money on the grounds that it was actually your piano teacher's fault for being stupid with her bank app on her smartphone
In which country? This happened in Australia. The rules are almost certainly different from the US.
For me the answer is separate devices. I have an iPhone which is locked down and secure. I have my banking and ID apps on it, but I can't mod it however I want. Then I have a Steam Deck and a Raspberry Pi with entertainment and whatever else I want on them. I can customise anything. And if one of them gets hacked, nothing of importance is exposed.
> For example, a kernel like SeL4, which could directly run sandboxed applications, like banking apps. Apps run in this way could prove they are running in a sandbox. ... Then also allow the kernel to run linux as a process, and run whatever you like there, however you want.
This won't work. It's turtles all the way down and it will just end up back where we are now.
More software will demand installation in the sandboxed enclave. Outside the enclave the owner of the device would be able to exert control over the software. The software makers don't want the device owners exerting control of the software (for 'security', or anti-copyright infringement, or preventing advertising avoidance). The end user is the adversary as much as the scammer, if not more.
The problem at the root of this is the "right" some (entitled) developers / companies believe they have to control how end users run "their" software on devices that belong to the end users. If a developer wants that kind of control of the "experience", the software should run on a computer they own, simply using the end user's device as a "dumb terminal".
Those economics aren't as good, though. They'd have to pay for all their compute / storage / bandwidth, versus just using the end user's. So much cheaper to treat other people's devices like they're your own.
It's the same "privatize gains, socialize losses" story that's at the root of so many problems.
It may still be an improvement over the situation now though. At least something like this would let you run arbitrary software on the device. That software just wouldn't have "root", since whatever you run would be running in a separate container from the OS and banking apps and things.
It would also allow 3rd party app stores, since a 3rd party app store app could be a sandboxed application itself, and then it could in turn pass privileges to any applications it launches.
I can run an emulator in the browser on my phone and run whatever software I want. The software inside that emulator doesn't get access to cool physical hardware features. It runs at a performance loss. It doesn't have direct network access. Second class software.
It's not what we have now, for the reasons you list. Web software runs slowly and doesn't have access to the hardware.
SeL4 and similar sandboxing mechanisms run programs at full, native speed. In a scheme like I'm proposing, all software would be sandboxed using the same mechanism, including banking apps and 3rd party software. Everything can run fast and take full advantage of the hardware and all exposed APIs. Apps just can't mess with one another. So random programs can't mess with the banking app.
Some people in this thread have proposed using separate devices for secure computing (eg banking) and "hacking". That's probably the right thing in practice. But you could - at least technically - build a device that let you do both on top of SeL4. Just have different sandboxed contexts for each type of software. (And the root kernel would have to be trusted).
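A minimal sketch of the scheme described above, in Python rather than anything resembling real seL4 code (the class and resource names are invented): every app, including the Linux-as-a-process context and a nested 3rd party store, runs in its own sandbox, and a sandbox can only touch resources it holds an explicit capability for. Delegation can only ever narrow the capability set.

```python
# Capability-style sandboxing sketch (hypothetical API, not seL4's).

class Sandbox:
    def __init__(self, name, caps):
        self.name = name
        self.caps = frozenset(caps)

    def access(self, resource):
        # An app can only touch resources it holds a capability for.
        return resource in self.caps

    def spawn(self, name, requested):
        # Delegation: a child gets at most its parent's capabilities.
        granted = frozenset(requested) & self.caps
        return Sandbox(name, granted)

# The trusted root kernel holds everything.
kernel = Sandbox("root", {"net", "screen", "bank_secrets", "gpu"})

banking = kernel.spawn("banking", {"net", "screen", "bank_secrets"})
linux = kernel.spawn("linux", {"net", "screen", "gpu"})  # full speed, no bank access

# Sandboxes nest: a 3rd party store inside the Linux context...
store = linux.spawn("f-store", {"net", "screen"})
# ...launches an app that asks for bank_secrets and simply doesn't get it.
game = store.spawn("game", {"net", "screen", "bank_secrets"})
```

Everything here runs "natively" in the model; the only enforcement point is which capabilities a context was ever handed, which is the property that lets trusted and untrusted software share one device.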
I'm not familiar with SeL4 other than in the abstract sense that I know it's a verified kernel.
I interpreted your statement "Then also allow the kernel to run linux as a process, and run whatever you like there, however you want." as the Linux process being analogous to a VM. Invoking an emulator wasn't really the right analogy. Sorry about that.
For me it comes down to this:
As long as the root-of-trust in the device is controlled by the device owner, the copyright cartels, control-freak developers, companies who profit from end users viewing ads, and interests who would create "security" by removing user freedom (to get out of fraud liability) won't be satisfied.
Likewise, if that root-of-trust in the device isn't controlled by the device owner then they're not really the device owner.
Yes; I think that's the real impasse here. As I say, I think there is a middle ground where the device owners keep the keys, but programmers can run whatever software they want within sandboxes - including linux. And sandboxes aren't just "an app". They could also nest, and contain 3rd party app stores and whatever wild stuff people want to make.
But a design like this might please nobody. Apple doesn't want 3rd party app stores. Or really hackers to do anything they don't approve of. And hackers want actual root.
The problem is it's quite easy to poke holes in a sandbox when you're outside the sandbox looking in, especially when the user is granting you special permissions they don't understand. These apps aren't doing things like manipulating the heap of the banking app, they are instead just taking advantage of useful but powerful features like screen mirroring to read what the app is rendering.
Yes, sandboxing is a technological protection, but once you have important data flowing we often don't have technological protections to prevent exfiltration and abuse. The global nature of the internet means that someone who publishes an app which abuses user expectations (e.g. uses accessibility to provide command and control to attackers) is often out of legal reach.
You also have so much grey area where things aren't actually illegal, such as gathering a massive amount of information on adults in the US via third party cookies and ubiquitous third party javascript.
That's why platforms created in the internet age are much more opinionated about what APIs they provide to apps, much more stringent on sandboxing, and try to push software installation onto app stores, which can restrict apps based on business policy, going beyond technological and legal limitations.
Don't know why this was downvoted. Some people prefer to access online services from the safety of a web browser sandbox than through an always-installed wrapper app.
You can even use the chip on the card together with some cheap HW device to authorize the transactions made with the app.
This has actually existed [1] for quite some time but seems to be mostly limited to Germany. But it, and the use of other HW token systems, is on the decline. Banks increasingly use apps now, increasingly without any meaningful second factor, not even offering better options. They want this and are fully to blame.
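A hedged sketch of that kind of challenge-response flow. Real chipTAN-style systems use EMV CAP on the card rather than a raw HMAC, and the key handling is more involved; this just illustrates the core property that the code is bound to the transaction details the user sees on the offline reader, so a compromised phone can't silently redirect the payment.

```python
import hashlib
import hmac

# Simplified chipTAN-style flow. CARD_KEY and the message format are
# assumptions for illustration; real systems use EMV CAP, not raw HMAC.

CARD_KEY = b"per-card secret provisioned by the bank"  # hypothetical

def tan_for(key, payee_iban, amount_cents, challenge):
    # The offline reader displays payee + amount and computes a short
    # code over those *actual* transaction details.
    msg = f"{payee_iban}|{amount_cents}|{challenge}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % 1_000_000  # 6-digit code

# Bank issues a challenge; user approves the shown details on the reader
# and types the resulting TAN into the (untrusted) app or website.
challenge = "83041977"
user_tan = tan_for(CARD_KEY, "DE02120300000000202051", 15000, challenge)

# The bank recomputes the code over the transaction it is asked to
# execute. A trojan that swaps in its own payee will almost certainly
# produce a transaction whose expected TAN no longer matches.
bank_expected = tan_for(CARD_KEY, "DE02120300000000202051", 15000, challenge)
tampered = tan_for(CARD_KEY, "DE99ATTACKER0000000001", 15000, challenge)
```

The security doesn't come from the phone at all, which is exactly why banks moving everything into the app, with no independent second device, gives this property up.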
This, 100%. I don't understand why everything needs to be an app nowadays. Some things are best done in person and without technology. No, I won't install some shitty app that requests location and network access just to order lunch. If a venue does not provide a paper menu and accept cash, they have just lost my custom.
Yeah, I worked at a bank once. I was told that following policy so my ass was covered was more important than actually making sure things were secure, even if that meant using dependencies with known vulnerabilities (it was someone else's problem to get that update through the layers of approval!). Needless to say, I didn't last long.
How does preventing people from running software of their choice on their own device (what you call jailbreaking) prevent fraud in practice? It's a pretty strong claim you're making there. And it's being made frequently by institutions, yet I have never seen it actually explained and backed up with any real security model.
All the information and experience I ever got tells me this is security theater by institutions who try to distract from their atrocious security with some snake oil. But I'm willing to be convinced that there is more to it if presented with contraindicating information. So I'm interested in your case.
How did demanding control over your customers' devices and taking away their ability to run software of their choice in practice in quantifiable and attributable terms reduce fraud?
The app does fingerprinting and requires certain secure device profile characteristics before the app lets a user initiate certain kinds of financial transactions.
Those are based on APIs available from the mobile devices. Google and Apple can offer other means by which to secure these things, and to validate that the device hasn't been cracked and isn't submitting false attestations. But even a significant financial institution has no relationship with Apple on the dev side of things. Apple does what it decides to do, and the financial institution builds to what is available.
These controls work -- over time fraud and risk go down.
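A toy sketch of the server-side policy being described (the field names and the `allow_transaction` helper are made up, not any real attestation API): high-risk transactions are gated on a device profile the app submits, and the bank trusts that profile only as far as the platform's attestation verdict backs it.

```python
# Hypothetical server-side risk gate. Field names are invented; a real
# deployment would consume something like a Play Integrity / App Attest
# verdict rather than these booleans.

HIGH_RISK = {"wire_transfer", "add_payee", "raise_limit"}

def allow_transaction(kind, device_profile):
    if kind not in HIGH_RISK:
        return True  # low-risk actions (balance check etc.) aren't gated
    # The platform attestation verdict is the only signal that the rest
    # of the self-reported profile wasn't simply forged by malware.
    if not device_profile.get("attestation_verdict_ok"):
        return False
    if device_profile.get("rooted") or device_profile.get("emulator"):
        return False
    return True

ok_device = {"attestation_verdict_ok": True, "rooted": False, "emulator": False}
rooted = {"attestation_verdict_ok": True, "rooted": True, "emulator": False}
forged = {"attestation_verdict_ok": False, "rooted": False, "emulator": False}
```

This is also where the tension in the rest of the thread lives: the same check that blocks a scammer's remote-control setup blocks the owner's jailbroken device.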
Maybe there are some details missing here, but asking for more detailed or tailored feedback makes it seem like he cares and was willing to hear you out. Sometimes people are in their own industry for so long that they forget what their industry and tools look like to outside eyes. A simple menu to him could’ve been overwhelming for you as a quick example.
I ran into this a while back at a talk when the speaker used the phrase "perfectly ordinary sodium iodide gamma ray spectrometer". I pointed out to him afterwards that that's not something that most people would expect to follow "perfectly ordinary" in a sentence, and he explained that, yes, today you'd be using thallium-doped CsI or NaI scintillators instead.
If the response is an exact quote, the tone is "you must be stupid." It doesn't convey caring and willingness to hear things, and if they can't understand that before sending it, it makes perfect sense that the product sucks, and it will only get worse.
It’s not the tone. It’s how you perceive the tone. Be careful, especially in a culturally diverse and international environment. Plenty of cultures cringe when they receive overly friendly phrasing, as it will not sound honest and curious to them but condescending and fake (in this context it may be perceived as sarcasm); whereas others will experience and mean it as straightforward openness.
Communication is hard. Even harder in writing. A usually working approach is to assume friendliness.
Yeah, I think the idea of the law is fine. If you imagine "Operating System" to mean "things like Windows and iOS, or a desktop install of Fedora", "Application Store" to mean "Microsoft Store or App Store or the like", and "Application" to mean "Word and Doom and stuff like that", then it's fine. Especially if you keep in mind that there isn't any actual verification of the age; it's simply set by whoever sets up the account.
Most of the issues only arise because in the bill "operating system", "covered application store" and "application"/"developer" have very loose definitions that match lots of things where the law doesn't make sense.
Laws get made by whoever takes Gavin to the most dinners at the French Laundry. Don’t like this law? Good luck - reservations are booked out 6 months in advance.
I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.
And no, I wouldn't think an HN post is it either. I'm just saying, there should be a good place to post the output of good questions asked iteratively.
Simon Willison published something for turning Claude convos into something publishable. [1] I haven't tried it, so I can't speak to the ergonomics.
Where to post it? Any blog site, probably a good few Show HNs too. Will anyone read it? I haven't read anyone else's; I'm more inclined to dock them reputation for suggesting I read their AI session. Snippets of weird things shared on socials were interesting to me early on, but I'm over that now too.
I think that if you actually try reading someone else's conversation with LLM, you'll find out that it's less exciting than it seems.
For the one who has the conversation, the excitement comes mostly from the ability to steer it the way you want. The reader doesn't have this ability, so they are just forced to endure the excessive wordiness that is so typical of most LLMs.
If you learned something interesting, then why not express this knowledge in a normal article/blogpost? What advantage does a conversation between you and an LLM have over just normal text or, perhaps, text with pictures, diagrams, maybe some interactive illustrations, etc.?
If you can’t even be arsed doing that how much value is there, really?
Personally the only thing less interesting to me than someone else’s conversations with an LLM is hearing about someone else’s dream they had last night but you never know, some people may be interested.