Hacker News: andrewbarba's comments

This math doesn't account for the time it takes to get the water to a boil. Probably closer to 40% savings. Still, quite good!


Doesn't seem to work for me. I downloaded and ran on my Mac and proceeded to click the volume buttons on my iPhone but the count remains 0.


hahahahahaha


It will, most definitely, be end to end encrypted along with the rest of user data in the iCloud ecosystem.


Unless you live in China*.

* https://support.apple.com/en-us/HT208351


iCloud user data that’s not end-to-end encrypted: name, home address, work address, who you talk to and when, where you go and when, what you buy and when…

iCloud user data that’s not end-to-end encrypted unless you find and change multiple different settings that few people know about: files, photos, notes, texts, full copies of your laptop’s and phone’s storage…

I’d take a bet that 99% of the time Apple or a local regime wants to read iCloud user data, they can.


I’m almost sure that billing information is required by law to be stored in plenty of jurisdictions; where you go and when is a feature for Find My; what you buy and when is, again, the first point. What if I told you that literally any physical supermarket will also have this data?


So when it’s Google doing this it’s evil, but when it’s Apple, it’s ok if a feature uses it, or if some nation wanted it, or if somebody else does it too?

I do think that a lot of these are hard problems to truly tackle, I was just commenting on this cognitive dissonance where people say things like “iCloud user data is private”, while in fact iCloud user data is this huge trove of very personal information that’s not private, and we don’t act like those ideas are in conflict.

Besides, Apple’s not doing the most they can within the confines of user wishes and local law. Neither one requires using dark patterns to ensure that 95% of users have privacy turned off. They’re also tracking every interaction for ad purposes on the App Store. I’m not trying to call them an evil boogeyman, but I think they’re trying to have their cake and eat it too: have users buy more because they think it’s private, while not disadvantaging themselves or pissing off local regimes by actually preventing data access in the majority of cases.


That's kinda the problem. This "required by law" bit obviously conflicts with their "privacy is a human right" shtick. Just look through their transparency page[0], where Apple confirms that device and account data is turned over by the thousands annually. Despotic nations like China get 93% of device access requests granted. Apple doesn't even have control over Chinese iCloud servers, their allegiance to the government directly prevents them from protecting their users.

> What if I tell you that literally any physical supermarket will also have this data?

Great, now we're comparing iPhone privacy to a supermarket. Wasn't Apple supposed to care about this stuff?

[0] https://www.apple.com/legal/transparency/


We are talking about China, that straight up just replaced Google with their own thing.

You are either doing business with China at their own rules, or you don’t do anything. Apple has different iCloud settings for the rest of the world, and while it does suck for Chinese people, the alternative would be some domestic OS with much much more involved privacy violations.

Also, why should a company be above local law? Sure, it sucks when the local law is bad and Apple should not make the job easier, but if they do lawfully request some data then I don’t see why Apple should not fulfill it.


> or you don’t do anything.

This was the ethical abstention that the rest of FAANG pursued. At least, the ones who didn't have vested manufacturing contracts in mainland China. Let them replace it with their own thing, it's a better alternative than compromising every iPhone "just in case". It's not only about domestic security, it's about how it devalues the meaning of Apple's privacy worldwide.

> Also, why should a company be above local law?

Because their principles of privacy and security supersede their interest in moneymaking? If the local law is unjust, you don't do business there. That's what Microsoft, Google, and Netflix did.

> I don’t see why Apple should not fulfill it.

Your phone shouldn't have a backdoor in the first place. Letting Apple have this much vertical authority over their platform is why we're here now, trying to decide if America's largest corporation is right for sleeping with the enemy. It's obviously wrong (as you note), but we're also helpless to resist it (as you also note). The best path of recourse is legislation that dilutes Apple's absolute authority and forces them to play ball with the industry. It's attractive to legislators, regulators, and the common people.


> it's a better alternative than compromising every iPhone "just in case"

But that’s absolutely not what happens - do you have any sort of citation for that?


How do you think Apple is granting device access to law enforcement?


Worth noting:

- We lost two tweeters (7 down to 5)

- We lost two mics (6 down to 4)

Really curious how the sound compares to the original.


I still don't know why it doesn't come with an equalizer built into Home. Sometimes I want grumbly bass; sometimes I want to understand Bane without subtitles.


Not an equalizer, but the Reduce Bass setting does exactly this. I actually have it on at all times because it makes the HomePod sound flatter; the default has too much bass for my taste.


The description of the tweeters has changed on the spec page as well. I wonder if they're still using balanced mode radiators (BMRs) like the first gen HomePods? Perhaps these use a newer generation BMR?


I don't think they're BMRs, they're just conventional full range drivers, similar to what Apple has been using for almost two decades, going all the way back to the iMac.


The first gen HomePod definitely uses BMRs: https://forums.macrumors.com/threads/nondestructive-look-ins...


How could you possibly say this is doing it wrong? The only way you could batch requests in the way you describe is if you have one (or a very small number of) compute nodes. You would need all those requests to hit the same node so you could try to batch. With serverless compute infrastructure (which is what this blog is demonstrating by using Lambda) you can have one isolated process per request and therefore need a database that can actually handle this kind of load.


> With serverless compute infrastructure

Here is your problem. You are trying to build a huge application using inadequate technical building blocks.

Lambdas are super inefficient in many different ways. It is a good tool, but as with every tool you do need to know how to use it. If you try to build a heavy compute app in Python and then complain about your electricity bill, that really is on you.

If your database is overloaded with hundreds of thousands of connections from your lambdas, it means it is the end of the road for your lambdas. Do not put effort into scaling your database up; put effort into reducing the number of your connections and increasing the efficiency of your application.


I think you can start to hit connection limit walls with RDS at several hundred connections, depending on your instance size. Running an even moderately busy app you could hit those pretty quickly. I would hate to have to change my entire infrastructure at such an early stage because the DB was hitting connection limits!

Would you ever need a million open connections? Probably not! But you'll likely want more than 500 at some point. And if your entire stack is serverless already, it'd be nice if the DB could handle that relatively low number of connections too.
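
For a rough sense of where those walls sit: RDS PostgreSQL derives its default max_connections from instance memory as LEAST(DBInstanceClassMemory / 9531392, 5000). A sketch of that math, using nominal RAM sizes (AWS's DBInstanceClassMemory is a bit below nominal, so real defaults come in slightly lower):

```python
# Back-of-envelope for RDS PostgreSQL's default max_connections,
# which AWS derives as LEAST(DBInstanceClassMemory / 9531392, 5000).
# Nominal RAM is used here, so treat results as upper bounds.
def rds_default_max_connections(memory_gib: float) -> int:
    memory_bytes = memory_gib * 1024 ** 3
    return min(int(memory_bytes / 9531392), 5000)

print(rds_default_max_connections(4))   # ~450 for a 4 GiB db.t3.medium
print(rds_default_max_connections(64))  # larger instances cap at 5000
```

So a small instance's few hundred connections are exhausted by a modest fleet of concurrent lambdas holding one connection each.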


I look at the database connections the following way: how many connections can a database really serve effectively? For a connection to be actively served the database really needs to have a cpu core working on it or waiting for IO from the storage. And I am completely omitting the fact that databases really need a sizeable amount of memory to be able to do things efficiently.

Even if you have a server with hundreds of cores your database probably can't be actively working on more than a small multiple of the number of the cores.
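
To put rough numbers on that (entirely hypothetical figures, just to illustrate the ratio):

```python
# Hypothetical numbers only: of a million open connections, how many
# can the database actually be making progress on at once?
cores = 128                # a very beefy database server
active_per_core = 4        # generous: runnable plus IO-waiting backends
open_connections = 1_000_000

in_flight = cores * active_per_core    # queries actually progressing
queued = open_connections - in_flight  # everything else just waits
print(in_flight, queued)               # 512 vs 999488
```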


I'm not sure if you've seen the other post, but we can totally handle one million queries per second: https://planetscale.com/blog/one-million-queries-per-second-....

And previous HN discussion: https://news.ycombinator.com/item?id=32680957.


I am not saying you can't. I totally believe you do.

Modern hardware is totally able to execute hundreds of thousands of transactions per second on a single core. If your query is simple and you can organise getting the data from storage at the necessary speeds you should totally be able to do this many requests, possibly even tens of millions.

But handling one million queries per second is completely different from having database server making progress on one million queries in parallel. What happens is, the database server is only making progress on a small number of them (typically in tens up to hundreds on a very beefy hardware) and everything else is just queued up.

There are much, much better ways to queue up millions of things than opening a million connections to get each one done individually.


Lambda was a means to an end for us here, and we're not specifically endorsing its use in _this_ way. Our goal was explicitly to test our ability to handle many parallel connections, and to observe what that looked like from different angles.

We're a DBaaS company, and we do need to be prepared for anything users may throw at us. Our Global Routing infrastructure has seen some major upgrades/changes recently to help support new features like PlanetScale Connect and our serverless drivers.

From our point of view, this was a sizing exercise with the interesting side benefit that many people do happen to use Serverless Functions similarly.


How much is a moderately busy app? I have a sketch of a twitter app in Scala with zio-http as the framework, doing the batching strategy twawaaay describes, and it can handle 46k POSTs per second on my i5-6600 with a SATA3 SSD. That's using 16 connections to postgres, which is probably more connections than is reasonable for my 4 core CPU.

At 46k RPS, it only takes 5.5 ms to assemble a batch of 256, so latency is basically unaffected by doing this. Just set a limit of 5-10 ms to assemble the batch (or lower if you have a more powerful computer that can handle more throughput).
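
For anyone curious what that batching strategy looks like in code, here is a minimal asyncio sketch of the idea (my own illustration, not the commenter's Scala code; the batch size and flush window are arbitrary):

```python
import asyncio

class Batcher:
    """Group individual requests into batches of up to max_size items,
    flushing after max_wait seconds even if the batch is not full."""

    def __init__(self, flush, max_size=256, max_wait=0.005):
        self.flush = flush        # async fn: list of items -> list of results
        self.max_size = max_size
        self.max_wait = max_wait
        self.pending = []         # (item, Future) pairs awaiting a flush
        self.timer = None

    async def submit(self, item):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self.pending.append((item, fut))
        if len(self.pending) >= self.max_size:
            self._flush_now()                      # batch is full: go now
        elif self.timer is None:
            self.timer = loop.call_later(self.max_wait, self._flush_now)
        return await fut

    def _flush_now(self):
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None
        batch, self.pending = self.pending, []
        if batch:
            asyncio.ensure_future(self._run(batch))

    async def _run(self, batch):
        results = await self.flush([item for item, _ in batch])
        for (_, fut), res in zip(batch, results):
            fut.set_result(res)                    # hand result to caller

async def main():
    async def insert_many(rows):  # stand-in for one batched DB write
        return [f"id-{r}" for r in rows]
    b = Batcher(insert_many)
    ids = await asyncio.gather(*(b.submit(i) for i in range(600)))
    print(len(ids), ids[0])       # 600 requests, only ~3 DB round trips

asyncio.run(main())
```

Each caller awaits its own future, so from the application's point of view nothing changes; only the number of database round trips does.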


If you are at a load level where you have a million lambdas executing concurrently, your monthly bill will make even Uncle Sam cry.


In the case of the blog post at least: it is 1,000 lambdas making 1,000 queries each.


Well, that's a very unrealistic workload


Unrealistic from the Lambda side, yes! But we were just trying to generate connections to the DB, not test the capabilities of Lambda.


Sure, I am not criticizing the methodology of the load testing, just pointing out that at extremely high loads Lambda is too expensive to use.


Introduce an intermediate server which accepts multiple requests and groups them into single batch requests.

With enough trickery you can even implement it using postgres wire protocol, I guess, so it'll be transparent.
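
A toy, in-process model of that intermediate server, with a plain function standing in for the database (a real version would pool connections and, as suggested, could speak the Postgres wire protocol to stay transparent):

```python
import queue
import threading

# Many callers enqueue individual queries; a single worker drains
# whatever has accumulated and sends it all to the database as one
# batch over one connection. db_execute_batch is a stand-in, not a
# real wire-protocol implementation.
work = queue.Queue()
batches = []  # records what the single DB connection actually received

def db_execute_batch(items):
    batches.append(items)            # one round trip for the whole batch
    return [x * 2 for x in items]    # pretend per-item result

def worker(total, max_batch=100):
    done = 0
    while done < total:
        batch = [work.get()]                  # block for the first item
        while len(batch) < max_batch:         # then drain what's waiting
            try:
                batch.append(work.get_nowait())
            except queue.Empty:
                break
        results = db_execute_batch([item for item, _ in batch])
        for (_, out), res in zip(batch, results):
            out.append(res)                   # hand result back to caller
        done += len(batch)

outs = [[] for _ in range(10)]
for i, out in enumerate(outs):
    work.put((i, out))                        # ten "requests" arrive

t = threading.Thread(target=worker, args=(10,))
t.start()
t.join()
print(len(batches), outs[3][0])  # prints "1 6": ten requests, one round trip
```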


Now you need to batch your requests to your intermediate server


No, you don't. You have one per 100 nodes or whatever. Not a single one for all nodes to talk to.


But why? Why have an additional component if your database can do it?


twawaaay claimed that a similar approach allowed him to improve throughput by a factor of 50. Apparently the database couldn't do it efficiently in that case.


This is a shameless plug, but I built Swift Cloud to help people build scalable backends in Swift: https://swift.cloud

Behind the scenes we compile Swift to WASM and deploy to Fastly's edge network, Compute@Edge. At my day job we are using this in production and serving thousands of requests per second on our Swift app. Overall it's a lot of fun to deploy Swift on server, but the developer UX still leaves a lot to be desired. Running and testing locally is still non-trivial.


This looks great, good work!

One tangential question: do you worry about using "Swift", which I assume is an Apple trademark, in the name of your product?


Thanks so much.

And short answer - Yes.

My guess is it's a matter of 'when' I have to deal with this. I've had the domain for a while, and this is something I've wanted to build for such a long time. Things fell into place when I was able to build a Swift SDK for Fastly's platform back in January this year. The stubborn engineer in me went ahead and used the domain anyway and launched what you see today.


Gotcha, thanks for your answer :)


I already created an account! I’m definitely going to use this the next time I need a little cloud function


Love to hear that! Always feel free to reach out if you have any questions or run into issues.


I have no use for this project, but this seems super cool!


Are there plans to release an HTTP API to make it easier to use with services like Fastly Compute@Edge and Cloudflare Workers? And if so would the API be global or region specific?

One thing I haven't seen with "serverless" databases is an easy way to dictate where data is stored. Mongo has a pretty clever mechanism in their global clusters to specify which regions/nodes/tags a document is stored in, and then you simply specify that you want to connect to the nearest cluster. Assuming your compute is only dealing with documents in the same region as the incoming request, this ends up working really well: you have a single, multi-region db, but in practice reads/writes go to the nearest node if the data model is planned accordingly.

A real world example of how I am using this in Mongo today: I run automated trading software that is deployed to many AWS regions, in order to trade as close to the respective exchange as possible. I tag each order, trade, position, etc. with the exchange region that it belongs to and I get really fast reads and writes because those documents are going to the closest node in same AWS region. The big win here is this is a single cluster, so my admin dashboard can still easily just connect to one cluster and query across all of these regions without changing any application code. Of course these admin/analytics queries are slower but absolutely worth the trade off.
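
For readers unfamiliar with the pattern, here is a toy in-process model of that region-tag routing (nothing here is MongoDB's actual API; it just shows why regional reads stay fast while global admin queries scatter-gather):

```python
# Toy model of zone/region-tagged storage: each document carries a
# region tag, writes land on the "shard" for that region, and global
# queries fan out across every region.
SHARDS = {"us-east-1": [], "eu-west-1": [], "ap-northeast-1": []}

def insert(doc):
    SHARDS[doc["region"]].append(doc)    # write stays region-local

def find_local(region, pred):
    # fast path: compute in us-east-1 only touches us-east-1 data
    return [d for d in SHARDS[region] if pred(d)]

def find_global(pred):
    # admin/analytics path: scatter-gather across all regions (slower)
    return [d for shard in SHARDS.values() for d in shard if pred(d)]

insert({"region": "us-east-1", "symbol": "ES", "qty": 2})
insert({"region": "ap-northeast-1", "symbol": "NK", "qty": 1})
print(len(find_local("us-east-1", lambda d: True)))  # 1
print(len(find_global(lambda d: d["qty"] > 0)))      # 2
```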


Absolutely! We are working on it right now and call this “regions”. We already have a proxy - you will notice that the connection string is project_name.cloud.neon.tech.

We are working on deploying the proxy globally and routing read traffic to the nearest region.

We also have some multi-master designs in collaboration with Dan Abadi. But this will take a second to build.


So if we wanted to put PostgREST[1] in front of this would something like this work:

db-uri = "postgres://authenticator:mysecretpassword@project_name.cloud.neon.tech:5433/postgres"
db-schema = "api"
db-anon-role = "web_anon"

[1] https://postgrest.org/en/v8.0/tutorials/tut0.html


Kind of, yes. There are a few user-creation details we need to polish so that you can follow the tutorial word for word without needing to do any click-ops in the console.


I've seen a clever approach to location-specific data at rest: partitioning and postgres_fdw were combined to store specific data on specific clusters in certain regions, and on top of that, partitioning was used to again get the global view. Really nice.


It's good for us too - we got proper Hikaru game recaps on YouTube.


Is this something that could be exposed as a generic CLI tool to replace something like wasm-opt from Binaryen?


A subset of this could possibly be applied at the wasm-opt level, but consider that at the LLVM IR level there is more information to be leveraged (in particular, PartialExecuter uses information on which memory ranges are read-only).

In general the whole concept behind Cheerp (a C++ to WebAssembly + JavaScript compiler) is doing as much as possible at the LLVM IR level, the JavaScript output included, since it's easier and more powerful to do code transformations there.


Binaryen implemented partial evaluation at the beginning of this year https://github.com/WebAssembly/binaryen/pull/4438


Hah! We had the same idea.

