Hacker News | cpbotha's comments

One more vote for miniflux, which I run on a "server" (old laptop with linux...) at my house and access from anywhere thanks to cloudflare tunnels.
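For anyone curious what that kind of setup involves, a minimal cloudflared ingress config might look like the sketch below. The tunnel ID, credentials path, and hostname are placeholders; port 8080 is miniflux's default listen port.

```yaml
# ~/.cloudflared/config.yml (sketch; tunnel ID and hostname are placeholders)
tunnel: 00000000-0000-0000-0000-000000000000
credentials-file: /home/user/.cloudflared/00000000-0000-0000-0000-000000000000.json

ingress:
  # Route the public hostname to the local miniflux instance.
  - hostname: miniflux.example.com
    service: http://localhost:8080
  # Catch-all rule required by cloudflared.
  - service: http_status:404
```

With this in place, `cloudflared tunnel run` on the laptop exposes miniflux at the configured hostname without opening any inbound ports on the home network.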


We must be related, I'm doing the exact same thing.


Large prospective cohort study (103 388 participants) showing that artificial sweeteners and specifically aspartame are associated with increased risk of cardiovascular disease: https://www.bmj.com/content/378/bmj-2022-071204

"...findings indicate that these food additives, consumed daily by millions of people and present in thousands of foods and beverages, should not be considered a healthy and safe alternative to sugar..."

Also, artificial sweeteners might not help with obesity: "Long-term aspartame and saccharin intakes are related to greater volumes of visceral, intermuscular, and subcutaneous adipose tissue: the CARDIA study" https://www.nature.com/articles/s41366-023-01336-y


It's all good information in the BMJ paper, and there's a lot to take in there.

But reading this bit -

"Compared with non-consumers, higher consumers (unadjusted comparisons) tended to be younger, have a higher body mass index, were more likely to smoke, be less physically active, and to follow a weight loss diet; they had lower total energy intake, and lower alcohol, lipid (saturated and polyunsaturated), fibre, carbohydrate, fruit and vegetable intakes, and higher intakes of sodium, red and processed meat, dairy products, and beverages with no added sugar"

I'm not sure how much we can say this is a smoking gun, and how much we can say people who are less healthy and have worse outcomes are also using more sweeteners. In fact the authors note this in weaknesses -

"Additionally, reverse causality could lead to higher artificially sweetened food and beverage consumption among participants who were overweight or obese, and already had poorer cardiovascular health at baseline before CVD diagnosis. However, this factor probably does not entirely explain the observed associations because we excluded CVD events occurring during the first two years of follow-up and we also tested models adjusted for baseline body mass index, weight loss diet, and weight change during follow-up, which did not substantially change the results."

So it seems that even though they have tried to control for that, they can't eliminate it, so I wouldn't personally draw any strong conclusions.

For the record, I don't consume particularly much of any artificial sweetener, though I am fond of the occasional diet coke.


There is also Sébastien Dubois's Personal Knowledge Management slack [1] and the PKMs reddit [2].

[1] https://dsebastien.net/pkm-community

[2] https://www.reddit.com/r/PKMS/


Thanks! Joined both


I've been paying for the JetBrains Toolbox for a few years now; I'm a huge fan.

However, JetBrains unfortunately has nothing coming close to the transparency and speed of the VSCode remote connection, with your source living only on the remote side.

In cases where you need to work on a remote machine, for example with very particular configurations, Docker containers, or WSL2, this makes a huge difference.

For me this is mostly Python and TypeScript these days, where VSCode has grown particularly strong in terms of IDE features.


I've been using xrdp more extensively with WSL2 recently. Because WSL2 often gets assigned a new network interface, X connections back to Windows are terminated, while an rdesktop session to xrdp running on the WSL2 instance survives.

RemoteFX is active, but on my 2560x1440 display there is still a bit of sluggishness. However, it's fine for running PyCharm locally on the WSL2 instance, which is my primary use case for this.


Signal servers only know who talked to whom, but are otherwise physically unable to see even a smattering of the contents of the communication.

Correction: The signal servers don't even have that bit of metadata. See [1], they only store the last time that a user connected to the server.

[1] https://en.wikipedia.org/wiki/Signal_(software)#Servers


> The signal servers don't even have that bit of metadata. See [1], they only store the last time that a user connected to the server.

Note that this is only what they claim. It's not verifiable client side, and to be honest it's hard to come up with a scalable protocol where it would be, but you still shouldn't repeat their claim as a matter of fact when in reality we only have their word that the code actually matches what's deployed. And even if they don't store anything, AWS could still give interested entities access to the infrastructure to capture what Signal doesn't want to capture. Yes, features like sealed sender are awesome and an important step, but the service still sees IP addresses, which do provide hints about the sender. Again, Signal likely doesn't store IP addresses, but people with access to their infrastructure could.

Furthermore, Signal's encryption doesn't help against people storing all of Signal's traffic and waiting until attacks on crypto algorithms become practical (quantum computers, theoretical progress on attacks). Some secrets become irrelevant with time, others increase in value. The best defense is never having the message leave your country's network in the first place.

And there's the DoS problem. What happens if the American president decides that the EU should be cut off from all US network connections? The EU parliament members can't even organize a good response to this, because they use an American service...


The Signal app is also canonically distributed by Google Play and the Apple App Store, which are US entities under US law. When push comes to shove, an app update may get distributed to select individuals, one that will happily gather and send all their conversation histories and more.

As an EU citizen, I'm half puzzled and half horrified at how happy the EU institutions are to rely on foreign products: especially coming from a country that has a history of being trigger-happy and cutting people off in the name of a "trade war".


I compiled Signal for iOS and monitored the sent data through a proxy. Both builds behave identically. There could be a hidden switch in the distributed binaries that triggers other behaviour, but I really doubt it. For Android, there are reproducible builds, so you can actually check that the code is the same. For iOS, reproducible builds are harder but should still be possible.
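The Android comparison boils down to a per-entry diff of the two APKs that ignores the signing metadata (which legitimately differs between a self-built and a Play-distributed APK). Here is a rough, hypothetical sketch of that idea; it is not Signal's actual comparison tooling, just an illustration of the principle, since an APK is an ordinary zip archive:

```python
import hashlib
import zipfile

def apk_entry_hashes(path):
    """Map each APK entry name to the SHA-256 of its contents,
    skipping META-INF/, which holds the signing metadata that
    differs between builds by design."""
    hashes = {}
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            if name.startswith("META-INF/"):
                continue  # signature files differ between builds
            hashes[name] = hashlib.sha256(apk.read(name)).hexdigest()
    return hashes

def apks_match(official_apk, self_built_apk):
    """True if both APKs contain identical entries outside META-INF/."""
    return apk_entry_hashes(official_apk) == apk_entry_hashes(self_built_apk)
```

Signal's own repository documents a more careful procedure (a pinned Docker build environment plus its own diff script), but the core check is this kind of content comparison.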


Can I verify that the build installed on my Android phone[*] is identical to the one that I compiled? For instance, if I mount the device in Linux I can only see /mnt/sdcard, not /, so I can't copy the binaries off.

[*] i.e. the build installed on my phone, not the build available on Google's servers to download.


What's the alternative? Private closed-source apps like Threema?

This also is not for official communication; it's just for any case where staff would currently use WhatsApp or similar spyware.


I do not think that anyone suggests using proprietary alternatives. Instead, it seems that the posters in this thread would be happier if the EU were more independent from the US, by for example hosting their own Signal servers and forking the client.


Matrix, which is already used by the French government.


Under your threat model, no internet-connected smartphone is safe. Google can just push arbitrary software to run on your phone, and this includes spyware created by governments.


Can one really trust that they don't store more, if they physically have the information at one point in time? Or their upstream connectivity provider could possibly do that metadata scraping.


I will link to this each and every time this comes up:

https://signal.org/bigbrother/eastern-virginia-grand-jury/

Signal turned over everything they had on this user (which was two time stamps: user creation and last access), and fought the gag order to be able to publish the subpoena and the response. Signal would have to be pretty stupid to lie to a federal court.

Think what you want, but Signal doesn’t have any metadata to turn over.


If I worked for the intelligence agencies I would be capturing all the info going in and out of the signal servers at the infrastructure level.

Even if I couldn't break the encryption I'd have timing and connectivity data.

So, if I were a user, I would always operate on the assumption that info would leak.


In this threat model, the only defense you would have would be an overlay network resistant to correlation attacks where all nodes are involved in routing traffic (like I2P), or a mixnet like Katzenpost.

Getting people to use Tor for everything is hard enough, good luck getting people to use stuff even more obscure.


And how often were they silenced by US law and weren't even allowed to mention such a thing? We will never know.


Even so, by EU rules, I'd expect them to be required to store the data in the EU.


The Signal private messaging app has a built-in dedicated "Note to Self" contact which you can text, send photos to, send voice notes to, or share anything with via your mobile OS of choice.

I should have added: The idea is that you use Signal only for quick low friction capturing.

In my case, I later process these messages during my daily input review, at which point they mostly end up somewhere in my Emacs orgmode-based system.


My whole system, minus the Apple Pencil bits, is described here: https://cpbotha.net/2019/09/21/note-taking-strategy-2019/

In short:

- Emacs OrgMode with a specific setup as the core of everything.

- I store interesting web-pages as PDFs on my local drive, currently synced using Dropbox.

- Academic articles go in Zotero, with PDFs on local drive, synced using Dropbox.

- On mobile, I use the Dropbox app to create and edit markdown files (I wish they would just treat .org files as normal text files!), and to save any web page to PDF.

- I sometimes draw flow charts, architecture diagrams and UI using an Apple Pencil and the Notability app on a 2018 entry-level iPad, which syncs these sketches as searchable PDFs to ... Dropbox.


I did this same experiment in September of last year. (just checked my orgmode notes)

My conclusion then was that the Speedometer 2.0 benchmark is dominated by page load, because it loads pages a zillion times as it goes through all the different TodoMVC implementations.

The LastPass performance tax shows up mostly during page load.

The question is, how representative is the Speedometer benchmark of normal use?


It isn't perfect, and it does penalize LastPass's behaviour more due to its poor startup performance.

But I don't think it is entirely unrepresentative of real world performance.

If your hypothesis is correct, then with LastPass installed your pages are probably going to load slower and you'll experience a longer "uncanny valley". The tax paid is worse for pages that are otherwise lightweight.


> uncanny valley

You might want to look up that term sometime. It means something different from what you seem to think.


I was referring to the time between the browser paints your site and when JS execution kicks in.

See https://www.fastly.com/cimages/6pk8mg3yh2ee/3Toq5jWy0EuqG8KU...


I can't find any other source for that definition besides that picture.

Where does it originate if I may ask?


The term isn't exactly ubiquitous, but the problem it describes is. Some references:

https://addyosmani.com/blog/rehydration/

https://developers.google.com/web/updates/2019/02/rendering-...


This is the one blemish (in my view) on fastmail's record. I love them otherwise.

I initially came to the same conclusion as you, but after receiving one too many badly quoted HTML emails (and after some lost hours unsuccessfully trying to understand how Fastmail web, Gmail, Thunderbird and iOS Mail.app do HTML quoting; hint: they're all subtly different), I decided to double down on format=flowed.
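For context, format=flowed (RFC 3676) marks a soft line break with a trailing space, so a conforming reader can rejoin and reflow the paragraph at any width while non-flowed readers still see sensibly wrapped plain text. A minimal sketch of generating flowed text (an illustration only, omitting RFC 3676's space-stuffing and quote-depth handling):

```python
import textwrap

def to_flowed(paragraph, width=72):
    """Wrap one paragraph as RFC 3676 format=flowed text: each line
    except the last ends with a space (a soft line break), so a
    conforming reader can reflow the paragraph to any width."""
    # Wrap to width - 1 so the appended soft-break space stays within width.
    lines = textwrap.wrap(paragraph, width=width - 1)
    if not lines:
        return ""
    soft = "\n".join(line + " " for line in lines[:-1])
    return soft + ("\n" if len(lines) > 1 else "") + lines[-1]
```

The message would then be sent with `Content-Type: text/plain; charset=utf-8; format=flowed`, which is what tells the receiving client that trailing spaces are soft breaks rather than literal content.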

