
They terminate TLS. It seems like you wouldn’t want to use this service even if all those questions were answered to your satisfaction.


Not disagreeing, just elaborating, about “fine” Chinese Amazon tools.

I needed safety wire pliers to assemble some brake rotors. The metal in the ones I got on Amazon was softer than the metal wire they came with, such that the cutting edges got little wire-sized dents in them and became increasingly useless the farther I got into the job.

Returned those afterward. Junk.

But there’s other stuff I’ve gotten from RANDOMLETTERS Amazon that’s actually holding up “ok.”

Also, Harbor Freight is a better source of ok/fine tools where you don’t need quotes around those words.


The saying goes Harbor Freight is probably good enough for any tool you can afford to have fail. If your physical safety depends on it or if you use it so much that failure would cause a lot of downtime, you should probably spend a little more.


Tup. https://gittup.org/tup/ https://gittup.org/tup/make_vs_tup.html

But the Internet’s make mind-share means you still have to know make.

Edit: and make lets you use make to essentially run scripts/utils. People love to abuse make for that. Can’t do that with tup.


> Tup.

I don't think Tup managed to present any case. Glancing at the page, the only conceivable synthetic scenarios where it comes out in a positive light are build times for projects of >10k files, and only when recompiling partially built projects. And what's the upside of those synthetic scenarios? Shaving a couple of seconds off rebuilds? That's hardly a compelling scenario.


Abuse? Running linters, code analysers, configuration tools, template engines, spellcheckers, pulling dependencies, building dependencies with different build systems.

Sufficiently complex projects need to involve a lot of weird extra scripts, and if a build system can't handle that... then it needs to be wrapped in a complex bash script anyway.


> Tup

`tup` relies on a stateful database, which makes it incomparable to `make`.



It is easy to manufacture contradictions by prooftexting. It isn't difficult to read in a favored hypothesis that contains contradictions, but that is by no means demanded by the text. Many can be resolved by recognizing that the same thing was being described from two different perspectives or with a focus on different aspects.

The quintessential example is perhaps #3, which purports that the two accounts of creation are contradictory. But there are a number of ways to interpret Genesis [0] that don't result in contradiction while maintaining the theological truths that are the purpose of biblical texts. The Bible isn't a scientific treatise.

Another typical class of examples are the purported inconsistencies within the Gospels themselves [1].

An article on inerrancy you might find interesting [2].

[0] https://www.catholic.com/magazine/print-edition/are-there-co...

[1] https://www.catholic.com/magazine/online-edition/how-to-reso...

[2] https://www.catholic.com/magazine/print-edition/is-scripture...


Yup. +1 for fossil. I wanted an issue tracker that wasn’t text files in the repo. Lots of git-based things that were heavier (gitea and friends) or hackier than I wanted. Decided to finally try out fossil and I think it’s really really neat.


In terms of everyday workflow, does Fossil differ radically from git? If so, what's the learning curve like?


The biggest workflow differences I've noticed:

- The repo normally lives outside of the worktree, so remembering to 'fossil new foo.fossil && mkdir foo && cd foo && fossil open ../foo.fossil' took some getting used to. Easy enough to throw into some 'fossil-bootstrap' script in my ~/.local/bin to never have to remember again.

- For published repos, I've gotten in the habit of creating them directly on my webserver and then pulling 'em down with 'fossil clone https://${FOSSIL_USER}@fsl.yellowapple.us/foo'

- The "Fossil way" is to automatically push and pull ("auto-sync") whenever you commit. It feels scary coming from Git, but now that I'm used to it I find it nice that I don't have to remember to separately push things; I just 'fossil ci -m "some message"' and it's automatically pushed. I don't even need to explicitly stage modified files (only newly-created ones), because...

- Fossil automatically stages changed files for the next commit - which is a nice time-saver in 99% of cases where I do want to commit all of my changes, but is a slight inconvenience for the 1% of cases where I want to split the changes into separate commits. Easy enough to do, though, via e.g. 'fossil ci -m "first change" foo.txt bar.txt && fossil ci -m "everything else"'.

- 'fossil status' doesn't default to showing untracked files like 'git status' does; 'fossil status --differ' is a closer equivalent.
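The 'fossil-bootstrap' helper mentioned in the first bullet might look something like this; the helper name is the commenter's, but the body is my guess at a minimal sketch:

```shell
# Hypothetical ~/.local/bin helper wrapping the repo-outside-worktree dance.
fossil_bootstrap() {
    name="$1"
    [ -n "$name" ] || { echo "usage: fossil_bootstrap NAME" >&2; return 1; }
    fossil new "$name.fossil" &&   # create the repo file outside the worktree
    mkdir "$name" &&
    cd "$name" &&
    fossil open "../$name.fossil"  # open it into a fresh worktree
}

# usage: fossil_bootstrap myproject
```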


> - Fossil automatically stages changed files for the next commit - which is a nice time-saver in 99% of cases where I do want to commit all of my changes, but is a slight inconvenience for the 1% of cases where I want to split the changes into separate commits. Easy enough to do, though, via e.g. 'fossil ci -m "first change" foo.txt bar.txt && fossil ci -m "everything else"'.

That'd be a deal breaker for me. Git's staging area is such a breath of fresh air compared to the old way (that fossil is doing), that it's one of the biggest reasons for me to switch to it. It's completely freeing to not have to worry about things being accidentally added to commits that I didn't want to have.


I agree, and adding a `-a` to your git commit if you dont want to have to add all the changes is not much of an added burden for people who operate in the fossil way


Not to mention this is kinda yet another hidden local version :) I have a habit of doing git add -u when the stuff I'm working on is in a good state and I'm about to try something more risky. I just stage the stuff and keep hacking. If I fuck it up... I can git checkout individual files, or everything, back to the staged state.
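That checkpoint habit can be sketched like this (repo name and file are made up for illustration):

```shell
# Set up a throwaway repo with one committed file.
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial"

echo "good state" > app.txt
git add -u                          # stage the known-good version as a checkpoint
echo "risky experiment" > app.txt   # keep hacking on top of it

# The experiment went wrong: restore the file from the staged checkpoint.
git checkout -- app.txt
cat app.txt
```

Note that `git checkout -- <file>` restores from the index, not from the last commit, which is exactly what makes the staged copy a usable checkpoint.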


Not tremendously. You still commit, you still push and pull. There is a history and ignore options.

There is a guide written for Git users:

https://fossil-scm.org/home/doc/trunk/www/gitusers.md


So cut those people. They will continue to be a problem. Can’t fire them because of the rights of federal employees? Get the law changed.

Actually solve an actual problem, not wave a machete around cutting the government in half.


Subsonic and airsonic (the latter is a fork of the former).


Before clicking the link or seeing the domain, I was expecting either a rehashed (or if I was optimistic: a novel) argument for why what LE does isn’t actually validating domains. Philosophically or technically. For example: they don’t validate you’re going to the domain you intend on visiting. And 500 words on why that makes them useless. (I don’t agree, but that’s what I was expecting)


I worked for a brand that was heavily impacted by phishing sites that used LE certs. It was annoying, but honestly I wasn’t sure what LE could do about it. If you deny creating a cert with Gmail in the domain, people will just use something like gmall instead.


Many phishing attacks could be thwarted if there were a more manual process for certificate issuance, CAs were obligated to KYC and verify/monitor applicants stringently and lost their license for malpractice, etc. The Web would be a safer place, but the cost is higher barriers to entry, and attackers would just focus on stealing the actual certs.

Some would say being able to communicate privately/securely is irrelevant to whether you should trust whoever you’re communicating with, but then someone could argue that in practice the two get conflated all the time and the aura of the channel colours the counterparty.

I notice that there are two most common categories of non-techie users: those for whom being able to visit a website without loud warnings is enough to auto-trust it, and those who by default distrust anything that has to do with anything on the Web (and the latter are unfortunately correct). You can’t expect people to perform sophisticated threat detection at all times and feel good about their life at the same time.


Exactly. “Unsolvable” is a strong word, but … how wrong is it? Shrug.


Passkeys. The answer is passkeys.


It’s the best we’ve got for achieving actually meaningful privacy and anonymity. It has a huge body of research behind it that is regularly ignored by those coming up with sexy or off-the-cuff alternatives.

It’s the most popular so it gets the most attention: from academics, criminals, law enforcement, journalists, …


Why not just have greater number of relays by default? Internet bandwidth tends to increase over time, and the odds of this correlation attack are roughly proportional to the attacker's share of relays to the power of the number of relays used.

So latency issues permitting, you would expect the default number of relays to increase over time to accommodate increases in attacker sophistication. I don't think many would mind waiting for a page to load for a minute if it increased privacy by 100x or 1000x.


If you’re advocating for a bigger network… we need more relay operators. Can’t wave a magic wand. There’s like 8000 relays. Haven’t looked in a while.

Or if you were arguing for increasing the number of relays in a circuit, that doesn’t increase security. It’s like one of the OG tor research papers deciding on 3. Bad guy just needs the first and the last. Middle irrelevant.


> we need more relay operators. Can’t wave a magic wand. There’s like 8000 relays. Haven’t looked in a while.

The reason that there are so few relays and exit nodes is that everyone that runs an exit node believes, for very good reason, that they'll be opening themselves up to subpoenas and arrest for operating one. You know who never has to worry about getting arrested? Surveillance agencies tasked with running exit nodes.

Consider the two classes of relay and exit operators:

1. People who operate relays and exit nodes long term, spending money to do so with no possibility or expectation of receiving money in return, and opening themselves up to legal liability for doing so, whose only tangible benefit comes from the gratification of contributing to an anonymous online network

2. Government agencies who operate relays and exit nodes long term, spending government allocated money to operate servers, with no material risk to the agencies and whose tangible benefit comes from deanonymizing anonymous users. Crucially, the agencies are specifically tasked with deanonymizing these users.

Now, I guess the question is whether or not you think the people in group 1 have more members and more material resources than the agencies in group 2. Do you believe that there are more people willing to spend money to run the risk of having equipment seized and arrest for no gain other than philosophical gratification than there are government computers running cost and risk free, deanonymizing traffic (which is their job to do)?


>Or if you were arguing for increasing the number of relays in a circuit, that doesn’t increase security. It’s like one of the OG tor research papers deciding on 3. Bad guy just needs the first and the last. Middle irrelevant.

Because of timing attacks? There are ways to mitigate timing attacks if you are patient (but I think clearnet webservers are not very patient and may drop your connection).


Yes timing attacks.

And yeah, mitigation gets you into a huge body of research that’s inconclusive on practical usability. E.g. so much overhead that it’s too slow, and 10 people can use a 1000-relay network and still get just 1 Mbps goodput each. Contrived example.

People need to actually be able to use the network, and the more people the better for the individual.

There’s minor things tor does, but more should somehow be done. Somehow…


Any idea what consideration keeps the tor team from making the client also act as a relay node by default?


Clients aren’t necessarily good relays. Reachability. Bandwidth. Uptime. I’ll-go-to-prison-if-caught-and-idk-how-to-change-settings-this-needs-to-just-work.


it was used by Snowden to leak documents...


Snowden got caught.


>It’s then best we’ve got for achieving actually meaningful privacy and anonymity

...while being practical.

One could argue that there is i2p. But i2p is slow, a little bit harder to use, and from what I can remember, doesn't allow you to easily browse the clearnet (regular websites).


These sort of “Tor evangelism” comments are so tiring, frankly. There are quite a few like it in this thread, in response to…not people poo-pooing Tor, or throwing the baby out with the bathwater, rather making quite level-headed and reasonable claims as to the shortcomings and limitations of the network / protocol / service / whatever.

One should be able to make these quite reasonable determinations about how easy it’d be to capture and identify Tor traffic without a bunch of whataboutism and “it’s still really good though, ok!” replies which seek to unjustifiably minimise valid concerns because one feels the need to… go to bat for the project that they feel some association with, or something.

The self-congratulatory cultiness of it only makes me quite suspicious of those making these comments, and if anything further dissuades me from ever committing any time or resources to the project.


The issue is that the people making 'level headed' claims have read none of the literature and their mathematical ability seems to end at multiplying numbers together.

It sounds reasonable to anyone who hasn't read the papers; to anyone who has, these comments are so wrong that you can't even start explaining what's going wrong without a paper's worth of explanation that people won't read.


Depends on the content of your traffic.

If “deanonymize” strictly means perform a timing attack using info you have from the beginning and end of the circuit, then by definition you’re correct.

But if you visit an identifying set of websites and/or ignore TLS errors or … they can still deanonymize you.


What role do TLS errors play in de-anonymizing onion traffic?


My comment is strictly about exit nodes which are not used as part of connecting to onion services.

Ignoring TLS errors might mean you’re ignoring the fact your exit relay is MitM attacking you.


Thanks, I just wanted to be sure I wasn’t missing something.

