Hacker News | dontTREATonme's comments

Wow, private flying really is for the rich. $1700 to fly cross country and it’s still 3 hours longer than flying commercial.


> private flying really is for the rich. $1700 to fly cross country and it’s still 3 hours longer than flying commercial

You don’t fly New York to San Francisco. You fly Cupertino to Driggs with three friends. The point is connectivity between unconnected points; similar to why we drive private cars.

Also, the article’s entire point is flying is unnecessarily expensive in the birthplace of aviation.


It will take much longer than that: you have to prep the airplane, do the planning, refuel and inspect, pee, eat. And everything depends on the weather: is it too hot, too windy, is visibility poor, what is the expected weather en route and at the destination 4 hours from now, are there route restrictions? Still worth it, usually.


Yes, but on the other hand you can take a trip from LA to Mammoth Lakes with a few friends and get there in a few hours, at less cost than driving.

Or fly to Vegas in half the time it takes to drive, or fly to Catalina Island, which you can't get to any other way except by ferry.


Not sure if meant seriously or sarcastically. But it seems like a pretty reasonable option to me. Yeah, commercial air travel is super cheap and fast these days, especially for very far destinations. But it has its headaches and downsides too, and plenty of people choose to drive for various reasons even if it's slower and more expensive.

If you have the skills and equipment, it might well be worth the $900-$1700 or whatever. You're also potentially taking 3 other people, and luggage, and avoiding the costs and headaches of big commercial airports and airlines, plus TSA etc.


A ticket to NY is like $200. Even for 3 people it's fairly cheap.

Spending over $1700 (that's just fuel; add probably $900 for oil/maintenance, $100 for landing/tie-down fees, $200 for a hotel...) for the luxury of not spending a few hours in an airport would certainly be rich.


don't you need some place to put the plane!?


You can park at pretty much any airport or even a strip of land. And there are more private airports and airfields than big commercial ones.


I always question how accurate these types of accounts were. Even if she wrote this after his death, his successor obviously wouldn’t look too kindly on it being disparaging.


It reads a lot like those instagram/magazine profiles of "I get up at 7am and eat healthily" (author actually gets up at 9am and eats junk food on most other days).

Worth noting that this is a relatively immobile king. Various other kings spent a lot of time on:

- hunting for sport

- military campaigns (e.g. Richard Lionheart spent more time out of England than in it)

- assizes (mobile courts)

- summer residences (Versailles is a huge, late example of this, but lots of monarchs around the world have had holiday homes of one sort or another)


> e.g. Richard Lionheart spent more time out of England than in it

To be fair, most of his prize holdings were also out of England.


> To be fair, most of his prize holdings were also out of England.

To be fair, he wasn't really English and didn't speak the language either. It wasn't until Henry IV (reigned 1399 - 1413) that a post-invasion King's mother tongue was English. Most people don't realise that for over 300 years the language at court was Norman French.


If you want to talk about competent governance, look to Edward I, the "Hammer of the Scots" villainized in Braveheart. He made it law that legally binding contracts must be written in plain English, so that both parties would (most likely) be able to understand what they were agreeing to...

Even though he did not speak it, himself.


Btw: the successor was "Charles the Mad", who is known for mental illness and psychotic/schizophrenic episodes and had to be placed under a regency. So maybe she also wanted to give an example of how a sane king normally ruled.

https://en.wikipedia.org/wiki/Charles_VI_of_France


The article says the depiction may reflect idealization, and is also a deliberate inspirational portrayal.


I recently joined a very old company with many lifers, and I continuously run into this mentality: "I can't explain it now, but I'm sure there was a good reason for it, so we're gonna continue doing it this way."


The real issue is that most businesses just don't document anything to do with their processes. Chances are there are a handful of things that there's a good reason for, and they do need to be done that way. Except the people who identified the original problem and came up with the original solution have all left the company, so now there's nobody around who has put in the effort (or been given the time) to figure out why things are done the way they are. And the last time one of the things that had been done forever was suddenly stopped, it caused untold chaos, so now the directive is to just keep doing everything we've always done.


Of course it is typically wise to consider Chesterton's fence.


Per Chesterton's Fence, isn't this the right course of action for any individual who is unsure of why the practice was started?

https://www.lesswrong.com/w/chesterton-s-fence


I like that fence, but I consider the best course of action to be going and finding out why the thing is done the way it is, even if it necessitates careful investigation.


I always understood Chesterton's to mean that you should leave the fence there while you are investigating why it's there and whether it's still needed. Not a blind "don't do anything".


I'd wholeheartedly agree with your assessment of the best course of action. To the points raised in the conversation above: there is definitely too little understanding of the pattern and too much blind adherence to the pattern as a widespread institutional practice across many institutions.


What’s your recommendation for finding the dns resolver closest to me? I currently use 1.1 and 8.8, but I’m absolutely open to alternatives.


The closest DNS resolver to you is the one run by your ISP.


Actually, it's about 20cm from my left elbow, which is physically several orders of magnitude closer than anything run by my ISP, and logically at least 2 network hops closer.

And the closest resolving proxy DNS server for most of my machines is listening on their loopback interface. The closest such machine happens to be about 1m away, so is beaten out of first place by centimetres. (-:

It's a shame that Microsoft arbitrarily ties such functionality to the Server flavour of Windows, and does not supply it on the Workstation flavour, but other operating systems are not so artificially limited or helpless; and even novice users on such systems can get a working proxy DNS server out of the box that their sysops don't actually have to touch.

The idea that one has to rely upon an ISP, or even upon CloudFlare and Google and Quad9, for this stuff is a bit of a marketing tale put about by these self-same ISPs and CloudFlare and Google and Quad9. Not relying upon them is not actually limited to people who are skilled in system operation; rather, it is merely limited by what people run: black-box "smart" tellies and whatnot, and the Workstation flavour of Microsoft Windows. Even for such machines, there's the option of a decent-quality router/gateway or simply a small box providing proxy DNS on the LAN.

In my case, said small box is roughly the size of my hand and is smaller than my mass-market SOHO router/gateway. (-:


Is that really a win in terms of latency, considering that the chance of a cache hit increases with the number of users?


I used to run unbound at home as a full resolver, and ultimately this was my reason to go back to forwarding to other large public resolvers. So many domains seemed to be pretty slow to get a first query back, I had all kinds of odd behaviors from devices around the house getting a slow initial connection.

Changed back to just using big resolvers and all those issues disappeared.


Keep in mind that low latency is a different goal than reliability. If you want the lowest-latency, the anycast address of a big company will often win out, because they've spent a couple million to get those numbers. If you want most reliable, then the closest hop to you should be the most reliable (there's no accounting for poor sysadmin'ing), which is often the ISP, but sometimes not.

If you run your own recursive DNS server (I keep forgetting to use the right term) on a local network, you can hit the root servers directly, which makes that the most reliable possible DNS resolver. Yes you might get more cache misses initially but I highly doubt you'd notice. (note: querying the root nameservers is bad netiquette; you should always cache queries to them for at least 5 minutes, and always use DNS resolvers to cache locally)
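As a sketch, here's roughly what such a setup might look like with unbound (a common choice for this; the option values below are illustrative, not a recommendation). With no forward-zone configured, unbound recurses from the root servers itself rather than forwarding to a big public resolver, and the cache floor reflects the netiquette note above:

```
# unbound.conf -- minimal local recursive resolver (illustrative values)
server:
    interface: 127.0.0.1              # answer only on loopback
    access-control: 127.0.0.0/8 allow # permit queries from this machine
    cache-min-ttl: 300                # keep answers at least 5 minutes
    prefetch: yes                     # refresh popular entries before expiry
```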


> If you want most reliable, then the closest hop to you should be the most reliable (there's no accounting for poor sysadmin'ing), which is often the ISP, but sometimes not.

I'd argue that accounting for poorly managed ISP resolvers is a critical part of reasoning about reliability.


It is. If latency were important, one could always aggregate across a LAN with forwarding caching proxies pointing to a single resolving caching proxy, and gain economies of scale by exactly the same mechanisms. But latency is largely a wood-for-the-trees thing.

In terms of my everyday usage, for the past couple of decades, cache miss delays are largely lost in the noise of stupidly huge WWW pages, artificial service greylisting delays, CAPTCHA delays, and so forth.

Especially as the first step in any full cache miss, a back-end query to the root content DNS server, is also just a round-trip over the loopback interface. Indeed, as is also the second step sometimes now, since some TLDs also let one mirror their data. Thank you, Estonia. https://news.ycombinator.com/item?id=44318136

And the gains in other areas are significant. Remember that privacy and security are also things that people want.

Then there's the fact that things like Quad9's/Google's/CloudFlare's anycasting surprisingly often results in hitting multiple independent servers for successive lookups, not yielding the cache gains that a superficial understanding would lead one to expect.

Just for fun, I did Bender's test at https://news.ycombinator.com/item?id=44534938 a couple of days ago, in a loop. I received reset-to-maximum TTLs from multiple successive cache misses, on queries spaced merely 10 seconds apart, from all three of Quad9, Google Public DNS, and CloudFlare 1.1.1.1. With some maths, I could probably make a good estimate as to how many separate anycast caches on those services are answering me from scratch, and not actually providing the cache hits that one would naïvely think would happen.

I added 127.0.0.1 to Bender's list, of course. That had 1 cache miss at the beginning and then hit the cache every single time, just counting down the TTL by 10 seconds each iteration of the loop; although it did decide that 42 days was unreasonably long, and reduced it to a week. (-:
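A stdlib-only sketch of that kind of TTL-probing loop (the resolver address, query name, and 10-second spacing are illustrative assumptions, not anyone's actual test script). It hand-builds a DNS query and pulls the TTL out of the first answer record:

```python
import random
import socket
import struct
import time

def build_query(name, qtype=1):
    """Build a minimal DNS query (recursion desired) for an A record."""
    header = struct.pack(">HHHHHH", random.randint(0, 0xFFFF), 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

def first_answer_ttl(resp):
    """Return the TTL of the first answer record, or None if there are none."""
    _, _, qdcount, ancount, _, _ = struct.unpack(">HHHHHH", resp[:12])
    if ancount == 0:
        return None
    off = 12
    for _ in range(qdcount):            # skip the question section
        while resp[off] != 0:
            off += 1 + resp[off]        # walk QNAME labels
        off += 1 + 4                    # root byte + QTYPE/QCLASS
    if resp[off] & 0xC0 == 0xC0:        # answer NAME: compression pointer (usual case)
        off += 2
    else:                               # ...or uncompressed labels
        while resp[off] != 0:
            off += 1 + resp[off]
        off += 1
    _type, _cls, ttl = struct.unpack(">HHI", resp[off:off + 8])
    return ttl

if __name__ == "__main__":
    # TTLs that keep resetting to the zone maximum on queries only seconds
    # apart suggest successive lookups hit independent anycast caches.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2)
    for _ in range(3):
        s.sendto(build_query("example.com"), ("9.9.9.9", 53))
        print(first_answer_ttl(s.recv(512)))
        time.sleep(10)
```

A steadily counting-down TTL means you kept hitting the same cache; a TTL stuck at the maximum on every probe means each query was answered from scratch.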


Windows 11 doesn't allow using that combination


That's not how NFPs work. I'm on the board of an NFP; we absolutely are able to save money year to year. The big difference between us and a regular corp is we don't have shareholders or paid board members.


I wasn't clear. Mozilla was making > $400M from the Google deal. They needed to spend most of the money, otherwise why would they be a nonprofit. So they would spend the vast majority of it on boondoggles, lots of all-hands in expensive locations, $400k salaries, etc.


There are many NFPs with multi-billion dollar endowments, I don’t really understand this line of reasoning…


Ah yes, because that's worked out so well with China.

Anyone with internet access in NK is working at the behest of the government.


It is impossible to pause puberty or any other biological process. You cannot delay and restart something that is biologically time-bound. By giving a child puberty blockers you permanently prevent them from becoming an adult. They will never develop any of the features required for having children, they will never experience the brain developments that help with reasoning and empathy.

There are no studies on this because doing such studies is considered grossly unethical and evil, same as studying brain lobotomies in infants. As such we have no science on this; there are just people who have decided one thing and are performing live experiments without any controls. However, it should be noted that until very recently there was no significant incidence of unexplained child suicide, no significant incidence of unexplained teenage suicide, nor a significant incidence of unexplained young adult suicide. This is 100% social contagion, exacerbated by evil, greedy pharmaceutical orgs who have latched on to small childhood insecurities and used them to build a multi-billion dollar industry mutilating and disfiguring healthy people.


Puberty blockers have been used on children to manage early puberty. The meds don't know if you're trans or not, so it's only reasonable to assume giving them to trans kids would have similar outcomes.


He would of course use his super intelligence and alien space lasers to target only the people actually launching the rockets; his from-the-future space lasers are so accurate and advanced that there is no possibility of collateral damage. He would then be able to explain to the "civilians" who until 10 minutes ago were cheering for his demise that actually he's on their side and they don't really hate him, and therefore, because he knows them better than they know themselves, they will magically stop supporting their brothers, fathers, uncles, and cousins who have spent their entire lives trying to kill him; instead all those people will suddenly realize that he alone is their savior, and thus there will be peace.

The above is the actual complete thought process of all the people who casually complain about what Israel is doing without any understanding of the history or the region.


I like Docker because it makes it super easy to try out apps that I don't necessarily know I want, and I can just delete them.

I'm also confused about the claim that there is no config file... everyone I know uses Docker Compose; that's really the only right way to use Docker. Using a single docker command is for testing or something. If you're actually using the app long term, use Docker Compose. Also, most apps I use do have a specific place for configuration you can set in the docker-compose file.


The title should be the opposite imo. Why everyone should use docker


After reading this, I assumed this was some level of parody:

“If a program can be easily installed on Debian and (nowadays) installed on Arch Linux, that covers basically all Linux users.”


it really does allow easy setup with compose, multiple containers, different versions, etc. I have been setting up linux servers and desktops for decades but docker made it way easier for a lot of things

I still have email server setups I would never dare try to touch with docker, but I know it is possible

like a lot of things it has its uses and it's really good at what it does


i love the convenience and ease-of-use but worry about the security compared to full-blown vm


Kata Containers is a nice compromise. Each container is run as a microVM.


On the surface, Kata appears to be a variation of LXC/LXD.


LXC/LXD still shares the same host kernel. Kata runs a full instance of qemu including its own kernel.

FWIW, here's a slightly redacted example qemu instance launched by kata (the -append kernel command line quoted so it would actually run as one argument):

/opt/kata-3.18.0/bin/qemu-system-x86_64 \
  -name sandbox-FOO -uuid FOO \
  -machine q35,accel=kvm,nvdimm=on \
  -cpu host,pmu=off \
  -qmp unix:fd=3,server=on,wait=off \
  -m 2048M,slots=10,maxmem=1032958M \
  -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m \
  -device virtio-serial-pci,disable-modern=false,id=serial0 \
  -device virtconsole,chardev=charconsole0,id=console0 \
  -chardev socket,id=charconsole0,path=/run/vc/vm/FOO/console.sock,server=on,wait=off \
  -device nvdimm,id=nv0,memdev=mem0,unarmed=on \
  -object memory-backend-file,id=mem0,mem-path=/opt/kata-3.18.0/share/kata-containers/kata-ubuntu-noble.image,size=268435456,readonly=on \
  -device virtio-scsi-pci,id=scsi0,disable-modern=false \
  -object rng-random,id=rng0,filename=/dev/urandom \
  -device virtio-rng-pci,rng=rng0 \
  -device vhost-vsock-pci,disable-modern=false,vhostfd=4,id=vsock-FOO,guest-cid=FOO \
  -chardev socket,id=char-FOO,path=/run/vc/vm/FOO/vhost-fs.sock \
  -device vhost-user-fs-pci,chardev=char-FOO,tag=kataShared,queue-size=1024 \
  -rtc base=utc,driftfix=slew,clock=host \
  -global kvm-pit.lost_tick_policy=discard \
  -vga none -no-user-config -nodefaults -nographic --no-reboot \
  -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on \
  -numa node,memdev=dimm1 \
  -kernel /opt/kata-3.18.0/share/kata-containers/vmlinux-6.12.28-157 \
  -append "tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 console=hvc0 console=hvc1 quiet systemd.show_status=false panic=1 nr_cpus=88 selinux=0 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1" \
  -pidfile /run/vc/vm/FOO/pid \
  -smp 1,cores=1,threads=1,sockets=88,maxcpus=88


meaning not a full-blown vm security boundary?


It is


thank you for posting the qemu details above


thanks! so they achieve the convenience of docker with the added security of full-blown kvm? trading some perf and resource-use?

https://katacontainers.io/


Yes. MicroVMs are stripped down to the basic hardware needed (AWS's Firecracker, for example), so they "boot" really fast, in tenths of a second for my containers, but you do have the extra resource overhead of running a second kernel and the performance reduction of the VM context switches. That said, it's minor enough that I feel the security tradeoff is well worth it.


In addition, Docker Compose also supports reading environment variables / .env files from outside, which you can use for configuration inside the docker-compose file.
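For example, a sketch of what that looks like (service name, image, and variable names here are made up for illustration):

```yaml
# docker-compose.yml -- values like APP_PORT come from a sibling .env file
services:
  app:
    image: example/app:latest           # hypothetical image
    ports:
      - "${APP_PORT:-8080}:8080"        # interpolated from .env, default 8080
    environment:
      - APP_LOG_LEVEL=${APP_LOG_LEVEL:-info}
    env_file:
      - .env                            # also inject the .env contents into the container
```

With `APP_PORT=9090` in a `.env` file next to it, `docker compose up` would publish the app on port 9090 without editing the compose file itself.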


Is there a significant difference between this and nginx proxy manager?


They're both reverse proxies built on nginx, but the whole point of BunkerWeb is that it's a WAF, which NPM is not, so that's a significant difference.

In short, NPM doesn't do any of the stuff listed under Security Features here: https://docs.bunkerweb.io/latest/#security-features


NPM will automate Let's Encrypt certificate generation but you're right about the other listed features.

