What about this change from 'latest' to 'quarterly' with no good way to change that for pkg? I don't want to only get new packages 4 times a year. Is that what's going to happen?
It also feels like the documentation around things like the pinning you can do in apt is lacking; if there were some packages I could say "retrieve these from latest" for, I'd be more okay with everything else only updating 4x a year.
> What about this change from 'latest' to 'quarterly' with no good way to change that for pkg? I don't want to only get new packages 4 times a year.
From the Release Notes:
> The default pkg(8) repository set in /etc/pkg/FreeBSD.conf now defaults to the quarterly package set. To use the latest branch (as was the previous default), the comment at the top of /etc/pkg/FreeBSD.conf explains how to disable the default repository and specify an alternative repository.
If I'm reading /etc/pkg/FreeBSD.conf correctly, swapping over to use HEAD instead of the Quarterly branch is as simple as creating /usr/local/etc/pkg/repos/FreeBSD.conf with the following content?
edit: originally I replicated all of the contents of /etc/pkg/FreeBSD.conf in /usr/local/etc/pkg/repos/FreeBSD.conf, but based on pkg.conf(5), the contents of the latter override keys in the former, so you only have to specify the differences
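For anyone landing here later, the override only needs the one key that changes. A sketch of what I believe that file looks like (double-check against the comment block in /etc/pkg/FreeBSD.conf itself, which documents the exact format):

```
# /usr/local/etc/pkg/repos/FreeBSD.conf
# Override just the url key; everything else is inherited per pkg.conf(5).
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}
```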
FreeBSD's packages being so up to date is literally one of the reasons I switched to it from Debian.
I don't understand this change, nor why it's the default. Actually having up-to-date packages was one of BSD's biggest selling points. Sick of distros with package systems so out-of-date they might as well not exist.
Some organizations value stability over new features, which is why you see so many places using distros like debian stable, centos, etc. It may not fit your needs, but there are plenty of use cases for preferring stability and security over new features.
There are advantages, yes, not denying that. But it seems vanishingly rare to find a packaging system that doesn't have this problem, and that's frustrating in particular for a programmer, when almost every language you use lacks an up-to-date package and winds up requiring an external install that then isn't even necessarily version tracked.
And I do wonder at the security implications of that. Some of the more popular languages like Python get semi-regular security updates even on stable branches like Debian's, but then I see stuff like Racket still being on 5.2 which came out in 2011 and I have to wonder how that affects the security profile if some "stable" package you've installed is depending on a scripting language package that still has a vulnerability in it because the package in "stable" hasn't been updated in half a decade.
To my understanding, one package set is built from the head of ports; that's been the default till now. Every quarter they create a new branch from the head, give it a little time (weeks?) to shake out the bugs, and build the quarterly package set from that; this will be the new default.
I've been using the quarterly package set for some time now to avoid the occasional breakage I saw on the latest package set. Having three-month-old software is a totally worthwhile trade off for me.
There's no reason I know of that you couldn't switch back to the latest package set and keep building your ports from the head of the tree.
AFAIK portsnap has no concept of pulling from anything other than HEAD when updating the tree. If you only want quarterly ports, you have to explicitly check out a quarterly branch (via svn these days; cvsup was retired for ports a while back).
No, because Linux containers are still running the Linux kernel, and the Linux kernel doesn't know how to run FreeBSD binaries.
You can do the opposite however -- running Linux inside jails on FreeBSD hosts. This is how docker-on-freebsd works, and also how FreeBSD desktop systems usually cope with software like Flash plugins which are only available as Linux binaries.
Btw, thanks for FreeBSD compat work for running on AWS/Xen, even if it might've been mostly to host Tarsnap on it. ;)
It's possible to run a NetBSD Xen dom0 (host system) on bare metal, under VMware, or as a Xen HVM guest: it takes a few patches, a kernel build, and some config tweaks to get going, but it works stably. [0] There's no XAPI (the Xen remote management API) support, however, so XenServer tools and other third-party XAPI integrations probably won't work. (FYI: the XAPI server side is coded in OCaml; don't ask how I know that. ;) [1])
For most people w/ bare metal or a rented colo who just want a turn-key, supportable hypervisor, I would advise Citrix XenServer (the commercial official Xen: free download, it seems to be more stable than XCP, and includes XAPI) or VMware ESXi (free download, very stable, gets $$$ quickly). After that, you can run whatever OS/es you like. (IIRC a ton of AWS boxes run heavily-modified open-source Xen 3.3.x.)
For desktop/laptop dev: VirtualBox, VMware Fusion/Workstation or qemu.
I did no such thing. My work was all to add AWS/Xen compatibility to FreeBSD. :-)
In all seriousness though, while wanting to host Tarsnap on an OS I knew and trusted was my justification for spending so much time on FreeBSD/EC2, my actual reason had more to do with wanting to make sure that FreeBSD didn't fall behind.
Ah cool. Adoption is like a stochastic transfer function with a dependent variable "modern usability."
Speaking of usability, here's a patch to libfetch to ignore crusty non-RFC spurious responses from ftp servers: https://gist.github.com/steakknife/b4772a5deb6afc8851e0 (I have absolutely zero idea how to contribute code/patches to FreeBSD; it's not obvious/easy from the docs.)
A best practice is to ask each new person to keep notes of obvious questions and unclear details, and to put them in an internal, secure wiki. The issue is that, as founders, we often don't think about what was obvious to us when we deployed an app, or about all the server tweaks necessary to get it going, when it comes to teaching someone else or replicating what was done. So the new people learn some things and put them into the wiki. Rinse-later-repeat until there are few/no questions as the team grows.
It's continual DR/BCP housekeeping: architecture diagrams, instance inventory/config and other critical info (contact / escalation info) updated so that it's run-over-by-a-bus and EC2-burns-down (almost) resilient.
As you scale, having someone put server config all in Chef or Puppet (cfg management stored in git) will also help reduce deployment pain at the expense of initial setup pain. Initially, a wiki page containing a giant shell script for each server box kind is usually a faster hack.
Just use multiple services for HA; selecting any single service is a SPOF regardless, and that's not specific to Tarsnap. Also, check out Tahoe-LAFS. Non-problem strawman solved. :)
What's the point? Tarsnap is modestly cheap and extremely secure, and there are other backup services which are equally cheap and almost as secure. Always use at least two services, so Tarsnap going down or bust (heaven forbid) isn't a big deal. There is never a perfect real-world solution, but risk mitigation at higher levels of abstraction is how you get real, production-grade fault tolerance, and it's easier than dredging up edge-case issues without offering usable alternatives.
rsync.net is also pretty usable, but I wouldn't trust that there's any in-flight or at-rest security: http://www.rsync.net/
Tahoe LAFS provider https://leastauthority.com/ (it's possible to run your own Tahoe LAFS servers on cloud/colo boxes on several providers)
The best-practice mitigation to allow backups on less secure providers is to encrypt locally (effectively end-to-end encryption) and distribute restore keys to a decent quorum of managers/founders/supervisors.
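To make the "encrypt locally" part concrete, here's an untested sketch (paths, key handling, and cipher choice are my illustration, not a recommendation from the original post):

```shell
#!/bin/sh
# Hypothetical sketch: encrypt a backup locally so the provider never sees plaintext.
set -eu
cd "$(mktemp -d)"

# Generate a random restore key; this is what gets distributed to a
# quorum of founders/managers (e.g. split with a tool like ssss).
head -c 32 /dev/urandom | base64 > restore.key

# Compress and encrypt in one pass; only the ciphertext leaves the box.
tar czf - -C /etc hosts \
  | openssl enc -aes-256-cbc -pbkdf2 -pass file:restore.key -out backup.tar.gz.enc
```

Restoring is the same pipeline in reverse: decrypt with the quorum-reassembled key, then untar.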
Having done offsite LTO tape vaulting and formal disaster recovery / business continuity planning at the organization level, it's a whole lot cheaper, easier and more flexible to use multiple cloud providers for most real use-cases (apart from multiple PiB datasets).
The point was that trusting all your backups to a one-man operation is not particularly wise, though cperciva provided more information. HA and using multiple services was not part of the conversation.
Being in a position where your off-site backups becoming unavailable is "not particularly wise" means you've got _other_ problems to solve.
If my offsite backups vanished (the building they're in burnt down, for example) I'd a) know about it promptly, and b) arrange additional copies and security for my current set of on-site backups. Just the same as if Colin gets hit by that bus and his service goes down without anyone knowing how to, or caring about, bringing it back up.
If Tarsnap is a single point of failure for you, you're doing it wrong.
Fault-tolerant backup architectures are cheaper and more reliable than depending on any one service, regardless of due-diligence outcome on them. Any shop of any sensible size/scale would mitigate risks accordingly, or risk losing all their data and going out of business within a few months.
Nice! For larger sites, having a NAS/SAN setup with enough cheapo spinning-rust is a good way to consolidate onsite->offsite backups (using multiple providers).
0. Use backup agents for local boxes to make a compressed, encrypted backup to the NAS/SAN on some host-unique temporary dir on the same volume as the offsite dir.
1. Test restores of backups to throwaway VMs before blessing them as good for offsite storage. Fail any backups that fail this test. (Very important for checking backup jobs and restore automation processes. An untested backup == not a backup.)
2. Use mv to move the compressed backup from host-unique dir into the offsite dir.
3. Continuously replicate the offsite dir to offsite providers.
4. Prune old jobs as needed, which then replicates. (Be sure to set provider-specific previous backup retentions appropriately, to avoid error replication issues.)
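Steps 0-4 above can be sketched as a shell script (every path is a hypothetical local stand-in; the restore test is reduced to an archive integrity check, and provider replication is left as a commented placeholder):

```shell
#!/bin/sh
# Sketch of the backup pipeline; all paths here are hypothetical stand-ins.
set -eu

NAS=$(mktemp -d)                    # stand-in for the NAS/SAN volume
STAGING="$NAS/staging/$(uname -n)"  # 0. host-unique temporary dir
OFFSITE="$NAS/offsite"              # dir that gets replicated offsite
mkdir -p "$STAGING" "$OFFSITE"

# 0. Compressed (and, in production, encrypted) backup into the staging dir.
tar czf "$STAGING/backup.tar.gz" -C /etc hosts

# 1. Test before blessing; reduced here to an archive integrity check.
tar tzf "$STAGING/backup.tar.gz" >/dev/null || { echo "restore test failed" >&2; exit 1; }

# 2. mv within one volume is atomic, so replication never sees a partial file.
mv "$STAGING/backup.tar.gz" "$OFFSITE/backup.tar.gz"

# 3. Continuously replicate the offsite dir (placeholder):
# rsync -a "$OFFSITE/" provider1:/backups/"$(uname -n)"/

# 4. Prune old jobs per retention policy (placeholder).
ls "$OFFSITE"
```

The mv in step 2 is the important bit: because it's a rename within one volume, the replicator in step 3 only ever sees complete, already-tested backups.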
Does anyone know how the virtio network performance is on this release when virtualized under qemu/KVM? I know that pfSense is moving to 10.2 soon and I've been unable to use it virtualized due to its atrocious virtio net performance.
While the Linux-based firewall alternatives are incredibly fast, they just don't have anywhere near the ease of use or feature set of pfSense!
I had that same problem when I was proving out FreeBSD to replace Linux at $work, but found an erratum; disabling hardware checksum offloading fixed everything:
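I don't have the erratum link handy, but the workaround amounts to turning off checksum offload on the virtio NIC. A sketch of the rc.conf line (interface name and address method are assumptions; the -rxcsum/-txcsum flags are the standard ifconfig(8) options):

```
# /etc/rc.conf -- disable RX/TX checksum offload on the virtio interface
ifconfig_vtnet0="DHCP -rxcsum -txcsum"
```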
Quick reminder (not that you necessarily need it, but lots of people do forget this): It's usually a good idea to run
freebsd-update fetch
and install any updates (rebooting if necessary) before you try to upgrade to a new release. On occasion there have been problems in freebsd-update which need to be fixed.
EDIT: Don't run 'freebsd-update fetch install && freebsd-update ... upgrade', since if the first command installs kernel updates you might need to reboot before downloading upgrades. (Ok, it's very unlikely. But it's theoretically possible that a kernel update would affect the upgrade-downloading process.)
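Putting that together, the safe ordering looks roughly like this (sketch; the release string is illustrative, and the first reboot only matters if a kernel update was actually installed):

```
freebsd-update fetch
freebsd-update install
# reboot here if a kernel update was installed, then:
freebsd-update -r 10.2-RELEASE upgrade
freebsd-update install
# reboot, then run `freebsd-update install` again to finish the userland
```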
Only for some kernel modules, because a few occasionally have issues, but generally they're not supposed to.
Otherwise FreeBSD is backwards compatible. You can run FreeBSD 2.0 binaries and libraries just fine if you want. There's some on the official FreeBSD cluster, I think.
In case you didn't know, the official FreeBSD packages for 10.1 and 10.2 will continue to be built on 10.1 -- the oldest supported release in the 10.x train.
Pretty close. Compatibility back to FreeBSD 4 is a ports package away, and there's a kernel module to support a.out binaries if you really need to run those 20+ year old programs. Some of the older syscalls don't have 32-bit wrappers, though, so you may not get full support for absolutely positively everything without emulation, but pretty damn close.