Hacker News
FreeBSD Core Team Statement on FreeBSD Development Processes (freebsd.org)
145 points by vermaden on March 29, 2021 | 75 comments


Netgate doesn't exactly have the best reputation as it is.

There is a reason there's a growing push of people leaving pfSense for OPNsense.

Remember when they bought their competitor's domain and put up this? http://web.archive.org/web/20160314132836/http:/www.opnsense...


They were supposedly also squatting r/OPNSense, which is why the official subreddit is r/OPNsenseFirewall.

Just checked:

    r/opnsense is a private community

    The moderators of r/opnsense have set this community to private.
    Only approved members can view and take part in its discussions.


Not Netgate


Kinda feel bad for them here though, because they did things initially pretty much exactly right: paid real money to a developer with reasonable credentials to work upstream to develop this feature, apparently with the best of intentions. That is pretty much a textbook example of how corporations can participate in open source (without in-house developers). And then it somehow blew up in their face.

The response then from Netgate was very much on-brand, they indeed have a history of colorful PR. And of course they are ultimately responsible for things they put their name on.

But this also highlights how difficult participating in open source can be.


I don't know, I've never heard of Arm or NetApp or Netflix having these issues. They just quietly contribute both employee time and financially to the FreeBSD project without fanfare (I guess at times Netflix does blog about their contributions but not in the same look-at-me way).

Netgate seems to want to brag about their contributions (despite the fact they don't even show up on the foundation page) while also somehow not taking any responsibility for when they screw up.

https://freebsdfoundation.org/our-donors/donors/?donationYea...


Netflix presents their FreeBSD work at FOSDEM fairly often. They're usually good technical talks that don't come off as PR.


Which makes sense. The type of person Netflix wants to know about their FreeBSD contributions is the type of technical person who will be impressed by non-PR technical talks and think "I want to work with this guy". The average Netflix customer shouldn't care at all about the technical background, so doing non-technical PR wouldn't gain anything.


>Kinda feel bad for them here though because they did things initially pretty much exactly right: paid real money to a developer with reasonable credentials to work in upstream to develop this feature with apparently best of intentions. That is pretty much textbook example how corporations can participate in open source (without in-house developers).

I mean, I see what you're saying, but I can't 100% agree with this. As a textbook example, I still remember way back when Apple was getting started with KHTML and what would eventually become WebCore and WebKit, and they got quite a lot of heat for a while early on for just doing big fat block code dumps without following project procedures and such. "Contributions welcome" is true for anyone, but there is an implicit (and often these days explicit for larger projects) "...as long as you follow our code standards, communicate, put in the basic effort to do a professional job, etc". Contributions don't have to be perfect but there needs to be some meat there or it's just a waste of everyone's time. Even for a big corp doing big work, just "throwing code over the fence" so to speak often is not appreciated.

Netgate hired him, so they were ultimately responsible. They apparently did a very, very poor job of both checking on him and supporting him, which is on them. They made no effort to coordinate with the guy who created the entire thing. They didn't do a basic sanity check of what he'd finished before trying to have it pushed right to main, and they were aggressive about deploying a security product that was really shoddy and, in context, quite dangerous right out to the general public. And then they were complete dicks about even relatively low-key efforts to fix it with as much face saved all around as possible.

I'm sorry, but that doesn't actually seem like a great example of how corps can participate productively. Nor do I honestly think it "highlights how difficult participating in open source can be". You can't just expect unlimited gratitude purely for volunteering something useless/dangerous.


Netgate isn't exactly your average open source contributor either. In fact, their product is very much closed source.

Even the areas of it that have been "open" have largely been hidden, incomplete, and non-buildable for the longest time.

It's really a mixed bag. They pay for a bit of development but throw their weight around in ways that are harmful to the project.


That's why it's sad that this time, when they seemingly behaved mostly like a good citizen, it didn't work out very well, especially for them.


>But this also highlights how difficult participating in open source can be.

I dunno, I was shopping around for network appliances and I considered a Netgate box until I saw how Netgate representatives conduct themselves on their reddit. Participating in open source doesn't need to be difficult; a modicum of humility is all it ought to take, but the Netgate folks seem arrogant and abrasive pretty much as their default position.

Gave my money to someone else instead.


Who did you give your money to instead of Netgate?


Sadly, Ubiquiti. Since I was already buying access points from them, I decided open source wasn't worth dealing with assholes and just bought an EdgeRouter X.


>paid real money to a developer with reasonable credentials to work in upstream to develop this feature with apparently best of intentions.

Yes, but Netgate's TNSR is based on Linux. So it seems they will be pivoting to Linux, or at least putting less emphasis on FreeBSD.

My guess is that going forward there will be less money and participation going to FreeBSD from Netgate. Whether that is a good thing or bad thing depends on your perspective.


TNSR is really based on VPP; Linux is more of a platform than the actual functionality.

Our commitment to FreeBSD continues unabated.


The person they hired to do this work (Kip Macy) is a massive POS.

https://abcnews.go.com/US/exclusive-landlord-hell-defends-te...

Just in case there wasn't enough evidence that the people at Netgate are not good human beings.


Holy cow. Is there proof they were the ones who did that? Can't that be considered slanderous in some parts of the world?!



The line mocking VPN implementation hasn’t aged well. That site is a piece of work, from top to bottom.


Holy shit, that's atrocious!


Here is a writeup that provides a good review of the context for this [0].

0. https://arstechnica.com/gadgets/2021/03/buffer-overruns-lice...


Honestly, the criticism of the freebsd code review process seems way overblown. To me, it looks like a smaller team that works well together in a less formal manner, and someone exploited that.

They caught it, the person has lost trust, they all moved on. Big whup.


Well, it's not the end of the world (and I feel much more sympathy for this guy's poor tenants than I do for pfSense customers), but...

Bad code going unreviewed from a single author into a main branch from which people build production systems is definitely beyond "Big whup" severity. The equivalent would be if one person pushed an unreviewed driver into Fedora Rawhide or an Ubuntu Beta, say. It's a clear violation of the principles behind the service the distro is supposed to be providing for you.

There are a zillion ways to make sure code gets reviewed before merge. Linux does it informally via Signed-off-by headers and a tree structure of trusted maintainers. Services like GitHub provide automated tooling to enforce review. FreeBSD just needs to pick one. It's 2021, for goodness' sake. A fixed COMMITTERS list just isn't going to cut it.


>In December 2020, development of the base system migrated from Subversion to Git.

Call me mean, but I don't associate FreeBSD with a project that is quick to adopt modern development practices.

I remember the time that FreeBSD moved from CVS to SVN, and it was hailed as revolutionary. The world was already embracing Git, which, despite its flaws back then, was perceived as bliss compared to SVN.


That's not mean, it is a compliment. Although the phrase "modern development practices" makes it a loaded statement.

There is much value in not constantly hopping onto every hypetrain that comes along. For a reliable project, I want to see engineering practices that favor remaining on proven stable technologies as long as possible. Let all the hypes die down, don't go down with them.


I don't see how code review is "modern development practices" (in a bad way). It's common sense.

An analogy would be lawyers. They don't represent family members because they may overlook something they think is meaningless but that a fresh pair of eyes would flag as very important. With code, it's the same: your eyes are biased towards your own code, which can cause you to miss a bug.


The items they discuss, like more rigor on MRs, static analysis, etc. seem like no-brainers from five years ago. I'm not knocking them, I don't know the scope of their limitations. And they're doing the improvements now, so good on them. Just seems odd some things weren't being done previously.


My guess, after lurking around the internet for the past few decades and keeping an eye on what the BSDs generally do, is that it's a mix of NIH/people who don't want to change their existing workflows, and lack of financial resources. It's hard to say what the percentages are for the two.


Even if you trust your team members to do a good job, another pair of eyes on the code before it goes in rarely hurts. Everybody slips up once in a while, and a review has at least a chance of catching the worst mistakes early. Team size doesn't matter. You just need to bring a little discipline to the process.


This weekend I came across an interesting comment on the Netgate WireGuard fiasco from former FreeBSD Core Team member David Chisnall. He considers it a failure of FreeBSD's process.

https://lobste.rs/s/sh2kcf/buffer_overruns_license_violation...


FWIW, "good review" is pretty debatable (source: was involved) -- it provides a decent overview, but you should read it with the thought in mind that the author did some pretty heavy cherry-picking to support their arguments. Also, this indeed is not "Kernel Debugging Quarterly."


I thought the Ars article was a pretty good overview for the context of this current issue and why there needs to be a statement on FreeBSD development practices at all. I don't expect Ars to get into too many kernel specifics, so won't knock them for that. But as far as bringing more attention to how the mess started in the first place (and was dealt with), I think the Ars article is a pretty good place to start.

I mean, the question of how the rough-draft Wireguard implementation made it into the kernel in the first place without sufficient review is a pretty big question.

I'm happy the FreeBSD team is addressing it directly.


Yeah, sorry, I'm mostly lamenting that there's not a more objective overview. This whole situation is pretty tiring at this point, so it's unlikely that one will surface except as a post-mortem down the road.


I think that unfortunately a lot of people can't separate "this code is bad" from "this person is bad".

We all have times when we don't ship our best work. Life happens. It's also worth noting to the readers that Donenfeld's criticism of bad code is completely dispassionate and that he isn't above criticising himself. He seems to be genuinely among the nicest people in the community.

This seems unfortunate for mmacy, and it was exacerbated by the behavior of Netgate.


> It's also worth noting to the readers that Donenfeld's criticism of bad code is completely dispassionate and that he isn't above criticising himself.

Er, what? Donenfeld's hyperbole and wild characterizations are part of what fanned the flames and made this a tech-press mess instead of some quiet collaboration and bug reports.[0]

> The first step was assessing the current state of the code the previous developer had dumped into the tree. It was not pretty. I imagined strange Internet voices jeering, “this is what gives C a bad name!” There were random sleeps added to “fix” race conditions, validation functions that just returned true, catastrophic cryptographic vulnerabilities, whole parts of the protocol unimplemented, kernel panics, security bypasses, overflows, random printf statements deep in crypto code, the most spectacular buffer overflows, and the whole litany of awful things that go wrong when people aren’t careful when they write C.

While some details are based in reality, the paragraph goes well beyond the realm of truth. It makes totally unnecessary jabs at Macy as "the previous developer." He is (broadly) a competent C/kernel developer. Yes, he did an inadequate job here. No, some Greek chorus isn't jeering about the C code just because it stylistically differs from how Donenfeld would write it.

To my knowledge:

* There was only a single "validation function that returned true," and it involved validating an IP address internal to a validated and decoded message from a wg peer. The message is already cryptographically verified; only peers that are part of the same mesh could spoof IPs outside of their configured range. (Donenfeld described this as validation functions, plural.)

* Donenfeld has only ever found a single real buffer overflow: the one where jumbo frames can cause a heap overflow. His other buffer overflow claims are not realistic due to other constraints on the inputs. Mostly they seem to reflect stylistic preferences about using mallocarray(a, n) instead of malloc(a * n). So the claims of "spectacular" buffer overflows, plural, feel disingenuous. (I don't know what "spectacular" is supposed to mean in a cold technical critique, either.)
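For illustration, the difference between a stub validator and a real allowed-IPs check might look something like this hypothetical C sketch (the names and signatures are illustrative, not taken from the actual FreeBSD port):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: after a WireGuard message is decrypted and
 * authenticated, the inner source address still has to fall inside the
 * sending peer's configured allowed-IPs prefix. */
static bool
addr_in_prefix(uint32_t addr, uint32_t prefix, int plen)
{
	/* plen == 0 means "match everything"; guard against shifting by 32. */
	uint32_t mask = (plen == 0) ? 0 : ~(uint32_t)0 << (32 - plen);

	return (addr & mask) == (prefix & mask);
}

/* The criticized validation function amounted to this: every inner
 * address passes, so any authenticated peer can spoof addresses
 * outside its configured range. */
static bool
validate_stub(uint32_t addr)
{
	(void)addr;
	return true;
}
```

Because the outer message is already cryptographically verified, the stub only lets authenticated mesh peers spoof, which is why the severity is debatable; but something like the prefix check is still what the cryptokey-routing model calls for.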

Maybe this is "dispassionate," but it seems unnecessarily careless with the facts when writing technical criticism.

To be clear, Netgate's press response to this was totally inappropriate and also just a dumb move. The public narrative would be more in their favor if they had been totally silent instead of posting the angry screed they did.

Ars takes Donenfeld's hyperbole and runs with it, fact-checking only the easily verified claims. And there is some truthiness to it! Unfortunately, it's the rest of the communication that leaves something wanting.

Anyway, I love wireguard and what Donenfeld has accomplished. I just wish the guy would be a bit more considerate and less colorful when writing sensitive emails.

[0]: https://lists.zx2c4.com/pipermail/wireguard/2021-March/00649...


If a security product's, or supposedly trusted project's, code loses some company's secrets or hurts somebody, nobody cares whether it was a single bad day, a mishap if you will, or something that could have been foreseen. The data is now in bad hands; people's lives are disturbed. Everything else is basically just talk until _proven_ otherwise. FreeBSD really dodged a bullet at the last minute here.

Would you employ somebody to work on a security product who obviously behaved like a maniac, basically robbed his own family of savings, and landed in prison for years together with his wife? Is this person so good that there was literally nobody else to ask? It seems Donenfeld et al. did a good job porting the code to FreeBSD in 1-2 weeks; it may be better quality than Macy's quasi-original developed over months of work. (Some of the code seems to have been rather similar to differently licensed code elsewhere.)

Ok, so maybe Donenfeld is right; maybe he just dropped an extra "s" in an _email_. The buffer overflow was quite spectacular: a network professional working on a security product should handle jumbo frames. Maybe there are other, less obvious and less spectacular, overflows elsewhere. This one just caught Jim Salter's eye (grep).

Btw, how would you feel about somebody basically giving your trademark (WireGuard in this case) a bad reputation? I could understand it if Donenfeld took it personally. It seems, though, that he didn't. Macy, on the other hand, wouldn't admit to the poor quality of the software he wrote until pressured with clear evidence, and even then he couldn't fully swallow his ego.

Ars Technica/ Jim Salter did some great journalism here. It goes way beyond the quality of the average article even at Ars and that is a very decent bar.

Yes, Wireguard is great, Donenfeld and friends have done a tremendous job over the years.


Mallocarray exists for a very good reason. If you don't check for overflow you can allocate a much smaller buffer than needed. Mallocarray handles this case for you.


I understand what mallocarray does and why it is preferable. In these cases, the multiplied values happen to be constrained such that overflow is not possible. Given that precondition, the two patterns are functionally equivalent.

For what it's worth, I tend to advocate for using mallocarray and would use mallocarray in the same places Donenfeld does here. But unless overflow can actually happen, it's stylistic rather than a "bug."


I would suggest the LWN article instead: https://news.ycombinator.com/item?id=26572370


Wow, evil landlords, C being bad. All my favorite things to discuss in one article!


Wow. Thanks for that link.



Contrasting this with another FOSS UNIX alternative is interesting.

Although I initially thought the Illumos code review and commit processes (request to integrate; RTI) [0] [1] seemed overly cumbersome and laborious (and to some extent, perhaps they still are, slightly), I now have a much greater appreciation for the safety mechanisms built in.

Strictly based on code quality and engineering I wish we could rewrite history, have Sun open up OpenSolaris sooner and faster (and perhaps with a different license) such that Illumos stood as a/the dominant Linux alternative today. But maybe that's just nostalgia talking.

Best of luck to FreeBSD going forward; I'm sure this event will lead to even better processes in the future.

[0] https://illumos.org/docs/contributing/ [1] https://illumos.org/docs/contributing/#code-review


OpenBSD is surprisingly viable for most things. There's a few stacks it doesn't quite run, but it supports most of what I need to run professionally.


This whole fiasco has made me question why I bother with pfSense for my home network. Since I only require simple NAT/port forwarding for torrents, OpenBSD seems like an obvious choice. Other than hardware support, is there another reason why OpenBSD hasn't taken off as a viable home router alternative to pfSense and OPNsense? SecurityRouter is the only OpenBSD-specific routing appliance that I'm aware of, and it is no longer being developed. It also had a closed source backend and unknown licensing for the client.


IIRC OpenBSD's version of pf is single-threaded, which might be an issue depending on your network speed and hardware. A single core of an Atom or Celeron might struggle on a gigabit or 10gig network, if that's the use case.


I might have the history wrong on this one but it sounds like FreeBSD forked OpenBSD's PF code at one point to add SMP support. OpenBSD's continued PF updates have not been merged into FreeBSD due to the incompatibilities introduced by their SMP changes. I believe single thread performance has also increased quite a bit since the fork happened. I don't require 10gig support currently but I have no doubt that SMP support will eventually be required if OpenBSD wants to remain usable as a router in the future.


My externally facing firewall at home is an OpenBSD box with an Atom C2550 @ 2.40GHz, it's plenty enough to handle all internet traffic (internal network traffic doesn't go through it). I don't have 10gig internet link at home though (who does?)


I treat my home network as insecure and use an off the shelf ISP provided router (FritzBox). I have saved hours of my life doing this. I was running various unixy things over the years and suddenly something went snap and I decided not to bother.

It’s fine for corporate and medium sized networks but not worth it for home stuff any more. Just costs time, money and eats a lot of power.


there may be reasons you wish to reconsider OpenBSD

https://isopenbsdsecu.re/


Here is the presentation which provides more context than the slides alone: https://www.youtube.com/watch?v=3E9ga-CylWQ

I've watched it before and it is compelling; however, at the end of the day, an OS that tries to take security seriously is probably better than one that doesn't. The OpenBSD codebase is small, and I suspect that its defects/KLOC is far lower than that of other projects capable of routing. I'd love to hear more critiques of using it as a home router, though.


Yeah, the whole Spectre/Meltdown disclosure was handled very badly by Intel. Illumos, OpenBSD, and others basically found out from the media and the mailing lists of other projects whose people were told under NDA. Not a great position to be in; they did what they could in the shortest time possible.


Yes. If you can live with 10x less performance, you'd choose OpenBSD. Otherwise you'd go for DragonFly.


I have no doubt that DragonFly could handle higher throughput but for a home router at 1GbE or less, OpenBSD seems ideal assuming it doesn't add additional latency. Benchmarks are hard to find; if you know of any, please let me know.


I think the important thing is that this commit was blocked; there was a QA process that caught it. This statement reads to me as saying there will be more code review moving forward. The whole situation is embarrassing to the 13.0 release process, but a lesson was learned rather than a dangerous release being created.

I look forward to wg, and I understand the rush to want it out.


As a FreeBSD user, this is great news! I hope that FreeBSD bounces back from this setback and gets stronger than ever.


Looks like they're committed to improving things.


I just love how all communications including official statements from freebsd.org get posted to the web as email threads. Nothing pretty, just utilitarian.


Did Coverity not pick up on any of the bugs that were later found?

The folks behind PVS-Studio regularly spam the /r/cpp subreddit with actually very interesting articles about issues their static analyser finds in several open source projects. Does anyone here use it?

(I do not work for them or use their product)


Coverity has a bunch of false positives and true positives, and FreeBSD wasn't coverity-clean before this patch landed. No one in the FreeBSD dev community has time to watch every Coverity report that comes in.


How does that work, is there a periodic review of static analysis results?


If the number of false positives in your project is > 10, static analysis is a waste of time. It doesn't matter if your project is 10 lines or 100 million: 10 is the most false positives static analysis is allowed to have before it is shut off as useless.

It is very hard to be a static analysis tool developer. All the easy cases have been done; most have been folded into the compiler. What is left is the hard cases, where you have to figure out how not to trigger on false positives. Get this wrong and your customers will dump you; it doesn't matter if you get it wrong by not detecting many problems or by producing too many false positives. It doesn't help that once customers have fixed all the real problems they forget about them, but the false positives come up all the time.

You can't get out of this by making it possible to mark false positives. Once you allow that, everyone will just get in the habit of marking all messages as false positives. I tracked one production bug to a line that was immediately preceded by a false positive suppression: the problem was real, but someone decided that instead of thinking they would suppress the issue like every other one found.


I've got some very real value out of static analysis tools, so I don't know quite what to say about all that. For sure it is a tool that takes time and effort to use correctly. Having used it, it can sometimes take a little imagination to consider how much time might have been wasted later in the game had the tool not been used.

Not sure that any of that has to do with FreeBSD's use of Coverity, which I'm curious about. (Rereading the OP, I guess I'm not sure whether the FreeBSD project itself is a regular user of Coverity or if there's something else to it.)


FreeBSD already has far better code quality than Linux.

Linux still has WONTFIX kernel bugs. FreeBSD doesn't have WONTFIX bugs, whether kernel or userland based; they fix both kernel and userland.


This is standard behavior.

When someone publishes a +10kloc patch in an area maybe 1 or 2 other developers are competent in, nobody's gonna read it in detail. Heck, I probably couldn't even review my own code from a year ago without lots of metadata/comments attached to it.


I'm really not sure what the point of this message is. From a PR perspective it is an own goal. It was obvious from earlier statements that self-critique was underway. The right time to send another communication is once some improvement has been made.

What is unfortunate is that the project has not publicly defended itself, which is what core should have addressed: the situation has been broadly and unfairly misreported. The history of WireGuard is bizarre; Donenfeld is hellbent on total control of both the protocol and implementations. He blew the same gasket on NetBSD developers for implementing his protocol http://mail-index.netbsd.org/tech-net/2020/08/22/msg007842.h.... The real story here, which the Ars reporter missed because he allowed himself to be compromised by Donenfeld, is how this single person is accumulating and aiming a cult following as he chooses, and what the security and business implications of this will be in the future. This is a simple tunneling protocol. I don't expect WG to end well. At the very least, you are subject to public zero days and shakedowns if you don't do exactly what Donenfeld wants. A new black-hat open source business model: monetization by mob rule and "scooping" low-intellect reporters instead of licensing and implementation.


Unfortunately the NetBSD side's story was not widely known or reported. Otherwise it would give a different perspective to the case here.


Curious what the end result was, though. It appears they collectively agreed to talk it out in September, then... nothing? GitHub shows a handful of commits, but nothing major. No chatter on the mailing list.

NetBSD hasn't renamed "wireguard," which they said they were considering as an option if Jason was going to keep pushing back on the existing code. The lack of major changes to the code makes me think they also didn't take him up on his offer to scrap the whole thing and start from scratch, or to port the OpenBSD implementation.

Anyone happen to have any further insight?


I sometimes wonder why Jason doesn't just do it himself on NetBSD and FreeBSD. Although OpenBSD doesn't seem to have any issues, which is also worth mentioning.


> The history of wireguard is bizarre, Donenfeld is hellbent on total control of both the protocol and implementations.

Given how bungled implementations can apparently end up (case in point), as a user of wireguard, I'm like.. thankful I guess?

This _is_ crypto/security stuff, and I'm glad it is being held to a higher standard by _someone_ at least.


That is a misrepresentation; it was never not held to high standards. There were bugs, one serious, while much more fervor was devoted to less critical corner cases, e.g. the pfSense release that has been pilloried by Salter, by way of Jason, doesn't use jails. There are others, like jumbo frames, which are not common on the internet outside of certain high-end carriers: a less common but valid bug, good for new contributors. Most were personal preference: the pseudo-driver framework, malloc style, etc. There were fixes by multiple people in progress, including by two well-respected developers at Netgate. The communications were not handled in good faith by the rewrite party, by their own admission, so a rational discussion about fixing or disabling the code ahead of the 13.0 release in the stable branches could not be had with Netgate or with any of dozens of other mature FreeBSD developers. Instead, drama was created. I am quite certain the right call would have been made without all of the bad blood spilled had this been conducted respectfully by all parties from the outset. So while new tooling and a re-commitment to the review process are nice, the processes FreeBSD had in place were already in action, and it is sad not to see core assert this.

There is also a layer 8 and 9 vulnerability. One guy now tightly controls a protocol in several free *nix kernels and has interesting reactions whenever anything happens without his blessing. He was able to cause a disproportionate reaction by talking to a journalist. This probably doesn't matter if you are encrypting your home PC traffic, but it does to people who work in the Internet industry. Say whatever you want about the particular technology and particular individuals; it boils down to whether you think this desire for control of the implementation is a weird situation or not.


Can you expand on why the history of WireGuard is bizarre?

Do you believe that a Black Hat presentation taints the code quality of WireGuard? Do you believe that Donenfeld's desire to maintain tight control over kernel implementations suggests he has ulterior motives?


[flagged]


I have no idea what this controversy is about, much less who is in the right, but your comments in this thread seem unduly personal and break the site guidelines with quite a bit of name-calling. Please don't do that. Substantive critique is fine, of course, but that would require explaining factually what the errors are, and omitting swipes and putdowns.

https://news.ycombinator.com/newsguidelines.html


It's more about the public understanding being less wrong than about me being right. I just got heated because not enough people are sticking up for the facts, and principally because a reporter, whose job that is, fueled this. I've tried to post something more substantive and dispassionate in a followup and will leave it at that.


I'm just a systems guy too, so I can't really make judgements on why so much input is needed for the implementation of a seemingly simple protocol. I disagree that there's smoke, but I appreciate the reply, and it seems like you're not alone in thinking something might be a little off: https://news.ycombinator.com/item?id=24430424



