> The title of this article had me do a double-take: C and C++ development on Windows is great. No sanity is needed.
We certainly have a different opinion on what "great" means. It takes less time to rebuild my whole toolchain from scratch on Linux (~15 minutes) than it takes MSVC to download those friggin debug symbols it seems to require whenever I have to debug something (I sometimes have to wait 30-40 minutes, and I'm on friggin 2GB fiber! And that's seemingly every time I have to do something with that wretched MSVC!)
Thankfully the clang / lld / libc++ / lldb ... toolchain now works pretty well on Windows and allows a lot more sanity, but it's still pretty slow compared to Linux.
My main gripe with writing C++ on Linux is the dependency management. If you need to do stuff that's not covered by the standard library, like interfacing with GTK or X11, you are in a world of pain. You probably need to install a distro-specific package in a distro-specific way to get the headers/symbols, use some build tool to configure those distro-specific include/so locations, and hope to god that the distro maintainers didn't upgrade one of those packages (in a breaking way) between the source commit and the time of build.
If you suffer through this, you have an exe that works on your version of Ubuntu, maybe on other versions of Ubuntu or possibly other Debian-based distros. If you want it to also work on Fedora, it's back to tinkering.
Tbh I think the only sane-ish way of building is to dockerize your build env with baked-in versions.
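As a rough sketch of what that looks like (the distro release, package names, and versions here are purely illustrative), pinning everything in a Dockerfile freezes the build environment at known versions:

```dockerfile
# Pin the base distro release so headers and .so versions stay fixed
FROM ubuntu:20.04

# Install the toolchain and dev packages once; the image bakes them in,
# so a distro-side upgrade can't break the build later
# (package names are illustrative for this hypothetical project)
RUN apt-get update && apt-get install -y --no-install-recommends \
        g++ \
        cmake \
        libgtk-3-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
COPY . .
RUN cmake -B build && cmake --build build
```

Rebuilding the image from the same Dockerfile reproduces the same environment, which is really the whole point.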
In contrast, you pick an SDK and compiler version for Windows, and as long as you install those versions, things will work.
Versus no dependency management at all? This reasoning falls apart once you need to use a 3rd party library on Windows. There’s no standard way of sharing such a thing so you always wind up packaging the whole thing with your program, and handing the whole mess to your users.
Granted, writing an RPM is a special kind of hell, but at least you don't have to package everything with your program. But actually you can still do that - I've done it plenty of times in embedded. You can always ship your program with its dependent libraries, the way you always have to on Windows. In fact it's a lot easier, because most 3rd party libraries were originally coded on Linux and build more sanely on Linux. And RPATHs are pretty easy to figure out.
Yeah, you're right, but you probably need a lot less stuff that's not in the SDK.
My go-to solution for including dependencies is just checking all the dependency .lib and include files into Git LFS (I think this is a rather common approach from what I've seen on GitHub). For your typical Linux C/C++ project, unless it's made by best-in-class C++ devs, building can be a pain, most likely because it depends on very particular lib versions: building a largish project from GitHub on Ubuntu 21.10, where the original dev used 20.04, is usually not possible without source/Makefile tweaks. And I don't particularly love the idea of using root access to install packages just to build some random person's project.
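For reference, the LFS side of that is just a few `.gitattributes` patterns so the binary artifacts are stored as LFS objects instead of bloating the repo history (the extensions below are the usual MSVC suspects; adjust to taste):

```gitattributes
*.lib filter=lfs diff=lfs merge=lfs -text
*.dll filter=lfs diff=lfs merge=lfs -text
*.pdb filter=lfs diff=lfs merge=lfs -text
```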
IMHO, C++ dependency management kinda stinks, regardless of platform.
Writing an rpm isn’t difficult… is that a commonly held belief? Maybe I don’t know what I don’t know, but I’ve found wrapping my head around deb packaging much harder than rpms.
When I last wrote one I was very new to it and rpm.org (where most of the public docs on the format reside I guess) was down/abandoned. It looks like rpm.org and the docs are back now? I had a bunch of special requirements for the lib I was making and not having docs for the format, especially all the funny macros, was pretty frustrating.
> If you need to do stuff that's not covered by the standard library, like interfacing with GTK or X11 you are in a world of pain [...] If you suffer through this, you have an exe that works on your version of Ubuntu, maybe on other versions of Ubuntu or possibly other Debian-based distros. If you want it to also work on Fedora, it's back to tinkering.
GTK is known to break its ABI across major versions (GTK1->GTK2, GTK2->GTK3, GTK3->GTK4), but as a C ABI it should be compatible between minor versions, and everything can be assumed to have GTK2 and GTK3 available anyway. X11 as a protocol has always been backwards compatible, and Xlib on Linux has pretty much never broken its ABI since the 90s. Here is a screenshot of a toolkit I'm working on right now: the exact same binary (built with dynamic linking - ie. it uses the system's C and X libraries) running in 1997 Red Hat in a VM and in 2018 Debian (I took that shot some time ago - btw the brown colors are because the VM runs in 4bit VGA mode and I haven't implemented colormap use - it also looks weird in modern X if you run your server in 30bpp/10bpc mode)[0].
Of course that doesn't mean other libraries won't break. What you need to do (at least the easy way to do it) is build on the oldest version of Linux you plan on supporting, so that any references are to those versions (there are ways around that), and stick with libraries that do not break their ABI. You can use ABI Laboratory's tracker to check that[1]. For example, notice how the 3.x branch of Gtk+ was always compatible[2] (there is only a minor change marked as breaking from 3.4.4 to 3.6.0[3], but if you check the actual report it is because two internal functions - which you shouldn't have been using anyway - were removed).
Major versions of Gtk are, for all intents and purposes, different toolkits entirely. They are always parallel-installable, they don't conflict with each other.
Well, except for the part where development stops on previous versions and they don't get any real updates, while they "hog" the "Gtk" name - so any fork that might want to continue development in a backwards-compatible way, as if the incompatible change never happened, can't really be called "Gtk" without being misleading.
The problems you've described are the reason we have tools like CMake, no? CMake's reusable find modules handle the heavy lifting of coping with the annoying differences between Linux distros, and for that matter other OSs.
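A small illustration of what that hand-off looks like in practice (project and file names are hypothetical): CMake's pkg-config integration resolves the distro-specific include and library paths for you, so the build script never hardcodes them.

```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp C)

# Let pkg-config locate GTK3's headers and libraries, wherever
# this particular distro happens to have put them
find_package(PkgConfig REQUIRED)
pkg_check_modules(GTK3 REQUIRED IMPORTED_TARGET gtk+-3.0)

add_executable(myapp main.c)
# The imported target carries the include dirs, defines, and link flags
target_link_libraries(myapp PRIVATE PkgConfig::GTK3)
```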
> you have an exe that works on your version of Ubuntu
This is indeed a downside of the Linux approach, it's the price we pay for the significant flexibility that distros have, and the consequent differences between them. Windows has remarkably good binary compatibility, but it's a huge engineering burden.
> Tbh I think the only sane-ish way of building is to dockerize your build env with baked-in versions.
This is an option, but bundling an entire userland for every application has downsides that the various Linux package-management systems aim to avoid: wasted storage, wasted memory, and less robust protection against inadvertently using insecure unpatched dependencies.
The de-facto standard for dependency discovery is pkg-config. CMake being its own little world with its own finding system is annoying and part of the reason why the ecosystem has not migrated from autocrap to CMake en masse. Thankfully Meson came along, which does everything correctly.
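In Meson the pkg-config lookup is the default path rather than a bolted-on module; a `meson.build` using a system library is about as short as it gets (project and version constraint here are illustrative):

```meson
project('myapp', 'c')

# dependency() consults pkg-config (falling back to other methods)
# and fails with a clear error if the version constraint isn't met
gtk_dep = dependency('gtk+-3.0', version : '>=3.22')

executable('myapp', 'main.c', dependencies : gtk_dep)
```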
> It takes less time to rebuild my whole toolchain from scratch on Linux (~15 minutes) than it takes to MSVC to download those friggin debug symbols it seems to require whenever I have to debug something
Fun fact: you can do the same in gdb and some distributions (e.g. openSUSE) have it enabled by default. Though you also get the source code too.
I was messing around with DRM/KMS the other day and had some weird issue with an error code, so i placed a breakpoint right before the call - gdb downloaded libdrm and libgbm source code (as well as some other stuff) and let me trace the call right into their code, which was super useful to figure out what was going on (and find a tiny bug in libgbm, which i patched and reported).
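For anyone who wants to try this on a distro where it isn't on by default: the gdb-side mechanism is debuginfod, and enabling it is just an environment variable (the elfutils project runs a public federated server; many distros run their own, so substitute yours).

```shell
# Point gdb (and other elfutils-aware tools) at a debuginfod server;
# debug symbols AND source files are then fetched on demand at debug time
export DEBUGINFOD_URLS="https://debuginfod.elfutils.org/"

# Then debug as usual, e.g.:
#   gdb ./myprogram
```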
Yes, it's a relatively recent innovation, but it's pretty awesome. Symbol server has always been one of the things I actually liked about Windows development, which didn't require installing debug packages for every DLL before the bugs happen and you catch them. https://sourceware.org/elfutils/Debuginfod.html
NixOS has had a similar thing for a while called "dwarffs", where a FUSE filesystem resolves filenames back to the package that needs to be installed. It predates debuginfod but is very NixOS-specific. I'm happy this has recently become so much more widely available.
Really? I've never had to wait more than 5 minutes, and only the first time since the symbols are cached. On the other hand, the Visual Studio debugger actually works, even on large and complex multi-process systems like Chromium. My experience debugging C/C++ with GDB/LLDB and any frontend using them has been so poor that I've essentially given up trying them in all but the most desperate circumstances.
I agree that downloading symbols can be oddly slow but you can just turn it off, or only turn it on for specific modules. It can be helpful to have symbols for library code to troubleshoot bugs but typically you only need your own symbols and they are already on your computer with your binaries.
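Concretely, the debugger's symbol behavior is governed by the `_NT_SYMBOL_PATH` environment variable: leave it unset (and disable symbol servers in the IDE) to skip downloads, or use the `srv*` syntax to cache downloads locally so the wait only happens once. The cache path below is just an example.

```bat
:: srv*<local cache>*<server>: downloaded symbols are kept in C:\LocalSymbols,
:: so subsequent debug sessions hit the local cache instead of the network
set _NT_SYMBOL_PATH=srv*C:\LocalSymbols*https://msdl.microsoft.com/download/symbols
```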
Debug symbols are stored locally with MSVC, and booting into debug mode only takes a few seconds longer than non-debug, sounds like you are doing something wrong.
> Thankfully now the clang / lld / libc++ / lldb ... toolchain works pretty well on Windows
Off topic, but I wonder if anyone knows whether it's possible to use rustc with lld, instead of link.exe? I tend not to have Visual C++ on my home systems. Is it as simple as the Cargo equivalent of LD=lld-link.exe?
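Not an authoritative answer, but roughly yes - cargo lets you override the linker per target in `.cargo/config.toml`. A sketch, which I haven't verified on a machine without Visual Studio installed:

```toml
# .cargo/config.toml - point cargo at LLVM's lld-link (must be on PATH)
# instead of MSVC's link.exe, for the MSVC target
[target.x86_64-pc-windows-msvc]
linker = "lld-link.exe"
```

One caveat: this only replaces the linker itself. You still need the Windows SDK import libraries (kernel32.lib and friends) to link against, so it doesn't fully remove the Visual C++ / SDK dependency.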
Microsoft is certainly evil and all, but I use MSVC on Windows and clang on Linux, and the Windows tools are much smarter and faster when it comes to compiling my project. Not counting the times when the Windows machine decides it has more important priorities to attend to than doing what I need.
I couldn’t disagree more with your opinion. Symbol servers and pdbs are a tremendous advantage to C and C++ development on Windows. Are you sure you have equivalent experience with both Windows and non-MSVC toolchains?