This state of Gtk is sad, but I'm really glad we (VLC) moved to Qt a few years ago (2006), before many applications made the switch, and when it was an unpopular move.
Before that time, we were using WxWidgets and had many issues, notably with Unicode and Windows support. WxWidgets APIs and behaviors changed too much between releases (even minor ones).
When we moved, it was the early Qt 4.1/4.2 days, and most VLC developers were using Gnome and pushed hard for Gtk. But one developer started the new UI in Qt, and I picked up the work. We got a significant backlash from users, with some people in the community even rewriting an interface in Gtk...
Afterwards, QGtkStyle was introduced, and people could have a native look even in Gtk environments.
Finally, Qt became a community project and moved to the LGPL, while Gtk went down the road of Gtk 3.x, breaking themes, Windows, OS X, and APIs/behaviors at every release (and removing features).
These days, cross-platform applications everywhere are moving to Qt (Subsurface, LXDE, Wireshark, Audacity).
It's funny that we made this decision, at that time, without knowing all that. I think we just got very lucky... :D
Around that time I worked for Trolltech and got a good view of how that project was run. The Trolls took compatibility very seriously: from devs running KDE 3 on the current nightly build of Qt 3, to swapping out the Qt library under various Qt applications with the next Qt release to find any regressions ourselves, keeping things working was important. Dogfooding with KDE, internal tools, etc. was expected of developers. That's not to say we didn't make mistakes, but we did try to prevent as many as we could.

A good API really goes hand in hand with compatibility. Because almost any API change in C++ is incompatible, and because we would be stuck with any public API for years, there was a huge incentive to get it right. When designing any new API there were always multiple rounds of API design sessions, often with many different developers providing feedback, and always feedback from the documentation team. Is this API extensible? Does it use the same terminology as the rest of Qt? Could any of the functions be named better? Is there any API missing, or any that could be removed? New classes had examples and demos, not just for documentation purposes: often enough they were written first to help find the best API (before actually implementing it), and second to be used to regression-test compatibility. Before a release, all new API was reviewed one last time to catch any errors that might have slipped through.

Many of the lessons learned can be found in "The Little Manual of API Design", written by one of the Trolls, Jasmin Blanchette: http://www4.in.tum.de/~blanchet/api-design.pdf

It is because of all of this effort that when you are using classes in Qt, you can often intuitively know how the API will work. Again, we did make mistakes, but we tried to learn from them, and we were not shy about holding back an API that wasn't ready for a release rather than committing to an API that we were not sure about.
I recall more than one API being delayed for the better part of 6 months before being included in the next release (4.3 vs. 4.2, for example). The result was always a better API and a more bug-free class, a win all around.
Nice, but they didn't follow their own advice in the case of creating parent-less widgets prior to Qt 4.
I.e., in Qt 3 and previous versions, creating a parent-less widget resulted in a window frame being created. Why was that? If I am creating a button, I do not want a window frame around it. I am going to insert the button into another widget later.
Thank God they fixed that in Qt4 and later versions.
Speaking from a Windows user perspective, Qt interfaces feel near-native, much unlike Gtk interfaces, where many of the little details don't quite match native behavior.
But there is this massively annoying little bug where submenus close immediately when you hover over another top-level menu item before reaching the submenu. This makes quickly navigating menus quite cumbersome. (It doesn't happen with native Windows interfaces.)
Does anybody know if the Qt devs are aware of this? Don't they think of it as an issue?
I remember when Gtk 2.0 came out. IIRC, at the time, Qt seemed to be bloated and slow compared to Gtk, but Gtk had a lot of breaking changes and was more work to program with. I was unimpressed with having to constantly figure out how to update my Gentoo system when Gtk kept moving functions from one library to another, which required hacking in symlinks to point to new libraries that programs required. It was a real pain in the ass and I finally moved onto Ubuntu. Gentoo had its own probs, but Gtk didn't help.
Er, what? I ran Gentoo from 2001 until mid-2013, and I was a maintainer and core developer of parts of Xfce (a Gtk-based desktop environment) from early 2004 until late 2009, and I have no idea what you're talking about.
Gtk was a pain to develop with, sure, but only due to the awkwardness of GObject's attempts to build an OO system on top of C.
From a user perspective, I never saw the problems you're describing. Even having built Xfce out-of-tree all the time, I never recall having to do a full rebuild due to a Gtk update.
I've since moved on; it's a shame to hear what the OP is saying about 3.x, but 2.x most certainly didn't have these problems that you describe.
I'm talking about the change from Gtk 1.0 to Gtk 2.0 (or that's what I remember, mostly). I gave up on Gentoo around 2001 or so. Things were better when I left, but there was still breakage whenever I did a world update. I had fallen into the lazy habit of not updating often enough, which contributed to the problem, but still, the promise was that all you had to do was a world update and everything would work fine. Maybe it does now, but I remember having to run a crippled system for a couple of weeks at a time while trying to sort through the numerous broken packages. I had used many different desktops, including Xfce. They were all pretty good and completely flexible under Gentoo, as the desktop designers had intended. That part was fine, but the underlying Gtk libraries were a complete PITA whenever there were large changes.
I'm still a bit confused. The 1.x to 2.x transition was intended to be a breaking change (hence the major version bump). I don't see how you had application issues during that, as 1.x and 2.x were parallel-installable, and apps had to make a conscious decision to upgrade.
I don't recall the stability of the 2.0.x series, though: it's possible they didn't get things right and were still making breaking changes even though it was the stable series. I don't remember that being a problem, but I'll admit my memory of that period isn't perfect.
I do remember some app developers prematurely upgrading and releasing versions that depended on early development releases of 2.0 (and later, on the 2.1.x unstable series), which often did cause breakages. But that was really the app developer's fault for depending on versions that made no API/ABI guarantees.
Sorry, I don't recall the particulars since it was a long time ago. But believe me, there were lots of problems with the libraries. I still cringe whenever I have to deal with a Gtk library.
This post was written by Morten Welinder, the author of Gnumeric and a popular GNOME blogger.
I feel really bad for GTK developers. The GNOME guys have clearly taken the toolkit from a "general purpose" direction to a much more gnome-centric one.
At the same time though, I can't help but be hopeful for the future. Qt is a wonderful project, with a bunch of wonderful licenses, developed in a wonderfully-open environment (It's not like before!) and with wonderful improvements already available in Qt 5. With more and more applications switching to it, I see Qt as a central part of the Linux desktop ecosystem in the future - finally, not only will we have a beautiful desktop with common themes for all apps, but also the power of a truly cross-platform toolkit in most Linux apps. It will be nice.
I feel really conflicted about Qt. On one hand, as a graphical toolkit/environment, it's great. It's well-structured and easy to use, and QML is basically everything web applications should have been.
On the other hand, as a C++ library it really couldn't be worse, with its flagrant reinvention of the standard library, pervasive UTF16, complex object hierarchies, raw pointers, extensive use of macros, etc., etc.
Maybe I'm just too choosy, but it'd be really nice to have a graphical toolkit that didn't have such an air of sausage factory to it.
I think that the reason for reimplementing STL functionality is that they want complex types such as QString as part of their external interface, and they want different minor versions to be binary compatible, which can't be guaranteed for types like std::string unless everything is compiled with the same compiler and runtime. See https://qt-project.org/wiki/Dpointer
As for raw pointer use: you should always use QPointer smart pointers for objects whose lifetime you don't control. However, they don't recommend passing QPointers as function parameters, since they are easily/cheaply constructed and the tracking machinery lives in the QObject instance itself in any case.
I really like the Qt syntax. It's well thought out and readable, I can get an overview of it without years of study, and it has good documentation. I think it helps to not think about it as C++ at all, because it isn't, really. It's a syntactic subset of C++ with a few extensions and its own standard library that happens to use a C++ compiler somewhere down the toolchain.
I don't see the issue with the object hierarchies either. You have your QObject and QWidget classes that are important and then all the stuff that builds on that. Not that complicated really once you get into it.
The UTF-16 thing is ugly, yeah, but I never saw a practical issue with it.
Totally agree. After studying C++, I was wary of trying to use the language. It seemed to me that you'll get into trouble quickly and it requires years of experience to become an expert. I would rather use Qt because it insulates you from many of the pitfalls of C++. I especially like the slots and signals paradigm. I know there are libraries that do the same thing, but if you stay within the Qt framework, you'll be a lot more productive.
> On the other hand, as a C++ library it really couldn't be worse, with its flagrant reinvention of the standard library, pervasive UTF16, complex object hierarchies, raw pointers, extensive use of macros, etc., etc.
Have you given Qt a serious try? Most of the above arguments don't hold up at all.
* The C++ std lib is total crap. Anyone arguing for it has no idea what a good API is. Have you used Qt's containers? They're as intuitive as it gets. The C++ std lib is performance-optimized, and most desktop apps don't need that. It comes at the cost of developers having to learn complex APIs.
* Complex object hierarchies - huh? Qt's value-based types need no memory management. The pointer types have a simple parent-child relationship: delete the parent and all children are deleted as well. How hard is this?
* Raw pointers - spoken truly like someone who hasn't understood Qt.
* Why do you care about UTF-16? It's an internal representation. BTW, do you write any web apps? Do you know or care what internal representation is used by their strings? If you have some ultra-special case of a performance-critical app, no string library out there will be good for you. You will have to roll your own.
Let me guess. It really looks like you are one of the few guys who likes writing libraries (as opposed to apps). People who write apps love Qt. People who like writing libraries don't like any library other than their own, because their way is the true way.
There are many valid criticisms of Qt, but these aren't among them.
Calling the C++ standard library crap is acknowledging the Wheel was Not Invented Here.
Glossing over the fact that Qt has an incredibly complicated object hierarchy by saying "it's good for you" is not acknowledging the fact that it's complicated and difficult to learn or work with. You didn't even mention, e.g., moc - Qt's incredibly complicated Meta-Object Compiler. And people call GObject complicated...
Ignoring the fact that basically every platform other than Windows (and the platforms it has managed to infect) uses UTF-8 as its One True Encoding, and perpetuating the situation where every sufficiently complicated C++ project has its own string class with its own specific peculiarities, is not a good thing.
Most importantly, you can't dismiss his concerns by agreeing that all of them exist and that he shouldn't care.
Not sure how you concluded this is about NIH. Qt's containers are very developer-friendly, and as an application author that's the first and most important thing that matters.
Saying moc is complicated is like saying dalvik or ART is complicated. These are just tools that work in the background. Just like you don't need to know ART internals, you don't need to know moc or the code it generates for you.
Arguing about string encoding is so 1990's, I won't comment.
No offence, but calling the STL "total crap" can lead others to claim you haven't given the STL a serious try, either. A related StackExchange question covers criticisms of the STL [1]
That's because the C++ standard library (or its implementations among various compilers) was terrible 13 years ago when Qt 3 was released. Some say it still is.
Sorry, "fork" was the wrong term. How about "long-term working branch that might get merged eventually, but also might get used in preference to the original to the point that people start just submitting their PRs directly to it"? (Think egcs's "fork" of gcc.)
And by "Not everybody" you of course mean "People who don't know any better", right? The Qt legal situation has changed a lot lately, especially since Nokia went out of the picture. Look it up:
The situation is that a CLA is still required.
Personally, I don't care (I'm just a consumer who is not affected by the CLA). Others care and insulting them as "People who don't know any better" is not right.
I've always preferred Gtkmm over Qt. That might be because Gtk+ (and Gtkmm) doesn't try to be an "everything, including the kitchen sink" library (and, in related news, I've never been a fan of the Gnome API -- even though I was a fan of the Gnome desktop until Gnome 3).
However, it's always seemed a little perverse to me to have Gtk+ try to imitate '90s-style C++ in C, and then to use a C++ wrapper around that.
There is a lot about GTK I like; it's a C library making compatibility easier[1], it always seemed way faster (less memory bloat?) than Qt, and while the widget-packing/nesting style was somewhat ugly, I found it surprisingly easy to write.
I was even enjoying Vala. While it was obviously a young language, it had a lot of interesting ideas.
Then we got the new 3.x version of GTK with its "let's rewrite everything for no other reason than to break compatibility"[2] project goal that seems to have corrupted far too many projects recently. In addition to the problems already mentioned in this thread, it seemed so... unfinished. Various components or features were gone or rewritten into something else. I guess they spent all their development time trying to tie Gnome in as tightly as possible instead of finishing features.
The last straw was when they decided to join Poettering's "let's forget Unix and make Linux into Windows" crusade. A terrible design decision on top of years of other questionable choices and a bad attitude about actually listening to user needs.
I supported GTK and Gnome waaaaay back when they first started, when the fight was between a Free (GPL) library and the increasingly popular proprietary-license-only[3] Qt. Now, I'm not sure what to support in the GUI toolkit area.
While I figure that out, my current project's GUI is being written in ruby-tk. The widgets in Tk have a terrible look and strange layout/interaction quirks, but at least it isn't a moving target and works more or less everywhere.
[1] e.g.: writing Ruby bindings for C++ libraries can be problematic. While problems such as name-mangled symbols are not as bad as they once were, it is still much easier to link a C library into a random environment.
[2] As Linus said, "...thou shalt not break working code."
[3] Trolltech changed the license about a year (?) afterwards.
I think it's just largely perception amongst the anti-systemd crowd. Systemd is a virtually unmitigated boon to Linux, but some people fear its relatively tight coupling and lack of adherence to some platonic ideal of the "Unix Philosophy". Indeed, systemd's design is inherently epicurean; the aim is to create a unified base system for Linux that reduces friction and makes people's lives easier.
"Turning Linux into Windows" is hyperbole, though; the most used and developed-on non-mobile Unix -- Mac OS X -- is similarly epicurean, and blithely makes use of tightly coupled components designed to work together, to deliver a smooth easy-to-manage experience. And it's eating Linux's lunch.
Yes, there's a bit of hyperbole in that statement, but there is a problem there. Poettering has pushed a very "our way and no other" attitude, even when other people have requirements that aren't met by systemd.
Worse, the tight-coupling between components is a perfect example of the "embrace and extend" tactic. The monopoly position systemd currently enjoys has successfully been used to extend into a disturbing number of other components[1].
As for Unix-vs-Windows: switching to a Windows-style binary log and rewriting an inetd into a mandatory "SvcHost.exe" for Linux is bad enough, but the real issue is the removal of well-defined barriers between components.
Poettering's hatred of scripts is a problem, because using scripts as the "glue" in Unix is one of its most important features. Keeping many of the interactions between components in an open and readable format has forced developers to create (and deal with) APIs that can actually be extended beyond the original project. Systemd returns to the Windows style of just making function calls into the current implementation. Being forced to, for example, listen on a Unix socket for a command works easily with anything, even into the future. Requiring that same message to be sent by calling some function encourages linking to extra libraries and makes interaction from scripts much more difficult.
This isn't about any technical benefits; keeping the IPC between components open and well-defined is even more important than access to the source code itself if we are to maintain an ecosystem of Free Software. For those of us that would like to have a Free (as in freedom) Software ecosystem survive the current War On General Purpose Computing[2], the campaign by the pro-systemd people is, frankly, terrifying. Sometimes there are more important goals than "faster" or "cleaner API" or "easier to write". Keeping the OS - both the kernel and system-utils - open, free, and interchangeable needs to be a high priority goal, or we will lose the progress the Free Software community has made over the last ~decade.
It's unfortunate that Linux and/or Linux distributions have not yet solved the problem of supporting parallel versions of dependencies. Even Node's package manager supports that. The end result is that rolling release distros break stuff often, as applications can't be tested and executed in isolation of each other. The "solution" to this is releasing everything every 6 months, when you can actually test everything together, then stop providing new versions for months.
If Linux/Linux distributions did support it, rolling release would be a lot more common. One limitation that would remain is that if you have an application that requires a single instance (e.g. a Network Manager), you couldn't have multiple instances of it running at the same time.
The goal of an operating system is to run applications. The installation and packaging of software should be as simple as possible.
For whatever reason, the Linux desktop community does not see this as an issue.
If I want Vim 7.4, 7.3 and 7.1 installed at the same time, your package manager should support it. What if plugin 'X' is only compatible with 7.1 and plugin 'Y' is only compatible with 7.4? This is a valid use case.
Lately, I have been using Docker to work around this, but it's not fun setting up GUI applications in a container.
Well, (I think) part of the whole point of a package manager is to reduce space by sharing dependencies. Npm seems to actually store a copy of each dependency individually for every package that requires it. I really think that is inefficient and a step backwards.
A good package manager should allow to:
- share dependencies
- provide a way to install multiple versions of the same package
These requirements are not mutually exclusive. With a proper file system hierarchy both of them could be accomplished.
Most of the popular distributions' package managers (apt, pacman, etc.) support sharing dependencies, but almost none of them support installing multiple versions of the same package.
Given that storage space is very cheap nowadays I think a package manager like NPM which stores copies of the dependencies is not that bad actually.
Yes, npm uses more space than what is necessary, but luckily disk space is cheap. That said, npm does optimize the disk space usage a bit. If I have package A which depends on packages B and C, both of which need D, then (if the versions match), D will be installed only once.
> Even Node's package manager supports [parallel versions of dependencies]
I feel like your emphasis is backwards.
It's not surprising that NodeJS, a new project, has solved some of these problems. NPM has the luxury of decades of experience from dozens of linux package managers and package managers from other languages. Plus, they're not tied to the legacy use cases the same way that (e.g.) apt is.
Do you have a link that would educate me about some of these problems? Package management interests me because I might find myself developing a package manager in the near future, but I don't know much about it.
Not supporting multiple versions of the same software is something that I identified as a problem though.
I don't recommend developing a package manager until you have done a lot of research. It's a field with non-trivial problems, a lot of existing experience, and a lot of implementations that may well solve your issues.
I don't think it's surprising either, I just hope that Linux will solve the problem too. I guess I used the 'even' rhetoric because if Node could solve it, Linux (a much larger and better financed software ecosystem) should too.
> It's unfortunate that Linux and/or Linux distributions have not yet solved the problem of supporting parallel versions of dependencies.
They do and they have, but they can't conjure up libraries that haven't been properly versioned and released upstream.
If the GTK+ team can't maintain an ABI then they need to start releasing versions under different sonames such that programs like wireshark link against "libgtk-3.so.0.1200.2" and not "libgtk-3.so.0", then the package managers will be able to do the right thing.
"It's unfortunate that Linux and/or Linux distributions have not yet solved the problem of supporting parallel versions of dependencies."
Uh, they have. For ever.
I have parts of boost 1.49, 1.53, 1.54 and 1.55 together on my system. I have GTK2 and GTK3 together on my system. I have Qt3, Qt4 and Qt5 together on my system. Python2 and Python3. libpoppler19, libpoppler44 and libpoppler46.
I really think this is exaggerated a lot. Maybe we don't use a lot of complex features, but at Sugar Labs we have written a whole desktop environment and app ecosystem based on Gtk 3. We use the Python Gtk 3 wrapper, and I think there was only one instance this year where Gtk 3 broke our UI (icon_size got removed, or something like that). We also use a lot of other Gnome things (e.g. GSettings) and those don't seem to be an issue.
Parts of the GUI that used to render correctly now stop updating at all.
I suffer this problem with an image viewer (geeqie) on debian. I have to restore/maximize the window to force a redraw and it has been like that for months.
It would be nice if the author would back his claims with specific references. Comparing the upstream tracker reports for GTK+ and Qt sheds a different light on ABI compatibility between versions:
We loved GTK. It let us do great things for so many years... it was a necessary step while there were no other LGPL libraries.
But we got rid of it and changed all our code to Qt, which runs circles around GTK. Never looked back; it's the best decision we could ever make. It works beautifully on all platforms, and it integrates OpenGL, PDF output, and printer support.
The only problem with Qt is that not all open source programs use it, so you use Inkscape (GTK) on the Mac and it works so badly you can't even copy vectors (it copies pixel images instead!!).
I actually like writing GTK way more than Qt. I like the fact that it is C and easily integrates with any language I wish to use.
Though I use Qt, the Mac support in GTK is terrible at best. Windows support is even worse (or non-existent), which makes me question their "cross-platform GUI" title.
If I were to suggest a method, I would suggest using each platform's GUI language and making the backend in something cross-platform.
And Mac, with an X11 emulator/VM/whatever XQuartz is. And Windows, if you use the two-year-old version that has a ton of issues (UI bugs, glitches, old GTK bugs, etc.).
Running on multiple Linux DE's/OS's is not cross platform as far as I am concerned, even though technically it is lol.
That's not what people mean when they say "cross platform".
"Cross platform" means supporting all of the widely-used platforms at a given point in time, and supporting them well. Today, that includes at least Windows, OS X, and Linux. Some would even extend that to include the BSDs, Solaris, AIX and HP-UX.
Like others have pointed out, GTK+'s support for OS X and Windows has been very, very lacking. It's nowhere near as seamless as that offered by Qt, Swing, or SWT.
There is a lot of truth in this but it does take a very lazy attitude to testing.
The simple truth is that if you need your application to support certain distributions, you need to be there on the development releases testing them and either submitting bug reports or improving your application.
And you can automate much of this with a pile of bootable ISOs and a scriptable virtual machine. Testing isn't new.
I release an application "today" (and "today" for example is when Gtk 3.4 is released), and six months down the line the Gtk team decides to release Gtk 3.6 which breaks just about every application that has a GtkTable. As an application developer, I believe I have a right to be pissed.
How do I file a grievance? I go to the GNOME bug tracker to find my bug. CLOSED: WONTFIX - we don't care, our solution is you port to GtkGrid. Doesn't matter to us that you now have to bifurcate your codebase to work with people still running Gtk 3.0 or 3.2, or choose to ship your own built version of Gtk 3.4 with your application. Fuck you, application developer.
What do I do to mitigate the impact? Well, I have to spin an emergency release of my application, despite the fact that no other code might have changed, just because the Gtk team decided that breaking ABI was no longer a concern for them.
Now multiply this for basically every Gtk cycle from 3.4 to 3.1(3/4) and you realize the problem.
I think you're conflating "what's released and out there" with "what's being used or distributed".
I'm saying you can test on the versions distributions are actually shipping (and the ones they're about to be using). In a lot of distributions you can rely on that staying stable for a period of time.
If your application depends on an older GTK library the solution is simple: depend on that version or ship it with your application. This is pretty unusual in the Linux world but old hat for Windows developers where you can only really depend on Win32; and for anything else your installer makes sure it's present.
And finally, for the third time, I'll state that I'm not saying there aren't legitimate issues with how GTK+ is developed. There clearly are... I'm just saying that some of the things Morten wrote show issues in their own development process as much as anything else.
It's not fair to put all the blame on the end developer IMHO.
I mean, on Windows, Microsoft obviously takes care not to break binary compatibility - even across several generations of OSes. Right now, in my Windows VM, I can run Office 97 on Windows 7 [1].
That's an 18 year old piece of software. And it's still working fine.
I upgrade Ubuntu and it's a flip of a coin whether Google Earth will stop working.
Binary compatibility is a noble goal, but when you have the source code to everything, it can also be advantageous to just rebuild dependents. The Windows source code contains lots of extra complexity because of its legacy compatibility commitments.
Being mindful of ABI compat, though, is important because rebuilds mean more package downloads for end users and more builds for distros. That doesn't mean we should never, ever rebuild stuff, though.
Can you explain how testing actually solves this problem? The problem is multifaceted: it involves the behavior changes of Gnome, which testing will find, and the unwillingness of some distros (I would imagine Debian is in there) to release new software for reasons other than security fixes, which testing doesn't and cannot address.
The author makes it clear updating his code to fix the problems is not the problem.
That depends on what you consider the problem in the scale of things. Needing to fight library versions is a pain, but if you know about it (which I'm saying you can), you can do things to ensure your software (your responsibility) keeps working.
Testing helps you find issues before your users start knocking on your door and whining about broken software. They expect you to test. Is that fair? Not even the smallest bit... But it's the way things work.
If you don't give a crap about your users, and you only do this as a programming exercise, let your users be your first line of testing.
Again, I'm not saying that the problems described aren't real (or even that they're not frequent) but if you want to live in an ecosystem, you have to become a part of that. Get stuck in and fix these problems as best you can.
---
And maintaining security-only release structures is designed to make this very problem predictable and maintainable. You know what's going to be in the release a month, two months before it's out. Testing and fixing then means your app works for the life of that release.
FYI, for those who want to write cross-platform GUI apps but prefer to avoid C++ (even the mildly nicer Qt flavor...), there are alpha-quality go bindings for QML.
Folks who want to develop cross-platform GUI software should use Lazarus. It works on Linux, Windows and OS X. Getting started can be dicey, but once you are up and running it simply works. The LCL doesn't use the latest Qt 5 or Gtk 3, but it just works.
Linux guys should stop faffing about with endless compile times in C++, buffer overruns and pointer exceptions. They should use FreePascal and Lazarus and retain their sanity.
I'm actually teaching myself Xt Intrinsics and Xaw, because "lateral thinking with withered technology". It's crazy, I know it's crazy, but when I read about what a moving target the new hotness is, I wonder if there isn't some kernel of wisdom in looking to the battle-tested technology of the past for inspiration.
>How does one shield oneself from this, i.e., how does one ensure that the binary compiled (say) three years (or months) ago continues to work reasonably?
Static linking should save you from this hell. I don't know if GTK even supports it; I know glibc does not, which is a shame.
ABI compatibility should not be a huge problem for distributions. They can just recompile all the packages linking against GTK+ when they update it.
The crazy stuff is having your window decorations disappear when running a GTK application under openbox because somebody thought it would be funny to screw with gtk+-3.12 [1].
Exactly. I feel like this post was blaming Gtk+ when it seems like the real problem was the distro not realizing that their package dependencies didn't trigger a rebuild.
If something breaks ABI compatibility, that doesn't mean it is API incompatible. It just means all you need to do is recompile dependents and everyone's happy. It sounds like almost all of the problems in this post were caused by a failure to rebuild dependents. That's a general problem that can affect any library, not just Gtk+.
Sure, just relicense your application under the GPL. You're getting these libraries for free. If you're not at all willing to contribute back, then you're obviously not the target audience.
If you check Gtk+'s history, I have contributed back on numerous occasions.
Unfortunately, convincing the people who write my paychecks, VMware, to contribute Workstation and Player to the Open Source community seems to be a pretty big stretch.
"(1) If you statically link against an LGPL'd library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application." - http://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic
So there you have it. Link statically against the LGPL library and package the object files of your commercial application.
This is why I prefer Crunchbang Linux (or more generally, a good OpenBox-based distro) to Xfce or Gnome 3 these days. I can freely switch between using a GTK app and its Qt or other toolkit-based equivalent, assuming such an equivalent exists, without the risk of breaking anything on the GUI side. With Qt in particular becoming much more modular, I also don't end up with hundreds of megabytes of KDE bloat to support one 10MB app.
Even though Qt and wxWidgets might be better (wxWidgets is really more a wrapper around different GUI backends), IMHO Linux lacks a proper GUI toolkit.
Creating GUI applications is much better on both Windows and OS X. Cocoa was a fairly well-designed base API that gradually got improved. (Yeah, there is a lot of valid criticism here too, but we're comparing it to Gtk in this case.)
We should build something that can gradually phase out GTK, as was done with Carbon. If Linux had a nice GUI toolkit, people would write more Linux desktop applications.
Just my opinion. So feel free to completely disagree with it.