I'd imagine that it's going to end up both getting somewhat better and somewhat worse.
2011 is around the time that programmers start taking undefined behavior seriously as an actual bug in their code rather than in the compiler, especially as we start to see the birth of tools that diagnose undefined behavior the compilers didn't (yet) take advantage of. There's also a set of major, language-breaking changes to the C and C++ standards that took effect around then (e.g., C99 introduced inline with different semantics from gcc's extension, which broke a lot of software until gcc finally switched its default from gnu89 to gnu11 with gcc 5 in 2015). And newer language versions tend to make obsolete the hacky workarounds that end up being the most brittle, because they rely on incidental language complexity (e.g., constexpr-if removes the need for a decent chunk of template metaprogramming that relied on SFINAE, a concept which is difficult to explain even to knowledgeable C++ programmers). So in general, newer code is likelier to be substantially more compatible with future compilers and future language changes.
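To make the constexpr-if point concrete, here's a minimal sketch (my own illustration, with made-up describe/describe17 functions, not anything from the post) of the same type-based dispatch written first as a pre-C++17 enable_if/SFINAE overload pair and then with C++17's if constexpr:

    #include <iostream>
    #include <string>
    #include <type_traits>

    // Pre-C++17: pick an implementation via SFINAE. The overload whose
    // enable_if condition fails is silently dropped from overload resolution.
    template <typename T>
    typename std::enable_if<std::is_arithmetic<T>::value, std::string>::type
    describe(T) { return "arithmetic"; }

    template <typename T>
    typename std::enable_if<!std::is_arithmetic<T>::value, std::string>::type
    describe(T) { return "not arithmetic"; }

    // C++17: the same dispatch as one ordinary-looking function;
    // the branch not taken is discarded at compile time.
    template <typename T>
    std::string describe17(T) {
        if constexpr (std::is_arithmetic_v<T>)
            return "arithmetic";
        else
            return "not arithmetic";
    }

    int main() {
        std::cout << describe(42) << '\n';                  // arithmetic
        std::cout << describe17(std::string{"hi"}) << '\n'; // not arithmetic
    }

The second version states its intent directly instead of encoding it in overload-resolution failures, which is exactly the kind of thing that tends to keep working as compilers and standards move on.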
But on the other hand, we've also seen a greater trend towards libraries with less-well-defined and less stable APIs, which means anyone in the future trying to work with old versions is probably going to have a rougher time getting all the libraries to play nice with each other. Even worse, modern software tends to be a lot more aggressive about dropping compatibility with obsolete systems. Accessing the modern web with decade-old software, for example (as mentioned in the blog post), is going to be incredibly difficult.