One of the key takeaways from Andrew Ng's course on AI was that AI is not some kind of magic. There's always someone who attributes magical abilities to "lie detectors" and similar wonders of technology, and I think he was cautioning against that kind of thinking. AI cannot do things a human expert is not capable of, for example predicting the stock market with 100% accuracy; it can only make predictions roughly as good as a human expert's.
In other words, if someone wanted to match your writings by linguistic analysis, and that were possible with AI, then they could already do it today with human experts. The fact that you don't see it happen, and that anyone who tried would be met with great skepticism, suggests that what you imagine probably isn't possible.
Strongly agree that AI isn't magic, but I think you're making too broad a statement here. AI can certainly be superhuman in some areas, e.g. chess and Go. Whether human-level performance is the ceiling depends on how the training data is created. If you have to rely on human experts to produce the labels (this is a dog, this is a cat, etc.), then it's going to be hard to build a system that beats human performance. But for chess and Go, you can get around that with self-play.
In the case of matching writings, you can get around it by having a bunch of people each produce several pieces of writing. Even if no human expert could tell whether two pieces were from the same person, you can still give the network the correct labels during training, because you know the ground truth of who wrote what.
Of course, a model can only work with information that actually exists in the text. My gut says that writing leaks plenty of information about identity, so it should at least be possible to identify the author of large chunks of text (say, over 1,000 words). But I could be wrong about that.
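To make the "we know the ground truth of who wrote what" idea concrete, here's a toy stylometry sketch. It uses character trigram frequencies, a classic (and very crude) stylometric feature, to build a per-author profile from labeled samples and attribute new text to the nearest profile by cosine similarity. The author names and sample texts are invented for illustration; a real system would need far richer features and far more text.

```python
import math
from collections import Counter

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse trigram-count vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def attribute(sample: str, profiles: dict[str, Counter]) -> str:
    """Return the author whose profile best matches the sample."""
    sp = trigram_profile(sample)
    return max(profiles, key=lambda author: cosine(sp, profiles[author]))

# Hypothetical labeled training data: the "ground truth of who wrote what".
training = {
    "alice": "Indeed, the committee shall convene forthwith; we must "
             "deliberate carefully upon the proposal before us.",
    "bob": "lol yeah gonna grab some pizza later, wanna come? that movie "
           "was totally awesome btw",
}
profiles = {name: trigram_profile(text) for name, text in training.items()}
```

With such exaggerated style differences, `attribute("We shall deliberate upon the committee's proposal.", profiles)` picks out "alice"; with realistic authors the signal is much weaker, which is exactly why lots of labeled text per author matters.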
Heh, my day-to-day experience of AI is cutting-edge machines from trillion-dollar companies trying to sell me *takes a quick look* diamond earrings, hair dryers, and universal remote controls. As a man who doesn't own a TV, I find it hard to feel threatened by AI tracking me online.
Realistically, using language models to change your writing style is probably going to stay a lot easier than using them to identify people.
The NSA already knows, but they don't care about that.