I’m not sure how I feel about this. I’m not an expert by any means.
But something just doesn’t feel right when Unicode encodes a character with only one known use from forever ago.
Doesn’t this open up the floodgates to either a ridiculous amount of work or biased gatekeeping?
How much work would it be to implement your own font covering the entire Unicode set? Or is that not actually a thing, and fonts just implement whatever subsets they want?
There are quite a few such characters in Unicode, because academic articles about things like cuneiform need to be digitized too. And because the historical record is so sparse, we often have vanishingly few examples of a character, or only one, and perhaps no way to know whether it was a misprint or a real character.
Actually this character seems like a scribe's joke, no different from the illuminated initials at the beginning of medieval paragraphs (all of which are represented in Unicode as plain A, B, or whatever). But the point still holds.
It's not just the articles, it's digitization of the texts themselves and email conversations. Using real characters opens the door to computational textual analysis (you can, say, normalize the text first by replacing this character with 'o' -- much harder to do on a bunch of tiny images).
Plus there's no shortage of space in the Unicode address space.
> How much work would it be to implement your own font of the entire unicode set? Or is that not actually a thing and fonts implement as-desired subsets?
You can't, and you are not expected to do so. You are limited by the OpenType glyph limit (65,535 glyphs per font), various shaping rules that can further increase the number of required glyphs, and the lack of local or historical typographic conventions to follow. Your best bet is either to recruit a large number of experts (e.g. Google Noto fonts) or to significantly sacrifice quality (e.g. GNU Unifont).
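The mismatch between the two address spaces is easy to quantify. A quick sketch (Python, standard library only) comparing Unicode's code point space to the OpenType per-font glyph cap -- the "18 files" figure is a lower bound for illustration and ignores shaping, which usually needs several glyphs per character:

```python
import math
import sys

# Unicode code points span 17 planes of 0x10000 each: U+0000..U+10FFFF.
code_points = sys.maxunicode + 1
print(code_points)  # 1114112

# OpenType glyph IDs are 16-bit, so a single font file tops out at 65,535 glyphs.
glyph_cap = 0xFFFF
print(glyph_cap)  # 65535

# Even at one glyph per code point, covering everything would take
# this many separate font files at minimum:
print(math.ceil(code_points / glyph_cap))  # 18
```

In practice the gap is even wider, since ligatures, contextual alternates, and complex-script shaping all consume additional glyph slots beyond one-per-code-point.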
A single OpenType font file is limited to 65,535 glyphs. Nothing stops your font from being implemented as a series of .otf files (besides what people think of as a "font" when it comes to usage on computers).
But yes, time constraints are the limiting factor. I don't think anyone is going to dedicate their entire life to making a single font.
While you are right that one logical font can consist of multiple font files (or possibly an OpenType collection), this constraint does affect real fonts: in particular, wide-coverage CJK fonts already hit this limit. Fonts supporting only one of Chinese, Japanese, and Korean don't need that many glyphs, and even two of them will probably be okay, but fonts with all three sets of glyphs won't fit. It is therefore common to ship three differently named versions of such fonts.
You could also go the shady route and just make a font out of all the "reference character sheets" that the Unicode site has. Probably not legal and the result would not be pleasant to read, but that's one way to create a font containing all of Unicode.