Greyscale-AA is the “natural” approach to anti-aliasing. The basic idea is to give partially covered pixels partial transparency, in proportion to how much of the pixel the glyph actually covers. During composition, such a pixel gets tinted only slightly, as if it really were only partly covered, which smooths edges and preserves detail.
It’s greyscale because that’s the term used for one-dimensional color, which matches our one-dimensional transparency (glyphs are otherwise usually a single solid color). Also, in the common case of black text on a white background, the anti-aliasing literally shows up as greyness around the edges.
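As a minimal sketch of that idea: treat each edge pixel's coverage fraction as its alpha and do a standard source-over blend against the background (the function name and color representation here are made up for illustration).

```python
def blend_glyph_pixel(coverage, text_color, background):
    """Source-over blend of one glyph pixel onto an opaque background.

    coverage: fraction of the pixel covered by the glyph outline (0.0-1.0),
    used directly as the pixel's alpha.
    Colors are (r, g, b) tuples with channels in 0-255.
    """
    alpha = coverage
    return tuple(
        round(alpha * t + (1.0 - alpha) * b)
        for t, b in zip(text_color, background)
    )

# Black text on a white background: a half-covered edge pixel comes out grey.
print(blend_glyph_pixel(0.5, (0, 0, 0), (255, 255, 255)))  # (128, 128, 128)
```

That grey result is exactly the "greyness around the edges" described above.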
Subpixel-AA is a trick that abuses the common way pixels are laid out on desktop monitors.
When characters are missing from fonts, it’s nice to be able to communicate to the user that this happened. This is the “tofu” glyph. Now, you can just draw a blank tofu (a rectangle) and leave it at that, but if you want to be helpful you can write out the value of the missing character so it can be debugged more easily.
But, wait, we’re using text to explain that we can’t draw text? Hmm.
You could appeal to an assumption that the system must have a basic font that can draw 0-9 and A-F, but for those who expect to truly Destroy Their Tools With Their Tools you can do what Firefox does: the microfont!
Inside Firefox there’s a little hardcoded array describing one-bit pixel art of a tiny font atlas for exactly those 16 characters. So when drawing tofu, it can blit those glyphs out without worrying about fonts.
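To make the trick concrete, here's a toy version of such a hardcoded one-bit font (this is not Firefox's actual array; the bitmaps and names are invented, and only a subset of the 16 characters is sketched):

```python
# A hardcoded one-bit "microfont": each glyph is 3x5 pixels,
# one string of bits per row.
MICROFONT = {
    "0": ["111", "101", "101", "101", "111"],
    "1": ["010", "110", "010", "010", "111"],
    "F": ["111", "100", "110", "100", "100"],
}

def render_hex(text):
    """Blit the glyphs for `text` side by side as rows of '#'/'.' art."""
    rows = []
    for y in range(5):
        row = " ".join(MICROFONT[ch][y] for ch in text)
        rows.append(row.replace("1", "#").replace("0", "."))
    return rows

# The tofu for U+001F could carry the label "1F":
for line in render_hex("1F"):
    print(line)
```

Because the bitmaps live in the binary itself, drawing them never touches the font machinery that just failed.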
If you naively respect a user’s request for a very large font (or very large zoom level), you will run into extreme memory management problems with the size of your glyph atlas, as each character may be bigger than the entire screen.
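A bit of back-of-the-envelope arithmetic shows why (the sizes here are just illustrative):

```python
def glyph_atlas_bytes(width_px, height_px, bytes_per_pixel=1):
    """Memory needed to cache one rasterized glyph at a given pixel size."""
    return width_px * height_px * bytes_per_pixel

# One glyph rasterized at a 4K screen's full height, 8-bit greyscale:
mib = glyph_atlas_bytes(2160, 2160) / (1024 * 1024)
print(f"{mib:.1f} MiB")  # ~4.4 MiB for a single glyph -- RGBA would be 4x that
```

Multiply that by every distinct character on screen and a naive atlas becomes untenable, which is why renderers tend to clamp rasterization sizes or fall back to drawing the glyph's outline directly.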
When it comes to rendering text, the overarching theme will be: there are no consistent right answers, everything is way more important than you think, and everything affects everything else.
Emoji generally have their own native colors, and this color can even have semantic meaning, as is the case for skin-tone modifiers. More problematically: they have multiple colors!
As far as I can tell, this wasn’t really a thing before emoji, and so different platforms approach this in different ways. Some provide emoji as a straight-up image (Apple), others provide emoji as a series of single-color layers (Microsoft).
The latter approach is kinda nice because it integrates well with existing text-rendering pipelines by “just” desugaring a glyph into a series of single-color glyphs, which everyone is used to working with.
However that means that your style can change repeatedly while drawing a “single” glyph. It also means that a “single” glyph can overlap itself, leading to the transparency issues discussed in an earlier section. And yet, as shown above, browsers do properly composite the transparency for emoji!
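A one-pixel sketch of why the self-overlap matters, using standard Porter-Duff source-over on premultiplied-alpha colors (the scenario and helpers are invented for illustration): two opaque layers of a layered emoji overlap, and the whole glyph is drawn at 50% opacity.

```python
def over(src, dst):
    """Porter-Duff source-over for premultiplied-alpha (r, g, b, a) tuples."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    inv = 1.0 - sa
    return (sr + dr * inv, sg + dg * inv, sb + db * inv, sa + da * inv)

def mul_alpha(rgba, a):
    """Scale a premultiplied color's channels (including alpha) by `a`."""
    return tuple(c * a for c in rgba)

red = (1.0, 0.0, 0.0, 1.0)
yellow = (1.0, 1.0, 0.0, 1.0)
transparent = (0.0, 0.0, 0.0, 0.0)

# Wrong: apply the 50% to each layer before compositing -- the lower
# layer bleeds through where the layers overlap.
wrong = over(mul_alpha(red, 0.5), over(mul_alpha(yellow, 0.5), transparent))

# Right: composite the layers at full opacity offscreen, then apply 50% once.
right = mul_alpha(over(red, over(yellow, transparent)), 0.5)

print(wrong)  # (0.75, 0.25, 0.0, 0.75) -- yellow leaks into the overlap
print(right)  # (0.5, 0.0, 0.0, 0.5)  -- pure half-transparent red
```

Getting the "right" answer requires compositing the glyph's layers into a separate buffer first, which is precisely the extra work browsers are doing when they composite emoji transparency properly.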
Just so you have an idea for how a typical text-rendering pipeline works, here’s a quick sketch:
1. Styling (parse markup, query system for fonts)
2. Layout (break text into lines)
3. Shaping (compute the glyphs in a line and their positions)
4. Rasterization (rasterize needed glyphs into an atlas/cache)
5. Composition (copy glyphs from the atlas to their desired positions)
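The stages above can be caricatured end-to-end with a toy monospace "font" where every character is one glyph and one cell wide (every simplification here is deliberate; each line stands in for an enormously more complicated real step):

```python
def render_text(text, max_cols):
    """Toy text pipeline: styling, layout, shaping, rasterization, composition."""
    # 1. Styling: pretend every character resolved to the same font.
    styled = [(ch, "toyfont") for ch in text]
    # 2. Layout: naive line breaking every max_cols characters.
    lines = [styled[i:i + max_cols] for i in range(0, len(styled), max_cols)]
    # 3. Shaping: one glyph per char, advancing one cell at a time.
    shaped = [[(ch, x) for x, (ch, _font) in enumerate(line)] for line in lines]
    # 4. Rasterization: cache each needed glyph once in an "atlas".
    atlas = {ch for line in shaped for ch, _x in line}
    # 5. Composition: copy glyphs from the atlas to their positions.
    return ["".join(ch for ch, _x in line) for line in shaped], atlas

lines, atlas = render_text("hello text", 6)
print(lines)  # ['hello ', 'text']
```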
Unfortunately, these steps aren’t as clean as they might seem.