For the past few months I’ve been rewriting the text layout engine used by Krita’s text tool. This is not the same as the text tool itself, which is still a very small rich text editor, but it is a prerequisite to getting any kind of new features into the text shape. We haven’t made any real improvements to text since the work for the last fundraiser we held for it, and that is because this rewrite needed to happen first. It is a lot of work, and we had vowed to take care of resource management first, which, uh, took so long and was so intensive that it covered the whole development cycle from 4.0 to 5.0, a span of roughly 5 years. I’m not the only developer who can finally tackle a sore point: there’s work being done on audio, lots of file format updates, work on assistants, technology upgrades and more… But this blog is about text.
The purpose of text in Krita is to provide artists with an easy way to add text to images. One of the primary use cases is adding text to comics, but other uses, such as adding small paragraphs of text, captions and headers, are also supported. It does not have to be able to show pages of text, or fill the whole canvas, but it is expected that artists can convert a text to a path so they can make fine-grained adjustments. The latter is usually done for sound effects in comics and for graphical titles.
We previously used QTextLayout for our text shape layout. QTextLayout is used for Qt’s own text, in its labels and text edits. We had initially hoped it would be sufficient for putting text on an image, but sadly we kept coming across issues, partially ones caused by SVG having more complex needs, but also ones caused by QTextLayout not being designed for anything but Latin text. This includes things like no support for vertical text layout, but also, more painfully, certain technical choices which lead decorative properties like underlines to break joined scripts. We had to disable accelerators for our menus because of this; they made text that uses Arabic look glitchy. We also had an interesting bug caused by Qt setting the scaling on a text object in a way we can’t access when only using QTextLayout, meaning all our font sizes were wrong.
XML-based markup like SVG also has some features which can only be computed if you understand the markup as applying to a tree of nodes, while many text layout engines prefer to think in terms of ranges of text with a format applied to them. This and other SVG-specific features mean that we need to be able to customize significantly how certain features are done, and with QTextLayout that generally resulted in a lot of workarounds, which in turn made the code quite intimidating.
Pango is another commonly used text layout engine, specifically the one used by all GTK projects. Inkscape uses it for its text layout. However, studying Inkscape I found that they too had workarounds for Pango, which, after the hell that was dealing with QTextLayout, was something I had no interest in. Far more troublesome is that I couldn’t figure out how to use Pango without Cairo. Cairo is a painting engine, specifically one used by the likes of Scribus and Inkscape for their vector drawing capabilities. But the thing is that we already have two painting engines inside Krita: the one used for text and vectors, QPainter, and our own KisPainter. KisPainter is the one that is fully colour managed, the one that can handle floating point depths, and the one we optimize the most, and I would like it if we could at some distant point switch our vector shape drawing code to KisPainter as well. So adding Cairo just to draw text would go in the exact opposite direction of where I’d like to go.
Scribus is interesting here because while it uses Cairo to draw, it doesn’t use Pango and has its own text layout code. The biggest difference between Scribus and Krita in this regard is that being a Desktop Publishing program, text layout is one of the core features of Scribus, so I imagine for them it was a case of “we absolutely need full control of the code”, where with Krita we would’ve been fine with a halfway solution as long as it didn’t result in bug reports that we couldn’t fix because we were using a library in a way it was never expected to be used.
At the same time, you absolutely don’t want to start from scratch. So our text layout doesn’t implement its own Unicode functions and algorithms, using libraries like FriBidi, libunibreak and Qt’s own offerings where it can, as well as using Fontconfig for locating fonts and FreeType for retrieving glyphs from the font. The library that has made everything possible, however, has been Raqm, which handles itemization (breaking text up into runs of similar font and script), calling FriBidi for the bidirectional algorithm and finally calling HarfBuzz for shaping (selecting the correct glyphs for a given string of text). We were able to submit patches for UTF-16 support, as well as some other small things, and it has greatly simplified laying out text, to the point that I was able to get the basics of SVG 1.1 text up and running in about… a week or so. The only downside for us as a KDE project is that it requires Meson + pkg-config, while KDE projects generally use CMake. HarfBuzz too seems to be going this route, so it can’t be helped.
So, having laid out which dependencies we’re using, I will now go over the peculiarities and problems I encountered, in… the order in which I am creating or updating the relevant automated tests.
Going over the tests.
Now, you might be thinking: “Shouldn’t you start with tests when dealing with something so reliant on an official spec?” And you’d be right, and I did always have tests that I checked against. They just weren’t automated:
The benefit of a big sheet like this, aside from looking cool, is that I could load it into Inkscape and Firefox from time to time to see how compatibility was looking.
The real reason I avoided automated tests was that we already had a ton of tests, and a lot of them were broken to begin with, probably not because the code didn’t work, but because the fonts weren’t available. Which brings us to the first topic:
So, typically font selection in Qt goes via QFontDatabase, which is both its own database and a front end to the various platforms’ preferred font selection mechanisms. I had not wanted to touch this, as it is perfectly decent at its job, save for the fact that it behaves oddly when you try to query the font-stretch of a given QFont (it always reports 0). Sometimes you come across applications which hang on the “font loading” stage, or which lag when you open the font combo box, and this is because they don’t cache the fonts. QFontDatabase does, which makes it a huge pity that there’s no way to get one’s hands on the filename of a given font. And we need that, because we need those filenames to load the relevant FreeType faces. So one of the first things I had to implement was using Fontconfig to get the fonts.
The way Fontconfig is typically used is that you create a pattern to which you add the things you want to search for. This is stuff like family names, but also font-weight (in combination with the FcWeightFromOpenType function), italics, and, most usefully, language. Fontconfig apparently keeps a list of characters for each language, allowing you to specify a language so that it will give preference to fonts that have characters for that language. This is kind of neat, given that a lot of font families need to be split up into smaller files because font files are (currently) not able to encode a glyph for each Unicode code point. I’m telling you all this because it isn’t actually written in the documentation; I had to read a whole lot of existing code and issue tracker discussions to learn it myself.
On the flip side, we can now implement some CSS features QFontDatabase wouldn’t make possible, like a list of font families for fallback. For this I am first creating the pattern, then calling the sort function. Then I take the text I am matching for, and split that up into graphemes using libunibreak. Then for each grapheme I try to find the best match for the whole grapheme and mark those as needing that font.
You want to do this for the whole grapheme because emoji sequences, for example, are joined by zero width joiners, and if you do per-codepoint matching instead of matching the whole grapheme, part of the sequence might get matched to a different font. That means that during the itemization process (where Raqm takes the text and splits it up into runs of text with the same font and script to pass to HarfBuzz) the parts of the sequence will end up in different runs, so HarfBuzz cannot select the appropriate glyph for the sequence. If you have ever seen an emoji sequence rendered as a sequence instead of a single emoji despite the font supporting it, this is the reason. Emoji aren’t the only place where this happens, though it is the easiest to test. Variation selectors and combining marks are also best kept in the same font, but this is much harder to test for folks who don’t speak languages that use either. Anyway, the matched graphemes are then merged into bigger runs of the same font, and then passed to Raqm.
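The merging step at the end can be sketched in a few lines. This is a Python illustration of the idea, not Krita’s actual C++ code, and the font names are made up; the per-grapheme matching itself goes through Fontconfig as described above:

```python
def merge_grapheme_matches(graphemes, chosen_fonts):
    """graphemes: list of strings; chosen_fonts: one font id per grapheme.
    Folds adjacent graphemes that resolved to the same font into
    (font, text) runs, so itemization never splits a sequence that
    was deliberately matched as a unit."""
    runs = []
    for grapheme, font in zip(graphemes, chosen_fonts):
        if runs and runs[-1][0] == font:
            # same font as the previous grapheme: extend the run
            runs[-1] = (font, runs[-1][1] + grapheme)
        else:
            runs.append((font, grapheme))
    return runs
```

The resulting runs are what would be handed to the itemizer, so a whole emoji sequence travels together as long as it matched a single font.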
I still have no idea whether this is going to work on all platforms, given Fontconfig is very Linux-specific.
The first tests I created in this area were a test for the grapheme breaking (given we rely on libunibreak for this, it is more a test for our code that splits a string into a string list based on libunibreak’s guidance), and a test for loading fonts that are part of the unit tests. Technically, CSS does have a mechanism for adding outside font files with @font-face, but our parser doesn’t support that feature, and implementing it could get quite complicated, with additional questions like ‘should we then support embedding fonts in .KRA files?’, so for now it’s a much simpler method that is only available to the tests.
Now able to ensure we have the test fonts available, the next two tests were ones adapted from the web-platform-tests: a test for selecting bold, and one for selecting fonts of the correct font-weight on a font with OpenType variations (not to be confused with CSS font-variant, or with Unicode variation selectors). The latter broke because we tried to cache the fonts to speed up layout, which meant that if you configure the same font for multiple variations (or font sizes), it would only use the last configuration.
More font tests followed.
Better font fallbacks for Unicode variation selectors.
The Unicode variation selectors needed custom test fonts: one containing only a glyph, and another with both the glyph and a variation selector, because what we want in the end is that if no font has the given glyph plus the variation selector, the next best font (the one with only the glyph) should be taken. Unicode graphemes start with the most important codepoint, so this is a case of keeping partial matches around in case no complete match is found.
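The partial-match logic can be sketched roughly like this. It is a Python illustration under the simplifying assumption that a font is just a set of covered codepoints; the real matching goes through Fontconfig:

```python
def select_font(grapheme, font_coverage):
    """font_coverage: dict mapping font name -> set of codepoints it
    supports. Prefer a font covering every codepoint in the grapheme
    (base + variation selector); otherwise fall back to the best
    partial match, which at minimum covers the base codepoint."""
    codepoints = [ord(c) for c in grapheme]
    partial = None
    for font, coverage in font_coverage.items():
        if all(cp in coverage for cp in codepoints):
            return font          # complete match: base and selector
        if partial is None and codepoints[0] in coverage:
            partial = font       # remember the first partial match
    return partial
```

With a snowman plus text-style selector (U+2603 U+FE0E), a font covering both wins; if no font covers the selector, the font with only the snowman is used.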
Most fonts these days are vector based instead of raster based, and while we eventually need to render to bitmap to display the glyphs, Krita instead takes the glyph outline from FreeType and renders it by itself later. This allows us to support things like color gradients and pattern fills, as well as SVG strokes, for the text outline. Still, there are fonts out there without outlines, often quite old ones.
Now, Krita’s support for rendering these isn’t great, because in our vector layer coordinate system everything is in points (1/72 of an inch), and we cannot figure out how many pixels there are per inch (PPI) inside the vector shapes. This is a pain not only for bitmap fonts, but also for glyph outlines, as font hinting will make FreeType return adjusted outlines depending on what it thinks the PPI is. This is very much a legacy thing that dates back to the time of KOffice, but no one has ever had the time to look into it…
… And neither did I. Right now there’s a hack in place that surmises the PPI during the painting code and, based on changes there, reruns the text layout algorithm. It isn’t great, but it works. For the test I made a quick bitmap font using FontForge with several different sizes for a single glyph. The test then checks which size the code selects depending on which size is requested. This is important for color fonts, as one of the types of color font (the kind used for emoji) is a bitmap font. There’s one big difference between old-fashioned pixel fonts and the new color pixel fonts: the latter should get resized to the desired size if the selected size is too big. I haven’t put in a test for that, because I have no color bitmap font (specifically of the CBDT type) to test with yet.
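The strike-selection policy that the test exercises can be sketched as follows. This is one plausible policy, written in Python, and not necessarily the exact rule Krita’s C++ code implements:

```python
def pick_strike(available_sizes, requested):
    """Pick a bitmap strike (fixed pixel size) for the requested size:
    prefer the smallest strike at or above the request, since a color
    font can be scaled down to fit; otherwise take the largest strike
    available. A plausible policy, not Krita's verbatim rule."""
    at_or_above = [s for s in sorted(available_sizes) if s >= requested]
    return at_or_above[0] if at_or_above else max(available_sizes)
```

So a font with 8, 12 and 16 pixel strikes serves a 10-pixel request with the 12-pixel strike, and a 20-pixel request with the 16-pixel one.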
After those tests, it was finally time for the main event.
SVG 1.1 tests.
Text support in SVG 1.1 allows for laying out a single line of rich text and then applying transformations to it. So we can move around and rotate specific glyphs, make them follow a path, or use textLength to squeeze or stretch them into a specific shape, and finally, you can select whether a text is anchored at the start, middle or end.
The most common misconception I’ve seen regarding SVG 1.1 text is the notion of a text chunk. Some people think they are what tspans define, that is, a styled chunk of text, but they’re not. They are absolutely positioned pieces of text, which can be anchored in different ways. The official SVG 2.0 algorithm even calls them ‘anchored chunks’, which is probably the better name. This is important, because SVG 2.0 introduces a big change with regard to text chunks. Before, each text chunk had to be shaped separately; now, all text chunks are laid out in one go, meaning that shaping (which is necessary for joined scripts like Arabic) doesn’t break across chunk boundaries as long as the font object doesn’t change. There’s some discussion in issue 631 about whether this is undefined behaviour and thus can differ between implementations, but the SVG 2.0 text layout algorithm makes this implicit, as it says to lay out SVG 1.1 text as a single line of unbounded length with a CSS-based text renderer. CSS-based text renderers don’t know about SVG text chunks, so the shaping (and bidirectional algorithm) behaviour is defined. This has consequences, but more on that later.
This is the point where, in test land, I started fixing the old tests. To my pleasure, most of the tests had merely some anti-aliasing differences between QTextLayout and my implementation. So I mostly spent time ensuring that the necessary font is loaded, and then took the string of SVG being tested and moved it into an external file, so I could occasionally check in other programs how they handled the test.
Some of the tests actually got extended a little. For the transforms I had to add a test for vertical text, as well as one for per-glyph rotation, as we had neither before. For the test that checks whether different fills can be applied to different parts of the text, I added ligatures and a combining mark to see how it would handle those. What is supposed to happen is that the first color assigned to a grapheme is used for the rest of the grapheme, even if the remaining code points are assigned a different color, as this gives the most consistent result. Interestingly, this only works with font caching enabled, as that ensures spans that otherwise have the same font will also use the same font object, meaning the text won’t get split up during itemization (and HarfBuzz, which handles the ligatures, cannot create them when the parts are in different runs of text). We generally don’t want text to get split up like that (because it messes up joined scripts), so we’re going to need to find a way to solve this properly.
Among the tests that broke were attribute tests, because in the shift to SVG 2.0, glyph-orientation-vertical, which takes an angle, needs to be converted to text-orientation, which takes keywords, along with more such small conversion problems. Text-orientation controls whether glyphs of horizontal scripts get rotated when laid out in vertical text. Krita doesn’t do much with this attribute yet, as we still need to implement support for it in Raqm.
As an aside, you might be wondering how we’re dealing with some of the more intricate features of CSS, like padding, and stuff like display:table-block. And the answer is that we don’t have to: all child nodes of a text in SVG can only be inline, and SVG doesn’t use the CSS box model. This is because it would otherwise truly become too complicated.
The old test that gave me the most trouble was the right-to-left text. Now, here is where it starts to become ‘interesting’. The actual bug was fixed once I remembered to set the first absolute x/y position to 0,0 if it was not set otherwise, but I still kept getting issues. You see, when I open the test up in different browsers, I get different results:
My result in Krita was the same as Firefox’s, and it looked really wrong. Chromium looks the most correct here. However, after much contemplation, Firefox is the one that is correct. This is a side effect of SVG 1.1 text being laid out as a single line, as it means bidi reordering is done over the whole paragraph. So a right-to-left text with two text chunks, which respectively end and start with left-to-right text, will have those end parts flipped around.
And then, when we start positioning the text, a gap appears because of the flip of the two sets of glyphs. Arguably you could fix this by repositioning those glyphs so they’re snug together, or maybe something can be done with the bidi algorithm’s control characters. I’m hesitant, however, because we’re not laying out a line (which would warrant the bidi algorithm being applied only to the characters in the line, after line breaking); we’re positioning a logical chunk of text… So I’m unsure what to do here, and had to conclude Firefox is correct. I’ve submitted an issue to the SVG working group tracker.
Anyway, after this adventure, you can imagine why all the new tests are in triplicate: One for left-to-right, one for right-to-left and one for vertical.
TextLength is an SVG feature where text is stretched or squashed to fit a particular length, with an option to only transform the spaces in between letters or the glyphs themselves. It can be nested, meaning that a tspan with this feature enabled can have a child node that also has a textLength.
The SVG 2.0 algorithm for this is mostly correct. Because it is possible to have nested text lengths, it needs a recursive function: you go down the tree, first handling the textLength of the child nodes, and then that of the parent.
There are two things it misses, though: glyphs that follow a stretched piece of text need to be adjusted too, up to the end of the anchored chunk. Furthermore, all glyphs need to be adjusted in the ‘visual order’, that is, the order after the bidirectional algorithm is done with the text, not the ‘logical’ (pre-bidi) order. This means that when you’re done with a node, you need to afterwards look forwards (or backwards for right-to-left) until the next anchored chunk begins or the text ends, note down the visual index, and then adjust those glyphs in visual order, by the total amount of shift the text length causes, as long as their visual index is higher than the last one you adjusted.
Also, the SVG 2.0 algorithm is set up for adjusting spacing only, meaning that the last character is not taken into account for the shift delta, nor adjusted. If you are transforming both spacing and glyphs, you will want to include the last character so everything stretches nicely.
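In Python, the shift computation with that tweak might look as follows. Glyph advances and the target length are in the same (arbitrary) units, and this is a simplified sketch that only computes per-glyph shifts, not the scaling of the glyphs themselves:

```python
def text_length_shifts(advances, target, spacing_and_glyphs):
    """Per-glyph x shifts for a textLength adjustment. With
    spacing-only adjustment the delta is divided over the gaps
    between characters and the last glyph is not itself counted;
    with spacingAndGlyphs the last glyph counts too, so everything
    stretches uniformly. Simplified sketch, shifts only."""
    n = len(advances)
    current = sum(advances)
    gaps = n if spacing_and_glyphs else n - 1
    if gaps <= 0:
        return [0.0] * n  # nothing to distribute over
    delta = (target - current) / gaps
    # each glyph shifts by the accumulated delta of the gaps before it
    return [i * delta for i in range(n)]
```

Glyphs following the stretched span (up to the next anchored chunk) would then all be shifted by the total delta, in visual order, as described above.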
After all these adjustments, text length starts working as it says it should work, and you’ll have a perfectly working SVG 1.1 feature no one really uses. So, from there we’ll now go to the SVG 1.1 feature that everyone wants to use…
Text on Path
Being able to arrange text so it follows a path is a pretty common use case for the kind of typesetting that tries to mimic calligraphy and lettering (whether these are two separate disciplines depends on how you approach calligraphy). So greeting cards, poster titles, etc.
The SVG 2.0 algorithm is clear here, and you should follow it. However, there’s a few caveats unique to features in SVG 2.0.
First up is that the first absolute x/y position of the first glyph in a textPath element needs to be set to 0,0, because otherwise you get problems with multiple textPath elements in a single text element.
Secondly, SVG 2.0 allows for trailing spans of text that are outside the path, but not in a new text chunk, and the spec says to “hang these at the end of the path”. This mostly works, except, of course, with right-to-left text.
Chromium looks correct, but it’s kind of wrong in a sneaky way, because this is the text flow:
This is probably what is expected, but algorithm-wise that is the start of the path:
Double-checking, it seems that right-to-left text-on-path was just never really considered, and every implementation I have tried will not show right-to-left text unless the start offset is at 100%, which is on some level weird. So I have made an issue out of that as well.
Before I go on to discuss text-wrapping, there are two more side topics to discuss:
Baseline alignment was part of SVG 1.1, and it allows text to be adjusted based on metadata inside the font; in SVG 2.0 it is supported through CSS Inline Layout, where it is folded into vertical-align. So, technically speaking, if you get your text out of a CSS-based text renderer, like the algorithm suggests you do, you shouldn’t have to worry about this. We’re not doing that, so I had to implement it myself.
Initially I had wanted to make it part of Raqm, given it seemed something that everyone could use, but I quickly failed at doing so, as baseline alignment is applied tree-wise. Once I figured that out, it was relatively easy to implement, as SVG 1.1’s description is fairly straightforward:
Make a recursive function that first builds a table (or map) of the baseline metadata inside the first font of the given text span (with fallbacks as defined by CSS Inline Layout), then goes over each child node and calls this function again, passing them this table. Finally, use this table and the table the function received from the parent node to adjust the glyphs.
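A stripped-down Python sketch of that recursion, using plain dicts for the spans and baseline tables (in the real code the tables are read out of the font, and the shifts apply to actual glyph positions):

```python
def align_baselines(span, parent_table=None):
    """Recursive baseline-alignment pass. `span` is a dict with:
    'baseline_table': offsets per baseline name for this span's font,
    'baseline': the requested alignment (e.g. 'alphabetic', 'middle'),
    'children': child spans, 'glyphs': dicts with a mutable 'y'."""
    table = span["baseline_table"]
    for child in span["children"]:
        align_baselines(child, table)       # children see our table
    if parent_table is not None:
        # shift glyphs so this span's chosen baseline lines up with
        # the same baseline in the parent's table
        shift = parent_table[span["baseline"]] - table[span["baseline"]]
        for glyph in span["glyphs"]:
            glyph["y"] += shift
```

The root span has no parent table, so only descendants get shifted, which matches the tree-wise nature of the property that kept this out of Raqm.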
Fonts that carry baseline metadata are kind of rare though, so it’s best to have defaults to fall back to, and in particular to use HarfBuzz 4.0, which has such defaults built in. For the tests, I ended up using a test font from here.
Text decoration comprises underlines and strike-throughs. Even if this should be something handled by the suggested CSS-based text renderer, you will probably want to do your own implementation if you want good-looking underlines while text is on a path. I had some trouble with this, as no one had really figured it out before, but eventually I found something that made me happy:
You will need to make a recursive function that first calculates the child nodes, passing the textPath path down to them if there is one. Then you will want to calculate the text-decoration boxes by joining the bounding boxes of the glyphs in the span, in such a way that each text chunk inside the span gets its own decoration box. These are then used as the source for generating overlines, underlines, strike-throughs, etc.
Now, for text-on-path, if you’re smart and know how to offset Bézier curves, you should probably offset the part of the original path that corresponds to the width of each decoration box. I’m not that smart, so instead I create, per decoration box, a polyline which is as wide as the decoration box, but has a node every 4 × the underline stroke width (in the case of the “wavy” decoration-line style, every other node is also moved down by 4 × the underline width, creating a nice even zig-zag). Then I use the same method as for positioning text on a path to adjust each node, and finally I use QPainterPathStroker to turn the polyline into a proper shape. These shapes are then cached in the object that represents the text span, so that I can draw them before and after drawing the glyphs, as CSS3 text-decoration wants, with the correct span colour and everything.
You will want to do this before doing the text-on-path alignment, so the glyphs have not been adjusted yet.
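The polyline construction for the wavy case can be sketched like this in Python; node coordinates are in the decoration box’s local space, before being mapped onto the path:

```python
def wavy_underline_nodes(width, stroke_width):
    """Nodes of the zig-zag polyline described above: a node every
    4 * stroke_width along the decoration box, with every other node
    moved down by 4 * stroke_width. The real code then maps these
    nodes onto the text path and strokes the polyline into a shape."""
    step = 4 * stroke_width
    xs = []
    x = 0.0
    while x < width:
        xs.append(x)
        x += step
    xs.append(width)  # end exactly at the box's right edge
    return [(x, step if i % 2 else 0.0) for i, x in enumerate(xs)]
```

Each node is then displaced along the path the same way a glyph would be, which is why this has to happen before the text-on-path alignment moves the glyphs.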
You can use per-node offsets to stretch glyphs on a path as well, with the caveat that straight parallel sections would be best replaced by an offset Bézier curve section; this is especially necessary for Devanagari, as the connector line in fonts is usually a straight line. I’m not doing that yet, but I am keeping this around as a proof of concept.
So after all this, there’s the final important feature…
SVG 2.0’s biggest feature is the several text-wrapping options it introduces, some being simpler than others, and these are really necessary as doing paragraphs of text with simple chunk positioning is a headache. Inline-size is the simplest of these wrapping options, as it only says “wrap at this width”, with nothing really special going on there.
While my initial idea was to focus on SVG 1.1 features, I did want to get this in, as it requires a line-breaking library; by including that now, we won’t need to add new dependencies for a while (at least until we want to hyphenate, I guess). I later discovered that having libunibreak in was really useful anyway, because grapheme breaking helps a lot with font selection. Right now we don’t have support for things like wrapping inside a shape yet, as I tried to focus on inline-size.
There’s nothing exciting here; it’s a typical example of the greedy line-wrapping algorithm: get libunibreak to find wrapping points, count the logical characters until you find a wrapping point, and check whether the total advance exceeds the inline size. If not, add the “word” to the line (and adjust characters, etc.); otherwise, start a new line, move the word there, and adjust the previous line for line-height reasons.
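As a Python sketch, with words standing in for the segments between libunibreak’s break opportunities and widths in arbitrary units (real code also tracks glyph positions and line heights):

```python
def greedy_wrap(words, advances, inline_size):
    """Greedy line breaking: keep adding words to the current line
    until the accumulated advance would exceed inline_size, then
    start a new line with the word that did not fit. A word wider
    than inline_size still gets a line of its own."""
    lines, line, used = [], [], 0.0
    for word, advance in zip(words, advances):
        if line and used + advance > inline_size:
            lines.append(line)     # close the full line
            line, used = [], 0.0
        line.append(word)
        used += advance
    if line:
        lines.append(line)
    return lines
```

The greedy approach commits to each line as soon as it is full, which is exactly why the bidi reordering described next becomes awkward: the visual order within a line is only known after the line’s contents are fixed.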
As you may guess from the word “line”, there were bidirectional algorithm issues here too. Officially, the bidirectional algorithm needs to be applied after line breaking, but for us it happens before line breaking, because Raqm (via FriBidi) handles it. And indeed, after I had implemented all the CSS-Text-3 things like overflow-wrap and line-height, I noticed that my bidirectional text was wrong, and that was the point at which my morale broke.
For a week. I managed a sort of fix by switching from counting in the visual order to the logical order and then recalculating the advance, which works fine for implicit bidi reordering, but I need to investigate what happens when we introduce bidi controls, which are fine-tuning for the algorithm that shouldn’t be necessary most of the time, but do exist for special edge cases. In the worst case we’ll have to make wrapping happen in Raqm, but if we ever want hyphenation, we might need to do that anyway…
Other than that, inline-size is a bit peculiar in that it defines the wrapping box as starting at the anchor and ending at the width of the inline-size, with text-anchor (not text-align!) defining how the text is distributed. This makes sense from a specification standpoint, as it means a renderer can implement text-wrapping as simply as possible without immediately committing to CSS-Text-3, though it does mean any kind of justification isn’t possible yet.
Wrapping inside a shape does allow for this, but I am going to implement that as a separate patch, partially because this one was becoming too big, and partially because I had no clear plan on how to tackle it when I decided that no more new features were to be added to the current work. That said, I did go through all of CSS-Text-3 to see which features I could implement…
Text-transform is the feature that allows you to set text in all-caps or lowercase as a styling option, without affecting the text itself. Where the complexity of East Asian scripts is the sheer number of glyphs, and the complexity of joined scripts is their intricate shaping rules, the complexity of Latin, Greek and Cyrillic is that at some point clerks in Europe decided that Important Words were going to have their first letter written in the slower, formal strokes, while the rest of the letters would be written in the faster, less formal strokes. Every language using these “bicameral” scripts has, on top of that, its own rules about what constitutes Important Words; some have their own rules about which lowercase letter corresponds to which uppercase letter; and finally, there are also different rules on how VERY IMPORTANT TEXT should look.
Thankfully, most of this case mapping is thoroughly documented by Unicode, and if you have access to a library of Unicode functions it will have uppercase and lowercase functions at minimum. It does mean, however, that we need to have a language assigned to a text (line breaking and many font features require this too). This is possible for text now, but I still need to figure out how to take the language set on the Krita document and have it inherited as the default by text shapes living in vector layers. Another thing I am considering is whether we might offer spell checking as a way to encourage artists to select the appropriate language for the text, so it gets the best possible layout, though there is plenty more to do before I can get to that.
That means that in Krita’s case we handle text-transform uppercase and lowercase using QLocale’s toUpper and toLower functions. It’s going to be interesting to see how well this works, because these rely on ICU’s functions first and then fall back on operating system functions; we don’t build ICU for Krita, but there’s a non-zero chance the operating-system-specific functions use ICU themselves.
For capitalization (which CSS simplifies to every first character of a word), you can get pretty far by finding every grapheme that follows a CSS word separator character and then only doing uppercase on those. You’ll have to create some language specific exceptions, like for example for Dutch to check if a J follows an I, because “Ijsbeer” reads like some kind of geographic feature, while “IJsbeer” is a Polar Bear.
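A toy Python version of this, with a hypothetical Dutch tailoring baked in; a real implementation iterates graphemes rather than code points and uses proper locale-aware case mapping:

```python
def css_capitalize(text, lang="en", separators=" \t\n"):
    """Uppercase the first letter after every word separator, per the
    CSS simplification. With lang='nl', an initial 'ij' is uppercased
    as a unit, so 'ijsbeer' becomes 'IJsbeer', not 'Ijsbeer'.
    Toy sketch: iterates code points, not graphemes."""
    out, start_of_word = [], True
    i = 0
    while i < len(text):
        c = text[i]
        if start_of_word and c.isalpha():
            if lang == "nl" and text[i:i + 2].lower() == "ij":
                out.append("IJ")   # Dutch digraph capitalized as a unit
                i += 2
            else:
                out.append(c.upper())
                i += 1
            start_of_word = False
        else:
            out.append(c)
            start_of_word = c in separators
            i += 1
    return "".join(out)
```

The separator set and the digraph check are simplifications; CSS defines the full list of word-separator characters, and other languages need their own tailorings.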
Then there are two features for East Asian text layout: one to ensure the largest glyph variants are chosen in situations where the text is very small, and the other to ensure that text lays out nicely in vertical situations. Full-size kana is a case of mapping the characters as defined by the table inside the CSS-Text-3 specification. Full-width mapping in our case maps the relevant ASCII to the Unicode codepoints in the Halfwidth and Fullwidth Forms block. For the rest, I use QChar’s decompositionTag() to see if “narrow” is part of the decomposition, and if so, replace the character with its decomposed form. Under the hood this also uses Unicode functions.
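A sketch of that mapping in Python, using the standard library’s unicodedata where Krita uses QChar; the offset of 0xFEE0 between ASCII and the fullwidth forms, and U+3000 for the space, are stable Unicode facts:

```python
import unicodedata

def to_full_width(text):
    """Full-width mapping: visible ASCII maps onto the Halfwidth and
    Fullwidth Forms block at a fixed offset of 0xFEE0 (with U+0020
    becoming the ideographic space U+3000), and characters whose
    compatibility decomposition is tagged <narrow> are replaced by
    their decomposed (full-width) form."""
    out = []
    for c in text:
        cp = ord(c)
        if cp == 0x20:
            out.append("\u3000")           # ideographic space
        elif 0x21 <= cp <= 0x7E:
            out.append(chr(cp + 0xFEE0))   # ASCII -> fullwidth forms
        elif unicodedata.decomposition(c).startswith("<narrow>"):
            out.append(unicodedata.normalize("NFKC", c))
        else:
            out.append(c)
    return "".join(out)
```

So "A1" becomes "Ａ１", and half-width katakana like U+FF71 decomposes back to its full-width form U+30A2.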
For the tests, I had wanted to adapt the ones for the web-platform-tests, but it seems that one tests all possible letters that have an uppercase, which was a bit too much for me, so I went with a greatest-hits version that tests the Latin alphabet, some Turkish text for the I, and then adapted the tailoring tests.
The downside of not implementing the uppercase function myself is that there is one test, for Greek, that I don’t grasp. Namely, Greek uses diacritics, the tonos, in lowercase text, but when the text is set to all-caps these tonos are removed (or reordered?). The uppercase function used by QLocale does all that. Apparently, however, this is not the case for capitalization, and I am unsure how to check for it, as I am unsure what is expected of these tonos. So I kind of need help here.
Same thing with the kana and half-width tests, with the former testing some Katakana and Hiragana, and the latter a bit of Latin, some half-width Katakana, some Hangul and a bunch of punctuation.
Line-break, word-break and overflow-wrap
These are all ways to refine how line-wrapping is handled. Line-break is largely handled by libunibreak (it supports strict and normal; we may need to patch it if we want loose at some point). Word-break is missing break-word because I didn’t have the concentration for it, spending it instead on overflow-wrap, which controls what happens when words are too long to wrap.
Hyphenation technically belongs with them as well, but would require adjusting Raqm as shaping will need to be redone when hyphenation happens inside a ligature, in addition to using a library/hyphenation dictionary for the actual breaking. I’m delaying all that, as hyphenation is uncommon for text on images.
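The overflow-wrap distinction can be illustrated with a toy greedy wrapper (measuring in characters for simplicity; the real code works on shaped runs and proper break opportunities):

```python
def wrap(words, width, overflow_wrap="normal"):
    """Toy greedy line wrapper; width is measured in characters for brevity."""
    lines, line = [], ""
    for word in words:
        candidate = f"{line} {word}".strip()
        if len(candidate) <= width:
            line = candidate
            continue
        if line:
            lines.append(line)
        if len(word) <= width or overflow_wrap == "normal":
            line = word                 # under "normal" a too-long word just overflows
        else:                           # overflow-wrap: break-word
            while len(word) > width:
                lines.append(word[:width])
                word = word[width:]
            line = word
    if line:
        lines.append(line)
    return lines
```

Under `"normal"` an overlong word sticks out of the wrapping area; under `"break-word"` it gets chopped at the line edge.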
Line-height
This controls how far apart the lines are. The spec says that line-height: normal is up to the renderer, but the percentage and ratio-based values are pretty well defined.
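The well-defined cases reduce to a small resolver; a sketch (names are mine, and normal is approximated here by the font’s metrics):

```python
def used_line_height(value, font_size, metrics_height):
    """Sketch of resolving a CSS line-height value to a used length."""
    if value == "normal":
        return metrics_height            # renderer-defined; font metrics here
    if isinstance(value, (int, float)):
        return value * font_size         # unitless ratio
    if isinstance(value, str) and value.endswith("%"):
        return float(value[:-1]) / 100.0 * font_size
    raise ValueError(f"unhandled line-height value: {value!r}")
```

So at a 12px font size, both `1.5` and `"150%"` resolve to 18px (they differ in CSS only in how they inherit).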
I have dyslexia, and while different text-layout adjustments help different people with dyslexia, mine is best served by bigger spacing between the lines, so I was kinda pleased I managed to get this to work.
White-space
White-space controls what happens with multiple spaces in a sequence, and replaces the xml:space property used by SVG 1.1. The white-space property for SVG 2.0 needs to use the one from CSS-text-4, because it needs to provide a correct fallback for xml:space="preserve". I managed to get some of this implemented, but I am not testing it yet and will have to go back to double-check everything, because our parsing code strips duplicate white space by itself, and that needs to be undone carefully.
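The collapsing half of that behaviour, reduced to the two modes SVG needs (a sketch that ignores segment-break and tab subtleties):

```python
import re

def apply_white_space(text, collapse=True):
    """Sketch: collapse runs of white space to one space (white-space: normal),
    or keep them exactly as typed (the xml:space="preserve" style of behaviour)."""
    if not collapse:
        return text
    return re.sub(r"[ \t\r\n]+", " ", text).strip()
```

The point of doing this in the layout code, not the parser, is exactly the problem described above: once the parser has thrown the duplicates away, preserve mode has nothing left to preserve.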
Text-indent and tab-size
Text-indent indents the first line, or all lines but the first. I got both hanging and regular text-indent to work, but can’t test each-line, because the parsing code still strips hard breaks along with the other white space it removes. When I was doing the tests I had another bug with right-to-left text, but for a change it had nothing to do with bidi reordering. Rather, I had mixed up negative and positive values for right-to-left, meaning that when I intended to subtract the text-indent value, I was instead adding it.
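The which-line logic itself is small; a sketch of how hanging and each-line interact (names are mine, not Krita’s, and the value is applied from the line’s start edge, which is the right edge in right-to-left text):

```python
def line_indent(indent, is_first_line, after_forced_break=False,
                hanging=False, each_line=False):
    """Decide which lines of a paragraph receive the text-indent value."""
    indented = is_first_line or (each_line and after_forced_break)
    if hanging:                 # 'hanging' inverts the selection of lines
        indented = not indented
    return indent if indented else 0.0
```

So a regular indent touches only the first line, while hanging indents every line except it.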
We were already testing tabs, so I extended that old test to also test tab-size.
Hanging punctuation is when you let punctuation go outside of the wrapping area. This is particularly nice with justified text, and I suspect it may also give less messy word-wrapping if the wrapping algorithm is allowed to consider some punctuation as sitting outside the boundaries, which is why it made sense to me to implement it…
… And then I discovered that no one else has a full implementation of it, and that even Adobe InDesign doesn’t have it, making me very worried. Like, on a rational level I know it is likely because nobody wants to mess with their perfectly functional line wrapping code, but on some level I am worried it is actually because hanging punctuation is known to cause computers to eat puppies or something.
After the inline tests I spent some time adapting the CSS font-variant property tests from the web-platform-tests to SVG. These control whether OpenType features like ligatures and different character styles are applied during shaping. Krita previously only supported small caps, but with more direct access to HarfBuzz (OpenType support being one of its core features), it was easy to get this to work. It was mostly a case of copy-pasting the correct CSS properties to be parsed, then assembling a list of features for the given range and letting HarfBuzz handle the rest. Right now only the CSS-fonts-3 features are supported, as CSS-fonts-4 requires parsing of @rules, which Krita doesn’t do yet.
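The “assembling a list of features” step is essentially a table lookup; a sketch with a few of the CSS-fonts-3 mappings (the tuples mirror the tag, value, start, end that a shaper like HarfBuzz accepts; the real table is longer):

```python
# A few CSS font-variant values and the OpenType feature tags they toggle.
CSS_TO_OPENTYPE = {
    ("font-variant-ligatures", "no-common-ligatures"): [("liga", 0), ("clig", 0)],
    ("font-variant-ligatures", "discretionary-ligatures"): [("dlig", 1)],
    ("font-variant-caps", "small-caps"): [("smcp", 1)],
    ("font-variant-numeric", "tabular-nums"): [("tnum", 1)],
    ("font-variant-east-asian", "full-width"): [("fwid", 1)],
}

def features_for_range(declarations, start, end):
    """Turn CSS declarations on a text range into shaper feature tuples."""
    features = []
    for prop, value in declarations:
        for tag, on in CSS_TO_OPENTYPE.get((prop, value), []):
            features.append((tag, on, start, end))
    return features
```

Note that some values, like no-common-ligatures, turn several features off at once rather than turning one on.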
The final test I wrote was a render test for ColrV0 fonts, using some OpenType test fonts from here. Krita officially supports ColrV0 and CBDT, but I still need a font for the latter. I need to double-check what is up with SBIX, and after that, the two big color font types are SVG and ColrV1. I didn’t bother with these yet, as I hadn’t figured out how to cache them properly (which is necessary both because of color palettes, and to allow the text to be converted to paths). I think I may have an idea how to now, but there are more important things to be done first, and I suspect I still have a year or so before anyone starts to miss them. As with the font variants, @rules are not supported yet, so @font-palette-values isn’t either.
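ColrV0 itself is a simple format: each base glyph maps to a bottom-to-top stack of monochrome layers, each painted in one flat palette colour. A sketch (the draw_glyph callback stands in for the actual glyph rasterizer):

```python
FOREGROUND = 0xFFFF  # palette index 0xFFFF means "use the current text colour"

def render_colrv0(base_glyph, colr_layers, palette, text_colour, draw_glyph):
    """Draw a COLRv0 glyph as a stack of flat-colour layers, bottom to top.
    Glyphs without COLR layers fall back to the plain outline."""
    layers = colr_layers.get(base_glyph, [(base_glyph, FOREGROUND)])
    for glyph_id, palette_index in layers:
        colour = text_colour if palette_index == FOREGROUND else palette[palette_index]
        draw_glyph(glyph_id, colour)
```

The palette lookup is also why caching gets tricky: the same glyph can render differently under a different palette, so the palette has to be part of the cache key.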
I am currently trying to clean stuff up, with my colleagues helping me get everything to build and speeding things up. Font caching absolutely needs to be done properly (maybe I should make sure we can reuse it for the @rules later?). Text-on-path is a bit slow right now, and we may be able to speed that up. I am still kind of worried about rendering speed, but we may need to make benchmarks before I can communicate my worries more clearly.
After that, I want to finish up the text-wrapping work, so we have proper white-space and text-in-shape handling. Probably fix bugs here and there too.
Then… doing preparation work so we have an interface to change text programmatically without having to write and parse SVG XML strings. And doing research, because there are some common focus issues with on-canvas text tools that I’d really like to avoid.
After or in between that, I’d also like to get more improvements to East-Asian text layout, like text orientation, emphasis marks, and maybe even ruby (which may seem unnecessarily ambitious, given no other implementation does this, but ruby annotations are an accessibility feature that is really common in East-Asian comics).
And there are rendering things, like better .notdef boxes for missing glyphs like the ones Firefox has, or implementing paint-order for joined scripts, or the more advanced colour font formats. I don’t know yet in which order they will go…
Overall, text is complicated, sure, but it was also kinda fun to do? It involves a lot of research and requires a lot of thinking about edge cases, and I’ll freely admit I was very upset whenever I found another new way for the bidirectional algorithm to make life difficult (though it seems my code became simpler every time I fixed a right-to-left bug). At the same time, I have seen a lot of programmers try to do what I did and give up at the research stage, panicking at the size and fiddliness of it all (further fueled by articles like these making the rounds; said article is fully correct, by the way. We thankfully don’t have to worry about some of those elements, Krita being a painting program that doesn’t need to lay out and render thousands of words like a web browser). I am still not sure why I was able to do all this, but if you are in my situation, my tips would be:
- Try to figure out what has priority for your usecase. Krita as a painting program doesn’t need to do subpixel anti-aliasing, because more or less no other graphics program does that, which in turn is because type-setting of text in graphics programs is usually at a larger size/resolution. So I am not going to do that.
- Limit your scope. There’s a whole bunch of things in the parser and elsewhere that I am flat-out ignoring for this particular project. If I had to implement an on-canvas tool in this same patch, or some of the more obscure ways of handling a CSS value, I too would go nuts.
- Do try to test multiple scripts, even if you can’t read them.
- Likely, someone out there will have tried to solve what you’re trying to solve, and if not, you do not need to be the first person to solve it. This is largely why I spoke against any suggestion to shift away from SVG for text, even though most painting programs don’t mix text layout and vectors: CSS is a very mature standard, and there is tons of discussion by people of all sorts of backgrounds on how to implement something, even if it also has stuff that only makes sense in an historical context.
- Examples of unsolved problems include, but are not limited to:
- Using ligatures and kashida to improve the quality of justification. There are attempts out there, but the quality is not great, and no one knows what the best possible solution would look like (hell, as far as I can tell there’s not even consensus on which part of the stack it needs to happen in).
- Figuring out break points for languages like Thai that have no word separators for the computer to recognize. There are some rumours about using dictionaries to recognize the words, but that is too little information: what kind of dictionaries? And do we have to do some processing on those dictionaries as well, like a spelling checker would???
- Apparently letter-grouping for Devanagari uses different groupings than Unicode graphemes, but there’s no documentation on what that entails.
- Getting the glyphs of joined scripts to merge so stroking doesn’t give a wrong outline. The big browsers may have their own problems here, but in Krita’s case we rely on Qt’s path union operation, which is too crude for text glyphs, which is why I am going to implement paint-order so artists can specify outlines to be drawn behind the main text, though that will still not solve everything.
- How are color font glyphs supposed to be stroked, if at all?
And like, it’s very hard, no doubt about that, but it is not impossible.
P.S. Sorry to the Inkscape devs for the bugs I found but did not report: I can’t seem to get my gitlab.com login to work.