Pre-GSoC work: Researching color deficiencies.

So, while the 2016 Google Summer of Code hasn’t officially started yet, and Krita’s master is in feature freeze till the release at the end of the month, it’s a good moment to start preparing.

My area of specialisation within Krita is Colour Management, and my project focuses on softproofing. This is an area whose difficulty lies not in mastering intricate C++, but in research. In other words, figuring out what is actually true.

It’s not quite certain why there is so much misinformation out there; a simple suggestion would be that a lot of colour management UI is just too byzantine to understand. On the other hand, no single colour theory in Western society has survived longer than a century before a new one showed up. So perhaps there’s just something about colour, and especially about how relative human vision is, that makes it difficult to capture in a single coherent theory, and most artists just develop a sense for colour rather than a cohesive method.

My focus is on softproofing: a sort of on-the-fly filter that emulates how an image will look when printed (and, more importantly, which details could get lost). I already researched this back in February; LCMS’s API allows for it easily, and I now mostly need to sit down with Boudewijn and stare at Krita’s architecture to decide what is possible before settling on a UI and implementation.

However, in a discussion on IRC it was mentioned that it’d be nice if we could emulate not just CMYK profiles, but also things like colour blindness.

Now, aside from LCMS’s display transform, we also do a lot of colour management via OCIO. For example, you can preview an image’s relative luminosity in a separate view as you work on it.

This is quite useful for artists, as it serves as a diagnostic tool. Ideally, I’d like to see softproofing done in a similar, per-view manner, so that the artist can tweak the original and see the changes in a softproofed view on the fly. However, the LCMS API’s softproofing is a single-function-for-everything deal: you give it an input (image) profile, the profile to softproof to, an output (screen) profile, and optionally a warning colour.

Typically, we’d just replace our regular display transform with the softproofing one, but then we can’t have it per view. So what we might do instead is pass the same profile as both input and output, and keep the display transform separate. That is theoretically slower, but if it means we can have softproofing per view, it’d be more user-friendly.
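To make that idea concrete, here’s a minimal sketch of the per-view approach using LCMS2’s proofing API. The CMYK profile filename and the magenta alarm colour are placeholders, and using sRGB as the image profile is purely for brevity:

```cpp
// Sketch: a proofing transform that goes from the image's space back to the
// image's space, only *simulating* the printer profile in between. Krita's
// regular per-view display transform can then run on the result.
#include <lcms2.h>

cmsHTRANSFORM createPerViewProofingTransform()
{
    cmsHPROFILE imageProfile   = cmsCreate_sRGBProfile();                  // stand-in for the image's profile
    cmsHPROFILE printerProfile = cmsOpenProfileFromFile("cmyk.icc", "r");  // hypothetical profile to softproof

    // Out-of-gamut pixels get painted with the alarm colour when
    // cmsFLAGS_GAMUTCHECK is set; magenta is just an example choice.
    cmsUInt16Number alarm[cmsMAXCHANNELS] = { 0xFFFF, 0x0000, 0xFFFF };
    cmsSetAlarmCodes(alarm);

    cmsHTRANSFORM proof = cmsCreateProofingTransform(
        imageProfile, TYPE_RGBA_8,    // input: the image...
        imageProfile, TYPE_RGBA_8,    // ...and back to the very same space,
        printerProfile,               // while simulating this device,
        INTENT_PERCEPTUAL,            // intent for the transform
        INTENT_RELATIVE_COLORIMETRIC, // intent for the proof
        cmsFLAGS_SOFTPROOFING | cmsFLAGS_GAMUTCHECK);

    cmsCloseProfile(imageProfile);
    cmsCloseProfile(printerProfile);
    return proof; // apply per tile with cmsDoTransform()
}
```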

For the colour blindness simulation, similar considerations apply. When we think of adding this, the first question is: why? The answer is the same: it’s a diagnostic tool. As a designer and/or production artist you might have the luxury of full colour vision, yet at the same time you want to make sure your designs are still functional for people with any form of colour blindness. And while we can try to imagine that for some people red and green look exactly alike, it’s far more helpful to simulate it and do precision work for such vision. So you end up with a tool that is, in a way, there to increase empathy.

With that in mind, the requirements are:

  • It has to be non-destructive. The image needs to be shown as if seen by a colour blind person, but the underlying data must not actually be transformed and saved that way.
  • It does not have to be 100% truthful, as it is there to create empathy and to diagnose weaknesses in a design.
  • There should be a variety of them, as there isn’t a single colour blindness, but a number of differently behaving deficiencies.

With that in mind, my first instinct is to make use of OCIO looks. These are aesthetic colour transforms in the form of LUTs that can be added into the regular colour management chain (a rough sketch of how a look could be hooked up follows the list below). The advantages of this are:

  • We don’t have to do extra architectural work. No need to make LCMS do anything it wasn’t meant to do, for example.
  • We get Looks support out of it, which was a missing feature anyway.
  • Looks are an aesthetic transform applied on top of a regular transform, which makes the simulation colourspace-independent.
  • With Looks support, people can start using looks from other configs.
  • When we make LUTs, these can then be used by others.
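For illustration, here is a rough sketch of how a colour-blindness look could be slotted into the existing display chain through the OCIO 1.x API. The look name “deuteranopia” is hypothetical; it would point at a LUT declared in the config we’d ship:

```cpp
// Sketch, assuming OpenColorIO 1.x: build the usual display transform for a
// view, but with a look override so the deficiency LUT is applied on top.
#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

OCIO::ConstProcessorRcPtr makeDeficiencyPreview(OCIO::ConstConfigRcPtr config)
{
    OCIO::DisplayTransformRcPtr t = OCIO::DisplayTransform::Create();
    t->setInputColorSpaceName(OCIO::ROLE_SCENE_LINEAR);  // the image's working space
    t->setDisplay(config->getDefaultDisplay());
    t->setView(config->getDefaultView(config->getDefaultDisplay()));
    t->setLooksOverride("deuteranopia");   // hypothetical look from our config
    t->setLooksOverrideEnabled(true);
    return config->getProcessor(t);        // apply per view, e.g. via apply()
}
```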

The downsides are:

  • We’ll have to support looks.
  • We’ll have to ship a config (which we weren’t doing yet) and communicate to people how to use it.
  • We are tied to doing LUTs.

That last disadvantage is a peculiar one, and it directly touches upon how we decide to simulate colour blindness. So this goes back into research, with the question: what would be the right type of transforms?

There are several existing implementations.

GTK programs like GIMP and Inkscape (I think Inkscape has it…?) base their colour blindness filter on the Viénot, Brettel and Mollon paper from 1999.

Here, the requirement is to first convert RGB to LMS, then modify the values in the LMS model to simulate the chosen colour blindness, and then convert back to RGB. Furthermore, a lot of decisions seem to have been made on the assumption that the input RGB is regular consumer-screen sRGB.
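As far as I can tell from the GIMP display filter, the pipeline looks roughly like the sketch below. The matrices are the protanopia ones circulating in GIMP’s implementation; treating the linearisation as the standard sRGB curve is my own simplification:

```cpp
// Sketch: linearise sRGB, go to LMS, reconstruct the missing cone response,
// and come back. Constants are the ones used in GIMP's colour-deficiency
// display filter (protanopia case).
#include <algorithm>
#include <cmath>

static float srgbToLinear(float v)
{
    return v <= 0.04045f ? v / 12.92f : std::pow((v + 0.055f) / 1.055f, 2.4f);
}

static float linearToSrgb(float v)
{
    v = std::min(1.0f, std::max(0.0f, v)); // clamp out-of-gamut results
    return v <= 0.0031308f ? v * 12.92f : 1.055f * std::pow(v, 1.0f / 2.4f) - 0.055f;
}

void simulateProtanopia(float rgb[3])
{
    float r = srgbToLinear(rgb[0]), g = srgbToLinear(rgb[1]), b = srgbToLinear(rgb[2]);

    // RGB -> LMS
    float L = 17.8824f   * r + 43.5161f  * g + 4.11935f * b;
    float M = 3.45565f   * r + 27.1554f  * g + 3.86714f * b;
    float S = 0.0299566f * r + 0.184309f * g + 1.46709f * b;

    // Protanope: the L response is reconstructed from M and S
    float Lp = 2.02344f * M - 2.52581f * S;

    // LMS -> RGB (inverse of the matrix above)
    float r2 =  0.0809444479f   * Lp - 0.130504409f   * M + 0.116721066f * S;
    float g2 = -0.0102485335f   * Lp + 0.0540193266f  * M - 0.113614708f * S;
    float b2 = -0.000365296938f * Lp - 0.00412161469f * M + 0.693511405f * S;

    rgb[0] = linearToSrgb(r2); rgb[1] = linearToSrgb(g2); rgb[2] = linearToSrgb(b2);
}
```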

So it’s highly questionable whether we can get a single LUT out of the observations in this paper, and whether the results would be reasonably colourspace-agnostic.

The second popular method is the one used in all the JavaScript implementations and various other open source implementations, and they all seem to be based on Matthew Wickline’s formulas… who doesn’t mention where he got his data from.

Regardless, the result is a set of simple RGB matrices, which could easily be converted to a LUT.
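Baking such a matrix into a LUT really is mechanical: sample the unit RGB cube and push every grid point through the matrix. A sketch; the 17³ grid size and red-fastest output order are my own choices, not anything Krita or OCIO prescribes:

```cpp
// Sketch: turn a 3x3 RGB matrix into a 3D LUT by evaluating it on a grid.
#include <algorithm>
#include <vector>

std::vector<float> bakeMatrixToLut(const float m[3][3], int size = 17)
{
    std::vector<float> lut;
    lut.reserve(size * size * size * 3);
    for (int b = 0; b < size; ++b)
        for (int g = 0; g < size; ++g)
            for (int r = 0; r < size; ++r) {
                const float in[3] = { r / float(size - 1),
                                      g / float(size - 1),
                                      b / float(size - 1) };
                for (int i = 0; i < 3; ++i) {
                    float v = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
                    lut.push_back(std::min(1.0f, std::max(0.0f, v))); // clamp to [0,1]
                }
            }
    return lut; // entries ordered red-fastest, as in e.g. the .cube format
}
```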

Finally, there’s this little plugin on the GIMP registry, which is based on the Machado, Oliveira and Fernandes paper of 2009. The paper again involves converting to LMS, but the plugin has managed to simplify this to a set of RGB matrices. The weakness here, oddly enough, is how sophisticated it is: it offers a sliding scale of colour blindness strength, which doesn’t map neatly onto a single fixed LUT. Furthermore, the license of the plugin is something I need to stare hard at.
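If I understand the approach correctly, it tabulates one matrix per deficiency per 0.1 step of severity, and intermediate strengths are linearly interpolated between the two nearest tabulated matrices. A sketch of just that interpolation; the actual coefficients would have to come from the paper’s tables and aren’t reproduced here:

```cpp
// Sketch: blend between the two tabulated 3x3 matrices that bracket the
// requested severity. `table` holds the 11 matrices for severity 0.0..1.0.
void interpolateSeverity(const float table[11][3][3], float severity, float out[3][3])
{
    severity = severity < 0.f ? 0.f : (severity > 1.f ? 1.f : severity);
    const int   lo = int(severity * 10.f);   // lower tabulated step
    const int   hi = lo < 10 ? lo + 1 : 10;  // upper tabulated step
    const float t  = severity * 10.f - lo;   // blend factor between the two
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = (1.f - t) * table[lo][i][j] + t * table[hi][i][j];
}
```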

Overall, I suspect that I’ll need to do proper testing of each method, and maybe search a bit further.
