Tutorial

Colorspacious is a Python library that lets you easily convert between colorspaces like sRGB, XYZ, CIEL*a*b*, CIECAM02, CAM02-UCS, etc. If you have no idea what these are or what each is good for, and reading this list makes you feel like you’re swimming in alphabet soup, then this video provides a basic orientation and some examples. (The overview of color theory starts at ~3:35.)

Now let’s see some cut-and-pasteable examples of what colorspacious is good for. We’ll start by loading up some utility modules for numerics and plotting that we’ll use later:

In [1]: import numpy as np

In [2]: import matplotlib

In [3]: import matplotlib.pyplot as plt

Now we need to import colorspacious. The main function we’ll use is cspace_convert():

In [4]: from colorspacious import cspace_convert

This allows us to convert between many color spaces. For example, suppose we want to know how the color with coordinates (128, 128, 128) in sRGB space (represented with values between 0 and 255) maps to XYZ space (represented with values between 0 and 100):

In [5]: cspace_convert([128, 128, 128], "sRGB255", "XYZ100")
Out[5]: array([ 20.51692894,  21.58512253,  23.506738  ])

Colorspacious knows about a wide variety of colorspaces, and you can convert between any of them by naming them in a call to cspace_convert().
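For instance, sticking with the same grey pixel, you can redirect or invert the conversion just by changing the space names (a quick sketch; the round trip recovers the original values up to floating point error):

grey_XYZ = cspace_convert([128, 128, 128], "sRGB255", "XYZ100")
cspace_convert(grey_XYZ, "XYZ100", "sRGB255")    # ~= [128, 128, 128] again
# The same call also handles rescaling between the 0-255 and 0-1 conventions:
cspace_convert([128, 128, 128], "sRGB255", "sRGB1")    # = [0.50196..., 0.50196..., 0.50196...]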

We can also conveniently work on whole images. Let’s load one up as an example.

# if you want this file, try:
#    hopper_sRGB = plt.imread(matplotlib.cbook.get_sample_data("grace_hopper.png"))
In [6]: hopper_sRGB = plt.imread("grace_hopper.png")

What have we got here?

In [7]: hopper_sRGB.shape
Out[7]: (600, 512, 3)

In [8]: hopper_sRGB[:2, :2, :]
Out[8]: 
array([[[ 0.08235294,  0.09411765,  0.3019608 ],
        [ 0.10588235,  0.11764706,  0.33333334]],

       [[ 0.10196079,  0.11372549,  0.32156864],
        [ 0.09803922,  0.10980392,  0.32549021]]], dtype=float32)

In [9]: plt.imshow(hopper_sRGB)
Out[9]: <matplotlib.image.AxesImage at 0x7fb917f4c190>
[image: hopper_sRGB.png]

It looks like this image has been loaded as a 3-dimensional NumPy array, where the last dimension contains the R, G, and B values (in that order).

We can pass such an array directly to cspace_convert(). For example, we can convert the whole image to XYZ space. This time we’ll specify that our input space is "sRGB1" instead of "sRGB255", because the values appear to be encoded on a scale ranging from 0-1:

In [10]: hopper_XYZ = cspace_convert(hopper_sRGB, "sRGB1", "XYZ100")

In [11]: hopper_XYZ.shape
Out[11]: (600, 512, 3)

In [12]: hopper_XYZ[:2, :2, :]
Out[12]: 
array([[[ 1.97537605,  1.34848558,  7.17731319],
        [ 2.55586737,  1.81738616,  8.81036579]],

       [[ 2.38827051,  1.70749148,  8.18630399],
        [ 2.37740322,  1.66167069,  8.37900306]]])

Perceptual transformations

RGB space is a useful way to store and transmit images, but because the RGB values are basically just a raw record of what voltages should be applied to some phosphors in a monitor, it’s often difficult to predict how a given change in RGB values will affect what an image looks like to a person.

Suppose we want to desaturate an image – that is, we want to replace each color by a new color that has the same lightness (so white stays white, black stays black, etc.), and the same hue (so each shade of blue stays the same shade of blue, rather than turning into purple or red), but the “chroma” is reduced (so colors are more muted). This is very difficult to do when working in RGB space. So let’s take our colors and re-represent them in terms of lightness, chroma, and hue, using the state-of-the-art CIECAM02 model.

The three axes in this space are conventionally called “J” (for lightness), “C” (for chroma), and “h” (for hue). (The CIECAM02 standard also defines a whole set of other axes with subtly different meanings – see Wikipedia for details – but for now we’ll stick to these three.) To desaturate our image, we’re going to switch from sRGB space to JCh space, reduce all the “C” values by a factor of 2, and then convert back to sRGB to look at the result. (Note that the CIECAM02 model in general requires the specification of a number of viewing condition parameters; here we accept the defaults, which happen to match the viewing conditions specified in the sRGB standard.) All this takes more words to describe than it does to implement:

In [13]: hopper_desat_JCh = cspace_convert(hopper_sRGB, "sRGB1", "JCh")

# This is in "JCh" space, and we want to modify the "C" channel, so
# that's channel 1.
In [14]: hopper_desat_JCh[..., 1] /= 2

In [15]: hopper_desat_sRGB = cspace_convert(hopper_desat_JCh, "JCh", "sRGB1")

Let’s see what this looks like. First we’ll define a little utility function to plot several images together:

In [16]: def compare_hoppers(*new):
   ....:     image_width = 2.0  # inches
   ....:     total_width = (1 + len(new)) * image_width
   ....:     height = image_width / hopper_sRGB.shape[1] * hopper_sRGB.shape[0]
   ....:     fig = plt.figure(figsize=(total_width, height))
   ....:     ax = fig.add_axes((0, 0, 1, 1))
   ....:     ax.imshow(np.column_stack((hopper_sRGB,) + new))
   ....: 

And now we’ll use it to look at the desaturated image we computed above:

In [17]: compare_hoppers(hopper_desat_sRGB)
[image: hopper_desaturated.png]

The original version is on the left, with our modified version on the right. Notice how in the version with reduced chroma, the colors are more muted, but not entirely gone.

Except, there is one oddity – notice the small cyan patches on her collar and hat. These arise because floating point rounding error produces a few pixels with sRGB values slightly greater than 1, which matplotlib then renders in a strange way:

In [18]: hopper_desat_sRGB[np.any(hopper_desat_sRGB > 1, axis=-1), :]
Out[18]: 
array([[ 1.00506547,  0.99532516,  0.96421717],
       [ 1.00104689,  0.98787282,  0.94567164],
       [ 1.00080521,  0.98563065,  0.96004546],
       ..., 
       [ 1.00071535,  0.98363444,  0.97633881],
       [ 1.00445847,  0.99092599,  0.9908775 ],
       [ 1.00355835,  0.9900645 ,  0.97911882]])

Colorspacious doesn’t do anything to clip such values, since they can sometimes be useful for further processing – e.g., when chaining multiple conversions together, you don’t want to clip between intermediate steps, because this might introduce errors. You might also want to handle them in some clever way (there’s a whole literature on how to solve such problems). But in this case, where the values are only barely over 1, simply clipping them to 1 is probably the best approach, and you can easily do this yourself with NumPy’s standard np.clip function:

In [19]: compare_hoppers(np.clip(hopper_desat_sRGB, 0, 1))
[image: hopper_desat_clipped.png]

No more cyan splotches!

Once we know how to represent an image in terms of lightness/chroma/hue, there are all kinds of things we can do. Let’s try reducing the chroma all the way to zero, for a highly accurate greyscale conversion:

In [20]: hopper_greyscale_JCh = cspace_convert(hopper_sRGB, "sRGB1", "JCh")

In [21]: hopper_greyscale_JCh[..., 1] = 0

In [22]: hopper_greyscale_sRGB = cspace_convert(hopper_greyscale_JCh, "JCh", "sRGB1")

In [23]: compare_hoppers(np.clip(hopper_greyscale_sRGB, 0, 1))
[image: hopper_greyscale_unclipped.png]

To explore further, try applying other transformations. For example, you could darken the image by halving the lightness channel “J” (image_JCh[..., 0] /= 2), or mirror each hue by negating the hue angle “h” (image_JCh[..., 2] *= -1), as sketched below.
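Here is one way those two experiments could look (a sketch only; the variable names are invented for illustration, and as before we clip before display):

hopper_JCh = cspace_convert(hopper_sRGB, "sRGB1", "JCh")

hopper_dark_JCh = hopper_JCh.copy()
hopper_dark_JCh[..., 0] /= 2            # halve the lightness "J" to darken
hopper_dark_sRGB = cspace_convert(hopper_dark_JCh, "JCh", "sRGB1")

hopper_mirrored_JCh = hopper_JCh.copy()
hopper_mirrored_JCh[..., 2] *= -1       # negate the hue angle "h"
hopper_mirrored_sRGB = cspace_convert(hopper_mirrored_JCh, "JCh", "sRGB1")

compare_hoppers(np.clip(hopper_dark_sRGB, 0, 1),
                np.clip(hopper_mirrored_sRGB, 0, 1))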

Simulating colorblindness

Another useful thing we can do by converting colorspaces is to simulate various sorts of color vision deficiency, a.k.a. “colorblindness”. For example, deuteranomaly is the name for the most common form of red-green colorblindness, and affects ~5% of white men to varying degrees. Here’s a simulation of what this image looks like to someone with a moderate degree of this condition. Notice the use of the extended syntax for describing color spaces that require extra parameters beyond just the name:

In [24]: cvd_space = {"name": "sRGB1+CVD",
   ....:              "cvd_type": "deuteranomaly",
   ....:              "severity": 50}
   ....: 

In [25]: hopper_deuteranomaly_sRGB = cspace_convert(hopper_sRGB, cvd_space, "sRGB1")

In [26]: compare_hoppers(np.clip(hopper_deuteranomaly_sRGB, 0, 1))
[image: hopper_deuteranomaly.png]

Notice that contrary to what you might expect, we simulate CVD by asking cspace_convert() to convert from a special CVD space to the standard sRGB space. The way to think about this is that we have a set of RGB values that will be viewed under certain conditions, i.e. displayed on an sRGB monitor and viewed by someone with CVD. And we want to find a new set of RGB values that will look the same under a different set of viewing conditions, i.e., displayed on an sRGB monitor and viewed by someone with normal color vision. So we are starting in the sRGB1+CVD space, and converting to the normal sRGB1 space.

This way of doing things is especially handy when you want to perform other operations. For example, we might want to use the JCh space described above to ask “what (approximate) lightness/chroma/hue would someone with this form of CVD perceive when looking at a monitor displaying a certain RGB value?”. For example, taking a “pure red” color:

In [27]: cspace_convert([1, 0, 0], cvd_space, "JCh")
Out[27]: array([ 47.72696721,  62.75654782,  71.41502844])

If we compare this to someone with normal color vision, we see that the person with CVD will perceive about the same lightness, but desaturated and with a shifted hue:

In [28]: cspace_convert([1, 0, 0], "sRGB1", "JCh")
Out[28]: array([  46.9250674,  111.3069358,   32.1526953])

The model of CVD we use allows a “severity” scaling factor, specified as a number between 0 and 100. A severity of 100 corresponds to complete dichromacy:

In [29]: cvd_space = {"name": "sRGB1+CVD",
   ....:              "cvd_type": "deuteranomaly",
   ....:              "severity": 100}
   ....: 

In [30]: hopper_deuteranopia_sRGB = cspace_convert(hopper_sRGB, cvd_space, "sRGB1")

In [31]: compare_hoppers(np.clip(hopper_deuteranomaly_sRGB, 0, 1),
   ....:                 np.clip(hopper_deuteranopia_sRGB, 0, 1))
   ....: 
[image: hopper_deuteranopia.png]

Here the leftmost and center images are repeats of ones we’ve seen before: the leftmost image is the original, and the center image is the moderate deuteranomaly simulation that we computed above. The image on the right is the new image illustrating the more severe degree of red-green colorblindness – notice how the red in the flag and her medals is muted in the middle image, but in the image on the right it’s disappeared completely.

You can also set the "cvd_type" to "protanomaly" to simulate the other common form of red-green colorblindness, or to "tritanomaly" to simulate an extremely rare form of blue-yellow colorblindness. Here’s what moderate and severe protanomaly look like when simulated by colorspacious:

In [32]: cvd_space = {"name": "sRGB1+CVD",
   ....:              "cvd_type": "protanomaly",
   ....:              "severity": 50}
   ....: 

In [33]: hopper_protanomaly_sRGB = cspace_convert(hopper_sRGB, cvd_space, "sRGB1")

In [34]: cvd_space = {"name": "sRGB1+CVD",
   ....:              "cvd_type": "protanomaly",
   ....:              "severity": 100}
   ....: 

In [35]: hopper_protanopia_sRGB = cspace_convert(hopper_sRGB, cvd_space, "sRGB1")

In [36]: compare_hoppers(np.clip(hopper_protanomaly_sRGB, 0, 1),
   ....:                 np.clip(hopper_protanopia_sRGB, 0, 1))
   ....: 
[image: hopper_protanopia.png]

Because deuteranomaly and protanomaly are both types of red-green colorblindness, this is similar (but not quite identical) to the image we saw above.
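The "tritanomaly" simulation mentioned above follows exactly the same pattern; here is a minimal sketch (output image not shown):

cvd_space = {"name": "sRGB1+CVD",
             "cvd_type": "tritanomaly",
             "severity": 100}
hopper_tritanopia_sRGB = cspace_convert(hopper_sRGB, cvd_space, "sRGB1")
compare_hoppers(np.clip(hopper_tritanopia_sRGB, 0, 1))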

Color similarity

Suppose we have two colors, and we want to know how different they will look to a person – often known as computing the “delta E” between them. One way to do this is to map both colors into a “perceptually uniform” colorspace, and then compute the Euclidean distance. Colorspacious provides a convenience function to do just this:

In [37]: from colorspacious import deltaE

In [38]: deltaE([1, 0.5, 0.5], [0.5, 1, 0.5])
Out[38]: 55.337158728500363

In [39]: deltaE([255, 127, 127], [127, 255, 127], input_space="sRGB255")
Out[39]: 55.490775265826485

By default, these computations are done using the CAM02-UCS perceptually uniform space (see [LCL06] for details), but if you want to use the (generally inferior) CIEL*a*b*, then just say the word:

In [40]: deltaE([1, 0.5, 0.5], [0.5, 1, 0.5], uniform_space="CIELab")
Out[40]: 114.05544189591937
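
As described above, deltaE() is just computing Euclidean distance in the chosen uniform space, so you can also do the equivalent computation by hand. Here is a rough sketch; it should agree with the first deltaE() result above, up to floating point error:

color1_ucs = cspace_convert([1, 0.5, 0.5], "sRGB1", "CAM02-UCS")
color2_ucs = cspace_convert([0.5, 1, 0.5], "sRGB1", "CAM02-UCS")
np.linalg.norm(color1_ucs - color2_ucs)   # ~55.34, matching deltaE above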