Reference

Conversion functions

colorspacious.cspace_convert(arr, start, end)

Converts the colors in arr from colorspace start to colorspace end.

Parameters:
  • arr – An array-like of colors.
  • start, end – Any supported colorspace specifiers. See Specifying colorspaces for details.
colorspacious.cspace_converter(start, end)

Returns a function for converting from colorspace start to colorspace end.

E.g., these are equivalent:

out = cspace_convert(arr, start, end)
start_to_end_fn = cspace_converter(start, end)
out = start_to_end_fn(arr)

If you are doing a large number of conversions between the same pair of spaces, then calling this function once and then using the returned function repeatedly will be slightly more efficient than calling cspace_convert() repeatedly. But I wouldn’t bother unless you know that this is a bottleneck for you, or it simplifies your code.
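
For example, a quick sketch (the input color here is arbitrary; colorspace strings like "sRGB1" are explained in Specifying colorspaces below):

import numpy as np
from colorspacious import cspace_convert, cspace_converter

# Convert one sRGB color (0-1 scale) to XYZ100:
xyz = cspace_convert([0.2, 0.4, 0.6], "sRGB1", "XYZ100")

# Build the converter once and reuse it for many inputs:
srgb1_to_xyz100 = cspace_converter("sRGB1", "XYZ100")
many_xyz = srgb1_to_xyz100(np.random.rand(100, 3))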

Specifying colorspaces

Colorspacious knows about a wide variety of colorspaces, some of which take additional parameters, and it can convert freely between any of them. Here’s an image showing all the known spaces, and the conversion paths used. (This graph is generated directly from the source code: when you request a conversion between two spaces, cspace_convert() automatically traverses this graph to find the best conversion path. This makes it very easy to add support for new colorspaces.)

[Image: colorspacious-graph.svg – all supported colorspaces (bold) with their parameters (italic) and the conversion paths between them]

The most general and primitive way to specify a colorspace is via a dict, e.g., all the following are valid arguments that can be passed to cspace_convert():

{"name": "XYZ100"}
{"name": "CIELab", "XYZ100_w": "D65"}
{"name": "CIELab", "XYZ100_w": [95.047, 100, 108.883]}

These dictionaries always have a "name" key specifying the colorspace. Every bold-faced string in the above image is a recognized colorspace name. Some spaces take additional parameters beyond the name, such as the CIELab whitepoint above. These additional parameters are indicated by the italicized strings in the image above.
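
For example, a dict like the ones above can be passed directly as the start or end argument of cspace_convert() (the tristimulus value here is arbitrary):

from colorspacious import cspace_convert

# Convert an XYZ100 tristimulus value to CIELab with an explicit whitepoint:
lab = cspace_convert([19.0, 20.0, 22.0],
                     {"name": "XYZ100"},
                     {"name": "CIELab", "XYZ100_w": "D65"})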

There are also several shorthands accepted, to let you avoid writing out long dicts in most cases. In particular:

  • Any CIECAM02Space object myspace is expanded to:

    {"name": "CIECAM02",
     "ciecam02_space": myspace}
    
  • Any LuoEtAl2006UniformSpace object myspace is expanded to:

    {"name": "J'a'b'",
     "ciecam02_space": CIECAM02.sRGB,
     "luoetal2006_space": myspace}
    
  • The string "CIELab" expands to: {"name": "CIELab", "XYZ100_w": "D65"}

  • The string "CIELCh" expands to: {"name": "CIELCh", "XYZ100_w": "D65"}

  • The string "CIECAM02" expands to CIECAM02Space.sRGB, which in turn expands to {"name": "CIECAM02", "ciecam02_space": CIECAM02Space.sRGB}.

  • The strings "CAM02-UCS", "CAM02-SCD", "CAM02-LCD" expand to the global instance objects CAM02UCS, CAM02SCD, CAM02LCD, which in turn expand to "J'a'b'" dicts as described above.

  • Any string consisting only of characters from the set “JChQMsH” is expanded to:

    {"name": "CIECAM02-subset",
     "axes": <the string provided>
     "ciecam02_space": CIECAM02.sRGB}
    

    This allows you to directly use common shorthands like "JCh" or "JMh" as first-class colorspaces.

Any other string "foo" expands to {"name": "foo"}. So for any space that doesn’t take parameters, you can simply say "sRGB1" or "XYZ100" or whatever and ignore all these complications.
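
So in practice, most conversions can be written entirely with string shorthands, e.g. (arbitrary input color):

from colorspacious import cspace_convert

# These shorthands stand in for the full dict expansions described above:
xyz = cspace_convert([0.5, 0.5, 0.5], "sRGB1", "XYZ100")
jch = cspace_convert([0.5, 0.5, 0.5], "sRGB1", "JCh")
jab = cspace_convert([0.5, 0.5, 0.5], "sRGB1", "CAM02-UCS")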

And, as one final trick, any alias can also be used as the "name" field in a colorspace dict, in which case its normal expansion is used to provide overrideable defaults for parameters. For example:

# You write:
{"name": "CAM02-UCS",
 "ciecam02_space": my_ciecam02_space}

# Colorspacious expands this to:
{"name": "J'a'b'",
 "ciecam02_space": my_ciecam02_space,
 "luoetal2006_space": CAM02UCS}

Or:

# You write:
{"name": "JCh",
 "ciecam02_space": my_ciecam02_space}

# Colorspacious expands this to:
{"name": "CIECAM02-subset",
 "axes": "JCh",
 "ciecam02_space": my_ciecam02_space}

Well-known colorspaces

sRGB1, sRGB255: The standard sRGB colorspace. If you have generic “RGB” values with no further information specified, then usually the right thing to do is to assume that they are in the sRGB space; the sRGB space was originally designed to match the behavior of common consumer monitors, and these days common consumer monitors are designed to match sRGB. Use sRGB1 if you have or want values that are normalized to fall between 0 and 1, and use sRGB255 if you have or want values that are normalized to fall between 0 and 255.

XYZ100, XYZ1: The standard CIE 1931 XYZ color space. Use XYZ100 if you have or want values that are normalized to fall between 0 and 100 (roughly speaking – values greater than 100 are valid in certain cases). Use XYZ1 if you have or want values that are normalized to fall between 0 and 1 (roughly). This is a space which is “linear-light”, i.e. related by a linear transformation to the photon counts in a spectral power distribution. In particular, this means that linear interpolation in this space is a valid way to simulate physical mixing of lights.

sRGB1-linear: A linear-light version of sRGB1, i.e., it has had the gamma correction removed (undone), but is still represented in terms of the standard sRGB primaries.

xyY100, xyY1: The standard CIE 1931 xyY color space. The x and y values are always normalized to fall between 0 and 1. Use xyY100 if you have or want a Y value that falls between 0 and 100, and use xyY1 if you have or want a Y value that falls between 0 and 1.

CIELab: The standard CIE 1976 L*a*b* color space. L* is scaled to vary from 0 to 100; a* and b* are likewise scaled to roughly the range -50 to 50. This space takes a parameter, XYZ100_w, which sets the reference white point, and may be specified either directly as a tristimulus value or as a string naming one of the well-known standard illuminants like "D65".

CIELCh: Cylindrical version of CIELab. Accepts the same parameters. h* is in degrees.
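
As a concrete sketch, here is one sRGB color passed through a few of the spaces above (the input value is arbitrary; the first CIELab conversion uses the default D65 whitepoint):

from colorspacious import cspace_convert

rgb = [0.3, 0.5, 0.7]
xyz100 = cspace_convert(rgb, "sRGB1", "XYZ100")
lab_d65 = cspace_convert(rgb, "sRGB1", "CIELab")
lab_d50 = cspace_convert(rgb, "sRGB1",
                         {"name": "CIELab", "XYZ100_w": "D50"})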

Simulation of color vision deficiency

We provide simulation of common (and not so common) forms of color vision deficiency (also known as “colorblindness”), using the model described by [MOF09].

This is generally done by specifying a colorspace like:

{"name": "sRGB1+CVD",
 "cvd_type": <type>,
 "severity": <severity>}

where <type> is one of the following strings:

  • "protanomaly": A common form of red-green colorblindness; affects ~2% of white men to some degree (less common among other ethnicities, much less common among women, see Tables 1.5 and 1.6 in [SSJN00]).
  • "deuteranomaly": The most common form of red-green colorblindness; affects ~6% of white men to some degree (less common among other ethnicities, much less common among women, see Tables 1.5 and 1.6 in [SSJN00]).
  • "tritanomaly": A very rare form of colorblindness affecting blue/yellow discrimination – so rare that its detailed effects and even rate of occurrence are not well understood. Affects <0.1% of people, possibly much less ([SSJN00], page 47). Also, the name we use here is somewhat misleading because only full tritanopia has been documented, and partial tritanomaly likely does not exist ([SSJN00], page 45). What this means is that while Colorspacious will happily allow any severity value to be passed, probably only severity = 100 corresponds to any real people.

And <severity> is any number between 0 (indicating regular vision) and 100 (indicating complete dichromacy).

Warning

If you have an image, e.g. a photo, and you want to “convert it to simulate colorblindness”, then this is done with an incantation like:

cspace_convert(img, some_cvd_space, "sRGB1")

Notice that these arguments are given in the opposite order from what you might naively expect. See Simulating colorblindness for explanation and worked examples.
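
A minimal sketch of the full round trip (the image array and severity value here are purely illustrative):

import numpy as np
from colorspacious import cspace_convert

# A hypothetical RGB image with values on the 0-1 scale:
img = np.random.rand(32, 32, 3)

cvd_space = {"name": "sRGB1+CVD",
             "cvd_type": "deuteranomaly",
             "severity": 50}

# Note the argument order: we convert *from* the CVD space *to* sRGB1:
simulated = cspace_convert(img, cvd_space, "sRGB1")

# The result can fall slightly outside [0, 1], so clip before displaying:
simulated = np.clip(simulated, 0, 1)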

CIECAM02

CIECAM02 is a standardized, rather complex, state-of-the-art color appearance model, i.e., it’s not useful for describing the voltage that should be applied to a phosphorescent element in your monitor (like RGB was originally designed to do), and it’s not useful for modelling physical properties of light (like XYZ), but it is very useful to tell you what a color will look like subjectively to a human observer, under a certain set of viewing conditions. Unfortunately this makes it rather complicated, because human vision is rather complicated.

If you just want a better replacement for traditional ad hoc spaces like “Hue/Saturation/Value”, then use the string "JCh" for your colorspace (see Perceptual transformations for a tutorial) and be happy.

If you want the full power of CIECAM02, or just to understand what exactly is happening when you type "JCh", then read on.

First, you need to specify your viewing conditions. For many purposes, you can use the default CIECAM02Space.sRGB object. Or if you want to specify different viewing conditions, you can instantiate your own CIECAM02Space object:

class colorspacious.CIECAM02Space(XYZ100_w, Y_b, L_A, surround=CIECAM02Surround(F=1.0, c=0.69, N_c=1.0))

An object representing a particular set of CIECAM02 viewing conditions.

Parameters:
  • XYZ100_w – The whitepoint. Either a string naming one of the known standard whitepoints like "D65", or else a point in XYZ100 space.
  • Y_b – Background luminance.
  • L_A – Luminance of the adapting field (in cd/m^2).
  • surround – A CIECAM02Surround object.
sRGB

A class-level constant representing the viewing conditions specified in the sRGB standard. (The sRGB standard defines two things: how a standard monitor should respond to different RGB values, and a standard set of viewing conditions in which you are supposed to look at such a monitor, which attempts to approximate the average conditions in which people actually do look at such monitors. This object encodes the latter.)
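
For example, a sketch of constructing a custom viewing-conditions object (the numeric values below are illustrative, not taken from any standard):

from colorspacious import CIECAM02Space, CIECAM02Surround

# The default, sRGB-standard viewing conditions:
vc_srgb = CIECAM02Space.sRGB

# Custom viewing conditions: D65 whitepoint, 20% background luminance,
# a dim adapting field, and the standard "dim" surround:
vc_dim = CIECAM02Space("D65", Y_b=20, L_A=10,
                       surround=CIECAM02Surround.DIM)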

The CIECAM02Space object has some low-level methods you can use directly if you want, though usually it’ll be easier to just use cspace_convert():

XYZ100_to_CIECAM02(XYZ100, on_negative_A='raise')

Computes CIECAM02 appearance correlates for the given tristimulus value(s) XYZ100 (normalized to be on the 0-100 scale).

Example: vc.XYZ100_to_CIECAM02([30.0, 45.5, 21.0])

Parameters:
  • XYZ100 – An array-like of tristimulus values. These should be given on the 0-100 scale, not the 0-1 scale. The array-like should have shape (..., 3); e.g., you can use a simple 3-item list (shape = (3,)), or to efficiently perform multiple computations at once, you could pass a higher-dimensional array, e.g. an image.
  • on_negative_A

    A known infelicity of the CIECAM02 model is that for some inputs, the achromatic signal \(A\) can be negative, which makes it impossible to compute \(J\), \(C\), \(Q\), \(M\), or \(s\) – only \(h\) and \(H\) are spared. (See, e.g., section 2.6.4.1 of [LL13] for discussion.) This argument allows you to specify a strategy for handling such points. Options are:

    • "raise": throws a NegativeAError (a subclass of ValueError)
    • "nan": return not-a-number values for the affected elements. (This may be particularly useful if converting a large number of points at once.)
Returns:

A named tuple of type JChQMsH, with attributes J, C, h, Q, M, s, and H containing the CIECAM02 appearance correlates.

CIECAM02_to_XYZ100(J=None, C=None, h=None, Q=None, M=None, s=None, H=None)

Return the unique tristimulus values that have the given CIECAM02 appearance correlates under these viewing conditions.

You must specify 3 arguments:

  • Exactly one of J and Q
  • Exactly one of C, M, and s
  • Exactly one of h and H.

Arguments can be vectors, in which case they will be broadcast against each other.

Returned tristimulus values will be on the 0-100 scale, not the 0-1 scale.
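
Putting the two methods together, a sketch of a round trip through the appearance correlates (the input tristimulus value is arbitrary):

from colorspacious import CIECAM02Space

vc = CIECAM02Space.sRGB

# XYZ100 -> CIECAM02 appearance correlates -> back to XYZ100:
correlates = vc.XYZ100_to_CIECAM02([30.0, 45.5, 21.0])
xyz100 = vc.CIECAM02_to_XYZ100(J=correlates.J,
                               C=correlates.C,
                               h=correlates.h)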

class colorspacious.CIECAM02Surround(F, c, N_c)

A namedtuple holding the CIECAM02 surround parameters, \(F\), \(c\), and \(N_c\).

The CIECAM02 standard surrounds are available as constants defined on this class; for most purposes you’ll just want to use one of them:

  • CIECAM02Surround.AVERAGE
  • CIECAM02Surround.DIM
  • CIECAM02Surround.DARK
class colorspacious.NegativeAError

A ValueError that can be raised when converting to CIECAM02.

See CIECAM02Space.XYZ100_to_CIECAM02() for details.

Now that you have a CIECAM02Space object, what can you do with it?

First, you can pass it directly to cspace_convert() as an input or output space (which is a shorthand for using a space like {"name": "CIECAM02", "ciecam02_space": <whatever>}).

The plain vanilla "CIECAM02" space is weird and special: unlike all the other spaces supported by colorspacious, it does not represent values with ordinary NumPy arrays. This is because there are just too many perceptual correlates, and trying to keep track of whether M is at index 4 or 5 would be way too obnoxious. Instead, it returns an object of class JChQMsH:

class colorspacious.JChQMsH(J, C, h, Q, M, s, H)

A namedtuple with a mnemonic name: it has attributes J, C, h, Q, M, s, and H, each of which holds a scalar or NumPy array representing lightness, chroma, hue angle, brightness, colorfulness, saturation, and hue composition, respectively.

Alternatively, because you usually only want a subset of these, you can take advantage of the "CIECAM02-subset" space, which takes the perceptual correlates you want as a parameter. So for example if you just want JCh, you can write:

{"name": "CIECAM02-subset",
 "axes": "JCh",
 "ciecam02_space": CIECAM02.sRGB}

When using "CIECAM02-subset", you don’t have to worry about JChQMsH – it just takes and returns regular NumPy arrays, like all the other colorspaces.

And as a convenience, all strings composed only of characters from the set JChQMsH are automatically treated as specifying CIECAM02-subset spaces, so you can write:

"JCh"

and it expands to:

{"name": "CIECAM02-subset",
 "axes": "JCh",
 "ciecam02_space": CIECAM02.sRGB}

or you can write:

{"name": "JCh",
 "ciecam02_space": my_space}

and it expands to:

{"name": "CIECAM02-subset",
 "axes": "JCh",
 "ciecam02_space": my_space}

Perceptually uniform colorspaces based on CIECAM02

The \(J'a'b'\) spaces proposed by [LCL06] are high-quality, approximately perceptually uniform spaces based on CIECAM02. They propose three variants: CAM02-LCD optimized for “large color differences” (e.g., estimating the similarity between blue and green), CAM02-SCD optimized for “small color differences” (e.g., estimating the similarity between light blue with a faint greenish cast and light blue with a faint purpleish cast), and CAM02-UCS which attempts to provide a single “uniform color space” that is less optimized for either case but provides acceptable performance in general.

Colorspacious represents these spaces as instances of LuoEtAl2006UniformSpace:

class colorspacious.LuoEtAl2006UniformSpace(KL, c1, c2)

A uniform space based on CIECAM02.

See [LCL06] for details of the parametrization.

For most purposes you should just use one of the predefined instances of this class that are exported as module-level constants:

  • colorspacious.CAM02UCS
  • colorspacious.CAM02LCD
  • colorspacious.CAM02SCD

Because these spaces are defined as transformations from CIECAM02, to have a fully specified color space you must also provide some particular CIECAM02 viewing conditions, e.g.:

{"name": "J'a'b'",
 "ciecam02_space": CIECAM02.sRGB,
 "luoetal2006_space": CAM02UCS}

As usual, you can also pass any instance of LuoEtAl2006UniformSpace and it will be expanded into a dict like the above, or for the three common variants you can pass the strings "CAM02-UCS", "CAM02-LCD", or "CAM02-SCD".
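
For example, a sketch (the input color is arbitrary):

from colorspacious import cspace_convert, CAM02UCS, CIECAM02Space

# The string shorthand, using the sRGB-standard viewing conditions:
jab = cspace_convert([0.2, 0.4, 0.6], "sRGB1", "CAM02-UCS")

# The fully spelled-out equivalent:
jab2 = cspace_convert([0.2, 0.4, 0.6], "sRGB1",
                      {"name": "J'a'b'",
                       "ciecam02_space": CIECAM02Space.sRGB,
                       "luoetal2006_space": CAM02UCS})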

Changed in version 1.1.0: In v1.0.0 and earlier, colorspacious’s definitions of the CAM02-LCD and CAM02-SCD spaces were swapped compared to what they should have been based on [LCL06] – i.e., if you asked for LCD, you got SCD, and vice-versa. (CAM02-UCS was correct, though.) Starting in 1.1.0, all three spaces are now correct.

Color difference computation

colorspacious.deltaE(color1, color2, input_space='sRGB1', uniform_space='CAM02-UCS')

Computes the \(\Delta E\) distance between pairs of colors.

Parameters:
  • input_space – The space the colors start out in. Can be anything recognized by cspace_convert(). Default: “sRGB1”
  • uniform_space – Which space to perform the distance measurement in. This should be a uniform space like CAM02-UCS where Euclidean distance approximates similarity judgements, because otherwise the results of this function won’t be very meaningful, but in fact any color space known to cspace_convert() will be accepted.

By default, computes the Euclidean distance in CAM02-UCS \(J'a'b'\) space (thus giving \(\Delta E'\)); for details, see [LCL06]. If you want the classic \(\Delta E^*_{ab}\) defined by CIE 1976, use uniform_space="CIELab". Other good choices include "CAM02-LCD" and "CAM02-SCD".

This function has no ability to perform \(\Delta E\) calculations like CIEDE2000 that are not based on Euclidean distances.

This function is vectorized, i.e., color1, color2 may be arrays with shape (…, 3), in which case we compute the distance between corresponding pairs of colors.

For examples, see Color similarity in the tutorial.
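
A quick sketch (the colors compared here are arbitrary):

from colorspacious import deltaE

# Perceptual distance between two sRGB1 colors, measured in CAM02-UCS:
print(deltaE([1, 0.5, 0.5], [0.5, 1, 0.5]))

# The classic CIE 1976 Delta E*ab instead:
print(deltaE([1, 0.5, 0.5], [0.5, 1, 0.5], uniform_space="CIELab"))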

Utilities

You probably won’t need these, but just in case they’re useful:

colorspacious.standard_illuminant_XYZ100(name, observer='CIE 1931 2 deg')

Takes a string naming a standard illuminant, and returns its XYZ coordinates (normalized to Y = 100).

We currently have the following standard illuminants in our database:

  • "A"
  • "C"
  • "D50"
  • "D55"
  • "D65"
  • "D75"

If you need another that isn’t on this list, then feel free to send a pull request.

When in doubt, use D65: it’s the whitepoint used by the sRGB standard (IEC 61966-2-1:1999), and ISO 10526:1999 says “D65 should be used in all colorimetric calculations requiring representative daylight, unless there are specific reasons for using a different illuminant”.

By default, we return points in the XYZ space defined by the CIE 1931 2 degree standard observer. By specifying observer="CIE 1964 10 deg", you can instead get the whitepoint coordinates in XYZ space defined by the CIE 1964 10 degree observer. This is probably only useful if you have XYZ points you want to do calculations on that were somehow measured using the CIE 1964 color matching functions, perhaps via a spectrophotometer; consumer equipment (monitors, cameras, etc.) assumes the use of the CIE 1931 standard observer in all cases I know of.
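
For example:

from colorspacious import standard_illuminant_XYZ100

# D65 whitepoint under the default CIE 1931 2 degree observer:
print(standard_illuminant_XYZ100("D65"))

# The same illuminant under the CIE 1964 10 degree observer:
print(standard_illuminant_XYZ100("D65", observer="CIE 1964 10 deg"))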

colorspacious.as_XYZ100_w(whitepoint)

A convenience function for getting whitepoints.

whitepoint can be either a string naming a standard illuminant (see standard_illuminant_XYZ100()), or else a whitepoint given explicitly as an array-like of XYZ values.

We internally call this function anywhere you have to specify a whitepoint (e.g. for CIECAM02 or CIELAB conversions).

Always uses the “standard” 2 degree observer.
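
For example:

from colorspacious import as_XYZ100_w

# A named illuminant and an explicit tristimulus value are both accepted:
print(as_XYZ100_w("D65"))
print(as_XYZ100_w([95.047, 100, 108.883]))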

colorspacious.machado_et_al_2009_matrix(cvd_type, severity)

Retrieve a matrix for simulating anomalous color vision.

Parameters:
  • cvd_type – One of “protanomaly”, “deuteranomaly”, or “tritanomaly”.
  • severity – A value between 0 and 100.
Returns:

A 3x3 CVD simulation matrix as computed by Machado et al (2009).

These matrices were downloaded from the webpage of supplementary data accompanying [MOF09].

If severity is a multiple of 10, then simply returns the matrix from that webpage. For other severities, performs linear interpolation.
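
Usually you would just use the "sRGB1+CVD" space described above rather than calling this function yourself, but as a rough sketch, the matrix could be applied by hand something like this (this assumes the matrix acts on linear-light sRGB values, as in [MOF09]; the input color is arbitrary):

from colorspacious import machado_et_al_2009_matrix, cspace_convert

m = machado_et_al_2009_matrix("deuteranomaly", severity=50)

# Apply the 3x3 matrix to a single linear-light sRGB color:
rgb_linear = cspace_convert([0.2, 0.4, 0.6], "sRGB1", "sRGB1-linear")
simulated_linear = m @ rgb_linear
simulated_srgb = cspace_convert(simulated_linear, "sRGB1-linear", "sRGB1")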