Replies: 19 comments 3 replies
-
What I'm getting here is
This is with
I can get it to do something with:

```python
import colour
import numpy as np


def generate_square_waves(samples):
    square_waves = []
    square_waves_basis = np.tril(np.ones((samples, samples)))[0:-1, :]
    for i in range(samples):
        square_waves.append(np.roll(square_waves_basis, i))

    return np.vstack((
        np.zeros(samples), np.vstack(square_waves), np.ones(samples)
    ))


def XYZ_outer_surface(samples):
    XYZ = []
    wavelengths = np.linspace(
        colour.DEFAULT_SPECTRAL_SHAPE.start,
        colour.DEFAULT_SPECTRAL_SHAPE.end, samples
    )
    for wave in generate_square_waves(samples):
        spd = colour.SpectralPowerDistribution(
            'custom',
            {wl: value for wl, value in zip(wavelengths, wave)}
            # interpolator=colour.LinearInterpolator
        ).align(colour.DEFAULT_SPECTRAL_SHAPE)
        XYZ.append(colour.spectral_to_XYZ(spd))

    return np.array(XYZ).reshape(len(XYZ), -1, 3)


# 43 is picked as the number of samples to have integer wavelengths.
colour.write_image(XYZ_outer_surface(43), 'CIE_XYZ_outer_surface.exr')
```
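To see what the square-wave generation produces, here is a minimal, colour-free sketch of just that helper (same code as above, exercised with a tiny sample count):

```python
import numpy as np


def generate_square_waves(samples):
    # Lower-triangular basis: row k has ones in its first k + 1 bins.
    square_waves = []
    square_waves_basis = np.tril(np.ones((samples, samples)))[0:-1, :]
    for i in range(samples):
        # np.roll on the flattened basis shifts every block of ones by
        # i bins, wrapping around, so all band positions are covered.
        square_waves.append(np.roll(square_waves_basis, i))

    return np.vstack((
        np.zeros(samples), np.vstack(square_waves), np.ones(samples)
    ))


waves = generate_square_waves(4)
print(waves.shape)  # (14, 4): samples * (samples - 1) + 2 binary spectra
```

The first and last rows are the all-zeros and all-ones spectra, so the black point and the full spectrum are always included.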
-
This is my fault: I should have mentioned that the code uses the latest develop branch. We have changed the whole spectral computations backend and there are some backward-incompatible changes. I could adjust the code for v0.3.10 if that is convenient for you, though?
-
Let me just grab develop and try it out.
-
Alright, same result: exrdisplay shows
-
Oh right, this is the expected output: it is essentially a one-pixel-wide image storing all the XYZ coordinates. I used that because I can load it easily in The Foundry Nuke, and it was more convenient for illustrating the SO thread than pushing data into MPL or Plot.ly. Here is a Plot.ly example: https://plot.ly/~KelSolaar/91 and the generating code:

```python
import plotly.plotly as py
import plotly.graph_objs as go

# From the above "XYZ_outer_surface" definition.
XYZ = XYZ_outer_surface(43)
X, Y, Z = colour.utilities.tsplit(XYZ)

trace = go.Scatter3d(x=X, y=Y, z=Z)
data = [trace]
layout = go.Layout(margin=dict(l=0, r=0, b=0, t=0))
figure = go.Figure(data=data, layout=layout)
py.iplot(figure, filename='Visible Spectrum Volume Boundaries')
```
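As an aside, colour.utilities.tsplit just splits the last axis of a stacked array into its component arrays; a rough stand-in (the real implementation may differ in details) behaves like this:

```python
import numpy as np


def tsplit(a):
    # Rough stand-in for colour.utilities.tsplit: split the last axis
    # of a stacked array into its individual component arrays.
    a = np.asarray(a)
    return tuple(a[..., i] for i in range(a.shape[-1]))


XYZ = np.array([[0.1, 0.2, 0.3],
                [0.4, 0.5, 0.6]])
X, Y, Z = tsplit(XYZ)
print(X)  # [0.1 0.4]
```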
-
> As discussed on SO, it is a normalisation process, so you could scale the volume uniformly ideally.

Yes, although it would not yield a very useful volume representation! :)
-
Yes, I guess you just scale the thing such that the full spectrum maps to [100, 100, 100]. Fair enough. Let me say thanks very much for the discussion! This really enhanced my understanding of color theory.
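That scaling can be sketched as follows; the XYZ values here are made up for illustration, and by construction the last row is the full all-ones spectrum, which is the one that should map to the white point:

```python
import numpy as np

# Hypothetical unnormalised XYZ samples; the last row stands for the
# full (all-ones) spectrum, which should map to [100, 100, 100].
XYZ = np.array([[0.0, 0.0, 0.0],
                [10.2, 18.7, 7.4],
                [106.8, 106.8, 106.8]])

# Scale the whole volume uniformly so the full spectrum lands on Y = 100.
XYZ_scaled = XYZ * (100.0 / XYZ[-1, 1])
print(XYZ_scaled[-1])  # [100. 100. 100.]
```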
-
You are very much welcome!
-
Oh, and before I forget: if you are happy with the SO answer, would you mind accepting it, please? Cheers, Thomas
-
Yeah, sure. Before that, it probably needs some cleaning up, and to actually give the answer, too. :) For example:

I don't think this is true, is it? The maximum of CIE Y is 1, alright, but even if the functions were scaled in any other way, the shape would come up. It is rather a consequence of the fact that one restricts oneself to two values in the spectrum, 0 and alpha (where alpha is arbitrarily chosen), right?

Also, the answer should probably mention that "gamut of visible colors" is a bit of a misnomer. It's the gamut of colors generated by spectra with a certain maximum value. All this is more or less mentioned in the section Outer Surface Generation of the answer. That part should probably be moved to the beginning of the reply.
-
Ah, I think now I understand something else: the visible gamut is the visible gamut under a particular illuminant! Taking an "arbitrary" reference spectrum of all ones, corresponding to a perfectly white object, the integral

X = ∫ S(λ) β(λ) x̄(λ) dλ (and likewise for Y and Z)

will give you the "whitest" color possible – under that illuminant. You're not just feeding square spectra into the observer, but chunks of the illuminant! Setting the illuminant to 1, as we have done so far, is essentially equivalent to picking the rarely used E illuminant. So, yeah, I guess it does make sense after all to speak about the gamut of visible colors (under a particular illuminant).
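In discrete form that integral is just a weighted sum; here is a toy version with made-up sample values for the illuminant S, the reflectance β and the x̄ colour matching function:

```python
import numpy as np

# Toy illuminant S, reflectance beta and CMF x_bar, sampled at four
# wavelengths with 10 nm spacing (all values are hypothetical).
S = np.array([0.8, 1.0, 1.1, 0.9])
beta = np.ones(4)  # perfectly white object: beta(lambda) = 1
x_bar = np.array([0.1, 0.5, 0.7, 0.2])
d_lambda = 10.0

# X = integral of S(lambda) * beta(lambda) * x_bar(lambda) d(lambda),
# approximated as a Riemann sum.
X = np.sum(S * beta * x_bar) * d_lambda
print(X)  # ≈ 15.3
```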
-
Yes, I need to reformulate that part to account for the whole surface, not just the upper limit; I originally did not understand what you were looking for.
-
Not really under a particular illuminant, but under the Equal Energy illuminant, which has constant radiant power. The CIE definition, where every word is important, is actually as follows: the colour matching functions are the tristimulus values of monochromatic stimuli of equal radiant power. It is the first sentence of our dedicated (not-up-to-date) notebook here: https://github.com/colour-science/colour-notebooks/blob/master/notebooks/colorimetry/cmfs.ipynb

There is a subtle but critical point: when you compute the tristimulus values XYZ of a sample, the illuminant part is actually the illuminant S under which you are viewing the sample; it is independent of the CMFS. Notice that there is no parameterisation to include a hypothetical illuminant under which the CMFS would have been measured: they are pure. You can replace them with camera sensor sensitivities and still not have to include any illuminant in the measurement conditions for the sensitivities. This is the reason I was saying "crank(ing) up some wavelengths here and there" would not be useful: you would be artificially increasing HVS sensitivity at some particular wavelengths.

This page has some really interesting further reading on how the CMFS were derived through colour matching experiments by Wright & Guild following Maxwell's work: http://www.handprint.com/HP/WCL/color6.html
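That definition can be read off directly: feeding a monochromatic stimulus of unit radiant power through the tristimulus computation returns exactly the CMFS row at that wavelength. A sketch with invented CMFS values:

```python
import numpy as np

# Hypothetical CMFS matrix, one row per wavelength bin, columns are
# the x_bar, y_bar, z_bar values (numbers invented for illustration).
cmfs = np.array([[0.2, 0.0, 1.0],
                 [0.4, 0.3, 0.5],
                 [0.3, 0.9, 0.1],
                 [0.6, 0.4, 0.0]])

# A monochromatic stimulus of unit radiant power at wavelength bin 2...
stimulus = np.array([0.0, 0.0, 1.0, 0.0])

# ...has tristimulus values equal to the CMFS row at that wavelength.
XYZ = stimulus @ cmfs
print(XYZ)  # [0.3 0.9 0.1]
```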
-
Thanks again for the replies. I think we're on the same page now.

Alright. What I meant to say is: it doesn't make sense to speak of the visible gamut per se, but only with an illuminant; equal energy, D65, pick your poison. That illuminant then shines upon the object under consideration, and it reflects the wavelengths more or less well (β(λ)).

"Equal radiant power" actually means I can pick and, because at the end of the day they are all normalized by

I don't know what that wants to say.

Looks super interesting, thanks for the hint!
-
We are probably entering the meta-discussion stage here. The way I see it is as follows: the HVS is a sensory system whose purpose is to measure something, thus there is an envelope to the extent of what it is capable of measuring. The visible gamut represents the envelope for an event that would stimulate the sensor fully. It is obviously more complex because the envelope is relative and we don't account for saturation, etc.

What I was trying to say is that when you use the CMFS, you don't care about the conditions under which they were assembled and put together. They are pure in that they are self-contained like a black box, and you don't have any parameter to change the way they measure radiant energy. A parallel would be a thermometer: in order to use one, you don't need knowledge of how it was assembled, and you don't have hooks to tweak the way it measures temperature; it simply measures it. Does it make sense?
-
Sure, feel free to close this one. For your extraordinary effort, I've awarded a 100-point bounty on Stack Overflow. Thank you again!
-
Thanks @nschloe, this is much appreciated!
-
Hi, came across this when searching for how the visual gamut boundary is calculated. Good discussion; thanks. I have a few points to make though.

It does seem arbitrary that the visual gamut is sampled with a few square waves. In theory, we should be exhausting all physically-possible spectra, calculating the corresponding LMS/XYZ values, and then plotting them. That's obviously not possible, and so sampling is necessary. But I am not sure I follow @KelSolaar's point that "crank(ing) up some wavelengths here and there would not be useful because you would be artificially increasing HVS sensitivity at some particular wavelengths." I am not sure how that's related to HVS sensitivity at all. For any given arbitrary spectrum, I can use the cone sensitivities to calculate the LMS responses for that light, which corresponds to a point in the LMS space and is a point that's inside the visible gamut. Why is using an arbitrary light artificially increasing HVS sensitivity?

Also, I am not sure I follow the following discussion:

The visible gamut has nothing to do with the illuminant, doesn't it? The visible gamut is just all the colors that the HVS can see. I am not sure why the definition of the CMFs has anything to do with the visible gamut, either. The triplets at each wavelength in the CMFs tell us the responses stimulated by different spectral (monochromatic) lights that have the same energy/power/irradiance/radiance. So the tristimulus values at each wavelength in the CMFs are responses of unit-power monochromatic lights, not an equal-energy illuminant. This interactive tutorial explains what CMFs really are: https://www.cs.rochester.edu/courses/572/colorvis/cone2cmf.html

That's a great piece of work. I'd also recommend this article: http://yuhaozhu.com/blog/cmf.html, which is based on two prior publications, both of which meticulously document the exact steps taken to convert Wright & Guild's experimental data to derive the CIE 1931 RGB CMFs.
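The point about arbitrary spectra can be sketched directly: any non-negative spectrum, multiplied against the cone sensitivities, yields an LMS triplet that is a point inside the visible gamut, with no illuminant involved. All values below are invented for illustration:

```python
import numpy as np

# Hypothetical cone sensitivities over five wavelength bins
# (rows: L, M, S; numbers invented for illustration).
cone_sensitivities = np.array([
    [0.05, 0.30, 0.80, 0.90, 0.40],  # L
    [0.10, 0.50, 0.90, 0.60, 0.15],  # M
    [0.80, 0.60, 0.10, 0.02, 0.00],  # S
])

# An arbitrary non-negative spectrum: its LMS response is one point
# inside the visible gamut.
spectrum = np.array([0.2, 0.0, 1.0, 1.0, 0.3])
LMS = cone_sensitivities @ spectrum
print(LMS)  # [1.83  1.565 0.28 ]
```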
-
I'm starting a thread in regard to https://stackoverflow.com/a/48396021/931625 as we will hit the comments limit there.

I'm assuming @nschloe is the author of the question, and that the snippet being discussed is as follows:

The output image is here: https://drive.google.com/file/d/1GScSAMyyljDQGvZqUWluix9n9fy66NNs/view?usp=sharing