Understanding the Complete LED Database
by Tim Park
We packed our 2016 LED Database full of useful information about the color quality of over 165 LED lights on the market. To be as complete as possible, we felt it important to explain not only how we conducted our color measurements, but also how to quickly read and understand all that color data.
So continue reading below to learn more about the LED database. If you want to skip ahead and get to the results, head on over to Page 2.
Ranking: The scale includes many lights that are tied for the same rank. So avoid counting down the list and saying that one LED light is ranked two places higher than another without first checking whether those two lights share the same rank. (For more on why so many are ranked the same, read the section below titled “Ranking the Lights.”)
Histogram: Here extended CRI is separated out into its individual R-values, R1 through R15. Pay specific attention to the R9 and R12 values, since LED lights have a hard time accurately reproducing these colors. With bicolor lights, the R9 value can dip significantly at color temperatures between the two extremes (such as 4000K), sometimes by as much as 10%, depending on the brand of light.
Spectrum: With LEDs there is almost always a huge peak at 450 nm followed by a dip around 470 nm. Depending on the quality of the LED, the peak and dip may be big or small. Ideally both will be as small as possible, although with daylight-balanced lights a big peak followed by a small dip is normal. The spectrum should also be relatively broad, since the tungsten and daylight sources the LED is mimicking are also broad. Lower quality lights have narrower spectra, resulting in a lack of reds above 620 nm as well as missing colors between 450 nm and 500 nm.
TM-30-15 (Rf and Rg): TM-30-15 averages together 99 color samples. Rf measures color fidelity while Rg measures color gamut; the closer each is to 100, the better. When looking at the color vector graphic for a high quality light, the red circle should completely trace the black reference circle. An oblong red trace means the colors are heavily skewed, something that might not be obvious once those values are averaged into a single number. This is why comparing the TM-30-15 graphics is so important. The green arrows show hue shifts in each area.
Color (CCT): Correlated Color Temperature tells you which “version” of white the light is. The temperature is correlated (or compared) to a blackbody radiator at the given temperature in kelvin. Tungsten-balanced light is close to 2800K, while daylight-balanced light is around 5600K. The color temperature is included in the database so that you can compare lights of the same temperature. (Note: the reference illuminant used for these measurements changes at 5000K. As a result you will probably see a jump or drop in certain R-values right at 5000K.)
CRI (Re): The Extended Color Rendering Index adds seven colors to the eight used in standard CRI, for a total of 15 colors, R1 through R15. The maximum value is 100, with 100 being the best. (Note: the colors for extended CRI are named below.)
TLCI: The Television Lighting Consistency Index averages 24 colors from the Macbeth chart (now owned by X-Rite and sold as the X-Rite ColorChecker). The maximum value is 100, with 100 being the best. (Note: the colors for TLCI are named below.)
CQS: The Color Quality Scale uses 15 saturated Munsell color samples to determine its value. (The colors used are not the same 15 colors used for CRI (Ra) or CRI (Re).) The maximum value is 100, with 100 being the best.
Rf and Rg: See TM-30-15 above.
Values in both CRI (Ra) and CRI (Re)
R1 = Light Greyish Red (low saturation)
R2 = Dark Greyish Yellow (low saturation)
R3 = Strong Yellowish Green (low saturation)
R4 = Moderate Yellowish Green (low saturation)
R5 = Light Blue (low saturation)
R6 = Bluish Green (low saturation)
R7 = Violet (low saturation)
R8 = Reddish Purple (low saturation)
Additional Values in CRI (Re)
R9 = Red (saturated)
R10 = Yellow (saturated)
R11 = Green (saturated)
R12 = Blue (saturated)
R13 = Skin Color (Light)
R14 = Leaf Green
R15 = Skin Color (Medium)
Values in TLCI (X-Rite ColorChecker)
Dark Skin, Light Skin, Blue Sky, Foliage, Blue Flower, Bluish Green,
Orange, Purplish Blue, Moderate Red, Purple, Yellow Green, Orange Yellow,
Blue, Green, Red, Yellow, Magenta, Cyan,
White, Neutral 8, Neutral 6.5, Neutral 5, Neutral 3.5, Black
To learn more about these color evaluation scales, check out our discussion about whether CRI is the right tool for the job.
Standards: Tungsten and Daylight
For comparison, here are color measurements of an incandescent bulb (tungsten) and daylight (on a cloudy afternoon) using the Asensetek Lighting Passport spectrometer. (These are NOT the standards that the spectrometer is calibrated to.)
Incandescent light from a tungsten filament results in very reliable color. In fact, I took readings from three brands and wattages of bulbs (Philips 60W Soft White, Ace 100W Soft White, and GE Clear 200W) and all measurements and graphics were basically identical.
Daylight is a tricky one. The Correlated Color Temperature varies depending on the time of day and what is in the sky, such as clouds, haze, and smog. Blue sky is also very different from direct sunlight. (I’ll be adding these color readings and graphics as I collect them.)
Testing Tools: Spectrometer and Software
All measurements were taken with the Asensetek Lighting Passport SMART Spectrometer synced to the Spectrum Genius Mobile App. All graphics and data are straight out of the color meter app and have not been altered or adjusted, other than being placed into one graphic for easy viewing.
For those specifically interested in TLCI, Asensetek has a dedicated app that syncs with the Lighting Passport and analyzes TLCI information: Asensetek Spectrum Genius Studio.
A majority of the color readings were taken at NAB 2016 in the Central Hall of the Las Vegas Convention Center. Measurements taken at a different location are labeled as such.
Why a convention center? Because nearly all LED lighting manufacturers that make lights for film and video work – big and small – are present. Some LED manufacturers on the list are brand new, and others are not sold in the United States; it would have been much harder to get demo units from them if we hadn’t tested at a trade show.
“Won’t the convention hall lights affect the results?”
We tested the convention hall lights, and they are so much dimmer than the lights we were testing that there was nearly no effect. We verified this in various ways: one was to compare the color reading of a sample light with and without the background subtraction setting built into the Asensetek Lighting Passport. A second was to compare the light levels of the convention hall lights to the light levels of a sample light.
Additionally, all lights were subjected to the same convention hall lights, so if there were any contamination, all lights would experience it nearly equally.
“Your readings are different than our readings”
This may be due to a number of factors. Lighting companies often bring the very best of their inventory to the trade show (what is known as “cherry picking”). Or it could be a quality control issue between the different batches of LED emitters used. (This can actually be pretty substantial from lot to lot.) Also, as can be seen when comparing the lights this year with the same models last year, LED technology is constantly improving. So it is possible you have a different generation of light.
Ranking the Lights
We spent some time developing a fair and impartial ranking method. From the outset we knew that even after narrowing the focus to just color quality, it was going to be very difficult to apply the method to 165 lights. A light might rank high according to one color scale, but not as well on another. It might have accurate color rendition for most colors, yet fail miserably with one or two. Bicolor lights throw a completely different element into the mix: a bicolor light might have excellent color at the tungsten end of the scale but do a terrible job at the daylight end.
Our solution was to look at all color scales. So our ranking is the average of extended-CRI (Re), TLCI, and CQS, which is then rounded up to the nearest 0.5. The logic behind this method is that each of these color scales is evaluating different parts of the spectrum, and so averaging them together will hopefully bring in more aspects of the spectrum. If one color scale greatly disagrees with the other two scales, it is pretty safe to conclude that there is some issue with the color that the other two scales might be missing.
For bicolor lights, the tungsten and daylight values of extended-CRI, TLCI, and CQS were all averaged together. The thought is that if one end of the scale has very low results, it really isn’t that useful of a bicolor light and as such should have a lower ranking. Perhaps a better option would be to buy the single color version of that light, if it is available.
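The two paragraphs above can be sketched in a few lines of code. This is only an illustration of the described method, with function names and sample readings of my own invention, not values from the database:

```python
import math


def rank_score(cri_re, tlci, cqs):
    """Average the three color scales, then round UP to the nearest 0.5."""
    avg = (cri_re + tlci + cqs) / 3
    return math.ceil(avg * 2) / 2  # ceil of half-steps = round up to nearest 0.5


def bicolor_rank_score(tungsten, daylight):
    """Each argument is a (cri_re, tlci, cqs) tuple for one end of the range.

    All six readings are averaged together, so a weak end pulls the
    whole rank down rather than being hidden.
    """
    values = list(tungsten) + list(daylight)
    avg = sum(values) / len(values)
    return math.ceil(avg * 2) / 2


# Hypothetical readings, for illustration only:
single = rank_score(92.1, 95.4, 90.8)
bicolor = rank_score_value = bicolor_rank_score((94, 96, 93), (85, 88, 84))
print(single)   # single-color light
print(bicolor)  # weak daylight end lowers the bicolor rank
```

Because of the rounding step, many lights whose raw averages differ by only a few tenths end up with the identical score, which is why so many lights share a rank.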
There is also the issue of how averages can mask outlier values. For example, LEDs are known to struggle with the R9 and R12 values within the extended-CRI scale, yet even within the averaged extended-CRI value this problem often gets lost. It is also why CRI (Re) values are nearly always lower than CRI (Ra) values. In fact, the R1 through R8 values used for CRI (Ra) are nearly always quite similar, so that average really doesn’t tell you much.
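To see how an average can hide a single bad value, here is a toy example with made-up R-values (not measurements from the database). A poor saturated red (R9) barely moves the extended-CRI average and is invisible to standard CRI entirely:

```python
# Hypothetical R1-R15 readings for an LED with a weak R9 (saturated red):
r_values = [95, 94, 96, 93, 95, 94, 92, 93,  # R1-R8 (the only values CRI Ra uses)
            55,                              # R9: poor saturated red
            90, 91, 78, 92, 94, 90]          # R10-R15 (note the weak R12, blue)

ra = sum(r_values[:8]) / 8   # standard CRI (Ra): averages R1-R8 only
re = sum(r_values) / 15      # extended CRI (Re): averages R1-R15

print(f"CRI (Ra) = {ra:.1f}")  # prints 94.0 -- the R9 problem is invisible
print(f"CRI (Re) = {re:.1f}")  # prints 89.5 -- lower, but R9 = 55 is still masked
```

Both numbers look respectable, which is exactly why the database shows the per-value histogram alongside the averaged scores.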
“Why aren’t the CRI (Ra) or TM-30-15 color scales included?”
We didn’t include CRI (Ra) because there was too much overlap with CRI (Re). This would give the CRI scales too much weight in the ranking.
The Rf and Rg values of TM-30-15 were not included because we didn’t want to dilute the other scales any further. Additionally, the Rg value can go above 100, which would skew an average of scales that are capped at 100. However, graphically the TM-30-15 gives A LOT of interesting information, so we included it anyway for use in your personal evaluation of a light.
“Why were the rankings rounded up to the nearest 0.5?”
As with all measurements, there is a certain amount of error: the measurement device, electronic/light noise, the emitters of the LED, and the human doing the test. So we felt that 0.5 was a good place to round to since more decimal points than that are probably just noise.
Plus, does it really make a difference if the light is ranked 0.1 different than another light? The CRI or TLCI or CQS of the light could be 3.0 different (or more) and most people wouldn’t even notice.
“But there are multiple lights within the same ranking.”
Exactly. The results of all the lights within the same ranking are essentially the same. While it is true that some LED lights might have individual color readings that are very good or very bad (the R9 value, for example), which would get them upgraded or downgraded in a typical ranking system, we chose to leave them be so as not to overly influence and bias the ranking. Different people value certain color scales or color values more than others; if we downgraded a light because it had very bad R9 values, someone who doesn’t care about skin tones and red fruits and vegetables might take issue with that.
“What is NOT included in the ranking?”
Everything unrelated to color quality. This includes build quality, durability, power source, how it is mounted on-set, softness/hardness of the light, beam angle, ease in controlling color temperature and brightness, price, etc. All of these factors would affect price, which explains why some lights high on our scale are inexpensive and why others farther down are more expensive. We leave it up to you to decide if these characteristics are important to you.
Mixing Tungsten, Daylight, and Bicolor into the same list
The goal of this database was to be complete. We debated separating the lights by their color or function: tungsten in one list, daylight in another, bicolor in yet another. Panel lights in this list, point source lights in another, flex/ribbon lights in yet another, and then another list for the bulbs.
Since there are so many ways to slice and dice this list, we decided to let you, the reader, figure that part out. And that is where the beauty of ranking the lights solely on color quality comes in, instead of ranking each light against the others. If you remove, for example, all of the daylight-balanced LED lights from the list, the ranking stays the same. Our database isn’t there to say, “My light is better than your light,” but instead to say, “These lights are all good.”
Also, don’t immediately discount the lights lower down on the list. For some of those, amazing color quality is not their objective. Instead they might be trying to create a very bright light for arenas or stage shows where color isn’t as important. Some lights in the middle might be super soft, even lights without the staccato effect seen in LEDs with hundreds and hundreds of diodes.
Questions or Comments?
Don’t agree with how we ran our tests? Think there is a better way to parse out the numbers? Do you think we left some aspect of color quality out? Please let us know. We really want this to be as useful a list as possible, and we appreciate everyone’s input.