Talk:Color depth

All 16 million colors

The image that claims to map all 16M colors is false. While 4k*4k is enough pixels, the way it's done is incorrect. It misses all the saturation ranges; it only covers [max-sat, all-hues, all-values]. 240D:0:5B34:5800:8487:B5D2:419A:5CA5 (talk) 06:07, 10 March 2018 (UTC)[reply]

If you zoom in it shows all the colors, though I agree it is hard to see. Every combination of r+g is a 16x16 pixel block, and then each of the 256 pixels in each block has a different value of b added to it. Spitzak (talk) 18:16, 21 March 2018 (UTC)[reply]
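A rough sketch of the layout Spitzak describes, assuming Python with NumPy (the real image apparently orders the blue values in spirals within each block, so this is not a byte-exact reconstruction, just the same idea):

    import numpy as np

    # 4096 x 4096 image: each (r, g) pair owns a 16x16 block, and the 256
    # pixels inside that block enumerate every blue value, so all 2^24
    # colours appear exactly once. (The triple loop is slow but clear.)
    img = np.zeros((4096, 4096, 3), dtype=np.uint8)
    for r in range(256):
        for g in range(256):
            for b in range(256):
                y, x = g * 16 + b // 16, r * 16 + b % 16
                img[y, x] = (r, g, b)

    # Sanity check: every 24-bit colour occurs exactly once.
    packed = (img[..., 0].astype(np.uint32) << 16) | \
             (img[..., 1].astype(np.uint32) << 8) | img[..., 2].astype(np.uint32)
    assert np.unique(packed).size == 2 ** 24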

In simple terms, the image is missing black (0,0,0) and white (255,255,255), and a lot more shades in between. — Preceding unsigned comment added by 59.100.113.33 (talk) 04:02, 6 December 2019 (UTC)[reply]

According to the legends of the images, black is in the lower-right corner of the upper-left blob, and white is in the center of the lower-right blob. I am not sure if the maker of the image scaled it without filtering or if the "spirals" used for the blue channel are actually correct; either of these would cause not all combinations to be in the image. But it would still look the same. Spitzak (talk) 07:11, 6 December 2019 (UTC)[reply]
The legends are in fact correct (I checked in GIMP), but someone should probably upload a non-spiral all-16M-colors image, since the spirals are so confusing. Sisima70 (talk) 18:52, 15 August 2023 (UTC)[reply]

Images

I uploaded a photograph at different color depths, but I am very displeased with my formatting. If someone could reformat these images without destroying detail, I would be grateful. The problem with putting the pictures in a gallery or making them smaller thumbs is that the computer rounds (if that is the correct word) some of the colors, creating shades of grey in a b&w image and colors that do not exist in the color depth, etc. Thegreenj 21:59, 12 February 2007 (UTC)[reply]

This page needs cleanup. The Direct color section is a haphazard, unorganized list of color depths. I don't know how, but that needs to be cleaned up. --Josh Lee 21:31, Jan 21, 2005 (UTC)

Err, this page looks messy in Safari. Getting some overlapping links where scanner colour bit depth is explained. Anchor links malfunction too. I think it has something to do with these floated images, but I'm unsure. FYI. george.

18-bit colour suitable for biomimetic application

Indexed Color

This section gives the feeling that there are only PC graphics cards in the world... --Diego 13:14, 20 June 2006 (UTC)[reply]

Beyond True Color

This article begins discussing color depth in terms of Bits Per Pixel. However, the "Beyond True Color" section introduces the term Bits Per Channel. In this context, what does channel refer to? I am assuming that channel refers to the individual colors in a pixel, RGB. Therefore, would 24 bits per pixel = 8 bits per channel? So when this section states that human vision can see at best about 10 bits per channel, would that be equivalent to 30 bits per pixel? - Unsigned
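For what it's worth, that reading matches the usual arithmetic; a trivial Python sketch just to make the assumption explicit (the helper names are made up for illustration):

    # "Bits per pixel" for direct colour is simply bits per channel times
    # the number of channels, so 8 bpc -> 24 bpp and 10 bpc -> 30 bpp.
    def bits_per_pixel(bits_per_channel, channels=3):
        return bits_per_channel * channels

    def pack_rgb888(r, g, b):
        # one 24 bpp pixel built from three 8-bit channels
        return (r << 16) | (g << 8) | b

    print(bits_per_pixel(8), bits_per_pixel(10))   # 24 30
    print(hex(pack_rgb888(255, 128, 0)))           # 0xff8000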

This whole section needs a lot of work. This paragraph, for instance, is both poorly worded and misleading:
"For extended dynamic range imaging, including high dynamic range imaging (HDRI), floating point numbers are used to describe numbers in excess of 'full' white and black. This allows an image to describe accurately the intensity of the sun and deep shadows in the same colour space. Various models are used to describe these ranges, many employing 32 bit accuracy per channel. A new format is the ILM "half" using 16-bit floating point numbers, it appears this is a much better use of 16 bits than using 16-bit integers and is likely to replace it entirely as hardware becomes fast enough to support it."
First of all, floating point numbers don't describe numbers "in excess of 'full' white and black." Rather, they allow for a less discrete distribution of tones *between* white and black, which enables you to record detail more accurately. Wording like "it appears" shows a lack of conviction about the data. The bit about ILM seems to refer to Industrial Light and Magic, who developed the OpenEXR format. If it is in fact discussing OpenEXR, why not say that, rather than ILM? It's confusing even to me, and I use HDR imaging in my everyday work as a pro photographer. Dilvie 04:22, 20 April 2007 (UTC)[reply]
I'm not sure, but it sounds to me like this paragraph is describing recording the RGB intensities of the sun (as their example) beyond full white (meaning the upper clipping point of the eyes). At a certain point the eye can't register higher intensities; the value is clipped (so intensities beyond that point appear as the same full white), but one can still record the full intensity electronically. This clipping is also what is responsible for a colored light (green, for example) beginning to turn whiter at a certain point until the color has become completely white: the green becomes clipped, then, as intensity increases, the red and blue receptors start registering increasingly higher values (while the eye gives no different value for green) until they clip as well.
Although a value beyond "full white" is not helpful for displaying something (as-is) to the eye, it is useful for image manipulation, research, and many other purposes. This is where floating point numbers have an advantage over integers, as they can record values over a broader range. Their downside is that the higher the number goes, the less precisely small changes can be recorded (for example, working only with whole numbers for simplicity: 3 significant digits can record 0-999 fine, with no missed whole numbers, but at higher powers of ten, 100,000-999,000 skips nearly a thousand whole numbers at a time, and 100,000,000-999,000,000 skips nearly a million whole numbers at a time). I've worked on projects where we've used floating point to represent excessive brightness, but we used our own data format, and values were entered manually (no cameras involved). — al-Shimoni (talk) 20:27, 28 May 2012 (UTC)[reply]
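The same trade-off is easy to see with a 16-bit float ("half") versus a 16-bit integer; a small illustration assuming Python with NumPy (this is not ILM's OpenEXR code, just the general idea):

    import numpy as np

    # uint16 covers 0..65535 in uniform steps of 1; float16 can represent
    # values well above any nominal "white" level, but its step size grows
    # with magnitude, so precision is spent where the values are small.
    print(np.float16(2049) == np.float16(2048))   # True: 2049 is not representable
    print(np.float16(1.0) + np.float16(0.001))    # steps near 1.0 are ~0.001
    print(np.finfo(np.float16).max)               # 65504.0, where steps are 32 apart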

Another reason further work is needed is that it suggests anything beyond 8 bits per channel is unusual, based on the number of colours available (2 to the power of 3x8, about 16.7 million). However, 16 bits per channel is now widespread and important in photographic manipulation software. The benefits are very clear: when tonal areas are 'stretched' using curves facilities (almost standard in advanced photographic image manipulation), 8 bits readily split shades into easily distinguishable bands, like a shaded contour map (aka 'posterisation'), losing the original smooth transitions. 16-bit colour images can be subjected to substantially more use of curves before the posterisation effect occurs.
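A small sketch of that argument, assuming Python with NumPy: apply an aggressive "curves" stretch to a narrow band of midtones stored at 8 and at 16 bits, and count how many distinct display shades survive.

    import numpy as np

    def stretched_levels(bits):
        levels = 2 ** bits
        ramp = np.linspace(0.45, 0.55, 1000)                    # narrow band of midtones
        stored = np.round(ramp * (levels - 1)) / (levels - 1)   # quantised to the storage depth
        stretched = np.clip((stored - 0.45) / 0.10, 0.0, 1.0)   # steep curve: band -> full range
        return np.unique(np.round(stretched * 255)).size        # distinct 8-bit display values

    print(stretched_levels(8))    # ~26 shades left: visible banding / posterisation
    print(stretched_levels(16))   # ~256 shades: still a smooth gradient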

Television color

This section is missing references, e.g. "Mitsubishi and Samsung, among others, use this technology in some TV sets". I couldn't find any evidence that these companies are really using 48-bit color depth in their TV sets. Please add a reference or delete the line. - Andy. —Preceding unsigned comment added by 92.236.148.59 (talk) 16:20, 21 April 2009 (UTC)[reply]

RAW format

This page should mention RAW camera formats since one advantage many such formats provide is 12-bit-per-channel color. —Ben FrantzDale 13:33, 19 October 2006 (UTC)[reply]

Dubious

I added the dubious tag as some of the stuff there sounds like misleading marketing to me. I came across this discussion [1]. While not a reliable source, I'm pretty sure that what they're saying there is right. There is no intrinsic reason why you can't produce correctly saturated yellow from red and green. Simple physics and an understanding of the human visual system should tell you this. You may or may not be able to achieve this in practice at the current time, but suggesting it's an approximation is misleading. Nil Einne 16:56, 1 August 2007 (UTC)[reply]

Actually, the article text is correct, or at least has the right idea. The physics is not so simple. The complete range of colors visible to the human visual system cannot be entirely reproduced by combinations of only three real primary colors. For any "mixed" (non-primary) color, the saturation will not be as great as for a monochromatic light source of the same color. No matter what three primaries you pick, the gamut will not completely span the entire color space.
If you're not familiar with colorimetry, some diagrams might help: Look at this diagram and this diagram. In those diagrams, a color is described by its x-y coordinate and is represented by a point. The color's saturation is measured by the closeness of the color coordinate to the outer monochromatic line. If you choose three primary colors, then all possible mixtures of those primaries are contained in the triangle formed by the three primaries' color coordinates; the triangle is called the gamut. No matter what three primaries you pick, you'll always leave out part of the visible range. (To be more precise, the only way to have three primary colors that span the entire color space is to use so-called "imaginary" colors, which essentially make use of "negative brightness," a mathematical trick that results in non-realizable primaries.) For more details, see CIE 1931 color space, especially this section.
I removed the dubious tag and added a citation to support the article text. -- WakingLili (talk) 18:55, 16 August 2007 (UTC)[reply]
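A small numerical illustration of that triangle argument, assuming Python and the commonly quoted sRGB primary chromaticities (the ~520 nm spectral-green coordinate is approximate, from memory):

    # Sign test: a chromaticity point lies inside the triangle spanned by
    # the three primaries only if it is on the same side of all three edges.
    R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)   # sRGB primaries (x, y)

    def side(p, a, b):
        return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

    def in_gamut(p):
        s = (side(p, R, G), side(p, G, B), side(p, B, R))
        return not (min(s) < 0 < max(s))

    print(in_gamut((0.40, 0.40)))   # True: a desaturated colour, inside the triangle
    print(in_gamut((0.07, 0.83)))   # False: monochromatic ~520 nm green, outside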

I have no idea where to put this. 8-bit color, to use this as an example, does not give you 256 colors; it gives you 256 gradations and 256^3 colors. — Preceding unsigned comment added by 68.71.2.138 (talk) 17:09, 4 July 2017 (UTC)[reply]

That's because the article is referring to true 8-bit color, not 24-bit (8 bits per channel): a grand total of 8 bits for all color channels combined. Thus a maximum of 256 values (0-255) can be used. 98.103.160.18 (talk) 19:41, 12 July 2017 (UTC)[reply]

10-bit - Matrox Parhelia

Didn't the Matrox Parhelia graphics card support 10-bit colour depth? I would have added this fact, but I do not feel I understand enough of this topic to make a definitive contribution.

16-bit colour

The statement that “many Macintosh systems” support[ed] 65536 colours is incorrect. All “16-bit” Mac graphics use (and have always used) 15-bit colour with one bit of padding, providing 32768 colours. On the other hand, I believe Windows supports 16-bit colour (5 bits R, 6 bits G, 5 bits B). I don’t have any sources for this offhand, but then, neither does much of anything in the article. Also, these modes are direct colour, not indexed colour. -213.115.77.102 (talk) 13:46, 10 January 2008 (UTC)[reply]

Macs never had direct 16-bit color. All 16-bit video hardware shipped by Apple had 8-bit DACs and chose 65536 colors from a 16777216 color palette. 76.126.134.152 (talk) 19:33, 19 August 2008 (UTC)[reply]
Got a citation for that? Why would you bother with the slow-to-access 192-kbyte palette table that would imply (64k colours x 3 bytes per colour), or with all the work of choosing the 65 thousand "best" colours for an image which may only have a few hundred thousand pixels? The only purpose of using a 16-bit colour mode when a 24-bit DAC is available is either to accelerate drawing or to save memory, and the perceptual effect of either 15- or 16-bit direct colour when properly used (especially with dithering etc.) is almost as good as 24-bit, and certainly far better than 8-bit for anything but greyscale (which only needs 256 entries anyway).
The conversion from 15- or 16-bit direct colour data to 24-bit output is amazingly simple: put the 16-bit colour word into an appropriate register that splits the data out into three fields (of 5 bits each, or 5 + 6 + 5) and feeds them to the high bits of the Red, Green and Blue registers of the DAC, with the low bits automatically held at zero, in effect padding them without needing to do any work. Even if the table is held in a completely separate, dedicated RAM bank, your pixel now has to be slightly delayed for an additional step while its 24-bit value is looked up from a 16-bit one, THEN passed to the DAC. OK, you have to do that when looking up 8- or 4-bit pixels anyway, but we're already taking a speed hit by fetching a 2-byte word for each pixel rather than one byte or even a nibble (and therefore moving 2 or 4 pixels at a time over a 16-bit bus, or 4-8 over a 32-bit one, instead of just 2), making it even slower - possibly slower, in fact, than 24-bit direct colour and perhaps even as slow as 32-bit - which seems somewhat counterproductive.
If you want, you can also apply a minor scaling factor to the output voltage, but the difference between 248 and 255 (1111-1000 and 1111-1111) is pretty hard to see when you don't even have 255 available for comparison, let alone 252 (1111-1100) versus 255... (In fact, if you're doing 5-6-5, hold the padding bits at "1" instead: the lowest level will be a very dark grey with the merest of purple tints (8-0-8), which won't be discernible on a correctly adjusted monitor, and white will be 255-255-255; otherwise, you'll get a true black (0-0-0), but a slightly green-tinged "white" (248-252-248).)
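The two expansion schemes described above, sketched in Python for an assumed RGB565 pixel word (5 bits red, 6 bits green, 5 bits blue):

    def rgb565_to_888_zero_pad(word):
        r, g, b = (word >> 11) & 0x1F, (word >> 5) & 0x3F, word & 0x1F
        # Feed the bits to the high end of each 8-bit DAC value, low bits held at 0.
        return (r << 3, g << 2, b << 3)

    def rgb565_to_888_replicate(word):
        r, g, b = (word >> 11) & 0x1F, (word >> 5) & 0x3F, word & 0x1F
        # Replicate the top bits into the padding so full scale maps to 255.
        return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

    print(rgb565_to_888_zero_pad(0xFFFF))    # (248, 252, 248): slightly green-tinged "white"
    print(rgb565_to_888_replicate(0xFFFF))   # (255, 255, 255): true white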
Consider - at the lowest common colour resolution used by Macs, 512x384, a 16-bit direct colour mode would use exactly 384kbytes. Add a 15 or 16-bit colour map on top of that, and it goes up to 480 or even 576kbyte. The latter wouldn't even be possible if you only had a 512kbyte adaptor, and the former would be disingenuous when you're only drawing 192k pixels in the first place (or 3 pixels for each CLUT entry), and may be using a system which shares its RAM between video and programs and might only have 4MB installed in total - or even just 2MB. 192k in that case would represent almost 1/10th of the total memory, and possibly 1/8th or more of what's potentially usable by actual programs and data.
If we step up the resolutions the story doesn't change until you're using quite large screens, which simply wasn't the case back in the days when 24-bit was still a novelty.
At 640x400, direct colour uses 500kbyte, and so again will work with a 512k adaptor. Adding CLUTs naturally takes that to 596 or 692kbyte. Even if you happened to have that memory installed, it's far more likely you'd want to use it for extra resolution rather than a very very slight improvement to the colour quality.
640x480 means 600kbyte at 16 bit, and even 563kbyte if you were a bit crazy and were using 15 bitplanes instead of packed pixels. Obviously we need more adaptor memory, but theoretically no more than 640k, maybe 768, rather than necessarily a full 1MB. Add 192k to 600, and you get 792k... Whoops.
720x540 gives 760kbyte, which only just fits inside 3/4mb. I suppose if you had 1MB available, you could chuck a 192kbyte CLUT on top, but there's still only 380k pixels here. It seems excessive, like using a 256-colour palette for some ultra low rez display with only 1728 pixels (eg 48x36).
800x600 reaches the heady heights of 938k. Once again, you won't fit even a 32k-colour CLUT within 1.00MB here...
832x624 (a Mac specialty) noses in at 1014k, or 0.99MB. Seeing as this is a mode that is almost exclusively found on Macintoshes, and was used particularly on the iMac with 16-bit colour (presumably with 2MB VRAM and two video pages to make the display smoother?), is the suggestion seriously that they installed an extra 192kbytes just to hold all the colour data... for what would still only be about 500k pixels - still not even 8 pixels per colour? (Which would be equivalent to having a palette entry for each of the 4k colours you could use in HAM on an Amiga in its lowest rez, in terms of pixels per CLUT entry efficiency...)
Sorry, but no, I don't see this happening. You have to get up into modes which use 1.5MB bitmaps just so the 16-bit version is blowing only 1/8th of the total VRAM in use on a pretty much pointless lookup table. Or with a typical 2MB adaptor (about the breakpoint where it becomes practical to use 24bit mode instead, as you can use 800x600 or 832x624 in single-page, which is about as high as you'd want to run a typical 14 or 15 inch low-frequency CRT at), your maximum rez would be hobbled at 1024x768 or so, whereas you could have achieved 1152x864, 1152x870 or even 1152x900 (the 864-line variant is another Mac specialty, the others Linux) in direct colour.
Take us up to 3 or 4MB and the CLUT is a more practical and even useful proposition, consuming 1/16th of total RAM or less (equivalent to using a 256-from-16M palette with a 12kbyte/kpixel bitmap, or 128x96, which is a little extravagant but not outside the realms of imagination) but unless you're trying to do CAD on a very limited budget, why would you even bother? You can do 1024x768 at 24 bit already, or possibly even 1152x864. If you could afford a monitor that gave higher resolution output with a comfortable refresh rate, then you could also afford more VRAM. If you couldn't, then you could still run at 1280x960 or 1280x1024 in 16-bit with dithering, and your fuzzy, fat-dot-pitch CRT would smooth it out so it looked like 24-bit. More likely for that kind of work at the time you'd plump for 8-bit anyway, as it would be faster, and would still give sufficient colour fidelity. After all, 10 years earlier, you'd have been operating in monochrome. A bit of barely-perceptible dithering is no hardship.
(Oh, and for comparison's sake? 512x384 at 24-bit is... 576kbyte! How about that, eh? You'd use exactly the same amount of memory in both colour modes. Therefore, why not just use 24bit? At 640x400, 24bit is 750kbyte (again fitting quite neatly into 3/4mb), 640x480 is 900kb, and you have to go up to 720x540 to tip the scales beyond 1MB - up to 1.11 in fact. However, you can still just about squeeze 720x480 (aka "NTSC DVD resolution") into 0.99MB (before rounding-off, it's JUST less than 832x624x16-bit), which could be useful in various ways in of itself... I believe I've probably covered the relevant higher 24-bit resolutions already...) 193.63.174.211 (talk) 17:04, 8 February 2013 (UTC)[reply]
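For anyone wanting to check the arithmetic running through the comment above, a trivial Python helper (framebuffer bytes = width x height x bits-per-pixel / 8, plus an optional 65536-entry x 3-byte lookup table for a "16-bit CLUT"):

    def framebuffer_kib(width, height, bpp):
        return width * height * bpp / 8 / 1024

    print(framebuffer_kib(512, 384, 16))   # 384.0 KiB
    print(framebuffer_kib(512, 384, 24))   # 576.0 KiB -- same as 16-bit mode plus a 192 KiB CLUT
    print(framebuffer_kib(832, 624, 16))   # 1014.0 KiB, just squeezing under 1 MiB
    print(65536 * 3 / 1024)                # 192.0 KiB for the 65536-colour table itself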

How many colours are used in coding video?

I suspect that video coding uses either 12-bit (R:G:B - 4:4:4 bits) or 15/16-bit (R:G:B - 5:5/6:5) colour, or maybe, in high-quality video like Blu-ray, even 24 bits (R:G:B - 8:8:8). Say a video has 24 fps (frames per second); then with a 15-bit colour system each of the 3 (R:G:B) colours has 32 levels of intensity. This is quite enough for small-resolution (like 480*360, 4:3) videos: 32*32*32=32768 colours. Most videos are compressed about 30-40 times relative to the original (uncompressed, .bmp-like) format. How to explain such good compression without magic? I think you need to add 8 bits to each pixel, meaning "how long the pixel keeps its colour unchanged". 8 bits gives 256 values and 1 s is 24 frames, so a pixel could keep the same colour for 256/24=10.67 s. So for 10 seconds each pixel can hold its colour without needing more space on the HDD. If half the video's pixels freeze (don't change) for 10 s, then that half of the image is compressed 256 times, and if the other half is uncompressed the total compression is still only about 2 times. If 19/20 of the pixels don't change colour for 2 seconds, then those pixels (held for 2 s = 48 frames) are compressed about 48 times, so (19/20)/48=0.0198, while the changing 1/20=0.05 - so the 1/20 of each frame whose pixels change needs more space than the 19/20 that hold for 2 s. But you also need to remember the 8 bits added to each pixel's colour (saying how long it keeps its intensity), so if I get about 20 times compression here, it is actually only 20/8=2.5 times compression. Of course, when everything in the video moves fast, only blocky squares are visible. For example, I have a beach-scene video (~88.8 MB) with moving people - Flash .flv type with size 93187000 bytes = 745496000 bits. This video has a resolution of 480*360=172800 pixels and is 23:33 long (23 minutes, 33 seconds). Uncompressed, this video should take up 480*360*15*24(fps)*23.5*60 = 8.77*10^10 bits = 10964160000 bytes = ~10 GB, so the compression is about 118 times. The sound is probably mono 22 kHz, 64 kbit/s MP3. Sound makes 64000*23*60+64000*33 = 90432000 bits = 11304000 bytes = ~10 MB. The precise compression is 10964160000/(93187000-11304000) = 10964160000/81883000 = 133.9 times. HOW THE HELL DO YOU GET SUCH COMPRESSION (134:1)? Even if the whole video were frozen for 10 seconds, it would still only be compressed 256/8=32 times. —Preceding unsigned comment added by 84.240.9.58 (talk) 13:17, 5 August 2010 (UTC)[reply]
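The arithmetic in the question, reproduced as a quick Python sketch (using the poster's own assumptions: 480x360, 15 bpp, 24 fps, about 23.5 minutes, and the stated file and audio sizes):

    width, height, bpp, fps, seconds = 480, 360, 15, 24, 23.5 * 60
    uncompressed_bits = width * height * bpp * fps * seconds      # ~8.77e10 bits
    file_bits = 93_187_000 * 8                                    # the ~88.8 MB .flv file
    audio_bits = 64_000 * (23 * 60 + 33)                          # 64 kbit/s mono track

    print(uncompressed_bits / file_bits)                  # ~118x overall
    print(uncompressed_bits / (file_bits - audio_bits))   # ~134x for the video stream alone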

That's nice, 84.240.9.58, but you could have done with just doing a little research before getting all theoretical. For example, you'll find, thanks to the magic of MPG encoding and such, yer typical DVD (which is pretty old hat these days, using outdated compression technology), at roughly 4000 thru 11000 kbit total data rate, will give you 720x480 resolution at 29.97 full frames per second (or 720x576 at 25.00), and will do so with a YUV-gamut colour precision of at LEAST 8 bits per channel on the input side (before it's all mashed down to a lossy encoded mess via variable-precision/quantiser cosine thingies), and more typically 9 or 10 as the more modern standard, with 11 being the actual limit which isn't supported by all players. Along with improving the ultimate colour precision, the required bitrate and encoding time goes up. Usually, so long as I'm not in a hurry and there's plenty of bandwidth to spare, I plump for 10 bits per channel to give the best possible output. If we need to economise on disk space without having too many obvious MPG artefacts, or a faster encode, then it may drop back to 9 or even 8. Note that the effective output will tend to be lower precision than the input, which is why using 10 could be of benefit even with 24-bit (or indeed 18-bit dithered/flickered) LCDs as the final display.
I'm not sure what the newer MPG4-based codecs (as used in FLV, Blu-Ray, DivX etc) use, but I would be highly surprised if it's less than 8 bits per channel, though it's still likely to encode as YUV; in the video and photo compression arena, it's very useful to be able to separate the brightness and colour-tone signals, as maintaining the quality of the former at the expense of the latter tends to give much better perceptual results for the same bandwidth. I'd suspect that any broadcasts or web feeds using MPG4 would default to 8bpc, but things like Blu-Rays are likely 10bpc or more. Again, at least on the input side. It may go rather higher - to at least 12bpc... otherwise, what is the point of HDMI connections offering 36-bit or greater colour depth?
In terms of audio, that's entirely up to who's encoding it and their preferences for sound quality and audio vs video. However, going back to the DVD example, that will almost certainly be, at the base level, 48Khz, stereo (aka 2.0), with 16-bit sample depth or the equivalent, with an encoding rate between 192 and 256kbit per stereo pair, for "MP2" (MPG1 layer 2; MP3 is MPG1 layer 3... pretty old stuff, both) or Dolby Digital, the latter being higher quality for the same rate but not often clocked down to, e.g. 128k, as the bitrate saving is just so minimal when you're splurging several thousand kbits on the video. However, if you use the optional LPCM encoding, that's basically CD audio running at 48kHz instead of 44.1 ... or in other words, 1536kbit... Which is why you don't often see it, and particularly not on Blu-Rays or in webcasts.
Going up from there, you can have, e.g. Dolby Digital 5.1 surround (6 channels, one relatively low quality compared to the others...), 48khz, 16 ~ 20 bits sample depth, at a fairly modest 448kbit considering it gives you all that audio in good quality. If you're crazy, you can even put that in a DivX "raw" (as AC3).
And of course, for web purposes, you can drag these numbers down. I would expect, for example, yer typical Youtube video is about 640x360 pixels at 25 to 30 frames/sec, 8 bit per channel colour depth (in YUV) at a few hundred kbits, with 44.1kHz stereo audio at around 96kbit AAC (aka MP4).
Is that any good as a real world answer to your uninformed but otherwise fairly intelligent mathematical musings? :) 193.63.174.211 (talk) 16:29, 8 February 2013 (UTC)[reply]

Note that most videos are stored not as RGB (red, green and blue) but as YCbCr (or "YUV"), where brightness and colour are split up. Usually various schemes such as YUY2 are used to reduce the resolution of the colour channels, because they're not as important as brightness for human perception.
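A minimal sketch of the horizontal chroma subsampling idea (the principle behind 4:2:2 schemes such as YUY2), assuming Python with NumPy:

    import numpy as np

    def subsample_422(y, cb, cr):
        # Keep every luma (Y) sample, but store only one Cb and one Cr
        # value per pair of horizontally adjacent pixels.
        return y, cb.reshape(-1, 2).mean(axis=1), cr.reshape(-1, 2).mean(axis=1)

    y = np.arange(8, dtype=float)          # 8 luma samples, kept at full resolution
    cb = np.linspace(100, 140, 8)
    cr = np.linspace(200, 160, 8)
    print(subsample_422(y, cb, cr))        # 8 Y values, but only 4 Cb and 4 Cr values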

To make the latter a bit more transparent: YUV and its derivatives have two main differences from RGB that matter in the digital domain (whereas the blue-luminance and red-luminance difference signals themselves only really matter in the realm of analogue signals, not in the digital domain):
  • a.) The maximum depth or resolution per color channel between pure black and pure white is not the full 0-255 range but 255 - 16 - 20 = 219 values, which works out to roughly 10.5 million colors all-in-all. The difference is invisible to the human eye as far as the number of unique colors goes, but it results in notably different gamma when RGB and YUV material is displayed on a monitor working in the other color space.
  • b.) Additionally, YUV and its derivatives use color subsampling not only in pixel resolution but also in color depth, where Y stands for brightness (roughly, the green-weighted luma) and is the one channel that actually samples all 219 values, while blue and red as U and V are color-subsampled (meaning they sample fewer values between pure black and pure white): the blue channel samples 88% (of the values in Y) = 193 different values between pure black and pure white, and red only 49% (of the values in Y) = 107 values, so the *ACTUAL* TV-signal color depth of YUV and its derivatives is 219 * 193 * 107 = roughly over 4.5 million colors. Again, the difference from RGB TrueColor is virtually invisible to the human eye as for the *NUMBER* of different colors (whereas the color subsampling in the number of *PIXELS* per color channel is *SOMEWHAT* noticeable, although it's only what they call a "psycho-visual" compression nowadays, based upon the strengths and weaknesses of the human eye, which can see colors far less well than brightness), but this latter fact requires *DIFFERENT* gamma correction per color channel in order to convert between YUV and RGB. The practical result of these different gamma values per color channel is that what looks clean and neutral on a YUV screen looks like it has a slight pink tint on an RGB screen if no proper YUV --> RGB conversion is applied. --2003:EF:13C6:EE96:4CF5:586C:5643:4170 (talk) 22:31, 16 September 2018 (UTC)[reply]
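For reference, a hedged sketch of a standard-definition (BT.601-style) limited-range YCbCr to full-range RGB conversion, the kind of "proper conversion" mentioned above; the coefficients are the commonly quoted approximations:

    def ycbcr_to_rgb(y, cb, cr):
        r = 1.164 * (y - 16) + 1.596 * (cr - 128)
        g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
        b = 1.164 * (y - 16) + 2.017 * (cb - 128)
        clamp = lambda v: max(0, min(255, round(v)))
        return clamp(r), clamp(g), clamp(b)

    print(ycbcr_to_rgb(16, 128, 128))    # (0, 0, 0): "video black"
    print(ycbcr_to_rgb(235, 128, 128))   # (255, 255, 255): "video white"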

Tetrachromacy

At 2011-02-06, the Television color section contained the sentence, “The Sharp Aquos, on the other hand, uses a proprietary color space known as Quattron, which operates on the principle of tetrachromacy and has a yellow subpixel in addition to the RGB, but not cyan or magenta. Using 8 bits per pixel, images would then result in a color depth of 32 bits.” I’m editing this to remove the reference to tetrachromacy; that reference is unmitigated nonsense. Please don’t put it back in unless you're fairly familiar with x-linked opsin genes. (I’m not right all the time, but I did write the book on color coding in video, Digital Video and HDTV Algorithms and Interfaces.) Cpoynton (talk) 22:23, 6 February 2011 (UTC)[reply]

Charles, thanks for noticing and fixing that nonsense. Dicklyon (talk) 22:42, 6 February 2011 (UTC)[reply]
I've replaced the nonsense (still present, but commented out) with what I hope is a more sensible discussion along the same lines. However, the nonsense text also had the problem of being wrong. The mention of tetrachromacy is mostly put in a footnote, where I've re-used the previous citation.
166.205.91.38 (talk) 10:08, 23 October 2021 (UTC)[reply]
I trimmed it back some. The subtractive and film bits seemed out of place, and were completely unsourced; show us some sources and maybe we can work those in more sensibly. Dicklyon (talk) 18:59, 23 October 2021 (UTC)[reply]

LCD section

I amended this section to the best of my ability. However, I think this section does not belong in this article. If there is consensus, relocate or remove it. — Preceding unsigned comment added by 99.246.101.166 (talk) 21:16, 18 July 2011 (UTC)[reply]


System loyalty and non-bit-exact "direct" encoding

Just wondering...

First, how come the HAM mode of the Amiga counts as indexed colour? It seems a bit like an attempt to wheedle it in as the "highest on the list", at 12 bits. Really, you have a 12-bit master palette (effectively direct colour), and you can get any colour from that palette so long as you start from one of SIXTEEN (i.e. 4-bit) indexed colours and then apply the appropriate one-channel-at-a-time direct-colour resetting. So from any of the indexed colours, you could arrive at any other colour the DAC could produce within a further three pixels (or, with a greyscale index, within two) by directly resetting the red, then the green (then the blue). Sounds somewhat like direct colour by the back door.
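A rough sketch of that decoding process in Python (from memory, so the exact control-bit assignments should be treated as an assumption): each 6-bit HAM pixel either picks one of 16 palette entries or keeps the previous pixel's colour with one 4-bit channel replaced.

    def decode_ham6_row(pixels, palette, start=(0, 0, 0)):
        out, prev = [], start
        for p in pixels:
            ctrl, val = (p >> 4) & 0x3, p & 0xF
            r, g, b = prev
            if ctrl == 0:
                r, g, b = palette[val]   # set from the 16-entry palette
            elif ctrl == 1:
                b = val                  # hold R and G, replace blue
            elif ctrl == 2:
                r = val                  # hold G and B, replace red
            else:
                g = val                  # hold R and B, replace green
            prev = (r, g, b)
            out.append(prev)
        return out

    # Starting from palette entry 0 (black), an arbitrary 12-bit colour is
    # reached within three further pixels, as described above.
    print(decode_ham6_row([0x00, 0x2A, 0x3C, 0x15], [(0, 0, 0)] * 16))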

Also, when the OCS was operating in 640-pixel-width mode, it could "only" display 4 bitplanes, i.e. 16 indexed colours. OK, this is somewhat better than what the ST managed, as the latter could only do that in 320-pixel mode, but it was still a limitation. And the 6-bit mode wasn't even explicitly indexed; half of the colours were just half-brightness copies of an actual 32-colour index. Methinks an Amiga fan's been at work here :-)

Besides that, one method used to get direct colour out of a 256-colour/8-bit index is to use a 6-levels-per-channel pseudo-index, which can easily be implemented in hardware without even needing a CLUT. Store the colour value as a number between 0 and 215 in decimal, as the result of adding (36xR) + (6xG) + (B), and then simply read it and feed it into the decoder. The overall fidelity is somewhat similar to 3-3-2-bit / 8-8-4-level bitwise chopping, but it is a bit more even-handed, and most especially gives you both more grey shades and something closer to a proper "50%" level. OK, it's mostly achieved by using a 256-colour "soft" CLUT with either 40 colours unused or set to custom choices, but there's no reason it can't be implemented as a hardware mode... 193.63.174.211 (talk) 17:27, 8 February 2013 (UTC)[reply]
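That scheme, sketched in Python (the encode/decode helpers below are just illustrations of the 36R + 6G + B arithmetic; no lookup table is involved):

    def encode_666(r, g, b):
        # r, g, b are full 0..255 values, quantised to 6 levels (0..5) each
        q = lambda v: round(v / 255 * 5)
        return 36 * q(r) + 6 * q(g) + q(b)       # a single byte in the range 0..215

    def decode_666(index):
        r, rem = divmod(index, 36)
        g, b = divmod(rem, 6)
        scale = lambda v: round(v * 255 / 5)     # levels 0, 51, 102, 153, 204, 255
        return scale(r), scale(g), scale(b)

    print(encode_666(255, 128, 0))   # 198
    print(decode_666(198))           # (255, 153, 0)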


Dead Pantone Link

The Pantone citation link is dead and redirects to the Pantone homepage. It's citation number 8.

nVidia 30-bit deep color support

support 30-bit deep color[17] and Pascal or later Geforce and Titan cards when paired with the Studio Driver[18]

The regular (i.e. non-Studio) drivers that followed soon after, v436.02, also support deep color. But somebody should double-check that. --Xerces8 (talk) 12:44, 25 August 2019 (UTC)[reply]

It works in RGB full mode, in Photoshop with windowed OpenGL. 2A00:1370:8184:2478:A122:DC44:8D4F:CD89 (talk) 11:48, 12 July 2022 (UTC)[reply]

True color (24-bit)

The image that claims to show 16M colors really shows 40,000 colors, and needs to be removed. The zoomed version is a fake. The downloadable version would be accurate, but could only be seen in its entirety on screens exceeding 4096 x 4096 resolution.

Another talk section above already points out this problem with a long (and correct) argument. — Preceding unsigned comment added by Domenico Strazzullo (talkcontribs) 15:54, 6 October 2021 (UTC)[reply]

When you run e.g. IrfanView → Image properties or GIMP → Colours → Info → Colourcube Analysis on that image, the result is 16,777,216 colours. Displaying all of them is another issue. But they are "physically" there.
See allrgb.com for many more images like that. --Avayak (talk) 05:29, 7 October 2021 (UTC)[reply]
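A sketch of the check Avayak describes, assuming Python with Pillow and NumPy, and assuming the downloaded file is saved as "all_rgb.png" (the filename is just a placeholder): count how many distinct 24-bit colours the file actually contains.

    import numpy as np
    from PIL import Image

    img = np.array(Image.open("all_rgb.png").convert("RGB")).astype(np.uint32)
    packed = (img[..., 0] << 16) | (img[..., 1] << 8) | img[..., 2]
    print(np.unique(packed).size)    # 16777216 if every 24-bit colour really is present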