Talk:NTSC


Cleanup tag

I asked User:Blaxthos if the article still warranted the cleanup tag, and the response was:

Thanks for the note. While the article has improved considerably in scope, I still think there are some problems with the article structure and tone. There's a lot of very technical information that should probably be confined to the Technical Details section, and a good portion of the article uses informal voice ("This is done by...", "You could think of it as...", etc.) The content on the whole is really good, it just needs some formalization.

--Wtshymanski (talk) 14:32, 26 January 2009 (UTC)

Vertical blanking interval: 19, 20, 21, or 22 lines?

The Lines and refresh rate section of the article states that the NTSC transmission is made up of 525 lines, of which the following are visible:

  • 21–263: 243 even-numbered scanlines
  • 283–525: 243 odd-numbered scanlines

for a total of 243 + 243 = 486 visible lines.

I'm guessing that the remaining lines are:

  • 1–20: a 20-line vertical blanking interval
  • 264–282: a 19-line vertical blanking interval

Yet the Vertical Interval Reference section says that lines 1–21 of each field are used for the vertical blanking interval, and that sometimes line 22 is used as well.

I think a clearer statement of the allocation of all 525 lines is needed. — Wdfarmer (talk) 04:35, 23 February 2009 (UTC)

525 lines per raster. 21 lines of blanking per field. 2 × 21 = 42. 525 − 42 = 483 visible scan lines (241 1/2 active lines per field). Line 19 is for a ghost-cancelling signal. Line 21 is for closed captioning. Line 22 was supposed to be for Teletext.
Is it really 241 1/2 active lines per field, or 241 in one and 242 in the other? Or, in other words, is the half line in the blanking interval or active interval? Gah4 (talk) 03:42, 15 May 2020 (UTC)
Well, including all lines (active and invisible) it's 525/2 = 262 1/2 lines per field. Keep in mind that the lines are not strictly horizontal, but are at a very slight slant where the line above (in the full frame) is one line height higher on the left than on the right. That's because the NTSC standard was always designed for the CRT-based video camera and the CRT-based TV receiver. For reasons of economy in TV receivers (and not part of the NTSC standard), a few visible lines at the top and bottom were never displayed (overscan). Nowhere in the FCC Transmission Standard is there a non-visible Line 22. (Although there may have been elsewhere in the FCC rules?) So given 21 lines of blanking per field, as per figures 6 and 7 in the FCC Transmission Standard, there are a total of 483 visible lines, one of which is split in half between top and bottom. The left half of the split line is at the bottom, with the split at the lowest part of the picture. The right half of the split line is at the top, with the split at the highest part of the picture. Since digital converters don't like half lines, it's likely that the split visible line is taken out, leaving 482 visible lines. Those 482 lines are then made perfectly horizontal by LCD screens, causing a slight skew when tube-based video cameras are matched to LCD receiving screens. This skew, however, is too small to be noticeable by humans. Ohgddfp (talk) 20:45, 18 May 2020 (UTC)
There is a recent edit changing the visible scan lines. I have found both 483 and 486 in sources, though maybe not WP:RS. Since this is from analog days, with overscan such that some lines are outside the visible part of the screen, it isn't so easy to give an exact number. There are lines that aren't part of the vertical sync, but are also not (supposed to be) visible. Gah4 (talk) 06:10, 6 November 2020 (UTC)
Gah4, about "it isn't so easy to give an exact number". Actually it is, for the following reasons. Since this article is about "NTSC", we know what NTSC is by definition, given by the National Television System Committee itself. For that reason, there can be only one reliable source of what NTSC is. And that is the FCC transmission standard that adopted the NTSC specifications, and enforced those regulations onto TV stations nationwide. Remember that NTSC is a transmission standard, not a receiver standard. While the NTSC standard was designed to help CRT-based TV receivers be the most cost-effective, such as by specifying receiver primary colors for a reference receiver, what TV receivers choose to do with the transmission standard is not part of NTSC. The FCC does not regulate TV receiver design in this regard. Therefore, one only needs to look at the transmission standard itself to see how many visible lines were actually required by FCC regulations to be transmitted over the air. CRT-based TV receiver designs regarding number of visible lines displayed vary from manufacturer to manufacturer. And totally unlike flat screens, CRT-based TV receiver age and condition also varies the number of lines visible to the viewer. So a given TV receiver design cannot change what is transmitted over the air. And what is transmitted over the air is specified by the NTSC, adopted and enforced by the FCC, and changed slightly in later years by the FCC. Ohgddfp (talk) 15:50, 6 November 2020 (UTC)
Yes, it is the "changed slightly in later years" that makes it interesting. I did try to find it in fcc.gov, but so far didn't find one. But as for receivers, more than design or age, there is a knob that sets it. I even used to know how to do it when I had analog TV sets. Even more, the frame wasn't quite a rectangle but usually had more curvy sides to it. Also, the screen itself wasn't flat, especially on larger screens, as the glass has to hold off atmospheric pressure. I think it isn't quite true that the FCC doesn't regulate receiver design, though; maybe just not as strictly. For one, transmitters are required to use IQ, with more bandwidth in I than Q, but receivers are specifically allowed not to decode it that way. Only near the end did anyone actually do it right. Gah4 (talk) 18:13, 6 November 2020 (UTC)
The FCC had some mandates to include channels 14–82 in TV receivers. However, nothing else was on the books for TV receiver design. Early TVs like the RCA CT-100 (1954) and some Arvin models (also 1954) used I/Q demodulation with I as the wider bandwidth channel. In 1985, RCA made a "Colortrack" premium model with I/Q demodulation and, using a comb filter, the full 340 lines of horizontal luminance resolution, the equivalent resolution of 453 pixel columns by 483 pixel rows, using rectangular-shaped pixels. Some ICs had provisions for I/Q demodulation. As far as vertical blanking is concerned, the reliable source is as follows: Code of Federal Regulations Title 47 part 73.682 (a)(24)(i): "The active video portion of the visual signal begins with line 22 ...". That means line 22 for each field, given that elsewhere in the code, line 21 is available for digital code on both fields. So vertical blanking totals 42 lines per frame, out of a total of 525 lines. This is 483 active lines. For CRT-based (pickup tube based) video cameras, line 262 1/2 is split, with the left side at the bottom and the right-hand side at the top. So you are right. The edit that someone did gives 486, which is the wrong value. Ohgddfp (talk) 22:44, 6 November 2020 (UTC)
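
[NOTE: A worked version of the line-budget arithmetic in this thread may help; this is a minimal sketch in Python, assuming the 21 blanking lines per field cited above from 47 CFR 73.682.]

    # Line budget per the figures cited above (assumption: 21 blanking
    # lines per field, per the FCC transmission standard as quoted).
    TOTAL_LINES = 525                 # lines per frame, two interlaced fields
    BLANKING_PER_FIELD = 21           # vertical blanking interval, each field
    blanking_per_frame = 2 * BLANKING_PER_FIELD            # 42
    active_per_frame = TOTAL_LINES - blanking_per_frame    # 483
    active_per_field = active_per_frame / 2                # 241.5 (the half line)
    print(blanking_per_frame, active_per_frame, active_per_field)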

Former Broadcast Engineer here. You guys are completely wrong about the vertical blanking interval. The VBI is the time it takes for the electron gun to move from the bottom right of the screen to the top left after it's done "drawing" the last line. It's the amount of time it takes for the scanning gun to physically move. There's a Wikipedia page about it that gets it right: https://en.wikipedia.org/wiki/Vertical_blanking_interval There have been times when data has been inserted in the vertical blanking interval, and the horizontal blanking interval has calibration bursts (front porch, back porch, and breezeway). I think for a little while, AOL tried sticking some internet-like data in the interval, but it was short-lived.

The complication is that it is an analog system. It doesn't have to be an exact number, like it would be for a digital system. Especially if you consider the length of time for the vertical retrace, again generated by an analog oscillator. Also, I was noting above where it says that the scan lines move down slightly. That would be true if the vertical and horizontal deflection coils were exactly 90 degrees apart, but most likely they are not. A tiny rotation, and the scan lines go straight across. Gah4 (talk) 05:51, 5 August 2022 (UTC)
About "It doesn't have to be an exact number, like it would be for a digital system." For the older analog receiver, this is true. But some later analog receivers (Sylvania one of them) used digital count circuitry ICs to get the vertical blanking interval exactly right. For analog broadcast--I was a broadcast chief engineer--the equipment used for vbi was digital, and as a result, the vbi of the analog signal over the air was indeed exactly right. Ohgddfp (talk) 15:39, 6 January 2024 (UTC)[reply]

NTSC / Television - Duplicate Effort

There is an article, Television, that has almost the same information. Why this duplication of effort? --Ohgddfp (talk) 23:28, 18 March 2009 (UTC)

Because it doesn't have almost the same information? --Wtshymanski (talk) 02:41, 19 March 2009 (UTC)

Well, I guess what I am saying is that there seems to be no rhyme or reason why some concepts are in one article and some are in the other. --Ohgddfp (talk) 16:21, 20 March 2009 (UTC)

NTSC was the name for the group doing the original US standard, but is mostly known for the US-originated color standard from around 1954. Parts specifically related to that color TV standard should go here, and there should also be PAL and SECAM articles for those color TV standards. The television article should have the basics for color TV, leaving the fine details to the specific articles. Gah4 (talk) 16:01, 12 May 2017 (UTC)

Knowledge Tree in the style of an Index

[NOTE: I use brackets for comments in order to differentiate those comments from suggested material.] [NOTE: My wording is not so good, and so I can use help at some point to make it much better.]

Video

[Use a common dictionary definition here] (See Video Signal)

Video Signal

A closed-circuit signal, which is a signal that is either inside a cable or inside an instrument such as a TV transmitter or TV receiver, that carries a single channel of changing graphical information such as a motion picture or radar image. This information is used ultimately for reconstructing the motion picture or changing graphic onto a viewing screen. The video signal supplies the information only in real time, as needed to update the viewing screen.
For motion picture television-like applications, the moving picture consists of a series of still pictures (video frames) displayed one after the other in rapid succession at the rate of approximately 10 to 100 frames per second, giving the illusion of motion in the same manner as with motion picture film. The information of a given still picture (video frame) is delivered by the video signal only a short time before the viewing screen device uses that information to reconstruct that given still picture. This just-in-time frame-by-frame delivery of the information is called streaming. For analog broadcast TV, the delivery of a given frame is milliseconds before it's displayed. For digital signals that are using digital image compression, pieces of a frame may be conveyed at slightly different times, requiring reassembly at the receiving end. This causes a delay of up to approximately half a second between the time all the pieces of a particular frame are sent and the time it is displayed onto the viewing screen. This delay in processing of the digital video information sometimes causes lip-sync problems. The frame rate for broadcast analog low-power TV in the United States is approximately 30 frames per second, while the frame rate for digital broadcast TV varies from about 23.97 frames per second to 60 frames per second.

Video Frame

A video frame is a still picture that is one of a continuous series of still pictures displayed in rapid succession to give the illusion of motion. The information of a given still picture is ultimately used for reconstructing that picture onto a viewing screen.
Since a 2-dimensional still picture is really only variations of light over a flat surface, the picture can be electronically captured by a lens focusing an image onto the flat surface of an electronic image sensor that is covered with a sufficient number of light detectors (pixels) that together record the variations of light over that surface. The more light detectors there are, the closer the detectors are to each other, and the closer together neighboring light values can be while differences in their light values can still be resolved. This in turn allows the capture of details that are more finely (highly) defined. A video frame in a high definition (high resolution) system requires at least 1024 columns and 768 rows of pixels.
Monochrome (Black and White) Systems
Each light detector, called a picture element (pixel), converts the amount of light falling on it into a quantity of electricity. In this way, the information carried by the electrical quantity associated with a given pixel is the quantity of light falling onto that pixel.
A viewing screen consisting of many rows and columns of light sources, such as light bulbs or light emitting diodes, can reconstruct the image by connecting the individually amplified electrical values of each sensor pixel to the correspondingly located display pixel. The brightness of the display pixel is determined by the quantity of electricity applied to that pixel, which in turn was determined by the amount of light falling onto the correspondingly located image sensor pixel. So the variations of light are reproduced pixel by pixel over the entire surface.
The amplified electrical value of each pixel may be made available by switching the values onto a single wire, one pixel at a time. Alternately, the pixel values may be digitized and written to memory for later readout into a digital-to-analog converter, the output of which is also the amplified electrical value of each pixel. The reconstruction onto a viewing screen can be much more economically carried out by transferring the amplified electrical value of each individual sensor pixel one at a time over a single wire that is connected to the display. In the display, each electrical quantity is electronically switched to the associated display pixel. Generally, the sensor pixel values are transferred row by row starting with the top row, and within each row, column by column. So the pixel values are read out in time order, like reading words on a page, left to right, starting at the top. For 1024 x 768 digital television, more than 700 thousand pixel values must be transferred for each video frame. With 60 video frames per second, that equates to more than 47 million light values transferred each second for a black and white television program.
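
[NOTE: The transfer-rate figures in the paragraph above can be checked with a short sketch; the 1024 x 768 and 60 frames/second numbers come from the text itself.]

    # Pixel-rate arithmetic for the example system described above.
    columns, rows = 1024, 768
    pixels_per_frame = columns * rows          # 786,432: "more than 700 thousand"
    frames_per_second = 60
    values_per_second = pixels_per_frame * frames_per_second
    print(values_per_second)                   # 47,185,920: "more than 47 million"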

--Ohgddfp (talk) 16:14, 19 March 2009 (UTC)

This has got to be fixed. Serious questions about what is presently in the article

Article Section: Transmission modulation scheme

About "The highest 25 kHz of each channel contains the audio signal, which is frequency-modulated, making it compatible with the audio signals broadcast by FM radio stations in the 88–108 MHz band." I'm specifically referring to the later part of the sentence: "making it compatible with the audio signals broadcast by FM radio stations in the 88–108 MHz band." --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)[reply]

What is the point of making TV audio compatible with FM stations? The sentence seems to imply that this is some kind of a benefit. What's the benefit? If there is no benefit, why even mention it? I'm sure that in some countries, the TV audio is AM, yet I'll bet the same country also has FM stations. See talk section: "250kHz Guard Band" ... Where it from? --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)
When I was a child I had a cheap multifunction radio that could receive the FM audio signals. So I could listen to broadcast television shows. Many FM radios had the ability to do that, simply by extending the frequency range at one end of the FM radio dial. I believe the feature was most often used to listen to sports broadcasts. It was a huge benefit to radio manufacturers; it was like adding an app to a radio, in a world without apps. Sports fans were already a big demographic for radio sales, because many sporting events were only broadcast on radio. MacroMyco (talk) 18:16, 10 April 2019 (UTC)

About "The highest 25 kHz of each channel contains the audio signal ..." --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)[reply]

This directly contradicts the graphic, which shows not 25 kHz, but more than 500 kHz containing the audio signal. Look at the part colored brown. See talk section: "250kHz Guard Band" ... Where it from? --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)

About "A guard band, which does not carry any signals, occupies the lowest 250 kHz of the channel to avoid interference between the video signal of one channel and the audio signals of the next channel down" --Ohgddfp (talk) 16:14, 20 March 2009 (UTC)[reply]

What is the reference for this graphic? The FCC adopted NTSC's recommendations. Is this taken from an original NTSC document or from the U.S. Code of Federal Regulations, which legally govern these things? See talk section: "250kHz Guard Band" ... Where it from? --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)
The guard band info is bogus. I Googled it and found the apparent source: a page that shows it and cites no reference.
I just spent $51 USD of Comcast's money purchasing ITU-R BT.1700 and 1701-1 in order to debug an NTSC problem on one of our cable plants. The spec says nothing about a guard band. I've been studying and working with NTSC for about 40 years and have never heard of a guard band. What the spec does say is that the LSB aka VSB must extend no further than 750 kHz below the luminance carrier at full power, then must start to roll off, to a maximum power of 20 dB below the luminance carrier at the lower channel edge. It keeps rolling off to a maximum power of 42 dB below the luminance carrier at 3.5 MHz below the luminance carrier, which is way into the channel below. -- Liberator 10 (talk) 22:15, 14 May 2012 (UTC)
As well as I know, when broadcasting began, they didn't expect adjacent channels to be broadcast together. They spaced them out across cities. There is also a space between channels 4 and 5 such that they aren't actually adjacent. So, cable TV, using all the channels, was unusual. But then again, with nearly equal amplitude, it isn't so bad. Gah4 (talk) 20:14, 10 April 2019 (UTC)

About "The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel." --Ohgddfp (talk) 16:14, 20 March 2009 (UTC)[reply]

This wording is really clumsy. Without the graphic, it's completely unintelligible. Better to just take it out. --Ohgddfp (talk) 16:14, 20 March 2009 (UTC)

"The Cvbs (Composite vertical blanking signal) (sometimes called "setup") is a voltage offset between the "black" and "blanking" levels. Cvbs is unique to NTSC. Cvbs has the advantage of making NTSC video more easily separated from its primary sync signals. The disadvantage is that Cvbs results in a smaller dynamic range when compared with PAL or SECAM."

This part of the present article is completely wrong. See talk section: "CVBS Error" on this page. --Ohgddfp (talk) 16:14, 20 March 2009 (UTC)

About: "A guard band, which does not carry any signals, occupies the lowest 250 kHz of the channel to avoid interference between the video signal of one channel and the audio signals of the next channel down" --Ohgddfp (talk) 16:14, 20 March 2009 (UTC)[reply]

Which version of NTSC mentions a 250 kHz guard band? I cannot find it in the only document that local broadcast stations are legally required to follow. That is "Title 47 (FCC) Part 73". See talk section: "250kHz Guard Band" ... Where it from? --Ohgddfp (talk) 15:46, 20 March 2009 (UTC)
See my comment above about the guard band. It doesn't exist. The whole concept is bogus. There is a specified rolloff rate of the LSB aka VSB, but no guard band. -- Liberator 10 (talk) 22:18, 14 May 2012 (UTC)

Article Section: History

About "In December 1953, it unanimously approved what is now called the NTSC color television standard (later defined as RS-170a).". But RS-170a only refines the timing specifications. It is not a redefinition of NTSC. See Talk section: History. --Ohgddfp (talk) 15:32, 21 March 2009 (UTC)[reply]

Since TV audio quality was pretty low in the early years, it is not so obvious why they went to FM. I suspect, though, that the constant amplitude makes sense for a few reasons. In any case, they knew how to make broadcast FM at the time, so it was likely easy to use a similar system. (Except 25 kHz instead of 75 kHz deviation.) Analog FM radios will usually tune down to channel 6 audio (just below the FM band). That is interesting if you live near a channel 6 TV station. Gah4 (talk) 20:11, 10 April 2019 (UTC)

History

About "Color information was added to the black-and-white image by adding a color subcarrier of 4.5 × 455/572 MHz (approximately 3.58 MHz) to the video signal." --Ohgddfp (talk) 15:04, 21 March 2009 (UTC)[reply]

Well, the wording is confusing. I would say something like this (with even better wording than mine): --Ohgddfp (talk) 15:04, 21 March 2009 (UTC)
"Color information was added to the black-and-white image by modulating a 3.58 MHz color subcarrier with color-difference (colorizing) information, and then adding the result to the black and white video signal to produce a composite video signal. --Ohgddfp (talk) 15:04, 21 March 2009 (UTC)[reply]

About "In December 1953, it unanimously approved what is now called the NTSC color television standard (later defined as RS-170a)." --Ohgddfp (talk) 15:29, 21 March 2009 (UTC)[reply]

I'm talking here about the portion from the above that says, "... later defined as RS-170a ...". RS-170a is a proposed standard that was never officially adopted by any of the private standards bodies. The timing portion of this standard did, however, become industry practice. --Ohgddfp (talk) 14:39, 25 March 2009 (UTC)

The timing portion of RS-170a does not contradict FCC regulations. Rather, it refines them so that subcarrier-to-horizontal phase is maintained to a particular standard, so that videotape editing of composite NTSC does not suffer from "H-SHIFTS", where the entire picture jumps horizontally by maybe only a sixteenth of an inch at the edit point. When making match cuts, this was very annoying. RS-170a solved this whenever it was correctly put into practice inside the studio. A side-effect is that the resulting transmitted signal usually conformed to the timing portion of RS-170a as well, which is okay because it does not contradict federal regulations. --Ohgddfp (talk) 14:39, 25 March 2009 (UTC)

But some other parts of RS-170a may be contrary to federal regulations. I haven't seen the entire standard, but I will be getting it through library loans to see its general coverage. --Ohgddfp (talk) 14:39, 25 March 2009 (UTC)
So I would just take out the "(later defined as RS-170a)". In its place I would get a definitive reference for broadcast grade picture monitors, broadcast grade studio camera manufacturers, or broadcast grade telecine film chains that sell to the TV networks, showing that one or more of them conformed either entirely to SMPTE-170M, entirely to RS-170A, or to some other spec that is contrary to federal regulations (FCC rules). Broadcast grade was a signal from manufacturers to potential customers of more than just some vague picture quality performance level. Broadcast grade was, in my experience working as a video engineer in facilities using network grade equipment, also a promise from the manufacturers that their product does not violate federal regulations (FCC rules). And I'm talking here about the federal regulations (FCC rules) that were written by the NTSC. The first broadcast grade studio equipment sold to the networks that contradicts FCC rules would indeed signal a switch in industry practice to evading federal law. Another reliable signal of a switch would be broadcast grade picture monitors that dropped the matrix switch, with the circuitry operating the same as with the matrix switch in the off position. My feeling at this time is that color space conversion from FCC primaries to actual physical screen primaries was always available in broadcast grade picture monitors. Indeed, I purchased an Ikegami broadcast grade studio picture monitor in 1990. It cost five thousand 1990 dollars, had full I/Q demodulation, used SMPTE-C phosphors, and had the standard matrix switch the same as used on the Conrac monitors. While at the NAB in 1990, I looked at many camera demos. All the monitors used SMPTE-C phosphors with their matrix switches turned on, meaning the camera signal expected FCC phosphors (FCC primary colors). So we need reliable references of an actual change in industry practice toward lawlessness. Standards bodies may adopt standards, but that's not the same as industry following them in their entirety. We should only point out the camera and monitor specifications along with any contradictions to engineering specs authored by the NTSC. My own feeling is that the industry was indeed law abiding, but I guess I could be wrong at some point. Let's see the actual references of a shift in industry practice. Remember also that signals not bound for a local TV transmitter have no legal requirement to conform to ANY NTSC variant, FCC or otherwise, and so there was a market of cheaper TV equipment ("consumer", "semi-pro", "industrial") that made no pretenses of following ANYTHING, except to be "NTSC compatible". And this is where a lot of confusion is coming from. So we will leave it up to the readers themselves to decide if people should have gone to jail. --Ohgddfp (talk) 14:39, 25 March 2009 (UTC)

Cite one instance of a person going to jail for a technical violation of FCC rules. A licensee would typically be fined. If the violation is particularly grievous and remains uncorrected for an extended period of time, the license itself might be revoked, but no jail time. — Preceding unsigned comment added by Chris319 (talk · contribs) 15:23, 12 May 2017 (UTC)

Of course no one ever went to jail. That was pure hyperbole. It was supposed to be a funny way to remind the talk page reader that the context was legal considerations. Ohgddfp (talk) 21:02, 18 May 2020 (UTC)
When I was young, and had a subscription to Popular Electronics, I remember a cartoon series based on FCC regulations and amateur radio, which might have included jail time. The only one I can now remember (maybe 50 years later) is someone grounding their transmitter to a water faucet hanging out the window. (That is, not in the ground or attached to anything in the ground.) I suspect that if someone died due to a mistake, that jail could be considered. Gah4 (talk) 15:57, 12 May 2017 (UTC)

I could not find any other factual errors in the History section. --Ohgddfp (talk) 15:29, 21 March 2009 (UTC)

Backwards compatibility

Does anyone know how the backwards compatibility with black-and-white TVs works exactly? I've googled around but can't find a clear answer. Is it just because the older TV sets' hardware naturally acted like a low-pass filter and ignored the high-frequency chroma signal? If someone has a clear/simple answer to this, please add it to the article... —Preceding unsigned comment added by 69.247.172.84 (talk) 03:51, 10 July 2009 (UTC)

It is not simply because older TVs acted like a low-pass filter and ignored the high-frequency chroma signal. It was actually some of the newer sets that did erase most of the chroma using low-pass filtering, but at the expense of reducing fine image detail. So the reality is that backward compatibility is due to reasons that are much more complex, as explained below. Ohgddfp (talk) 21:54, 20 September 2012 (UTC) Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
All TV receivers low-pass filter the signal to minimize ringing (multiple ghosts on vertical edges), consistent with maintaining maximum fine picture detail. Additional (excessive) low-pass filtering, utilized to reduce receiver cost, improves compatibility slightly in one way, but only at the expense of reducing fine picture details. But some monochrome TVs do not have additional low-pass filtering, yet the compatibility is still excellent. And because of tighter specifications in the color broadcast signal, the monochrome pictures are even better with the color signal, even on those older sets that had no additional low-pass filtering. This means that other NTSC features working in a much more complex manner are at work to provide backwards compatibility. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
A little background information: In a color TV broadcast signal, the color composite video signal is carried by an RF carrier wave. Inside both color and monochrome receivers, the color composite video, with a frequency spectrum from zero to 4.25 MHz, is recovered from the RF wave, and is available on a single wire inside the receiver. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
The color composite video signal carries 3 kinds of information to convey every color in the scene. In the most general terms, this is Luminance, Hue and Saturation. Luminance conveys the lightness or darkness of a given color. Hue conveys which color, such as red, orange, green, purple, etc. Saturation conveys the strength of the color, from zero (gray, white), to weak (pale, pastel), to strong (vivid, electric, deep, rich, fluorescent). Since Luminance conveys the lightness or darkness of a given color, it's ideal for monochrome receivers. Indeed, the old monochrome-only TV signal attempted to convey luminance for that reason. So for backward compatibility, the composite color signal uses the old monochrome signal as its base. So to a monochrome receiver, the new composite color signal looks the same as the old monochrome-only signal. The composite color signal also contains "coloring" (Saturation and Hue) information, which the monochrome receiver interprets as minor interference that is almost invisible to the viewer, and is displayed in black and white. Ohgddfp (talk) 20:06, 22 September 2012 (UTC)
Following is an explanation of the complicated reasons why this above-mentioned "minor interference" is so difficult for the human eye to see. To convey the above-mentioned Hue and Saturation, a chroma signal is added to the old monochrome signal to make the new color NTSC signal, otherwise known as the color composite video signal. Ohgddfp (talk) 20:23, 22 September 2012 (UTC)
At first glance, Luminance, carried by both the old monochrome signal and the new composite color signal, seems to use the entire zero to 4.25 MHz frequency spectrum, so there is no apparent spectrum space available for combining the old monochrome-only signal (which carries Luminance) with the chroma (which carries Hue and Saturation) to make the composite color signal. Ohgddfp (talk) 20:23, 22 September 2012 (UTC)
But spectrum space actually is available. Some more background information is needed to understand why. Chroma is created by utilizing a 3.58 MHz subcarrier that is near the high end of the Luminance frequency spectrum. This subcarrier is modulated by the Hue and Saturation information to create sidebands. The subcarrier itself is then deleted, leaving the sidebands that carry all the Hue and Saturation information. These sidebands alone comprise the chroma signal. But the Chroma sidebands extend from 2 to 4.1 MHz, apparently overlapping the Luminance signal in that frequency range, and therefore causing crosstalk interference between Luminance and Chroma on the monochrome picture tube screen. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
The first method to improve backwards compatibility is to reduce this crosstalk by reducing the bandwidth of the Hue and Saturation information, since the eye is not sensitive to fine details in Hue and Saturation. This, combined with the choice of subcarrier frequency near the high end of the Luminance frequency spectrum, restricts the frequency spectrum of the chroma sidebands from 2 to 4.1 MHz, also at the higher end of the Luminance frequency spectrum. The high relative frequency of chroma causes the interference to be a finely detailed pattern in black and white, making chroma more difficult to see in monochrome TV receivers. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
A second method is to take advantage of most objects not having much motion in most scenes, resulting in the Luminance frequency spectrum consisting mostly of harmonics of 30 Hz. The chroma subcarrier frequency is chosen to be an odd multiple of 15 Hz, causing frequency interleaving, a major feature of color NTSC. This means that most of the time, 2.000010 MHz is a luminance harmonic, 2.000025 MHz is a chroma sideband, 2.000040 MHz is a luminance harmonic, 2.000055 MHz is a chroma sideband, and so forth. This alternation occurs from about 2 MHz to approximately 4.1 MHz. So in this way, it is seen on close inspection that Luminance and Chroma do not actually occupy exactly the same parts of the frequency spectrum most of the time. The chroma, already a finely detailed pattern, is seen, due to frequency interleaving, to reverse phase from one video frame to the next. The eye tends to integrate two successive video frames to average out the chroma on a monochrome screen, making the chroma even less visible, thereby improving backward compatibility. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
A further feature of NTSC frequency interleaving is that the chroma in time-successive scan lines also alternates phase, making the chroma pattern appear on a monochrome screen as a diagonal crosshatch pattern of dots instead of more visible vertical lines. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)

By the way, these are both the same feature, considered in the frequency domain or time domain. Gah4 (talk) 16:14, 12 May 2017 (UTC)
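
[NOTE: The interleaving described in the comments above can be shown numerically; this is a minimal sketch using the idealized monochrome-era line rate of 15,750 Hz (an assumption for simplicity) rather than the exact color values.]

    # Chroma sidebands fall halfway between luminance energy clusters.
    f_line = 15750.0                    # idealized line rate, Hz
    f_sc = 227.5 * f_line               # an odd half-multiple of the line rate
    for k in range(-2, 3):              # a few chroma sidebands around f_sc
        chroma = f_sc + k * f_line
        nearest_luma = round(chroma / f_line) * f_line
        print(chroma, abs(chroma - nearest_luma))   # offset is always f_line/2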

Another NTSC feature to reduce chroma visibility is that the chroma signal subcarrier itself is deleted. This means that only strongly colored areas of the scene have strong chroma (due to chroma sidebands), and such strong chroma is co-located with the strong color to prevent any mildly visible chroma from confusing perception of objects in the scene. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
So for backward compatibility, the chroma visibility is greatly diminished on the monochrome receiver by using a relatively high subcarrier frequency along with narrow-band Hue and Saturation information, coupled with frequency interleaving, and the removal of the chroma subcarrier itself, leaving only the sidebands. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
However, the effectiveness in reducing the visibility of chroma on a monochrome screen is not always perfect when viewed too close to the screen. On such close viewing, some people, especially trained observers, can see the chroma, which appears to be a finely detailed screen-wire or dot pattern moving slowly upwards (crawling upwards) only in those parts of the scene that are strongly colored (high saturation). As a result, chroma, when visible on a monochrome screen, is called "chroma crawl" interference. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
Since the luminance signal is needed to define the lightness or darkness of a given color, color TV receivers need the same luminance information as do monochrome receivers, and therefore color receivers can also be subject to chroma crosstalk into the luminance, with the potential of creating the same kind of interference pattern as on a monochrome screen. The same NTSC features therefore reduce the visibility of chroma crawl on color receivers as well. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
However, the high amplitude of chroma on strong colors has the potential of reducing displayed color saturation due to the increased luminance resulting from the non-linearity of the picture tube. The chroma in the broad area of a strongly colored part of the scene must therefore be reduced further in amplitude inside the color TV receiver. The best way to accomplish this is to use a sophisticated filter that can be applied to the composite color video to "unmix" the chroma from the luminance. The best of such filters is a 3-D motion-compensated comb filter, which takes advantage of NTSC's frequency interleaving to separate the two signals. This keeps chroma out of the luminance while maintaining full bandwidth luminance to 4.25 MHz. In this way, the eye no longer has to play a major role in reducing the visibility of chrominance. The chroma, now separated from the composite color video, is further processed to provide the "coloring" (Hue and Saturation) to the black and white picture, yielding the full color image. Ohgddfp (talk) 20:06, 22 September 2012 (UTC)
Ideally, monochrome receivers should also use such a comb filter, but none ever did due to the fact that monochrome receivers are supposed to be relatively inexpensive. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
Older color TV receivers made from 1954 to 1969, and newer "low-end" color TV receivers, get rid of chroma from the luminance by using a notch filter set to 3.58 MHz. In a given highly colored object in the scene, this nearly obliterates the chroma from the luminance in the interior region of the object, but does not remove it from the left and right borders of that object. The end result, since the Saturation and Hue do not have sharp left-to-right transitions anyway, is that the color saturation of the object is maintained throughout the object interior. This method was relatively inexpensive, and degraded luminance video rise and fall times only by a relatively small amount. This is important because all human perception of finely (highly) defined small picture details comes only from the luminance information. The end result was still high quality pictures. However, areas with sharp and extreme left-to-right color transitions across the borders of strongly colored objects did exhibit noticeable chroma crawl on those borders when viewed up close. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
More below on frequency interleaving. Ohgddfp (talk) 04:26, 21 September 2012 (UTC)
The color subcarrier is an odd half-multiple of the line rate and frame rate. That means that the dot pattern cancels between successive lines and successive frames. If you look at it in spectral space, it is like a comb, with the video interleaving the subcarrier, as the video signal tends to have components that are multiples of the line rate, while the color subcarrier is at odd half-multiples of the line rate. While some may have a low bandwidth video amplifier, there is no reason that should be true. Gah4 (talk) 04:27, 19 November 2010 (UTC)
The signal looks to the black and white receiver just like a black and white signal. The colour subcarrier will be resolved as video information if the receiver has the bandwidth to process it. The result is that the colour subcarrier will appear on the picture as a fine dot pattern. However the amplitude of the colour signal is such that the pattern will only really be noticeable if you are too close to the set. Most viewers just won't notice it. In Europe, the colour subcarrier was deliberately chosen to be outside of the specified video bandwidth and so should be invisible, but I believe this was not the case in the US. 20.133.0.13 (talk) 09:05, 21 September 2009 (UTC)
I split System M into its own article a while ago, but I don't know if it helps. It's actually more a spin-off from this article; I created it because the current NTSC article really doesn't make much of a distinction, even though there's PAL-M also. --Closeapple (talk) 08:10, 22 September 2009 (UTC)
Ohgddfp claims that the NTSC standard deletes the chroma carrier, leaving only the sidebands. If this is true, how is an NTSC receiver supposed to show a large solid colored area instead of just colored edges? — Preceding unsigned comment added by 76.202.253.136 (talk) 23:17, 28 October 2012 (UTC)
As President Nixon used to say, "I am glad you asked THAT question". I will speak about the "active portion" of the video signal, which does not include blanking, sync, or the subcarrier burst, which is found only within the blanking portion. To answer the question directly, a solid color filling more than half the screen will always show a frequency at exactly the subcarrier frequency on the spectrum analyzer. But even though this is the exact same frequency as the subcarrier, it's still technically a subcarrier sideband and not simply the subcarrier. Let me explain. Chroma, which is the shortened nickname for the subcarrier sidebands, carries I/Q information (for rendering saturation and hue), while the subcarrier itself carries no information at all. Hence, while the subcarrier sideband content changes with scene content, the amplitude of the subcarrier itself is always constant. With NTSC, the subcarrier has a constant value of zero. In other words, NTSC deletes the subcarrier before transmission, but allows a subcarrier sideband at the exact same frequency as the subcarrier to come through. But the phase of the subcarrier sideband varies with scene content, and the amplitude of this "sideband" can also go to zero on some colored scenes. This is why it becomes necessary to transmit a "burst" of subcarrier, so that the receiver can reconstruct the subcarrier. The reconstructed subcarrier never appears in the video, but instead is used to recover the original I/Q signals from the subcarrier sidebands. It's the I/Q signals that add color to the monochrome image.
What the questioner is alluding to is one possible subcarrier "sideband", which carries the DC component of the I/Q signal. It just so happens that this "sideband" frequency is the exact same frequency as the subcarrier itself, and therefore shows up on a spectrum analyzer at the subcarrier frequency. But it's only a subcarrier sideband, because its amplitude varies with changing scene content. On some scenes, the I/Q signals might have no DC component, and the spectrum analyzer will respond by showing no energy at the subcarrier frequency. (When looking at this phenomenon with a spectrum analyzer, remember to electronically key out the blanking, leaving only the active video. Otherwise the burst, which has most of its energy at the subcarrier frequency, will show up on the analyzer, confusing the issue.)
Occasionally, some scenes have no DC component of I/Q at all, and therefore the subcarrier frequency is completely gone during the televising of that scene. Here is an example: There is a large solid object with a +Q color (a violet-like hue), and in the same scene, another identical object with a -Q color (a yellowish green hue). Both colors happen to have the same signal amplitude, just opposite polarity of the signal. These two colors are also complementary colors. So in this example, the subcarrier frequency is completely gone. So the subcarrier itself, which is always zero, does not change according to scene content, while a subcarrier "sideband" indeed does change according to scene content.
This approach of eliminating the subcarrier reduces the average amplitude at the subcarrier frequency, reducing the possibility of visible interference between chroma and luminance; what interference remains is visible only when viewed close to the screen, and only within objects that are strongly colored. If the subcarrier were to be transmitted, an interference pattern would almost always cover the entire screen, even when a given scene has no color at all. Ohgddfp (talk) 14:30, 15 August 2013 (UTC)
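
[NOTE: A minimal numerical sketch of the suppressed-carrier behavior described above, using NumPy. The sample rate of four times the subcarrier is an assumption chosen for convenience, not something stated on this page.]

    import numpy as np
    f_sc = 3_579_545.45                  # color subcarrier, Hz
    fs = 4 * f_sc                        # assumed sample rate (4 x subcarrier)
    n = 4096
    t = np.arange(n) / fs
    def chroma(i, q):                    # suppressed-carrier quadrature modulation
        return i * np.cos(2*np.pi*f_sc*t) + q * np.sin(2*np.pi*f_sc*t)
    solid_color = chroma(0.3 * np.ones(n), 0.1 * np.ones(n))   # DC in I/Q
    gray_scene = chroma(np.zeros(n), np.zeros(n))              # no chroma at all
    bin_sc = n // 4                      # FFT bin at exactly f_sc
    spectrum = lambda x: np.abs(np.fft.rfft(x))
    # Energy appears at f_sc only as a "sideband" of the DC in the I/Q
    # signals; the carrier itself contributes nothing (zero for gray).
    print(spectrum(solid_color)[bin_sc], spectrum(gray_scene)[bin_sc])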

Audio subcarrier? Looks wrong.

In the Color Encoding section, the last paragraph uses the term "audio subcarrier" several times. The audio has its own carrier, but intercarrier audio (which just about every NTSC receiver had) uses one common IF amplifier chain for both video and audio (at least, it did in tube days). Audio is recovered at 4.5 MHz from the demodulated video, iirc, so, in a sense, at the demodulation stage, it's akin to a subcarrier; however, this paragraph needs rewriting slightly, I think. I didn't want to change it, because I'm not totally sure about what I believe to be true.

{For some reason, previewing this text as originally typed rendered it on one quite-long line, requiring scrolling to read it. I added line breaks.}

Regards, Nikevich (talk) 07:38, 24 November 2009 (UTC)

About "audio subcarrier". If, in transmission, you modulate the visual carrier with a frequency modulated 4.5 MHz audio subcarrier, along with the video itself, you separately get the AM visual carrier with sidebands containing picture information, plus an FM carrier with sidebands containing audio information. The FM carrier on the air is 4.5 MHz higher than the visual carrier. The original 4.5 MHz FM sound carrier (its sideband near 4.5 MHz), completely disappears on the air. Although this process can work with extremely linear amplifiers, it is just not done for full power commercial TV transmitters, due to the perfection needed for amplifier linearity so that beats don't appear. They instead use two separate transmitters, one for sound, the other for picture. At the receiving end, whenever the entire TV signal is presented to the input of an AM detector, the detector output is the video signal, plus a frequency modulated 4.5 MHz audio subcarrier, even though no such 4.5 MHz carrier is ever on the air. Older receivers in the forties would filter out the upper frequencies for the video channel, so that the AM detector was not presented with the full TV signal, but only those frequencies related to video. As a result, no sound subcarrier was on the output of the AM detector. Likewise, only the highest frequencies of the TV signal were presented to the input of an FM demodulator, which had only the sound signal as output. In the intercarrier system, the entire TV signal, not just the video related frequencies, are presented to an AM detector, where the visual carrier is much stronger than the frequencies related to the sound. As a result, the 4.5 MHz FM sound subcarrier with sidebands appears for the first time. Note that this FM sound subcarrier (with sidebands) must still be presented to the input of an FM detector in order to recover the audio signal. So "sound subcarrier" is legitimate, since it is physically indistinguishable from "4.5 MHz sound I.F." Ohgddfp (talk) 20:36, 1 February 2014 (UTC)[reply]

I want to buy a new LED TV from the States and send it to Egypt; will it work? —Preceding unsigned comment added by 173.58.216.194 (talk) 01:56, 18 May 2010 (UTC)

History: CBS system

How come the early CBS system had 24 effective frames/sec but 144 fields/sec? Did it split each frame into 144 / 24 = 6 fields? That's the logical explanation to me, but the way it's written now, it seems a bit obscure and confusing, so the section would probably benefit from adding that fact if it's true. --79.193.57.210 (talk) 21:00, 8 June 2010 (UTC)

It's true that the CBS analog system (briefly the law of the land, per the FCC) used 6 fields per frame. Ohgddfp (talk) 16:09, 6 January 2024 (UTC)

How about comparing CBS with French television channels? Compare NBC with England's television channels. Compare ABC and FOX with German television channels. Do the frozen pictures look any different? --DuskRider 08:33, 5 December 2012 (UTC)

Phase is Hue - Amplitude is Saturation -- Not quite

About this portion of the article: "The phase represents the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation."

This is not completely true.

Below are some concepts that can help guide the search for more reliable sources of information to put into the article.

If only the chrominance signal (which is the MODULATED subcarrier with the subcarrier itself suppressed) is examined on a vectorscope, one can readily see amplitude and phase for various colors. But the amplitude provides zero quantifiable information on how much a given color is saturated. ZERO. At best, one can examine the NTSC spec and figure out that the actual saturation of a given color, based only on what's seen on a vectorscope, is within some VERY WIDE RANGE of possible saturations. That's because it's the COMBINATION of the Y (brightness or monochrome) signal and the subcarrier amplitude that most determines the actual saturation.

And gamma must be taken into account as well.

Furthermore, hue often shifts noticeably when only the subcarrier amplitude is changed a great deal. And we are talking IDEAL HARDWARE here. The discrepancies are a mathematical consequence of the NTSC specifications, not hardware imperfections.

One of the things that can be said for sure is that an increase in subcarrier amplitude will cause an increase in saturation, provided that the color on the display screen is not already at the maximum saturation that the display screen primary colors can support.

So here's a better way to put this: "Regarding the subcarrier, the phase represents the approximate instantaneous color hue, and the amplitude, COMBINED WITH THE EFFECTS OF THE Y SIGNAL, represents the approximate instantaneous color saturation." Ghidoekjf (talk) 22:49, 9 July 2010 (UTC)
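
[NOTE: A minimal sketch of the point made above, that displayed saturation depends on the Y signal as well as the subcarrier amplitude. The YIQ-to-RGB coefficients are the commonly published ones (an assumption, not from this page), and gamma is ignored, which the comment notes is a further complication.]

    # Same chroma (I, Q) amplitude riding on three different luma levels.
    def yiq_to_rgb(y, i, q):                  # commonly published coefficients
        return (y + 0.956*i + 0.621*q,
                y - 0.272*i - 0.647*q,
                y - 1.106*i + 1.703*q)
    def saturation(rgb):                      # HSV-style saturation
        mx, mn = max(rgb), min(rgb)
        return 0.0 if mx == 0 else (mx - mn) / mx
    i, q = 0.15, 0.05                         # fixed subcarrier amplitude/phase
    for y in (0.3, 0.6, 0.9):
        print(y, round(saturation(yiq_to_rgb(y, i, q)), 3))
    # Output: saturation falls as Y rises, with the chroma held constant.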

Technical Details -- Power Supply Frequency and Intermodulation

About the article - Technical Details --> Lines and refresh rate --> 2nd Paragraph, where it says

"Matching the field refresh rate to the power source avoided INTERMODULATION (also called beating), which produces rolling bars on the screen."

The problem is the term INTERMODULATION. Another problem is what happens to the "rolling bars" with the color system, since the field refresh is no longer matched to the 60 Hz frequency of alternating current power. Although some measure of intermodulation always occurs, and certainly a large measure of intermodulation occurs when the "rolling bars" are SEVERE, due to at minimum the non-linearity of the picture tube, intermodulation is NOT REQUIRED for producing a MILD "rolling bars" beat pattern. And mild is certainly the most likely by far. The example of "rolling bars" and beating was mentioned only for black and white, but not mentioned for color. So I'll mention it here. 60 Hz minus 59.94 Hz = 0.06 Hz, where 0.06 Hz is the frequency of the beat pattern for color service. Although the beats really do occur at the 0.06 Hz rate, this does not mean there is a substantial 0.06 Hz FREQUENCY COMPONENT when there is a "rolling bar" that takes 8 seconds to crawl from bottom to top. The 0.06 Hz "rolling bars" frequency component for color can only occur with intermodulation, and if there is no substantial degree of 0.06 Hz component, then there is also no substantial intermodulation either in the case of MILD rolling bars. Instead of intermodulation, the effect of MILD rolling bars is due almost entirely to simple LINEAR addition of the 60 or 120 Hz power supply ripple to the video signal. And intermodulation requires NON-LINEAR, not LINEAR combination. So change the article to REMOVE the term INTERMODULATION, thereby improving the accuracy of the article. Ghidoekjf (talk) 16:19, 8 August 2010 (UTC)
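
[NOTE: The beat-rate arithmetic in the comment above, written out; the exact color field rate of 60/1.001 Hz is assumed here rather than the rounded 59.94 Hz.]

    # Beat between the power line frequency and the NTSC color field rate.
    power_line = 60.0                   # Hz
    field_rate = 60.0 / 1.001           # 59.9400599... Hz
    beat = power_line - field_rate      # about 0.0599 Hz
    print(beat, 1.0 / beat)             # one full beat cycle every ~16.7 s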

If the power supply filtering isn't so good then line frequency, or a multiple of line frequency, comes through into the video signal and is visible. For this reason, the power supplies for color TV had to be better than previously needed for B&W only. Though 0.06 Hz is slow enough not to be so noticeable. Gah4 (talk) 04:34, 19 November 2010 (UTC)
Actually, intermodulation isn't the main problem. The CRT is sensitive to external magnetic fields such as from nearby transformers or motors. By making the frame rate equal to the power frequency, although the effect isn't eliminated, it is at least stationary and thus goes unnoticed. The slight change of frame rate with the introduction of color ceased to be as serious a problem because the increase in accelerating voltage significantly decreased the sensitivity of the CRT to magnetic fields. Thus the effect of external fields became far less noticeable. A further improvement is obtained because color CRTs usually have a limited amount of magnetic shielding. 86.176.154.127 (talk) 18:10, 21 January 2011 (UTC)
The external magnetic fields do not produce a moving shadow effect ("rolling bars"). Instead, they affect the scanning beam positioning so as to bend the image slightly, and this bending effect moves vertically at the same rate as genuine "rolling bars". Also, the oscillating magnetic fields can cause rolling color impurity as well, also at the same rate as rolling bars. But this issue is "rolling bars", which is a video shading effect, not affected by magnetic fields. Bending can also happen if power supply ripple gets into video circuits ahead of the sync circuits. It's the disturbance of those horizontal sync circuits that causes the geometric bending. Therefore, magnetic field problems causing some disturbance of some sort are limited to design issues of transformers too close. In service, a failed automatic degausser circuit applies some degaussing all the time, which is also a magnetic disturbance. Conclusion: Magnetic fields are indeed the more difficult design issue, but from an internal power transformer too close, not from external oscillating fields, except for a nearby vacuum cleaner motor. The "rolling bars" are a video moving shadow effect, potentially a design or component failure issue equal between monochrome and color, although color TVs have a bigger power supply. Ohgddfp (talk) 04:40, 21 September 2012 (UTC)

Color correction in studio monitors and home receivers - Article is right on

This section in the article looks very good. I guess this is not really an edit. Ghidoekjf (talk) 16:29, 8 August 2010 (UTC)

Color Encoding -- Discretization

About -- Technical Details --> Color Encoding --> 3rd paragraph (towards end), where it says,

"This process of discretization necessarily degrades the picture information somewhat, ..."

This is not true.

Here is what I expect to be found from a good source. Transferring back and forth between discrete samples and a continuous signal is a LOSSLESS operation. Of course, any kind of operation done in a sloppy manner will cause degradation, but that's true for ANY operation. That means it's POSSIBLE for the analog tuner section of a modern flat panel LCD receiver to recover the original pixels from a modern video camera imager, all within the NTSC standard for low-power analog transmissions still on the air. Here's how it would work, just as an example to illustrate the concept. Use a video camera with a hypothetical 448x483 imager ("rectangular pixels"). The imager samples are processed into a composite video signal at the same sample rate (different from the subcarrier). The pixels now also contain the subcarrier sidebands (chroma). Then low-pass filter (flat within the entire video band) so that indeed the horizontal pixels are blended into continuous lines, then transmitted, and received. The analog tuner section of the receiver resamples at the same rate, using burst as the clock input to a sample reference generator, where the sample reference is different (and higher) compared to the subcarrier frequency. There are other details that need to be ensured, but still within FCC NTSC specs. The result is a complete and exact replication of the original camera pixels as far as black and white movies are concerned. The exactness is limited only by practical hardware, not the NTSC standard. For color video, color-difference information is still bandwidth-limited and other NTSC artifacts are also still intact. Of course inside HD displays, a digital resample and upconversion is needed. Sorry, no additional resolution or picture definition by upconverting NTSC to HD.

So discretization, whether the vertical scanning lines of NTSC, or both vertical and horizontal for digital TV, does not necessarily cause picture quality degradation.

I recommend simply removing the article phrase containing the word "degrades". Ghidoekjf (talk) 17:33, 8 August 2010 (UTC)[reply]

Comparative quality - Differential Phase Cannot happen as a Reception Problem[edit]

About - Comparative quality --> 1st paragraph, where it says, "Reception problems can degrade an NTSC picture by changing the phase of the color signal (actually differential phase distortion), ..."

The problem is equating "differential phase distortion" with "reception problems". "Reception problems" sounds like issues that occur in the air between the transmit and receive antennas.

But "through the air" reception problems are limited to 1) Signal that is too weak. 2) Interfering signals linearly added to the desired signal, and 3) Multipath.

Multipath, which alters the reception strength (amplitude) versus frequency and the phase versus frequency, and sometimes also nulls out some frequencies, is a form of LINEAR distortion, and is caused by obstacles and reflections in the signal path from transmit antenna to receive antenna. Differential phase distortion on the other hand is NON-LINEAR, and so cannot happen in the air. It happens mostly in older TV transmitters, and also in poorly designed TV receivers near overload condition.

So a transmitter defect (differential phase distortion) is not really a "reception problem" at all, as the article implies. And an overloaded tuner (from a strong signal) is not the fault of reception conditions either. Certainly a signal that is very strong is not thought of as a reception problem.

What can be said is that NTSC is more visually sensitive than PAL to both differential phase distortion and multipath. With NTSC, multipath can produce additional hues not present in the original. Ghidoekjf (talk) 21:39, 8 August 2010 (UTC)[reply]

So how do you explain the shift in phase of the hue vector between different stations on reception? All the colors of similar luminance shift by the same phase shift which can only be explained by differential phase distortion. 86.176.154.127 (talk) 18:03, 21 January 2011 (UTC)[reply]
About "shift in phase of the hue vector ...": There is no such thing as "shift in phase of the hue vector". Phase is a comparison between two signals. With NTSC, the two signals are 1)The reconstructed subcarrier inside the receiver, made from the "burst" portion of the incoming signal, and 2) The "chrominance" portion of the "active" video signal. Between 1) and 2) above, sometimes the phase relationship between those two signals are different between multiple stations broadcasting the same program, and I think this is what you are asking about. The cause is mostly improper technical operation at one or both TV stations. Some of the cause can be differential phase problems ( a form of non-linear distortion), most of which is in some videotape playback and some transmitters to some hopefully small degree. It can also occur if one or more of the received stations is onverloading the tuner because the signal is too strong. It can never occur in the air.
Ohgddfp (talk) 17:13, 13 September 2012 (UTC)[reply]

I-Q vs RGB[edit]

Why no mention of the I-Q components of the color subcarrier, their respective bandwidths, and the matrix between them and the (B-Y) (R-Y) components? Gah4 (talk) 04:37, 19 November 2010 (UTC)[reply]

Because you haven't had time to write it up yet, with references? And maybe even a diagram? Pretty please! --Wtshymanski (talk) 14:48, 19 November 2010 (UTC)[reply]
This is a glaring omission. The article needs to explain how we get from R, G, B to R-Y, B-Y and then to I and Q. — Preceding unsigned comment added by 65.197.211.2 (talk) 00:27, 8 August 2012 (UTC)[reply]
Here is an explanation that can help with a search for sources. It should help to discern between good and bad sources.

The NTSC standard converts RGB video signals carried on 3 wires into a composite signal carried by 1 wire (see below on how this is done). At the receive end, the process can be reversed to recover the 3-wire RGB video signals. At the transmit end, NTSC also describes how to amplitude modulate a carrier wave with the composite signal to create an RF signal. NTSC further describes modifying this RF signal with special filtering, including phase correction and vestigial side-band filtering, although low-power stations do not require all this special filtering. The filtered RF signal is then sent to the transmit antenna.

Now to describe how the RGB 3-wire signals are converted into a 1-wire composite signal. First the RGB signals are converted into 3 new signals, also carried on three separate wires. These are Y, I, and Q. The Y is approximately the lightness or darkness of colors, and so this also serves as an excellent black and white signal for compatibility to black and white receivers. It is created by adding the RGB signals together in the following proportions:

Y = 0.30R + 0.59G + 0.11B

The I and Q signals do a similar combination of RGB signals, but in radically different proportions:

I = 0.5990R - 0.2773G - 0.3217B

Q = 0.2130R - 0.5251G + 0.3121B

The above I and Q formulae were algebraically manipulated from the FCC formulae given below. Both versions give the exact same results.

I = -.27(B-Y) + .74(R-Y)

Q = .41(B-Y) + .48(R-Y)
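
As a quick sanity check, here is a short Python sketch (the test color is an arbitrary assumption) confirming that the RGB-domain coefficients above and the FCC color-difference formulae give the same I and Q:

    R, G, B = 0.2, 0.5, 0.8    # arbitrary test color

    Y = 0.30 * R + 0.59 * G + 0.11 * B

    # RGB-domain form (algebraically expanded from the FCC formulae)
    I1 = 0.5990 * R - 0.2773 * G - 0.3217 * B
    Q1 = 0.2130 * R - 0.5251 * G + 0.3121 * B

    # FCC color-difference form
    I2 = -0.27 * (B - Y) + 0.74 * (R - Y)
    Q2 = 0.41 * (B - Y) + 0.48 * (R - Y)

    assert abs(I1 - I2) < 1e-4 and abs(Q1 - Q2) < 1e-4    # both routes agree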

Note that most of the Internet, and even several "NTSC" standards documents, use numbers never written by the NTSC. Incorrect colors are produced when equipment using non-NTSC numbers is combined with equipment using the numbers actually written by the NTSC; an example of such a mismatch is a receiver using non-NTSC numbers that receives a signal transmitted using genuine FCC numbers. Note that the NTSC (National Television System Committee) wrote the numbers adopted by the FCC for over-the-air analog broadcasting. The numbers given here are those FCC numbers actually written by the NTSC.

The Q signal must be bandwidth limited to prevent crosstalk with the wider bandwidth I signal after modulation. This modulation process is quadrature amplitude modulation of a 3.5795454545... MHz subcarrier, with I and Q as the two modulating signals. The subcarrier itself is removed, leaving only the subcarrier sidebands, which carry all of the I and Q information. The NTSC limits of the above-mentioned filters are given in the FCC rules. A lot of equipment made over the years violates these filter limits.

The quadrature amplitude modulated signal (called "chroma") is then simply added to the Y signal to form the (1-wire) composite video signal. But the frequency band of the chroma signal occupies the same general frequency band as the high end of the Y signal, causing mutual crosstalk. Note that the high end of the Y signal carries only small picture details, while the chroma signal carries the I/Q information. The frequency of the subcarrier is chosen to make this crosstalk much less visible to the human eye.
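
A minimal Python sketch of that modulation step (the flat test color, the 4x-subcarrier sample rate, and the averaging low-pass are assumptions for illustration; real chroma carries band-limited I/Q signals, and the burst phase convention is glossed over):

    import numpy as np

    fsc = 315e6 / 88               # color subcarrier, ~3.5795 MHz
    fs = 4 * fsc                   # assumed processing sample rate
    t = np.arange(4 * 64) / fs     # exactly 64 subcarrier cycles

    Y, I, Q = 0.44, -0.28, 0.03    # flat test color (assumed already band-limited)

    # suppressed-carrier quadrature amplitude modulation
    chroma = I * np.cos(2 * np.pi * fsc * t) + Q * np.sin(2 * np.pi * fsc * t)
    composite = Y + chroma         # chroma simply added to Y on the single wire

    # synchronous (product) demodulation; averaging over whole subcarrier
    # cycles stands in for an ideal low-pass filter
    I_rec = 2 * np.mean(composite * np.cos(2 * np.pi * fsc * t))
    Q_rec = 2 * np.mean(composite * np.sin(2 * np.pi * fsc * t))
    assert abs(I_rec - I) < 1e-9 and abs(Q_rec - Q) < 1e-9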

The Y signal serves as a black and white picture, directly usable by black and white receivers. The crosstalk of chroma into the Y signal is seen on some black and white receivers as a monochrome screen-wire pattern on strongly colored areas of the image. Of course, all these patterns are seen in black and white on black and white receivers. Fortunately, these crosstalk patterns are hard for the eye to notice. Older 1950s black and white receivers show stronger crosstalk, but it is still not bothersome. Black and white receivers otherwise ignore the chroma signal.

This same Y (black and white) signal serves as the base black and white picture in color receivers as well. As in monochrome receivers, this Y signal is also contaminated with chroma, which is fortunately hard for the eye to notice. The chroma contains the "color difference" information derived from the I/Q information it carries. Chroma, along with fine-detailed Y, is applied to a chroma demodulator to recover I/Q signals that are contaminated with fine-detailed luminance. This fine-detailed luminance creates false colors in the demodulator. Fortunately, the false colors alternate from one video frame to the next with the complementary color. The eye mostly averages out this false color, another clever feature of NTSC. Occasionally this trick fails and weird colored rainbows dance through portions of the image.

The recovered I/Q comprises the color difference image, which is superimposed over the black and white picture to produce a complete color picture. This process is used on the Internet as well, and is similar to painting a black and white picture with transparent watercolors to "colorize" the black and white image. The I/Q signals provide these transparent "watercolors", which are always all the same lightness. So it is mostly the underlying black and white signal that provides the lightness or darkness of any given color.

R-Y and B-Y signals are optional. R-Y is made simply by adding an inverted Y signal to the R signal; this can be done precisely with just 2 resistors. B-Y can be made in a similar way. At the transmitting end (usually inside a video camera), the R-Y and B-Y signals can be created first, and the FCC formulae can then use them to make I and Q. Or R-Y and B-Y can be dispensed with completely and the alternate I and Q formulae can be used to get the exact same results.

Note that a special feature built into the system allows the option of simplified recovery of the R-Y and B-Y signals in the receiver using in-phase and quadrature subcarrier signals. These can then simply be combined with the Y signal to recover the R and B signals for the picture tube guns. But the G signal still needs to be made from matrix circuits, or else G-Y can be recovered directly using the more complicated non-quadrature version of the subcarrier. Then G-Y can be combined with Y to get the G video signal for the picture tube.

Another simplification from using R-Y and B-Y signals in the receiver is to get rid of the I bandpass circuit and the I delay line. The I delay line was once made with coils, capacitors and resistors. A problem with receivers simplified in this way is that some colors (skin tones, orange, cyan) have only one-third the horizontal resolution compared with genuine I/Q demodulation. Almost all receivers, however, are of this simplified design.

R-Y / B-Y signals are easier than I/Q signals to generate in cheaper TV cameras (industrial and consumer grade), and work well for simplified lower quality "NTSC-like" signals, usually called "NTSC compatible", where the filters can be simplified or even removed entirely, with an attendant reduction in picture quality. Signals made with these simplified processes are often considered illegal for over-the-air broadcasting. In particular, a genuine I/Q receiver (the 1985 RCA ColorTrak series, for example) will produce annoying crosstalk effects when receiving such simplified NTSC signals.

Ohgddfp (talk) 20:30, 14 September 2012 (UTC)[reply]

I agree, this would be a nice addition. I was pondering quite a while how the luminance and chrominance bands can overlap without cross-talk. In addition to the above very detailed explanation I found some useful info at Composite video about how the chosen sub-carrier frequency of the chroma information minimizes the cross-talk. In particular, that the harmonic components of the luma and chroma signals are - mostly - non-overlapping. I still don't understand how the luma signal ends up being composed of the line frequency's harmonics though ... Explaining all this - perhaps with a frequency domain diagram - would be very useful. I'm not an expert, so I wouldn't do this myself. Seipher (talk) 23:04, 11 July 2013 (UTC)[reply]
About: "I still don't understand how the luma signal ends up being composed of the line frequency's harmonics though". There are indeed strong harmonics of the line frequency present in a luma signal. But this is indeed hard to understand because the line frequency is itself the 525th harmonic of the frame frequency. The luma signal is actually composed of the frame frequency's harmonics, which include the line frequency itself, and also harmonics of the line frequency. Line frequency harmonics are therefore harmonics of both the frame frequency and the line frequency, simultaneously. Keep in mind that this is strictly true for scenes containing no motion. This is substantially true most of the time because most scenes do not contain a lot of motion. For scenes with a lot of motion, the luma signal can contain any frequency, which is guaranteed to cause problems with luma/Chroma separation inside the color receiver. Note that the chroma is composed of harmonics of 14.985 Hz. Yes, you saw this number here first. These chroma harmonics start at about 2 MHz (for FCC compliant broadcasts), and continue up to about 4.1 MHz. Ohgddfp (talk) 15:59, 26 November 2013 (UTC)[reply]
Vertical lines in the image will generate line frequency harmonics. Horizontal lines generate field frequency harmonics. As those are fairly common in images, those harmonics are common in the resulting signal, but other frequencies are there, too. Narrow diagonal stripes generate a lot of harmonics, many in between the line rate harmonics. TV personalities, as far as I know, are told not to wear clothes with those stripes, most commonly on ties. For the same reason, much of the chroma signal has harmonics in between, but unusual patterns might produce others. Only in the later years of NTSC receivers were the comb filters needed to separate them actually built. Gah4 (talk) 22:27, 15 March 2022 (UTC)[reply]
About: "Narrow diagonal stripes generate a lot of harmonics, many in between the line rate harmonics." , mentioned in the talk paragraph just above. Most of the paragraph I agree with, but with no motion, a spectrum analyzer reveals that "narrow diagonal stripes" actually generates zero in-between harmonics. That's because with no motion, the signal repeats periodically, and it is a mathematical certainty that a repeating signal can only consist of a fundamental (the repeating frame rate) and harmonics of the fundamental.. The motion-less narrow width diagonal lines that might be in the scene can only cause harmonics of both the frame rate and the line rate without contributing any in-between frequencies. Only moving details can produce those in-between frequencies. Ohgddfp (talk) 00:05, 6 January 2024 (UTC)[reply]
But in another case with no motion of the original camera RGB signals, in-between frequencies are possible. Before demodulation, a chroma signal correctly mixed into the luminance channel to produce the composite signal produces interference that is indeed moving, known as "dot crawl" in the colored portions of the image. It is this movement that causes those frequencies in-between both the frame rate harmonics and the line rate harmonics to be generated--this is the frequency interleaving feature of NTSC. Ohgddfp (talk) 00:05, 6 January 2024 (UTC)[reply]
Here's another way to see frequency interleaving in a signal having color and no motion. The fact that the subcarrier frequency is an odd multiple of half the line frequency, combined with the fact that there are an odd number of scan lines in a frame, causes the color NTSC baseband composite signal to repeat every four TV fields (every two frames). This makes a chrominance sideband contribute the first harmonic (fundamental), which is one-half the frame rate (approximately 15 Hz), with harmonics that are only the odd multiples of that fundamental, the 238875th harmonic being the subcarrier. Only the odd harmonics are contributed by chrominance because the un-modulated color difference signals, from which the chrominance is made, have a 30 Hz first harmonic, making the chrominance sidebands 30 Hz from each other. The luminance harmonics are also 30 Hz from each other. So the first harmonic is contributed by a chrominance sideband, the second harmonic is contributed by luminance, the third by a chrominance sideband, and so on. Ohgddfp (talk) 00:05, 6 January 2024 (UTC)[reply]
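
Those numerical relationships are easy to verify in exact rational arithmetic; here is a minimal Python sketch using only the rates quoted above:

    from fractions import Fraction

    fsc = Fraction(315_000_000, 88)    # color subcarrier
    fh = fsc / Fraction(455, 2)        # line rate: the subcarrier is 455/2 times it
    ff = fh / 525                      # frame rate (525 lines per frame)

    assert fsc == 455 * (fh / 2)       # odd multiple (455) of half the line rate
    assert fsc == 238_875 * (ff / 2)   # 238875th harmonic of half the frame rate
    assert fsc - 227 * fh == fh / 2    # chroma falls halfway between line-rate harmonics

    print(float(ff / 2))               # 14.98501... Hz, the fundamental quoted above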

Where's Vietnam in the list of countries?[edit]

In the SECAM article (and in other publications, such as the WRTH), Vietnam is always shown as having both SECAM and NTSC used for colour TV - the SECAM article says it is "simulcast with NTSC-M". Presumably, NTSC is used in the former South Vietnam and SECAM in the former North Vietnam. So why is there no mention of it in the NTSC article? --108.12.198.48 (talk) 01:33, 31 July 2011 (UTC)[reply]

South America[edit]

Is it really accurate to say that most of South America used the NTSC system? While it seems that the majority of South American countries adopted the standard, the map shows that geographically the majority was PAL. Did the majority of viewers on the continent receive NTSC? (My apologies if this has been raised before; I haven't read through the entire talk page) Stanstaple (talk) 18:59, 16 August 2011 (UTC)[reply]

Ireland[edit]

There were no colour tests in Ireland using NTSC on 405 lines at all. Suggest the link to Ireland is removed. Donoreavenue (talk) 00:19, 20 May 2012 (UTC)[reply]

Are you sure? The BBC and IBA were experimenting with NTSC as late as 1965. It's possible that engineers in RTE (who were following developments in Britain very closely) were carrying out their own tests? 94.2.188.218 (talk) 18:50, 5 August 2015 (UTC)[reply]

PAL acronym[edit]

My favourite interpretation of PAL is Please All Lobbies which it needed to do to become pan-European, except France.

Upcoming Fix - Discretization[edit]

About "This process of discretization necessarily degrades the picture information somewhat, though with small enough pixels the effect may be imperceptible.": Where is the source? We need to find a reliable source for this. Otherwise, I will delete the above quote from the article. Or, ....... not. So here's my argument. From sources on this subject over the years, I've found that discretization is a lossless process, meaning that, unlike that for many other kinds of operations where mathematically, there exists a minimum degree of degradation, discretization has no such minimum degree of degradation. In other words, with discretization, as with many other kinds of signal processing, there is no mathematical limitation as to how small the degradation can be.

So in NTSC, degradation caused by discretization can be minimized to any desired degree through hardware improvements. But that can be said about almost all operations. So why single out discretization? This is not an NTSC problem in reality. It gives weight to something that, due to hardware imperfections, is no worse than just about any other signal processing operation. Giving such weight to something without a good source to back it up only confuses the reader. Since this is actually in the realm of mathematics, I am waiting for a mathematical proof from a reliable source, that discretization "necessarily degrades the picture information somewhat". Since this is a mathematical issue, the source must be compatible with mainstream science. - A source, anyone? Ohgddfp (talk) 02:06, 13 December 2013 (UTC)[reply]

The section on colour encoding needs to be looked at by someone who really knows this subject well. The lower half of this section has some pretty specious explanations about scanning rates, dot patterns and sound IF frequencies. For starters, I don't think these topics should be in the article at all. How is Mr Average supposed to understand any of this gobbledygook? Spyglasses 10:31, 1 February 2014 (UTC) — Preceding unsigned comment added by Spyglasses (talk • contribs)

From Nyquist–Shannon_sampling_theorem if you sample a band-limited signal with a sample rate greater than twice the bandwidth, no information is lost. Video sources are normally band limited, so this should not be a problem. Actually Nyquist sampling requires that the signal be available over an infinite time span, but it is plenty close enough for usual video and audio sources. The loss comes later, when MPEG compression is done to the sampled signal. In signal processing, there is also quantization, relating to the finite number of values at each sample point. With uniform sample spacing, this results in quantization noise with amplitude about half the quantization step. Quantization noise is fundamental in any digital system, but can usually be made low enough, compared with other noise sources, such as thermal noise. Gah4 (talk) 16:59, 12 May 2017 (UTC)[reply]
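
A small Python sketch of that quantization-noise claim (the uniform full-scale test signal and the 8-bit depth are assumptions for illustration): the error of a uniform quantizer is bounded by half a step and is roughly uniform over one step, so its RMS comes out near step/sqrt(12).

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 100_000)    # stand-in for a full-scale video signal

    bits = 8
    step = 2.0 / 2**bits                   # quantization step over the [-1, 1] range
    xq = np.round(x / step) * step         # uniform quantizer

    err = xq - x                           # bounded by half a step, as noted above
    print(err.std(), step / np.sqrt(12))   # measured vs. predicted RMS agree closely
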
Yes, digitization always causes quantization noise, which indeed is a degradation of the picture information. Hopefully such degradation (banding) is not usually noticeable. Now discretization, required by digital systems, also occurs in some analog systems where the signal is sampled but not digitized. This is sampled analog. There are some video tape time-base correctors (used for color-under VCRs inside cable TV studios; I used them myself) that are this kind of sampled analog, using analog memory cells in a bucket-brigade delay arrangement. So sampled (discretized) analog degrades only through hardware imperfections, just as any analog operation is degraded by hardware imperfections. Analog discretization has no inherent picture information degradation at all. Ohgddfp (talk) 16:39, 6 January 2024 (UTC)[reply]

China uses PAL, not NTSC[edit]

According to [1], China uses PAL D, not NTSC. I also have an old Sharp chart showing the TV systems of the world that shows the same thing. I would like to propose that China be moved from NTSC to PAL. RAM (talk) 02:22, 6 March 2014 (UTC)[reply]

Taiwan seems to be on the list of countries that switched away from NTSC, but not on the list of countries using, or previously using, NTSC. According to the one China policy, Taiwan is part of China, so should China be back on? Gah4 (talk) 17:03, 12 May 2017 (UTC)[reply]

Uhh... what?[edit]

Quote:

"

This standard is slowly being replaced by HDTV.

"

HDTV (high-definition television) is not a "standard"; it is an ambiguous comparative term describing the resolution of nearly any video mode. NTSC is not being "replaced by HDTV"; it is being replaced by newer video systems that support high-definition modes. Once upon a time, the 405-line television System A was considered "high definition" compared to the previous 30-line Baird system, as the later 625-line system was to System A.

External links modified[edit]

Hello fellow Wikipedians,

I have just added archive links to one external link on NTSC. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers. —cyberbot IITalk to my owner:Online 07:46, 27 August 2015 (UTC)[reply]

PAL-N, AFN transmitters in Germany, Paraguay[edit]

  • Statement about PAL-N: Greater number of lines results in higher quality.

It would be worth having a comment on this from someone who has actually seen a PAL-N transmission. But TV engineers believe that PAL-N is no better than PAL-M, because the video bandwidth is limited to 4.2 MHz as well. And in fact it is still used only in Argentina and Paraguay, with all analogue transmitters in Uruguay already closed.

  • Statement about NTSC 4.43: The format was used by the USAF TV based in Germany during the Cold War.[citation needed]

AFN TV transmitters in Germany used video carrier frequencies in accordance with the European "E" channels. But the video itself was standard NTSC-M and not some oddball. Audio was 4.5 MHz above the video carrier as well. I have never seen any information other than this. Thus I think that the statement here is very likely incorrect, apparently reflecting confusion from the use of NTSC-M in the "E" channel spacing. The Cold War reference also needs to be removed. The last AFN TV transmitters in Germany were closed only a few years ago, probably even outliving any PAL-B/G transmitters (I would have to check the timeline in detail to see whether NTSC was indeed still on air in Germany when PAL was already gone).

  • Paraguay (switched to PAL-N in 2005)

The 1994 edition of the World Radio TV Handbook already shows PAL-N for Paraguay (p. 427). For all I know, Paraguay always used the "N" system, i.e. 625/50 with 4.2 MHz video bandwidth, and started colour TV outright with PAL-N, like the other two "N" countries which earlier chose 625/50 but had to observe the 4.2 MHz video bandwidth restriction imposed by the American 6 MHz channel spacing.

--2003:45:4547:4A33:215D:377A:8F93:CD21 (talk) 11:45, 11 September 2016 (UTC)[reply]

NTSC definition of the acronym[edit]

I've seen NTSC as an acronym for "National Television System Committee", but somewhere I read it was National Television Standards Committee. Is it possible that the latter was also used, or that it became one or the other over time? Misty MH (talk) 08:14, 5 January 2017 (UTC)[reply]

The NTSC called themselves the "National Television System Committee". It was never officially changed to anything else. See their 1953 publication: "Petition of National Television System Committee for Adoption of Transmission Standards for Color Television". This can be found in Google Books. Ohgddfp (talk) 16:50, 6 January 2024 (UTC)[reply]

External links modified (February 2018)[edit]

Hello fellow Wikipedians,

I have just modified 6 external links on NTSC. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 01:29, 11 February 2018 (UTC)[reply]

"color system that was used in North America from 1954 and until digital conversion,"[edit]

The article says: color system that was used in North America from 1954 and until digital conversion, Seems to me that this statement leaves out something important. Broadcast analog NTSC has stopped in many countries, but it is still used, for example, in the composite video output of many devices. I suppose VHS tapes, based directly on NTSC, are going away fast, but DVDs still say NTSC on them. Newer DVD players will convert to component video (which still has NTSC timing) or HDMI, and might even be able to do that for non-NTSC DVDs. One fix is to add the word broadcast to the statement. Gah4 (talk) 18:39, 27 August 2018 (UTC)[reply]

NTSC refers to more than the colour system. NTSC also refers to the 525-line, (originally) 30 Hz frame rate analogue format of the TV signal. Thus DVDs can correctly be described as 'NTSC' if they conform to that standard. This contrasts with 625-line 25 Hz frame rate DVDs (576/25i), which are often erroneously described as 'PAL'. PAL only refers to the analogue colour encoding system used on broadcast TV. Although the 576/25i digital format on the disc is converted to a 625-line PAL encoded analogue signal for playback in most of the world (or it used to be, before this new-fangled HDMI malarkey came along, although some systems used RGB or Y/PB/PR connections), the analogue format of the signal owes nothing to the disc. Indeed, DVD players could be manufactured that output a 625-line SECAM encoded analogue signal for the French, Russian and a few other territories (but as far as I know, they were not).
DVD players manufactured for the Brazilian market play 'NTSC' discs but output their unique 525-line PAL encoded analogue signal (PAL-M). 86.142.48.141 (talk) 16:51, 30 August 2018 (UTC)[reply]
In either case, it is broadcast NTSC that is gone, but signals with NTSC timing, and even NTSC color encoding, are still around. A TV set that I had 30 years ago was supposed to be able to display 625 line signals, though I never had a source to try it. (Monochrome only, unless NTSC color encoded.) I suspect modern HDTV sets will have fewer problems with unusual timing, though the one I have now won't accept NTSC timing on the VGA input. Gah4 (talk) 18:31, 30 August 2018 (UTC)[reply]
Most CRT sets from the 1990's onward could cope with many different video timings (though not necessarily the colour encoding - though as time went on most colour decoder chips became universal). Flat panel TVs and computer monitors can only display signal timings for which they have an entry in their timing table. A VGA input will not accept an NTSC signal (or any regular analogue video signal) as the interface is very specific. 86.142.48.141 (talk) 12:01, 31 August 2018 (UTC)[reply]

Exact Vertical Frequency[edit]

A section of this article asserts that the vertical frequency is rounded to 59.939, a number lower than the commonly used rounding of 59.94. However, dividing 60 by 1.001 yields a number slightly higher than 59.94. It appears that this three-decimal number was derived from the specified horizontal frequency, which is itself rounded severely enough to cause this aberration. Indeed, in addition to 60 / 1.001, the exact frequency can also be derived by starting from 315 / 22 / 910 (MHz), further solidifying it. — Preceding unsigned comment added by 80.162.33.59 (talk) 00:19, 11 January 2019 (UTC)[reply]
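
The exact value is easy to check in rational arithmetic; here is a minimal Python sketch using only the frequencies discussed above:

    from fractions import Fraction

    fv = Fraction(60) / Fraction(1001, 1000)    # 60 / 1.001
    print(float(fv))                            # 59.94005994... -> 59.940, not 59.939

    # the same value via the horizontal-frequency route: 315/22/910 MHz, then /262.5
    fh = Fraction(315_000_000, 22) / 910        # line rate, 15734.2657... Hz
    assert fh / Fraction(525, 2) == fv          # field rate = line rate / 262.5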

There should be a tolerance on these numbers, but a search of fcc.gov didn't find it for me. Anyone know what it is? Gah4 (talk) 00:58, 11 January 2019 (UTC)[reply]
Page 51 of Ennes gives +/- 3 microseconds out of the vertical interval of 16,667 microseconds as the maximum allowable to preserve the visual illusion of perfect interlace, or about one part in 5,000.

[1] --52.119.118.14 (talk) 00:05, 31 August 2019 (UTC)[reply]

Normally they will be generated from the same reference, dividing the frequency down. That should preserve interlace at any frequency. Gah4 (talk) 00:09, 31 August 2019 (UTC)[reply]
From the article, the subcarrier is 315/88 MHz, the line rate is that divided by 227.5, and the field rate is the line rate divided by 262.5. That gives 59.9400599401 for the field rate. But okay, if the transmitter frequency shifts and the local crystal oscillator doesn't, then yes, it will be wrong. Then it depends on the design of the oscillator in the receiver. Otherwise, both the horizontal and vertical oscillators should phase-lock to the sync pulses over a wide enough range. Gah4 (talk) 00:25, 31 August 2019 (UTC)[reply]
The tolerance for carriers of FM broadcast stations in the US is 2 kHz (out of about 100 MHz) Talk:FM_broadcasting#tolerance, or about 20 ppm. I suspect that the tolerance on the subcarrier is about the same 20 ppm, though it is probably somewhere in fcc.gov. Gah4 (talk) 00:01, 5 June 2021 (UTC)[reply]
From the original FCC proposal, it seems that the tolerance on the color subcarrier is 3 ppm. Also, there is a 0.1 cycle/second/second tolerance on its rate of change. This tolerance then applies to the line and field rates, which are generated from that frequency. Gah4 (talk) 22:40, 15 March 2022 (UTC)[reply]


References

  1. ^ Ennes, Harold (1968). Television Systems Maintenance. Howard W. Sams.

RS-170 vs RS-170a or black&white vs color[edit]

It may not seem too consequential, but there are BIG differences between the original RS-170 and RS-170a - this should be mentioned in the variants

  • EIA RS-170 is a black/white only spec that was widely used in closed circuit TV systems and for industrial apps and is based on the original 1946 spec modified in 1950.
    • RS-330 is an industrial spec based on RS-170 that uses standard scan rates that was defined in 1966
    • RS-343 is an industrial spec based on RS-170 modified for higher scan rates that was defined in 1967
  • RS-170a was an upgraded spec from 1953 that was considered backwards compatible and designed to allow a compatible color system. This is what people generally call NTSC. The addition of the color burst signal on the back porch in the days of vacuum tube circuitry required extensive engineering support to transmit a good signal. Before the days of digital sync generators the timing signals were generated using monostable multivibrator timing circuits whose timing would drift as components aged and temperatures changed.

[1] [2]

People may be calling RS-170a an "upgraded version" of NTSC, but it was never official with the NTSC itself, nor with the Federal Communications Commission, and this should be made plain. Part of RS-170a was implemented by broadcasters for the purpose of frame-match editing with video tape recorders and does not contradict NTSC-FCC standards. Color fidelity took a hit after 1962, when receivers used the more energy efficient but differently colored phosphors. Eventually, some broadcasters could in theory have utilized cameras with RS-170a colorimetry that would have brought color fidelity back up to 1953 performance levels for those 1960s TV receivers. The color gamut of FCC-NTSC and the color gamut of RS-170a are actually different, and the two should be considered non-compatible standards as far as colorimetry is concerned. Indeed, broadcast monitors after 1962 used the newer phosphors and electronically partially corrected for an NTSC (not RS-170a) signal when the "Matrix" switch on the monitor was turned on. Receivers did the same. Ohgddfp (talk) 19:55, 3 June 2021 (UTC)[reply]

References

  1. ^ Showalter, Leonard (1969). Closed-Circuit TV for Engineers & Technicians. Howard W. Sams & Co. pp. 9–16.
  2. ^ Ennes, Harold (1968). Television Systems Maintenance. Howard W. Sams & Co.

Horizontal resolution (Pixels per line)??[edit]

I was very surprised to see that the article does not provide the horizontal resolution of NTSC frames at any point, which is a crucial piece of information for anyone working with the format. From my limited research, the answer appears to be that a standard NTSC signal contains 720 pixels per line (one source being https://web.archive.org/web/20170429173550/https://documentation.apple.com/en/finalcutpro/usermanual/chapter_C_section_6.html ), but that some of these may not be presented in the resulting image. I am a very new editor and I'm not certain of the best way to add this information to the article, but I think it's essential to be included.

Some additional and potentially conflicting information is available at https://en.wikipedia.org/wiki/480i , https://en.wikipedia.org/wiki/Nominal_analogue_blanking , https://en.wikipedia.org/wiki/Standard-definition_television and perhaps other pages. I would be very grateful if any editors more knowledgeable than me could perhaps consolidate the correct information under https://en.wikipedia.org/wiki/NTSC , where I think it is extremely relevant to be included. Thank you very much! 184.152.223.48 (talk) 14:40, 3 June 2021 (UTC)[reply]

It is there, but it is hiding. It is commonly given as video bandwidth, the frequency response of the video system all the way through. Unlike the digital case, it often has a gradual drop. From the bandwidth, horizontal sync. frequency, and fraction of the line that is the displayed image, you can convert to pixels. (Remember the factor of two from Nyquist rate.) It is also given as the Modulation Transfer Function for optical systems, and given graphically. Gah4 (talk) 02:11, 4 June 2021 (UTC)[reply]
That seems not to be a link, and no such file is found by Google. Do you have an actual URL? Gah4 (talk) 03:33, 7 June 2021 (UTC)[reply]
OK, one reference[1] says 52.6us for active video per line. I believe monochrome can go to 4.2MHz, or color with a good comb filter, so 52.6us * 4.2MHz * 2 ≈ 442 pixels. VHS at the slower speeds might be 2.5MHz or less, maybe 260 pixels. In any case, it is commonly noted in terms of video bandwidth and not pixels. Gah4 (talk) 23:51, 4 June 2021 (UTC)[reply]
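
That bandwidth-to-pixels conversion fits in a few lines of Python (the 52.6 us active line and the Nyquist factor of 2 are the assumptions stated above; the helper name is mine):

    # Hypothetical helper: convert video bandwidth to equivalent horizontal "pixels".
    def horizontal_pixels(bandwidth_hz: float, active_line_s: float = 52.6e-6) -> float:
        # one cycle of video bandwidth can resolve two alternating picture elements
        return 2.0 * bandwidth_hz * active_line_s

    print(round(horizontal_pixels(4.2e6)))    # ~442 for the full 4.2 MHz luminance band
    print(round(horizontal_pixels(2.5e6)))    # ~263 for slow-speed VHS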

References

  1. ^ "BASICS OF ANALOG VIDEO". www.maximintegrated.com. Maxim. Retrieved 4 June 2021.

exactly the same as a pixelized[edit]

The article says: exactly the same as a pixelized. This isn't quite right. In the case of analog video, there is a frequency response through the whole system, which gives a gradual fall-off in amplitude at higher frequencies. Gah4 (talk) 01:59, 4 June 2021 (UTC)[reply]

By the way, I used to go by "Ghidoekjf" elsewhere on this talk page. The camera usually had a non-standardized gentle high frequency roll off in the luminance channel. Similar for a converter from analog HDTV (like analog HDTV Y, R-Y, B-Y), where the luminance roll off might be mostly non-existent within the 4.25 MHz luminance frequency band. The analog receiver has a much more aggressive high frequency roll off in order to minimize the Gibbs effect, a.k.a. "haloing", so you are indeed correct that the overall system frequency response has a (luminance) roll off. Now, inside the LCD panel receiving an analog NTSC signal, the analog signal of a TV scan line is converted to discrete time by sampling according to Nyquist, which preserves 100% of the analog information. These samples are applied to the corresponding row of pixels on the LCD panel without explicitly converting from discrete time back to the continuous time of the original analog signal, leaving a possibly visible sample-and-hold display with consequences as follows. Along that row, comparing the spatial frequency spectrum of a continuous line on a CRT with that of the discrete row of pixels on an LCD panel, the spatial frequency spectrum within the 4.25 MHz analog bandwidth is identical for both CRT and LCD panel, including the overall system frequency roll off as 4.25 MHz is approached. Unfortunately, due to the LCD display pixels, additional image frequencies, spatially higher in frequency than any of the original analog frequencies, are present. Normally, when converting from discrete-time signaling to continuous-time signaling (as in a digital to analog converter), an analog low-pass filter is employed to get rid of the much higher and unwanted image frequencies, either electrically or spatially. But with an LCD panel, the finite size of the pixels themselves only provides some of this image-frequency removal. If an HDTV or 4K panel is used, these image frequencies are pushed to a spatial frequency band so much higher that the human eye provides much of this analog anti-image filtering. In any case, if you cannot see the LCD pixel structure itself, the spatial image frequencies are not causing a change in picture quality. What's left is that a line on a CRT and the row of pixels on the LCD have identical spatial frequency spectra and therefore look the same as each other. Therefore there is a minimum equivalent pixel resolution that an LCD panel is required to have so that none of the analog picture detail is lost when viewed on the LCD panel. Ohgddfp (talk) 20:12, 4 June 2021 (UTC)[reply]

This is traditionally given as MTF, or modulation transfer function. It is, for example, given for photographic films. In the case of video, it is commonly given as the frequency response of the video amplifier, including any filters intentionally added. An additional complication is that it is often (or almost always) given as line pairs, or cycles in frequency space, that is, as a spatial frequency. Following Nyquist, this is half the sampling rate, or pixel resolution in the case of digital signals. This is especially complicated as "line pairs" often abbreviates to "lines". Gah4 (talk) 01:59, 4 June 2021 (UTC)[reply]

Yes, these things you mention seem correct to me, but does not alter what I said. Ohgddfp (talk) 20:12, 4 June 2021 (UTC)[reply]

In any case, it isn't exactly the same, but it is related, and there is a resolution limit to analog video. Gah4 (talk) 01:59, 4 June 2021 (UTC)[reply]

But the spatial frequency spectrum is indeed exactly the same (CRT versus LCD panel), as I explained above. Therefore the visual aspects of the picture quality, including all the alternative measurements you mentioned, are also exactly the same with CRT versus LCD panel, not just related. (Check out discrete-time processing, which I think you already did to some extent.) And about the resolution limit of analog video: every format has its limit, whether it's analog NTSC, digitized NTSC, analog HDTV (yes, there is such a thing), or any of the other many digital video formats. They all have a limit (different from one another, of course) to the ultimate spatial frequency response. Ohgddfp (talk) 20:12, 4 June 2021 (UTC)[reply]
For a CRT, which accepts an analog luminance signal, there is no sampling. For a variety of reasons, the last being the spot size, there is a high frequency roll-off, which might have a long tail. Sampled signals have no long tail. The filter before sampling might be sharp or not so sharp, but anything higher goes into aliasing. It is the sharp cutoff of sampling vs. the more gradual falloff of analog signals that I was trying to say are not exactly the same. Depending on the response, maybe close enough, though. In the case of LCD, yes, the signal is sampled and aliasing can occur. Gah4 (talk) 23:36, 4 June 2021 (UTC)[reply]
About "For a CRT, which accepts an analog luminance signal, there is no sampling." Yes. That's right. About "For a variety of reasons, the last being the spot size, there is a high frequency roll-off, which might have a long tail." Yes, it is part of the system high frequency roll off. The receiver manufacturer considers the frequency response due to spot size when designing the video frequency roll off of the electronics. The system frequency response is a cascade of each stage of the system, the spot size being functionally one of those stages. About "The filter before sampling might be sharp or not so sharp, but anything higher goes into aliasing." Yes. As long as there is no significant energy beyond 4.25 MHz, the four or even three times subcarrier sampling rate will not produce aliasing. Ohgddfp (talk) 01:26, 5 June 2021 (UTC)[reply]
About "It is the sharp cutoff of sampling vs. the more gradual falloff of analog signals that I was trying to say are not exactly the same." So I believe this is the crux of the discussion here. The analog TV transmitter (which is a critical part of what NTSC was all about), provides a sharp cut off in order to prevent adjacent channel interference. The sharp cut off is shown by the sinc test signal on one of the blanking lines, which shows ringing before and after the pulse once it has gone through the transmitter. Only a sharp cut off can produce this ringing. This allows the maximum signal energy at all video frequencies, zero to 4.25 MHz to be transmitted, without changing the in-band frequency spectrum coming from the camera luminance channel. Almost all of the system slow luminance frequency response roll off is done inside the receiver, where the sinc test signal (sometimes on one of the blanking lines) has its ringing removed due to the gradual roll off. Remember that the system frequency response is a cascade of all the stages from camera to TV screen. There is a sharp cut off in the receiver as well, even in analog receivers. Again to prevent an adjacent station from interfering. So in the analog receiver, there is both a slow luminance roll off and a sharp cut off. Simultaneously. At the frequency of the sharp cut off, the energy is already rather low due to the gradual luminance channel roll off. So since 1953, analog color TV transmitters always had a sharp cut off, which naturally includes a sharp low-pass cut off filter of the luminance information. And since 1953, demo receivers, and starting in 1954, color TVs sold to the public, also had sharp cut off low-pass filtering, even though no luminance sampling was involved for the 50's and 60's. Ohgddfp (talk) 01:26, 5 June 2021 (UTC)[reply]
About "In the case of LCD, yes, the signal is sampled and aliasing can occur." Well anything can go wrong. But since the NTSC analog signal applied to the LCD panel has already gone through multiple sharp cut off low-pass filters, the aliasing indeed will not occur. But that is different from LCD induced unwanted image spatial frequencies I discussed earlier. Even though in both cases a sharp cut off low-pass filter needs to be involved in order to prevent aliasing and image frequencies, in the case of the LCD panel, some of the low-pass spatial filtering for the rejection of image frequencies is by the finite size of the pixels, and also by the human eye, especially when HDTV and 4k panels are involved. Ohgddfp (talk) 01:43, 5 June 2021 (UTC)[reply]
I think that without "exactly" I would not have any problem. The filtering in an analog video system won't be exactly the same as in a sampled system. Actually, no two analog systems will be exactly the same, but maybe close enough. Gah4 (talk) 02:28, 5 June 2021 (UTC)[reply]
Oh, "exactly". Well, readers already know that no two things on this earth are quantifiably exactly alike, so they would expect some other kind of understanding of "exact". The only other kind of "exact" that would have to come to mind would be by a process of elimination, which is the limit as to how much resolution a TV of either kind can be designed to have, when both kinds are fed an analog NTSC TV signal. Keep in mind that digital or discrete-time signals on the one hand, and analog continuous time signals on the other hand, as different as they are from each other, are at their heart linear systems, and therefore both have the same notions of "frequency response". Now with digital signal processing, it is easy to achieve any frequency response, including that of any analog system. Discrete analog time signals (yes, I said that right, and there really is such a thing) and continuous analog time signals can carry the exact same information, and the losses from converting back and forth between them is due solely to hardware imperfections, not continuous-time versus discrete time. As long as the hardware for both kinds achieves or exceeds the resolution that analog NTSC is capable of, then the limit as to how much resolution the designers of either type of TV can achieve is solely the resolution of the NTSC signal itself. And since both kinds of TVs are fed the same analog NTSC signal, then the limit of resolution that a designer can achieve for either kind of TV is mathematically exactly the same. Now what is the smallest pixel count that would represent an LCD panel hardware that achieve the same resolution as is available in an NTSC signal? That is an exact mathematical number, where any smaller pixel count makes it impossible to achieve full NTSC resolution, while it is also a mathematical certainty that more pixels can never result in greater than NTSC resolution. This is an exact number. And if "resolution" is understood to include the shape of the system frequency response roll off, digital signal processing makes it easy for a designer to make a given LCD TV to match the frequency response of a given CRT TV, where there is no inherent limit as to how exact the match between the two types can be. So given certain numbers, such as 4.25 MHz absolute TV response limit, the number of scan lines, the refresh rate, blanking specs, an LCD TV would need at minimum 453 pixels per row to achieve this. This is an exact number. Any less, than some picture details available in the analog NTSC signal will be lost. Any more, there is no improvement beyond what an NTSC analog transmission can achieve. (Although with more display pixels, the pixel structure becomes less visible, but pixel structure visibility is not part of defining "resolution"). So therefore the conclusion remains, an analog NTSC signal has exactly the same resolution as an LCD panel with 453 (rectangular shaped) pixels per row. Ohgddfp (talk) 05:06, 5 June 2021 (UTC)[reply]
When you get to quantum mechanics, you do find some things that are exactly alike. All electrons are exactly like all other electrons. All protons are exactly like all other protons. And, I suspect, no LCD panels have 453 pixels per row. There are some numbers that makers like, and that isn't one. 640 and 768 are numbers that they like. But yes, LCD panels will have an exact number, where analog video has a not so exact bandwidth, depending on the results of all the filters it goes through. I believe that there is an FCC rule regarding bandwidth for transmitters, but didn't find it in a quick search. There is the extra fun of the vestigial carrier, and the IF amplifiers having to correct for it. There are tolerances on frequencies and bandwidths, but I don't have the reference. In the case of analog systems, though, pretty much nothing is ever exact. Gah4 (talk) 23:20, 5 June 2021 (UTC)[reply]
Well, Gah4, that's absolutely right. I've never heard of a panel with 453 pixels per row. But it doesn't have to be a display. These same numbers are also a fully equivalent way to express software resolution, in that when digitizing at that or higher resolution, all the analog video picture details are captured. There were actually studio digitizers at work in the '70s that digitized at 10.8 MHz (3x subcarrier), which is just barely above the 453x483 software pixel resolution. And capturing all the analog picture details is not based on the shape of the roll off. It's based on the highest frequency that the sharp cut off band-limit filter on an FCC compliant transmitter is legally allowed to pass. That is what determines the software digitizing resolution needed so as not to miss any analog picture details that can squeeze through that transmitter, which is what is of practical value, even today, when talking about resolution. I'm sure that's what the "Talker" was asking about, because that is the only resolution that is of use today for determining when a digitization of old analog video will and will not lose analog picture detail in the digitizing process. It shows what the analog NTSC resolution would be, given in terms equivalent to modern technology. 453x483 shows people that old off-air analog NTSC content that was digitized will display with full analog NTSC resolution using 640x480 on YouTube. It also shows people that 360x240 will lose some of the analog NTSC picture details. As far as exactness is concerned, I take the numbers that the FCC requires as the passband (it's in drawings, not text). You mentioned that analog video has a not-so-exact bandwidth. According to the FCC, it does. My "exact" figure matches the FCC information for the sharp cut off frequency, which is the best that a legitimate and legal analog over-the-air transmitter can do. So to ensure that all analog picture detail is captured, the software pixel resolution spec must be based on the best that the TV transmitter can do. And that's a very tight number. And because the FCC specifies only one number on the drawing, any calculation of software pixel resolution has to be based exactly on that number. The fact that filters can vary doesn't change that magic number. Since the calculations are based on that one number, there is no variation in what the best TV transmitter can be before running afoul of the FCC rules. So my number is "exact", because there is no variation in the FCC's number, even while an actual filter may come up a little short. Of course, any analog filter has some variation. But in the case of the analog TV transmitter, there is not much variation in the cut off frequency that determines the software pixel resolution. That's because the tightness of this bandwidth is bound by the need to pass all the chroma information, which is at the high end of the video transmitter frequency band. (The sound transmitter is a separate RF power amplifier.) Any lack of bandwidth, even 5 percent, starts to noticeably smear the colors. That, combined with the very tight regulation of carrier frequency, results in a very tightly specified bandwidth indeed, making TV transmitters more expensive, because any cheating on this with sloppy filtering results in smeared colors for the viewer. Now, the frequency response roll off may vary in TV receivers, some of which even have a "sharpness control" that works on that roll off.
Remember that IF filters trying to correct for various issues are not part of the NTSC standard; only the transmitted signal is specified. So it's the transmitter filtering that is tightly controlled as far as the sharp cut off frequency is concerned, and for digitizing, it's that cut off frequency that determines the sample rate and the equivalent software pixel resolution needed to capture 100 percent of the analog video data that can squeeze through the best legitimate NTSC-FCC TV transmitter. Ohgddfp (talk) 06:55, 6 June 2021 (UTC)[reply]
Do you have the reference for the FCC document on this? I tried to find it, but not so hard. Besides discussing in here, the article should have it, too. Gah4 (talk) 09:29, 6 June 2021 (UTC)[reply]
Yes. It's a graphic from the FCC. It shows the idealized frequency spectrum of a TV transmission. The file is: FCC-NTSC_Idealized_Picture_Transmission_Amplitude_Characteristic.png