This is the talk page for discussing improvements to the Photon mapping article.
This is not a forum for general discussion of the article's subject.
This article is of interest to multiple WikiProjects.
Specialisation of ray tracing?
It's hard to tell from this article what in particular makes photon mapping a specialisation of ray tracing; I assume it has something to do with the data structure used for "caching" the "photons"... ? Chas zzz brown 07:55 Jan 28, 2003 (UTC)
reverse ray tracing?
The article says that reverse ray tracing originates the light rays at the light source. But doesn't reverse ray tracing have the rays start at the camera? MichaelGensheimer 21:26, 15 Nov 2003 (UTC)
Yes, you're correct. ColinCren 08:10, 10 February 2007 (UTC)
Actually, quoting from Foley et al., Computer Graphics: Principles and Practice, p. 792: "It might seem that we would need only to run a conventional ray tracer "backward" from the light sources to the eye to achieve these effects. This concept has been called backward ray tracing, to indicate that it runs in the reverse direction from regular ray tracing, but it is also known as forward ray tracing to stress that it follows the actual path from the lights to the eye. We call it ray tracing from the light sources to avoid confusion!" Sampo Smolander (talk) 20:15, 25 November 2007 (UTC)
You write "... to decrease the energy of the photon ...". I suppose this does not refer to the energy of a photon in a physical sense. Since the energy of a photon is related to its frequency by E = hf, changing the energy would change its frequency. On the other hand, the frequency distribution of light determines its color. --Borishollas (talk) 17:23, 8 December 2007 (UTC)
Energy, in the sense of joules, is not what is literally intended. Rather, a "photon" in photon mapping refers to a collection of light particles; when they strike an object, some are reflected and some are absorbed. In this sense, the "energy" (the total number of particles in the collection) decreases. The Monte Carlo method more accurately describes what happens in collisions, except that it treats the photons as individual light particles instead of as a group.
On another note, frequency does NOT necessarily determine color in real life. For example, there is no frequency of a photon for pink, rather the human eye interprets a combination of different frequencies as that color. (Similar to how yellow can be seen from yellow photons as well as from a mixture of red and green ones) Xcelerate (talk) 01:15, 18 January 2008 (UTC)
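To illustrate the point made above: in an RGB renderer the photon's "energy" is usually a per-channel power triple, and a surface hit scales each channel by the surface reflectance, so losing energy changes the carried color balance without any physical frequency being involved. The sketch below is illustrative only; the function and parameter names are hypothetical, not from any particular implementation.

```python
def scatter(photon_power, surface_reflectance):
    """Attenuate an RGB photon 'energy' (power) at a surface hit.

    The 'photon' stands in for many physical photons, so 'losing energy'
    means the per-channel power shrinks; no physical wavelength changes.
    (Illustrative sketch; names are hypothetical.)
    """
    return tuple(p * r for p, r in zip(photon_power, surface_reflectance))

# A white photon hitting a reddish surface keeps more red power than blue:
power = scatter((1.0, 1.0, 1.0), (0.8, 0.3, 0.2))
```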
I organized the article, as the original one had information about various parts of the photon mapping method scattered about. I also corrected some incorrect information (including my own), that originated from the belief that radiance estimates came from photon density or irradiance estimates. I expounded on the common effects that photon mapping is used for as well. Also, I noticed that some information about different implementations of photon mapping was included before (or in replacement of) the actual method, so I moved these to the "Variations" section. I also added information about how exactly radiance was construed from a photon map, which was not present before. I hope my edits clarify and make this article seem cleaner. Comments? Xcelerate (talk) 04:49, 8 March 2008 (UTC)
In the article, it is said that the bias comes from the fact that if you average infinitely many renderings, you don't get the correct solution. Although I agree with the fact itself, I am wondering why this is taken as the definition of bias, and not "if you take an infinite number of photons", which would make more sense imho. In a statistical sense, the estimated illumination at any given pixel is not different from the expected one if you take an infinite number of photons, thus not creating any bias; i.e., the expected value equals the empirical mean over photons. Isn't it? Nbonneel (talk) 16:46, 27 August 2009 (UTC)
What you describe is known as being "consistent". An unbiased renderer is one that produces the correct image on average, which photon mapping does not. I'm not really sure who came up with the definitions. Xcelerate (talk) 16:15, 21 July 2010 (UTC)
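The biased-but-consistent behavior discussed above can be seen in the density estimate itself: averaging the k nearest photons over a disc of radius r blurs the illumination (bias for any finite photon count), yet the estimate converges as the number of photons grows. Below is a toy k-nearest-neighbor irradiance estimate on a flat 2D surface; it is a sketch, not code from the article or its references.

```python
import math

def irradiance_estimate(photons, x, k):
    """Toy k-nearest-neighbor density estimate at point x on a 2D surface.

    Each photon is ((px, py), power). Sum the k nearest photons' power and
    divide by the disc area pi*r^2, where r is the distance to the kth
    photon. The nonzero blur radius biases the result at finite photon
    counts, but the estimate is consistent as the count grows.
    """
    by_dist = sorted(photons,
                     key=lambda p: (p[0][0] - x[0])**2 + (p[0][1] - x[1])**2)
    nearest = by_dist[:k]
    kth = nearest[-1]
    r2 = (kth[0][0] - x[0])**2 + (kth[0][1] - x[1])**2
    return sum(p[1] for p in nearest) / (math.pi * r2)

# Unit grid of photons: true density is 1 photon of power 1 per unit area,
# so the (blurred, slightly biased) estimate should land near 1.0.
grid = [((i, j), 1.0) for i in range(-5, 6) for j in range(-5, 6)]
est = irradiance_estimate(grid, (0.0, 0.0), 20)
```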
Russian roulette alternative
The article reads:
- In the first pass of photon mapping, an alternative to using Russian roulette to determine direction is to give each photon an "energy" attribute. Each time the photon collides with an object, this attribute is also stored in the photon map, and the energy is then lowered. Once the energy of the photon is below a certain pre-determined threshold, the photon stops reflecting.
Is there literature behind this idea? It doesn't seem like a good idea to me, mainly because it breaks PM's consistency -- objects which are several reflections away from a light source will be under-illuminated. Some PM implementations will insert several entries in the PM as a photon's path is traced, but they still use RR to maintain consistency. Thoughts? — xDanielx T/C\R 09:54, 22 November 2010 (UTC)
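For comparison, the standard Russian-roulette scheme referred to above terminates each path probabilistically rather than at a fixed energy threshold, so paths of every length still occur with the right frequency and deep reflections are not systematically darkened. A minimal sketch (illustrative names, not from any cited implementation):

```python
import random

def bounces_with_roulette(reflectance, rng):
    """Russian-roulette path termination (illustrative sketch).

    At each surface hit the photon survives with probability equal to the
    reflectance, and a survivor keeps its full power. Unlike a hard energy
    cutoff, which kills every path past some depth, termination here is
    probabilistic, so the expected power transported at any bounce depth
    is preserved.
    """
    bounces = 0
    while rng.random() < reflectance:
        bounces += 1
    return bounces

# For reflectance 0.5 the bounce count is geometric with mean 0.5/(1-0.5) = 1.
rng = random.Random(1)
mean = sum(bounces_with_roulette(0.5, rng) for _ in range(20000)) / 20000
```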
What about wavelength?
It says in the article that the intersection point and incoming direction are stored in the photon map, but isn't the wavelength (the color) of the photon stored here too? —Kri (talk) 23:12, 15 July 2011 (UTC)
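As a note on the question above: in most RGB renderers the photon record carries the "color" as an RGB power triple rather than a physical wavelength; a spectral renderer would store a wavelength or spectral samples instead. A hypothetical record might look like the following (field names are illustrative, not from a specific photon-map implementation):

```python
from dataclasses import dataclass

@dataclass
class Photon:
    """Hypothetical photon-map record (illustrative field names)."""
    position: tuple   # (x, y, z) hit point on a surface
    incoming: tuple   # unit vector: direction the photon arrived from
    power: tuple      # RGB power triple -- this is where the 'color' lives

p = Photon(position=(0.0, 1.0, 0.0),
           incoming=(0.0, -1.0, 0.0),
           power=(1.0, 0.5, 0.25))
```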