Sunday, May 1, 2016

The complicated way computers perceive perspective

So in a very informal way I thought I would get the "required" questions out of the way first.

 How are math and science related to this topic?

Though seemingly a tedious filler question, I can appreciate having something to transition from watching a seventeen-minute video to writing, so I'll roll with it. The math in this topic is mostly based around light: light reflection, light behavior, and the physics underneath it all. At one point a complicated formula for indirect light reflection was displayed, and that's before you count the need to count, multiply, and otherwise crunch pixels. The science portion is attributed more to how light interacts with objects. In order to have some idea of how light will react to a surface, one must know more than just the formula that makes it do so.

 Explain Projection and Rasterization, and contrast it with Ray Casting.

Well then...*takes very large breath*... Keeping things as simple as possible: when rendering became a thing, people had to figure out exactly how each facet of a scene interacts with itself or with other objects. The video uses basic, empty 3D space and triangles to explain how this was done. Projection and rasterization is a process by which a grid is placed over the "camera's" view of the object. This grid spans a box from the upper-left of the object to the lower-right. Once the object is within the grid's bounds, the computer uses mathematics and code to work out which of those "pixels" are taken up by the object. If a pixel contains a portion of the object, it is filled in accordingly; if not, it is simply left blank. While a massive breakthrough at the time, this process is simplistic compared to ray casting.

Ray casting, in contrast to projection and rasterization, essentially removes unnecessary light rays. In other words, instead of filling entire pixels wholesale, rays can prioritize some objects over others. With rasterization, objects would overlap in strange fashions because there was no built-in notion of which surface sat in front. With ray casting, whenever a ray finds intersection points shared by two objects, the computer stops at the first point, the one closest to the camera. That alone cements ray casting's usefulness above rasterization.
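The two ideas above can be sketched in a few lines of toy Python (my own illustration, not code from the video). Rasterization asks "which pixels does this triangle cover?"; ray casting asks "which object does this pixel's ray hit first?". All names here are made up for the example.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: positive when point P lies to the left of edge A->B.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    # Walk the triangle's bounding box (the upper-left-to-lower-right grid
    # from the video) and keep every pixel the triangle covers.
    (x0, y0), (x1, y1), (x2, y2) = tri
    xmin, xmax = max(0, min(x0, x1, x2)), min(width - 1, max(x0, x1, x2))
    ymin, ymax = max(0, min(y0, y1, y2)), min(height - 1, max(y0, y1, y2))
    covered = []
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            # Inside the triangle when all three edge tests agree in sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

def cast_ray(hits):
    # Each hit is (distance_from_camera, object). Ray casting keeps only
    # the nearest hit, which is exactly what resolves the overlap problem.
    return min(hits, key=lambda h: h[0]) if hits else None
```

For instance, `cast_ray([(5.0, "red sphere"), (2.0, "blue cube")])` picks the cube, because it is the first surface the ray meets on its way from the camera.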

What were three major problems with the Rasterization method, and how did Ray Tracing help solve them?

Good shadows, reflections, and refraction were three very big hurdles for rasterization. Shadows were fixed via secondary rays: when a ray from the camera position hits an object, a secondary ray travels from that point directly toward the light source. This lets the computer assign lighter or darker values based on the positions of the objects and the rays. At the same time, a reflection ray is cast using the angle of incidence; wherever that ray lands sends out another secondary ray and another reflection ray, and so on. If an object is translucent, a refraction ray is created using the index of refraction, which tells the computer which areas will receive refracted light. In other words, ray tracing really raised the bar for generating images.
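The three secondary-ray ideas boil down to a bit of vector math. A hedged sketch, with made-up function names, using the textbook reflection formula and Snell's law rather than anything shown in the video:

```python
import math

def in_shadow(dist_to_light, occluder_dists):
    # Shadow ray: the point is in shadow if any surface sits between
    # it and the light (i.e., is hit at a shorter distance).
    return any(d < dist_to_light for d in occluder_dists)

def reflect(d, n):
    # Reflection ray via angle of incidence: r = d - 2(d . n)n,
    # where d is the incoming direction and n the unit surface normal.
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def refract_angle(theta_in, n1, n2):
    # Refraction ray via Snell's law: n1 sin(theta_in) = n2 sin(theta_out).
    return math.asin(n1 * math.sin(theta_in) / n2)
```

A ray going straight down, `(0.0, -1.0)`, bounced off a floor with normal `(0.0, 1.0)`, comes straight back up; and a ray entering glass (index about 1.5) bends toward the normal, so its exit angle is smaller than its entry angle.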

Explain the difference between direct and indirect illumination? Why is this relevant to photorealistic rendering?

Direct illumination is light reflected directly from a source off of an object. Indirect illumination is light that has already bounced off of another object or surface before arriving. In photorealistic rendering, both of these principles should be present, because real scenes are lit by both.
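As a toy illustration (my own simplification, not a formula from the video), the difference is just how many bounces the light takes before it reaches the point you are shading:

```python
def direct_light(intensity, cos_theta):
    # Direct illumination: light straight from the source, scaled by
    # Lambert's cosine term for the angle it strikes the surface at.
    return intensity * max(cos_theta, 0.0)

def one_bounce_indirect(intensity, wall_albedo, cos_wall, cos_point):
    # Indirect illumination (simplest case): light first hits a wall,
    # is dimmed by the wall's albedo, then arrives at our point.
    return direct_light(intensity, cos_wall) * wall_albedo * max(cos_point, 0.0)
```

Even this crude version shows why indirect light matters: a point facing away from the lamp gets zero direct light, yet still receives the bounced contribution, which is what keeps real shadows from being pitch black.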

How was the issue of indirect illumination overcome, and what were the tradeoffs?

The issue was solved with a very complex rendering equation: a mathematical formula based on conservation of energy and Maxwell's equations, which simulates the indirect light arriving at each pixel. However, it couldn't handle transmission and subsurface scattering very well, and it was extremely difficult to calculate.
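The "very complex rendering equation" the video shows is known in graphics as the rendering equation; its usual form is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, d\omega_i
```

In words: the light leaving a point $x$ in direction $\omega_o$ equals the light the surface emits itself, plus the light arriving from every direction $\omega_i$ over the hemisphere $\Omega$, weighted by the surface's reflectance $f_r$ and the angle of arrival. That integral over every incoming direction is exactly why it is so difficult to calculate.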

Discuss Moore’s Law vs. Blinn’s Law and how they relate to this topic.

Moore's Law states that the number of transistors in a dense integrated circuit doubles approximately every two years. However, Blinn's Law slightly contradicts Moore's Law by stating that as technology advances, rendering time remains constant. This simply means that as standards are raised, so is the amount of work per frame, which keeps render times very unforgiving. In other words, the more "photorealistic" the image, the longer it takes to render, because there is simply more to render.
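A toy back-of-the-envelope version of that standoff (my own numbers, not figures from the video):

```python
def transistors(years, start=1.0):
    # Moore's Law: compute budget doubles roughly every two years.
    return start * 2 ** (years / 2)

def render_time(compute, complexity):
    # A frame's render time is the work to do divided by the power to do it.
    return complexity / compute

# After a decade, hardware is 32x more capable...
compute = transistors(10)  # 32.0
# ...but Blinn's Law says artists scale scene complexity by the same
# factor, so a frame takes just as long to render as it did ten years ago.
assert render_time(compute, 32.0) == render_time(1.0, 1.0)
```

The hardware gain never shows up as a shorter render queue; it shows up on screen as a more photorealistic frame.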

Do you agree that CGI is different from other art forms? Explain.

CGI certainly requires a different mindset and overall perspective. CGI is much more than painting, drawing, or any other medium, simply because you as an artist are never limited by your tools. The video says that CGI itself is a tool, and a very adaptable one at that. CGI has no foreseeable limit; one day maybe we won't even need actors or sets, and entire worlds may be created out of this amazing technology simply for our enjoyment.

While I really... Really, hate that guy, I do love it when science and math come together in a practical application. I love videos like this because the learning (almost) never feels like learning. There are several interesting points and perspectives brought to light by this video, but the end message remains in the eye of the beholder. An interesting piece indeed.
"It's just a tool! Like a paint brush, or a man in a rubber suit..." 16:20 The science of rendering photorealistic CGI ...........Can't say I have many of those in my tool shed...
