Do 3D programs calculate perspective?

@Berumen635

Posted in: #Perspective

Recently I've started reading up on colour theory, perspective and such fundamentals.
There are several things I find puzzling, and this question is motivated by the issue 'How trustworthy are 3D simulations in regards to perspective?'.

From what I understand, the way light and our eyes interact basically decides what we perceive as perspective. That is, perspective is an effect/interpretation of what we see, not something real in the way light and objects are.
Therefore, even though I can approximate and express perspective using things like vanishing points, I've been wondering how the perspective for a given point of view would actually be calculated. And from that I came to ask myself how any 3D program that displays models knows how to calculate it.
For example, how does a program know how small, and at which position on the screen, to display one part of an object relative to other parts?

My specific question is: does a 3D program calculate perspective, and if so, what information does it use? Or is perspective merely a side effect (in the viewer's mind) of what is actually calculated/displayed, and if so, what is actually calculated?

I feel I can't express my question very well, so here is the example again:
If I have a simple wooden plank model (a rectangle) and look at it from different points of view in a program, I could take screenshots and draw the vanishing lines and points for each of them. The side 'further away' would look smaller even though in reality the sides are the same size. This is what I mean by perspective in this question. In nature that would be an effect of how my eyes work, since the sides are the same size. How about in the simulation? Is it my eyes/mind that cause me to see this perspective, or does the program actually calculate it, and if so, how?

I don't know whether this is the most appropriate SE site for my question; I also had Computer Graphics SE in mind, but I'm not aware of any other art-related ones.



2 Comments


 

@Gail6891361

TL;DR = YES!

OK, this question would be a better fit for Computer Graphics SE. But since you ask, I'll answer here, with a shorter answer than CG.SE would warrant.


'How trustworthy are 3D simulations in regards to perspective?'.


As trustworthy as you make them. A standard 3D model is usually transformed by a 4-by-4 matrix and then divided for perspective (see 3D projection on Wikipedia). This is reasonably accurate, as in, as accurate as any three-point perspective you would draw by hand.
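To make the 'divided for perspective' step concrete, here is the idealized pinhole projection written out; the focal length f is an assumed stand-in for the camera's field of view, not a value the answer specifies:

x' = f * x / z,  y' = f * y / z

A point at twice the z distance lands at half the offset from the image centre, which is exactly why the far end of a plank draws smaller.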

The standard camera model is too perfect, though: it assumes a perfect pinhole camera and a perfect capture plane, as does classical three-point perspective, by the way. A real camera lens has all kinds of problems.

In essence, 3D graphics people know how to simulate all of these non-ideal things. It is just a question of how long you're willing to wait for the image to finish. Raytracers make these effects trivial to solve, at the cost of computation speed.


If I have a simple wooden planket model (rectangle) and look at it from different points of view in a program, I could take screenshots and draw the vanishing lines and points for each of these.


Yes, indeed. You can even do this in reverse: take a picture that is in perspective and correct it to a flat representation. In any case, the 3D application is perfectly capable of drawing these lines for you; there is no need to do it manually.
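As a rough sketch of that reverse correction, here is one way to do it with OpenCV's homography functions; the file name and corner pixel coordinates are made-up placeholders you would replace with values measured from your own screenshot:

```python
import cv2
import numpy as np

# Load a screenshot of the plank seen in perspective (placeholder name).
img = cv2.imread("plank_screenshot.png")

# Four corners of the plank's face as they appear in the image,
# in pixel coordinates (made-up example values).
src = np.float32([[120, 80], [520, 110], [560, 380], [90, 340]])

# Where those corners should land in a flat, head-on view:
# a 400 x 200 rectangle, since the real plank's sides are equal.
dst = np.float32([[0, 0], [400, 0], [400, 200], [0, 200]])

# The 3x3 homography that undoes the perspective mapping for this plane.
H = cv2.getPerspectiveTransform(src, dst)

# Re-render the plank as if viewed straight on.
flat = cv2.warpPerspective(img, H, (400, 200))
cv2.imwrite("plank_flat.png", flat)
```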


Is it my eyes/mind that cause me to see this perspective or does the program actually calculate it, and if so, how?


I don't know how much mathematics you know (did you do linear algebra in school?), but basically it works like this:


You model the points of your object in a local coordinate space.
You concatenate the affine transformation matrices that take local-space coordinates to world-space coordinates and from there to camera-space coordinates.
You apply the perspective transformation and then divide by the z distance (see the sketch after this list).

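A minimal numerical sketch of those three steps, assuming NumPy, a camera looking down the z axis, and a made-up focal length of 2; this is not any particular renderer's code, just the textbook pipeline written out for a tilted plank:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_x(angle):
    """4x4 rotation about the x axis (angle in radians)."""
    c, s = np.cos(angle), np.sin(angle)
    m = np.eye(4)
    m[1, 1], m[1, 2] = c, -s
    m[2, 1], m[2, 2] = s, c
    return m

def perspective(f):
    """Minimal perspective matrix: scales x and y by the focal length f
    and copies z into w, so the later divide by w is a divide by depth."""
    m = np.zeros((4, 4))
    m[0, 0] = f
    m[1, 1] = f
    m[2, 2] = 1.0
    m[3, 2] = 1.0
    return m

# 1. Model the object's points in a local coordinate space:
#    four corners of a unit plank, as homogeneous column vectors.
local_pts = np.array([
    [-1.0, -1.0, 0.0, 1.0],
    [ 1.0, -1.0, 0.0, 1.0],
    [ 1.0,  1.0, 0.0, 1.0],
    [-1.0,  1.0, 0.0, 1.0],
]).T

# 2. Concatenate the local-to-world and world-to-camera transforms:
#    tilt the plank away from the viewer and push it 5 units ahead.
local_to_world = rotation_x(np.radians(60))
world_to_camera = translation(0.0, 0.0, 5.0)
model_view = world_to_camera @ local_to_world

# 3. Apply the perspective transform, then divide by the z distance.
clip = perspective(2.0) @ model_view @ local_pts
screen = clip[:2] / clip[3]      # the perspective divide

# The two corners on the far edge print closer together than the two
# on the near edge -- that shrinking is the perspective in question.
print(screen.T)
```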

An old animation that I did way back describing the process can be found here (2.5 MB QT movie).



 

@Kaufman565

The two processes are actually the reverse of each other. In the human visual system, we are trying to interpret into a sort of 3D internal reality the 2D picture we receive through our eyes (with a lot of help from the parts of our brain that recognize things, of course). In order to do that well, we need more than one version of the picture. That can come from observing over a period of time, or from observing from more than one point of view (through two eyes, or from different head positions, etc.) and, to a lesser extent, from the near/far relationship you can detect by having to focus your eyes.

When you're looking at a picture, like a non-stereo photograph, neither the change of focus nor the shift in perspective is available to you, so anything you think you're seeing in terms of 3-dimensionality is something your brain is doing based on both the picture itself and what you "know" about what you're seeing in the picture. (The word know is in scare quotes because your mind can easily be fooled. Are you looking at a big thing in the distance or a small thing up close? Michael Paul Smith's Elgin Park photo series fools just about everybody at first.)

In a 3D modelling/rendering program, we create a 3D "reality" out of objects with "real" height, width, depth, geometry and optical properties. In other words, we start with what the viewer eventually has to reconstruct in their brain. All that remains is to pick a point of view and an angle of view, and generate the 2D picture as a viewer (or camera) at that one specific position would see it, and that is a straightforward mathematical process. In a static image, that would be one perspective from one point of view, but there is absolutely no difference between viewing an image created by a 3D program and one taken by a camera (assuming, of course, that the materials, shading and lighting are good simulations).

Drawing or painting a convincing representation of something imagined is much less straightforward, but that's mostly because you need to both generate an internal model of what you're trying to draw/paint and imagine how that would be seen from your desired point of view, then render that into 2-dimensional form. There are a number of "tricks" we can use to help with the process (perspective geometry lines and so forth); the real problem is in overcoming the icons of our thoughts (what things are) so that we can render them as they would be seen and felt. Those who can do it well can make you believe you've seen things that never were; those who can't escape what they "know" give away the game every time they try.


