geometry, the vanishing point is the image of the point at infinity associated with L, as the sightline from O through the vanishing point is parallel to L.

Vanishing Line
As a vanishing point originates in a line, so a vanishing line originates in a plane α that is not parallel to the picture plane π. Given the eye point O, and β the plane parallel to α and passing through O, the vanishing line of α is β ∩ π. For example, when α is the ground plane and β is the horizon plane, the vanishing line of α is the horizon line β ∩ π. Anderson notes, "Only one particular vanishing line occurs, often referred to as the 'horizon'." To put it simply, the vanishing line of some plane, say α, is obtained by intersecting the image plane with another plane, say β, parallel to the plane of interest (α) and passing through the camera centre. For different sets of lines parallel to this plane α, their respective vanishing points will lie on this vanishing line. The horizon line is a theoretical line that represents the eye level of the observer. If the object is below the horizon line, its vanishing lines angle up to the horizon line; if the object is above it, they slope down. All vanishing lines end at the horizon line.

Properties of vanishing points
1. Projections of two sets of parallel lines lying in some plane πA appear to converge, i.e. meet at the vanishing point associated with that pair, on a horizon line, or vanishing line, H formed by the intersection of the image plane with the plane parallel to πA and passing through the pinhole.
Proof: Consider the ground plane π, given as y = c, which is, for the sake of simplicity, orthogonal to the image plane. Also consider a line L that lies in the plane π and is defined by the equation ax + bz = d. Using a perspective pinhole projection with focal length f, a point (x, y, z) on L projects onto the image plane at
x′ = f·x/z = f·(d − b·z)/(a·z),  y′ = f·y/z = f·c/z.
This is the parametric representation of the image L′ of the line L, with z as the parameter. As z → −∞ it converges to the point (−f·b/a, 0) on the x′ axis of the image plane. This is the vanishing point corresponding to all parallel lines with slope −b/a in the plane π. All vanishing points associated with lines of different slopes belonging to the plane π will lie on the x′ axis, which in this case is the horizon line.
2. Let A, B, and C be three mutually orthogonal straight lines in space, and vA, vB, vC be the three corresponding vanishing points respectively. If we know the coordinates of one of these points, say vA, and the direction of a straight line on the image plane which passes through a second point, say vB, we can compute the coordinates of both vB and vC.
3. Let A, B, and C be three mutually orthogonal straight lines in space, and vA, vB, vC be the three corresponding vanishing points respectively. The orthocentre of the triangle with vertices at the three vanishing points is the intersection of the optical axis and the image plane.

Detection of vanishing points
Several methods for vanishing point detection make use of the line segments detected in images. Other techniques involve considering the intensity gradients of the image pixels directly. A significantly large number of vanishing points are present in an image; therefore, the aim is to detect the vanishing points that correspond to the principal directions of a scene. This is generally achieved in two steps. The first step, called the accumulation step, as the name suggests, clusters the line segments with the assumption that a cluster will share a common vanishing point. The next step finds the principal clusters present in the scene and is therefore called the search step. In the accumulation step, the image is mapped onto a bounded space called the accumulator space. The accumulator space is partitioned into units called cells. Barnard [4] assumed this space to be a Gaussian sphere centred on the optical centre of the camera. A line segment on the image corresponds to a great circle on this sphere, and the vanishing point in the image is mapped to a point. The Gaussian sphere has accumulator cells whose counts increase when a great circle passes through them, i.e. when, in the image, a line segment intersects the vanishing point. Several modifications have been made since, but one of the most efficient techniques has been to use the Hough Transform, mapping the parameters of the line segment to the bounded space. Cascaded Hough Transforms have been applied for multiple vanishing points. The process of mapping from the image to the bounded spaces causes the loss of the actual distances between line segments and points. In the search step, the accumulator cell with the maximum number of line segments passing through it is found. This is followed by removal of those line segments, and the search step is repeated until this count goes below a certain threshold. As more computing power is now available, points corresponding to two or three mutually orthogonal directions can be found.
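To make Property 1 concrete, the short Python sketch below projects sample points on two parallel lines of the ground plane through a pinhole with an assumed focal length f and shows the image points approaching the predicted vanishing point (−f·b/a, 0). The coefficients, focal length and helper names are illustrative assumptions, not values from the text.

```python
# Numerical check of Property 1: parallel lines in the ground plane y = c,
# written as a*x + b*z = d, project to image lines that meet at (-f*b/a, 0).
def project(point, f=1.0):
    """Pinhole projection onto the image plane: (x, y, z) -> (f*x/z, f*y/z)."""
    x, y, z = point
    return (f * x / z, f * y / z)

def point_on_line(a, b, d, c, z):
    """A point with depth z on the line a*x + b*z = d lying in the plane y = c."""
    x = (d - b * z) / a
    return (x, c, z)

a, b, c, f = 2.0, 3.0, -1.0, 1.0
for d in (4.0, 10.0):                 # two parallel lines (same a, b; different d)
    for z in (10.0, 100.0, 10000.0):  # walk away from the camera
        print(d, z, project(point_on_line(a, b, d, c, z), f))
print("predicted vanishing point:", (-f * b / a, 0.0))
```

Running the sketch shows both lines' image points drifting towards the same limit on the x′ axis, regardless of d, which is exactly the statement of Property 1.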

Applications of vanishing points
1. Camera calibration: The vanishing points of an image contain important information for camera calibration. Various calibration techniques have been introduced that use the properties of vanishing points to find intrinsic and extrinsic calibration parameters.
2. 3D reconstruction: A man-made environment has two main characteristics – several lines in the scene are parallel, and a number of the edges present are orthogonal. Vanishing points aid in comprehending such an environment. Using sets of parallel lines in a plane, the orientation of the plane can be calculated from vanishing points. Torre [6] and Coelho [7] performed extensive investigation into the use of vanishing points to implement a full system. With the assumption that the environment consists of objects with only parallel or perpendicular sides, also called Lego-land, they recovered the 3D geometry of the scene using vanishing points constructed from a single image of the scene. Similar ideas are also used in the field of robotics, mainly in navigation and autonomous vehicles, and in areas concerned with object detection.

12.5 SUMMARY
• Centre of Projection: It is the location of the eye on which projected light rays converge.
• Projectors: Also called projection vectors, these are rays that start from the object scene and are used to create an image of the object on the viewing or view plane.
• A vanishing point is a point on the image plane of a perspective drawing where the two-dimensional perspective projections (or drawings) of mutually parallel lines in three-dimensional space appear to converge. When the set of parallel lines is perpendicular to the picture plane, the construction is known as one-point perspective, and their vanishing point corresponds to the oculus, or "eye point", from which the image should be viewed for correct perspective geometry. Traditional linear drawings use objects with one to three sets of parallels, defining one to three vanishing points.
• Orthographic projection is a common method of representing three-dimensional objects, usually by three two-dimensional drawings in each of which the object is viewed along parallel lines that are perpendicular to the plane of the drawing. For example, an orthographic projection of a house typically consists of a top view, or plan, a front view, and one side view.
• Three sub-types of orthographic projection are isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal.
• An orthographic projection is a way of representing a 3D object by using several 2D views of the object. Orthographic drawings are also known as multiviews. The most commonly used views are top, front, and right side.

• A parallel projection (or axonometric projection) is a projection of an object in three-dimensional space onto a fixed plane, known as the projection plane or image plane, where the rays, known as lines of sight or projection lines, are parallel to each other.
• First angle projection and third angle projection are the two main types of orthographic drawing, also referred to as 'working drawings'. The difference between first and third angle projection lies in the position of the plan, front and side views.
• The principal planes include the Frontal Plane, Profile Plane, and Horizontal Plane. In addition to these, if a plane is placed at any other position, it is called an Auxiliary Plane; auxiliary planes are used to draw inclined surfaces of an object.
• Typically, an orthographic projection drawing consists of three different views: a front view, a top view, and a side view. Occasionally, more views are used for clarity. The side view is usually the right side, but if the left side is used, it is noted in the drawing.
• Orthographic projection uses multiple views of an object, from points of view rotated about the object's centre through increments of 90 degrees. In first angle projection, each view of the object is projected in the direction (sense) of sight of the object.
• First Angle Orthographic Projection: Orthographic projection is a way of drawing a 3D object from different directions. Usually a front, side and plan view are drawn so that a person looking at the drawing can see all the important sides.
• Third angle projection is one of the methods of orthographic projection used in technical drawing and normally comprises three views (perspectives): front, top and side. When using third angle projection to compile a diagram of the three views, we first draw the most prevalent side of the object as the front view.

12.6 KEYWORDS
• View Plane: It is an area of the world coordinate system which is projected onto the viewing plane. A viewing plane (projection plane) is set up perpendicular to w and aligned with (u,v). Up is the view up vector, whose projection onto the view plane is directed up.
• Centre of Projection: It is the location of the eye on which projected light rays converge.
• Projectors: Also called projection vectors, these are rays that start from the object scene and are used to create an image of the object on the viewing or view plane.

• Projection: It is the process of converting a 3D object into a 2D object. It is also defined as the mapping or transformation of the object onto the projection plane or view plane. The view plane is the display surface.
• Window: The method of selecting and enlarging a portion of a drawing is called windowing. The area chosen for this display is called a window.
• Viewport: An area on the display device to which a window is mapped.
• Clip: Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry.

12.7 LEARNING ACTIVITY
1. Create a session on different types of projection.
___________________________________________________________________________
2. Create a session on the various angles of parallel projections.
___________________________________________________________________________

12.8 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. Define the word frustum.
2. What is a vanishing point?
3. What do you mean by centre of projection?
4. Define the term projection.
5. What do you mean by orthographic projection?
Long Questions
1. Explain the frustum view volume with a diagram.
2. Explain the principle of the vanishing point.
3. Describe axonometric projection in detail.
4. Describe orthographic projection with a suitable diagram.
5. Explain different types of parallel projections.

B. Multiple Choice Questions
1. Which are the types of projections?
a. Parallel projection and perspective projection
b. Perpendicular projection and perspective projection
c. Parallel projection and perpendicular projection
d. None of these
2. Which is the correct type of parallel projection?
a. Orthographic projection and quadratic projection
b. Orthographic projection and oblique projection
c. Oblique projection and quadratic projection
d. None of these
3. Which is the projection in which the projection plane is allowed to intersect the x, y and z axes at equal distances?
a. Isotonic projection
b. Constructive solid geometric projection
c. Isometric projection
d. Back face removal projection
4. In which projection does the plane normal to the projection make equal angles with the three axes?
a. Wire frame projection
b. Constructive solid geometric projection
c. Isometric projection
d. Perspective projection
5. Which projection is commonly applied in engineering drawing?
a. Orthographic projection
b. Oblique projection
c. Perspective projection
d. None of these
Answers

1-a, 2-b, 3-c, 4-c, 5-a

12.9 REFERENCES
References
• Sawyer, F. Of Analemmas, Mean Time and the Analemmatic Sundial.
• Maynard, Patric (2005). Drawing Distinctions: The Varieties of Graphic Expression. Cornell University Press.
• McReynolds, Tom; Blythe, David (2005). Advanced Graphics Programming Using OpenGL. Elsevier.
Textbooks
• Godse, A. P. (1984). Computer Graphics. Technical Publications.
• Snyder, J. P. (1987). Map Projections—A Working Manual (US Geological Survey Professional Paper 1395). Washington, D.C.: US Government Printing Office.
• Snyder, John P. (1993). Flattening the Earth: Two Thousand Years of Map Projections, pp. 16–18. Chicago and London: The University of Chicago Press.
Websites
• https://en.wikipedia.org/wiki/Orthographic_projection
• https://www.gatevidyalay.com/tag/3d-viewing-transformation-in-computer-graphics
• http://www.rnlkwc.ac.in/pdf/study-material/comsc/2D_Transformation.pdf

UNIT 13 – THREE-DIMENSIONAL VIEWING
STRUCTURE
13.0 Learning Objectives
13.1 Introduction
13.2 Three-Dimensional Viewing Transformations
13.3 Summary
13.4 Keywords
13.5 Learning Activity
13.6 Unit End Questions
13.7 References

13.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
• Explain three-dimensional views.
• Illustrate the idea of transformations.
• Describe 3-D viewing transformations.

13.1 INTRODUCTION
A "picture" or "scene" consists of different objects. The individual objects are represented by coordinates called "model", "local" or "master" coordinates. The objects are fitted together to create a picture, using coordinates called world coordinates (WCS). The created picture can be displayed on the output device using physical device coordinates (PDCS). The mapping of the picture's elements from WCS to PDCS is called a viewing transformation.
Definition
A finite region selected in world coordinates is called a 'window', and the finite region on the output device onto which the window is mapped is called a 'viewport'.
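As a small illustration of this window-to-viewport idea, the Python sketch below maps a world-coordinate point from a window to a viewport with a simple scale-and-translate. The function name and the numeric values are illustrative assumptions, not taken from the unit.

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world-coordinate point inside `window` to `viewport` device coordinates.

    window   = (xw_min, yw_min, xw_max, yw_max) in world coordinates (WCS)
    viewport = (xv_min, yv_min, xv_max, yv_max) in device coordinates (PDCS)
    """
    xw_min, yw_min, xw_max, yw_max = window
    xv_min, yv_min, xv_max, yv_max = viewport
    sx = (xv_max - xv_min) / (xw_max - xw_min)   # horizontal scale factor
    sy = (yv_max - yv_min) / (yw_max - yw_min)   # vertical scale factor
    xv = xv_min + (xw - xw_min) * sx
    yv = yv_min + (yw - yw_min) * sy
    return xv, yv

# A window (0..10, 0..10) in WCS mapped to a 200 x 100 viewport starting at (50, 50):
print(window_to_viewport(5.0, 2.5, (0, 0, 10, 10), (50, 50, 250, 150)))  # (150.0, 75.0)
```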

Figure 13.1: Viewing pipeline
Viewing Transformations in Two Dimensions
The viewing transformation, a complete mapping from window to viewport, is shown in Figure 13.2.
Figure 13.2: Window and viewport
Introduction to Clipping
The process which divides the given picture into two parts, visible and invisible, and allows the invisible part to be discarded is known as clipping. For clipping we need a reference window called the clipping window.

Figure 13.3: Clipping
Point Clipping
Discard the points which lie outside the boundary of the clipping window. A point (X, Y) is retained when
Xmin ≤ X ≤ Xmax and Ymin ≤ Y ≤ Ymax
Figure 13.4: Point Clipping
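A minimal Python sketch of this point-clipping test follows; the window bounds and sample points are illustrative assumptions.

```python
def clip_point(x, y, x_min, y_min, x_max, y_max):
    """Return True if the point is inside the clipping window and should be kept."""
    return x_min <= x <= x_max and y_min <= y <= y_max

points = [(2, 3), (-1, 4), (5, 9), (7, 11)]
window = (0, 0, 6, 10)                       # x_min, y_min, x_max, y_max
visible = [p for p in points if clip_point(*p, *window)]
print(visible)                               # [(2, 3), (5, 9)] – the rest are discarded
```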

Line Clipping
Discard the parts of lines which lie outside the boundary of the window. We require:
1. To identify the point of intersection of the line with the window.
2. The portion in which it is to be clipped.
The lines are divided into three categories:
i. Invisible
ii. Visible
iii. Partially visible (clipping candidates)
To clip, we give each endpoint a 4-bit code representation, defined by dividing the area containing the window as shown below.
Figure 13.5: Line clipping
Each bit takes the value 0 or 1. The coding is as follows:
Bit value = 1 if the point lies outside the boundary, or 0 if the point lies inside the boundary (Xmin ≤ X ≤ Xmax and Ymin ≤ Y ≤ Ymax).
Bit 1 tells you the position of the point relative to Y = Ymax
Bit 2 tells you the position of the point relative to Y = Ymin
Bit 3 tells you the position of the point relative to X = Xmax
Bit 4 tells you the position of the point relative to X = Xmin
Figure 13.6: Bit code representation
Rules for the visibility of the line:
1. If both end points have bit code 0000, the line is visible.
2. If at least one of the end points is non-zero and
i. the logical AND of the codes is 0000, then the line is partially visible (a clipping candidate);
ii. the logical AND of the codes is non-zero, then the line is not visible.

Cohen-Sutherland Line Clipping Algorithm
For each line:
1. Assign codes to the endpoints.
2. Accept if both codes are 0000, and display the line.
3. Perform a bitwise AND of the codes.
4. Reject if the result is not 0000, and return.
5. Choose an endpoint outside the clipping rectangle.
6. Test its code to determine which clip edge was crossed and find the intersection of the line and that clip edge (test the edges in a consistent order).
7. Replace the endpoint (selected above) with the intersection point.
8. Repeat.
Figure 13.7: Cohen-Sutherland Line Clipping Algorithm
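The steps above can be sketched compactly in Python. The outcode layout mirrors Figure 13.6 (bits for y > Ymax, y < Ymin, x > Xmax, x < Xmin); the window bounds and sample line in the example are illustrative assumptions.

```python
# A compact sketch of the Cohen-Sutherland procedure listed above.
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, x_min, y_min, x_max, y_max):
    code = 0
    if y > y_max: code |= TOP
    if y < y_min: code |= BOTTOM
    if x > x_max: code |= RIGHT
    if x < x_min: code |= LEFT
    return code

def cohen_sutherland(x1, y1, x2, y2, x_min, y_min, x_max, y_max):
    """Return the clipped segment as (x1, y1, x2, y2), or None if fully rejected."""
    c1 = outcode(x1, y1, x_min, y_min, x_max, y_max)
    c2 = outcode(x2, y2, x_min, y_min, x_max, y_max)
    while True:
        if c1 == 0 and c2 == 0:          # trivially accept: both endpoints inside
            return x1, y1, x2, y2
        if c1 & c2:                      # trivially reject: both outside the same edge
            return None
        c = c1 or c2                     # pick an endpoint that lies outside
        if c & TOP:
            x, y = x1 + (x2 - x1) * (y_max - y1) / (y2 - y1), y_max
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (y_min - y1) / (y2 - y1), y_min
        elif c & RIGHT:
            y, x = y1 + (y2 - y1) * (x_max - x1) / (x2 - x1), x_max
        else:                            # LEFT
            y, x = y1 + (y2 - y1) * (x_min - x1) / (x2 - x1), x_min
        if c == c1:                      # replace the chosen endpoint with the intersection
            x1, y1, c1 = x, y, outcode(x, y, x_min, y_min, x_max, y_max)
        else:
            x2, y2, c2 = x, y, outcode(x, y, x_min, y_min, x_max, y_max)

# A line from (-2, 3) to (8, 7) clipped against the window (0, 0)-(6, 6):
print(cohen_sutherland(-2, 3, 8, 7, 0, 0, 6, 6))   # (0, 3.8, 5.5, 6)
```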

Introduction to Polygon Clipping
Sutherland-Hodgman Polygon Clipping Algorithm
1. The polygon is stored by its vertices and edges, say v1, v2, v3, …, vn and e1, e2, e3, …, en.
2. As the polygon is clipped by a window, we need 4 clippers: a left clipper, a right clipper, a bottom clipper and a top clipper.
3. After clipping we get a different set of vertices, say v1′, v2′, v3′, …, vn′.
4. Redraw the polygon by joining the vertices v1′, v2′, v3′, …, vn′ appropriately.
Algorithm
1. Read v1, v2, v3, …, vn, the coordinates of the polygon.
2. Read the clipping window (Xmin, Ymin), (Xmax, Ymax).
3. For every edge do {
4. Compare the vertices of each edge of the polygon with the plane taken as the clipping plane.
5. Save the resulting intersections and vertices in the new list } // according to the possible relationships between the edge and the clipping boundary.
6. Draw the resulting polygon.
The output of the algorithm is a list of polygon vertices, all of which are on the visible side of the clipping plane. Here, the intersection of the polygon with the clipping plane is a line, so every edge is individually compared with the clipping plane. This is achieved by considering the two vertices of each edge relative to the clipping boundary or plane. This results in 4 possible relationships between the edge and the clipping plane.
1st Possibility
If the 1st vertex of an edge lies outside the window boundary and the 2nd vertex lies inside the window boundary, the point of intersection of the edge with the window boundary and the second vertex are added to the output vertex list: (v1, v2) → (v1′, v2).
Figure 13.8: 1st possibility
2nd Possibility
If both the vertices of an edge are inside the window boundary, only the second vertex is added to the vertex list.

Figure 13.9: 2nd possibility
3rd Possibility
If the 1st vertex is inside the window and the 2nd vertex is outside, only the intersection point is added to the output vertex list.
Figure 13.10: 3rd possibility
4th Possibility
If both vertices are outside the window, nothing is added to the vertex list.
Once all vertices are processed for one clipping boundary, the output list of vertices is clipped against the next window boundary, going through the above 4 possibilities. We have to consider the following points:
1. The visibility of the point: we apply the inside-outside test.
2. Finding the intersection of the edge with the clipping plane.
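A short Python sketch of the Sutherland–Hodgman pass described above follows: the polygon is clipped against each of the four window boundaries in turn, and the four cases in the loop correspond to the four possibilities listed. The helper names, example triangle and window bounds are illustrative assumptions.

```python
def clip_against_edge(vertices, inside, intersect):
    """One clipper: keep the parts of the polygon on the visible side of one boundary."""
    output = []
    for i in range(len(vertices)):
        v1 = vertices[i - 1]                 # previous vertex (wraps around the polygon)
        v2 = vertices[i]                     # current vertex
        if inside(v2):
            if not inside(v1):               # 1st possibility: out -> in, add intersection and v2
                output.append(intersect(v1, v2))
            output.append(v2)                # 2nd possibility: in -> in, add v2
        elif inside(v1):                     # 3rd possibility: in -> out, add only the intersection
            output.append(intersect(v1, v2))
        # 4th possibility: out -> out, nothing is added
    return output

def sutherland_hodgman(polygon, x_min, y_min, x_max, y_max):
    def x_cut(x):
        def f(p, q):
            t = (x - p[0]) / (q[0] - p[0])
            return (x, p[1] + t * (q[1] - p[1]))
        return f
    def y_cut(y):
        def f(p, q):
            t = (y - p[1]) / (q[1] - p[1])
            return (p[0] + t * (q[0] - p[0]), y)
        return f
    clippers = [
        (lambda p: p[0] >= x_min, x_cut(x_min)),   # left clipper
        (lambda p: p[0] <= x_max, x_cut(x_max)),   # right clipper
        (lambda p: p[1] >= y_min, y_cut(y_min)),   # bottom clipper
        (lambda p: p[1] <= y_max, y_cut(y_max)),   # top clipper
    ]
    for inside, intersect in clippers:
        polygon = clip_against_edge(polygon, inside, intersect)
    return polygon

triangle = [(1, 1), (7, 3), (3, 7)]
print(sutherland_hodgman(triangle, 0, 0, 6, 5))
```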

We extend rays from the viewer's position through the corners of the viewing window; we define a volume that represents all objects seen by the eye. This viewing volume is shown in the left diagram of the figure. Anything outside the volume will not be visible in the window. When we apply the perspective projection, objects further away from the viewer become smaller, and objects in front of the window appear larger. Logically, this is identical to "warping" the viewing pyramid into a viewing rectangular solid in which the sides of the viewing box are parallel. For example, the cube shown in the left viewing volume becomes warped to the non-parallel object shown on the right. Now, the process of clipping becomes much simpler.
Figure 13.11: Clipping in 3D

Clipping in 3D is similar to clipping in 2D. Everything outside of the canonical window that is not visible to the user is removed prior to display. Objects that are inside are retained, and objects that cross the window boundary need to be modified, or "clipped", to the portion that is visible. This is where the effect of the perspective transformation shown in the figure simplifies the process. If we were clipping to the sides of the pyramid as shown on the left, the calculations would be substantially more complex than the 2D clipping operations previously described. However, after the perspective transformation, clipping to the edges of the window is identical to clipping to the edges of the 2D window. The same algorithms can be used, looking at the x and y coordinates of the points to clip. To complicate matters, however, we have the added capability in 3D of defining clipping planes that are parallel to the viewing window, but at different depths from the viewer. These are often referred to as "near" and "far" clipping planes, as shown in the figure. The concept is that objects that are too close to the viewer, or too far away, are not visible and should not be considered. In addition, without clipping against the near clipping plane, you would see objects that were behind the camera! If it were a simple matter of culling objects based on their depths and clipping those that fell between the two planes, it would be no problem. However, the complexity arises when objects cross the boundaries of the near and far planes, similar to when objects cross the edges of the window. The objects need to be "clipped" to the far and near planes as well as to the edges of the window.
Figure 13.12: The objects need to be "clipped" to the far and near planes

13.2 THREE-DIMENSIONAL VIEWING TRANSFORMATIONS
The basic idea of the 3D viewing transformation is similar to the 2D viewing transformation. That is, a viewing window is defined in world space that specifies how the viewer is viewing the scene. A corresponding viewport is defined in screen space, and a mapping is defined to transform points from world space to screen space based on these specifications. The viewport portion of the transformation is the same as in the 2D case. Specification of the window, however, requires additional information and results in a more complex mapping being defined. Defining a viewing window in world space coordinates is exactly like it sounds; sufficient information needs to be provided to define a rectangular window at some location and orientation.

The usual viewing parameters that are specified are:
• Eye Point: the position of the viewer in world space.
• Look Point: the point that the eye is looking at.
• View Distance: the distance of the window from the eye.
• Window Size: the height and width of the window in world space coordinates.
• Up Vector: which direction represents "up" to the viewer; this parameter is sometimes specified as an angle of rotation about the viewing axis.
These parameters are illustrated in the figure.
Figure 13.13: Parameters specified as an angle of rotation about the viewing axis
The Eye Point to the Look Point forms a viewing vector that is perpendicular to the viewing window. If you want to define a window that is not perpendicular to the viewing axis, additional parameters need to be specified. The Viewing Distance specifies how far the window is from the viewer. Note, from the reading on projections, that this distance will affect the perspective calculation. The window size is straightforward. The Up Vector determines the rotation of the window about the viewing vector. From the viewer's point of view, the window is the screen. To draw points at their proper position on the screen, we need to define a transformation that converts points defined in world space to points defined in screen space. This transformation is the same as the transformation that positions the window so that it lies on the XY plane, centred about the origin of world space. The process of transforming the window, using the specified parameters, to the origin, aligned with the XY plane, can be broken into the following steps:
1. Compute the centre of the window and translate it to the origin.
2. Perform two rotations about the X and Y axes to put the window in the XY plane.
3. Use the Up Vector to rotate the window about the Z axis and align it with the positive Y axis.
4. Use the Window Height and Width to scale the window to the canonical size.
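One common way to realise these steps is a look-at style matrix built from the Eye Point, Look Point and Up Vector, followed by the window scaling of step 4. The NumPy sketch below is a minimal version of that construction, not the unit's exact derivation: the parameter values are illustrative, and the canonical window here is assumed to be 2 units across.

```python
import numpy as np

def viewing_matrix(eye, look, up):
    """World-to-view transformation: rotate the viewing frame onto the axes and
    translate the eye to the origin (steps 1-3 above, expressed as a look-at matrix)."""
    eye, look, up = map(np.asarray, (eye, look, up))
    n = eye - look                      # viewing axis (from the look point towards the eye)
    n = n / np.linalg.norm(n)
    u = np.cross(up, n)                 # "right" direction
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)                  # true "up", perpendicular to n and u
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = u, v, n          # rotation rows
    m[:3, 3] = -m[:3, :3] @ eye                     # then translate the eye to the origin
    return m

def window_scale(width, height):
    """Step 4: scale the window to the canonical size (assumed 2 units across here)."""
    return np.diag([2.0 / width, 2.0 / height, 1.0, 1.0])

M = window_scale(4.0, 3.0) @ viewing_matrix(eye=(0, 0, 10), look=(0, 0, 0), up=(0, 1, 0))
p = np.array([1.0, 1.5, 0.0, 1.0])      # a world-space point in homogeneous coordinates
print(M @ p)                            # the point expressed in the canonical viewing frame
```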

These four steps can be combined into a single transformation matrix that can be applied to all points in world space. After the transformation, points are ready for final projection, clipping, and drawing to the screen. The perspective transformation occurs after points have been transformed through the viewing transformation. The perspective and viewport transformations will not be repeated here.
Text Clipping
• Depends on the methods used to generate characters and the requirements of a particular application.
• Methods for processing character strings relative to a window boundary:
• All-or-none string clipping strategy.
• All-or-none character clipping strategy.
• Clip the components of individual characters.
All-or-None String Clipping Strategy
• Simplest method, fastest text clipping.
• If the whole string is inside the clip window, keep it; otherwise discard it.
• A bounding rectangle is considered around the text pattern.
• If the bounding rectangle overlaps the window boundaries, the string is rejected.
Figure 13.14: Text clipping using a bounding rectangle about the entire string
All-or-None Character Clipping Strategy
• Discard or reject any individual character that overlaps a window boundary, i.e., discard those characters that are not completely inside the window.
• Compare the boundary limits of individual characters with the window.
• Any character which is outside or overlapping the window boundary is clipped.

Figure 13.15: Text clipping using a bounding rectangle about individual characters
Clip the Components of Individual Characters
• Treat characters the same as lines.
• If an individual character overlaps a clip window boundary, clip off the parts of the character that are outside the window.
Figure 13.16: Text clipping is performed on the components of individual characters.
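The first two strategies can be sketched as simple bounding-box tests in Python, assuming each character occupies a fixed-size axis-aligned box. The box model, function names, window and string below are illustrative assumptions.

```python
def inside(box, window):
    """True if box = (x_min, y_min, x_max, y_max) lies completely inside the window."""
    return (window[0] <= box[0] and box[2] <= window[2] and
            window[1] <= box[1] and box[3] <= window[3])

def clip_string_all_or_none(text, x, y, char_w, char_h, window):
    """All-or-none string clipping: keep the string only if its whole bounding box fits."""
    box = (x, y, x + char_w * len(text), y + char_h)
    return text if inside(box, window) else ""

def clip_chars_all_or_none(text, x, y, char_w, char_h, window):
    """All-or-none character clipping: keep only the characters whose boxes fit."""
    kept = []
    for i, ch in enumerate(text):
        box = (x + i * char_w, y, x + (i + 1) * char_w, y + char_h)
        if inside(box, window):
            kept.append(ch)
    return "".join(kept)

window = (0, 0, 60, 20)
print(clip_string_all_or_none("GRAPHICS", 20, 5, 10, 10, window))   # "" - the string sticks out
print(clip_chars_all_or_none("GRAPHICS", 20, 5, 10, 10, window))    # "GRAP" - only the chars that fit
```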

13.3 SUMMARY
• Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model.
• The clipping method is dependent on the method of generation used for characters. If all characters of the string are inside the window, then we keep the string; if a character of the string is outside, then the whole string is discarded. Another method discards only those characters that are not completely inside the window.
• Text clipping is a process of clipping a string. In this process, we clip the whole character or only some part of it, depending upon the requirement of the application. In this method, if the whole string is inside the clip window then we keep it; otherwise, the string is completely removed.
• Text clipping is a process in which we remove those parts (portions) of a string that are outside the view pane (window). Text clipping can be done by various methods and techniques, which depend on the character generation method.
• Clipping is also one of the ways new words are created in English. It involves the shortening of a longer word, often reducing it to one syllable. Maths, which is a clipped form of mathematics, is an example of this. Informal examples include 'bro' from brother and 'dis' from disrespect.
• Clipping is a procedure that identifies those portions of a picture that are either inside or outside of our viewing pane. In the case of point clipping, we only show points on our window which are in the range of our viewing pane; other points which are outside the range are discarded.
• The difference between this strategy for a polygon and the Cohen-Sutherland algorithm for clipping a line: the polygon clipper clips against four edges in succession, whereas the line clipper tests the outcode to see which edge is crossed, and clips only when necessary.
• In computer graphics, the Cohen–Sutherland algorithm (named after Danny Cohen and Ivan Sutherland) is a line-clipping algorithm. The algorithm divides a 2D space into 9 regions, of which only the middle part (the viewport) is visible.
• There are five primitive types of clipping: point, line, polygon or area, curve, and text clipping. Classical line clipping algorithms include the Cohen–Sutherland algorithm, the Midpoint Subdivision algorithm, Liang–Barsky, and the Nicholl–Lee–Nicholl algorithm.
• The viewing pipeline in 3 dimensions is almost the same as the 2D viewing pipeline. Only after the definition of the viewing direction and orientation (i.e., of the camera) is an additional projection step done, which is the reduction of 3D data onto a projection plane.
• The term viewing pipeline describes a series of transformations which geometry data passes through to end up as image data displayed on a device. The 2D viewing pipeline describes this process for 2D data: object coordinates – world coordinates – viewing coordinates – normalized device coordinates – device coordinates.

13.4 KEYWORDS

• Clipping: Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry.
• 3 Dimensions: For three-dimensional images and objects, three-dimensional transformations are needed. These are translation, scaling, and rotation, also called the basic transformations, and they are represented using matrices.
• Viewing: Almost all 2D and 3D graphics packages provide means of defining the viewport size on the screen. The mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation (also called a window-to-viewport or windowing transformation).
• Transformations: Transformation refers to the mathematical operations or rules that are applied to a graphical image, consisting of a number of lines, circles, and ellipses, to change its size, shape, or orientation. It can also reposition the image on the screen. Transformations play a very crucial role in computer graphics.
• String: In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable.

13.5 LEARNING ACTIVITY
1. Create a session on 3D viewing transformation.
___________________________________________________________________________
2. Create a survey regarding 3D clipping.
___________________________________________________________________________

13.6 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. Define text clipping.
2. Define 3D clipping in simple words.
3. What is all-or-none character clipping?
4. What is all-or-none string clipping?
5. What is point clipping?

Long Questions
1. How is 3D clipping different from 2D clipping?
2. What steps are needed to transform the viewing window to the origin, aligned with the XY plane?
3. Explain the all-or-none string clipping strategy.
4. Explain the all-or-none character clipping strategy.
5. Explain how to clip the components of individual characters.

B. Multiple Choice Questions
1. Which are the basic transformations that are applied in the 3D plane?
a. Translation
b. Scaling
c. Rotation
d. All of these
2. What do you call the transformation in which an object can be shifted to any coordinate position in the 3-dimensional plane?
a. Translation
b. Scaling
c. Rotation
d. Shearing
3. When did 3-dimensional graphics become effective?
a. 1960
b. 1980
c. 1950
d. 1999
4. What is true for three-dimensional graphics?
a. They have two axes
b. They have three axes
c. They have only one axis
d. None of these

5. How can we represent a three-dimensional object?
a. Method
b. Equation
c. Point
d. None of these
Answers
1-d, 2-a, 3-b, 4-b, 5-b

13.7 REFERENCES
References
• Donald Hearn and M. Pauline Baker, Computer Graphics, Prentice Hall of India.
• J. D. Foley, A. van Dam, S. K. Feiner and R. L. Phillips, Computer Graphics: Principles and Practice, Addison-Wesley.
• Steven Harrington, Computer Graphics, McGraw-Hill.
• William M. Newman and Robert F. Sproull, Principles of Interactive Computer Graphics, Tata McGraw-Hill.
• J. D. Foley, A. van Dam, S. K. Feiner, Introduction to Computer Graphics.
Textbooks
• R. K. Maurya, Computer Graphics, John Wiley.
• David F. Rogers and J. Alan Adams, Mathematical Elements for Computer Graphics, Tata McGraw-Hill.
• David F. Rogers, Procedural Elements for Computer Graphics, Tata McGraw-Hill.
Websites
• https://www.graphics.cornell.edu/about/what-computer-graphics
• https://www.britannica.com/topic/computer-graphics
• https://open.umn.edu/opentextbooks/textbooks/420

