# COMPUTER GRAPHICS - IARE

Academic year: 2024

In black-and-white systems, the frame buffer that stores the pixel values is called a bitmap; in color systems it is called a pixmap. In a random-scan (vector) display, the electron beam of the CRT is directed only to the parts of the screen where the image is to be drawn.

Small holes in a metal plate (the shadow mask) separate the colored phosphors in the layer behind the front glass of the screen. Reflecting the electron beam significantly reduces the depth of the CRT envelope and consequently of the display. In addition to the keys of the main keyboard (used for typing text), keyboards usually also have a numeric keypad (for efficiently entering numerical data), a bank of editing keys (used in text-editing operations), and a row of function keys along the top (to easily invoke certain program functions).

Touch screens allow the user to select an option by pressing a specific part of the screen.

Mid-point Circle Algorithm
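The midpoint circle algorithm named above rasterizes a circle using only integer arithmetic and one decision parameter per step. A minimal Python sketch (the function name and argument order are illustrative, not from the notes):

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: integer-only rasterization of a circle
    centered at (xc, yc) with radius r. Returns a set of pixel coords."""
    points = set()
    x, y = 0, r
    p = 1 - r  # initial decision parameter
    while x <= y:
        # plot the eight symmetric points, one per octant
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + dx, yc + dy))
        x += 1
        if p < 0:          # midpoint inside the circle: keep y
            p += 2 * x + 1
        else:              # midpoint outside: step y down
            y -= 1
            p += 2 * (x - y) + 1
    return points
```

Because only one octant is computed and the rest mirrored, the loop runs roughly r/√2 times.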

Midpoint Ellipse Algorithm

Inside-Outside Tests

### Winding number method

In such cases, we can fill areas by looking for a specified interior color instead of searching for a boundary color (flood fill rather than boundary fill).

When a scan line passes through a polygon vertex, we must decide whether to add one intersection point or two to the intersection list. The decision depends on whether the two edges meeting at the vertex are both above, both below, or one above and one below the scan line: only when both edges are on the same side of the scan line (both above or both below) do we add two points; otherwise we add one.
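The vertex rule above is realized automatically by the common half-open edge rule, sketched here in Python (function name and polygon representation are illustrative):

```python
def scanline_intersections(poly, y):
    """x-coordinates where scan line `y` crosses the edges of `poly`,
    a list of (x, y) vertices in order (edges wrap around).

    Half-open rule: an edge counts if y lies in [min, max) of the edge's
    y-range. A vertex whose two edges straddle the scan line is counted
    once; a vertex whose edges lie on the same side is counted zero or
    two times, which preserves even intersection parity."""
    xs = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 <= y < y2) or (y2 <= y < y1):
            # linear interpolation for the crossing point
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    return sorted(xs)
```

Pixels between successive pairs of the sorted intersections are interior and get filled.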

## Two Dimensional Transformations

Basic Transformations

Translation

Rotation About the Origin

Homogeneous co-ordinates
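Homogeneous coordinates let translation and rotation about the origin compose by matrix multiplication, which is their main payoff. A NumPy sketch (function names are illustrative):

```python
import numpy as np

def translate(tx, ty):
    """3x3 homogeneous 2D translation matrix."""
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def rotate(theta):
    """3x3 homogeneous rotation about the origin by theta radians (CCW)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

def rotate_about(theta, px, py):
    """Rotation about an arbitrary pivot (px, py): translate the pivot to
    the origin, rotate, translate back. Matrices compose right-to-left."""
    return translate(px, py) @ rotate(theta) @ translate(-px, -py)

# Point (2, 1) rotated 90 degrees about (1, 1) lands at (1, 2).
q = rotate_about(np.pi / 2, 1.0, 1.0) @ np.array([2.0, 1.0, 1.0])
```

The same composition pattern (move to origin, transform, move back) applies to scaling about a fixed point.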

UNIT-4

## 2-Dimensional viewing

### Images on the Screen

• Windows and Clipping

A viewport is the section of the screen into which the contents of the window are mapped. When a window is "placed" in the world, only certain objects and parts of objects can be seen. Clipping is the procedure that identifies the parts of a picture that lie within the specified region (and therefore should be drawn) and the parts that lie outside it (and therefore should not be drawn).
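The window-to-viewport mapping is just a scale and a shift in each axis. A minimal sketch (the `(xmin, ymin, xmax, ymax)` rectangle layout is an assumption made here):

```python
def window_to_viewport(x, y, win, vp):
    """Map point (x, y) from a world-coordinate window to a screen
    viewport. Both rectangles use (xmin, ymin, xmax, ymax) layout."""
    wx0, wy0, wx1, wy1 = win
    vx0, vy0, vx1, vy1 = vp
    sx = (vx1 - vx0) / (wx1 - wx0)   # x scale factor
    sy = (vy1 - vy0) / (wy1 - wy0)   # y scale factor
    return vx0 + (x - wx0) * sx, vy0 + (y - wy0) * sy
```

If the window and viewport have different aspect ratios, the image is stretched accordingly.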

Additionally, there is a wide variety of algorithms designed to perform particular clipping operations, some of which are discussed in this unit:

• Cohen-Sutherland Line Clipping
• Cyrus-Beck Line Clipping
• Polygon (area) clipping: the Sutherland-Hodgman Algorithm

### Cohen-Sutherland Line Clipping

Each of the nine regions associated with a window is assigned a 4-bit region code. For example, if a point lies to the left of the window, the corresponding bit of its code is set to 1. After the codes are determined for each endpoint of the line, a logical AND operation on the two codes determines whether the line is completely outside the window.

If the logical AND of the endpoint codes is not zero, the line can be trivially rejected. For example, if one endpoint has the code 1001 while the other has the code 1010, the logical AND is 1000, indicating that the line segment lies entirely outside the window. On the other hand, if the endpoints have codes 1001 and 0110, the logical AND is 0000, and the line cannot be trivially rejected.

The logical OR of the endpoint codes determines whether the line falls completely within the window: if the OR is 0000, both endpoints (and hence the whole segment) are inside. The endpoints of the line segment are tested to see whether the line can be trivially accepted or rejected. If it cannot, an intersection of the line with a window edge is determined and the trivial rejection/acceptance test is repeated on the shortened segment.

To perform the trivial acceptance and rejection tests, we extend the edges of the window to divide the plane into nine regions. Each endpoint of the line segment is then assigned the code of the region in which it lies. If both codes have a 1 in the same bit position (the AND of the codes is not 0000), the line lies entirely outside the window.

If a line can be neither trivially accepted nor rejected, at least one of its two endpoints lies outside the window and the segment crosses a window edge. When a set bit (1) is found in an endpoint's code, the intersection I of the line with the corresponding window edge is calculated, and the outside endpoint is replaced by I.
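The outcode computation and the accept/reject/subdivide loop described above can be sketched as follows (the bit assignment is one common convention and varies between textbooks):

```python
# One common bit assignment for the 4-bit region code.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code of point (x, y) relative to the clip window."""
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip segment (x1,y1)-(x2,y2) to the window; return clipped
    endpoints, or None if the segment lies wholly outside."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 == 0 and c2 == 0:   # trivial accept: OR of codes is 0000
            return (x1, y1, x2, y2)
        if c1 & c2:               # trivial reject: AND of codes != 0000
            return None
        # Pick an endpoint outside the window and slide it to the
        # intersection with the window edge named by a set bit.
        c = c1 or c2
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            y, x = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1), xmax
        else:  # LEFT
            y, x = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1), xmin
        if c == c1:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
```

Each pass through the loop removes one set bit from one endpoint's code, so the loop terminates after at most four clips.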

### Liang-Barsky Line Clipping

If both codes are 0000 (equivalently, the bitwise OR of the codes is 0000), the line lies completely within the window: pass the endpoints to the drawing routine.
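Liang-Barsky treats the line parametrically, clipping the parameter interval [0, 1] against each boundary instead of computing outcodes. A sketch (function name and return convention are illustrative):

```python
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Liang-Barsky parametric line clipping against a rectangular
    window. Returns clipped endpoints, or None if wholly outside."""
    dx, dy = x2 - x1, y2 - y1
    # p < 0: the line proceeds from outside to inside this boundary;
    # p > 0: inside to outside; p == 0: parallel to the boundary.
    p = [-dx, dx, -dy, dy]
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    u1, u2 = 0.0, 1.0            # clipped parameter interval
    for pi, qi in zip(p, q):
        if pi == 0:
            if qi < 0:           # parallel and outside this boundary
                return None
        else:
            r = qi / pi
            if pi < 0:
                u1 = max(u1, r)  # entering: raise the lower bound
            else:
                u2 = min(u2, r)  # leaving: lower the upper bound
    if u1 > u2:                  # interval empty: line misses window
        return None
    return (x1 + u1 * dx, y1 + u1 * dy, x1 + u2 * dx, y1 + u2 * dy)
```

Compared with Cohen-Sutherland, each boundary is visited exactly once and intersection coordinates are computed only for the final interval.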

## Algorithm

### Sutherland - Hodgman Polygon Clipping

As the algorithm clips the polygon against each window boundary in turn, each polygon edge falls into one of four cases. For each case, zero, one, or two vertices are added to the output list of vertices defining the clipped polygon.
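One clipping pass of the four-case rule above can be sketched as follows; the helper-function interface (`edge_inside`, `edge_intersect`) is an assumption made for illustration:

```python
def clip_polygon(vertices, edge_inside, edge_intersect):
    """One Sutherland-Hodgman pass against a single clip edge.
    edge_inside(p) -> bool; edge_intersect(p, q) -> crossing point."""
    out = []
    for i in range(len(vertices)):
        p, q = vertices[i - 1], vertices[i]   # edge p -> q (wraps around)
        if edge_inside(q):
            if not edge_inside(p):            # outside -> inside: 2 adds
                out.append(edge_intersect(p, q))
            out.append(q)                     # inside -> inside: 1 add
        elif edge_inside(p):                  # inside -> outside: 1 add
            out.append(edge_intersect(p, q))
        # outside -> outside: 0 adds
    return out

# Example: clip a triangle against the half-plane x >= 0.
left_inside = lambda p: p[0] >= 0
def left_cross(p, q):
    t = (0 - p[0]) / (q[0] - p[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

clipped = clip_polygon([(-2, 0), (2, 0), (2, 4)], left_inside, left_cross)
```

The full algorithm runs this pass once per window boundary, feeding each pass's output to the next.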

UNIT-5

## 3D Object Representations

Polygon Surfaces

Polygon Tables

Plane equation and visible points

## Curved Surfaces

Spline Representations

## Sweep Representations

(In the physical drafting technique that spline curves model, several small weights are distributed along the length of a flexible strip to hold it in place on the drawing board while the curve is drawn.) In a sweep representation, we can also vary the orientation of the cross-section relative to the sweep path.

## Unit-6 Three Dimensional Transformations

• Scaling with respect to a Selected Fixed Position
• Three-Dimensional Viewing
• Viewing Pipeline
• Modelling Coordinates
• Viewing Transformation
• Projections
• Parallel Projection Classification
• Perspective Projection
• View Volumes
• Clipping

To verify a transformation, roughly plot the x and y values of the original and resulting triangles, and imagine the locations of the z values.

Rotation about an arbitrary axis is performed as a composite transformation:

1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the rotation axis coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Rotate the object so that the rotation axis is returned to its original orientation.
5. Translate the object so that the rotation axis is returned to its original position.

3D descriptions of objects must be projected onto the flat viewing surface of the output device.
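For the simple case of a rotation axis parallel to the z-axis, the translate, rotate, translate-back composition reads as follows (NumPy sketch; function names are illustrative):

```python
import numpy as np

def translate3(tx, ty, tz):
    """4x4 homogeneous 3D translation matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def rotate_z(theta):
    """4x4 rotation about the z-axis by theta radians (CCW)."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
    return m

def rotate_about_axis(theta, px, py):
    """Rotation about an axis parallel to z through (px, py, 0):
    move the axis onto the z-axis, rotate, move it back."""
    return translate3(px, py, 0) @ rotate_z(theta) @ translate3(-px, -py, 0)
```

For a general axis, steps 2 and 4 insert the extra alignment rotations listed above around this same core.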

The viewing-coordinate system is used in graphics packages as a reference for specifying the observer's viewing position and the position of the projection plane. The projection transformation is usually combined with clipping, visible-surface identification, and surface rendering. The workstation transformation maps coordinate positions on the projection plane to the output device. In a perspective projection, projections of distant objects are smaller than the projections of same-sized objects that are closer to the projection plane.

Orthographic parallel projections are produced by projecting points along parallel lines that are perpendicular to the projection plane. View window - a rectangular area in the view plane that controls how much of the scene is displayed; its edges are parallel to the viewing axes xv and yv. View volume - formed by the view window and the type of projection used.

A view volume is therefore bounded by six planes: a rectangular parallelepiped for a parallel projection, or a frustum (truncated pyramid) for a perspective projection. The purpose of 3D clipping is to identify and keep all surface segments within the view volume for display on the output device.

Unit-7

Visible-Surface Detection Methods

Problem definition of Visible-Surface Detection Methods

Characteristics of approaches

## Object-space Methods

• Object Coherence
• Face Coherence
• Edge Coherence
• Scan line Coherence
• Area and Span Coherence
• Depth Coherence
• Frame Coherence
• Back-Face Detection
• Depth-Buffer Method (Z-Buffer Method)
• Scan-Line Method
• Depth-Sort Method
• Binary Space Partitioning

The process is unrelated to the screen resolution or to individual pixels of the image, so its results remain applicable at different screen resolutions. Coherence means reusing results calculated for one part of a scene or image for other nearby parts. (If a face is small, we can sometimes assume that if part of the face is invisible to the viewer, the whole face is invisible.)

Pictures of the same scene at successive points in time are likely to be similar, despite small changes. If a surface's normal vector points toward the center of projection, it is a front face and can be seen by the viewer. The test is very simple: if the z component of the normal vector is positive (with the viewer looking along the +z axis), the polygon is a back face.
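The back-face test can be sketched as below; the sign convention (viewer looking along +z, normals pointing outward) is an assumption here and varies between texts:

```python
def face_normal(a, b, c):
    """Normal of triangle (a, b, c) with counter-clockwise winding,
    via the cross product of two edge vectors."""
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(normal, view=(0, 0, 1)):
    """Back face if the normal has a positive component along the
    viewing direction, i.e. it points away from the viewer."""
    return sum(n * v for n, v in zip(normal, view)) > 0
```

Culling back faces this way eliminates roughly half the polygons of a closed object before any per-pixel work.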

Initially, each depth-buffer entry is set to the maximum depth value (the depth of the far clipping plane). After all surfaces are processed, each pixel of the frame buffer holds the color of the visible surface at that pixel. This method is therefore less attractive in cases where only a few objects in the scene need to be rendered.
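The depth-buffer loop can be sketched as follows; the `depth_fn` surface interface is a simplification invented for illustration (real implementations rasterize polygons and interpolate depth):

```python
def zbuffer_render(width, height, surfaces, far=1.0):
    """Minimal z-buffer sketch. `surfaces` is a list of (depth_fn, color)
    pairs, where depth_fn(x, y) returns the surface's depth at a pixel
    or None if the surface does not cover it. Smaller depth = closer."""
    depth = [[far] * width for _ in range(height)]    # init to far plane
    frame = [[None] * width for _ in range(height)]   # frame buffer
    for depth_fn, color in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z is not None and z < depth[y][x]:
                    depth[y][x] = z        # record the nearer surface
                    frame[y][x] = color
    return frame
```

Surfaces may be processed in any order; the per-pixel depth comparison resolves visibility.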

For large images, the algorithm can be applied to, e.g., four image quadrants separately in order to reduce the size of the additional buffer required. Recall the basic idea of polygon filling: for each scan line that crosses a polygon, the algorithm locates the points of intersection of the scan line with the polygon's edges. If the intersection pattern of polygon edges does not change between successive scan lines, it is not necessary to repeat the depth calculations.

When there are only a few objects in the scene, this method can be very fast. However, if depth overlap is detected, we need to make some additional comparisons to determine if any of the surfaces need to be rearranged.

1. The algorithm first builds the BSP tree.
2. It then displays the scene by traversing the BSP tree.

### Area Subdivision Algorithms

The total display area is successively divided into smaller and smaller rectangles until each small area is simple, i.e. easy to analyze. To improve classification speed, we can use the bounding rectangles of the surfaces. Check the result of the classification: if any of the simplifying conditions is true, no further subdivision of this area is necessary.

For the cases in which the area is covered by a single surface, the color of the area can be determined from that surface.

### 7.5 Octree Methods

Unit-8

## Computer Animation

### 8.1 Overview

Keyframing

### Kinematics

• Forward Kinematics
• Motion Capture

In motion capture, the performer wears a number of small, round markers that reflect light in the frequency ranges for which motion-capture cameras are specifically designed. The resulting motion curves are often noisy, requiring further effort to clean up the motion data so that it more closely matches what the animator wants. Despite the work involved, motion capture has become a popular technique in the film and game industries, as it allows fairly accurate animations to be created from the movements of real actors.

However, this is limited by the density of markers that can be placed on a single actor.
