Program 3 2021: Rasterization

Partial Due: 11:59pm, Tuesday Oct 12

Final Due: 11:59pm, Tuesday Oct 19

Goal: In this assignment you will practice basic modeling and implement transforms and lighting on 3D objects using the WebGL rasterization API.

Submission: Submit your assignment using this Google Form.


BASIC GRADING:
The main components of this programming assignment are:
  • 5% Part 0: partial feedback
  • 5% Part 1: properly turned in assignment
  • 10% Part 2: render the input triangles, without lighting
  • 25% Part 3: light the triangles
  • 25% Part 4: interactively change view
  • 5% Part 5: interactively select a model
  • 25% Part 6: interactively transform the triangles
  • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class forum!

General:
You may (optionally) work with one partner on this assignment. You should each turn in the same code. 

You will only render triangles in this assignment, described in the same sort of JSON input files used in the first assignment. We will test your program using several different input files, so it would be wise to test it with several such files yourself. The input files describe arrays of triangles using JSON. An example input file resides at https://ncsucgclass.github.io/prog3/triangles.json. When you turn in your program, you should hardcode this URL as the location of the input triangle file — it will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can try multiple triangle files. Note that browser security makes loading local files difficult, so we encourage you to access any input files with HTTP GET requests.
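
For example, one minimal way to issue that HTTP GET from JavaScript is with the browser's fetch API; the function and constant names below are just illustrative (the provided shell includes its own loader):

    // One possible loader for the input triangles, using an HTTP GET so the
    // browser's local-file restrictions don't get in the way. Swap in your own
    // test URL while developing.
    const INPUT_TRIANGLES_URL = "https://ncsucgclass.github.io/prog3/triangles.json";

    async function loadTriangleSets(url) {
        const response = await fetch(url);               // HTTP GET
        if (!response.ok)
            throw new Error("could not load " + url + ": " + response.status);
        return response.json();                          // parsed array of triangle sets
    }

    // usage, e.g. from your setup code:
    // const triangleSets = await loadTriangleSets(INPUT_TRIANGLES_URL);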

We provide a small shell in which you can build your code. You can run the shell here, and see its code and assets here. The shell shows how to draw triangles using WebGL without any model or view transform, and how to parse the input triangles.json file. It also shows how to use animation callbacks to render multiple image frames.
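
If you are not starting from the shell, the animation-callback pattern it relies on looks roughly like this (the function name here is ours, not the shell's):

    // Minimal render-loop sketch using an animation callback.
    function renderFrame() {
        // ... clear the frame, update uniforms/matrices, issue draw calls ...
        window.requestAnimationFrame(renderFrame);       // schedule the next frame
    }
    window.requestAnimationFrame(renderFrame);           // start the loop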

The default view and light are as in the first assignment. The eye is at (0.5,0.5,-0.5), with a view up vector of [0 1 0] and a look at vector of [0 0 1]. Locate the window a distance of 0.5 from the eye; make it a 1x1 square centered at (0.5,0.5,0), normal to the look at vector, and parallel to the view up vector. With this scheme, you can assume that everything in the world is in view if it lies in a 1x1x1 box with one corner at the origin and another at (1,1,1). Put a white (1,1,1) (for ambient, diffuse and specular) light at location (-0.5,1.5,-0.5).
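
One way to capture these defaults is with the glMatrix library (recommended below in Part 2); the variable names are our own, and the near/far distances are assumptions:

    // Default view and light, expressed as glMatrix vectors (names are ours).
    const Eye     = vec3.fromValues(0.5, 0.5, -0.5);     // eye location
    const LookAt  = vec3.fromValues(0.0, 0.0,  1.0);     // look at (view) direction
    const ViewUp  = vec3.fromValues(0.0, 1.0,  0.0);     // view up vector
    const LightPos   = vec3.fromValues(-0.5, 1.5, -0.5); // white light location
    const LightColor = vec3.fromValues(1.0, 1.0, 1.0);   // ambient = diffuse = specular

    // Viewing transform: look from Eye toward Eye + LookAt.
    const center = vec3.add(vec3.create(), Eye, LookAt);
    const viewMatrix = mat4.lookAt(mat4.create(), Eye, center, ViewUp);

    // A 1x1 window 0.5 in front of the eye is a 90 degree vertical field of
    // view at aspect ratio 1; the far plane distance here is an assumption.
    const projMatrix = mat4.perspective(mat4.create(), Math.PI / 2, 1.0, 0.5, 10.0);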

This is an individual or partnered assignment, no exceptions. That said, we encourage you to help one another. Feel free to suggest how other students might solve problems, and to help them debug their code — just don't write their code for them. The code you turn in should still be your own or your single partner's (except for the shell). This is a simple assignment, and should not need other third party libraries. As always, if you are ever uncertain if the help you want to give or the code you want to use is permissible, simply ask me or the TA. For information about how to correctly submit, see this page on the class website.

Part 0: Partial feedback
You should turn in an "ugly," incomplete version of your program by Tuesday, October 12. If you simply turn in a copy of our shell, you will get half credit (2.5%). If you actually do something to visibly change the shell's output, you will receive full marks (5%), along with comments on what you've done. For example, if you turn in a complete first attempt at the assignment, we will tell you in text what is working and what isn't, so you can raise your final score. We will not otherwise grade the assignment at this point, only comment on it.

Part 1: Properly turned in assignment
5% of your assignment grade is just for correctly submitting your work! For more information about how to correctly submit, see this page on the class website.

Part 2: Render the input triangles, without lighting
Use rasterization to render unlit triangles, giving each triangle its unmodified diffuse color (e.g., if the diffuse color of the triangle is (1,0,0), every pixel in it should be red). You will have to use vertex shaders to perform viewing and perspective transforms, and fragment shaders to select the diffuse color. We recommend the glMatrix library for creating these transforms.
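
A sketch of what such a shader pair might look like; the attribute and uniform names are our own choices:

    // Vertex shader: apply a combined projection * view * model matrix.
    const vShaderSrc = `
        attribute vec3 aVertexPosition;
        uniform mat4 uPVMMatrix;                 // projection * view * model
        void main(void) {
            gl_Position = uPVMMatrix * vec4(aVertexPosition, 1.0);
        }
    `;

    // Fragment shader: every fragment gets the set's unmodified diffuse color.
    const fShaderSrc = `
        precision mediump float;
        uniform vec3 uDiffuse;                   // diffuse color from the input file
        void main(void) {
            gl_FragColor = vec4(uDiffuse, 1.0);
        }
    `;

The uPVMMatrix product can be formed on the CPU with glMatrix's mat4.multiply and passed in with gl.uniformMatrix4fv.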

Part 3: Light the triangles
Shade the triangles using per-fragment shading and the Blinn-Phong illumination model, using the reflectivity coefficients you find in the input files. Use triangle normals during lighting. Your fragment shaders will perform the lighting calculation.
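
A possible per-fragment Blinn-Phong fragment shader is sketched below; it assumes the vertex shader passes model-space position and normal as varyings, and all uniform names are illustrative:

    const fShaderBlinnPhong = `
        precision mediump float;

        uniform vec3 uEye;                       // eye location
        uniform vec3 uLightPos;                  // light location
        uniform vec3 uLightColor;                // white light: ambient = diffuse = specular
        uniform vec3 uKa;                        // ambient reflectivity from the input file
        uniform vec3 uKd;                        // diffuse reflectivity
        uniform vec3 uKs;                        // specular reflectivity
        uniform float uN;                        // specular exponent

        varying vec3 vWorldPos;                  // interpolated fragment position
        varying vec3 vNormal;                    // triangle normal (or vertex normal)

        void main(void) {
            vec3 N = normalize(vNormal);
            vec3 L = normalize(uLightPos - vWorldPos);
            vec3 V = normalize(uEye - vWorldPos);
            vec3 H = normalize(L + V);           // Blinn-Phong half vector

            vec3 ambient  = uKa * uLightColor;
            vec3 diffuse  = uKd * uLightColor * max(dot(N, L), 0.0);
            vec3 specular = uKs * uLightColor * pow(max(dot(N, H), 0.0), uN);

            gl_FragColor = vec4(ambient + diffuse + specular, 1.0);
        }
    `;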

Part 4: Interactively change view
Use the following key to action table to enable the user to change the view:
  • a and d — translate view left and right along view X
  • w and s — translate view forward and backward along view Z
  • q and e — translate view up and down along view Y
  • A and D — rotate view left and right around view Y (yaw)
  • W and S — rotate view forward and backward around view X (pitch)
To implement these changes you will need to change the eye, lookAt and lookUp vectors used to form your viewing transform.
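
For instance, the translation keys might update glMatrix vectors along these lines (Eye, LookAt and ViewUp as in the sketch above; the step size and sign conventions are assumptions you may need to flip):

    const TRANS_DELTA = 0.03;                              // arbitrary step size

    function handleKeyDown(event) {
        // View X axis: perpendicular to the look-at and up directions.
        const viewX = vec3.normalize(vec3.create(),
                          vec3.cross(vec3.create(), ViewUp, LookAt));
        switch (event.key) {
            case "a": vec3.scaleAndAdd(Eye, Eye, viewX,   TRANS_DELTA); break; // left
            case "d": vec3.scaleAndAdd(Eye, Eye, viewX,  -TRANS_DELTA); break; // right
            case "w": vec3.scaleAndAdd(Eye, Eye, LookAt,  TRANS_DELTA); break; // forward
            case "s": vec3.scaleAndAdd(Eye, Eye, LookAt, -TRANS_DELTA); break; // backward
            case "q": vec3.scaleAndAdd(Eye, Eye, ViewUp,  TRANS_DELTA); break; // up
            case "e": vec3.scaleAndAdd(Eye, Eye, ViewUp, -TRANS_DELTA); break; // down
            // A/D and W/S similarly rotate LookAt (and ViewUp, for pitch) about the view axes.
        }
    }
    document.addEventListener("keydown", handleKeyDown);

After each key press, rebuild the viewing matrix with mat4.lookAt from the updated Eye, Eye + LookAt, and ViewUp.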

Part 5: Interactively select a model
Use the following key to action table to interactively select a certain model:
  • left and right — select and highlight the next/previous triangle set (the previous selection is turned off)
  • space — deselect and turn off highlight
A triangle set is one entry in the input triangle array. To highlight, uniformly scale the selection by 20% (multiply x, y and z by 1.2). To turn highlighting off, remove this scaling. You will have to associate a transform matrix with each triangle set to maintain this state, and apply this transform in your vertex shaders. glMatrix will also be helpful here.
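
Because Part 6 asks that the model not translate as it scales, one reading is to scale about the triangle set's own center; a sketch of such a per-set transform (field and function names are ours):

    // Highlight transform for a selected triangle set: scale by 1.2 about the
    // set's center so it grows in place; identity when not highlighted.
    function makeHighlightMatrix(setCenter, highlighted) {
        const m = mat4.create();
        if (highlighted) {
            mat4.translate(m, m, setCenter);                               // 3. move back
            mat4.scale(m, m, [1.2, 1.2, 1.2]);                             // 2. uniform 20% scale
            mat4.translate(m, m, vec3.negate(vec3.create(), setCenter));   // 1. center at origin
        }
        return m;
    }

    // e.g. on a left/right arrow press:
    // selectedSet.modelMatrix = makeHighlightMatrix(selectedSet.center, true);
    // previousSet.modelMatrix = mat4.create();    // previous selection loses its highlight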

Part 6: Interactively transform models
Use the following key to action table to interactively transform the selected model:
  • k and ; — translate selection left and right along view X
  • o and l — translate selection forward and backward along view Z
  • i and p — translate selection up and down along view Y
  • K and : — rotate selection left and right around view Y (yaw)
  • O and L — rotate selection forward and backward around view X (pitch)
  • I and P — rotate selection clockwise and counterclockwise around view Z (roll)
Translate the model after you rotate it (so the model rotates around itself), and after the highlighting scale (see above, so the model doesn't translate as it scales).
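
One way to honor that ordering is to compose each set's model matrix as translate * rotate * scale, with the rotation and highlight scale taken about the set's center; all names below are our own:

    // Per-set model matrix: translation applied last, rotation and highlight
    // scale applied about the set's center (so rotation is "around itself"
    // and scaling does not translate the set).
    function makeModelMatrix(set) {
        const m = mat4.create();
        mat4.translate(m, m, set.translation);                             // accumulated k/;, o/l, i/p moves
        mat4.translate(m, m, set.center);                                  // move back from the origin
        mat4.multiply(m, m, set.rotation);                                 // accumulated rotation (a mat4)
        if (set.highlighted)
            mat4.scale(m, m, [1.2, 1.2, 1.2]);                             // highlight scale
        mat4.translate(m, m, vec3.negate(vec3.create(), set.center));      // center the set at the origin
        return m;
    }

    // A rotation key then just updates set.rotation, e.g. yaw about the view Y axis:
    // mat4.rotate(set.rotation, set.rotation, ROT_STEP, ViewUp);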


EXTRA CREDIT GRADING:
The extra credit components we suggest for this assignment are below:
  • 461: 1% — arbitrarily sized viewports
  • 461: 1% — off-axis and rectangular projections
  • 461: 1% — multiple lights at arbitrary locations
  • 461: 3%; 561: 1% — smooth shading with vertex normals
  • 461: 4%; 561: 2% — render ellipsoids
Students in 561 should not perform components that will not earn extra credit. Other components are possible with instructor approval. You must note any extra credit in your readme.md file, otherwise you will likely not receive credit for it.

Extra credit: Arbitrarily sized viewports
Accept a new square canvas (viewport) width/height through your UI. Size your canvas to match.
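
A minimal sketch of applying a new size, assuming a gl context obtained from the canvas as in the shell:

    // Resize the square canvas and keep the WebGL viewport in sync.
    function resizeCanvas(gl, size) {
        gl.canvas.width  = size;                 // canvas backing-store size
        gl.canvas.height = size;
        gl.viewport(0, 0, size, size);           // draw into the whole canvas
    }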

Extra credit: Support off-axis and rectangular projections
Accept new window parameters in viewing coordinates through your UI (left, right, top, bottom). Adjust your projection matrix to this new window, and render the scene.
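
With glMatrix, the new window can go straight into a general frustum; the near distance of 0.5 below matches the default window distance, and the far distance is an assumption:

    // Off-axis / rectangular projection: mat4.frustum takes the window edges
    // at the near plane, so window coordinates given at distance 0.5 from the
    // eye can be passed through directly with near = 0.5.
    function makeProjection(left, right, bottom, top) {
        return mat4.frustum(mat4.create(), left, right, bottom, top, 0.5, 10.0);
    }

    // The default square, on-axis window: makeProjection(-0.5, 0.5, -0.5, 0.5)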

Extra credit: Multiple and arbitrarily located lights
Read in an additional lights.json file that contains an array of objects describing light location and color. Note that these lights will have distinct ambient, diffuse and specular colors. Render the scene with all of these lights. You can find an example lights.json file here. Assume that the input lights file will always reside at this URL when you turn in your code.
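
Inside the fragment shader, the lighting sum then simply loops over the lights; GLSL ES 1.00 needs a compile-time bound on the array, so something like the snippet below (uniform and function names are ours):

    const multiLightSnippet = `
        const int MAX_LIGHTS = 8;                // compile-time bound on the light array
        uniform int uNumLights;                  // actual count loaded from lights.json
        uniform vec3 uLightPos[MAX_LIGHTS];
        uniform vec3 uLightAmb[MAX_LIGHTS];      // per-light ambient color
        uniform vec3 uLightDif[MAX_LIGHTS];      // per-light diffuse color
        uniform vec3 uLightSpc[MAX_LIGHTS];      // per-light specular color

        vec3 shade(vec3 P, vec3 N, vec3 V, vec3 ka, vec3 kd, vec3 ks, float n) {
            vec3 color = vec3(0.0);
            for (int i = 0; i < MAX_LIGHTS; i++) {
                if (i >= uNumLights) break;
                vec3 L = normalize(uLightPos[i] - P);
                vec3 H = normalize(L + V);
                color += ka * uLightAmb[i]
                       + kd * uLightDif[i] * max(dot(N, L), 0.0)
                       + ks * uLightSpc[i] * pow(max(dot(N, H), 0.0), n);
            }
            return color;
        }
    `;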

Extra credit: Smooth shading with vertex normals
Using only triangle normals, your curved shapes will look disappointingly faceted. To represent curvature more accurately, you need vertex normals. When you read in triangles, check for vertex normals in the input file. As you apply the composited modeling, viewing and projection matrices to your vertices, apply the inverse transpose of the modeling transform to your vertex normals. During lighting, use these normals rather than the face normal. The rasterizer will interpolate them for you. We will provide an example JSON file with a curved shape on request.
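
glMatrix provides the inverse transpose directly; a sketch of setting it as a uniform (the uniform location and function names are ours):

    // Upload the normal matrix: the inverse transpose of the model matrix's
    // upper-left 3x3, which keeps normals perpendicular under non-uniform scales.
    function setNormalMatrix(gl, uNormalMatrixLoc, modelMatrix) {
        const normalMatrix = mat3.normalFromMat4(mat3.create(), modelMatrix);
        gl.uniformMatrix3fv(uNormalMatrixLoc, false, normalMatrix);
    }

    // In the vertex shader: vNormal = uNormalMatrix * aVertexNormal;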

Extra credit: Render ellipsoids
Render ellipsoids as described in the input. You can find an example ellipsoids.json file here. There are no ellipsoid primitives available in WebGL, so you will have to build an ellipsoid out of triangles, then transform it to the right location and size. You can do this statically with a hardcoded sphere model, or procedurally with a latitude/longitude parameterization. Again you will have to use vertex shaders to perform viewing and perspective transforms, and fragment shaders to select color. The ellipsoids should be shaded like triangles, and should use vertex normals if you are claiming that extra credit.
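
If you go the procedural route, a latitude/longitude unit sphere might be triangulated roughly as below; band counts and names are illustrative, and the ellipsoid's a/b/c radii and center then go into a per-ellipsoid scale and translate (or are baked into the positions):

    // Latitude/longitude triangulation of a unit sphere centered at the origin.
    function makeUnitSphere(latBands, longBands) {
        const positions = [], indices = [];
        for (let lat = 0; lat <= latBands; lat++) {
            const theta = lat * Math.PI / latBands;              // 0 .. pi, pole to pole
            for (let lon = 0; lon <= longBands; lon++) {
                const phi = lon * 2 * Math.PI / longBands;       // 0 .. 2pi around the equator
                positions.push(Math.sin(theta) * Math.cos(phi),  // x
                               Math.cos(theta),                  // y
                               Math.sin(theta) * Math.sin(phi)); // z
            }
        }
        for (let lat = 0; lat < latBands; lat++) {
            for (let lon = 0; lon < longBands; lon++) {
                const a = lat * (longBands + 1) + lon;           // this latitude row
                const b = a + longBands + 1;                     // next latitude row
                indices.push(a, b, a + 1,  b, b + 1, a + 1);     // two triangles per quad
            }
        }
        // On a unit sphere the positions double as vertex normals.
        return { positions, indices };
    }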