Program 3 2024: Rasterization

Partial Due: 11:59pm, Wednesday Oct 9

Final Due: 11:59pm, Monday Oct 21

Goal: In this assignment you will practice basic modeling and implement transforms and lighting on 3D objects using the WebGL rasterization API.

Submission: Submit your assignment using this Google Form.


BASIC GRADING:
The main components of this programming assignment are:
  • 5% Part 0: partial feedback
  • 5% Part 1: properly turned in assignment
  • 10% Part 2: render the input triangles, without lighting
  • 25% Part 3: light the triangles
  • 20% Part 4: interactively change view
  • 5% Part 5: interactively select a model
  • 20% Part 6: interactively transform the triangles
  • 10% Part 7: make it your own
  • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class forum!

General:
You may (optionally) work with one partner on this assignment. You should each turn in the same code. 

You will only render triangles in this assignment, described in the same sorts of JSON input files used in the first assignment. We will test your program using several different input files, so it would be wise to test your program with several such files. The input files describe arrays of triangles using JSON. An example input file resides at https://ncsucgclass.github.io/prog3/triangles.json. When you turn in your program, you should hard-code these URLs as the locations of the input triangle files — they will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can test multiple triangle files. Note that browser security makes loading local files difficult, so we encourage you to access any input files with HTTP GET requests.

We provide a small shell in which you can build your code. You can run the shell here, and see its code and assets here. The shell shows how to draw triangles using WebGL without any model or view transform, and how to parse the input triangles.json file. It also shows how to use animation callbacks to render multiple image frames.

The default view and light are as in the first assignment. The eye is at (0.5,0.5,-0.5), with a view up vector of [0 1 0] and a look at vector of [0 0 1]. Locate the window a distance of 0.5 from the eye, and make it a 1x1 square normal to the look at vector and centered at (0.5,0.5,0), and parallel to the view up vector. With this scheme, you can assume that everything in the world is in view if it is located in a 1x1x1 box with one corner at the origin, and another at (1,1,1). Put a white (1,1,1) (for ambient, diffuse and specular) light at location (-0.5,1.5,-0.5).

This is an individual or partnered  assignment, no exceptions. That said, we encourage you to help one another. Feel free to suggest how other students might solve problems, and to help them debug their code — just don't write their code for them. The code you turn in should still be your own or your single partner's (except for the shell). This is a simple assignment, and should not need other third party libraries. As always, if you are ever uncertain if the help you want to give or the code you want to use is permissible, simply ask me or the TA. For information about how to correctly submit, see this page on the class website.

Part 0: Partial feedback
You should turn in an "ugly," incomplete version of your program by the date above. If you simply turn in a copy of our shell, you will get half credit (2.5%). If you actually do something to visibly change the shell's output, you will receive full marks (5%), and receive comments on what you've done. For example, if you turn in a complete, first attempt at the assignment, we will tell you in text what is working, and what isn't, so you can raise your final score. We will not otherwise grade the assignment at this point, only comment on it.

Part 1: Properly turned in assignment
5% of your assignment grade is just for correctly submitting your work! For more information about how to correctly submit, see this page on the class website.

Part 2: Render the input triangles, without lighting
Use rasterization to render unlit triangles, giving each triangle its unmodified diffuse color (e.g., if the diffuse color of the triangle is (1,0,0), every pixel in it should be red). You will have to use vertex shaders to perform viewing and perspective transforms, and fragment shaders to select the diffuse color. We recommend the glMatrix library for creating these transforms.
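A minimal sketch of the vertex shader and glMatrix setup this implies, assuming a combined projection-view uniform named uPVMatrix and an attribute aVertexPosition (these names are illustrative, not from the shell):

    // vertex shader: applies a single combined projection * view matrix
    const vShaderSrc = `
        attribute vec3 aVertexPosition;
        uniform mat4 uPVMatrix;
        void main(void) {
            gl_Position = uPVMatrix * vec4(aVertexPosition, 1.0);
        }
    `;

    // glMatrix setup for the default view described above
    const eye = vec3.fromValues(0.5, 0.5, -0.5);
    const center = vec3.fromValues(0.5, 0.5, 0.5);   // eye plus the look at direction
    const up = vec3.fromValues(0, 1, 0);

    const viewMatrix = mat4.create();
    mat4.lookAt(viewMatrix, eye, center, up);

    const projMatrix = mat4.create();
    mat4.perspective(projMatrix, Math.PI / 2, 1, 0.1, 10); // 90° fov matches a 1x1 window 0.5 from the eye

    const pvMatrix = mat4.create();
    mat4.multiply(pvMatrix, projMatrix, viewMatrix);
    // then, each frame: gl.uniformMatrix4fv(uPVMatrixLoc, false, pvMatrix);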

Part 3: Light the triangles
Shade the triangles using per-fragment shading and the Blinn-Phong illumination model, using the reflectivity coefficients you find in the input files. Use triangle normals during lighting. Your fragment shaders will perform the lighting calculation.
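As a rough guide, a Blinn-Phong fragment shader might look like the sketch below. It assumes you interpolate world-space position and normal varyings and pass the reflectivity coefficients and specular exponent n as uniforms; all names are illustrative.

    const fShaderSrc = `
        precision mediump float;
        varying vec3 vWorldPos;
        varying vec3 vNormal;
        uniform vec3 uEyePos;
        uniform vec3 uLightPos;      // (-0.5, 1.5, -0.5) for the default light
        uniform vec3 uKa, uKd, uKs;  // ambient, diffuse, specular reflectivity from the file
        uniform float uShininess;    // specular exponent n from the file

        void main(void) {
            vec3 N = normalize(vNormal);
            vec3 L = normalize(uLightPos - vWorldPos);
            vec3 V = normalize(uEyePos - vWorldPos);
            vec3 H = normalize(L + V);                               // Blinn-Phong half vector
            vec3 color = uKa                                         // ambient (the light is white)
                       + uKd * max(dot(N, L), 0.0)                   // diffuse
                       + uKs * pow(max(dot(N, H), 0.0), uShininess); // specular
            gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
        }
    `;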

Part 4: Interactively change view
Use the following key to action table to enable the user to change the view:
  • a and d — translate view left (a) and right (d) along view X
  • w and s — translate view forward (w) and backward (s) along view Z
  • q and e — translate view up (q) and down (e) along view Y
  • A and D — rotate view left (A) and right (D) around view Y (yaw)
  • W and S — rotate view forward (W) and backward (S) around view X (pitch)
To implement these changes you will need to change the eye, lookAt and lookUp vectors used to form your viewing transform.
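One way to wire this up is a keydown handler that nudges the eye and lookAt point along the current view basis and then rebuilds the view matrix; the step size and names below are illustrative, and only two bindings are shown.

    const VIEW_DELTA = 0.05;
    document.addEventListener("keydown", function (event) {
        // build the current view basis: view Z is the look direction, view X is up x lookAt
        const lookDir = vec3.create(), viewX = vec3.create();
        vec3.normalize(lookDir, vec3.subtract(lookDir, center, eye));
        vec3.normalize(viewX, vec3.cross(viewX, up, lookDir));

        switch (event.key) {
            case "a":   // translate view along view X (flip the sign if "left" comes out reversed)
                vec3.scaleAndAdd(eye, eye, viewX, VIEW_DELTA);
                vec3.scaleAndAdd(center, center, viewX, VIEW_DELTA);
                break;
            case "w":   // translate view forward along view Z
                vec3.scaleAndAdd(eye, eye, lookDir, VIEW_DELTA);
                vec3.scaleAndAdd(center, center, lookDir, VIEW_DELTA);
                break;
            // d, s, q, e and the rotation keys follow the same pattern on the other axes
        }
        // rebuild the view matrix from eye/center/up and request a redraw here
    });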

Part 5: Interactively select a model
Use the following key to action table to interactively select a certain model:
    • left and right — select and highlight the next/previous triangle set (previous off)
    • space — deselect and turn off highlight
    A triangle set is one entry in the input triangle array. To highlight, uniformly scale the selection by 20% (multiply x, y and z by 1.2). To turn highlighting off, remove this scaling. You will have to associate a transform matrix with each triangle set to maintain state, and apply this transform in your vertex shaders. glMatrix will also be helpful here.
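    A sketch of that per-set state, assuming the parsed file is in an array named inputTriangles and each set gets its own model matrix (names are illustrative):

        const modelMatrices = inputTriangles.map(() => mat4.create());  // one identity matrix per triangle set

        function setHighlight(setIndex, on) {
            const m = modelMatrices[setIndex];
            if (on)
                mat4.fromScaling(m, vec3.fromValues(1.2, 1.2, 1.2));    // multiply x, y and z by 1.2
            else
                mat4.identity(m);                                       // remove the scaling
        }
        // vertex shader then uses: gl_Position = uPVMatrix * uModelMatrix * vec4(aVertexPosition, 1.0);

    If the scaled set drifts visibly, you can instead scale about the set's vertex average (translate that point to the origin, scale, translate back).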

    Part 6: Interactively transform models
    Use the following key to action table to interactively transform the selected model:
    • k and ; — translate selection left (k) and right (;) along view X
    • o and l — translate selection forward (o) and backward (l) along view Z
    • i and p — translate selection up (i) and down (p) along view Y
    • K and : — rotate selection left (K) and right (:) around view Y (yaw)
    • O and L — rotate selection forward (O) and backward (L) around view X (pitch)
    • I and P — rotate selection clockwise (I) and counterclockwise (P) around view Z (roll)
    Translate the model after you rotate it (so the model rotates around itself), and after the highlighting scale (see above, so the model doesn't translate as it scales).
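    In glMatrix terms, one workable composition (a sketch, with illustrative names for the accumulated per-set rotation and translation state) centers the set on the origin first, then scales and rotates it, then moves it back and translates it last:

        function updateModelMatrix(m, setCenter, rotation, translation, highlighted) {
            mat4.identity(m);
            mat4.translate(m, m, translation);                    // applied last: translate along view axes
            mat4.translate(m, m, setCenter);                      // move back from the origin
            mat4.multiply(m, m, rotation);                        // rotate about the set's own center
            if (highlighted)
                mat4.scale(m, m, vec3.fromValues(1.2, 1.2, 1.2)); // highlight scale (doesn't translate the set)
            mat4.translate(m, m, vec3.negate(vec3.create(), setCenter)); // applied first: center the set on the origin
        }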

    Part 7: Make it your own
    In Parts 1-6, you strove to make your image look "correct". Now you should use the techniques you have learned (transforms, lighting and WebGL rasterization) to make a new image that is "interesting". To earn full credit for Part 7, all that you must achieve is to make your image substantially different from that produced by the shell repo input, and from the imagery of your fellow students. Your "interesting" image should appear after you press the exclamation mark (!).

      EXTRA CREDIT GRADING: 
      The extra credit components we suggest for this assignment are below:
      • 461: ½% — arbitrarily sized viewports
      • 461: ½% — off-axis and rectangular projections
      • 461: ½% — multiple lights at arbitrary locations
      • 461: 1% — 561: ½% — smooth shading with vertex normals
      • 461: 3% — 561: 1% — render ellipsoids
      Students in 561 should not perform components that will not earn extra credit. Other components are possible with instructor approval. You must note any extra credit in your readme file, otherwise you will likely not receive credit for it.

      Extra credit: Arbitrarily sized viewports 
      Accept a new square canvas (viewport) width/height through your UI. Size your canvas to match.

      Extra credit: Support off-axis and rectangular projections 
      Accept new window parameters in viewing coordinates through your UI (left, right, top, bottom). Adjust your projection matrix to this new window, and render the scene.
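      A minimal sketch of how that projection might be rebuilt with glMatrix, assuming left/right/bottom/top are the user-entered window edges in viewing coordinates at the window plane (0.5 from the eye); the names and clip distances are illustrative.

          const near = 0.1, far = 10, windowDist = 0.5;
          const s = near / windowDist;                 // project the window edges onto the near plane
          const projMatrix = mat4.create();
          mat4.frustum(projMatrix, left * s, right * s, bottom * s, top * s, near, far);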

      Extra credit: Multiple and arbitrarily located lights
      Read in an additional lights.json file that contains an array of objects describing light location and color. Note that these lights will have distinct ambient, diffuse and specular colors. Render the scene with all of these lights. You can find an example lights.json file here. Assume that the input lights file will always reside at this URL when you turn in your code.

      Extra credit: Smooth shading with vertex normals 
      Using only triangle normals, your curved shapes will look disappointingly faceted. To represent curvature more accurately, you need vertex normals. When you read in triangles, check for vertex normals in the input file. As you apply the composited modeling, viewing and projection matrices to your vertices, apply the inverse transpose of the modeling transform to your vertex normals. During lighting, use these normals rather than the face normal. The rasterizer will interpolate them for you. We will provide an example json file with a curved shape on request.
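      glMatrix can build the normal matrix for you; a sketch, with illustrative uniform names:

          const normalMatrix = mat3.create();
          mat3.normalFromMat4(normalMatrix, modelMatrix);   // inverse transpose of the model matrix's upper 3x3
          gl.uniformMatrix3fv(uNormalMatrixLoc, false, normalMatrix);
          // in the vertex shader: vNormal = uNormalMatrix * aVertexNormal;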

      Extra credit: Render ellipsoids
      Render ellipsoids as described in input. You can find an example ellipsoids.json file here. There are no ellipsoid primitives available in WebGL, so you will have to build an ellipsoid out of triangles, then transform it to the right location and size. You can do this statically with a hardcoded sphere model, or procedurally with a latitude/longitude parameterization. Again you will have to use vertex shaders to perform viewing and perspective transforms, fragment shaders to select color. The ellipsoids should be shaded like triangles, and should use vertex normals if you are claiming that extra credit.
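      A sketch of the latitude/longitude approach: generate a unit sphere once, then scale by the ellipsoid radii and translate to its center. The slice/stack counts below are arbitrary, and on a unit sphere each vertex normal equals the vertex position.

          function makeUnitSphere(stacks, slices) {
              const positions = [], indices = [];
              for (let i = 0; i <= stacks; i++) {
                  const phi = Math.PI * i / stacks;               // 0 at the north pole, PI at the south
                  for (let j = 0; j <= slices; j++) {
                      const theta = 2 * Math.PI * j / slices;     // around the equator
                      positions.push(Math.sin(phi) * Math.cos(theta),
                                     Math.cos(phi),
                                     Math.sin(phi) * Math.sin(theta));
                  }
              }
              for (let i = 0; i < stacks; i++)
                  for (let j = 0; j < slices; j++) {
                      const a = i * (slices + 1) + j, b = a + slices + 1;
                      indices.push(a, b, a + 1,  b, b + 1, a + 1); // two triangles per lat/long quad
                  }
              return { positions, indices };
          }
          // ellipsoid at center (cx,cy,cz) with radii (rx,ry,rz): scale each vertex component
          // by the matching radius, then add the center (or bake this into a model matrix)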

      Program 2 2024: Intro to WebGL

      Due: 11:59pm, Monday Sept 30

      Goal: In this assignment you will focus on gaining basic understanding of the WebGL rasterization API, by learning how to render and manage a few triangles.

      Submission: Submit your assignment using this Google Form.


      GRADING:
      This assignment is a forgiving introduction to WebGL, with correspondingly forgiving grading. The main components of this programming assignment are:
      • 20% Part 1: attempt to display all triangles using index buffers
      • 25% Part 2: display all input triangles in correct positions with index buffers
      • 20% Part 3: attempt to display varying triangle colors using shader parameters
      • 25% Part 4: display triangles with correct colors using shader parameters
      • 10% Part 5: make it your own
      • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class forum! Please tag pretty but wrong imagery with #cool-bugs-and-stuff.

      General:
      You may (optionally) work with one partner on this assignment. You should each turn in the same code. 

      You will only render triangles in this assignment, described in JSON input files similar to the first programming assignment. We will test your program using several different input files, so it would be wise to test your program with several such files. The input files describe arrays of triangles using JSON. An example input file resides at https://ncsucgclass.github.io/prog2/triangles.json. When you turn in your program, you should hard-code this URL as the location of the input triangles file — it will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can test multiple triangle files. Note that browser security makes loading local files difficult, so we encourage you to access any input files with HTTP GET requests.

      We provide a small shell in which you can build your code. You can run the shell here, and see its code and assets here. The shell shows how to draw triangles using WebGL, treating them all the same way. It also shows how to parse the input triangles.json file.

      We are using WebGL's default view setup, with the eye at the origin, and the window extending from -1 to 1 horizontally and vertically.

      This is an individual or partnered  assignment, no exceptions. That said, we encourage you to help one another. Feel free to suggest how other students might solve problems, and to help them debug their code — just don't write their code for them. The code you turn in should still be your own or your single partner's (except for the shell). This is a simple assignment, and should not need other third party libraries. As always, if you are ever uncertain if the help you want to give or the code you want to use is permissible, simply ask me or the TA. For information about how to correctly submit, see this page on the class website.

      Parts 1 & 2: Use index buffers to display all triangles in their correct positions
      For these parts of the assignment, your goal is to display all input triangles in the correct positions using index buffers. The shell renders only the first set of triangles using only vertex buffers and the call drawArrays. You must change the code to display all the triangles, using not only vertex buffers but also index buffers and the call drawElements. If you make a solid attempt at adding index buffers, you will receive full credit for Part 1. If you render all the triangles in their correct positions in white using drawElements, you will earn full credit for Part 2.
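      A sketch of the index-buffer path, assuming each triangle set in the JSON provides a vertices array and a triangles array of index triples (as in the example file), and that all vertices are packed into one vertex buffer:

          const indexArray = [];
          let vtxOffset = 0;
          inputTriangles.forEach(function (set) {
              set.triangles.forEach(function (tri) {
                  indexArray.push(tri[0] + vtxOffset, tri[1] + vtxOffset, tri[2] + vtxOffset);
              });
              vtxOffset += set.vertices.length;          // file indices are local to each set
          });

          const indexBuffer = gl.createBuffer();
          gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
          gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indexArray), gl.STATIC_DRAW);

          // at draw time, after binding the vertex buffer and setting up attributes:
          gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
          gl.drawElements(gl.TRIANGLES, indexArray.length, gl.UNSIGNED_SHORT, 0);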

      Parts 3 & 4: Improve the shader to render triangle input colors correctly
      For these parts of the assignment, your goal is to color the input triangles with their diffuse color. The shell includes basic fragment shader code that renders everything in white, and basic vertex shader code that does not manage color. You must improve the fragment shader to accept and use a color parameter, the vertex shader to accept and use vertex colors as well as positions, and the javascript code to send the vertex shader a color buffer as well as a position buffer. If you make a solid attempt at this, you will receive full credit for Part 3. If you correctly render each triangle set with its unique color, you will earn full credit for Part 4.
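      A sketch of that color path, assuming each set carries its diffuse color at set.material.diffuse as in the example file; the attribute and varying names are illustrative.

          const colorArray = [];
          inputTriangles.forEach(function (set) {
              set.vertices.forEach(function () {
                  colorArray.push(set.material.diffuse[0], set.material.diffuse[1], set.material.diffuse[2]);
              });
          });
          const colorBuffer = gl.createBuffer();
          gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
          gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colorArray), gl.STATIC_DRAW);

          // at draw time:
          gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
          gl.vertexAttribPointer(aVertexColorLoc, 3, gl.FLOAT, false, 0, 0);
          gl.enableVertexAttribArray(aVertexColorLoc);

          // vertex shader passes it through:  varying vec3 vColor; ... vColor = aVertexColor;
          // fragment shader uses it:          gl_FragColor = vec4(vColor, 1.0);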

      Part 5: Make it your own
      In Parts 1-4, you strove to make your image look "correct". Now you should use the techniques you have learned (triangles and WebGL rasterization) to make a new image that is "interesting". To earn full credit for Part 5, all that you must achieve is to make your image substantially different from that produced by the shell repo input, and from the imagery of your fellow students. Your "interesting" image should appear after you press the space bar. 

      Here is a sample output for the triangles.json given in the shell.

      Program 1 Target Outputs

      Hi everyone, here are the target outputs for program 1.

      The results are rendered based on triangles2.json in the repo.


      Part 1: Using ray casting, render unlit, colored triangles


      Part 2: Using ray casting, render lit triangles


      Just a reminder that as stated in the instructions, if you have completed Part 2, you don't need to show your results for Part 1.

      Best,

      Chung-Che Hsiao

      Program 1 2024: Ray Casting

      Partial feedback due: 11:59pm, Monday September 9

      Final submission due: 11:59pm, Monday September 23

      Goal: In this assignment you will practice the ray casting methods that we are discussing.

      Submission: Submit your assignment using this Google Form.

      Introductory refs: On javascript.


      BASIC GRADING:
      • 5% Part 0: partial feedback
      • 5% Part 0.5: properly turned in assignment
      • 45% Part 1: ray cast the colored triangles in the input file without lighting
      • 35% Part 2: color the triangles with Blinn-Phong illumination
      • 10% Part 3: make it your own
      • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class slack!

      General:
      You will only render triangles in this assignment, which are described in an input file. We will test your program using several different input files, so it would be wise to test your program with several such files. The input files describe an array of triangles using JSON. An example input file resides at https://ncsucgclass.github.io/prog1/triangles.json. When you turn in your program, you should use this URL in hardcode as the location of the input triangles file — it will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can test multiple triangles files. Note that browser security makes loading local files difficult, so we encourage you to access any input files with HTTP GET requests.

      We provide a small shell in which you can build your code. You can run the shell here, and see its code here, and find all supporting files in the program 1 repo. The correct image for the default input will appear shortly. The shell shows how to draw pixels without using WebGL, and how to parse the input triangles.json file. It contains three drawing functions: one that merely draws random pixels, one that loads the triangles file and draws orthographic projections of them using canvas draw functions, and one that loads the triangles file and renders some random pixels in them. The last is probably closest to what you must produce for this program. Some of our programming exercises also contain relevant code.

      All vertex locations should be described in world coordinates, meaning they do not require any transformation. Locate the eye at (0.5,0.5,-0.5), with a view up vector of [0 1 0] and a look at vector of [0 0 1]. Locate the window a distance of 0.5 from the eye, and make it a 1x1 square normal to the look at vector and centered at (0.5,0.5,0), and parallel to the view up vector. With this scheme, you can assume that everything in the world is in view if it is located in a 1x1x1 box with one corner at the origin, and another at (1,1,1). Put a white (1,1,1) (for ambient, diffuse and specular) light at location (-3,1,-0.5).

      Advice: be careful to implement the algorithm we described in class, which loops first over pixels, then over primitives. The code you find in our exercises to date implements rasterization, which loops first over primitives, then pixels.

      This is an individual assignment, no exceptions. You should code the core of this assignment yourself. You may not use others' code to determine the location of pixels in the world, to do ray-triangle intersection, or to color a pixel. You may use math libraries you find, but you must credit them in comments. You may use GenAI coding tools such as Copilot or Intellicode, but your higher level code must be unique and your own. You may recommend libraries to one another, speak freely with one another about your code or theirs, but you may never directly provide any code to another student. If you are ever uncertain if the advice you want to give or code you want to use is permissible, simply ask me or the TA.

      Part 0: Partial feedback
      You should turn in an "ugly," incomplete version of your program. If you simply turn in a copy of our shell, you will get half credit (2.5%). If you actually do something to visibly change the shell's output, you will receive full marks (5%), and receive comments on what you've done. For example, if you turn in a complete, first attempt at the assignment, we will tell you in text what is working, and what isn't, so you can raise your final score. We will not otherwise grade the assignment at this point, only comment on it.

      Part 0.5: Properly turned in assignment
      Remember that 5% of your assignment grade is for correctly submitting your work! For more information about how to correctly submit, see this page on the class website.

      Part 1: Using ray casting, render unlit, colored triangles
      Use ray casting to render unlit triangles, with every pixel in each triangle having the unmodified diffuse color of that triangle (e.g., if the diffuse color of a triangle is (1,0,0), every pixel in it should be red). You will have to test for depth, to ensure that each triangle is correctly colored. You should see flatly colored triangles.

      Part 2: Using ray casting, render lit triangles
      Now you will have to perform a local Blinn-Phong lighting calculation at each intersection. You should now see triangles with depth revealed by illumination, in the same locations and with the same silhouettes as in part 1. You need not show your results for Part 1, if you have completed Part 2.

      Part 3: Make it your own
      In Part 2, you strove to make your image look "correct". Now you should use the techniques you have learned to make a new image that is "interesting". You may work only with triangles, or with other modeling primitives as well (ellipsoids, boxes, etc), but every pixel must be ray cast. To earn full credit for Part 3, all that you must achieve is to make your image substantially different from that produced by the shell repo input, and from the imagery of your fellow students. Your "interesting" image should appear after you press the space bar.


      EXTRA CREDIT GRADING: 
      • 461: ½% — arbitrarily sized images (and interface windows)
      • 461: ½% — arbitrary viewing setups
      • 461: ½% — off-axis and rectangular projections
      • 461: ½% — multiple lights at arbitrary locations
      • 461: 1% — 561: ½% — shadows during ray casting
      • 461: 2% — 561: 1% — render spheres
      • 461: 3% — 561: 1% — voted most interesting
      Other extra credit is possible with instructor approval. You must note any extra credit in your readme file, otherwise you will likely not receive credit for it.

      Extra credit: Arbitrarily sized images and viewports 
      Accept a new canvas (viewport) width and height through your UI. Size your canvas to match, and change your ray casting interpolation to match. This should affect every part of your assignment.

      Extra credit: Support arbitrary viewing setups
      Accept new eye location, view up and look at vectors through your UI. Reorient the window to be normal to the new look at vector and centered around the new eye. Render the scene with these viewing parameters. Note that with bad viewing parameters, you will not see the model. This should affect every part of your assignment.

      Extra credit: Support off-axis and rectangular projections 
      Accept new window parameters through your UI (relative to the viewing coordinates described by the look at and up vectors, these will be two X values (left, right) and two Y values (top, bottom)). Adjust your ray casting interpolation to this new window. Render the scene with these new projection parameters. Note that if you also perform the arbitrary viewing extra credit, these coordinates may not be in world space! Also, with bad projection parameters, you will not see the model. This should affect every part of your assignment.

      Extra credit: Multiple and arbitrarily located lights
      Read in an additional lights.json file that contains an array of objects describing light location and color. Note that these lights will have distinct ambient, diffuse and specular colors. Render the scene with these lights. During illumination, you will have to sum the colors from all the lights. You can find an example lights.json file here. Assume that the input lights file will always reside at this URL when you turn in your code.

      Extra credit: Detect shadows during ray casting
      When performing lighting during ray casting, shoot an additional ray toward the light to decide if only ambient light reaches the intersection. If you also support multiple lights, make sure to shoot a ray at each light! This should only affect the last part of your assignment.

      Extra credit: Render spheres 
      Read in an additional spheres.json file that describes each sphere's center and radius, and one material (a set of reflectivity coefficients) to use with them all. Read in and render these spheres in addition to the input triangles. You will have to perform ray-sphere intersection, and must code this yourself. You can find an example spheres.json file here. Assume that the input spheres file will always reside at this URL when you turn in your code.

      Extra credit: Voted most interesting
      To be voted "most interesting", you must post your interesting image in the #programming-work channel of the course discord (this may count as participation). Our TAs will select several finalist images for an in-class vote. The student whose image receives the most votes will receive 3%; second most, 2%; and third most, 1%. (In 561, this will be 1%, ⅔% and ⅓%.)

      Program 5 2023: Putting it all together — Frogger!

         


      Due: 

      • 461: during final period — Wednesday, December 13, 12-2:30pm
      • 561: during final period — Monday, December 11, 12-2:30pm

      Goal: In this assignment you will apply what you've learned of basic WebGL and GLSL to build a simple game.

      Submission: Submit your partial and final assignment using this Google Form, and demo your assignment during the final exam period.

      BASIC GRADING:
      The main components of this programming assignment are:
      • 5% Part 1: properly turned in program (new requirements!)
      • 20% Part 2: display road, river, frogs, cars, turtles, logs and homes
      • 25% Part 3: animate the frog
      • 20% Part 4: animate cars, turtles and logs
      • 20% Part 5: homes and winning
      • 10% Part 6: make it your own
      • Participation credit: Receive participation credit (outside of this assignment) for posting your resulting imagery and video, good or bad, on the class forum!
      Note that there is no partial turn in for this assignment.

      General:
      Our suggested game is a 3D version of Frogger. If you are not familiar with the game, you can play it online here or here, view some historic gameplay on arcade or console, and find more information about the game at its Wikipedia entry here. There are also many other sources online.

      If you would rather implement a different game, you may do so, providing you ask for instructor's approval by Wednesday December 1. To obtain that approval, submit your proposal using this form. Small teams are also acceptable, but the scope of the project must increase to match. Use the same form if you wish to propose a group project. For example, we will approve two-person teams that propose building Frogger as described below along with all extra credits to earn 100%. 

      Unlike previous programs, your game is not required to load specific assets (models, textures or lighting). You are free to hard-code paths to the assets your game requires.

      We prefer you continue to improve your mastery by using WebGL. You may use any 3rd party game or graphics libraries you find, including three.js and Unity, but with a penalty (461: -5%; 561: -10%). You may not use code from any implementation of Frogger you find online. We are aware of several such implementations and will be comparing them to your code.

      Part 1: Properly turned in program
      Remember that 5% of your assignment grade is for correctly submitting your work! For more information about how to correctly submit, see this page on the class website. Since we encourage variation in your games, make sure to include a readme file.

      For this assignment only, you can also earn extra credit (461: 2%; 561: 1%) by allowing us to make your assignment public, and providing us with some extra material to aid us in that. We will pick a few of the best assignments and publish them on our course website. If you wish to allow us, please also deliver:
      • a description: your game in four sentences or less
      • a screencast: a video walking us through your game within a few minutes. 
      Assignment material is due online by the time of your final, as noted above. You must also demo your game live to teaching staff during the final period, or if you are in the distance class, a remote meeting you schedule by emailing our TAs. If you do not demo your game during the final, your program mark will be reduced by 20%; if you do not demo at all, you will forfeit 30%. If you fail to demo an assignment that is not browser based, your assignment will not be accepted. Late demos are not possible, and late improvements of assignments will not be accepted.

      During the final exam period, you may optionally demo your game in front of the class (without a separate demo to staff). If you do, you will enter a competition for a $20 Amazon gift card. You will win if your fellow students vote your game the best. Students in teams will be in a separate competition from individuals.

      Part 2: Display road, river, frogs, cars, turtles, logs and homes
      Create and render road, river, frogs, cars, turtles, logs and five "frog homes". Models should be 3D, and the projection must be perspective, with a view that creates at least a little foreshortening (shrinking with distance). Fancy modeling is not necessary; cubes, spheres etc. are enough. Nothing needs to move. 

      Part 3: Animate the frog
      The frog can move up, down, left or right — on an invisible grid. On the road, when it touches a car, it dies. On the river, when it doesn't touch a log or turtle, it dies. All motion is 2D on the ground plane.

      Part 4: Animate cars, turtles and logs
      Cars, logs and turtles now move either left or right. Each lane moves at a different speed, but all within the lane move at the same speed. Certain turtles will periodically submerge. If the frog is struck by a moving car, it dies. If it is on a log or turtle that reaches the window edge, it dies. If it is on a turtle that submerges, it dies. 

      Part 5: Homes and winning
      Frogs jumping off the uppermost logs/turtles can jump into their "frog homes" where they remain, while a new frog appears at the bottom. Frogs that miss their jump into a home die. When all five homes are filled, the game is over. 

      Part 6: Make it your own
      In Parts 1-5, you strove to make your game work "right". Now you should use the techniques you have learned to make the game "different". To earn full credit, all that you must achieve is to make your game substantially different, visually or behaviorally, from the standard game and from your fellow students' games. Your "different" game version should appear after you press the exclamation mark (!).


      EXTRA CREDIT GRADING:
      Extra credit opportunities include the following, with values in format (461, 561)%. Other extra credits are possible, but must be approved by teaching staff in advance to ensure credit:
      • (1, ⅓)% — track and display score. You can choose any scoring scale you want. The standard scoring system is described here.
      • (1, ⅓)% — add a "first-" or "third-person" view, with the camera attached to the frog.
      • (1, ⅓)% — add animated effects, which appear when a frog dies, or when a turtle dives.
      • (2, ½)% — play music, and on game events play a sound, e.g. when a frog jumps or dies.
      • (2, ½)% — add at least one more level, which increases difficulty. In Frogger, this typically means faster car/log/turtle motion, or added NPCs such as crocs or snakes.
      • (2, ½)% — add two power ups, e.g. a temporary freeze, or brief invulnerability. 
      • (2, ½)% — support a second player, either with different keys or the mouse.
      • (4, 1)% — add better/different physics, e.g. 3D movement, or acceleration/deceleration.
      • (20, 5)% — 3D Frogger: a field of play with different elevations, jumping up or down, 3D view control. Viewing QBert gameplay may be inspiring.

      Prog 4 2023: texture and transparency

      Partial Due: 11:59pm, Tuesday November 14

      Final Due: 11:59pm, Tuesday November 21 (just before Thanksgiving)

      Goal: In this assignment you will learn about rendering textured and transparent models using the WebGL rasterization API.

      Submission: Submit your assignment using this Google Form.


      BASIC GRADING:
      The main components of this programming assignment are:
      • 5% Part 0: partial feedback
      • 5% Part 1: properly turned in assignment
      • 30% Part 2: render the input triangles, textured but without lighting
      • 20% Part 3: render using both lighting and texture
      • 30% Part 4: render using lighting, texture and transparency
      • 10% Part 5: make it your own
      • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class forum!

      General:
      You will render triangles, described in the same sorts of JSON input files used in the third assignment. We will again test your program using several different input files, so it would be wise to test your program with several such files. The input files describe arrays of triangles using JSON. Example input files reside at https://ncsucgclass.github.io/prog4/triangles.json. Hard-code these URLs as the locations of the input triangle files — they will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can test multiple triangle files. These files have been improved to include texture file names and alpha values. The triangles files also contain vertex normals and uv coordinates.

      For this assignment, we have made additions to the file format supporting a transparency alpha, texture filenames and texture coordinates. All textures will reside at the same URL as the model input files. For example, if the triangles.json file makes a reference to texture1.jpg, then it will reside at https://ncsucgclass.github.io/prog4/texture1.jpg. 

      When you load texture images, you can only load them from your own server (hosting both your code and your images), or from a server that explicitly allows cross-origin access, like github.com. To access images from such an allowing server, set the image's crossOrigin attribute to "Anonymous". You can find more detail here. Texture loads happen asynchronously (on a different thread from your javascript), so your texture may not appear immediately. We recommend that you load a one-pixel "dummy" texture locally to avoid runtime errors in the meantime, as shown here.
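      A sketch of that loading pattern, with a single white pixel as the placeholder; the helper name is illustrative, not from the shell:

          function loadTexture(gl, url) {
              const texture = gl.createTexture();
              gl.bindTexture(gl.TEXTURE_2D, texture);
              // 1x1 white placeholder so draws work before the real image arrives
              gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
                            new Uint8Array([255, 255, 255, 255]));

              const image = new Image();
              image.crossOrigin = "Anonymous";            // needed for github.io-hosted textures
              image.onload = function () {
                  gl.bindTexture(gl.TEXTURE_2D, texture);
                  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
                  gl.generateMipmap(gl.TEXTURE_2D);       // fine for power-of-two images
                  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
              };
              image.src = url;
              return texture;                             // usable immediately; updates itself once loaded
          }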

      We are providing a shell in which you can build your code. You can run the shell here, and see its code here. This shell is a correct implementation of program 3, which also loads an image (which can ultimately become a texture) as the canvas background. Default viewing parameters are the same as in program 3. (Note: this will be released in a few days, after most of the late period for program 3 has passed.) Meanwhile, a shell-less version is here.

      This is an individual assignment, no exceptions. You should code the core of this assignment yourself. You may not use others' code to implement texturing, blend texture with lighting, or implement transparency. You may use math, matrix and modeling libraries you find, but you must credit them in comments. You may recommend libraries to one another, speak freely with one another about your code or theirs, but you may never directly provide any code to another student. If you are ever uncertain that the advice you want to give or the code you want to use is permissible, simply ask me or the TA.

      Part 0: Partial feedback
      You should turn in an "ugly," incomplete version of your program by Tuesday November 14. If you simply turn in a copy of our shell, you will get half credit (2.5%). If you actually do something to visibly change the shell's output, you will receive full marks (5%), and receive comments on what you've done. We will not otherwise grade the assignment at this point, only comment on it.

      Part 1: Properly turned in assignment
      Remember that 5% of your assignment grade is for correctly submitting your work! For more information about how to correctly submit, see this page on the class website.

      Part 2: Render the input triangles, textured but without lighting
      Use WebGL to render unlit triangles, giving each fragment its textured color. Do not perform lighting. You will have to use the fragment shader to find the appropriate color in the texture. UV coordinates are provided with triangle sets.
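      The fragment shader for this part can be very small; a sketch with illustrative names, assuming the per-vertex uvs are interpolated into a varying:

          const fShaderSrc = `
              precision mediump float;
              varying vec2 vUV;             // interpolated from the uvs in the input file
              uniform sampler2D uTexture;
              void main(void) {
                  gl_FragColor = texture2D(uTexture, vUV);
              }
          `;
          // per set, before drawing:
          // gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, setTexture); gl.uniform1i(uTextureLoc, 0);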

      Part 3: Render with texture and lighting
      Improve your renderer to shade fragments by mixing texture with lighting. A simple approach is modulation, which uses the lit fragment color Cf to scale the texture color Ct: C = Cf·Ct. You can find more ideas here. Toggle across at least two light/texture blending modes (e.g. replace and modulate) when the b key is pressed.
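      A fragment-shader sketch of the toggle, driven by a uniform flipped in the b key handler; the lit-color placeholder and all names are illustrative:

          const fShaderSrc = `
              precision mediump float;
              varying vec2 vUV;
              uniform sampler2D uTexture;
              uniform int uBlendMode;                       // 0 = replace, 1 = modulate; set from the b key
              void main(void) {
                  vec4 Ct = texture2D(uTexture, vUV);
                  vec3 Cf = vec3(1.0);                      // stand-in: replace with your Blinn-Phong result
                  vec3 c = (uBlendMode == 0) ? Ct.rgb       // replace: texture only
                                             : Cf * Ct.rgb; // modulate: C = Cf * Ct
                  gl_FragColor = vec4(c, 1.0);
              }
          `;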

      Part 4: Render with texture, lighting and transparency
      Improve your renderer further by adding transparency (alpha) to its rendering. To avoid transparent objects occluding other objects, you will have to first render opaque objects with z-buffering on, then transparent objects with the z-write component of z-buffering off (gl.depthMask(false)).
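      In WebGL terms the two passes look roughly like this sketch (the set lists and draw helper are illustrative stand-ins for your own code):

          gl.enable(gl.DEPTH_TEST);
          gl.enable(gl.BLEND);
          gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

          gl.depthMask(true);                      // pass 1: opaque sets with full z-buffering
          opaqueSets.forEach(drawTriangleSet);

          gl.depthMask(false);                     // pass 2: transparent sets, z-test on but z-writes off
          transparentSets.forEach(drawTriangleSet);
          gl.depthMask(true);                      // restore for the next frame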

      Part 5: Make it your own
      In Parts 1-4, you strove to make your image look "correct". Now you should use the techniques you have learned (texturing, transparency) to make a new image that is "interesting". To earn full credit for Part 5, all that you must achieve is to make your image substantially different from that produced by the shell repo input, and from the imagery of your fellow students. Your "interesting" image should appear after you press the exclamation mark (!). 


      EXTRA CREDIT GRADING: 
      The extra credit components we suggest for this assignment are:
      • 461: ½% — support transparent textures
      • 461: ½% — support multitexturing
      • 461: 2%  — 561: 1% — improve transparency correctness with the painter's algorithm
      • 461: 4%  — 561: 2% — improve transparency correctness further with a BSP tree
      Other extra credit is possible with instructor approval. You must note any extra credit in your readme.md file, otherwise you will likely not receive credit for it.

      Extra credit: support transparent textures
      Make use of the alpha component in textures during rendering. Use this to give your models irregular outlines. There are textures in the assignment repo that will produce this effect.

      Extra credit: support multitexturing
      Combine multiple textures when performing lighting of a model. For example, you could perform light mapping. We will include a texture in the assignment repo that supports light mapping.

      Extra credit: improve transparency correctness with a partial sort
      Sort your transparent triangles by depth to make transparency more correct. You must sort before rendering. You may have to issue separate draw calls to ensure GPU parallelism does not undo your ordering.
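      A sketch of that sort, assuming a helper of your own that returns a triangle's centroid depth after the model and view transforms (larger meaning farther from the eye):

          transparentTriangles.sort(function (t1, t2) {
              return centroidViewDepth(t2) - centroidViewDepth(t1);   // back to front
          });
          // then draw them in this order, one draw call per triangle or small batch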

      Extra credit: improve transparency correctness further with a BSP tree
      Sort your transparent triangles with a BSP tree to improve transparency further. This will split triangles when they intersect.

      Sample output

      Textured with texture transparency and lighting.


      Textured with texture transparency, triangle transparency and lighting.


      With ellipsoids.

      Program 3 2023: Rasterization

       Partial Due: 11:59pm, Monday Oct 16

      Final Due: 11:59pm, Wednesday Oct 25

      Goal: In this assignment you will practice basic modeling and implement transforms and lighting on 3D objects using the WebGL rasterization API.

      Submission: Submit your assignment using this Google Form.


      BASIC GRADING:
      The main components of this programming assignment are:
      • 5% Part 0: partial feedback
      • 5% Part 1: properly turned in assignment
      • 10% Part 2: render the input triangles, without lighting
      • 25% Part 3: light the triangles
      • 20% Part 4: interactively change view
      • 5% Part 5: interactively select a model
      • 20% Part 6: interactively transform the triangles
      • 10% Part 7: make it your own
      • Participation: Receive participation credit (outside of this assignment) for posting images of your progress, good or bad, on the class forum!

      General:
      You may (optionally) work with one partner on this assignment. You should each turn in the same code. 

      You will only render triangles in this assignment, described in the same sorts of JSON input files used in the first assignment. We will test your program using several different input files, so it would be wise to test your program with several such files. The input files describe arrays of triangles using JSON. An example input file resides at https://ncsucgclass.github.io/prog3/triangles.json. When you turn in your program, you should hard-code these URLs as the locations of the input triangle files — they will always be there. While testing, you should use a different URL referencing a file that you can manipulate, so that you can test multiple triangle files. Note that browser security makes loading local files difficult, so we encourage you to access any input files with HTTP GET requests.

      We provide a small shell in which you can build your code. You can run the shell here, and see its code and assets here. The shell shows how to draw triangles using WebGL without any model or view transform, and how to parse the input triangles.json file. It also shows how to use animation callbacks to render multiple image frames.

      The default view and light are as in the first assignment. The eye is at (0.5,0.5,-0.5), with a view up vector of [0 1 0] and a look at vector of [0 0 1]. Locate the window a distance of 0.5 from the eye, and make it a 1x1 square normal to the look at vector and centered at (0.5,0.5,0), and parallel to the view up vector. With this scheme, you can assume that everything in the world is in view if it is located in a 1x1x1 box with one corner at the origin, and another at (1,1,1). Put a white (1,1,1) (for ambient, diffuse and specular) light at location (-0.5,1.5,-0.5).

      This is an individual or partnered  assignment, no exceptions. That said, we encourage you to help one another. Feel free to suggest how other students might solve problems, and to help them debug their code — just don't write their code for them. The code you turn in should still be your own or your single partner's (except for the shell). This is a simple assignment, and should not need other third party libraries. As always, if you are ever uncertain if the help you want to give or the code you want to use is permissible, simply ask me or the TA. For information about how to correctly submit, see this page on the class website.

      Part 0: Partial feedback
      You should turn in an "ugly," incomplete version of your program by Monday October 16. If you simply turn in a copy of our shell, you will get half credit (2.5%). If you actually do something to visibly change the shell's output, you will receive full marks (5%), and receive comments on what you've done. For example, if you turn in a complete, first attempt at the assignment, we will tell you in text what is working, and what isn't, so you can raise your final score. We will not otherwise grade the assignment at this point, only comment on it.

      Part 1: Properly turned in assignment
      5% of your assignment grade is just for correctly submitting your work! For more information about how to correctly submit, see this page on the class website.

      Part 2: Render the input triangles, without lighting
      Use rasterization to render unlit triangles, giving each triangle its unmodified diffuse color (e.g., if the diffuse color of the triangle is (1,0,0), every pixel in it should be red). You will have to use vertex shaders to perform viewing and perspective transforms, and fragment shaders to select the diffuse color. We recommend the glMatrix library for creating these transforms.

      Part 3: Light the triangles
      Shade the triangles using per-fragment shading and the Blinn-Phong illumination model, using the reflectivity coefficients you find in the input files. Use triangle normals during lighting. Your fragment shaders will perform the lighting calculation.

      Part 4: Interactively change view
      Use the following key to action table to enable the user to change the view:
      • a and d — translate view left (a) and right (d) along view X
      • w and s — translate view forward (w) and backward (s) along view Z
      • q and e — translate view up (q) and down (e) along view Y
      • A and D — rotate view left (A) and right (D) around view Y (yaw)
      • W and S — rotate view forward (W) and backward (S) around view X (pitch)
      To implement these changes you will need to change the eye, lookAt and lookUp vectors used to form your viewing transform.

      Part 5: Interactively select a model
      Use the following key to action table to interactively select a certain model:
        • left and right — select and highlight the next/previous triangle set (previous off)
        • space — deselect and turn off highlight
        A triangle set is one entry in the input triangle array. To highlight, uniformly scale the selection by 20% (multiply x, y and z by 1.2). To turn highlighting off, remove this scaling. You will have to associate a transform matrix with each triangle set to maintain state, and apply this transform in your vertex shaders. glMatrix will also be helpful here.

        Part 6: Interactively transform models
        Use the following key to action table to interactively transform the selected model:
        • k and ; — translate selection left (k) and right (;) along view X
        • o and l — translate selection forward (o) and backward (l) along view Z
        • i and p — translate selection up (i) and down (p) along view Y
        • K and : — rotate selection left (K) and right (:) around view Y (yaw)
        • O and L — rotate selection forward (O) and backward (L) around view X (pitch)
        • I and P — rotate selection clockwise (I) and counterclockwise (P) around view Z (roll)
        Translate the model after you rotate it (so the model rotates around itself), and after the highlighting scale (see above, so the model doesn't translate as it scales).

        Part 7: Make it your own
        In Parts 1-6, you strove to make your image look "correct". Now you should use the techniques you have learned (transforms, lighting and WebGL rasterization) to make a new image that is "interesting". To earn full credit for Part 7, all that you must achieve is to make your image substantially different from that produced by the shell repo input, and from the imagery of your fellow students. Your "interesting" image should appear after you press the exclamation mark (!).

          EXTRA CREDIT GRADING: 
          The extra credit components we suggest for this assignment are below:
          • 461: ½% — arbitrarily sized viewports
          • 461: ½% — off-axis and rectangular projections
          • 461: ½% — multiple lights at arbitrary locations
          • 461: 1% — 561: ½% — smooth shading with vertex normals
          • 461: 3% — 561: 1% — render ellipsoids
          Students in 561 should not perform components that will not earn extra credit. Other components are possible with instructor approval. You must note any extra credit in your readme.md file, otherwise you will likely not receive credit for it.

          Extra credit: Arbitrarily sized viewports 
          Accept a new square canvas (viewport) width/height through your UI. Size your canvas to match.

          Extra credit: Support off-axis and rectangular projections 
          Accept new window parameters in viewing coordinates through your UI (left, right, top, bottom). Adjust your projection matrix to this new window, and render the scene.

          Extra credit: Multiple and arbitrarily located lights
          Read in an additional lights.json file that contains an array of objects describing light location and color. Note that these lights will have distinct ambient, diffuse and specular colors. Render the scene with all of these lights. You can find an example lights.json file here. Assume that the input lights file will always reside at this URL when you turn in your code.

          Extra credit: Smooth shading with vertex normals 
          Using only triangle normals, your curved shapes will look disappointingly faceted. To represent curvature more accurately, you need vertex normals. When you read in triangles, check for vertex normals in the input file. As you apply the composited modeling, viewing and projection matrices to your vertices, apply the inverse transpose of the modeling transform to your vertex normals. During lighting, use these normals rather than the face normal. The rasterizer will interpolate them for you. We will provide an example json file with a curved shape on request.

          Extra credit: Render ellipsoids
          Render ellipsoids as described in input. You can find an example ellipsoids.json file here. There are no ellipsoid primitives available in WebGL, so you will have to build an ellipsoid out of triangles, then transform it to the right location and size. You can do this statically with a hardcoded sphere model, or procedurally with a latitude/longitude parameterization. Again you will have to use vertex shaders to perform viewing and perspective transforms, fragment shaders to select color. The ellipsoids should be shaded like triangles, and should use vertex normals if you are claiming that extra credit.