Hey all! I hope June has been kind to you :) Between playing with the band and working on the topic of this devlog, I've been keeping well occupied.
Let's cut to the chase... Nyctophobia is the in-development codename of a videogame that I'm building from the ground up with the Odin programming language and the help of the SDL3 library. It's been an absolute learning cliff, getting to grips with GPU programming and leveraging SDL3's Vulkan API to make the most of the available hardware, but I've found my footing--and by the end of this next test project I should be well-equipped to handle the real deal.
Test project? How many test projects have you made??

Ok, it's not that bad--I'm on my fourth, and that's the topic of this devlog! I'd like to showcase what I've been up to so far :>
This was my first test project, and mostly consisted of reading SDL3 and Vulkan
API documentation, translating example C code into Odin (and learning Odin), and reading about
GPU architecture and standard practices. The initial objective was just to get a
shaded triangle, but it evolved into rendering a textured quad.
Despite the simple-looking result,
a lot of configuration and setup went into making it happen, and it was a useful introduction
to GPU programming!
After the first test project, I needed a break from the GPU stuff--so I
decided to dust off my linear algebra skills and make a 3D wireframe software renderer
using Odin's built-in `linalg` module and SDL3's `Renderer` API.
Put roughly, the system works by taking each model's instance-space vertices (relative to the model's local origin)
and transforming them into world-space positions using the object's translation vector
and transformation matrix. The world-space positions are then transformed into camera-space
positions, where the camera's origin sits at z=-1 and it looks along the +z axis. Each camera-space position
is then projected onto the z=0 (XY) plane by drawing a line from the camera origin at (0, 0, -1)
to that position and solving for the line's intersection with the plane.
Finally, I normalize the projected points based on the size of the plane, and multiply
by the screen dimensions to determine where each point should land on the screen. From there it's
just a matter of drawing the connecting lines to render the wireframes.
TLDR:
It's a mouthful, but it's not too hard to wrap your head around conceptually if you
imagine each coordinate change as a simple shift in perspective. The pipeline then becomes:

instance space -> world space -> camera space -> projection plane -> screen space
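Here's a rough sketch of that pipeline in Odin. To be clear, this is a simplified toy--identity rotation, a no-rotation camera, hard-coded plane and window sizes, and none of `linalg`'s helpers--not the actual renderer code:

```odin
package main

import "core:fmt"

Vec3 :: [3]f32
Mat3 :: matrix[3, 3]f32

CAMERA_Z :: -1.0 // the camera origin sits at z = -1, looking along +z

// Instance space -> world space: rotate/scale by the model's transform
// matrix, then offset by its translation vector.
to_world :: proc(v: Vec3, transform: Mat3, translation: Vec3) -> Vec3 {
	return transform * v + translation
}

// World space -> camera space: for a camera with no rotation this is
// just an offset by the camera's position.
to_camera :: proc(v: Vec3, camera_pos: Vec3) -> Vec3 {
	return v - camera_pos
}

// Camera space -> the z = 0 plane: intersect the line from (0, 0, -1)
// through the point with the XY plane. Solving for the intersection
// gives t = 1 / (z + 1), so the projected point is (x*t, y*t).
project :: proc(v: Vec3) -> [2]f32 {
	t := 1.0 / (v.z - CAMERA_Z)
	return [2]f32{v.x * t, v.y * t}
}

// Projection plane -> pixels: normalize by the plane's half-extent,
// then scale and offset into window coordinates (y is flipped, since
// screen y grows downward).
to_screen :: proc(p: [2]f32, plane_half_size, screen_w, screen_h: f32) -> [2]f32 {
	nx := p.x / plane_half_size
	ny := p.y / plane_half_size
	return [2]f32{
		(nx*0.5 + 0.5) * screen_w,
		(0.5 - ny*0.5) * screen_h,
	}
}

main :: proc() {
	identity := Mat3{
		1, 0, 0,
		0, 1, 0,
		0, 0, 1,
	}
	v_world  := to_world(Vec3{1, 1, 0}, identity, Vec3{0, 0, 5})
	v_cam    := to_camera(v_world, Vec3{0, 0, 0})
	v_screen := to_screen(project(v_cam), 1.0, 1280, 720)
	fmt.println(v_screen) // pixel position of the projected vertex
}
```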
The purpose of this project was twofold! 1: I wanted to mess around with a raycast-driven
lighting algorithm I'd been thinking about, and 2: I wanted to make a tessellation algorithm
capable of handling concave polygons for funsies. The project offered plenty of insight into how much
I'll be able to get out of the CPU during real-time lighting calculations, and reinforced
the importance of leveraging the GPU for tessellation and rasterization.
Lighting is a core part of Nyctophobia, so I need to get it right without
sacrificing accuracy or performance.
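For a taste of what those raycasts involve: at the heart of it, every light ray has to be tested against polygon edges, which is a ray-versus-segment intersection. The snippet below is a generic version of that test (not my actual lighting code), just to show the kind of per-ray, per-edge math the CPU ends up chewing through:

```odin
package main

import "core:fmt"

Vec2 :: [2]f32

// 2D cross product (the z component of the 3D cross product).
cross2 :: proc(a, b: Vec2) -> f32 {
	return a.x*b.y - a.y*b.x
}

// Casts a ray (origin + t*dir, t >= 0) against the segment a-b.
// Returns the distance t along the ray and whether the segment was hit.
ray_vs_segment :: proc(origin, dir, a, b: Vec2) -> (t: f32, hit: bool) {
	s     := b - a
	denom := cross2(dir, s)
	if abs(denom) < 1e-6 {
		return 0, false // ray and segment are parallel
	}
	ao := a - origin
	t   = cross2(ao, s) / denom   // distance along the ray
	u  := cross2(ao, dir) / denom // position along the segment (0..1)
	hit = t >= 0 && u >= 0 && u <= 1
	return
}

main :: proc() {
	// A ray pointing straight right, cast against a vertical wall at x = 3.
	t, hit := ray_vs_segment(Vec2{0, 0}, Vec2{1, 0}, Vec2{3, -1}, Vec2{3, 1})
	fmt.println(hit, t) // hits at t = 3
}
```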
Unfortunately, I've run out of fun little software-rendered sidequests to embark
upon... all that's left is the biggun'. I'm currently working on the first iteration
of the proper Nyctophobia rendering framework! It will use GPU instancing
to batch and draw large amounts of geometry efficiently, and
it will drive Nyctophobia's unique 2D lighting engine by calculating and compositing
lighting textures.
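To make "instance-based rendering" a bit more concrete: the idea is that each object in a batch only contributes a small per-instance record, and the whole batch is uploaded to one GPU buffer so it can be drawn with a single instanced draw call. The layout below is purely hypothetical--placeholder field names, not the real framework's data--but it's roughly the shape of what gets shipped to the GPU:

```odin
package main

import "core:fmt"

// Hypothetical per-instance record: one per object in a batch.
Instance_Data :: struct {
	world_from_model: matrix[4, 4]f32, // per-instance transform
	uv_rect:          [4]f32,          // atlas region for this instance
	tint:             [4]f32,          // per-instance colour / light tint
}

main :: proc() {
	// CPU-side batch, rebuilt each frame and uploaded as one GPU buffer,
	// so everything in it can be drawn with a single instanced draw call.
	batch: [dynamic]Instance_Data
	append(&batch, Instance_Data{tint = {1, 1, 1, 1}})
	fmt.println(len(batch), "instance(s) ready to upload")
}
```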
Anywhoo, thanks for reading, and until next time--take care! <3
-N