# log

Probably a good idea to note down what troubles I ran into, in case I need to do this setup in the future.

## Setting up the environment

- To install GLFW, I just downloaded it from the site. GLFW version 3.3.8.
  - Make sure to use the right libraries! I incorrectly assumed my MinGW was 64-bit, but it's actually 32-bit. I thought it was a problem with the linker for the longest time, but everything resolved itself the moment I replaced the 64-bit libraries with 32-bit ones.
  - Furthermore, make sure to link the `gdi32` library. The GLFW build guide mentions this briefly, but it's not obvious at first glance.
- To install GLAD, I used this tool. GL version 3.3, and Core Profile.
  - Very important! I needed to put `src/glad.c` inside my project. It was a single phrase in this resource, and I completely missed it.
- GLM is header-only, so no libraries are needed. Version 0.9.9.9.
  - Not sure exactly what happened, but something went funky during unpacking and incorrect headers got placed for a few files (common.hpp, integer.hpp, packing.hpp, and vector_relational.hpp), causing include path errors. Fixed by just re-installing.

While debugging, I found that `g++ -print-search-dirs` was helpful for listing all (default) library paths, and `g++ -E -x c++ - -v` for listing all include paths.

## A basic render

I decided to set up the majority of the project structure, and get a simple square rendered on screen.

- The project consists of an app, which manages the window. The event loop is in `main`.
- The app will render a scene.
- The scene will manage components. Components will be basic drawable primitives: rectangles, circles, curves. Most of the program logic should go into the scene.
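As a rough sketch of that layering (all names here are illustrative placeholders, not necessarily the project's actual classes):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// A component is a basic drawable primitive (rectangle, circle, curve).
struct Component {
    virtual void draw() const = 0;
    virtual ~Component() = default;
};

struct Rectangle : Component {
    void draw() const override { /* issue the GL draw calls here */ }
};

// The scene owns the components and most of the program logic.
struct Scene {
    std::vector<std::unique_ptr<Component>> components;
    void render() const {
        for (const auto& c : components) c->draw();
    }
};

// The app manages the window and renders one scene per frame.
struct App {
    Scene scene;
    void frame() { scene.render(); }  // called from the event loop in main
};
```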

Some good resources I referenced:

- LearnOpenGL, which is a pretty classic resource for learning OpenGL. The GLFW Getting Started guide was useful as well.
- GLFW OpenGL Base, a repo I found that sets up a basic GLFW and OpenGL project. I found it useful for figuring out how to set up my shaders and file structure in general.
- Motion Canvas, a vector graphics animation library.
- Model View Projection, a nicely-animated article about model-view-projection matrices.

## Initializing shaders and components

Since this project is pretty much me experimenting with graphics programming, of course I'm going to be implementing some shader shenaniganery. Accordingly, each component (just circle, line, and arrow right now) has a dedicated shader program.

The main issue is that I can't compile the shaders without initializing GLFW first, so they can't be defined inline along with the class. Thus, `src/components/core.cpp` defines functions that compile the shader programs, which are called in `app::init_opengl()`. (Since we have to reassign them, the shader programs can't be `const`... which is not the best, but it's the simplest workaround I have at the moment. I will probably change this later by adding more functionality to `shader`.)
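The deferred-initialization pattern looks something like this sketch, with the GL calls replaced by a stub so it stands alone (`compile_program`, `init_shader_programs`, and the handle names are hypothetical, not the actual `core.cpp` API):

```cpp
#include <cassert>
#include <string>

using ProgramHandle = unsigned int;

// Stub standing in for "compile and link this shader source". The real
// version would call glCreateProgram/glCompileShader/glLinkProgram, which
// all require a live OpenGL context.
static ProgramHandle compile_program(const std::string& /*vert*/,
                                     const std::string& /*frag*/) {
    static ProgramHandle next = 1;
    return next++;
}

// Non-const handles: they can't be filled in until after GLFW/GLAD init,
// so they start out as 0 (an invalid program).
ProgramHandle circle_program = 0;
ProgramHandle line_program   = 0;

// Called once from app::init_opengl(), after the context exists.
void init_shader_programs() {
    circle_program = compile_program("/* circle vs */", "/* circle fs */");
    line_program   = compile_program("/* line vs */",   "/* line fs */");
}
```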

A similar problem arises when creating the default components, so that happens here as well. Currently, each component has its own VAO and VBO, which definitely does not scale and should be changed later, but even if I coalesced everything, I would still need to initialize the VAOs/VBOs in a similar manner.

## Graphs

Instead of having the scene manage components directly, I decided to add one final layer of abstraction: the graph. Ideally, there should be a fair amount of logic as a graph is rendered and algorithms are run, so making dedicated objects for nodes and edges seemed like the right choice.

These objects can hold data (vertex/edge weights) and keep track of the relevant connections. The implementation is quite barebones, with the only notable thing being that edges should always stay attached to their endpoints, so some component management needs to be done there.
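A barebones version of those objects could look like the following sketch (illustrative names, with edges holding pointers to their endpoint nodes so that moving a node keeps its edges attached for free):

```cpp
#include <cassert>
#include <vector>

// A node holds its position plus whatever per-vertex data we need.
struct Node {
    float x = 0, y = 0;
    int weight = 0;
};

// An edge refers to its endpoint nodes rather than copying their
// positions, so a dragged node implicitly updates every incident edge.
struct Edge {
    Node* a = nullptr;
    Node* b = nullptr;
    int weight = 0;
    float ax() const { return a->x; }  // drawn endpoint, always current
    float ay() const { return a->y; }
};

struct Graph {
    std::vector<Node> nodes;  // note: pointers into this vector must not
    std::vector<Edge> edges;  // outlive a reallocation of `nodes`
};
```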

## Interactability

Obviously, the user should be able to drag objects around with their mouse. How do we tie this into the existing implementation?

- The app handles the raw inputs with GLFW callbacks,
- which are passed to the scene in the form of select and drag events,
- and each component has `hit` and `drag` functions which deal with hitbox detection and per-frame mouse movement, respectively.

Along with the endpoint updating mentioned above, this allows us to spawn in a graph with draggable points attached.
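The component-level half of that pipeline might look roughly like this sketch (names are illustrative; the real `hit`/`drag` presumably live on the component base class):

```cpp
#include <cassert>

struct Circle {
    float x = 0, y = 0, r = 1;

    // Hitbox detection: is the cursor inside this circle?
    bool hit(float mx, float my) const {
        float dx = mx - x, dy = my - y;
        return dx * dx + dy * dy <= r * r;
    }

    // Per-frame mouse movement: shift by the cursor delta the scene
    // computed from this frame's drag event.
    void drag(float dx, float dy) {
        x += dx;
        y += dy;
    }
};
```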

## Other unrelated technical notes

I switched my MinGW to a 64-bit version. Also, `glad.c` is no longer in the `.gitignore`, so it gets committed, which will probably help with first-time setup.

Also, (0, 0) is now the center of the screen rather than the bottom-left corner. Orientation stays unchanged.
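The remapping is just a translation of the origin; a hypothetical helper would look like:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Map window coordinates (origin at the bottom-left, y up) to the new
// convention with (0, 0) at the center. Orientation is unchanged.
Vec2 to_centered(float px, float py, float width, float height) {
    return { px - width / 2.0f, py - height / 2.0f };
}
```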

## Animation

Okay, now we need some way to animate the things on-screen. Right now, the "animations" for clicking (changing the object's color) and dragging (changing the object's position) are basically hard-coded into the system. We'd like a way to make this more modular, especially since later on we want to make animations data-driven.

Enter `var` and `anim`. A `var` is a variable value, which emits values according to the attached animation. (I've tried to make the API as nice as possible, so that `var`s can be treated implicitly as values when assigning or doing calculations.) A default `var`/`anim` can be instantiated with a single value, representing a constant; however, much more complex behavior is possible, like tweening between two values, lerp smoothing to a target, or just following a function exactly.
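A minimal sketch of that idea, with hypothetical names (`Var`, `tween`) and `std::function` standing in for whatever the real `anim` machinery is:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// An animation maps time to a value.
using Anim = std::function<float(float)>;

struct Var {
    Anim anim;
    float value = 0;

    // A bare value acts as a constant animation.
    Var(float constant) : anim([constant](float) { return constant; }) {}
    Var(Anim a) : anim(std::move(a)) {}

    void update(float t) { value = anim(t); }

    // Implicit conversion so a Var can be used directly in calculations.
    operator float() const { return value; }
};

// Example anim: linear tween from a to b over t in [0, 1].
Anim tween(float a, float b) {
    return [a, b](float t) { return a + (b - a) * t; };
}
```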

Most of the code has been refactored to use `var`s now, although we still need an animation controller to handle higher-level state-based animation. That part should also be extremely customizable in the future, to support all different kinds of animation as one sees fit.

One big issue I'm seeing right now, though: what happens when we want multiple animations to take effect at the same time? How are we going to handle blending states together? What happens if multiple systems (e.g. user input vs. data-based states vs. physics engine) want to take control of the same variable at once? Are there ways to restore default values or complete states if something goes wrong? Many hard questions arise (which will be answered as I keep implementing).