Ryan Fleury
Recently, I've been working on the game's asset system. I'm doing this to start allowing other developers to have more immediate power in the process of making the game; if my artist would like to add a new texture for a new static object to decorate the world more, they should be able to do that without having to go through me. Similarly, if my music artist who is working on background music is spending time in the world and finds that a new ambient music pattern would suit the environment well, they should be able to add it and see the changes immediately.

When I started approaching this problem, however, I soon realized that it wasn't necessarily straightforward; there are times when the game's compiled code needs to reference a texture directly (for example, when drawing or animating the player). The code referenced constants that were the indices of certain textures. This is not unreasonable; from the perspective of whoever is writing the code, it is perfectly readable, and it is as fast as hard-coding the index of the texture (because the constant is resolved to the equivalent literal at compile-time).

```c
enum
{
    // ...
    TEX_player,
    // ...
    MAX_TEX
};

Texture *player_texture = texture(TEX_player);
```

There's a problem, though. If I want my artist to be able to add new assets at run-time, I cannot guarantee the reliability of texture indices; they might very well change at run-time. In that case, the code would still refer to the texture by an index equal to whatever TEX_player resolves to, which would not work. Anything more complicated, however, would add run-time overhead, perhaps a lot of it.

This is when that mysterious light bulb appeared over my head. When I'm working on the game, I don't care about performance to the extent that I would when preparing a shipped version; I want the game to run quickly enough to test effectively, and I don't care about much else. I already had build modes for both a release version and a debug version; I could do slow (but convenient) things in debug mode, and translate those things into something fast in release mode.

There are a few challenges here. Firstly, there is the actual implementation of the dynamic asset sets. This led me to the idea of "asset tags", which are a way to express intent for an asset without binding it to a specific texture. There had to be some way within the asset system's API to say "get me the player texture, whatever it turns out to be"; tags allow the programmer to do this. Secondly, there is a code-flexibility problem. The following could work:

```c
#if BUILD_RELEASE
Texture *player_texture = texture(TEX_player);
#else
Texture *player_texture = texture_from_tag_table("player");
#endif
```

...but that is extremely cumbersome, and assumes that "player" refers to TEX_player. In this case, that wouldn't change (I will always have the player texture map to "player"), but this might not be the case elsewhere, especially after modifications have been made by my artist (who might add new tags, textures, etc.). This is not a maintainable solution.

My thoughts took me to the realization that I had an API problem: what is an API that resolves to a look-up into a set of tags at run-time in developer mode, but resolves to a constant at compile-time in release mode?

I solved this problem by introducing the following:

```c
#define assets_get_texture_by_tag(assets, name) // Insert something here...

// This can be used like:
Texture *player_texture = assets_get_texture_by_tag(assets, player);
```

What is the macro assets_get_texture_by_tag defined as, then?

```c
#if BUILD_RELEASE
#define assets_get_texture_by_tag(assets, name) (&(assets)->textures[TEX_TAG_ ## name])
#else
#define assets_get_texture_by_tag(assets, name) (&(assets)->textures[asset_index((assets)->texture_tag_table, #name)])
#endif
```

In release mode, it resolves to a pointer to the texture that's at index TEX_TAG_player, but in developer mode, it performs the needed tag look-up.
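In developer mode, that asset_index call has to resolve the tag string to an index at run-time. The post doesn't show its implementation, but a minimal sketch (all struct layouts and names here are my own assumptions, not the game's code) might be a linear search over a table of tag names:

```c
#include <assert.h>
#include <string.h>

#define MAX_TEXTURE_TAGS 256

// Hypothetical layout for the tag table; the real one may differ.
typedef struct TextureTagTable
{
    int count;
    const char *names[MAX_TEXTURE_TAGS]; // tag strings, e.g. "player"
    int indices[MAX_TEXTURE_TAGS];       // texture index each tag maps to
} TextureTagTable;

// Returns the texture index mapped to the given tag name,
// or 0 (treated here as a fallback texture) when the tag is unknown.
static int
asset_index(TextureTagTable *table, const char *name)
{
    for(int i = 0; i < table->count; i += 1)
    {
        if(strcmp(table->names[i], name) == 0)
        {
            return table->indices[i];
        }
    }
    return 0;
}
```

A linear scan is plenty for a handful of tags; a hash table would be the natural next step if the tag count grows.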

TEX_TAG_player, then, is just defined as the index of the texture that "player" mapped to at compile-time! This is done with a simple metaprogram that takes:

```
"player" : "player.png"
# etc.
```

...and generates:

```c
#define TEX_TAG_player TEX_player
```

...along with a file defining the indices for different textures (like TEX_player).
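The metaprogram's core transformation can be very small. Here's a hedged sketch, assuming the enum constant's name is TEX_ followed by the file's stem (emit_tag_define and its fixed buffer sizes are my own invention, not the post's code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// Turns one tag-file line, e.g.   "player" : "player.png"
// into one generated line:        #define TEX_TAG_player TEX_player
// Returns 1 on success, 0 if the line doesn't match the expected shape.
static int
emit_tag_define(const char *line, char *out, size_t out_size)
{
    char tag[64] = {0};
    char file[64] = {0};
    if(sscanf(line, " \"%63[^\"]\" : \"%63[^\"]\"", tag, file) != 2)
    {
        return 0;
    }
    char *dot = strrchr(file, '.');
    if(dot)
    {
        *dot = 0; // strip the extension: "player.png" -> "player"
    }
    snprintf(out, out_size, "#define TEX_TAG_%s TEX_%s\n", tag, file);
    return 1;
}
```

The real metaprogram would loop over the tag file line by line, writing the generated #defines (and the TEX_* index definitions) into a header that the game includes.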

The above system produces texture look-ups that are just as fast as before in release mode, but allows for run-time modifications in developer mode! As the system is currently implemented, these modifications can change which texture each tag maps to, or the contents of the texture itself.

Here are a few videos of this system in action:

As you can see, as I make modifications to the tags file, the texture being used for the game's splash screen changes at run-time (because the game is referring simply to the texture that is mapped to "splash", instead of any specific texture directly). Additionally, the game reloads textures when they have been modified.
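The post doesn't show how modified textures are detected, but a common approach (and an assumption on my part, not necessarily what the game does) is to remember each texture file's modification time and reload when it changes:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

// Hypothetical per-texture record; the game's real bookkeeping may differ.
typedef struct TextureRecord
{
    const char *path;
    time_t last_modified;
} TextureRecord;

// Returns 1 when the file on disk is newer than what was last loaded,
// updating the stored timestamp so the reload only triggers once.
static int
texture_needs_reload(TextureRecord *record)
{
    struct stat info;
    if(stat(record->path, &info) == 0 && info.st_mtime > record->last_modified)
    {
        record->last_modified = info.st_mtime;
        return 1;
    }
    return 0;
}
```

The game would poll something like this once per frame (or on a timer) for each loaded texture, and re-upload the pixels whenever it returns 1.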

That's the new asset system. It seems to be a great addition, and will hopefully benefit the productivity of the game's developers. I hope you enjoyed the read!

Ryan

Ryan Fleury
Hey, everyone!

I just uploaded a new devlog diving into The Melodist's metaprogramming system. I hope you enjoy!

Ryan Fleury
Hey, everyone!

I just uploaded the newest Behind The Scenes devlog, in which I speak about some progress I've been making in developing the game's gameplay.

I hope you enjoy!

Ryan Fleury
As I've written about before, my primary roadblock in developing The Melodist has been making the game legitimately fun and interesting. I've discovered reasons why I want to make the game, and why I think the game is important, but I've struggled hugely over the past year in making a legitimately interesting game to play. This, of course, has been an extremely frustrating process.

I am known to distract myself with visuals, engine details, and other technical endeavors, but I've been trying to attack this problem in the past month through a lot of experimentation and thinking about what I want the game to feel and play like. Recently, I tried a very simple gameplay experiment that I actually found joy in, and I think it might be the first step that I am really quite happy with.

A video of this gameplay experiment follows.

(Thank the artists of the world, ladies and gentlemen, because my placeholder art is really not very good)

It has taken an extremely long time, but I think that those struggles may be ending. I wanted to share my experience for those who may find it useful, as I think there are some important lessons to be learned.

I am, and was, personally invested in and intrigued by the general principle of the game that I had invented: a world that reacts to music in certain ways. However, in the past year, when thinking about the gameplay, I had a tendency to stop myself short when trying to think of how I'd like such a world to be shown to players. I'd have ideas (and it's true that some of them really were not very good), but I'd come up with reasons why those ideas wouldn't be appropriate (even though I was intrigued by them).

A good example is the idea in the above video.

I was intrigued by the idea of different vines in the world growing, corresponding to the occurrence of different notes. This would make traversal of a landscape also play a melody, which is something I thought would feel great to experience (and sure enough, even with my placeholder vines that, well, don't really look like vines, it does feel great to experience). I stopped myself before pursuing this, though, telling myself that it would be a bad design decision to make the vines react arbitrarily to different notes, as it wouldn't be clear to the player.

This is obviously untrue. In the above video, it is very clear to the player what is happening because of the way that the world is presented to the player. Additionally, it is very possible to hint to the player potential reactions that might take place with colored lighting or other graphics.

I didn't know, however, that it was untrue before, because of my assumptions about the nature of the game as a final product. These assumptions were often unstated, but they were extremely harmful.

Deriving from the above, I will write a message to both my future self and any readers:

The important take-away here, I think, is simply to not be afraid to try something, especially when it is intriguing to you. Take care to identify any assumptions you are making about the game you are making. Are these assumptions good, or are they bad? Find the simplest possible image relating to your game that intrigues you, and chase it. Don't be afraid to fail.

Thanks for reading! Expect to hear more from me soon.

Ryan
Ryan Fleury
All software developers have always used abstractions, be it formatting data using a struct, writing a function, or reasoning about native processor instructions through a one-to-one textual representation. What is an abstraction, how do abstractions affect code, and are they useful?

To understand what an abstraction is in the context of software development, it must first be asserted that a programmer interacts with a program through source code; that is, a programmer does their job by writing sequences of characters that can be translated into some form of instruction, or set of instructions, to a machine. An abstraction, in the context of software development, is a change to the sequences of characters a programmer can type, such that the interface the programmer reasons about changes fundamentally. This is usually done in pursuit of more efficient software development through code reuse; that is, in fact, why concepts like "structs" and "functions" exist: they were created to allow a programmer to specify more with less code.

A C programmer might deal with many abstractions, including (but not limited to) variables, functions, and structs.

A "variable" is a method by which a programmer can consider some block of physical memory as representing some discrete entity that persists for some period of time. Consider the following:

```
Address | State of Physical Memory
------------------------------------
      0 | 00000000
      1 | 00000000
      2 | 00000000
      3 | 00000000
      4 | 00000000
      5 | 00000000
      6 | 00000000
      7 | 00000000
      8 | 00000000
      9 | 00000000
     10 | 00000000
     11 | 00000000
     12 | 00000000
     13 | 00000000
     14 | 00000000
     15 | 00000000
```

In C, when sizeof(int) == 4:

```c
int x;

// Suppose that x refers to the memory from
// address 0 to address 3 (occupying 4 addresses,
// when each address references a byte). Now,
// those 4 bytes can be referenced just using 'x',
// and their value can be modified with, say:

x = 5;
```

A "function" is a method by which a programmer can reuse a specific set of instructions. Consider the following:

```c
int x, y, z, result;

// Some complicated math operation (don't
// pay too much attention to it, it's totally arbitrary)
x = 5;
y = 1;
z = 3;
result = x*y - x/z + y*47 + z*23 + x%z + ~z * ~x;

// This operation might need to happen more than once,
// but maybe with different values:
x = 2;
y = 17;
z = 43;
result = x*y - x/z + y*47 + z*23 + x%z + ~z * ~x;

x = 4;
y = 12;
z = 1;
result = x*y - x/z + y*47 + z*23 + x%z + ~z * ~x;

x = 1;
y = 2;
z = 3;
result = x*y - x/z + y*47 + z*23 + x%z + ~z * ~x;
```

Now, suppose that the operation must change in order to meet the needs of the program. The programmer working on this bit of code cannot simply modify the operation in one place; they must modify four places in the code. In very large, complicated programs, this isn't maintainable, so a function is introduced:

```c
int do_a_complicated_math_operation(int x, int y, int z)
{
    return x*y - x/z + y*47 + z*23 + x%z + ~z * ~x;
}

// Somewhere else...
int result;
result = do_a_complicated_math_operation(5, 1, 3);
result = do_a_complicated_math_operation(2, 17, 43);
result = do_a_complicated_math_operation(4, 12, 1);
result = do_a_complicated_math_operation(1, 2, 3);
```

Now, if the math operation must be modified, the function can change, but the interface that one must interact with in order to retrieve the result of such an operation stays the same. The code must only change in one location.

A "struct" is a method by which a programmer can reuse a specific data format. Consider the following:

```c
// Data for a person
const char *name;
int age;
int salary;
int number_of_children;
bool will_buy_the_melodist;
bool will_subscribe_to_ryan_on_youtube;

name = "Ryan Fleury";
age = 20;
salary = SOME_LOW_CONSTANT;
number_of_children = 0;
will_buy_the_melodist = false;
will_subscribe_to_ryan_on_youtube = false;
print_out_person_data(name, age, salary, number_of_children,
                      will_buy_the_melodist);

name = "Handmade Network Community Member";
age = 32;
salary = 75000;
number_of_children = 2;
will_buy_the_melodist = true;
will_subscribe_to_ryan_on_youtube = true;
print_out_person_data(name, age, salary, number_of_children,
                      will_buy_the_melodist);

// Imagine having to do this several times, and imagine modifying
// print_out_person_data or the data required for a "person" in
// the program.
//
// The same data is required in all places, so a struct can be
// introduced. print_out_person_data can be modified to
// reason about the struct.

typedef struct Person
{
    const char *name;
    int age;
    int salary;
    int number_of_children;
    bool will_buy_the_melodist;
    bool will_subscribe_to_ryan_on_youtube;
} Person;

Person person1 = { "Some Name", 23, 1000000, 0, true, true };
print_out_person_data(&person1);

// Now, imagine having to modify the data for
// a person. It won't be nearly as laborious.
```

It follows from the above examples that, when an abstraction is introduced to code, the code can change in one of two ways.

The first possibility is that some capability over an operation or memory format is lost. When the Person struct was introduced, every piece of code that reasons about a Person had to store a person's data in a consistent format. When the function do_a_complicated_math_operation was introduced, the exact math operation could no longer be restructured at the site where the function is called; capability was lost. In the most extreme case, one can imagine an extraordinarily complicated operation being wrapped in a function that takes no arguments; at sites that call that function, there is almost no control over what takes place inside it.

The other possibility is that, if steps are taken to use abstractions while maintaining capability, the complexity of an operation or of a memory format's specification is not "hidden", as many will claim, but rather moved elsewhere. Imagine the opposite extreme of the aforementioned complicated zero-argument function: a function with infinitely many arguments, controlling every possible operation that could be performed within it.
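As a toy illustration of the two extremes just described (both functions are invented for this example): the zero-argument wrapper offers no control at the call site, while restoring control means the parameters, and their complexity, reappear there.

```c
#include <assert.h>

// No capability at the call site: everything is fixed inside.
static int
compute_fixed(void)
{
    return 5*1 + 3;
}

// Capability restored, but the complexity has moved to the call site,
// which must now know what x, y, and z mean.
static int
compute_flexible(int x, int y, int z)
{
    return x*y + z;
}
```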

The idea that abstractions have one of the two above effects on code rings true when very high-level, modern languages are examined; the amount of experience and expertise required to fully understand, say, JavaScript or Python is still extraordinarily high. The abstracted nature of these languages has done very little to reduce the complexity with which programmers are dealing; the complexity has instead moved elsewhere (in order to maintain some level of capability).

Are abstractions useful, then? The answer should follow from the earlier examples: yes. It is often useful to trade capability and complexity for code reuse, to promote productivity and reduce mental overhead; however, programmers must recognize that complexity cannot be reduced without losing capability. Even in the science-fiction utopia in which a computer perfectly interprets a sentence in a spoken language and performs the best possible operation in response, the capability is supposedly extremely high, so the complexity must have moved somewhere. It has, in fact, embedded itself in the complexity of spoken language and human communication; it is vital to understand that it has not disappeared.

This is all to make the point that it is impossible to maintain capability while also reducing complexity, and that the ivory tower of ever-increasing abstraction in pursuit of simplifying problems is therefore useless. It is perhaps most useful, then, to concern oneself only with the complexities of the most transferable skills, and those that are not arbitrary, so that the constraints under which one is working are grounded in reality and utility, rather than in the arbitrary decisions of another individual. Abstractions are undeniably useful, but the decision to introduce an abstraction should be a reasoned choice, not an assumed action.

In a language like C, it is true that the abstractions that one is working with are fundamentally defined not by reality but by humans; however, they have been structured around building a more useful tool to command a machine's hardware. The goal of the language and its abstractions is not to disregard hardware and its complexities entirely, but rather to make the hardware more convenient for a programmer to command (as there are many repeated operations that a programmer does when programming at a lower level). In other, more high level languages, the goal has shifted to abstract away the hardware entirely and instead force the programmer to reason about nebulous ideas born in the minds of others instead of in physical reality. To program effectively in those languages, then, one must concern themselves with the complexities of the mental models by which those languages were formed. It should follow that a deep understanding of these mental models is not transferable, and is more arbitrary.

In addition to the above implications of abstractions, there are also many performance implications, though this post focuses on the theoretical aspect of abstractions in particular.

I hope that this was helpful in promoting programmer reasoning about abstractions, why they should be used, and what sort of implications they can have on code.