If you've read my release post, you may remember that TWC heavily relies on code generation to allow for rapid updates in response to Twitter API changes. To refresh your memory, here's the basic idea:
- A script runs over the Twitter API documentation website to gather a schema of all of the Twitter API endpoints. This schema includes information about the URI, name, description, and inferred type of each endpoint.
- A set of templates ("template" in the pattern sense, not C++ templates) is defined in addition to the rest of the library framework code. These templates specify how each type is serialized out to a string or cURL parameter.
- A program written in C parses the JSON schema and the template code in order to produce A) the declarations of a function for each endpoint in the API and B) implementations of that function which serialize all of the parameters and call cURL with the correct URL and OAuth parameters.
So (dispensing with the contrived second-person pronouns), after several weeks on Mastodon I was musing about how I could adapt TWC to work with the Mastodon API. It didn't seem like it would be that difficult.
There were a few problems to overcome first, however:
- I'd need to convert the markdown specification of the API into a JSON schema.
- The URLs would not only need a different base, but would also have to be runtime-configurable, so that you could call different Mastodon instances without recompiling the library.
- Mastodon uses OAuth2 instead of OAuth1, so the authorization code was potentially going to need to change.
As it turned out, #1 and #3 were easy, but #2 requires enough changes that I haven't yet settled on an approach.
For #1, I adapted my Twitterdoc script to read the markdown file and parse everything out into the format specified by the JSON meta-schema. This is a short-term and hopefully temporary solution, because I want to approach the maintainers of the Mastodon project about building a /schema.json endpoint or something similar into Mastodon itself. That would short-circuit all my hacky text parsing, solve the API-freshness issue, and be a sustainable long-term solution.
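To give a sense of the target shape (these field names are illustrative guesses, not the actual meta-schema), one converted endpoint entry might look something like:

```json
{
  "path": "/api/v1/statuses",
  "method": "POST",
  "description": "Publish a new status.",
  "params": [
    { "name": "status", "type": "string", "required": true },
    { "name": "in_reply_to_id", "type": "id", "required": false }
  ]
}
```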
To address #2, as a hack to get things working, I took the easy route and just redefined the base URL macro in mastodon.h to be the API URL of Cybrespace, the Mastodon server I've been running. The problem is "difficult" (annoying, mostly) because of the way I structured TWC initially: all of the API URLs are concatenated together by the preprocessor as string literals. I will instead need to concatenate only the common path components of each URL, and fill in the domain part at runtime based on user configuration. Not a lot of work, but it remains as yet undone.
As for #3, it turned out to be very straightforward. OAuth2 dispenses with the complicated set of OAuth parameters and cryptographic signing; instead, the server issues a single access token that you include with each request. That made it very simple to write alternatives to twc_GenerateOAuthHeader and twc_OAuthHeaderMaxLength that simply return the user's access token with a string prefix, and to use those alternatives conditionally when compiling TWC as a Mastodon library.
And with that, it was working well enough to begin making successful API requests to Mastodon instances!
There's still much to be done, and I'll be posting updates as I continue to work on this. I also have a couple unrelated things in the pipeline that I may be posting more about in the coming weeks, so watch this space. But for now, thanks for reading!