rapiDex

Note: This is a WIP and subject to change and further development.

This project, under the working title rapiDex, is intended to produce a CMS which incorporates some key features of Draft, Gollum, TiddlyWiki, and Prose, and even Asana or, to some small extent, Apostrophe. This would include a minimalistic responsive interface, git-backed version control, and the ability to load dynamically as a single-page application, among other features drawn from these and similar notable applications.

Possibly the nearest kin to this concept would be CosmoCMS, but one key difference would be that the editing would directly interact with a git repository and that the version history would also be fetched from the repo, rather than the CMS keeping its own. The seamless editing interface style used in that app is also similar to the one in use at Medium and very similar to what rapiDex is after.

Why Another CMS?

The world of content management systems and publishing platforms is diverse and disparate, fraught with divergent notions of what is necessary and why. What they all share in common, however, is that they handle the heavy lifting involved in making raw content presentable, usually in a published fashion.

Among such systems, we find wiki platforms, the front-end of which straddles the line between static content (typically documentation of some kind) and a content editor, albeit almost universally in the rather clunky manner typical of blogging platforms. In fact, there is often little difference between a blog and a wiki beyond site layout and the slim distinction between pages and posts (both of which may also be referred to simply as articles).

The heart of a wiki is its version control system, but just as other aspects of a wiki's interface are baked into its platform, the version control is likewise gated for exclusive use by that platform.

Responsive front-end

We need the ability to manipulate the text and site directory on the front-end in real time, seamlessly and intuitively. Any user's changes should be retained in their own cache (or whatever other system of storage) while being checked against centrally and/or decentrally hosted content, with real-time feedback about the status of those differences. This should include a field for commenting on the changes and a button to submit them as a pull request. Once a pull request is submitted, its real-time status and comments should be immediately visible to that user. For all other users, such notifications may be visible depending on their role and settings, including the ability to see the changes of a pull request and to see that fork's changes implemented if desired.
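As a rough illustration of the local caching-and-diffing behavior described above, consider the sketch below. The storage key scheme, the `/raw/` endpoint, and the function names are all assumptions for illustration, not a settled rapiDex API:

```ts
// Minimal sketch: cache a user's draft locally and compare it against
// the upstream copy of the page fetched from the repository.

const DRAFT_KEY = (path: string) => `rapidex:draft:${path}`;

function saveDraft(path: string, text: string): void {
  localStorage.setItem(DRAFT_KEY(path), text);
}

async function fetchUpstream(path: string): Promise<string> {
  // Hypothetical raw-content endpoint backed by the git repository.
  const res = await fetch(`/raw/${path}`);
  if (!res.ok) throw new Error(`upstream fetch failed: ${res.status}`);
  return res.text();
}

async function draftStatus(path: string): Promise<"clean" | "ahead"> {
  const draft = localStorage.getItem(DRAFT_KEY(path));
  if (draft === null) return "clean";
  const upstream = await fetchUpstream(path);
  return draft === upstream ? "clean" : "ahead";
}
```

A real implementation would diff at finer granularity and surface the result in the editor chrome, but the shape of the bookkeeping would be roughly this.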

There should be little difference between the preview and editing interfaces besides the visibility of markdown formatting and the ability to toggle the visibility of frontmatter. Both, however, should be configurable by the user: palette (the ability to use Solarized, for instance), font, the availability of page-flipping in lieu of scrolling, and even things like typewriter scrolling and focus modes. The editing interface itself should only be visible to owners of the repo (individual users or organization members) or registered users who maintain a fork of the repo (GitHub members, for instance), and most likely all of those users will have to grant app permissions (although it might be possible to facilitate forking and other sorts of interactivity more broadly).
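Those presentation settings might reduce to a small per-user schema along these lines (the field names are illustrative only, not a final design):

```ts
// Sketch of the configurable presentation options described above.
interface ReaderPrefs {
  palette: "default" | "solarized-light" | "solarized-dark";
  fontFamily: string;
  navigation: "scroll" | "page-flip";
  typewriterScrolling: boolean;
  focusMode: boolean;
  showMarkdownSyntax: boolean; // the main preview/edit difference
  showFrontmatter: boolean;
}
```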

The app itself should probably only have access to repositories configured to use it, but if a user has multiple instances of rapiDex (or simply multiple projects within rapiDex), they should be able to switch to these within the interface (likely taking them to that instance’s own domain if it has one, or else its hash).

Rapid back-end

rapiDex should essentially piggyback on various deployment platforms like GitHub and Netlify to the extent that it interacts with HTTP at all. Its static site generation should aim for a rapid build time to enable the most seamless integration possible with such platforms' continuous deployment features. We would hopefully be able to use those platforms' APIs to retrieve the status of deployments; however, if a user's cache contains their own most current version of the content, and we are retrieving the status of that content directly from the repository, then perhaps the status of the static site's deployment is irrelevant. This is particularly true considering that no portion of the site's configuration (CSS or YAML, for instance) should be malleable from within the CMS interface.
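For GitHub specifically, retrieving deployment status could look something like the sketch below. The two REST endpoints are real GitHub v3 routes; the token handling, the newest-first ordering assumption, and the error policy are simplifications:

```ts
// Sketch: fetch the state of a repo's most recent deployment.
async function latestDeploymentState(
  owner: string,
  repo: string,
  token: string
): Promise<string> {
  const headers = {
    Accept: "application/vnd.github+json",
    Authorization: `Bearer ${token}`,
  };
  const base = `https://api.github.com/repos/${owner}/${repo}`;
  const deployments = await (
    await fetch(`${base}/deployments`, { headers })
  ).json();
  if (deployments.length === 0) return "none";
  // Assumes the API returns deployments and statuses newest-first.
  const statuses = await (
    await fetch(`${base}/deployments/${deployments[0].id}/statuses`, { headers })
  ).json();
  return statuses[0]?.state ?? "pending"; // e.g. "success", "in_progress"
}
```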

It could be that, in order to build the best possible prose-centric interface, static site generation should be more of an afterthought, perhaps only available once rapiDex branches out from prose and into coding. In consideration of this, we will have to backtrack a bit.

Beyond Concept (How might it actually work?)

It is perhaps best to begin with the distributed and decentralized (hyper-participatory) angle and work from there, as almost every feature unique to this project is aimed at streamlining that aspect. So first, we will need to cover two technologies and how they must work together to make this possible: git and IPFS.

rapiDex, distributed

First of all, the only way this is going to work as desired is by implementing git through IPFS by way of IPLD. This allows every git object in a repository to be linked to an IPFS hash, and perhaps an IPNS hash as well. Permissions, however, are perhaps the most troublesome aspect of this. While pubsub could certainly be used to some degree in reconciling changes to a repository, this could become infeasibly complicated unless everyone is working on their own branch or fork (presumably under different IPNS hashes under the sole ownership of each contributor).
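To make the git-over-IPLD idea concrete, the sketch below reads a git commit that has been imported into IPFS as an IPLD node (the git-raw codec). It assumes a local IPFS node with git IPLD support enabled; the CID passed in is a placeholder, not a real object:

```ts
import { create } from "ipfs-http-client";
import { CID } from "multiformats/cid";

const ipfs = create({ url: "http://127.0.0.1:5001" });

async function showCommit(cidStr: string): Promise<void> {
  const cid = CID.parse(cidStr);
  // dag.get resolves the object through IPLD, so the commit's tree and
  // parents come back as linked CIDs that can be traversed in turn.
  const { value } = await ipfs.dag.get(cid);
  console.log(value); // commit fields: tree, parents, author, message...
}
```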

Now it is important to mention here that forking is not an explicit feature of git itself, but rather a feature of GitHub and other git hosting services. However, forking is an implicit feature of git and could be harnessed more intimately by way of IPVC (as outlined here). By having each user work on their own branch in their own fork, pull requests could be used as the sole means of contributing to a master branch that could, perhaps, only be synced when all contributors are online (Voltron syncing?). Whenever only a portion of contributors is online, a branch representing their combined efforts could be created, along with a pull request for any contributor not present. This would perhaps require some creative and automated usage of CODEOWNERS.
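A purely conceptual sketch of that partial-sync scheme follows: merge the branches of whoever is online into a combined branch, then record a pending pull request for each absent contributor. Every type and function here is hypothetical; nothing in it is an existing git API:

```ts
interface Contributor { id: string; branch: string; online: boolean }
interface PendingPR { forBranch: string; fromBranch: string }

function partialSync(all: Contributor[]): PendingPR[] {
  const online = all.filter((c) => c.online);
  const absent = all.filter((c) => !c.online);
  // A branch representing the combined efforts of everyone present.
  const combined = `sync/${online.map((c) => c.id).join("+")}`;
  // (Actually merging the online branches into `combined` would happen here.)
  return absent.map((c) => ({ forBranch: c.branch, fromBranch: combined }));
}
```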

Something like that might be workable, but some portion of information would have to be made more permanently available, and perhaps the best way of accomplishing that would be through a blockchain. A branch's hash, and perhaps also any associated pull request, could be written to a block (for a small fee) so that any contributor not online at the moment of those changes could still at least see that the changes did occur, regardless of the actual static objects' availability. The alternative would be to use Swarm, Filecoin, or Keybase, or simply to leverage more conventional git hosting services to maintain a more permanent version of a repository.
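Anchoring a branch head on-chain might look like the sketch below, so that offline contributors can later verify that a change happened. The registry contract and its `recordBranch` function are hypothetical; only the ethers.js plumbing is real:

```ts
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("http://127.0.0.1:8545");
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

const registry = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder address
  ["function recordBranch(string repo, string branch, bytes32 head)"],
  wallet
);

async function anchorBranch(repo: string, branch: string, headSha: string) {
  // A git SHA-1 is 20 bytes; pad it into a bytes32 for storage.
  const head = ethers.zeroPadValue(`0x${headSha}`, 32);
  const tx = await registry.getFunction("recordBranch")(repo, branch, head);
  await tx.wait(); // the small fee mentioned above is the gas cost
}
```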

It should be said, though, that permanent storage based on incentivization or good faith resembles another sort of centralization, and that it is only as permanent as the availability of funds (either the users' own or the services'). As such, it would be more ideal to remain more truly decentralized. The usage of something like Whisper to handle pull requests might be feasible, but the surest way to make our version control work is to make it completely resistant to merge conflicts by its very framework while encouraging near-100% uptime from every contributor. A clever use of forking, branches, and pull requests will likely be essential to making it durable, but to encourage uptime, we will require something a bit more tangible.

Plug-and-play

Deploying rapiDex as a device first might seem a bit strange considering that it is just an application, but the case can be made that most any application which makes one a node on a network should be deployed in this manner (much like the Ethereum Computer). Such a device should be accessible as a standalone computer, but there should be no expectation that anyone will plug anything into it beyond a power source and an ethernet cable. It would be readily accessible by SSH and would prompt the user to set up a global identity of some kind (GPG, uPort, Keybase, etc.). From there, they would likely have the ability either to start a new project (effectively initializing a new repo) or to import an existing one (either cloning a repo or otherwise importing it over SFTP, curl, wget, or any number of cloud services). Finally, they would begin interacting with the meat of the CLI, in which they can access their projects or begin editing within them.
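A minimal sketch of that first-run prompt flow, assuming a Node-based CLI; the identity options and the init/import actions are placeholders for whatever the device would actually run:

```ts
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

async function firstRun(): Promise<void> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const identity = await rl.question(
    "Set up a global identity (gpg / uport / keybase): "
  );
  const action = await rl.question("(n)ew project or (i)mport existing? ");
  if (action === "n") {
    console.log(`init: new repo owned by ${identity}`); // placeholder for repo init
  } else {
    const source = await rl.question("clone URL or import path: ");
    console.log(`import: ${source}`); // placeholder for clone/SFTP/curl import
  }
  rl.close();
}

firstRun();
```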

Ideally, this would not require any explicit interaction with SSH and could instead all be accomplished locally on the command line, in a web interface, or in a desktop application which would handle SSH automatically or with only minimal user input. It could have its own sandboxed manner of handling keys, perhaps with some tie-in to identity-handling services. And perhaps every instance of this application in use would more or less replicate the main node (in the manner of an IPFS cluster, for instance) so that everything is backed up in a highly redundant fashion.

As a publishing platform

This should function as a smart-contract-enabled publishing platform, offering complete control over pricing strategies as well as connectivity with social platforms for marketing. The ability to tokenize a book, even retroactively, would be ideal and would allow for even more versatile marketing strategies.
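Tokenizing a work, retroactively or otherwise, could reduce to a call like the one sketched here. The `mintEdition` method belongs to a hypothetical token contract; only the ethers.js mechanics are real:

```ts
import { ethers } from "ethers";

// Mint an edition of a published work to a buyer, keyed to the book's
// content address. Contract and method are illustrative assumptions.
async function tokenizeBook(
  bookToken: ethers.Contract, // assumed to expose mintEdition(...)
  buyer: string,
  contentCid: string // e.g. the IPFS CID of the book's content
) {
  const tx = await bookToken.getFunction("mintEdition")(buyer, contentCid);
  return tx.wait(); // resolves to the mined transaction receipt
}
```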

As an archival platform

Bots could be used to scrape sites like Wikipedia for the purposes of replication. The bots could act as users in the platform's social layer and could be paid as they contribute content. For each new contribution scraped and validated, a portion of what the bots collect would be set aside and reserved solely for the original Wikipedia contributor, who could claim it by verifying and linking their account to the wallet in some manner. Additionally, that content could be upvoted by validators for the quality of the contribution, while poor contributions could go unrewarded and even unvalidated (though archived, perhaps like a branch, so that if future edits build upon such an edit, they might be merged once finally validated).
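A scraper bot along those lines might pull the latest revision of an article together with its contributing user, so that a reward could later be reserved for them. The MediaWiki query parameters below are real; the reward bookkeeping is a placeholder:

```ts
async function scrapeArticle(title: string) {
  const params = new URLSearchParams({
    action: "query",
    prop: "revisions",
    titles: title,
    rvprop: "content|user",
    rvslots: "main",
    format: "json",
    formatversion: "2",
  });
  const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  const data = await res.json();
  const rev = data.query.pages[0].revisions[0];
  return {
    text: rev.slots.main.content as string,
    // The account whose reward share would be set aside on the platform.
    originalContributor: rev.user as string,
  };
}
```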

This would partly serve as a means by which rapiDex, as a newer platform, might compete with MediaWiki. It would also serve as a vandalism-resistant and incentivized fork of Wikipedia which could ultimately stand alone as a competitor, while new edits and articles could still be merged from the bot-served fork as desired using some sort of voting/reputation scheme. The work of editors on Wikipedia would continue to earn them rewards and reputation on the new fork, which would hopefully incentivize their direct participation in it.

WIP: To consult or contribute to those tasks yet uncompleted in this document, please refer to the comments visible when editing the wiki. Please consult the contribution guidelines for further elaboration.