My Fractured Writing Process

Yesterday I mentioned using Scrivener to write, and I mentioned that I have sort of a graduated process for writing that isn’t very efficient. I wanted to actually write down how I write, just to sort of collect what I do; maybe the documentation process will help me see it for what it is (“a mess”) and fix it some.

Here’s the thing: it’s very haphazard and very much cross-platform. All of it. I write on mobile devices (my tablet, mostly, but my phone as well), I write on Linux, I write on paper, I write on Windows (rarely, but still!), and usually on one of two Macs. One’s a desktop machine (upon which I’m typing right now) and one’s a MacBook.

The tools I have available at any time factor very heavily into how I write and what I write and where, often with negative effects.

Draft Simply

I rely really heavily on simple mechanisms: pen and paper, cloud storage, simple keyboards. I’ll start with a text editor more often than not – either WordPress (as I’m doing right now) or Day One, because they’re shared – if I’m on a walk and I dictate something into Day One, I know I can open up Day One on my Mac and see whatever it is I happened to say. Likewise, on WordPress, I can write it in one place and see it somewhere else.

Cloud storage has done more to free me than anything else.

I also tend to write in plain text; Day One has simple markdown-style formatting, as does WordPress; I don’t get wrapped up in style very often, although in WordPress I can – I keep wanting footnotes in WordPress just like I do in Word (which is why I rarely draft in Word).

So: the first step in my writing process is to draft simply.

Going Further

The next step in my writing process is to decide if there are… more steps to follow.

Honestly, a lot of the stuff I write is just me capturing my thoughts so that my kids can see inside my head should they ever want to; I’m really writing to them.

So a lot of editing would actually work counter to what I’m trying to accomplish; I don’t want my kids to see a sanitized version of me, I want them to see how I think and why I think what I think, to hear my voice and my motivations.

When I write directly, with little editing, you’re getting what I actually would “sound like” as long as you cut out all the stuttering and pauses and moments where I … you know, lose what it is I’m trying to say as I’m trying to work it out.

That – the loss of what I say as I’m trying to say it – happens a lot. I get distracted. A lot of my writing gets discarded because of that; I’ll look at it, and see where I hopped off the tracks, and think to myself, “This is not worth knowing or reading; it communicates my confusion, not my soul,” and … into the bin it goes.

Day One is fantastic for this; I have a lot of rather confusing journal entries. They’re embarrassing, really, but for the right reasons; they’re not embarrassing because I’m betraying some deep, dark secrets, but because they’re rather silly even in their own context.

Anyway – did you see how I got off track, right there? – after I’ve drafted something in a simple medium, the question is: what next?

A lot of times, it’s pretty simple: hit publish! I said it, go to … well, not “print,” because a lot of it’s online, but go “live.” That’s fine, that’s the whole purpose of a lot of things I write, like this piece itself.

I’m writing “raw,” and publishing “raw” is the whole point.

Going Formal

If the answer to the “next step” isn’t “expose it to everyone, flaws and all,” then it’s time to get serious. Here, I’ll crank up a real tool – it’ll be either Scrivener or a mind mapper of some kind.

A mind mapper – Freeplane, XMind, or MindNode, for example – is where I’ll take the draft’s points if I think they’re solid but disorganized. A draft taken to a mind mapper gets burned down to the ground; it’s a destructive, but entirely useful, process.

It’s where I look at the structure of what I have written, and extract the useful bits. I’ll use them to rebuild a structure from the ground up.

In my opinion, my best works – not my most artistic works, but my best – come out of this process.

The next destination – mind-mapped or not – is likely to be Scrivener, where I’ll either transcribe the mind map into text to be moved about, or I’ll just copy the draft and then edit it there, with notes. This is a fairly formal drafting process – this stage, or the mind map, is the first point where I actually try to apply process to writing.

Scrivener allows me to make notes about what I’m writing (much as mind mapping allows me to make connections between concepts). Scrivener also allows me to focus on the drafting process without getting tied down by the editing process, which is a big deal (and, again, why I avoid Word for writing, usually).

Final Production

The next step is to compile the work from Scrivener into a Word document.

I’ll then read… and read… and reread… and read again until I’m sick of it, applying edits and notes back in Scrivener and republishing.

Once I’m happy with it – or once I’m so sick of it I can’t read it any more – I’ll do a final compilation with Scrivener and send off to a publisher… or copy it from there back to WordPress or wherever its final destination will be. This part’s usually pretty light.

There’s More Than One Way

Of course, if it’s not clear already, there’s more than one way to do it.

That Day One -> Scrivener -> Final Destination process is probably what I do most often, but it’s not the only way.

I also draft with Asciidoctor, and do the same render/edit cycle there that I do with Scrivener (including the mind mapping stage).

I wrote a book this way, for example, and there were a lot of really good aspects to this… and some really unfortunate aspects to it. The problem Asciidoctor has is exporting to Word format, which is the lingua franca of publishing; the Word conversion is… problematic.

(There’s more to it than that, too, but this is not the right forum for that.)

Anyway, how about you? What do you do?

Scrivener 3.2 Compile Issue

I’m a big fan of Literature and Latte’s Scrivener product. If I’m writing “for real,” it’s typically in Scrivener, although I think my process there could still use a lot of work. (I use a graduated system for writing, which … now that I think about it, isn’t very efficient for organization or promotional purposes.)

Anyway: Scrivener! If you write, it’s a fantastic product. Highly recommended.

However, they recently put out 3.2, and I ran into a problem with it.

The process in Scrivener is to write a draft (surprise!) and then compile that draft into a final product, which can be in any of a number of formats: Word document, PDF, Mobi, Epub, and so forth and so on.

What was happening is that I could not get it to run that compilation step at all. I’d select the menu option, and the program would… do nothing. It was as if I hadn’t even hit the menu item.

I reported it to Literature and Latte, and they figured out a workaround: it’s related to a setting for fonts in the compile.xml file.

There are a number of ways you can approach this: my project’s still in early draft mode, so I simply opened the directory that my project was in, went to Settings, and deleted compile.xml. Once that was done, the menu item worked again and I could generate a draft document from my project.

You can also open up compile.xml in a handy text editor, and delete the lines that have <Font> in them. (The error is related to a font lookup, somewhere internally.) I haven’t tried this, because, well, the project I’m working on is in early draft so I don’t need anything special here.
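I haven’t verified compile.xml’s exact schema, so treat this as a sketch of the line-stripping approach described above: a small Python helper (the function name and the assumption that each Font element sits on its own line are mine) that removes any line mentioning <Font>. Back up the original file before running anything like this against a real project.

```python
# Sketch: strip lines containing "<Font" from a Scrivener compile.xml.
# Assumes each Font element occupies its own line; back up the file first.
from pathlib import Path

def strip_font_lines(path):
    p = Path(path)
    lines = p.read_text().splitlines(keepends=True)
    kept = [line for line in lines if "<Font" not in line]
    p.write_text("".join(kept))
```

The same edit is just as easy by hand in a text editor, which is all the original workaround requires.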

Lastly, you can wait for Literature and Latte to release a new build of Scrivener 3.2. I’m on build 14343, and I expect they’ll suss out this problem quickly and there’ll be a fix out soon.

Programming is also Teaching

Programming can be thought of as something that takes place as part of interacting with a culture – a culture with two very different audiences. One “audience” is the CPU, and the other audience is made of other programmers – and those other programmers are typically the ones who get ignored, or at least mistreated.

(Note: This article was originally written in 2009 or so, and published on a site where it’s no longer easily accessible. So I’m posting it here.)

Programming has two goals.

One goal is to do something, of course: calculate an amortization table, present a list of updated feeds, snipe someone on eBay, or perhaps smash a human player’s army. This goal is focused at a computing environment.

The other goal is – or should be – to transfer knowledge between programmers. This has a lot of benefits: it increases the number of people who understand a given piece of code, it frees a developer to do new things (since he’s no longer the only person who can maintain a given program or process), and it often provides better performance – since showing Deanna your code gives her a chance to point out where your code can improve. Of course, this can be a two-edged sword, because Deanna may have biases that affect the code she writes (and therefore, what you might learn.)

The CPU as an Audience

The CPU is an unforgiving target, but its very nature as a fixed entity (in most cases!) means it has well-known characteristics that can be compensated for and, in some cases, exploited.

The language in use has its own inflexible rules; an example can be seen in C, where the normal “starting point” for a program is a function with the signature “int main(int, char **)”. Of course, depending on your environment, you can circumvent that by writing “int _main(int, char **)”, and for some other environments, you’re not expected to write main() at all; you’re expected to write an event handler that the library-supplied main() calls when appropriate.
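Other languages encode the same kind of inflexible entry-point convention in their own way; as a contrast to C’s main(), here is a minimal sketch of Python’s customary guard (the function name is mine, chosen for illustration):

```python
# Python requires no main() at all; by convention, a script guards its
# entry point so the code runs only when executed directly, not imported.
def main():
    return "started"

if __name__ == "__main__":
    main()
```

The rule differs from C’s, but it is just as much a rule: break the convention and the program still has a fully predictable, determinable behavior.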

The point is simple, though: there are rules, and while exceptions exist, one can easily construct a valid decision tree determining exactly what will happen given a program’s object code. Any errors can be resolved by modifying the decision tree to fit what is actually happening (i.e., by correcting errors.)

This is crucially important; flight code, for example, can be and has to be validated by proving out the results of every statement. If the CPU were not strictly deterministic, this would be impossible and we’d all be hoping that our pilot really was paying close attention every moment of every flight.

High-level languages like C (and virtually every other language not called “assembler”) were designed to abstract the actual CPU from the programmer while preserving deterministic properties, with the abstractions growing in scale over time.

Virtual machines bend the rules somewhat, by offering just-in-time compilation; Sun’s VM, for example, examines the most common execution path in a given class and optimizes the resulting machine code to follow that path. If the common path changes later in the run, then it can (and will) recompile the bytecode to run in the most efficient way possible.

Adaptive just-in-time compilation means that what you write isn’t necessarily what executes. While it’s possible to predict exactly what happens in a VM during execution, the number of state transitions is enormous and not normally something your average bear would be willing to undertake.

Adaptive JIT also affects what kind of code you can write to yield efficient runtimes. More on this later; it’s pretty important.

The Programmer as an Audience

The other members of your coding team are the other readers of your code. Rather than reading object code like a CPU does, they read source, and it’s crucially important how you write that source – because you have to write it not only in such a way that the compiler can generate good object code, but also in such a way that humans (including you!) can read it.

To understand how people understand code, we need to understand how people understand.

How People Learn

People tend to learn slowly. This doesn’t mean that they’re stupid; it only means that they’re human.

A paper written in the 1950s called “The Magical Number Seven, Plus or Minus Two” described how people learn. What follows is a rough summary; I recommend reading the original paper if you’re interested.

Basically, people learn by integrating chunks of information. A chunk is a unit of information, which can be thought of as mirroring how a neuron works in the human brain.

People can generally integrate seven chunks at a time, plus or minus two depending on various circumstances.

Learning takes place when one takes the chunks one already understands and adds a new chunk of information such that the resulting set of information is cohesive. Thus, the “CPU as an Audience” heading above starts with simple, commonly-understood pieces of information (“CPUs are predictable,” “C programs start at this function”) and refines them to add exceptions and alternatives. For some readers, the paragraphs on C’s starting points carry a chunk count of roughly four – easily integrated by most programmers due to the low count.

If the reader doesn’t know what C is, or doesn’t know what C function declarations mean or look like, those become new chunks to integrate, which may prove a barrier to learning.

Adoption of C++

Another example of chunking in action can be seen in the adoption of C++. Because of its similarity to C – the language in use by most programmers at the time – it was easily adopted. As C++ has grown in features, adding namespaces, templates, and other changes, adoption has slowed: not only is there more of C++ to understand than there was, but it’s now different enough from the “normal” language C that it requires integrating far more new chunks than it once did.

The result is that idiomatic C++ – where idioms are “the normal and correct way to express things” – is no longer familiar to C programmers. That’s not a bad thing – unless your goal is having your friendly neighborhood C programmer look at your C++ code.

It’s just harder, because there’s more to it and because it’s more different than it used to be.

Here’s the thing: people don’t really want to learn

This is where things get hard: we have to realize that, on average, people really don’t want to learn all that much. We, as programmers, tend to enjoy learning some things, but in general people don’t want to learn that stop signs are no longer red, but are now flashing white; we want the way things were because they’re familiar. We want control over what we learn.

Our experiences become a chunk to integrate, and since learning is integration of chunks into a cohesive unit, new information can clash with our old information – which is often uncomfortable. Of course, experience can help integrate new information – so the fifth time you see a flashing white stop sign (instead of the octagonal red sign so many are familiar with), you will be more used to it and start seeing it as a stop sign and not something that’s just plain weird.

That said, it’s important to recognize that the larger the difference between what people need to know and what they already know, the harder it will be for them to integrate the new knowledge. If you use closures in front of someone who’s not familiar with anonymous blocks of executable code, you have to be ready for them to mutter that they prefer anonymous implementations of interfaces; named methods are good. It’s what they know. They’re familiar with the syntax. It’s safe.

This is why “Hello, World” is so important for programmers. It allows coders to focus on fairly limited things; most programmers quickly understand the edit/compile/run cycle (which often has a “debug” phase or, lately, a “test” phase, thank the Maven) and “Hello, World” lets them focus on only how a language implements common tasks.

Think about it: you know what “Hello, World” does. It outputs “Hello, World.” Simple, straightforward, to the point. Therefore, you look for the text in the program, and everything else is programming language structure; it gives you an entry point to look for, a mechanism to output text, and some measure of encapsulated routines (assuming the language has such things, and let’s be real: any language you’re working with now has something like them.)

This also gives programmers a means by which to judge how much work they actually have to do to do something really simple. The Windows/C version of “Hello, World,” as recommended by early programming manuals, was gigantic – in simple console-oriented C, it’s four lines or so, and with the Windows API, it turns into nearly seventy. This gives programmers an idea (for better or for worse) of what kind of effort simple tasks will require – even if, as in the case of Windows, a simple message actually has a lot of work to do. (In all fairness, any GUI “Hello World” has this problem.)
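At the other extreme from the Windows API sits a language with almost no ceremony; in Python, the entire “structure” chunk of “Hello, World” collapses to a single call, leaving a newcomer nearly nothing to integrate besides the output mechanism itself:

```python
# The whole program: one output call, no visible entry-point ceremony.
# The only "chunk" left to learn is that print() writes a line of text.
print("Hello, World")
```

The contrast is the point: the same task can present a chunk count of one, or of dozens, depending on the platform.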

So how do you use how people learn to your advantage?

There’s a fairly small set of things to keep in mind when you’re trying to help people learn things quickly.

First, start from a known position – meaning what’s known to the other person. This may involve learning what the other person knows and the terminology they use; they may be using a common pattern, but call it something completely different than you do. Therefore, you need to establish a common ground, using terminology they know or by introducing terminology that will be useful in conversation.

Second, introduce changes slowly. This may be as simple as introducing the use of interfaces instead of concrete classes before diving into full-blown dependency injection and aspect-oriented programming, for example. Small changes give your audience a chance to “catch up” – to integrate what you’re showing them in small, easy-to-digest chunks.

Third, demonstrate idioms. If your language or approach has idiomatic processes that don’t leap off the page (i.e., what most closure-aware people consider idiomatic isn’t idiom at all to people who aren’t familiar with closures), you need to make sure your audience has a chance to see what the “experts” do – because chances are they won’t reach the idioms on their own, no matter how simple they seem to you.

Related to the previous point, try to stay as close to the audience as you can. Idioms are great, but if the person to whom you’re talking has no way to relate to what you’re showing them, there’s no way you’re going to give them anything to retain. Consider the Schwartzian Transform: it decorates each element of a list with its computed sort key, sorts the decorated pairs by that key, then strips the keys away, leaving the elements in sorted order. The key is computed once per element by a function applied in place, which could be a closure.

If your audience doesn’t understand maps well, or the audience is conditioned to think in a certain way (ASM, FORTH, COBOL, maybe even Java?) the Schwartzian Transform can look like black magic; Java programmers have a better chance, but even there it can look very odd because of the mechanism used to generate the sort keys. In Java, it’s not idiomatic, so you’d approach the Schwartzian Transform in small steps to minimize the difficulty integrating for the audience.
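In Python the whole transform collapses into sorted(key=...), but spelling it out step by step keeps each stage visible – which is exactly the “small steps” approach suggested above. This sketch uses string length as an illustrative sort key; the data and key are mine, not from any particular codebase:

```python
# Decorate-sort-undecorate (the Schwartzian Transform), one stage at a time.
words = ["banana", "kiwi", "apple"]

decorated = [(len(w), w) for w in words]   # 1. pair each element with its sort key
decorated.sort(key=lambda pair: pair[0])   # 2. sort by the precomputed key
result = [w for (_, w) in decorated]       # 3. strip the keys back off
```

An audience that already knows list comprehensions can integrate this in three small chunks; an audience that doesn’t will need those chunks introduced first.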


Programming is not just telling your CPU what to do, but it’s also teaching your fellow coders how you work, and learning from them how they work. It’s a collaborative effort that can yield excellent programs and efficient programmers, but it can also produce confusing user interfaces and discouraged coders.

The difference is in whether the coding team is willing to take the time to code not just for the CPU or for the team, but for both the CPU and the team.

The CPU is usually easier to write for because it’s less judgemental and more predictable, but a CPU will never become great. It’s got a set of capabilities, fixed in place at manufacturing. It will always act the same way.

A team, though… a team of mediocre, inexperienced coders who work together and write for the benefit of the team has the capability to become a great team, and they can take that learning approach to create other great teams.

It all comes down to whether the team sees its work as simply writing code… or writing with the goal of both code and learning.

Big Sur and Homebrew

I converted to a mostly-Apple home office over the past couple of years. I still have different platforms hanging out – I have a Windows laptop for games (which is currently gathering a lot of dust), a few Linux embedded devices lurking about… but my primary tools are all running OSX.

I typically keep them all up to date. That’s got advantages and disadvantages; it puts me at Apple’s mercy for security (and there are legitimate security concerns with the Apple technology stack) but it also means everything mostly magically works.

Big Sur came out last week, and I updated my main work machine – an iMac – through the automatic process. My media creation machine – a MacBook Pro – is still on Catalina, because I record music and a lot of my music software isn’t ready for Big Sur yet. I’m going to wait until I have more confidence that I’m not locking myself out of my tools before upgrading that machine.

However, Homebrew stopped working on Big Sur, with a complaint that the command line tools weren’t working. This is related to Xcode.

The solution, though, was simple: I validated my version of Xcode (to make sure it was Xcode 12.2, the current version), and then went to and installed the Command Line Tools for Xcode 12.2.

After that, brew upgrade worked as it always did. Everything’s back on track.

Books that Shaped You

What books helped shape your political and moral opinions?

A lot has gone into my reading list. Here’s a list of the things I think were most important, with a focus on fiction:

  • Starship Troopers. Often derided as fascist, this book… isn’t fascist. It’s not a complicated book, but it does contain a lot of essays about political theory and the application of force: a lot of its message is “You don’t own it if you’re not willing to defend it.”
  • The Fountainhead. Ayn Rand was not a … good writer, but the Fountainhead’s focus on personal creativity and adherence to individual vision was, and is, inspiring. There’s a lot to find distasteful here – her view of personal relationships was… um… not profitable to anyone who didn’t enjoy the concept of Fifty Shades of Grey, but she avoids bonking her readers over the head quite so much with morality plays in The Fountainhead, unlike some of her other books.
  • Dune. Dune is a fantastic book for communicating ideas about perspective and control. When the Imperium itself is 10,000 years old, the value of an individual life… it ends up looking like what it is: a drop of water in a vast river. It’s still valuable, but it can’t scream that it’s the point of the river, nor is it in control.
  • Foundation. In addition to being a rollicking set of adventures, the perspective shifts about what’s important and what things drive economies and political engines are wonderful. And then Asimov breaks the model with an outsized predator just to show the system in action.
  • To Kill a Mockingbird. Anyone who can read this without being affected is a robot. Accepted groupthink along tribal lines died for me for once and for all when reading this book… even accepted groupthink that agrees with the premise, that racism is wrong and evil. It is wrong and evil… but it’s not a set of definitions that can be applied without reason. I may agree with groupthink, but it’s because I agree, not because it’s groupthink.
  • Lucifer’s Hammer. An apocalyptic book about a comet’s calves hitting the earth, it’s a lot like Starship Troopers in that it focuses heavily on the issues one would care about once comfort and civilization have been stripped away.
  • A Wizard of Earthsea. Illustrated the idea that a hero didn’t have to act like, or look like, a traditional hero. Wizards who didn’t focus on blasting spells at enemies? Wizards who were not white? Even gender issues were addressed. Fantastic book, fantastic series, fantastic author.
  • The Wheel of Time. As a prospective author of fiction, this series gives me hope: if people are willing to pay for crap like this, then maybe I can some day retire by pumping out similar dreck. An author whose best material falls under the quality level of Robert Jordan’s offerings really should never be willing to write such that others can buy it. Books not linked because I’m a kind person and I don’t want someone to accidentally read this and blame me.

This is hardly a list of “good material” – I mean, I’m leaving off the Jubal van Zandt series, Lord of the Rings, Dragonlance, The Mote in God’s Eye, Night, Neuromancer… really more books than I can even think of at the moment. But these are the books that I can think of right now that shaped my political and personal philosophies the most.

What about you?