Messaging with MQTT and JavaScript

I do a lot with messaging architectures, and because I work on embedded systems so much lately, my main broker protocol has been MQTT, used with JavaScript. I learned something this morning that surprised me, even though, given some thought, it really shouldn’t have.

MQTT is a common protocol used in IoT. The name comes from “MQ Telemetry Transport” – it’s often expanded as “Message Queuing Telemetry Transport” – and the current specification for MQTT is 5.0. Common brokers include Eclipse Mosquitto, EMQX, and HiveMQ.

As its name implies, MQTT is designed to be super-light. It has some pretty nice features, including termination semantics, quality of service (guaranteed delivery), retention, and other facets, but the primary use is for fairly simple communication of small messages – the fixed header encodes a packet’s remaining length in one to four bytes, which puts the theoretical ceiling at around 256MB, and brokers are usually configured for far smaller messages than that.

The thing about MQTT that got me was the subscription model. It would be convenient to have a handler per subscription:

// this code does not do what it seems to expect.
client.subscribe('rpc/1', 
    (message) => { 
        console.log(`hey, we got a message on rpc/1: ${message.toString()}`);
    });
client.subscribe('rpc/2', 
    (message) => { 
        client.publish('rpc/1', `hello, ${message}`); 
    })

As the comment says – who reads comments, right? – this code does not work. It looks like it should do something specific when a message comes in on rpc/1, and something different when a message comes in on rpc/2 – but it doesn’t. It actually just establishes two subscriptions on the client connection; those “message handlers” will never be invoked with an incoming message. (Depending on the library, they’re either treated as subscription-acknowledgment callbacks – called once, with an error and a list of granted subscriptions rather than a message – or ignored entirely.)

MQTT clients in JavaScript, using the mqtt and async-mqtt modules, follow Node’s event-emitter pattern. Connections use on() to register event handlers for events as they arrive: a successful connection emits a connect event, an incoming message emits a message event, and so on.

If you wanted to subscribe to two topics, as expressed above, you’d have a simpler subscription process:

client.subscribe('rpc/1');
client.subscribe('rpc/2');

This tells the client connection (and the broker) to handle any messages matching the topic name, including any wildcards you might want. You can have as many subscriptions on a single client as you like, although I imagine the brokers have rational limits.
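
The wildcards are worth a quick illustration – a sketch with made-up topic names. A + matches exactly one topic level, and a # matches everything below that point in the topic tree:

client.subscribe('rpc/+');      // matches rpc/1 and rpc/2, but not rpc/1/status
client.subscribe('sensors/#');  // matches sensors/attic, sensors/attic/temp, and so on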

That many-at-once flexibility applies in the same way to message handlers. You add a message handler as a callback for the message event:

client.on('message', (topic, message) => {
        // assumes 'message' is a human-readable string!
        console.log(topic, message.toString());
    });

Here’s the thing: you can have multiple message handlers, too. And they’ll get every message that the client is subscribed to.

If we’re subscribed to rpc/1 and rpc/2, that same message handler gets any message posted to either of those topics. It won’t get any other messages – presuming we haven’t added any more subscriptions – but it will get every message for those subscribed topics.

What’s more, if we add another message handler – via client.on('message', ...) again – every message handler will get every message, without discrimination.

If a handler needs to act only on specific messages, it has to implement that filtering itself – on the topic, for example, or on the message content.

An alternative approach – and the one I think is more appropriate, within the limits of resource consumption – is to have multiple MQTT connections, each one with subscriptions that match a specific functionality.

In our first broken example, we have two topics, rpc/1 and rpc/2, where a message written to rpc/1 emits a sort of “hello world” message, and a message written to rpc/2 causes a message to be published to rpc/1.

If we’re preserving connections to the broker, our message handler would have to look something like this:

client.on('message', (topic, message) => {
    // a message on rpc/1: just say hello
    if (topic.endsWith('1')) {
        console.log('hello', message.toString());
    }
    // a message on rpc/2: forward the payload along to rpc/1
    if (topic.endsWith('2')) {
        client.publish('rpc/1', message.toString());
    }
});

In environments where sockets are less expensive – i.e., where we aren’t worried about counting how many sockets we use – we can be a lot clearer:

const MQTT = require('mqtt');

const helloClient = MQTT.connect('tcp://localhost:1883');
const sayHelloClient = MQTT.connect('tcp://localhost:1883');

helloClient.on('connect', () => {
    helloClient.subscribe('rpc/1');
});
helloClient.on('message', (topic, message) => {
    console.log('hello', message.toString());
});

sayHelloClient.on('connect', () => {
    sayHelloClient.subscribe('rpc/2');
});
sayHelloClient.on('message', (topic, message) => {
    sayHelloClient.publish('rpc/1', message.toString());
});

In most message queueing libraries, you would set up a handler for incoming messages on each subscription, but MQTT is designed to be lighter than that. This shows a clean way to handle topic propagation in MQTT.

Migrating WordPress to a New Server

I recently set up a new VPS, because there was a sale on an instance large enough to serve my needs (and then some) at a price point that I couldn’t ignore.

My old host was RamNode, and make no mistake, I’ve never ever had any qualms with RamNode – the customer service has always been above and beyond. If I was running a business on a VPS, I’d be quite comfortable using RamNode.

But I’m… not running a business. I host a few blogs, connect to IRC remotely, do some programming tests and things like that. I want a lot of reliability, to be sure, but my needs are really pretty light; I could probably get by with a small Linux box (maybe even a Raspberry Pi) on my home network, if I didn’t live in the boonies on a trunk that’s already saturated with my neighbors’ traffic.

So I’m migrating to SSDNodes. They were running a sale, like I said in the first paragraph, and how could I resist that?

That left actually migrating a few sites. The first site I migrated was AutumnCode, which is my actual primary domain. But that’s effectively a static site, and served mostly to confirm that I had nginx set up properly.

That left migration of the WordPress sites.

I’m a simple man; I actually set up each site to run on its own database, even though I could get by with a multisite configuration, so I needed to migrate each one separately (which served my purposes anyway; I could migrate the less important sites and validate that the migration worked, and work my way up to the “more important” sites, like the one you’re reading right now).

So first I backed up my nginx configuration and my Let’s Encrypt directory, as a whole. I then pushed all of those configurations to the new server.

I could have used the WordPress backup mechanism to move sites, I suppose, but I have some sites with a lot of media (images, etc.), plus that felt like a lot of work. What I ended up finding was a WordPress plugin, called Duplicator Pro. There’s a free version (Duplicator!) and it … works, but there are constraints for it.

Duplicator Pro was worth it, though. I got the cheapest version, because you can enable it and disable it per site.

It’s really easy to use. After installation and registration with a key, you create a “package” for a given site. A package consists of an archive (either a zip or a daf file) and an installer.php file.

You download those two files and store them. Then you create a database and a user for the site on the target system.

Next, you set up resolution so that you can refer to the new server by name – for me, that meant going to my desktop computer’s /etc/hosts file and adding an IP and a name. This means that the old server still works for anyone else on the Internet, but that I can work on the new server to install everything I need on it.
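
The hosts entry itself is a single line – a sketch with a placeholder address and domain, not my real ones:

203.0.113.10    example.com www.example.com    # new VPS; remove once DNS cuts over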

Now there’s some relatively easy work to do – copy the nginx configuration file for the site into place (normally /etc/nginx/sites-available) and create a symlink in /etc/nginx/sites-enabled, along with making sure the directories for the sites exist. Then copy the package files into the HTML directory for the site.

After the directories are set up, restart nginx.
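
In shell terms, that work looks something like this – a sketch with a made-up domain and stock Ubuntu paths, so adjust the names to your own layout:

# make the web root, enable the site, and drop the Duplicator package into place
$ sudo mkdir -p /var/www/example.com/html
$ sudo cp example.com.conf /etc/nginx/sites-available/
$ sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/
$ sudo cp installer.php example-site-archive.zip /var/www/example.com/html/
# sanity-check the configuration, then restart
$ sudo nginx -t && sudo systemctl restart nginx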

You now actually have a “working site” although there’s nothing there that should be publicly available; installer.php might be (if you named it that, and I didn’t) but nobody else knows about it yet because the DNS records are still pointing to your old server. The only way someone else would reasonably hijack the process at this point is if they, too, had a custom name resolution set up with your server name and IP.

At this point, it’s all downhill. Open the installer file in your browser – whatever you named it – fill in the database credentials, and sit back and wait; Duplicator Pro replicates all of the old data and the entire filesystem of the WordPress installation from which you created the package.

It then asks you to log in as an administrator, and gives you an easy one-click process to delete the package’s files (which leaves you with a handy clean installation of your original WordPress installation).

At this point, all you have left to do – after clicking around in the new instance just to make sure things look right, and they should – is change the actual DNS records to point to the new host IP.

Don’t forget to change the A record (the IPv4 address) and the AAAA record (the IPv6 address).

After that, it’s a waiting game; normally DNS lookups are cached for an hour, so even if someone’s using your site actively, after an hour they’ll be using your new WordPress installation. They may have to log back in (if they’re logged in, of course) but that’d be the extent of the migration from their perspective (unless, of course, they added content while you were doing the migration; that content would be lost unless you do another backup and import.)

Really, a marvelously painless process.

New Network Router!

I’ve been running TomatoUSB on my router (an Asus RT86) for years, but I’ve been noticing some flakiness lately, primarily in network devices just acting oddly on connection, poor connection throughput for a few specific devices, and a few other oddities.

It just so happened that a few of the devices that were struggling were, like, the ones I work on every day, so that was turning out to be a big deal. (Spoiler: it wasn’t the router. I’ll get to what the problem was in the next five paragraphs, I promise.) The router I was using had been holding its configuration quite successfully for a long time, and I’m oddly grateful for a piece of plastic and metal for that, but my thought was that it was time to move on.

After consulting with a few co-workers who were a bit more modern than I, I ended up going with an Eero 6 system, a dual-band mesh system. (They have tri-band, which is supposed to be awesome, but that’s way too rich for me… I thought I’d try dual-band and see how it went, and move up if I had no other choice.)

So far, it’s been great. I had a support problem when I first started it up; coverage was great, and it’s fast as all get out. My problem was that some of my devices were struggling with DHCP – which might have been the problem with my Asus router, actually – and were allocating addresses like mad. (They were failing the very last step of DHCP, and I don’t know why, as I have no custom networking devices on my network.)

But here’s the thing… I called Eero to try to work out the problem, and they spent an hour and a half on the phone with me, checking everything out; they not only worked out what the exact problem was (the DHCP handshake error) but actually set up the network so that my devices no longer had the issue.

We reconfigured the network physically a few times (remember, I’m both wireless and wired, depending on the exact machine), and we actually isolated the problem to a specific switch I had installed recently, a Linksys 8-port that was somehow messing up DHCP. Take the switch off of the network? Everything clears up.

So was the switch (to Eero, not the Linksys) worth it? Heck, yes. For one thing, my bandwidth over the whole house has gone way up; I can finally stream TV over the network, which is likely to be a death knell for satellite in my home. (The real question is: which service? I’m not going to all of them.) For another, the support level was incredible.

When I say they spent time with me working out the problem, I’m probably underselling them. We literally walked through the entire physical network configuration (the modem, the mesh router, three switches, and a dozen endpoints) multiple times until we isolated the faulty switch. We’d actually gotten things working before we isolated the switch, but they suggested – note, they suggested – keeping after the problem until we did a complete triage and repair.

So now I have a network that performs much better, at a price point close to what I spent on my old and faithful, yet underperforming, router; I not only have much better coverage over the house (thanks to the mesh configuration) but far, far better bandwidth, and the support is about as good as I could possibly have imagined.

The Eero is a little weird, sure; I’m used to having control over my router, and the Eero mesh tends to expose things to you that you need to control but not everything you might want to control. That’s probably a good thing, really; it is easy for people not used to networking to set things up incorrectly, and the Eero actually works out what will work best based on analyzing actual conditions.

But it gives you the things you need most: not only bandwidth and coverage, but IPv6, security, support, guest network provision (if you want it), along with other features.

Big thumbs up for Eero. Excellent product.

Programming is also Teaching

Programming can be thought of as something that takes place as part of interacting with a culture – a culture with two very different audiences. One “audience” is the CPU, and the other audience is made of other programmers – and those other programmers are typically the ones who get ignored, or at least mistreated.

(Note: This article was originally written in 2009 or so, and published on TheServerSide.com, where it’s no longer easily accessible. So I’m posting it here.)

Programming has two goals.

One goal is to do something, of course: calculate an amortization table, present a list of updated feeds, snipe someone on Ebay, or perhaps smash a human player’s army. This goal is focused at a computing environment.

The other goal is – or should be – to transfer knowledge between programmers. This has a lot of benefits: it increases the number of people who understand a given piece of code, it frees a developer to do new things (since he’s no longer the only person who can maintain a given program or process), and it often provides better performance – since showing Deanna your code gives her a chance to point out where your code can improve. Of course, this can be a two-edged sword, because Deanna may have biases that affect the code she writes (and therefore, what you might learn.)

The CPU as an Audience

The CPU is an unforgiving target, but its very nature as a fixed entity (in most cases!) means it has well-known characteristics that can be compensated for and, in some cases, exploited.

The language in use has its own inflexible rules; an example can be seen in C, where the normal “starting point” for a program is a function with the signature “int main(int, char **)“. Of course, depending on your environment, you can circumvent that by writing “int _main(int, char **),” and for some other environments, you’re not expected to write main() at all; you’re expected to write an event handler that the library-supplied main() calls when appropriate.

The point is simple, though: there are rules, and while exceptions exist, one can easily construct a valid decision tree determining exactly what will happen given a program’s object code. Any errors can be resolved by modifying the decision tree to fit what is actually happening (i.e., by correcting errors.)

This is crucially important; flight code, for example, can be and has to be validated by proving out the results of every statement. If the CPU was not strictly deterministic, this would be impossible and we’d all be hoping that our pilot really was paying close attention every moment of every flight.

High-level languages like C (and virtually every other language not called “assembler”) were designed to abstract the actual CPU from the programmer while preserving deterministic properties, with the abstractions growing in scale over time.

Virtual machines bend the rules somewhat, by offering just-in-time compilation; Sun’s VM, for example, examines the most common execution path in a given class and optimizes the resulting machine code to follow that path. If the common path changes later in the run, then it can (and will) recompile the bytecode to run in the most efficient way possible.

Adaptive just-in-time compilation means that what you write isn’t necessarily what executes. While it’s possible to predict exactly what happens in a VM during execution, the number of state transitions is enormous and not normally something your average bear would be willing to undertake.

Adaptive JIT also affects what kind of code you can write to yield efficient runtimes. More on this later; it’s pretty important.

The Programmer as an Audience

The other members of your coding team are the other readers of your code. Rather than reading object code like a CPU does, they read source, and it’s crucially important how you write that source – because you have to not only write it in such a way that the compiler can generate good object code, you have to write it in such a way that humans (including you!) can read it.

To understand how people understand code, we need to understand how people understand.

How People Learn

People tend to learn slowly. This doesn’t mean that they’re stupid; it only means that they’re human.

A paper from the 1950s, George Miller’s “The Magical Number Seven, Plus or Minus Two,” described how people learn. What follows is a poor summary, and I recommend that you read the original paper to learn more if you’re interested.

Basically, people learn by integrating chunks of information. A chunk is a unit of information, which can be thought of as mirroring how a neuron works in the human brain.

People can generally integrate seven chunks at a time, plus or minus two depending on various circumstances.

Learning takes place when one takes the chunks one already understands and adds a new chunk of information such that the resulting set of information is cohesive. Thus, the “CPU as an Audience” section above starts with simple, commonly-understood pieces of information (“CPUs are predictable,” “C programs start at this function”) and refines them by adding exceptions and alternatives. For most readers, the paragraphs on C’s starting points amount to roughly four chunks – a low enough count that most programmers integrate them easily.

If the reader doesn’t know what C is, or doesn’t know what C function declarations mean or look like, those become new chunks to integrate, which may prove a barrier to learning.

Adoption of C++

Another example of chunking in action can be seen in the adoption of C++. Because of its similarity to C – in use by most programmers at the time – it was easily adopted. As it has grown in features – adding namespaces, templates, and other changes – adoption has slowed, because not only is there more to understand in C++ than there was, but it’s different enough from the “normal” language C that it requires a good bit more integration of new chunks than it used to.

The result is that idiomatic C++ – where idioms are “the normal and correct way to express things” – is no longer familiar to C programmers. That’s not a bad thing – unless your goal is having your friendly neighborhood C programmer look at your C++ code.

It’s just harder, because there’s more to it and because it’s more different than it used to be.

Here’s the thing: people don’t really want to learn

This is where things get hard: we have to realize that, on average, people really don’t want to learn all that much. We, as programmers, tend to enjoy learning some things, but in general people don’t want to learn that stop signs are no longer red, but are now flashing white; we want the way things were because they’re familiar. We want control over what we learn.

Our experiences become a chunk to integrate, and since learning is integration of chunks into a cohesive unit, new information can clash with our old information – which is often uncomfortable. Of course, experience can help integrate new information – so the fifth time you see a flashing white stop sign (instead of the octagonal red sign so many are familiar with), you will be more used to it and start seeing it as a stop sign and not something that’s just plain weird.

That said, it’s important to recognize that the larger the difference between what people need to know and what they already know, the harder it will be for them to integrate the new knowledge. If you use closures in front of someone who’s not familiar with anonymous blocks of executable code, you have to be ready for them to mutter that they prefer anonymous implementations of interfaces; named methods are good. It’s what they know. They’re familiar with the syntax. It’s safe.

This is why “Hello, World” is so important for programmers. It allows coders to focus on fairly limited things; most programmers quickly understand the edit/compile/run cycle (which often has a “debug” phase or, lately, a “test” phase, thank the Maven) and “Hello, World” lets them focus on only how a language implements common tasks.

Think about it: you know what “Hello, World” does. It outputs “Hello, World.” Simple, straightforward, to the point. Therefore, you look for the text in the program, and everything else is programming language structure; it gives you an entry point to look for, a mechanism to output text, and some measure of encapsulated routines (assuming the language has such things, and let’s be real: any language you’re working with now has something like them.)

This also gives programmers a means by which to judge how much work they actually have to do to do something really simple. The Windows/C version of “Hello, World,” as recommended by early programming manuals, was gigantic – in simple console-oriented C, it’s four lines or so, and with the Windows API, it turns into nearly seventy. This gives programmers an idea (for better or for worse) what kind of effort that simple tasks will require – even if, as in the case of Windows, a simple message actually has a lot of work to do. (In all fairness, any GUI “Hello World” has this problem.)

So how do you use how people learn to your advantage?

There’s a fairly small set of things to keep in mind when you’re trying to help people learn things quickly.

First, start from a known position – meaning what’s known to the other person. This may involve learning what the other person knows and the terminology they use; they may be using a common pattern, but call it something completely different than you do. Therefore, you need to establish a common ground, using terminology they know or by introducing terminology that will be useful in conversation.

Second, introduce changes slowly. This may be as simple as introducing the use of interfaces instead of concrete classes before diving into full-blown dependency injection and aspect-oriented programming, for example. Small changes give your audience a chance to “catch up” – to integrate what you’re showing them in small, easy-to-digest chunks.

Third, demonstrate idioms. If your language or approach has idiomatic processes that don’t leap off the page (i.e., what most closure-aware people consider idiomatic isn’t idiom at all to people who aren’t familiar with closures), you need to make sure your audience has a chance to see what the “experts” do – because chances are they won’t reach the idioms on their own, no matter how simple they seem to you.

Related to the previous point, try to stay as close to the audience as you can. Idioms are great, but if the person to whom you’re talking has no way to relate to what you’re showing them, there’s no way you’re going to give them anything to retain. Consider the Schwartzian Transform: it decorates each element of a list with its computed sort key, sorts on those keys, then strips the keys away, leaving the elements in sorted order. It uses a function to generate the sortable key in place, which could be a closure.
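
Here’s a minimal sketch of the idea in JavaScript – my example, not part of the original essay – sorting words by length, with the length computed once per element instead of once per comparison:

// Schwartzian Transform: decorate, sort on the precomputed key, undecorate.
const words = ['banana', 'kiwi', 'apple', 'fig'];
const byLength = words
    .map(word => [word.length, word])   // decorate each element with its sort key
    .sort(([a], [b]) => a - b)          // compare the keys, not the elements
    .map(([, word]) => word);           // strip the keys back off
console.log(byLength); // [ 'fig', 'kiwi', 'apple', 'banana' ]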

If your audience doesn’t understand maps well, or is conditioned to think in a certain way (ASM, FORTH, COBOL, maybe even Java?), the Schwartzian Transform can look like black magic; Java programmers have a better chance, but even there it can look very odd because of the mechanism used to generate the sort keys. In Java, it’s not idiomatic, so you’d approach the Schwartzian Transform in small steps to minimize the integration effort for the audience.

Conclusion

Programming is not just telling your CPU what to do, but it’s also teaching your fellow coders how you work, and learning from them how they work. It’s a collaborative effort that can yield excellent programs and efficient programmers, but can also yield confusing user interfaces and discouraged coders.

The difference is in whether the coding team is willing to take the time to code not just for the CPU or for the team, but for both the CPU and the team.

The CPU is usually easier to write for because it’s less judgemental and more predictable, but a CPU will never become great. It’s got a set of capabilities, fixed in place at manufacturing. It will always act the same way.

A team, though… a team of mediocre, inexperienced coders who work together and write for the benefit of the team has the capability to become a great team, and they can take that learning approach to create other great teams.

It all comes down to whether the team sees its work as simply writing code… or writing with the goal of both code and learning.

Big Sur and Homebrew

I converted to a mostly-Apple home office over the past couple of years. I still have different platforms hanging out – I have a Windows laptop for games (which is currently gathering a lot of dust), a few Linux embedded devices lurking about… but my primary tools are all running OSX.

I typically keep them all up to date. That’s got advantages and disadvantages; it puts me at Apple’s mercy for security (and there are legitimate security concerns with the Apple technology stack) but it also means everything mostly magically works.

Big Sur came out last week, and I updated my main work machine – an iMac – through the automatic process. My media creation machine – a MacBook Pro – is still on Catalina, because I record music and a lot of my music software isn’t ready for Big Sur yet. I’m going to wait until I have more confidence that I’m not locking myself out of my tools before upgrading that machine.

However, Homebrew stopped working on Big Sur, with a complaint that the command line tools weren’t working. This is related to Xcode.

The solution, though, was simple: I validated my version of Xcode (to make sure it was Xcode 12.2, the current version), and then went to https://developer.apple.com/download/more/ and installed the Command Line Tools for Xcode 12.2.
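
If you prefer to check from a terminal, something like this works – a sketch, and your version strings will differ:

$ xcodebuild -version    # confirm which Xcode release is installed
$ xcode-select -p        # confirm where the active developer directory points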

After that, brew upgrade worked as it always did. Everything’s back on track.

I migrated an account to GSuite and it was trivial.

I recently migrated my wife’s email to GSuite after an unconscionable delay – delayed mostly because the documentation for GSuite left some questions that I wasn’t sure I could easily answer – and the delay was entirely unjustified. It was easy.

The History

I had my own email server back in the day. There were a few vanity addresses, I guess, but nothing especially magical; I had a server running Linux, I set up sendmail and a POP3 server, and off I went. I was old-school; I even used pine for my mail app.

Those were simpler times. I’ve never been a mail hound, really, but back then it seemed like most of the email sent was real and intentional personal interaction; sure, there were mailing lists, there was the occasional promotional email, but spam was usually the result of someone trying to troll you out of humor rather than being what we’d consider spam today.

It got worse; I remember the first time hearing about SpamAssassin and how it worked. I was in one of the early waves of installing it – I can’t imagine I was in the first waves, but it would have been not long after its release.

Then I got the online email account bug – HotMail first (which I still use as a backup and recovery email service, oddly enough), and then GMail. GMail took over everything; I haven’t run my own email server for well over a decade now.

My Wife’s Email

But my wife’s email never quite worked the way it should with her own GMail account. Her domain names kept changing as she improved her mission statement, and as the names proliferated, her DNS records became… byzantine. This name redirected to that one, the MX records were hosted in multiple ways…

It all “worked,” but not especially well. A lot of email simply never got to her, and her outgoing emails were iffy, too.

What’s more, she relies on email – this was not a tolerable situation.

The Failed Migration to Postfix

I ended up setting up – for the first time in a long time – my own email server, for the primary purpose of serving as her email server. Postfix and dovecot to the rescue, with Thunderbird as her mail client. Set up MX records for IPv4 and IPv6 pointing to my server, open ports for Thunderbird, require account validation for some measure of security, all would be well!

It was not well.

The IPv6 endpoint, in particular, apparently triggered GMail’s filters; it wouldn’t accept email from IPv6 consistently. What’s more, delivery seemed better but even there it wasn’t consistent.

The server was a “success” in the purely technical sense – could it send mail for users in specific domains? Could it receive mail for those domains? – but it failed in the real sense: giving my wife a satisfactory email service.

It was time to throw in the towel and try GSuite, which had been recommended to me by two people who use it (and whom I trust).

Migrating to GSuite

I held off on the migration initially because:

  1. I’m Jewish! It’s a paid service!
  2. The documentation covering the migration suggested that it was easy, but didn’t say why or how.

It was supposed to be trivial, but I wasn’t sure how to set up the custom domains’ MX records (the DNS stuff that tells mailservers where to send mail), nor was I confident in what the impact would be on her free (unpaid) GMail account.

The information was out there, but it wasn’t where I was expecting to find it.

But with my wife needing to be able to rely on her email, I decided to dive in; I’d bug GSuite’s support until it all worked, right? If I was going to give them money, they’d earn it.

It turns out I needn’t have bothered. The steps were pretty simple:

  1. Set up her GSuite admin account
  2. Verify that she owned her domain (probably the most “complex” part of this, in that I had to add a TXT record to her DNS, and I used the opportunity to add a proper CNAME and A record instead of a redirect at the DNS server level as well)
  3. Create an email account for her (which was actually part of the admin setup, so it looked like something I needed to do but wasn’t)
  4. Change the domain registrar’s MX records for GSuite (which GSuite actually walked me through step by step based on who the registrar was; see the sketch after this list)
  5. Set up email forwarding from her old GMail account to the new GSuite account (which they walked through, even though I didn’t need that)
  6. Set up data migration on the GSuite admin page
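
For context, the classic GSuite MX record set looked roughly like this in zone-file terms – a sketch with a placeholder domain; most registrars present this as a form, and the setup wizard gives you the exact values to use:

; mail for the domain is handled by Google's servers, in priority order
@    IN    MX    1     ASPMX.L.GOOGLE.COM.
@    IN    MX    5     ALT1.ASPMX.L.GOOGLE.COM.
@    IN    MX    5     ALT2.ASPMX.L.GOOGLE.COM.
@    IN    MX    10    ALT3.ASPMX.L.GOOGLE.COM.
@    IN    MX    10    ALT4.ASPMX.L.GOOGLE.COM.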

The data migration was actually the hardest thing in the whole process. Each step referred clearly to the relevant documentation; as I expected, the information was all there, but unless you were in the flow of the process, it wasn’t laid out clearly.

But if you were doing the migration, everything was very simple; even the data migration – the “most complex” part – was really simple to do.

And her email service, to the best that we can tell, works reliably now.

Installing GraalVM on OSX with SDKMan

Want to install GraalVM on OSX? It’s easy.

First, get SDKMan. Trust me. You want it. Almost as much as brew, if you’re doing anything with the JVM. You’ll want to install bash – via brew – because SDKMan uses bash and the OSX bash shell is badly outdated.

Once you have SDKMan installed and available in your shell, execute the following command:

$ sdk install java 19.3.0.2.r11-grl

If you didn’t make it the default JVM during installation, you can set it as the default later with this (or use sdk use java 19.3.0.2.r11-grl to select it for the current shell only):

$ sdk default java 19.3.0.2.r11-grl

You can check it with java -version:

$ java -version
openjdk version "11.0.5" 2019-10-15
OpenJDK Runtime Environment GraalVM CE 19.3.0.2 (build 11.0.5+10-jvmci-19.3-b06)
OpenJDK 64-Bit Server VM GraalVM CE 19.3.0.2 (build 11.0.5+10-jvmci-19.3-b06, mixed mode, sharing)

This installs the latest GraalVM release for Java 11, as of this writing. Enjoy!

New Ubuntu Version!

I upgraded Ubuntu to eoan, 19.10, this morning, and WordPress broke. I was getting 502 Bad Gateway errors, which is never comfortable.

What the 502 means is that nginx was not able to connect to the backend service, which is php7.3-fpm in this case.

I have nginx set up to forward PHP requests to php-fpm at 127.0.0.1:9000. This is relevant.
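
For reference, the relevant part of the nginx server block looks something like this – a sketch, not my exact configuration:

# hand PHP requests to the FastCGI backend on localhost:9000
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass 127.0.0.1:9000;
}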

The solution was really pretty simple: go to /etc/php/7.3/fpm/pool.d, and edit www.conf.

There will be a line reading listen = /run/php/php7.3-fpm.sock ; change it to listen = 127.0.0.1:9000 (a bare listen = 9000 works too, but binds to all addresses rather than just localhost), restart the service with sudo systemctl restart php7.3-fpm, and php-fpm is once again listening where nginx expects it. Everything works again.

I’d still love to find a way to move off of WordPress without having to go through a massive porting process (and then a mad search for functionality, which is the bigger problem) but for right now, WordPress suffices.

I’ve been thinking about a business idea

I’ve been thinking about creating a service offering for people writing about programming on the web: editorial services.

I’m not sure how it’d work yet, but here’s what I’m thinking as of right now:

What most authors need is someone to give their writing a once-over, a sanity check… someone who can say “I don’t know what you’re trying to say here,” or even “this isn’t clear enough to be effective.” Maybe the person reviewing it could even offer advice, like “you need to make your point earlier in the text, because most readers won’t get far enough along to benefit from what you’re saying.”

Sometimes writers need copy editing – fixes for grammar and spelling – and sometimes they need technical review – someone to actually validate that what they’re saying is even valid.

I was thinking of offering my services mostly for that first type of editorial service: someone who reads the content, and actually considers what kind of response the text creates.

That doesn’t mean I wouldn’t offer copy corrections (“You are, using too, many, commas, in, your, text”) or that I wouldn’t point out programmatic errors where I have knowledge and experience…. but the primary point would be to offer advice on flow and effective prose.

I’d have to be able to refuse some content: if someone says something factually incorrect or misleading and insists on it, well… I’m not willing to associate my name with something that lies to its audience. I’ve never been willing to do that before, and I’m not willing to do that now.

I don’t know yet how I’d negotiate with content authors, nor am I sure what pay scale would be involved.

What do you think? Would this be something you’d be interested in using as a service, and if so, what kind of price point would you like to see?

Newsblur; Fricassee; old friends – 14/Feb/2019

Things I’m thinking about:

RSS Feeds

I’ve started using Newsblur again. I shut off Facebook a while back for various reasons (nothing drastic, just… tired, mostly), so my news has been supplied by a fairly limited set of channels.

Newsblur fixes that. It’s not just Newsblur, of course; you can use any of a number of feed readers, but Newsblur is the one that works best for me.

I’m enjoying it so far.

With that said: if you know of any sites that are new, flashy, interesting, relevant for … well, news, visual arts, philosophy, creating music, Python or Java programming, let me know! I may already have them in my feed, but I might not, and I want to grow my list of sources if I can.

Fricassee!

I looked up what a fricassee was, because I used it as a sort of joke dish. However, my use was copied from, like, Bugs Bunny back in the 1970s; I had no idea what a fricassee actually was.

Now I do:

A dish of stewed or fried pieces of meat served in a thick white sauce.

We learn together! (Unless, of course, you already knew what a fricassee was.)

Old Friends

I have no intention of living in the past – the “good old days” were the “bad old days” too – but I miss those friends with whom I’ve lost communication.

Social networking helps in a few cases, but it’s also so…ephemeral that it doesn’t really establish the connections that made us friends in the first place.

C’est la vie – a phrase I use far too often, I think.