Smart Grids, part 8: Submitting live data

The previous article in this series added a way to see our light sensor data via a web page, with the article before that giving us a way to request live data.

It’s time to add a service to submit live data.

A minor interlude, reminding us of what we’re doing

Before we go too far down this particular path, let’s restate what we’re doing and where we are in the application development lifecycle.

The Requirements

We’re designing a light level tracking application. Presumably, an external device will submit a light level with a geolocation to a broker; that broker will make this data available to a consumer, whose purpose is not yet determined as part of the requirements.

Our project’s current lifecycle

What we’re actually doing at the moment, with all the services and HTML and JavaScript (“Oh, my!”) is writing a broker. Now, if that sounds all well and good to you, that’s fine… but we shouldn’t be writing a broker. We should be using a broker.

And we will be.

Our current project stage is actually a prototype; we’re testing the design, and trying to write it in such a way that eventually we can snap in better technology to provide the high availability that was part of our original goal.

Remember, the article series is titled “Smart Grids,” not “A Light Sensor Application.”

Along the way we’re discussing a lot of useful processes and thought lines that hopefully shed some light (via a light sensor application, woo!) on development practices.

Most of the code we’re writing now is useful but not likely to be permanent. This is very important. If we forget this, we’re likely to add features we’re going to have to rewrite.

With that said, some of the code will be “production code:” we’ll point that out when we get to it. (And we haven’t gotten to much of it yet; only DataPoint.java is likely to be in our final application, and I’m not making any promises for that, either.)

Submitting Data

It’s not very useful to be able to request data without the ability to submit data. We can use many of the same concepts we used in the presentation layer, actually, just inverted.

So let’s get to it.

First, we need to make sure our data object is serializable. This means making sure it has a default (no-argument) constructor, and mutators for the serialized fields. (Pretty standard for objects; however, some people use different conventions. It’s likely that mutability has already been baked in for most objects of this sort.)
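To make that concrete, here is a minimal sketch of what such a serializer-friendly object might look like. The field names are assumptions drawn from the JSON payload shown later in this article; the project's actual DataPoint.java may differ.

```java
// Sketch of a serializer-friendly bean: a no-argument constructor plus
// mutators for every serialized field. Field names are assumed from the
// JSON payload used later in this article.
class DataPoint {
    private String deviceId;
    private double longitude;
    private double latitude;
    private int level;
    private int maxLevel;
    private long timestamp;

    // Required by most serializers, which instantiate the object first
    // and then populate it through the mutators.
    public DataPoint() {
    }

    public String getDeviceId() { return deviceId; }
    public void setDeviceId(String deviceId) { this.deviceId = deviceId; }
    public double getLongitude() { return longitude; }
    public void setLongitude(double longitude) { this.longitude = longitude; }
    public double getLatitude() { return latitude; }
    public void setLatitude(double latitude) { this.latitude = latitude; }
    public int getLevel() { return level; }
    public void setLevel(int level) { this.level = level; }
    public int getMaxLevel() { return maxLevel; }
    public void setMaxLevel(int maxLevel) { this.maxLevel = maxLevel; }
    public long getTimestamp() { return timestamp; }
    public void setTimestamp(long timestamp) { this.timestamp = timestamp; }
}
```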

Second, we need to think about what data we’re submitting. We deliver data in JSON, so it makes sense to use JSON to submit data as well.

The data type will be a DataPoint… but we want to actually accept a collection of data points.

We want to accept a collection because it’s possible that a given device might be collecting data without a connection. We don’t want to lose its data, presumably; at the same time, if it has to send a time series, we don’t want it to send a request for every data point.

A collection with a single entry is barely larger than the single entry on its own, so accepting a collection costs us very little; collections win.

We have the option of using a Set or a List.

A Set is a unique collection of elements; thus, you can’t have two objects (O1 and O2) contained in a Set such that O1 is equal to O2.
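To make that uniqueness rule concrete, here is a small illustration using a hypothetical Reading class (not part of the article's codebase) whose equality is defined entirely by its timestamp:

```java
import java.util.HashSet;
import java.util.Set;

// Illustration of Set semantics: two objects that are equal() collapse
// into a single element. Reading is a stand-in type, not part of the
// article's codebase.
class Reading {
    final long timestamp;

    Reading(long timestamp) {
        this.timestamp = timestamp;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Reading && ((Reading) o).timestamp == timestamp;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(timestamp);
    }
}
```

Adding two Readings with the same timestamp to a Set leaves only one element in it; the duplicate is silently discarded.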

This fits what we need; we don’t actually want duplicate timestamps. However, if we specify a Set, the library creates a HashSet by default.

That’s not what we want. It works, but we actually want the data in some sort of timestamp order.

So: going back to our dictum that working code is better than elegant-but-nonworking code, we’re going to cheat and use a List instead, and sort the List into proper order.

What is “proper order?” Well, that’s a fine question. My first guess is that proper order is in timestamp-ascending order, oldest first.

That allows us to iterate over the List, storing each DataPoint as we go; if we had write-through enabled, we would get time-series data persisted to secondary storage, with the most recent data being readily available.
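As a quick sanity check on that ordering, here is a sketch using a simplified stand-in for DataPoint; on Java 8 or later, Comparator.comparingLong gives us the ascending, oldest-first sort directly:

```java
import java.util.Comparator;
import java.util.List;

// Sketch of "proper order": sort readings oldest-first by timestamp.
// Reading here is a simplified stand-in for the article's DataPoint.
class TimestampOrder {
    static class Reading {
        final long timestamp;

        Reading(long timestamp) {
            this.timestamp = timestamp;
        }
    }

    // Sorts in place, ascending by timestamp (oldest first).
    static void sortOldestFirst(List<Reading> readings) {
        readings.sort(Comparator.comparingLong(r -> r.timestamp));
    }
}
```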

So let’s code our service. Let’s see the addition to Provider, then we’ll walk through the interesting bits.

private static final Comparator<DataPoint> comparator =
    new Comparator<DataPoint>() {
        @Override
        public int compare(DataPoint dataPoint,
                           DataPoint dataPoint1) {
            // Ascending by timestamp: oldest first
            return Long.compare(dataPoint.getTimestamp(),
                                dataPoint1.getTimestamp());
        }
    };

@POST
@Consumes(MediaType.APPLICATION_JSON)
public void submitData(List<DataPoint> dp) {
    Collections.sort(dp, comparator);
    Cache<String, DataPoint> cache =
            container.getCache("sensorData");
    for (DataPoint dataPoint : dp) {
        cache.put(dataPoint.getDeviceId(), dataPoint);
    }
}

The first thing we do is implement a Comparator<DataPoint> for the sake of ordering our DataPoints. It’s private static final because it’s not going to change for the life of this class, and probably isn’t going to be necessary anywhere else.

The next method is our actual consumption service. We mark it as @POST; this means that if the external service uses “./provider/data” with an HTTP POST, this method will be called after parsing the input as specified by the method signature.

We mark it as consuming “application/json” because we, well, want to consume it as JSON.

We then sort the list, get a reference to the cache, and write each one into the cache; it’s very simple, very straightforward, as one would hope such a method would be.

Testing the Service

Testing the service is an interesting prospect. Arquillian promises to make the process very easy, and in concept that’s absolutely true, but there are aspects to this application that make leveraging Arquillian more difficult for me.

I really, really want to change this.

Ideally, I’d create an Arquillian deployment in a JUnit test, using an embedded JBoss container and a local instance of the datagrid; then I’d issue an HTTP request to that embedded container, which would allow me to check the data grid to make sure the data was written as expected.

However, while I know how to do much of this, I don’t know how to do it all.

So let’s cheat a little and use a browser to test our service.

I use Chrome’s Advanced REST Client; if you use Firefox, I suppose you could use RESTClient, but I haven’t any experience with that particular plugin.

With Advanced REST Client, you’re presented with a URL, a choice of HTTP method, and data (if appropriate for the method). Therefore, our data request might look like this, with http://localhost:8080/sensor-web/provider/data as the URL:

Note the response: this is our default data item.

Submitting data via POST – our new submission service – is very similar. We can use the same URL, but we need to change the method to POST, which gives us a method body field to fill in.

I used [{"deviceId":"000-000-0000","longitude":-78.57743,"latitude":35.773371,"level":255,"maxLevel":255,"timestamp":1347371000891}] as input, which gave me the following screen:

This is all well in bounds; we’re not returning any content, so that 204 (“No Content”) response code is fine.

Now we can go back to our data page (http://localhost:8080/sensor-web/data.html) and see if we submitted data properly:

Now we see two data points: one is our mud-yellow default data point, and the other is a bright yellow point just to the east of the first. That’s the point we just submitted!

Now we have the groundwork in place for us to write an actual live producer for our light sensor application.