Back in June I found myself standing on platform #5 of the train station in Malmö, Sweden waiting for the 8:02 a.m. train to Copenhagen with [[Catherine]], [[Luisa]] and [[Olle]] when an intriguing-looking man approached us on the platform.  He was intriguing mostly because he was carrying a substantial-looking metal briefcase held together with green gaffer’s tape. This man turned out to be known to Olle and Luisa: he was David Sjunnesson, an interaction designer, hacker, and Arduino propagator headed to the same [[reboot]] conference that we were.

Over the next few days I got to know David a little — we ended up sharing the train from Malmö more than once — and on the final afternoon of reboot I was in the room as David skillfully facilitated a Full Body Arduino workshop.

When I got home to Canada I started to follow the RSS feed of 1scale1, the Malmö-based “critical research studio” that David is a part of, and it’s through the 1scale1 blog that I came to learn, this afternoon, of a wonderful new book that comes out of 1scale1 called Open Softwear: Fashionable Prototyping and Wearable Computing using the Arduino.

The book is wonderful on several levels.  First, the subject matter hits at the heart of my current obsession with physical computing and DIY; second, it’s well-written and lovingly designed; and third, and perhaps most interestingly, it has been published under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 Generic license.

This means that, among other things, I was free to grab the PDF of the book and print up a copy for myself.  This turned out to be an interesting and fulfilling exercise in its own right.

Sending PDF files to local printers here in Charlottetown has always been inconvenient or expensive or both, so I was initially reluctant to go through the hassle.  Before giving up though, I decided to try Staples (Staples is a Canadian big-box office supply chain). While I’d never received particularly good face-to-face service from their print shop, I was interested in their purported ability to accept print orders online, and so I gave their system a try.

Somewhat to my surprise — we do, after all, live in a world where almost no company gets online service delivery right — the experience was almost completely positive: quick, easy, and inexpensive.

I simply created an account on the Staples Copy and Print website, uploaded the PDF of Open Softwear that I’d downloaded, and then selected my print options: double sided on white 30% recycled 20 lb. paper with a white 65 lb. cover, coil bound.  As I played with different options the price was updated in real time (I could have printed it for as little as $8.00 with no cover or binding), and when I was done I had this order:

[Image: Staples online print order]

The entire ordering exercise took about 10 minutes.  About 15 minutes after I placed the order I got a call from the local Staples telling me that they couldn’t coil-bind the order, but they could cerlox-bind it, and I gave them the go-ahead.

Two hours later I was at home, where I checked my email and found a message from Staples:

Your print job #513589 is complete and ready for pick up.

I hopped in the car and 10 minutes and $17.00 later the printed and bound book was in my hands. They did a nice job: the cover is thick and solid-feeling, the binding holds it all together well, and the inside, although only in black and white and not its native full-colour glory, is perfectly acceptable:

[Images: my copy of Open Softwear; a look inside Open Softwear]

This was all deeply satisfying to me.

While on one level this is a simple “big deal: you got a PDF file printed” story, there is a whole other layer that says a lot about the future of publishing:

  • The “marketing” for the book consists of a blog post and a simple website — I learned about the book via RSS. No agents, publishers, wholesalers, bookstores or other third-parties were involved.
  • While the book’s subject matter scratches all the right itches for me, the “wearable computing” market is admittedly quite small, and I might be the only person interested in the book in my neighbourhood: but that doesn’t matter anymore.
  • While I’d experienced the wonders of print-on-demand before, the “on demand” with something like Lulu.com involves a time lag of weeks for the printing to happen; this experience was much closer to the Utopian “desktop book-printing machine” we’ve long heard about.
  • The Creative Commons license that made this all possible is such a sensible way to spread ideas: sure, the authors have nothing but my gratitude (and this blog-post-of-thanks), but their material is in my hands now, and now I am part of the propagation machine.

So, to authors Tony Olsson, David Gaetano, Jonas Odhner, and Samson Wiklund: thank you for your ideas, and your openness in sharing them. To David Sjunnesson, thanks for starting me down the path.  And to everyone else — or at least those of you with a wearable computing fetish — go print yourself a copy of the book too!

Now, off to read my new book and start stitching together my location aware laser-guided Oxford-cloth button-down shirt.

Earlier this month I blogged about some work I’d done to grab and parse my credit union statements in an effort to learn more about my spending habits.

A few days later I received a helpful and supportive email from the Director of Product Management at Central 1 Credit Union, the company that develops and manages the MemberDirect service that powers my local credit union’s online banking system (this is sort of like blogging about a problem with your MacBook and having Jonathan Ive give you a call). 

Not only did he encourage my experimenting, but he also suggested that we speak on the phone so he could answer some of my technical questions and discuss some of the broader ideas of helping credit union members learn more about our spending.

We’ve just had that call and it was very helpful. Among the things I learned:

  • Central 1 Credit Union is a new company resulting from the merger of the provincial Credit Union centrals in British Columbia and Ontario.
  • In Atlantic Canada there’s a central organization called “League Data” that provides the technology used by my Credit Union itself – the actual “how much money is in Peter’s account” stuff; the MemberDirect service communicates with League Data’s systems using a message specification.
  • MemberDirect can only deliver to me data that League Data’s systems deliver to it.  So, for example, the MemberDirect system supports delivery of full “metadata” for both current and archival transactions, but it appears that League Data only provides MemberDirect with full metadata for the current month’s transactions.  This is thus a League Data issue, not a MemberDirect issue.
  • The message specification used for communication between MemberDirect and League Data doesn’t support any greater level of time granularity than a single day: in other words, transactions can only be date-stamped, not time-stamped (which limits my pie-in-the-sky “geolocation through transaction analysis” project somewhat…).

My ideas about providing spending information to members through MemberDirect were also well-received, and we’ve agreed to keep talking about this.

Just as the call was ending I mentioned that I’d enjoyed watching the evolution of MemberDirect over the years, and appreciated the fact that its technology decisions are, unlike the systems of banks, 100% member-driven.  As evidence of this I noted a feature that was introduced recently that saw a “Make this my default account to pay bills from” checkbox added to the bill payment screen. It’s a simple feature, but a very useful one that’s saved me a lot of frustration (from “you need to select an account” error messages).  As it turns out, the person who shepherded this feature into the system was also in on the call, so I got a chance to thank him personally for the help.

Taken together this is all an illustration of what’s so great about the Credit Union movement: Credit Unions are transparent, member-driven, and excel at the kind of personal customer service that is completely beyond the reach of banks.  It’s inconceivable to me that anyone involved in technology planning at a bank would contact me if I blogged about their systems; even if they wanted to, they would likely not be allowed to.  Within the Credit Union movement, apparently, talking with users is actually encouraged. Go figure.

I was a proud Metro Credit Union member today, and I’m looking forward to taking this project further.

Not Every Child Is Secretly a Genius, an essay in The Chronicle of Higher Education by Chris Ferguson, is an excellent bubble-bursting treatise on the “multiple intelligences” education theories of Howard Gardner that have been so much in vogue over the last 20 years as to have become accepted as truth within the educational establishment.

As much as I’m not sure that Ferguson is completely right, I’ve never been altogether sure Gardner’s theories represented more than a hope that our minds work differently than they do. And this is the crux of Ferguson’s argument about multiple intelligences:

It’s “cool,” to start with: The list-like format has great attraction for introductory psychology and education classes. It also seems to jibe well with the common observation that individuals have particular talents. More important, especially for education, it implicitly (although perhaps unintentionally on Gardner’s part) promises that each child has strengths as well as weaknesses. With eight separate intelligences, the odds seem good that every child will be intelligent in one of those realms. After all, it’s not called the theory of multiple stupidities.

It would be wonderful to live in a world where we were all equally capable of achieving greatness in something. Indeed I’d say that’s the bedrock of my educational philosophy to date, and a good part of the underpinning of how I approach the world.  But it’s good to be reminded that it’s a relatively recent model for intelligence, and one that might be based on a Utopian dream more than a practical reality.  As Ferguson writes:

That is the root of the matter. Too many people have chosen to believe in what they wish to be true rather than in what is true. In the main, the motive is a pure one: to see every child as having equal potential, or at the very least some potential. Intelligence is a fundamentally meritocratic construct. There are winners and there are losers. A relative doofus may live a comfortable life so long as his or her parents are wealthy. However, clawing one’s own way out of abject poverty is best achieved with a healthy dose of both motivation and “g.”

As much as it pains me, I’ve a feeling Ferguson might be right about all this, and I’m left with the question: what to do about the doofuses?

(Oh, and just to be clear: my child is secretly a genius)

As proof that there is an organization for everyone: Canadian Association of Ukrainians from Former Yugoslavia. Thanks to my father for the pointer.

In my old homebrew blogging system, I built in a function that let me enter references to articles in the Rukapedia by just surrounding an entry’s title in double square brackets: when the post was displayed to the public, the link would automagically get rewritten to become an HTML hyperlink.

I didn’t want to have to go back and edit these posts to manually insert HTML links for Drupal, so I solved the problem by leaving them as they were and writing a simple Drupal Input Format filter called wikilinks.  It turned out to be really easy to do this.  Here’s how.

First, I created a new directory under the sites/all/modules directory in my Drupal installation called wikilinks.  Inside that directory I created two files: wikilinks.info and wikilinks.module.

In wikilinks.info I put the following information describing the module:

name = Wiki Links Filter
description = A filter to convert wiki links into HTML links to the wiki.
core = 6.x

In wikilinks.module I put the following PHP code:

<?php
// $Id$

/**
 * Implementation of hook_filter(): converts old-style wiki links.
 */
function wikilinks_filter($op, $delta = 0, $format = -1, $text = '', $cache_id = 0) {
  switch ($op) {
    case 'list':
      // The filter's name, as shown on the Input Formats admin pages.
      return array(0 => t('Wiki links'));

    case 'description':
      return t('Converts old-style wiki links to HTML hrefs to wiki.ruk.ca.');

    case 'prepare':
      // No preparation needed; pass the text through untouched.
      return $text;

    case 'process':
      // Rewrite [[term]] as a hyperlink to the corresponding wiki page.
      $text = preg_replace("/\[\[(.*?)\]\]/","<a href=\"http://wiki.ruk.ca/wiki/$1\">$1</a>",$text);
      // Strip leftover [indent] markers from the old system.
      $text = str_replace("[indent]","",$text);
      return $text;

    default:
      return $text;
  }
}

The heavy lifting is done in this line:

$text = preg_replace("/\[\[(.*?)\]\]/","<a href=\"http://wiki.ruk.ca/wiki/$1\">$1</a>",$text);

The text that gets passed to the filter is scanned, and every instance of a term surrounded by two opening square brackets and two closing square brackets gets replaced by a hyperlink to http://wiki.ruk.ca/wiki/ followed by the term that was surrounded.
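To see this in action, here’s a small standalone snippet (a hypothetical demonstration, not part of the module) that runs the same expression over a sample string:

<?php
// Hypothetical demonstration of the same replacement outside Drupal.
$sample = "We shared the train with [[Catherine]] and [[Olle]].";
echo preg_replace("/\[\[(.*?)\]\]/","<a href=\"http://wiki.ruk.ca/wiki/$1\">$1</a>",$sample);
// Prints:
// We shared the train with <a href="http://wiki.ruk.ca/wiki/Catherine">Catherine</a>
// and <a href="http://wiki.ruk.ca/wiki/Olle">Olle</a>.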

Once these two files were in place, I just visited Administer > Site Building > Modules, enabled the Wiki Links module, and then visited Administer > Site Configuration > Input Formats, clicked the Configure link for the “Full HTML” input format, and checked the box beside “Wiki Links.”

Once all this was in place, all the old-style encoded links to the Rukapedia worked fine, with no conversion needed.

[[Oliver]] and I drove up to Rollo Bay this morning to take in the last day of this year’s Rollo Bay Fiddle Festival. We arrived just in time for the “Tune Writers Circle” in the hall at noon, and were entertained by the fiddle, banjo, piano and mandolin stylings of players like J.J. Chaisson, Brent Chaisson, Elmer Deagle, Anastasia DesRoches, and Mike Hall.

At 2:00 p.m. the afternoon concert started, and we stayed around for Courtney Hogan, the Queens County Fiddlers, Tim Chaisson, and another round of Mike Hall, who was the real highlight of the event for me.  Born in Saint John and honed in Cape Breton, Hall is a sharp player who’s obviously going places. We picked up his CD, A Legacy Not to Be Forgotten, and will look for him again.

By 3:00 p.m. Oliver was flagging — an 8-year-old can only take so much fiddle — and so we took the long way home, up to Naufrage and along the north shore and over to Mount Stewart, where we stopped in at The Trailside for a piece of cake and a chocolate sundae and a chat with Doug Deacon.

All of this was part of the 2009 edition of “trying not to completely miss summer on Prince Edward Island,” an effort that, so far, we’re doing pretty well at.

After I succeeded at getting my taxonomy into Drupal, my next task was to get 10 years worth of blog posts from my old homebrew blogging system into Drupal too.  In this regard the Drupal module Node Import was of great help, but a little massaging on the old posts was required before I actually did the import, for which I wrote a PHP script.

My script was quite simple: it took the existing blog posts from a MySQL table, did the massaging, and output them as a vertical bar-delimited ASCII text file ready for Node Import.  The “massaging” amounted to the following (there’s a sketch of the script after the list):

  • I stripped the HTML tags from the post titles – just used strip_tags on this field.
  • I escaped the vertical bars inside posts themselves – just changed all instances of | to \|.
  • I created a “Taxonomy” field with a double-vertical bar-delimited list of taxonomy terms – for example Internet||Weblogs||Firefox.
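The script itself was just a few lines. Here’s a minimal sketch of the approach (the database credentials and the table and column names are placeholders, not my homebrew system’s actual schema, and it assumes the old topics were stored comma-separated):

<?php
// Sketch of the massaging script. The credentials, table and column
// names (old_posts, id, posted, title, body, topics) are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'blog');

$out = fopen('drupal-export.txt', 'w');
$result = $db->query("SELECT id, posted, title, body, topics FROM old_posts");

while ($row = $result->fetch_assoc()) {
  $title = strip_tags($row['title']);               // strip HTML tags from titles
  $body  = str_replace('|', '\|', $row['body']);    // escape vertical bars in posts
  $terms = str_replace(',', '||', $row['topics']);  // e.g. Internet||Weblogs||Firefox
  fputs($out, '"' . $row['id'] . '"|"' . $row['posted'] . '"|"' .
              $title . '"|"' . $body . '"|"' . $terms . '"' . "\n");
}

fclose($out);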

Once my script ran on my archive of posts, I had an ASCII file that looked something like this:

"5487"|"2009-05-01 11:11:34"|"Post Title"|
<p>This is the post text in HTML.</p>"|"Internet||Blogs||Firefox"

The first field is the unique record ID on my old system, which I wanted to preserve in Drupal to make referencing old posts easier. The final step, once I had the ASCII file exported, was to run it through iconv to convert the character encoding from ISO-8859-1, used in my homebrew system’s tables, to the UTF-8 used by Drupal:

iconv -f ISO-8859-1 -t UTF-8 drupal-export.txt > drupal-export.utf8.txt

Now I was ready for Node Import.  In Drupal I went to Administer > Content Management > Import Content, clicked “New Import,” and then walked through the wizardy steps to define the import.  Some notes on each of the steps:

  • Step One
    • I selected the “Story” content type, as this was the content type I decided to use for blog posts.
  • Step Two
    • Remember that you have to “Browse” for the file, then click “Upload” to actually upload it before you go on to the next step.
  • Step Three
    • Delimiter Separated Values
      • Record Separator: Newline
      • Field Separator: Pipe (|)
      • Text Delimiter: Double Quote (“)
      • Escape Character: Backslash (\)
    • If you make the above selections and then click on the “Reload Page” button, you can see a preview of your import in the “Sample data” section of the page, and can get a quick visual indication of whether you selected properly.
  • Step Four
    • I mapped each of my export file’s fields to the appropriate Drupal field.
    • I’d previously added a CCK field called “Previous Number” to hold the original blog post’s record number.
  • Step Six
    • This is the step where you define the field settings you want applied to every post where no value appears for a given field in your export file.
  • Step Seven
    • This is the most useful step of all: it gives you a preview of 5 imported items as they will appear once imported (and you can change “5” to a greater number using the “Number of records to preview” drop-down list at the top).  If posts don’t look right here, then they’re not going to look right when they import, so check the preview carefully for possible glitches.
    • If you find problems that require creating a new version of the ASCII file export, you can just click on the module’s “Back” button (not the browser’s) to go back to Step Two and upload another file; all the other choices you’ve made on the other steps are remembered.

It took me about an hour of back-and-forth, looking at the preview of the import in Drupal, making tweaks to my export, uploading another version of the file, previewing again, and so on, to get things working properly; this was mostly working around peculiarities in posts on my old system, and didn’t have much to do with Drupal itself.

Once I was ready to launch the import itself it went quite quickly, and was done in under 30 minutes.  The result: more than 5,000 old blog posts, mapped to a hierarchical taxonomy, in a new home in Drupal.

Our Canoe

About a decade ago, back when [[Catherine]] and I had no [[Oliver]], and thus had surplus time on our hands, we bought a canoe.  It is a nice canoe – nothing extravagant, but enough to knock around the waterways of Prince Edward Island in.  We did not canoe extensively during our footloose time – a few trips in the West River, a rather bizarre attempt, with my brother [[Steve]], to navigate the Morell River, and an unfortunate outing into the North River in the area surrounding a sewage outflow.  But we always had high hopes.

Those high hopes were diminished somewhat when Oliver arrived and our surplus time disappeared.  When Oliver was 3 years old we made some attempts at resurrecting our canoe lifestyle, but our initial personal flotation device trials resulted in a face-down-in-the-water Oliver, and so we were scared off.

This week, though, with summer speeding by and Oliver’s childhood advancing, I resolved to reapportion some of my summer time to getting the canoe up and running again.  

We started last night with a trip to Canadian Tire and the purchase of a wee paddle and a new PFD, followed by an hour-long session in the CARI Pool to ensure that the PFD actually worked (it did!).

Tonight I dragged the canoe out of the carriage house, hosed it off, and assessed the wear and tear of 8 years among the skunks, cats and raccoons (in general, not too bad: some animal attempted to eat the styrofoam seats, but not too much damage was done, and the craft remains seaworthy).

Next on my list tonight: find a way of affixing the canoe to the roof of our 2000 VW Jetta.  When we first bought the canoe, Sporting Intentions sold us a $20 kit of nylon cord and styrofoam blocks that snapped on the canoe’s gunnels; they served us well, but were eaten by the aforementioned animals in the interim so needed to be replaced.

Alas I’ve had no luck replacing them: there are plenty of kayak carriers in the stores of Charlottetown, but nobody but Sporting Intentions sells canoe carriers, and they’ve “gotten away from the styrofoam system” and all they have to offer is a $400 Thule system that, while it’s beautifully designed and snaps right into the Jetta, is overkill for what will likely be a weekend outing or two this summer (I should add that the Sporting Intentions salesperson was super-helpful, both in selling me on the virtues of the Thule, but also in pointing us to good spots to put in).

So as of this writing I’m still searching.  Canadian Tire is selling styrofoam “pool noodles” for $1.99 each and I’m pretty sure I could adapt them for the job, add $20 worth of tie-down straps and have something that would safely get the canoe into fresh water.

Time’s a-wasting and the snow will be here soon, so I’d better be quick about it.

Suggestions welcome.
