Of bombs and porn and the ‘net

In the summer of 1996 I presented a series on CBC Radio’s Island Morning program, produced by Ann Thurlow, called Consumed by Technology. I’ve managed to recover the audio of the episodes, along with the “show notes” and transcripts, from The Internet Archive and I’m posting each episode here for posterity.

This seventh and final episode of Consumed by Technology focused on students’ access to the Internet in public schools; it aired on August 20, 1996. Wayne Collins was the host.

With the growth of the Internet as an educational tool, the question of how to control what information students have access to has become a controversial issue. There have been several stories in the news recently about students gaining access to pornography, bomb-making instructions and other “questionable” materials. The reaction from educators has ranged from the introduction of electronic monitoring to the insistence that students and their parents sign waivers before students are let loose on the ‘net.

Show Notes

Transcript

INTRO: With the growth of the Internet as an educational tool, the question of how to control what information students have access to has become a controversial issue. There have been several stories in the news recently about students gaining access to pornography, bomb-making instructions and other “questionable” materials. The reaction from educators has ranged from the introduction of electronic monitoring to the insistence that students and their parents sign waivers before students are let loose on the ‘net.

To talk about this issue, Peter Rukavina joins me now for another in the series “Consumed by Technology.”

QUESTION: What is it about the Internet that makes controlling access to certain types of information so difficult?

ANSWER: We’ve talked a lot in this series about digital information — information that’s not that different in substance from any other sort of information, but for the fact that it is very easy to move from place to place using computers.

It’s very easy and very cheap to take any sort of information — pictures, maps, magazines, TV programs, whatever — convert it into digital information and then use computers to make as many copies as you like and to send these copies wherever in the world you like.

Compared with the old ways of moving information around — using printing presses and trucks or radio studios and transmitter towers — spreading digital information from place to place is almost effortless.

It’s precisely because moving digital information is so easy and so cheap that we’ve seen the explosive growth of the Internet over the past several years: the Internet provides, quite literally, an “information highway” which can provide people around the world with access to vast amounts of digital information.

One of the other interesting things about digital information is that it’s invisible when it’s moving around. In its most primitive state, digital information is just a bunch of 1’s and 0’s — electrical impulses really — moving up and down wires at the speed of light.

If you were able to somehow magically zoom in on a little piece of the Internet, what you would see would simply be a stream of digital information bits zipping along. These information bits, once assembled at the receiving end by a computer, could just as well be the complete works of William Shakespeare as they could be an episode of Compass or information about crop rotation. While they’re out there on the Internet, though, they’re just bits of a vast digital information soup.
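A tiny Python sketch makes the point: once encoded, any two texts, whatever they mean, reduce to the same kind of bit stream. The messages and the encoding here are arbitrary choices for illustration:

```python
# A minimal sketch: once encoded, any text is just a stream of bits.
# On the wire, the two messages below are indistinguishable "soup."
def to_bits(text: str) -> str:
    """Encode text as UTF-8 and render each byte as eight 1's and 0's."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("To be, or not to be"))  # Shakespeare...
print(to_bits("Rotate your crops"))    # ...or crop rotation: bits either way
```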

When you hook up to the Internet, you immediately gain access to all of the public information that’s available on Internet computers around the world: it’s like you’re connecting a big pipe to the entire information soup.

And when you do this, you’re not only gaining access to things like the complete works of William Shakespeare, and information on when to plant begonias, you’re also gaining access to the latest issue of Playboy magazine, to discussion groups about bomb making, to a myriad of radical political views… whatever information is out there on the Internet — and that probably includes any sort of information you can imagine and much you cannot — whatever information is out there is accessible to anyone who’s connected to the Internet.

QUESTION: And this is true whether we’re talking about my computer at home or the computers at my child’s school?

ANSWER: When you’re plugged into the Internet, you’re plugged into the Internet. It’s the same information soup no matter where and who you are.

And that’s precisely why we’ve seen some controversy in the past year as schools have become connected to the Internet and students have started to be able to browse around and see what’s available.

As you might imagine, they’re not always browsing around the lofty educational stuff.

QUESTION: Here on the Island and elsewhere, there’s been talk of installing special computer programs which will filter out objectionable material… does this sort of thing actually work?

ANSWER: It works… sort of.

There are three problems which arise when you try to filter digital information: first, the problem of deciding what to filter; second, the problem of how to filter it; and finally, the problem of the students simply finding ways to work around the filters.

On the surface, the problem of deciding what to filter out appears pretty simple. If you were to put five or ten average parents in a room and ask them to come up with a list of what they consider “objectionable” information, information that they don’t want their kids coming across, you’d probably have a pretty easy time of it… at least to begin with.

I don’t think there are many parents around who would want their kids having access to violent pictures, pornography, or information about how to make bombs, and you’d probably get pretty quick agreement on those.

But then what about pictures of the aftermath of Hiroshima: that’s a pretty gruesome — a pretty violent — sight, but it’s also a very powerful tool in teaching about war and peace… so maybe no violent pictures, except pictures of Hiroshima.

And then there’s information about sex. Pornography is out, but what’s pornography? Two people kissing? “How to” information about sex? What about information about birth control? Some people think that information about birth control is pornography.

What about this “bomb-making information”? If you screen out everything with the word “bomb” in it, you’re going to leave out most of the history of World War II and a lot of the items in the news recently.

The issue of figuring out what’s “good information” and what’s “bad information” isn’t cut and dried; information isn’t black and white, and while it might be possible to get people to agree in a very general way about what’s “good” and what’s “bad,” going any further is like trying to get agreement on anything controversial… next to impossible.

This sort of problem is no different from the classic problem of trying to figure out whether “The Catcher in the Rye” should be in school libraries or not, except that the challenge is not about one book which you can pick up and read and argue about; it’s about trying to sort through an entire world of information, sometimes before it even exists, and coming up with very specific rules for what’s “in” and what’s “out”… for what’s “good” and what’s “bad” information.

QUESTION: Assuming we could all, somehow, come to agreement over what information should be filtered out, how does the actual filtering process work?

ANSWER: Well, remember that digital information is, essentially, invisible: when it’s moving around, it’s just a generic soup of bits and bytes. The filtering programs that schools are looking at work by intercepting information in this raw state as it enters the computer.

A sort of “information robot” sits and watches for patterns in the incoming information. If it detects one of the patterns, it can take actions that range from shutting the computer down immediately, to denying access to that particular page or Internet site.

The patterns this robot is looking for are a pre-defined set of keywords that are associated with the kind of information that students are denied access to. In most cases, this set of keywords is something that can be added to or changed to suit the particular needs of the school or the age group in question.

So a student walks up to a computer, clicks on “Open” and types in an Internet address like “www.playboy.com” and, because the word “playboy” is one of the keywords in the “watch list,” a warning pops up on the screen telling them that they’ve tried to access a banned Internet site.
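Here’s a minimal sketch, in Python, of what that kind of keyword check amounts to; the watch list and addresses are illustrative only, not drawn from any real filtering product:

```python
# A minimal sketch of keyword-based URL blocking, in the spirit of the
# filters described above. The watch list is illustrative only.
WATCH_LIST = {"playboy", "bomb"}

def is_blocked(url: str) -> bool:
    """Return True if the address contains any watch-list keyword."""
    url = url.lower()
    return any(keyword in url for keyword in WATCH_LIST)

print(is_blocked("http://www.playboy.com/"))   # True: banned keyword in address
print(is_blocked("http://www.begonias.com/"))  # False: allowed through
```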

The problem here is that it’s next to impossible to come up with a set of keywords that will both screen out anything objectionable and let good, useful information through. There are simply too many possible combinations of words and phrases and content to cover off everything, and inevitably, in the process of trying to screen out “bad” information, you end up throwing the baby out with the bathwater and screening out some “good” information too.

Perhaps the best example of this sort of thing happened last year when America Online, a large U.S. computer network, added the word “breast” to a list of banned words for their electronic discussion groups. They view themselves as a “family” network and, in their well-intentioned way, were just trying to “clean things up.” Unfortunately, in the process, they ended up censoring an electronic discussion group for breast cancer survivors.
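A hypothetical few lines of Python show how that happens: give a naive substring filter the watch word “breast” and it censors the support group’s messages right along with everything else:

```python
# A hypothetical filter with "breast" on its watch list, showing how
# substring matching throws the baby out with the bathwater.
WATCH_WORDS = {"breast"}

def message_allowed(text: str) -> bool:
    """Naive check: reject any message containing a watch word."""
    lowered = text.lower()
    return not any(word in lowered for word in WATCH_WORDS)

print(message_allowed("Begonias do best in partial shade"))    # True: passes
print(message_allowed("Support for breast cancer survivors"))  # False: censored
```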

Just like trying to define what information is “good” and what information is “bad,” coming up with a comprehensive set of watch words that will cover off all situations without making the Internet all but useless just isn’t possible.

Another problem that crops up when you try to filter digital information is that it’s next to impossible to filter out pictures based on their contents. You can filter out pictures based on what computer filename they’ve got — “porn.gif” or “dirtypicture.bmp” — but that’s about it. It’s almost impossible for a computer, at least with the technology we have today, to figure out what a picture is actually of. And so, again, even if you could figure out what sort of pictures you wanted to screen out, there’s no effective way of actually doing it.
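Here’s a short sketch, with an invented suspect-name list, of what filename-based screening amounts to; note that the picture’s actual contents never enter into it:

```python
# A sketch of filename-only screening: the name is inspected, the pixels
# are not. The suspect-name list is invented for illustration.
SUSPECT_NAMES = {"porn", "dirtypicture"}

def filename_flagged(filename: str) -> bool:
    """Flag an image purely by its filename; contents go unexamined."""
    stem = filename.lower().rsplit(".", 1)[0]
    return any(word in stem for word in SUSPECT_NAMES)

print(filename_flagged("porn.gif"))        # True: caught by name alone
print(filename_flagged("holiday001.jpg"))  # False, whatever the picture shows
```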

QUESTION: You mentioned the problem of students finding ways to work around the filters… is that really a problem?

ANSWER: The natural inclination of any teenager when prevented from doing something — and I speak from considerable personal experience here — is to immediately find a way around whatever roadblocks have been placed in their way. Consider the fact that it is illegal for teens to buy beer and cigarettes and yet, somehow, many teens are able to get beer and cigarettes whenever they like.

The situation with Internet filters is no different: there’s always a way to work around the system and the very fact that the system is there at all is extra incentive to work harder at getting around it.

To test this out, I downloaded a program from the Internet called “Net Nanny” and installed it on my computer. It’s a pretty standard Internet content filtering program.

I set the program up on my computer, gave it a list of watch words and then took it out for a spin. Sure enough, whenever I did something “bad” by trying to go to an Internet site that contained any of the words I’d set up as watch words, I was prevented from doing so.

When I set out to work around Net Nanny, it took me about 45 seconds to erase all traces of the program from my computer and to again get unlimited access to the Internet. If I wanted to cover my tracks, I could have just set the Net Nanny program aside for a while and then, when I was done browsing, I could have put it back and no one would have been the wiser.

It didn’t take any great knowledge of computers for me to do this, and it would be well within the capabilities of any high school student with a bit of computer savvy to do exactly the same thing.

The reason this is so easy to do isn’t really that the filtering programs themselves aren’t bulletproof enough; it’s that computers are, by their very nature, malleable, flexible things that are designed to be easily modified. These filter programs are like deadbolt locks installed in a door with a balsa wood frame: they do what they’re supposed to do until you decide to just bust through the door and ignore them.

So again, to answer your original question — “do these filter programs actually work?” — my answer is that even if we can somehow agree on what we’re going to censor and figure out a list of words that will do the task — and both of those tasks are next to impossible — we’re still left with an imperfect solution that can be easily worked around.

QUESTION: If we can’t use computers to filter out information, what other options do we have if we want to continue to give students access to the Internet, but only to certain parts?

ANSWER: To be honest, there really aren’t any technology solutions to this problem and I doubt that there ever will be.

The problem we’re really facing here is that we’ve relied for generations on our ability to simply physically prevent our children from coming across information we don’t want them seeing. Dirty magazines are on the top shelf in the local cigar store, public libraries simply don’t buy really controversial books, we don’t let our kids watch TV programs we don’t like, and so on. It’s been relatively easy to hold back the tide of “bad” information by placing that information out of reach.

And now we can’t do that anymore.

Short of unplugging the Internet and calling it a day, there isn’t going to be a technology solution that’s going to keep our kids from coming across all sorts of information we don’t want them seeing. Kids are going to see violence. They are going to see graphic sex. They are going to see things that, probably, you and I have never seen.

The solution to this problem isn’t going to be a technical one, it’s going to be an educational one. And it’s not going to be a universal, complete, blanket solution, it’s going to be a fuzzy, inexact, incomplete, evolving solution.

Because we’ve dealt with objectionable material for so long simply by physically preventing access to it, we’ve gotten lazy… we’ve not had to think that hard about why some material is objectionable to us and other material isn’t. And we haven’t, by and large, had to talk to our kids about this, at least not in a pressing, practical way.

We’re going to have to start.

If we admit to ourselves that our kids are inevitably going to come across materials which will shock them, make them afraid, turn them on, confuse them, and confound them, then rather than trying to pretend that some technological magic bullet is going to come along and screen out the bad bits for us, we can get on with the job of giving our kids the skills they need to deal with all of this information.

I don’t think we even know what those skills are. But we’re going to have to figure them out.

There are, unfortunately, no easy answers to this problem, and I imagine we’ve only seen the beginning of the controversy.

We’re facing yet another one of those situations where technology, in solving problems for us — in giving us more access to more information — also introduces a host of new challenges we never imagined and never prepared for.

EXTRO: Peter Rukavina operates Digital Island in Kingston, PEI… he’ll be back next week with another in the series “Consumed by Technology.”
