Thursday, December 13, 2007

eNemy At The Gates

If you want to see an entertainingly polarized debate among a lot of historians - or academics in general, but it seems historians in particular lately - you need do no more than invoke online sources. Particularly in the context of assignments, those two words tend to result in a couple of common reactions:
  1. "It's an online source. So what? They're just as good as conventional ones."
  2. "Online sources are intrinsically bad should never ever be used."
Of course there are others, but most of the time I've discussed the idea I've heard variants of those. I'll wax provocative for a moment and suggest that exhibiting either opinion likely shows a certain, perhaps conscious, lack of thought on the issue. Both views are highly problematic and not questioned often enough, which is where I'm going to try to come in tonight. I'm mainly going to focus on the latter claim with tonight's article - it is by far the more commonly-heard one. (Before doing so, I will cheerfully say that I consider automatic trust of Web sources to be at least as silly as automatic trust of AM talk radio, or perhaps the Weekly World News.)

There are usually a few, pretty predictable arguments presented when people argue for an automatic rejection or disdain of online sources. I'm going to address the most common ones I've run into, in order from most to least absurd. (I'm not going into arguments which go so far as to dismiss government or university sources for being online; that, I hope, is too self-evidently ridiculous to warrant refutation.)

Objection the First: "It's too difficult to track down references. You can't cite Web pages as specifically as you can books or other materials: there's no page numbers!" The crux of this argument is that online sources are not print sources - duh - and are therefore too difficult or unreliable to bother citing, because of inconsistent layout and the fact that it may not be immediately obvious where one may get all the information needed for a proper, full citation. A number of simple solutions exist here. If all the information isn't there, then that's fine; it's not your fault if the specific author or organization behind a Web site isn't explicit enough, for instance, provided most of the information (and the location itself) is there. As far as citing specific parts of a site goes, of course Web sites aren't going to have page numbers. They aren't books. I don't see a problem here. On the other hand, most sites out there - and all more static media like PDFs - are organized into small enough chunks that you can usually narrow down a cite to a moderately specific page. (They're also often equipped with anchors, which are great in properly-designed sites.) If one can't because the page is one large block of text, browsers come equipped with search functions for a reason, at least if one's simply concerned with confirming that the information's there.

There is a real problem with some aspects of online sources. "Deep Web" materials - material which is usually procedurally generated, only accessible in its specific form through cookies, searches or other forms of interaction, and so on - are considerably more difficult to get hold of. As if that wasn't bad enough, they're growing: the Deep Web is hundreds of times larger than the "surface" one right now. There will have to be mechanisms to deal with this in time; handling them on a case-by-case basis is a bare minimum, however. I'm not convinced of the desirability of rejecting an entire medium because of some slight inconvenience.


Objection the Second: "Just anyone can put up a Web page!" Oh ho! Yes, this is true - and so what? If you believe it's difficult to have a particularly absurd piece of work show up in book form - or, in the right circumstances, appear as a published article in an academic journal - then I have a bridge I'd like you to consider buying. This argument doesn't impress me at all, because its two underlying assumptions - that "just anyone" can put something online, and that "real" repositories can effectively prevent people from getting their crackpottery in among them - are both flatly untrue.

Another implication of this claim bothers me considerably more. It is the claim, sometimes explicit but usually not, that the identity of a person making an argument has some bearing on the quality of the argument itself - or, indeed, is more important than said argument. This is a contemptible idea, built around a set of logical fallacies that all but the most sophistic freshmen are usually aware of. If we are talking about a world of debate and scholarship - and even amateurs can engage in either! - then these arguments should rise or fall on their own merits. An historical stance should be effective regardless of its creator, provided it stands up to scrutiny - but using its creator's identity as the sole point of that scrutiny is not an appropriate way to handle such things. The identity of a person can influence an argument to a point - after all, consistently good (or bad) arguments can imply more of either in the future - but in the end the effectiveness of a stance should be determined by, well, its effectiveness, and not its creator.

With that in mind, I also think it's fantastic that it's easier for people to put information up for all the world (or at least a specific subset of it) to see. The amount of lousy history - and economics, and science, and art, and recipes - will go way up as a result, but there's room for the good stuff as well. We shouldn't ignore the latter because of the presence of the former, any more than we should shun good archaeologists because von Daniken ostensibly published in the field. We're dealing with a medium here which allows people to do end runs around the gatekeepers for various fields. So what if things get somewhat nuts and over-varied as a result? Personally, I want to embrace the chaos.

Objection the Third: "Online material isn't peer-reviewed and therefore shouldn't be used." While this is often used synonymously with #2, above, it is a distinct complaint, and the only one of these three which I don't see as entirely without merit. While the first two complaints are ones of mere style or elitism, this is an issue of quality control. While the lack of (obvious) peer review - detailed criticism and corroboration by a handful of experts in a specific field - is indeed a problem, it is one which provides some good opportunities for readers both lay and professional to hone some abilities.

A huge component of the discipline of history, on the research side of things, is the notion of critical examination of sources. Note that this is not the same as merely rejecting them! We are taught to look with a careful, hopefully not too jaundiced, eye at any source or argument with which we are presented, keeping an eye out for both weaknesses and strengths. The things to which historians have applied this have diversified dramatically in the last several generations, moving out of libraries and national archives and accepting - sometimes grudgingly, sometimes not - everything from oral traditions to modern science to (as in public history) popular opinions and beliefs about the issues of the day or the past. It's a good skill, and probably a decent chunk of why people with history degrees tend to wind up just about everywhere despite the expected "learn history to teach history" cliche (which, of course, I plan to pursue, but hey!). Online sources shouldn't get a free pass from this - but they should not get the automatic fail so many seem to desire, either.

To one degree or another, we are all equipped with what Carl Sagan referred to in The Demon-Haunted World - find and read this - as baloney detection kits: a basic awareness of what may or may not be problematic, reliable, true or false about anything we run into in day-to-day affairs. There are semi-formal versions of it for different things, but to one level or another even the most credulous of us have thought processes along these lines. It's a kit which needs to be tuned and applied towards historical sources online - just like all other sources - and in a far more mature way than the rather kneejerk pseudoskepticism which is common these days.

(I compiled a sample BDK for evaluating online resources a couple of years ago as part of my TAing duties at SMU; once I'm back home for the holidays I intend to dig that up, and I'll follow up this post by sticking it here.)

The reflexive dismissal of sources of information based entirely on their medium is not just an unfortunate practice. It involves a certain abdication of thought, of the responsibility to at least attempt to see some possibility in any source out there, even if it doesn't share the basic shape and style of academic standards. Besides, as I mentioned earlier, there are opportunities in this as well. The nature of online sources isn't simply the "problem" that someone else didn't do our work for us, pre-screening them for our consumption ahead of time. Their nature is such that it underscores the fact that we need to be taking a more active role in this anyway. For the basic materials out there, it's far easier to vet for basic sanity than many might think - I did effectively show a room full of non-majors how to do it for historical sources in an hour, anyway - and giving everyone a little more practice in this sort of thing can't exactly hurt. In other words, we need to approach online sources with a genuine skepticism.

But guess what? This whole thing's just a smokescreen for a larger issue anyway. We're willing, indeed eager, to hold varying degrees of skepticism towards online sources, but why are we singling them out? Why the complacency as regards citations of interviews, of magazine articles, of books? If you're going to go swinging the questioning mallet, you should at least do so evenly, don't you think?


And on that note, I head off to be shoehorned into a thin metal tube and hurled hundreds of kilometers. I shall post at you next from Halifax!

Tuesday, December 4, 2007

Silently Posturing

As an aside exercise for my digital history class, we were asked to read a paper by Alan Cooper called "Your Program's Posture." Cooper categorizes programs as sovereign, transient, daemonic, or parasitic, the specific classification depending on how a program interacts with the user, and the assignment asked us to consider where the programs we use in the course of our work lie in that grouping. I already had a good idea of where the software I use would lie, but I also felt I should read the article before going with my gut instinct of classifying everything as daemonic.

Cooper's categories are described in terms of "postures," essentially their dominant "style" or gross characteristics which determine how users approach, use, and react to them. The first of these four postures is the "sovereign" posture: sovereign programs are paramount to the user, filling most or all of the screen's real estate and functioning as the core of a given project.

The second is "transient," and is the opposite of sovereign software both visually and in terms of interfaces. Intended for specific purposes, meant to be up only temporarily (or, if up for a long time, not constantly interacted with), transient programs can get away with being more exuberant and less intuitive than sovereign applications.

I abandoned my gut reaction of describing half the annoying stuff I use as daemonic when I realized that the third posture refers to daemons in the computing sense of the word rather than the more traditional gaggle of evil critters with cool names. (Computing jargon tends to come from the oddest places.) Daemonic postures are subtle ones, running constantly in the background but not necessarily being visible to the user at any given time. Daemonic programs tend to either have no interface (for all practical purposes) or tend to have very minimal ones, as the user tends not to do much with them, if anything. They're usually invisible, like printer drivers or the two dozen or so processes a typical computer has running at any time.

The final set of programs are called "parasitic" ones, in the sense that they tend to park on top of another program to fulfill a given function. Cooper describes them as a mixture of sovereign and transient in that they tend to be around all the time, but running in the background, supplementary to a sovereign program. Clocks, resource meters, and so on, generally qualify.

In the interest of this not being entirely a CS post, I should probably answer the initial request on the syllabus as to how it can affect my historical research process. I'm not sure, fully, but I'm also answering this entirely on the fly and am more concerned with how it should affect my process. At present, I'm not using many programs specifically for research purposes. Firefox and OpenOffice (which I use in lieu of Microsoft Office, more so since that hideous new interface in Office '07 began to give me soul cancer) - the main programs I tend to have up at any given time, and in which I obviously do a lot of my work - are definitely sovereign programs, taking up most of my screen's real estate. The closest thing I have to a work-related application that's transient is Winamp, which is usually parked in the semi-background cheerfully producing the background noise I need to function properly. I don't make much use of parasitic programs, mainly due to a lack of knowledge of the options, and of course my daemonic ones are usually invisible.

The chunks of this I make use of are mostly a case of "if it ain't broke, don't fix it." I've got my browser, through which I access a lot of my research tools (including Zotero, the most obvious parasitic application I have, and the aggregator functions of Bloglines, the, uh, other most obvious parasitic application I have); I've got my word processor, through which I process my words; I've got Photoshop for 2D graphics work and hogging system resources; I've got Blender for 3D graphics stuff (much though I am annoyed by its coder-designed interface); I've got FreeMind, which is great for planning stuff out. I've no shortage of big, screen-eating sovereign applications, in other words, most of which do their often highly varied jobs quite well.

Some of these can wander from one form to another, of course. I spent an hour earlier this evening working with Blender's animation function to produce a short CG video. When I started the program rendering the six hundred frames of that video, I wasn't going to be doing anything else with it for awhile, and was thus able to simply shunt it out of the way. That left me with a small window showing the rendering process in one corner of my screen, allowing me to work on some other stuff, albeit slightly more slowly as the computer chundered away. Cast down from the throne, the sovereign program became transitorily transient.

What I'm wondering about now, though, are applications which fill the other two postures; stuff that you can set up and just let fly to assist with research or other purposes. A simple and obvious example of this sort of thing would be applications which can trawl RSS feeds for their user. Some careful work setting the application up in the first place - search, like research, is something which can occasionally take significant skill to get useful results - and you could kick back (or deal with more immediate or physical research and other issues) and allow your application to sift thousands of other documents for things you're interested in. Things like this are not without their flaws - unless you're a wizard with searches or otherwise incredibly fortunate, you're as likely as not to miss quite a bit of stuff when trawling fifty or five hundred or five thousand feeds. Then again, that's going to happen anyway no matter what you're researching in this day and age, and systems like this would greatly facilitate at least surveying vast bases of information that would otherwise take scores of undergraduate research assistants to get through.

The information is out there; there just need to be some better tools (or better-known tools) to dig through it. Properly done, something like this would need minimal interaction once it gets going; you set it up, tell it to trawl your feeds (or Amazon's new books sections, or H-Net's vast mailing lists, or more specialized databases for one thing or another, etc.), and only need to check back in daily or weekly or whenever your search application beeps or blinks or sets off a road flare, leaving you to spend more of your attention on whatever else may need doing. Going through the results would still involve some old-fashioned manual sifting, as likely as not, but if executed properly you would be far more likely to come up with some interesting results than you would by sifting through a tithe of the information in twice the time.
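To make that a bit more concrete, here's a minimal sketch of such a trawler in Python, using the feedparser library. The feed URLs and keywords below are placeholders of my own invention rather than a recommendation; a real setup would point at H-Net lists, journal feeds, and the like.

    import feedparser  # real library for parsing RSS/Atom feeds

    # Hypothetical feeds and search terms - substitute your own.
    FEEDS = [
        "http://example.com/h-net-announcements.rss",
        "http://example.com/journal-new-issues.rss",
    ]
    KEYWORDS = ["beaumont-hamel", "newfoundland regiment", "war memory"]

    def trawl(feeds, keywords):
        """Return (title, link) pairs whose title or summary mentions a keyword."""
        hits = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                if any(kw in text for kw in keywords):
                    hits.append((entry.get("title", "untitled"), entry.get("link", "")))
        return hits

    # Run this from a daily scheduler, and you only need to skim the hits.
    for title, link in trawl(FEEDS, KEYWORDS):
        print(title + "\n  " + link)

Crude as it is, even something that size captures the basic division of labour: the machine does the sifting, you do the judging.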

Something like this could help get data from more out-of-left-field areas, as well; setting up a search aggregator as an historian and siccing it, with the terms of whatever you're interested in, on another field like economics or anthropology or law or botany or physics might be a bit of a crapshoot, but could well also yield some surprising views on your current topic from altogether different perspectives, or bring in new tools or methods that the guys across campus thought of first (and vice versa). That sort of collision is what resulted in classes like this (or, at a broader level, public history in general), of course. I want to see more of that - much more.

It could be interesting to see what kind of mashups would result if people in history and various other fields began taking a more active stance on that sort of thing. Being able to look over other disciplines' shoulders is one of those things that simply can't hurt - especially if we have the tools to do so more easily than we could in the past.

I meant to segue into daemonic applications by talking some about distributed computing research, as much to see if I could find ways to drag history into that particularly awesome and subtle area of knowledge, but as usual my muse has gotten away from me and forced a tome onto your screen. So I do believe I shall keep that for some other time...

Thursday, November 15, 2007

Ending our Fences


I've been thinking about barriers between disciplines lately - in particular, what happens when we can tear those barriers down.

Last Saturday I only managed to catch a little bit of the Remembrance Day programming on TV, but I was pleased, for a few reasons, by the one thing I did see. It was part of a documentary about evidence from the Battle of the Somme in general, and the fate of the (Royal) Newfoundland Regiment's 1st Battalion at Beaumont-Hamel in particular. Most of the part that I was able to catch involved trying to identify and evaluate some of the footage allegedly taken during the battle - a particularly important task, considering that even then, war footage was often staged for propaganda reasons.

To confirm (or disprove) the veracity of the footage, the researchers drew together people from several different disciplines: historians, archaeologists, archivists, surveyors, video experts, forensic scientists, and I'm fairly sure I'm missing a few. The main piece of footage they focused on was the detonation of one of the great explosive mines at the very start of the battle (visible, very prominently, just under one and a half minutes into this compilation of clips from the battle). This one wound up confirmed as accurate through piecing together footage and accounts of the battle, a large amount of surveying and GPS work around the mine crater, interviews with descendants of the Somme's veterans who were shown the still-scarred battlefield by their parents or grandparents after the war, and other research, which turned up the records of the cameraman who had shot the scene. The confirmation was a spectacular success, as the crew found the exact point, to within a couple of feet, where the cameraman had stood that day. Demonstrating this by fading the original footage in and out on top of the new footage created a fairly eerie effect, blurring the lines between past and present in an interesting way.

The project also confirmed the veracity of a few other sequences, which turned out to have been shot within minutes of that one, from the same spot, as the cameraman panned the camera to one side to capture some footage of the battalion's disastrous advance. I think that was an unintended discovery, but a good one nonetheless, another brick of This Really Happened in the knowledge wall. Alas, I surrendered the TV at that point to the roomies and the sacred tradition of The Game (and just when they were taking those videos a step further by trying to ID the figures in them - nice!), and didn't get to see what happened next.

But what I did see was a neat enough application.

The day before, a few of the other digital history students and I went to a guest lecture at the university given by Dr. John Bonnett of Brock University. Dr. Bonnett, an historian and Canada Research Chair in Digital Humanities, was giving a talk with the triple-fisted title of "new challenges, new opportunities for history: collaborative environments, high-performance computing, and the future of the historian's craft." The talk was, to be honest, a little on the disorganized and ill-paced side, and could've gone better in ninety minutes instead of sixty. Then again, we had a time slot of an hour, and an hour is a difficult slot in which to get across fairly simple topics, never mind the highly-technical ones he discussed.

So what did he discuss? I could be a smartass and say that he talked about new challenges and opportunities for history by discussing collaborative environments, high-performance computing, and the future of the historian's craft, but I should probably give at least some detail. Dr. Bonnett's talk outwardly appeared to be something one would expect to see coming from a computer science (or at least information science) department, but there was a lot of meat in there which has potential uses in either digital history specifically, or the broader field as a whole.

Much of the first half of the talk was focused on the versatility of various sources of information - even original, primary documents - when combined with new tools and techniques which have become available over the course of the last generation. This was explained in the context of a project (description at another site here, for those who tire of the awkward site design) Dr. Bonnett was engaged in, where various primary sources such as photographs, street plans and so on were used to generate three-dimensional recreations of Canadian streetscapes from the late nineteenth and early twentieth centuries. The reconstruction process also involved some judicious use of educated guesses (a notion I also consider sorely underrated) to fill in gaps, e.g., determining what the east side of a building may look like if photos only show the north and west faces.

The buildings were not the only result of the project; the plan was to produce not merely an exhibit, but a research tool in and of itself. To do this, other sources - the primary materials used in the original reconstruction, modern sources, other audio/visual records, and so on - can be brought into the reconstruction. On top of this, the reconstructions weren't limited to a specific point in time, either; the user could switch between views of the street at different times. The materials were all brought together in terms of a hub-and-spoke model, where objects were defined as much by their relationships with one another as in isolation. Done properly, this approach results in a detailed, interactive and highly nonlinear narrative, allowing the user to notice unexpected connections or create and explain their own. There's a lot of potential in this kind of arrangement, to say the least.
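To give a rough idea of what a hub-and-spoke arrangement might look like under the hood, here's a minimal sketch in Python. Every name and object in it is an invented placeholder of mine - a guess at the general shape of the approach, not a description of Dr. Bonnett's actual software.

    from dataclasses import dataclass, field

    @dataclass
    class SourceNode:
        """A reconstruction, primary source, or other object in the model."""
        title: str
        year: int
        links: dict = field(default_factory=dict)  # relationship -> [SourceNode]

        def link(self, relationship, other):
            # Symmetric links for simplicity; a real system would likely
            # want directed, typed relationships.
            self.links.setdefault(relationship, []).append(other)
            other.links.setdefault(relationship, []).append(self)

    # The streetscape reconstruction acts as the hub...
    hub = SourceNode("King Street reconstruction", 1905)
    # ...and the sources and related views hang off it as spokes.
    hub.link("derived from", SourceNode("Fire insurance plan, sheet 12", 1904))
    hub.link("derived from", SourceNode("Photograph, post office, north face", 1906))
    hub.link("compare with", SourceNode("King Street reconstruction", 1925))

    for relationship, nodes in hub.links.items():
        for node in nodes:
            print(hub.title, "--" + relationship + "->", node.title, "(" + str(node.year) + ")")

The appeal of defining objects by their relationships is that navigation falls out for free: start anywhere, follow the links, and the nonlinear narrative assembles itself.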

The remainder of the lecture had a far more technical focus involving two major concepts: the use of dedicated or distributed networks as collaborative research environments, and agent-based modeling as a source for simulation or experimentation in historical research. I want to go into some detail on those, but it would greatly expand an already-large post. If anyone's interested, prod me and I'll talk about those in a subsequent post. Instead, I'm going to go on to Dr. Bonnett's conclusions from all of this, as well as my own.

Dr. Bonnett made a rather bold - and, in my opinion, accurate - statement about the significance of all of these tools. He argues that the development and proliferation of these sorts of research and collaboration environments is at least as significant to the spread of human knowledge as the development of the book itself. Implications of Sturgeon's Law aside, the changes these sorts of things are potentially bringing into history in particular and communication in general really are a difference of kind, not simply degree. A lot of the results of the digital revolution that's been ricocheting around the world in the last few decades, whether research tools like Dr. Bonnett's or exhibitions for the public which make the most use of new tools (something I discussed in a previous post), simply could not exist, at all, in earlier years. Now they're here, and they're not going anywhere.

I'm convinced that tools and methods of these sorts are woefully misunderstood, in both a passive and a very, very active sense, in the field of history. Modern tools such as computing or other sci-tech applications are certainly studied a great deal in universities - a simple Google search can find a veritable cornucopia of examples of this sort of thing - but not nearly enough effort is being put into applying them, or even understanding them at a level beyond theory. This really does need to change; expanding the discipline's knowledge base in these sorts of directions (and others, such as merely interfacing with other disciplines considerably more than we tend to) will gain scholars and students both a great deal in terms of resources, topics and other opportunities. Avoiding this gains little at best.

While it's more or less taken for granted in an academic environment that we'll tend to erect our little picket fences (or trench lines) between departments or concepts or the like, I'm convinced that doing so too actively is a Very Bad Thing for a number of reasons. Rants about active refusal to learn an available topic at a university aside, I think that there are simply too many potential opportunities for most aspects of history - research, teaching, presentation on both the academic and public levels - to discard or mischaracterize as pointless out of hand. While I don't take things quite so far in the generalist direction as, say, Heinlein did, I do believe that these sorts of changes aren't going anywhere, and we should do a better job of recognizing that sort of thing than we currently do.

Sunday, November 11, 2007

Lest We Remember

It is, for at least the next half-hour or so, Remembrance Day: the day of the year so explicitly built around the concept of memory that the notion is enshrined in its very name.

So, of course, I'm going to spend a good chunk of this post talking about its inverse.

At the appropriate hour today in his time zone, a friend of mine in Australia made a fairly simple commemorative post on his personal blog. It consisted simply of the date and time of the Armistice and the word "Remember." I can certainly appreciate the minimalist nature of that kind of comment; it says most of what needs to be said about the date where it really matters. (Myself, I tend to post some form or another of antiwar poetry at whatever online presence I've been loudest at that year.)

What caught my interest, however, was a statement in the discussion comments beneath the post. Someone mentioned that they'd forgotten the date altogether, but seemed to find that alright, because "[w]ar isn't something to be remembered."

My initial reaction to that kind of statement tends to boil down to "the hell it isn't," but that's largely a combination of my inner historian and (as far as I can comfortably carry the concept) pacifist speaking. I'm old-fashioned enough as far as the notion of remembering events goes that I can say I think George Santayana got it right in one without feeling silly. But there's something else in there that warrants thinking about, as there is with all but the most inane statements.

I'm pretty sure everyone reading this - and everyone who isn't - has a few files parked in their brains' storage that they'd like to be able to delete for one reason or another. They might be big things - being on the wrong end of violence (including, yes, war) or a natural disaster, a major personal failure leaving one with a nagging case of the wouldacouldas - or they might not, being simply minor slights or shortcomings which anyone else would consider not a big deal but, to their bearer, ache like an old wound years after the fact. Most people do have a few things that they simply Don't Want To Remember, but we're largely forced to deal with not having that capacity outside of damaging levels of repression. (That is beginning to change, no doubt raising temptation and concern in equal measure in quite a few heads.)

That's forgetting on a very personal and individual level, of course, and I'm (generally) libertarian enough to think that people should think or do as they will in that regard - if we own anything at all, we certainly own our own minds. With statements like "war isn't something to be remembered," however, a larger issue comes up. It implies the recommendation that there are experiences and memories which are best excised not just from individuals' minds, but from the collective memory of entire cultures.

Cultures - families, towns, nations - deal with that issue of things they'd rather not have experienced on a level which may be somewhat more diffuse than you or I do, but the notion is still there. They handle it in different ways, some good and some bad. Witness Japan, still struggling with its role and experiences in the Second World War; witness Rwanda, taking very much the opposite route; witness Canada, which has integrated its current status as a (largely) tolerant and open multicultural society to the point where few people in my generation have the least clue that we have our own moments of shame. Different reactions for each one; Japan is simultaneously trying to remember and forget; Rwanda is drawing its memories out as much as possible to face and address them; Canada has largely successfully forgotten some of its own black marks.

I'm torn on this kind of thing. On a personal level I despise the idea of excising events from the community's memory. I don't like denial, and the idea that [insert concept here] should not be remembered or thought about is something I generally find deeply appalling. Of course, I'm one person; other historians may not have as much of a problem with this, and historians as a community don't exactly have very strong control over the community's memory at large. I'm not hubristic enough to see myself as The Gatekeeper Of Historical Memory. Simply put, the question of what to remember and what to forget just isn't my decision beyond an immediate level: my own mind, those of people I teach or speak to or write to. If a society at large decides that something is to be forgotten, I must admit that while I may have some influence over that decision, in practice I'm relatively powerless.

So here I am, left wondering precisely how to react to the attitude and the concept. I'd prefer not to vanish wailing over the precipice of despair. I believe historical memory is pretty important - especially when we're talking about it in the context of the current holiday, where we mainly remember a series of intertwined conflicts (I was going to say "from this century," but I reminded myself it's been "the previous century" for some time now) where over a hundred million lost their lives in the name of purest good, blackest evil and everything in between. "Really" remembering those events - having those who had been through them in person around to remind us in a more visceral way than a textbook ever could - is only going to become more difficult in the next decade or so as the survivors of that time pass on. What do we do about the people who believe that we should gloss over things like that, or forget their existence altogether, though?

Obviously we have it in us to continue to present material others would prefer not to think about. That's one of the fun things about the field, after all. (There's a running gag in political science that if you don't anger someone now and then, you're doing it wrong; I believe that applies to history as well, or possibly more so.) Of course, on the public history side of things, there are going to be situations where that's not an option. We're likely to be told now and then that we must put such-and-such a face on things, emphasising one set of memories while pushing another set, which may be every bit as relevant and important and interesting, aside. We're going to be told now and then that we should whitewash, distort, forget or refuse to mention and discuss something, as the War Museum and Smithsonian (among other places) have discovered in recent years.

This issue's here, it likely always has been, and it likely always will be. So what do we do about it? Do we muddle through like we always have, or ignore the people who advocate such ignore-ance (to borrow Michael Frisch's term), in effect forgetting them? Do we engage, or possibly confront, them? How should we respond to having to choose between a representation (or misrepresentation) of history which mandates that we forget something we consider important on the one hand, and our careers on the other?

I have no particular clue at the moment. But then again, it's no longer the eleventh; it's 12:30 on the twelfth, and I'm starting to feel it. So I'll waive my personal responsibility to answer my own rhetorical questions even as I shout 'em into the darkness, and claim retroactively that it was merely my point all along to stick those questions into your head, allowing them to fester in a multiplicity of minds rather than just one. I meant for that to be the case. Really.

At least, I'd prefer you remember it that way.




And it's overdue, but I mentioned it as a tradition of mine at the start of the post. As it is still Remembrance Day in my head, I invite you all to have some Wilfred Gibson poetry.

I also invite you to think for awhile of that "long war" that consumed most of three decades of the twentieth century (for the Great War was not an isolated one); not just its course and the numbers, but the reasons, the ideas, the dreams, and most importantly, the people it affected. History is a human thing, composed of humans' stories and experiences; it cannot exist without us and is diminished when we are. That bloody century we're still staggering out of robbed us of far too many stories and storytellers both; if we hope to do a better job with this century, then remembering that one is at least a start.

They ask me where I've been,
And what I've done and seen.
But what can I reply
Who know it wasn't I,
But someone just like me,
Who went across the sea
And with my head and hands
Killed men in foreign lands...
Though I must bear the blame,
Because he bore my name.
- Wilfred Gibson (1878-1962), "Back"

Wednesday, October 17, 2007

A Review of Livius

As an assignment for my public history seminar, we were required to review an historical website. For my own target, I chose Livius - Articles on Ancient History, created and maintained by Jona Lendering in Amsterdam. This site has been around for a long time, as I'll mention below, and I've been familiar with it for most of its existence. I hadn't looked at it in a few years as of the assignment, actually; I figured that would be an interesting target for my newfound Mad History Skillz, and reviewed it last week. Without further ado:

Asked in his site's FAQ why he created Livius, Dutch historian Jona Lendering cites his impatience with scholars' tendency to write for specialists more than the general audience. That, along with the lack of clearly-written and easily-accessible, yet still scholarly, material for non-specialists, inspired him to launch his considerable website on ancient and classical history in 1996. For the last eleven years – an eternity in "Internet time!" – Livius has remained more or less exclusively a one-man endeavor. The site is regularly maintained, being modified or expanded roughly once or twice a week. Lendering refuses to accept outside help producing content for the site, preferring to bear sole responsibility – and blame – for any errors on it. (The site's pictures are the main exception; many were taken by his colleague Marco Prins.)

For a personal project, Livius' scope is vast. As of its September 29 update, the site boasts over 3,200 separate pages. While many of these can be quite short, with Lendering promising to expand them later, several hundred are substantial, encyclopedia-style articles. Nearly all articles are illustrated to one extent or another, with a mixture of maps, images of coins or ancient artwork, and photographs of what different regions discussed look like today. Several articles expand into large subsections in their own right. For example, the section on Julius Caesar is a twelve-section biography with two dozen annotated and translated excerpts from primary sources, a single link in the main index branching into thirty-seven separate pages. The vast majority of articles on the site are heavily cross-linked to others, with some off-site links as well. The scope of the site is impressive geographically and chronologically as well: the broadest sections of the site are nine of the major regions generally accepted within ancient and classical history (Anatolia/Asia Minor, Carthage/North Africa, Egypt, Germany, Greece, Judaea/Palestine, Mesopotamia, Persia and Rome), with the sections on Greece and Rome the most developed. Other sections have their own strengths: for example, a large collection of Mesopotamian primary sources with images, transliterations, and translations.

Two major problems exist with the site's content: the issue of sourcing, and Livius' inward-looking nature. The first is perhaps the more serious: very few articles have formal bibliographies, although several (particularly in the Greco-Roman and Jewish sections) do discuss primary sources, often at length with excerpts. This is by no means consistent across the site, unfortunately. (Lendering mentions in his site's FAQ that he is reluctant to reference secondary sources often because of growing plagiarism using Livius.) The balance of links is the other major problem, as the vast majority of links are within the site itself. While this means the site is very well cross-referenced, it limits the site's use as a jumping-off point to other resources, at least directly. Livius' front page does have a collection of links to "related websites," however.

Lendering begins to run into accessibility problems with how he organizes and presents his information. Livius is organized as several layers of indices, which means someone accessing the site will usually have to encounter one or two alphabetized lists (sometimes roughly subcategorized into geography, biography, etc.) before getting to the articles they seek. This can be daunting if a reader is seeking general information rather than a particular topic. Lendering has recently added a Google custom search to his main page, however, which makes finding specific articles easier than in the past.

In terms of appearance, Livius betrays its age. Lendering first launched his site in 1996, before the combination of ubiquitous broadband and greatly expanded computer capabilities began to shape Web design. Lendering has continued to use many of the design principles of that earlier era on his page, keeping to a very minimalist, no-frills design which may appear (please pardon the pun!) rather Spartan to contemporary eyes. This approach, combined with navigation bars at the top or bottom of most pages, makes navigating the site quite easy: links are obvious and pages load quickly, even on dialup connections. However, this sometimes causes problems visually; images are often sized to the standards of lower-resolution monitors. Many appear unpleasantly small on modern screens, particularly to those viewers who like lots of detail or close-ups. As with the rest of the site, however, the images are being steadily updated, with more "modern" sizes appearing in newer articles.

Lendering seems to have had mixed success in his stated goals for Livius. He does accomplish part of his intended purpose, by having a free resource online from which readers can get a fairly good picture of the ancient world, particularly classical times. However, ease of access to this information is limited by the site's significant organizational problems and some gaps in its selection. Livius is a work in progress, and due to the scope of the era which Lendering is attempting to document – and the fact that it is, at its heart, a personal project – it will likely remain so for some time. Perhaps unfortunately for Lendering's intentions, it is likely to be more accessible to students or hobbyists who already have some amount of ancient history knowledge under their belt before visiting, especially if his idea of the “general audience” is those just beginning to study the period. Livius is a site which aspires to be comprehensive and which aspires to be accessible to the wider public, but does not quite – as a living site, perhaps does not yet quite – meet these goals.

Thursday, October 11, 2007

Exhibits of The Future!(tm)

In my last post I linked to an interview on "technologies of persuasion." There's a pretty heavy advertising element to that, obviously, but it's an element I think could be used in producing history at times. Anyway, I'm bringing that up mainly because I found an example of this sort of thing the other day that can be read both as that concept in action - it is, at its heart, an advertisement - and as a neat way of presenting history, one that shows the kinds of things you can do with contemporary technology, a bit of creativity, and a tremendous amount of caffeine.

A few days ago, a friend of mine pointed me at a neat example of how one could present a history exhibit with modern technology, in the form of this advertisement for Halo 3. What could a science-fiction FPS have to do with the presentation of history, you ask? Well, take a look at that site. Fair warning for those of you whose computers may not be up to the task: it's Flash-heavy and has audio and video components, but anyone with a moderately-recent machine shouldn't have a problem.

For those of you who can't (or won't) check out the URL, the basic premise of the advertisement, with all the game-setting stuff boiled out, is that it is a historical exhibit - specifically, a war diorama/memorial. It's a very large one, hence the tremendous amount of caffeine, but what's neat about this is the way it's displayed. The viewer's perspective isn't looming over the entire display, the way we tend to stand over most such exhibits in a typical museum, but it's down at the display's ground level as the camera pans and weaves through it. (That panning and weaving is largely under the user's control; you can go through it relatively freely.) That's just neat on a visual level, but what makes it especially neat, at least in my opinion, is how additional content is worked in at various points. At regular intervals in the tour through it, a link will pop up over one figurine or another. Those links lead to content which expands the context of the scene - a "first person account" in the form of a statement from Someone Who Was There on this link, a biographical sketch of another person on that one, a video of a veteran being interviewed elsewhere in the museum for another, a description of one alien baddie or another at another link, and the occasional spot where the tour pauses to allow a full panoramic view of an important location.

At first I simply looked at it thinking "well, this is certainly a damn cool piece of work" - I tend to have a healthy respect for anything that was obviously done painstakingly and well, and this is no exception to that. But after a few minutes I started thinking about it some more. This advertisement is in the form of an exhibit at a fictional museum, of course. It's an ad for a computer game, after all. But what if we got a few other people together and gave them some modern midrange hardware and software, a bit of creativity, and a tremendous amount of caffeine?

This thing isn't just an advertisement to me, although it is (at least to this semi-casual Halo fan) a pretty effective and extremely good-looking one. It is also, perhaps after one distills the game's elements out of it and looks at it on a more abstract level, a template for a pretty impressive, interactive type of exhibit in general. On top of the eye candy factor, it's a neat way of taking a diorama - normally a pretty passive sort of display, much like most things you'll see in museums - and turning it into something interactive.

If this could be made, then why not, say, a similar treatment of a diorama of Stalingrad?

Or Rome at its height?

Or 1930s New York City?

Or anything else, for that matter?

Wednesday, October 3, 2007

You Are Getting Verrrrry Innnnnterested....

A few years ago I earned the nickname "Patient Zero" among several of my friends. Fortunately for all involved, the infections involved were mental: I had a tendency for awhile to get bitten by one interest or another in such a way that those around me managed to pick it up as well. A couple of them would take advantage of that: "I want the guys to listen to this new album, so I'll get Patrick interested in it and the rest will take care of itself." Given how esoteric my interests, and those of my friends, are, this has caused some spectacular feedback loops at times.

So when I just stumbled across this interview over on WorldChanging with author Doug Rushkoff, in which he's asked about his recently-created course on "technologies of persuasion," my curiosity was piqued. Early in the interview, he takes issue with some popular ideas on what persuasion entails:
Seriously, I wouldn't want to use any tactic to get someone to take my course, or to do anything at all. Once a person has been cajoled, there's almost always a negative effect later on. Chairman Mao used to talk about this – how people can't be inspired to foist a revolution, but that it has to come from them. (Not that he lived or led true to this dictum.)

I get asked all the time, "how can we get people to be more this or more that?" Usually by Jewish groups looking to get kids to be more Jewish, progressive groups looking to get people to be more politically active (or at least to contribute money to the right PAC), or my editors asking me to get more people to buy my books. And I think the object of the game is to get out of the mindset of "getting people to do something" and instead just create a really nice, really open invitation.

The key to doing this, Rushkoff believes, comes in the form of connections:

My whole pitch on marketing and communications is for companies to stop creating mythologies and persuasion campaigns around the products that they're disconnected with, and to start getting involved in some aspect of the thing they're selling. [emphasis added]

It definitely has a larger focus on things like, say, marketing or politics than a broader, in-general How To Convince People About Stuff sort of persuasion, but I also believe there's room for some overlap here into "our" topics such as presentation of history outside the academy. As historians, we may not be selling a product in the conventional, give-us-money-we-give-you-stuff sense - though the universities may well be, given the rise of the student-as-customer mindset (which is a whole other rant anyway). But we are trying to get ideas across to others, and most of us at least aren't trying to limit that to a stagnant, preaching-to-the-choir sort of situation.

I'm not, at least, as someone who's studying public history, and also as someone who has his own portion of that vaguely ivorytowerian "why oh why don't people know anything about their history woe woe arrgh?" angst (which I'm sure I share with many of those who read this). Doing some looking around in order to find ways to reach audiences, or perhaps even create them, seems like something worth chasing to me. I'm normally allergic to marketing lingo, joking that people should need a license to use the word "paradigm" in a sentence, but this interview at least piqued my interest enough to try to persuade you guys to take a look at it and think about some of it.

It's worthwhile for those two points I quote above, I believe: that you can't really make someone be interested in something (after all, as several of us discussed yesterday, the consumers' - and audience's - thoughts and beliefs will remain their own, beyond our feasible reach, unless they themselves decide otherwise), and that some kind of involvement and connection - doing and being instead of simply selling or pushing - is probably a better way to spark interest in others.

Great! It's all so clear now!

Well, aside from the implementation part. Yeah.

I definitely like and agree with the idea. The question of how we can do these things, of course, depends on as many separate variables as our interests and circumstances and projects may present. So I don't know. On the one hand, the advice may seem unnecessarily vague, especially if we're a little outside the box as historians. On the other hand, it's still useful for all its vagueness: blank checks can be fun!

Wednesday, September 26, 2007

Technopeasants of the Academy, Unite!

Academia isn't, of course, the only realm where people are currently going at it hammer and tongs over the implications of the Internet as a tool for production and distribution of ideas both new and old. Some of those realms might surprise people who are entering or becoming fully aware of the debate within history or other such fields, though.

Back in April, a fellow by the name of Howard Hendrix flew off on a self-described "rant" condemning writers who use the Internet to give their work away freely. He says that he is "opposed to the increasing presence in our organization of webscabs, who post their creations on the net for free," going on to define the neologism "webscab" as someone who undercuts his fellow workers (or in this case writers), thereby undermining the fight for better pay and working conditions, etc. He says they are "rotting our organization from within," along with a few other similarly loaded turns of phrase, and goes on to describe the victims of webscabs - the people who sell their work in the traditional venue - as being converted into "Pixel-stained Technopeasant Wretch[es]." The existence of these webscabs, in fact, offends him so much that, as part of his right to resist technology he sees as "destructive to [his] ways of life and [his] beliefs," he's decided not to seek a renewed term as the vice-president of his organization. After his term ended, he would in fact step away from technology altogether, saying he'll answer emails but "won't blog, wiki, chat, post, LiveJournal, lounge or lurk -- and [he]'ll be the happier for it."

So what's so unusual about this? It does, after all, sound like a kind of complaint that has come from a variety of different directions in the last few years, though worded in less confrontational terms. And confrontational those terms are; Howard Hendrix's words sparked outrage of terrible power, still palpable when people in his field discuss it today, several months later.

Oh, I forgot to tell you what his organization is? I should probably do that - Howard Hendrix was vice-president of the Science Fiction and Fantasy Writers of America, one of the primary SF/F-related organizations on the planet, source of the Nebula Award, one of the higher honours a writer in either field can receive.

In a blog article over on Boing Boing, Cory Doctorow - who in January 2003 released his first novel, Down and Out in the Magic Kingdom, online for free in addition to releasing it as "a physical object" in bookstores, under a Creative Commons license - gives his own views on Hendrix's statements after having followed a debate between Hendrix and "web-novelist/podcast-novelist" Scott Sigler at a science-fiction event in San Francisco last week. As one can safely assume from the fact that Doctorow is one of the "webscabs" Hendrix rails against, he takes strong exception to the arguments against using the Net as a medium for releasing new material, particularly if it's being done freely. Hendrix made various arguments, ranging from economic problems to mere rhetoric, and these seem to be addressed in turn well enough. While Hendrix is obviously in the minority within the SFWA on this opinion - witness the vitriol in the LiveJournal thread relaying his original statement - there is still a debate going on there to this day.

So if big names in the science fiction community, one of the more technology-friendly bodies of people on the planet (at least for the most part), are arguing back and forth over the Net as a medium, there's clearly something worth discussing here. As I am posting it here, I rather obviously believe that there's some connection to digital history practices.

One of Doctorow's points, and IMHO his most important one, is that the Net as a medium "diversifies the ways in which works find audiences and vice-versa, undoing the 20th century's enormous trend to concentration and more bargaining power for fewer media companies." The concerns about monopolization of knowledge are probably less pressing in academia - at least in the humanities or social sciences, once one escapes the freshman-level tomes. However, the potential benefit of getting information to audiences which want information but may not have access to it - or indeed, may not even know the information they want is out there[1] - is fairly obvious to me. I, like Doctorow, flatly reject arguments that suggest people will stop buying books, or the "ugly straw-man, visibly untrue" that those who support this kind of ready distribution are naive optimists.

What I do see as an issue in online distribution - particularly of the free and unfettered kind, particularly particularly of the free and unfettered kind dealing with academic topics such as history - is the problem of quality control. Sturgeon's Law holds for a lot of things released online, to the point where one may think that Sturgeon was perhaps being a little optimistic. There is no shortage of incomprehensibly weird, if sincerely-expressed, material out there[2], and I do believe separating the good from the bad, or the bad from the oh-my-God-make-me-unsee-that, is a problem. While it's different in degree from what we run into in the average bookstore, or even the average university library, I don't think the matter is that different in kind.

Others may disagree, of course; I know full well that I'm nowhere near the "self-proclaimed Luddite" camp (and in fact often joke semi-seriously that I would love a brain-to-Photoshop interface for working with graphics). I certainly see more potential than risk or threat in the digital age, though. I'm unconvinced by the arguments surrounding The Imminent Death Of The Book and other such things, and have always believed that if someone wants to get their work out there for free (or for whatever they want to request in exchange for it), more power to them. The Internet and the media shooting off from it make it easier to do things like that in some unorthodox and interesting ways, and I enjoy seeing some of the things that can result from that.

As for the quality issue? I dunno. Even the bad stuff out there can spark discussions which can lead in interesting directions. And the bad stuff out there that doesn't do that, which doesn't languish either, but merely incites or misleads or otherwise displays itself as the result of abuses of history or cryptohistory? Well, they put their stuff out there, so what's preventing us from issuing forth refutations? Hendrix complained about the impact of free releases on the SF industry; Doctorow's response in both word (his refutation of Hendrix's argument) and deed (his first novel was a commercial success despite being available for no-strings-attached free download, and nearly won a Nebula besides[3]) is perfectly clear.

[1] - I assume you've all had your fair share of "so that's what that is! now I have a name for it!" moments. If not, what's your excuse?

[2] - A friend of mine is attempting to popularize the idea of using "the Timecube" as a unit of measurement for just how, well, stark raving mad a given source or person or argument sounds. "Some guy called into the radio talk show this morning, and flew into this incoherent rant that topped out around 0.8 Timecubes!"

[3] - As a result of this whole debacle, International Pixel-Stained Technopeasant Day was declared on April 23rd, in which authors (and anyone else who wanted in on it) could release "a professional-quality work" for free on their websites. Somewhere between scores and hundreds of people, including some fairly big names in the field, participated, and thousands of amateurs had fun with it as well.

Sunday, September 9, 2007

Cleverly-Named Introductory Post

"First post!" as they say over on Slashdot. Not quite as satisfying since I'm the only one who can post here, but oh well. That just means I can claim the right to a moderately rambling and self-indulgent introduction, doesn't it?

Who I am can be seen off to the right of this post, and this blog exists as a component of that public history program. Specifically, it's a component of History 513, also known as Digital History: Methodology for the Infinite Archive, taught by Professor Turkel here at the University of Western Ontario.

One of the fun things about being in a program as (relatively) obscure as public history is the raised eyebrows. If I had a nickel for every time in the last few months I've been asked "public history? What's that?" I would probably be gazing down upon you all in air-conditioned comfort from the privacy of my newly-purchased space hotel. It tends to result in interesting discussions, at least, and the topic is usually quickly understood, even if it's seen as a bit weird, not the standard "History of [Topic] in [Place] during [Time]" that most people associate with history classes.

When course schedules come up, and it's learned that I have something called Digital History on Wednesday afternoons, the same sort of question shows up, with a little less confusion, a little less "what's that?" and a little more incredulity, a little more "why's that?" Shouldn't history courses, after all, deal primarily with the hows and whens and whos and whats of the past - preferably involving, to complete the cliche, Dusty Old Tomes and entweeded professors with four hyphens in their surnames? Short of its use as an archiving tool, maybe, isn't most modern technology either irrelevant to the study of history or even an active hindrance to it?

I obviously don't think so, otherwise I wouldn't be here, doing this.

On the one hand, I'm well aware of the arguments and controversies and confusion and even contempt about the ubiquity of technology in modern life. I remember a large part of it firsthand, particularly the aspects to do with the Internet back in the early nineties. ("The Information Superhighway: Threat Or Menace?") On the other hand, I grew up sufficiently around and engaged with most of it that I take much of it completely for granted, wondering what the fuss over new technology is rather than wondering what the point of it is.

It's an interesting position to be in. Even though I can sympathise with some of the revulsion towards the more absurd or annoying[1] aspects of connected culture, and tend to take issue with a lot of the jargon and hype within it - I'm allergic to the prefixes "cyber" and (when followed by anything other than "-mail") "e" - I probably qualify as one of the digerati or whatever term is used to describe netizens this week. At least, I've been around net.culture long enough to remember "what's your major?" being a pickup line and to understand why today is September 5122, 1993, so I know it's got its share of unique quirks and familiar mundanities. I think I've chosen my camp.
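(If that date looks oddly specific, the arithmetic behind the joke is easy to check. Here's a quick Python sketch, assuming nothing beyond this post's date and the premise that every day since September 1993 is just another day of that same September:)

    from datetime import date

    # Eternal September: the calendar stopped turning in September 1993.
    post_date = date(2007, 9, 9)
    day = (post_date - date(1993, 8, 31)).days  # Sept 1, 1993 counts as day 1
    print("Today is September %d, 1993" % day)  # -> Today is September 5122, 1993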

So here I am, in any case, one more drop in a delightfully growing noosphere. Obviously I'll be posting about digital history; it is, of course, required on the syllabus - and a terrific confluence of several of my main interests. This kind of topic, the wealth of resources and opportunities under its umbrella, brings together a lot of things: history proper, its research and presentation[2], ways of outreach that bring the materials to an audience outside the classroom or archives, and the various facets of digital and online cultures. It helps create the kind of environment in which someone can easily read Herodotus in the original, catch radio shows on obscure conspiracies, see the past without the monochrome, discuss counterfactual history (or more conventional fields) in a lively and active environment... the list goes on, and I am most happy that it does.

Much as I am aware and understanding of the anxiety and controversy over many aspects of the Net in this day and age, and much as I wonder about some of the people on it at times, I remain a pretty enthusiastic supporter of its adoption by historians (and people in just about any other field, from mathematics to metalworking). It's an area where a lot of the potential is only starting to be tapped, and where a lot of the concerns are, I believe, somewhat overblown; it's something I intend to explore in considerable detail.

When I first became aware of this sort of field, back before I had a name for it, I almost immediately thought "I want in on this." As I start the transition from "just" studying history to "doing" it, that remains the case.

[1] - Don't lie: you've done this.

[2] - One of the more amazing examples of this I've encountered can be seen here. In this video, Hans Rosling, a professor of international health at the Karolinska Institute in Sweden, makes a point about the usefulness of new technologies as teaching tools that is hard to ignore. Taking the fairly abstruse topic of developing-world demographics over the last several decades, he handily spins together a presentation which is both riveting and very easy to understand. Rosling's point - that a lot of relevant, important, practical information is lying fallow, and could be understood and applied very easily were it just given a different perspective - becomes clear as day starting at about five minutes in.