It’s been two-ish months since I last posted to this blog about the conceptual and technical issues of building and maintaining my own domain. This hiatus wasn’t entirely coincidental, as anyone carefully reading my earlier posts (*crickets*) might have predicted. I love tinkering with technical stuff and writing blog posts (which I’ve continued to do regarding my classes and scholarship), to the point where I know it can become a distraction. As we hit the later stretches of the semester, and after the earlier portions had included more attention to web-building, I had to abandon what was fast becoming a favorite distraction in favor of spending more time on classes, student work, etc.

But now, with the semester drawing to an end, I have a bit of the time and energy I’d like to have had all along. In addition, that sense of guilt that lurks if I’m not being “productive” has intersected with a mental state that isn’t particularly sharp and/or focused. All of which suggests an opportunity to have some fun before I dive back into some scholarship next week.

So a bit about what I’ve been doing.

Cohort discussions from Domain of One’s Own convinced me to do just a bit more with LinkedIn and another network where I had existing profiles but hadn’t developed a presence beyond the basics (name, graduate school, and an email address). I have since filled those profiles in with updated work histories, contact information, brief biographies, profile pictures, and CVs that are consistent with what I have posted on my personal domain, and where possible I’ve integrated my site with those profiles using the tools at If This Then That. The academic network seems not to be so compatible with these services (either I’m missing something, or this is a semi-hilarious reflection of academia as a wholly separate world).

Twitter seems to be the most productive in a professional sense, though a “noisy” space. I installed a widget on my top-level domain that displays my latest tweets, and tweets/broadcasts my latest blog posts, and I know several have been retweeted even by people/organizations I don’t know (the one about “rewilding” got picked up by some organization in Europe, for instance). I don’t do a ton of tweeting beyond that (occasional exchanges with colleagues/students who are there), but I do follow–and continue to add–a number of relevant professional organizations (AHA, OAH, NY History, American Antiquarian Society, MA Historical Society, Historic New England, Library of Congress, Common-Place, the American History Museum, Chronicle of Higher Education, Inside Higher Ed), and historians at other institutions (Ann Little, Jeff Wasserstrom, Joseph Adelman); this has alerted me to calls for papers, conference announcements, new publications/scholarship, and blog posts on teaching and other issues in higher education. I have dropped a few people/organizations that were not helpful, and sometimes just plain irritating (if I’m following somebody for professional reasons, I don’t want to know about a breakfast cereal spill).

I never managed to get TinyTinyRSS working to my satisfaction, and I had absolutely no luck getting it to sync with my mobile apps. However, in the thread that followed when I asked for suggestions, people were fans of Feedly, which I have up and running. I’ve been using it to follow a number of blogs–written by friends (historians, botanists, Germanists-turned-academic-skeptics/critics), colleagues (Will Mackintosh, Jeff McClurken), and sportswriters (hey, I’m allowed to have some fun).

I have also managed to get caught up filling in some pages on my top-level domain that were in place but neglected, including some actual information on my “History resources online” page (places to read some scholarship, local links of relevance, and professional organizations). More important and a bit more involved than that was getting some course-related material up in place. My first rudimentary work-around involved uploading files to Dropbox and including links on my own pages to take readers to those documents posted in my public Dropbox, the solution I’d landed on for getting a PDF of my CV posted a few months ago. Inelegant, but it did work. However, right around the time of the whole Heartbleed thing, something about this formula changed, and readers had to download rather than view the file within the browser, which wasn’t quite what I wanted. Plus, it relied on an outside service. But I remembered Jim Groom’s advice on plugins, which was essentially “Wait until you need something, then search for plugins that address that need.” I found BSK PDF Manager, installed and activated it, uploaded my CV and course syllabi, copied/pasted the short-code into the spot on my page (after a little reorganization to get that page set up more efficiently/attractively), and deleted the old links to Dropbox. And while I was at it (and trying to again escape some of my ties to Dropbox, which in my case is fairly easy to do since I’m not all that committed), I set up a subdomain and installed OwnCloud so I could host my own files, especially conference papers and other things I might want to access when I’m traveling.
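For what it’s worth, the view-in-browser versus forced-download behavior is usually governed by the Content-Disposition header the file host sends, which is presumably what changed on Dropbox’s end. A minimal sketch (illustrative values, not Dropbox’s actual configuration):

```python
# Sketch: the header that decides whether a browser shows a PDF
# inline or forces a download. (Illustrative values only.)

def pdf_headers(filename, inline=True):
    """Build minimal HTTP response headers for serving a PDF."""
    disposition = "inline" if inline else "attachment"
    return {
        "Content-Type": "application/pdf",
        # "inline" asks the browser to render the file in the tab;
        # "attachment" tells it to download instead.
        "Content-Disposition": f'{disposition}; filename="{filename}"',
    }

print(pdf_headers("cv.pdf")["Content-Disposition"])
print(pdf_headers("cv.pdf", inline=False)["Content-Disposition"])
```

A plugin like BSK PDF Manager sidesteps the problem by serving the file from your own WordPress install, where the headers stay under your control.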

And of course I worked on some blog posts–flamingos, books, and Alice Cooper.

Just noticed an article from February’s issue of the American Historical Association’s Perspectives on History, announcing the formation of the organization’s ad hoc Committee on Professional Evaluation of Digital Scholarship. Sure, “Making Something Out of Bupkis” is a good title for an article in a professional journal, but more importantly, James Grossman and Seth Denbo here address some of the issues we (I?) raised in Domain of One’s Own regarding the value of digital scholarship–and our concerns about what it contributes to our professional advancement. I think many of us can agree on its value for our professional development, but with the lack of standards in assessing/evaluating that work, I, for one, currently feel more comfortable investing my time and energy in more traditional formats.

The AHA’s stance here is that the lack of standards, and consequent wariness, “robs our discipline of innovative energy…marginalizes scholars who do take the risks…impedes the development of genres that can contribute even more…[and] contributes to a culture that discourages the kinds of collaborative work that are valued–in some cases required–in nearly all other venues of creative enterprise.” To address those concerns, the committee has been tasked with developing guidelines for evaluating digital scholarship and engagement–including publication, collaborative work, public engagement, and teaching.

I think this is a great development, and I am thrilled that the AHA is actively working on issues like these–and that, as we’ve progressed through the DoOOFI, I’m more conversant in these issues.

Darcie is out of town this week, so I’m doing a bit more than usual to take care of the girls (with the invaluable help of Darcie’s parents). This means that rather than staying at my place after my 10am-8:45pm Wednesday teaching marathon, I went to Darcie’s apartment–and encountered the Wednesday-night math homework that I normally miss. This isn’t going to be a math post–I can multiply numbers, and there were only five problems to work on.

However. The sheet also had QR codes that students were supposed to scan to check their answers. (Incidentally, the QR code that appeared here pointed back to this site; I really do enjoy things like this, and you can thank this doohickey.)

I was too tired Wednesday night to get really worked up about this, and maybe I’m insane for getting around to angst by Thursday morning, but this is 3rd-grade math homework. Are we really expecting kids to have access to a QR reader? Maybe we’re expecting parents to all have smartphones that they let their kids use? Or smartphones at all? Isn’t this simply perpetuating educational inequalities that already disproportionately disadvantage kids from certain socioeconomic backgrounds? If it’s classroom work, and there are iPads or whatever for kids to use, that’s fine, but as homework I worry this adds an element that all these kids won’t have equal access to.

And another thing: what value does this actually add? Sure, QR codes aren’t uncommon, and they have their uses, but I don’t see how this helps anyone learn math, or how QR-code scanning is a relevant skill for a 9-year-old. I use the things for boarding passes when I travel, and now I have one sending unsuspecting/irresponsible scanners to my website; you can get by without knowing what to do with the things, and by “get by” I mean “experience no real change in your quality of life.” At least using a calculator (slide rule? abacus?!) to check your answers requires you to think about what you’re doing, and use a device people frequently encounter in either physical or software form, instead of just pointing a camera phone at a code and seeing a number pop up in response.

Maybe the intent is simply to give students something kind of fun to play with, something interactive, and maybe there is value in that. Personally, I don’t think this particular approach is really all that fun or interactive (especially since I won’t hand the iPhone over for her to use herself), but maybe that was the intent. But frankly, this seems about as passive a way of checking an answer as I can think of, and I tend to think that getting it wrong wouldn’t be such a bad thing either.

Whatever the case with the QR reader and 3rd-grade math homework, I do think this raises a larger issue in relation to our application of digital technologies in higher ed. We tend to assume that our students do have internet access, and laptops or smartphones or tablets, and that they’ll be able to engage with whatever educational resources we’re developing. I realize these things are relatively standard for many of our students at UMW, and for those without, there are labs in the academic buildings and libraries to provide access. But the basic costs of higher education are already rising to an extent that does exclude many people, and technology is another cost. In my last job, in a county where more than 20% of the population lives below the poverty line, I did have students without smartphones, and without internet at home because it was unavailable or unaffordable where they lived in rural Minnesota; they could be out of touch for days in severe weather, unable to check the LMS or email (although I also suspect some just used that as an excuse).

As we assign digital technologies a more central role, students faced with the realities of those limits may be marginalized–despite what some of these approaches offer in terms of useful skills and wide accessibility, and sometimes lower costs in some respects. I’ve had students pull up readings on smartphones in class, and I’ve asked them to form groups around those with laptops on a given day to do some work online, but I’ve also been careful not to assume that everyone had some device they could use, and to form groups around students who had tech in evidence. Online discussions have provided adequate forums for student discussions as we’ve dealt with snow days this semester, but they are qualitatively different from the conversations we have in class (though I think more experience with that format could make my approach and my students’ involvement far more effective), and at least at this point they are a fallback rather than a substitute for seminar meetings (unlike my mother’s community health seminar meetings, in which students doing internships currently meet once a week, but which her department/college has decided will be moved to online-only spaces next year–I tend to think this is the result of a combination of administrative pressure to utilize Blackboard, and full-time faculty working to further minimize their face-to-face instructional time in a department where several “part-time” faculty are teaching higher credit loads than the full-timers).

But I do think most faculty adding digital elements are doing so in thoughtful ways–if nothing else, the fact that it’s rarely the easiest route for our teaching, takes additional work for us to learn and implement, and requires another level of monitoring student work, encourages us to be selective in what we use and how we use it, and to consider what it offers to our students. And I think that careful, conscious approach is important to maintain in the midst of what can sometimes be a rush to add digital elements simply for the sake of having them, to meet administrative mandates, etc. While these digital elements obviously can add tremendous educational value, they don’t necessarily do so, and the value they add may not always balance the costs–financial or educational or social–to our students.


(Nobody yell at me for the length of this post; I’m hiding from Elise’s birthday sleepover–which involves loud and crazy little girls–and feeling pity for Darcie, who’s out there in it [but not enough pity to brave it myself].)

Dash laments the “Web We Lost,” characterized by interoperability, open source, and sharing, but I think quickly skips over some of the advantages that do seem to characterize these closed ecosystems (although I think siloing seems like a better term, since it suggests contained boundaries but increasing depth, rather than growing reach; plus I just read Hugh Howey, so I’m receptive to silo metaphors in which some unseen power governs/manipulates entire populations and some renegade elements go rogue and threaten the entire complex). That tight control does seem to produce some stability (albeit with incredible vulnerability if it’s breached once)–apps and content are vetted, and I don’t have to worry too much about what I’m downloading from a curated store; content and activity can easily cross devices (if not platforms), and work I’m doing on my office desktop shows up on my phone and my laptop at home; and for the first time I own a computer and a cell phone with which I never have any major problems (the few minor ones seem to be fixable by a restart, and that’s with a laptop that’s now 6 years old), and which will talk to other hardware. Now, obviously part of the concern is that that all rests with one platform–in my case, Apple–which is uncomfortable if you’re worried about all your information being in one place, all your spending being channeled through one company, and your choices being limited/dictated by what that company chooses to offer (thus no MS Word on iPad, etc.). And this is partly why I rarely buy iPhone apps, have an Android tablet, use non-Apple cloud services and email, am comfortable on Windows, etc. We had a Mac Performa when I was in high school, and it had ClarisWorks, which was not compatible with any other software, and pretty much stunk. Our Performa had a corrupt something or other, and wouldn’t talk to the printer sometimes–mostly when I had stuff I needed to work on at home and then get printed to turn in.
But then, of course, I couldn’t move any of my work to another system because nothing would read Claris files. It sucked. I’m obviously talking about hardware and software rather than just the web, and I’m not thinking about social networks in this bit, but I feel like some of the user experiences may make sense across this discussion–especially the convenience and familiarity and safety and comfort aspects. Anyway, I get the concern, especially in terms of the inability to really retain full rights/control over your content on sites like Facebook, to make it portable and see what they’ve retained–and I think some of the solution lies with approaches we’ve discussed in terms of carefully considering what you put there since we do want to use these services, but also in terms of having our own spaces that we do control.

Jim Groom’s post about the ghettoization of the web, and the role of LMS in establishing that silo effect in universities brings some of this discussion into the realm of higher ed. One of my first thoughts reading this was that of course the LMS made sense as a safe place with a limited set of rules for students and instructors to interact in defined ways (private, limited, etc)–probably a good thing in some situations, though perhaps one that should now be in the process of becoming complementary rather than central to our students’ engagement with the digital. [Incidentally, I was reading all this stuff at the same time as I was noticing the #DML2014 hashtag on Twitter popping up, and putting together the fact that Jim Groom and one of my high school friends–now at USC’s School of Cinematic Arts–were in the same place and doing the same thing. On top of that, grad school colleagues at Chapman University and somewhere in TN follow Jim Groom and Jeff McClurken on Twitter. Just sort of an interesting collapse of time, distance, and my various relations into one Twitterverse, albeit all sitting on one service in a conversation to which I won’t be privy if I drop the Twitter.]

And of course I get that lots of this comes back to cash–targeted ads and otherwise, data collection, app purchases, media content purchases, and in the case of an LMS, presumably the tuition money that maybe seems not so carefully spent if it isn’t supporting something to which students have exclusive access (just a thought). Dash recognizes this economic issue, but also imagines some possible directions that might re-open web use (though I seem to recall something about choosing to be less efficient, which seems idealistic to such a degree that it strikes me as unlikely–though here again, higher ed could conceivably lead the way [something it’s not always good at] into a radically more open model not driven primarily by revenue).

So the web generates cash, but it also apparently creates value? This is where I don’t get Bitcoin (the digital-currency, transnational, anonymous, etc. part I follow): where does the actual value come from? How does running the mining apps create money? I wondered this with the video explanation–is it like Office Space (or Superman 3) where bank transactions round off at these crazy decimals and drop the remaining fractions of a cent…into a bank account we opened? I feel like the better way for me to wrap my head around this than listening to Bitcoin promos and reporting that doesn’t seem to actually explain anything is to return to Neal Stephenson’s Reamde (he likes issues around interfaces, digital worlds, and currency–the Baroque Cycle featured Newton and the Mint, plus alchemy), in which a group of hackers released a virus that encrypted user files and held them for ransom in a digital space/role-playing game (in digital currency that functions in the game, but is somehow transferable outside the game), and wound up fighting a battle in that space, against a ton of other players, to protect the cash/cache. In the parallel reality, outside the digital, they were also fighting–each other, a terrorist organization (neighbors in the building), and Russian mobsters (who were laundering money through the game, I think, and had it intercepted)–I’m a bit hazy on the details because I didn’t totally follow this angle when I read it, but I feel like I should revisit it. Anyway, I guess it’s interesting, but I don’t know enough to take much of a position here–Bitcoin strikes me as incredibly prone to abuse, and pretty ephemeral (and yes, I know my dollar bills are already primarily electronic and not linen/cotton, but at least when they go wrong I know there is a Federal Reserve and a government attached to them, and an entire country and economy committed to/invested in their continued existence). This was amusing, however.
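For the mechanics, at least, a toy sketch helps: “mining” just means hashing candidate blocks over and over until one hashes below a difficulty target, and the protocol rewards whoever finds it with new coins. This is a drastically simplified illustration (real Bitcoin uses double SHA-256 over an 80-byte block header and a vastly harder target), and it explains the computation, not where the value comes from:

```python
import hashlib

# Toy proof-of-work: increment a nonce until the SHA-256 hash of
# (block data + nonce) falls below a target. Lowering the target
# (raising difficulty_bits) makes valid nonces rarer, which is why
# mining burns so much computation.

def mine(block_data: str, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

nonce = mine("some transactions", difficulty_bits=12)
digest = hashlib.sha256(f"some transactions{nonce}".encode()).hexdigest()
print(nonce, digest[:8])
```

The “money” part is a protocol rule, not the math: the network agrees to credit the finder of a valid nonce with newly issued coins, which is the part that still strikes me as resting on collective faith.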

The net neutrality discussion, especially in its latest manifestation centered on Comcast’s attempt to buy Time Warner, has also intruded on my peaceful little world (and again centers on money–this time very clearly). Here, let’s challenge the highway analogy. One difference I see is that the majority of those roads are publicly owned and maintained–sure, there’s a monopoly, but it’s government, which is ostensibly a manifestation of “the public,” and isn’t exactly charging people different amounts to access those roads (unless it’s making Virginians pay a personal property tax on cars [and extra for you damn Prius drivers shirking on your responsibilities], rather than adding the taxes to gas prices). Sure, there are the pay-to-access toll roads (I’m sticking with my California terminology), and with web access you have the higher-speed infrastructure (I guess heading towards the fiber-optic networks, but formerly maybe Ethernet at universities or the higher-cost broadband hookups, as opposed to the relatively inexpensive but basically functional dial-up option). I have nothing brilliant to say here, other than that the merger seems like a terrible idea, and that’s on top of the crappy cable service model anyway, with terrible bundles of channels and ridiculously high fees to pay for kajillion-dollar contracts to idiot stations like ESPN, blah blah blah. So hey, let’s empower a terrible industry and company to get worse. Yippee!

I’m outta steam. I’m gonna go read about megafauna, Indians, and volcanic activity on the Pacific Rim.

Martha Burtis reminded us to come up with great titles for posts. Great titles are my kryptonite. If anybody reads this post, that means they’ve read others, so they’re already aware. I stunk at titles as a student, stunk at them as a reporter, stunk at them as a grad student, and stink at them today. Bad puns are my crutch.

I linked Twitter to my blog earlier, adding the widget so the latest Tweets show up in the sidebar. Of course, most of my Tweets had previously been announcements about blog posts, since I’d already linked the blog to the Twitter account. Now I feel like I need to have real Tweets so all my latest don’t just cycle people back to the blog. However, the AHA magazine Perspectives had a January article suggesting that linking personal sites to social media sites (which Google algorithms like, apparently) helps move them up the search rankings, so there’s that. I think I have some practically inert LinkedIn and other profiles I could connect, and I guess I could connect my professional biography in the AHA system, too (and since I have to pull one of those together for UMW anyway…). If I can figure out how it is that I link these things.

Also, adding blog posts and links and tweets, I just killed off a good hour I should have used for grading (which is among the last things I find appealing anyway, so not something I need another excuse to avoid). The writing is obviously much easier than academic writing, so goes much more quickly/doesn’t demand nearly as much time, but I posted on drought in California, data mining and studying the 1918 influenza epidemic, an update on an article, in addition to writing my FAAR this week–needless to say, other writing didn’t happen. I think I need a goal, like posting once a week or so, and not blogging about reading, teaching, technical stuff, musings, and research, but rather picking one or two per post. Otherwise I’m likely to spend way more time doing this fun stuff, and musing, than doing my “real” work (though if we can make this count as “real” work, that’d be grrreeeaaaat–Okay, thanks Peter.).

I suppose if I kept up on my reading in the monthly magazine issued by one of my professional organizations, I’d have been able to bring this to the table in our DoOOFI cohort meetings, or post on it in a more timely fashion. But I don’t, so I didn’t, but my Wednesday night reading proved timely nonetheless.

A team (of history, English, rhetoric, and engineering professors, plus computer science students and librarians) at Virginia Tech published a piece, “Mining Coverage of the Flu: Big Data’s Insights into an Epidemic,” in the AHA’s Perspectives on History, that I found enlightening. They concede that for historians “accustomed to interpreting the multiple causes of events within a narrative context, exploring the complicated meaning of polyvalent texts, and assessing the extent to which selected evidence is representative of broader trends, the shift toward data mining (specifically text mining) requires a willingness to think in terms of correlations between actions, accept the ‘messiness’ of large amounts of data, and recognize the value of identifying broad patterns in the flow of information.” It’s asking quite a bit, but their measured optimism is, I think, quite reasonable.

Using 20 weekly newspapers from throughout the US, they identified topics (defined by words that frequently appeared together–something I actually worked on a bit as a grad-student research assistant) to think about broad patterns in reporting on the disease, including change over time. I don’t see the historical developments they identify as especially groundbreaking (and this recalls what Debra Schleef raised in relation to sociology, where she has seen projects use such methods, but without accomplishing much that traditional methods couldn’t anyway), but the research team then closely read selected articles to confirm the larger patterns and to further develop arguments–which I think suggests they treat data mining as a supplementary tool, one in which researchers can build confidence over time as they gain experience and confirm some of what they find by applying more traditional tools as well.
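Their notion of a topic–words that frequently appear together–can at least be gestured at with a simple co-occurrence count. This toy version (made-up one-line “articles,” nothing like their actual topic-modeling pipeline) shows the basic idea:

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of "words that frequently appear together":
# count pairwise co-occurrence of words within each article.
# (Invented toy data, purely illustrative.)

articles = [
    "influenza spreads quickly quarantine ordered",
    "quarantine ordered schools closed influenza",
    "war bonds rally downtown",
]

pair_counts = Counter()
for article in articles:
    words = sorted(set(article.split()))       # unique words, stable order
    for pair in combinations(words, 2):        # every unordered word pair
        pair_counts[pair] += 1

# The most frequent pairs hint at a "topic" (here, epidemic coverage).
for pair, n in pair_counts.most_common(3):
    print(pair, n)
```

A real pipeline scales this up with probabilistic topic models rather than raw counts, but the intuition–clusters of co-occurring vocabulary standing in for subjects–is the same.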

A second component of the project involved identifying the tone of these newspaper reports, which the project could do on a larger scale than individual human readers could manage. Again, the categories of tone they identified–Reassuring, Explanatory, Warning, and Alarmist–weren’t surprising or really new, I don’t think. Nor did the classifier program’s 72% success rate “correctly” identifying tone seem especially high. Yet the team’s report was cautiously optimistic, noting, “It is therefore potentially valuable as a knowledge discovery technique, but only if it can be refined,” which also suggests this process alone would provide an incomplete understanding. As they say, “Tone classification illustrates the real challenges that the complexity of written language poses for data mining.”

In other words, they’re very much in favor of employing new methods, but advocate their combination with existing methods–and the application of this combined approach to history, and especially to the 1918 influenza epidemic (which I talked about as a pandemic in my US History survey just last week), resonated with me.


I’m not all grumpy-pants. I found the idea of an “open scholar,” whose project and process are not only visible both within and without academia, but continue to evolve in response to the conversation they foster (which itself is visible), particularly intriguing this week. That would provide quite a contrast to the more traditional approaches to scholarship, especially through academic journals (which in terms of the business model I tend to think are all kinds of problematic in their current form anyway). The tension between the two approaches seems to me reminiscent of our discussions about a pedagogy of abundance (as opposed to scarcity), in the sense that those journals are premised on the idea that due to the editorial and peer-review process, the knowledge/scholarship they present is of higher quality/authority (scarce), whereas knowledge/scholarship not vetted by those processes is perhaps not (though possibly abundant on the open web). So in a sense, the goal is to create the impression of scarcity even where it might not truly exist. The open scholar presents one avenue for escaping that model, though certainly not one without its own problems (not least of which, of course, is that mechanisms for determining quality may not be as well refined or tested).

However, I also have to question the viability of this model for all scholars (I’m thinking here in terms of rank more than discipline, though the latter could be a factor as well). In part that concern arises from my position as newly tenure-track, and my attention to what will secure/advance my own career–and in the current climate, becoming that “open scholar” wouldn’t. Sure, that’s something that might be combined with a more traditional approach, though that just increases the demands on my time/energy/resources. Besides the relative security of tenure, a more senior scholar would have advantages that I think would facilitate her/his transition into being an open scholar, most notably a well-established network (PLN?) of peers and contacts who could contribute one element of that open conversation, providing some continuity in the conversation as well as academic authority (which could potentially be constricting, but could also serve as “quality control” to some extent).

Part of what I want to accomplish with my domain is to share some of my work and my process, and even some random thoughts I’m unlikely to ever follow up on, and I’d love to continue conversations about scholarship that I’ve begun in other venues. Plus, I do want to make sure if people search for me and read my webpage, they can get a good sense of what I work on. At the same time, I’m wary of making that work too available, so available that no one has any reason to come to me to get it, or to publish it in a journal, etc. For the time being, I’ve settled on descriptions and abstracts, and perhaps a focus on process (what I’m doing) over product (what I have to say).

Perhaps I’m just being reactionary, but the Anderson article makes me cringe.

One fundamental flaw seems to actually be hinted at in the article itself. Anderson is talking about how models fundamental to the physical and biological sciences ultimately proved imperfect, simplistic, etc. as we gained greater understanding of them, as scientists continued to test and refine them (or refute them), and seems to suggest that that is a problem with models in general. But we’re supposed to have faith in a “Big Data” that just takes the data and identifies patterns? Isn’t that a similar simplification of nuanced and diverse behaviors and realities? I get that it’s useful, and opens up tons of avenues of inquiry, and certainly provides new capacities for answering questions we could have only theorized about before, but I don’t see it as completely addressing all those avenues of inquiry, like why there might be exceptions to its big patterns. To take the bacteriologist: he’s identified all these new bacteria, and might be able to make some guesses about other forms of life to which they’re related, or some of their characteristics. But so what? What good does that do? It gives you lots of information, lots more data, but to get anything particularly useful you have to ask the right questions of the data, which presumably relies upon models and theory; to actually do something with whatever you get out of the data requires the same.
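The correlation-without-a-model worry is easy to demonstrate: two series with no causal link can correlate almost perfectly when a hidden variable drives both. A toy illustration with made-up numbers (the classic ice cream/drownings example, both driven by summer):

```python
# Two series with no causal connection can still correlate strongly
# when a hidden variable (here, season) drives both. Numbers invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice_cream_sales = [20, 35, 50, 70, 65, 30]   # by two-month period
drowning_counts = [2, 4, 6, 9, 8, 3]

print(round(pearson(ice_cream_sales, drowning_counts), 3))
```

The coefficient comes out near 1.0, and no amount of data makes the correlation itself explain anything; deciding that summer, not dessert, is doing the work still requires a model.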

I appreciate Caulfield’s intervention here. He notes that scientists etc. have premised their work on the assumption that correlation does not mean causation–which seems reasonable enough. I appreciate his point that correlation is enough for some tasks (and I like the term “radical pragmatism” he tosses in there), but it also seems woefully inadequate for others. Take Google translations, which Caulfield mentions as being sets of statistical probabilities, the products of which are generally good enough to move a web page between languages or give a user a general sense of the content of some entered text. That is obviously a useful tool, but it’s also incomplete, likely struggling to convey connotations, rhythms of language, emphasis, the careful construction of sentences and the order of ideas, cognates and puns, and a thousand other nuances of language that do convey meaning. Here I come back to my own scholarly interest in cultural mediators/brokers involved in 17th- and 18th-century European-Indian relations, who were not simply translators, but rather spent inordinate amounts of time learning the structures of speeches and negotiations, the proper moments to employ mnemonic devices and what those objects should be, the histories and cultural logics of a given symbol or title–and when to fudge what was being said so as not to piss off the other participants in the conversation. All of that gets lost in something relying only on sets of statistical probabilities–not always a concern, but often.
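Caulfield’s “sets of statistical probabilities” can be put in miniature: a purely statistical translator just picks the highest-probability target phrase, with no model of context or meaning. A toy sketch with an invented phrase table (not how Google’s system actually works, just the shape of the idea):

```python
# A purely statistical "translator": pick the most probable target
# phrase from a table, ignoring context. Invented probabilities.

phrase_table = {
    "bank": [("riverbank", 0.3), ("financial bank", 0.7)],
    "light": [("not heavy", 0.4), ("illumination", 0.6)],
}

def translate(word):
    """Return the most probable translation; pass unknown words through."""
    candidates = phrase_table.get(word, [(word, 1.0)])
    return max(candidates, key=lambda c: c[1])[0]

print(translate("bank"))   # always "financial bank", even in a sentence about rivers
print(translate("light"))
```

The failure mode is exactly the one the cultural-broker comparison highlights: the most probable rendering wins every time, regardless of what the speaker meant.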

Basically it seems that everybody’s favorite models for explaining why causation doesn’t matter and correlation is enough come down to Amazon and Google–in short, the ability to sell crap. I want to see how it’s useful to the sciences and the social sciences, and I especially want to see how it works with the humanities–I assume there are instances out there I’m just not aware of as yet, and I’d be curious to see them. I’m sure Big Data and its ability to identify correlations can be and will be useful in these other contexts, but I’m not convinced it is so independent of theory and models as Anderson implies.
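To make the point concrete, here’s a quick toy sketch of my own (not from Anderson or Caulfield, and the numbers are entirely made up): two variables that never influence each other can still correlate strongly when a third factor drives both–which is exactly where pattern-finding without theory can mislead.

```python
import random

random.seed(42)

# Invented confounder example: ice cream sales and sunburn cases are
# both driven by temperature; neither causes the other.
temps = [random.gauss(70, 10) for _ in range(1000)]
ice_cream = [t * 2.0 + random.gauss(0, 5) for t in temps]
sunburns = [t * 0.5 + random.gauss(0, 3) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A data-only approach finds a strong correlation here; only a model
# of the situation tells you temperature is doing the causal work.
print(pearson(ice_cream, sunburns))
```

Running it shows a strong positive correlation between the two “effects,” even though by construction neither one touches the other. The pattern is real; the causal story requires theory.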

Oddly enough, this is something I’ve been thinking about the last two weeks anyway, as I’ve worked with a Southern-California-based PhD student I’ve never met in person to organize a panel proposal for a big conference. He met my PhD advisor, who put us in touch since our work is somewhat similar. We emailed back and forth, then talked on Skype. From there, we emailed around to try to find a third panel member, a commenter, and a chair. I’m not sure what the connections were that helped him find the panelist and commenter, but I found the chair by asking a historian who will be visiting UMW later in the semester, and who was recruited for that in part because she attended grad school with one of my historian colleagues. That’s a lot of different media for communication, and a variety of types of connections, and none of them are all in the same place despite the fact that we’re all historians.

However, in some ways I find that process, messy as it is, more attractive than some of the alternatives. Sure, if there were a centralized location, things might be easier, and certainly things like Twitter might have potential for sending a CFP/panelist search into networks of people likely to respond (those “pools of expertise” Frey mentions), but I also think some of those elements of a Personal Learning Network could be overwhelming. In most of those categories, I’m somewhere on the edge of “Between Two Worlds” and “Entrenched in Real-World Networks,” and mostly I’m okay with that. I think it’s only in Writing and Commenting that I’d want to even approach “In the Matrix” levels (I want to write more habitually, whatever the context, and whether or not the majority of it gets read/used), in part because that level seems to require a commitment that demands too many of my resources (time, energy, etc.) without a ton of immediate payoff for me (as far as I know, I get no credit for those in terms of production/scholarship, though some sort of recognition of those conversations as akin to participating in academic panels, conferences, etc. might motivate me more). The social bookmarking/archiving I don’t find especially appealing (and I don’t like watching stuff, so a YouTube account is unlikely to ever happen). I have enough obligations, and this creates more, and I feel it could pretty quickly grow beyond its capacity to be meaningful and/or manageable. Thus I find the Hackademic Guide suggestion about seeing Twitter as a live conversation, and dropping in occasionally for just a bit, even more attractive than Frey’s suggestion to vet and weed your networks (which in itself would require a lot of time and attention).

This is part of the reason I don’t aggressively expand, explore, search, and share a ton–the time, energy, effort, and attention are being spent elsewhere. I know there’s a lot out there, I know some people love those connections. I suspect quickly scanning, identifying the most useful, and paring/controlling these networks in ways that keep them from getting unwieldy is a skill acquired largely through experience, but that means I would need to make the commitment and deal with the learning curve and expenditures if I want to benefit. And I’m not sure these pieces acknowledge those kinds of costs.

That’s what UMW History’s department chair, Jeff McClurken, told one of his classes the other day, and I’m reminded of it after last night’s mad scramble to figure out what to do about the snow.

I have a senior seminar that meets once a week, on Wednesday nights from 6-8:45, and we already lost a week to the semester’s first snow day. That one was relatively easy to recover, since I was going to lead that discussion, rather than it being a student-led discussion, and thus a graded assignment. I moved that discussion online, and hosted it on Canvas, our LMS, monitoring the conversation from home and adding my thoughts as appropriate. That still let me model the advance post I was going to require of student discussion leaders, how I’d like them to participate in discussions, and the summary I want them to write afterwards.

That was the first time I had run a complete discussion online (I’ve had students post, but only in my absence, and I’ve summarized afterward rather than being involved in the ongoing posting), and so I turned for help to my significant other, Darcie, who recently completed her MLS through Rutgers University’s online program. She had lots of tips for refining what I was asking students to do, and how to explain it to them, and the whole thing seemed to work out fairly well.

Last night promised to be a different story. With impending severe weather, the administration announced between 3 and 4 pm that campus would be closed and classes cancelled after 6 pm. I’d been madly scrambling all day to be ready for class (6 hours of scheduled classes on Wednesdays), meet with students (two advisees, a potential thesis student, a current thesis student), and polish off an abstract for a conference panel proposal (which I felt obligated to do, since my collaborators had already spent time on theirs). Nonetheless, it was not a relief to return from my 3-4 class to discover the email announcing the cancellations. Now what was I supposed to do with my evening class, which would be missing another week, this time a student-led discussion of a great book?!

Luckily, I had the earlier online discussion to work from. I talked with a colleague to think it through, then modified the earlier discussion guidelines. This time, students would work from the advance post their classmates had shared, responding to prompts as well as to each other, and the student discussion leaders would check in and add their thoughts as the conversation progressed. I also told students to expect to spend the first hour of next week’s class revisiting the online threads, and addressing topics the discussion leaders felt needed additional attention. I emailed everyone, posted the plan on Canvas, and stuck around to make sure no one showed up for the start of class–no one did (I’m sure they were checking their email diligently every couple of minutes until the announcement came, but they may not have checked since).

I’m not sure I could have brought that together as effectively if I hadn’t done it already earlier in the semester, and perhaps more to the point, I’m not sure how fair it would have been to ask students to pivot on such short notice and expect it to go smoothly without their having had earlier practice. And I’m also absolutely positive that even the ability to move things online wouldn’t have been so plausible just a few years ago–other LMSs I’ve used have been less user-friendly, the format would have been less familiar, and at my last university a shocking (to me, anyway) number of students didn’t have home internet access (they lived in very rural areas, truly couldn’t afford it, etc.). Nonetheless, I still wonder whether my email and posts–all I can really do at the moment–made it to everyone, since some people likely don’t watch their email for that kind of information (perhaps deliberately), and may have gotten the announcement via other channels instead, like the UMW website, university announcements via text message, or Twitter.

And then I realized there’s a downside for me to all this. As I dropped files into Dropbox and added notes and reminders to the Notes app–both of which sync to my phone and are accessible from the laptop I often have with me–it occurred to me that there’s no snow day for me, either: no excuse not to write that letter for a student’s internship, keep up with the conference proposals and other writing, set up my grade book in Canvas, etc. And maybe those students who don’t check their email after the snow day is announced are on to something.

I know none of my thinking here is revolutionary, but I think that is telling in itself.