
Stopping by to code on a snowy evening

Like most people, I grew up with computers. I’m 52, and I’ve been messing around with them since I was 12, in 1982.

Coding is cozy

I’ve always associated learning how to code with the cold, grey winter days near the solstice. There’s something about wanting to be inside when it’s blustery out, concentrating on a task. I think it’s the same reason people like to do jigsaw puzzles and crosswords, build models, knit, craft, bake cookies, or play quiet board games. And maybe it’s not an accident that a lot of people take part in the Advent of Code this time of year. Coding is cozy.

Apollo, PA

My mom was a math teacher at a small Catholic elementary school in a town with a weird, retro-future connection: Apollo, Pennsylvania. The town was laid out in 1790 and renamed Apollo in 1848. It is one of the few, maybe the only, city/state palindromes. And although it is a tiny village, the town also has a moon landing festival that has been running almost uninterrupted since 1969 to celebrate the original Apollo moon landing. To be clear, the town has no connection to the NASA mission other than the name “Apollo”. Apollo also had a nuclear facility, subcontracted by Westinghouse, that produced nuclear fuel. The plant had a terrible safety record and was at the center of a missing uranium-235 scandal, which gave the town a slightly creepy, Cold War vibe.

Our first computer

The school got a small grant to purchase a single TRS-80 Model I in 1982, when I was in 7th grade. (They later purchased a TRS-80 Model III.) None of the teachers knew anything about how to use it, and they were not interested. So my mom brought it home for a few weeks over the Christmas break in 1982 while she learned how to use it. Of course, I wanted to learn how to use it too.

There was no hard drive. There wasn’t even a “floppy” drive. There was no connection to the internet (though of course you wanted the acoustic phone modem), and no preinstalled software or apps. Data was stored on a cassette tape. If you wanted to run something, you often found the code printed in a magazine or a book. The Model I guide book came with the code for several simple games and programs. And that’s how I learned to code: during winter break, typing out TRS-80 Level I BASIC.

A photograph of me, probably in 1983, delicately handling a floppy disk for a TRS-80 Model III computer

I don’t think I have a picture of me with the old Model I, but here I am with the Model III a year later (it stored programs on a floppy disk, which I’m holding). I know this was also during the winter because I’m still in the Pittsburgh Steelers pyjamas that I got for Christmas.

Coding is poetry

One of my favourite programs of all time was “Stopping by the Woods”, which was printed in the original TRS-80 Model I manual. It reproduced the Robert Frost poem line by line on the screen. At the same time, it used a simple randomization subroutine to place pixel blocks on the screen that looked like snow. The monitor was monochrome, black and white with only white pixels, so the effect worked really well. I remember typing it in during the winter and being struck by the way it brought together poetry, chance, and coding. I think that had a big influence on me.

Here’s what it looked like, in a YouTube video.

Each line of the poem is listed separately, and after each line, a command sends control to a subroutine (GOSUB 6000) that generates random snow and then returns to where it came from (RETURN). Open the screenshot below to see.
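The original was written in TRS-80 Level I BASIC, but the structure is simple enough to sketch in a modern language. Here’s a rough Python analogue of the idea (an illustration, not the original listing): print a line of the poem, jump to a “snow” subroutine that scatters random blocks across a row of the screen, and then return for the next line.

```python
import random

# First stanza of the Frost poem, one string per screen line.
POEM = [
    "Whose woods these are I think I know.",
    "His house is in the village though;",
    "He will not see me stopping here",
    "To watch his woods fill up with snow.",
]

def snow(width=64, flakes=12):
    # Stand-in for the GOSUB 6000 subroutine: scatter a few random
    # "pixel blocks" (asterisks here) across one blank row of the screen.
    row = [" "] * width
    for _ in range(flakes):
        row[random.randrange(width)] = "*"
    print("".join(row))
    # Falling off the end of the function plays the role of RETURN.

for line in POEM:
    print(line)   # each line of the poem is printed in turn...
    snow()        # ...then control jumps to the snow subroutine and comes back
```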

If you have a few quiet days over the next few weeks, I hope you have a chance to do some coding, create and solve some new puzzles, read some winter poetry, or just find the time to reflect on the things that give you peace.

The one “productivity hack” that you probably avoid like the plague

A few weeks ago, my office phone rang. It rarely does, and even when it does ring, I rarely answer it. I usually let the call go to voicemail. My outgoing voicemail actually says: “I never check voicemail, please email me at…“. And even if someone does leave a voicemail, it’s transcribed by the university email systems and sent to my email.

This is so inefficient, and self-sabotaging, given that I, like most academics, moan about how much email I have to process.

But this time, noticing that it was another professor in my department who was calling, I picked it up. My colleague is the co-ordinator for the Psychology Honours program, and he had a simple question about the project that one of my undergraduate Honours students was working on. We solved the issue in about 45 seconds.

If I had followed my standard protocol, he would have left a voicemail or emailed me (or both). It would have probably taken me a day to respond, and the email would have taken 5-8 minutes for me to write. He’d have then replied (a day later), and if my email was not clear, there might have been another email. Picking up the phone saved each of us time and effort and allowed my student’s proposed project to proceed.

Phone Aversion

Why are we so averse to using the phone?

I understand why, in principle: it’s intrusive, it takes you out of what you were doing (answering email, probably), and you have to switch tasks. Having to disengage from what you are doing and manage a call is a cognitively demanding act. After the call, you then have to switch back. So it’s natural to make a prospective judgement to avoid taking the call.

And from the perspective of the caller, you might call and not get an answer, and then you have to engage in a new decision-making process: should I leave a message, call again, or just email? This cognitive switching takes time and effort. And of course, since many of us resent being interrupted by a call, we may assume that the person we are calling also resents the interruption, and so we avoid calling out of politeness (maybe this is more of a Canadian thing…).

So there are legitimate cognitive and social-cognitive reasons to avoid using the phone.

We Should Make and Take More Calls

My experience was a small revelation, though, mostly because after the call, while I was switching back to what I had been doing, I thought about how much longer (days) the standard email approach would have taken. So I decided that, going forward, I’m going to try to make and take more calls. It can be a personal experiment.

I tried this approach a few days ago with some non-university contacts (for the youth sports league I help to manage). We saved time and effort. Yes, it might have taken a few minutes out of each other’s day, but it paled in comparison to what an email-based approach would have taken.

For Further Study

Although I’m running a “personal experiment” on phone-call efficiency, I’d kind of like to study this in more detail. Perhaps design a series of experiments in which two (or more) people are given a complex problem to solve, and we manipulate how much time they can spend on email versus on the phone, tracking things like cognitive interference. I’m not exactly sure how to do this, but I’d like to look at it more systematically. The key measures would be how effectively people solve the problems, and whether and how one mode of communication interferes with other tasks.

Final Thoughts

Do you prefer email or a phone call? Have you ever solved a problem faster on the phone vs email? Have you ever found the reverse to be true?

Or do you prefer messaging (Slack, Google Chat, etc.) which is more dynamic than email but not as intrusive as a phone call?

 

A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR’s Morning Edition that really caught my attention (side note: I’m in Ontario, so there is no NPR, but my favourite station is WKSU, via TuneIn Radio on my smartphone). It was a short story, but I thought it was one of the most interesting I’ve heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g., native Russian speakers leave out articles in English). This is not a novel finding; people have known this for a long time. Katz, by the way, is one of the many scientists who worked with Watson, the IBM computer that competed on Jeopardy!

Katz trained a computer model on samples of English text so that it could detect a writer’s native language from the errors in their written English. But the model also learned similarities among the native languages themselves. Based only on errors in English, it discovered, for example, that Polish and Russian have historical overlap. In short, the model was able to recover much of the well-known linguistic family tree among natural languages.
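The story doesn’t describe how Katz’s model actually works, but the general recipe (treat the tell-tale patterns in a writer’s English as features and train a classifier to predict their native language) is easy to sketch. The snippet below is a toy illustration with invented sentences and labels and a generic scikit-learn pipeline; it is not the MIT system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training sentences written in non-native English, labelled with
# the writer's (hypothetical) native language -- purely for illustration.
texts = [
    "I went to store and bought bread.",              # dropped articles
    "We will discuss about plan at meeting.",
    "Yesterday I have seen a very interesting film.",
    "He said me that he is agree with the proposal.",
]
labels = ["Russian", "Russian", "French", "French"]

# Word unigrams and bigrams crudely capture error patterns such as missing
# articles or unusual verb constructions.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Predict the (made-up) native language of a new writer from their English.
print(model.predict(["We go to beach and swim in sea."]))
```

With real data, the same error profiles could also be compared across languages to see which ones cluster together, which is roughly how a family-tree result could fall out.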

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

“But if those dying languages have left traces in the brains of some of those speakers, and those traces show up in the mistakes those speakers make when they’re speaking and writing in English, we can use the errors to learn something about those disappearing languages.”

Computational Linguistic Forensics

This is only one example. Another that fascinated me is the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie (she wrote around 80 novels and many short stories) declined in quality in her later years. That in itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her writing and found that the richness of her vocabulary fell by one-fifth between her earliest two works and her final two. That, and other patterns, are more consistent with Alzheimer’s disease than with normal aging. In short, they tentatively diagnosed Christie with Alzheimer’s disease based on her written work. You can read a summary HERE and the actual paper HERE. It’s really cool work.
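Their paper uses more careful measures than this, but the core idea of tracking vocabulary richness over a career is easy to illustrate. The sketch below computes a crude type-token ratio (distinct words divided by total words); the file names are placeholders I made up for digitized texts, not part of the actual study.

```python
import re

def type_token_ratio(text: str) -> float:
    # Crude vocabulary-richness measure: distinct word types / total word tokens.
    # The published analysis is more careful, since this ratio is sensitive to
    # text length, so equal-sized samples should be compared.
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical file names standing in for digitized early and late novels.
early_work = open("early_novel.txt", encoding="utf-8").read()
final_work = open("final_novel.txt", encoding="utf-8").read()

print("earliest work:", round(type_token_ratio(early_work), 3))
print("final work:", round(type_token_ratio(final_work), 3))
```

A decline in this kind of measure across equal-sized samples is the sort of pattern Lancashire and Hirst report.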

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might rely on many of these same ideas to infer and predict information and characteristics about the author of a set of written statements. And if a computer can detect a person’s linguistic origin from their English textual errors, I’d imagine it could also be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language… but was not. That’s slightly unnerving…