long, rambling post alert. it’s been a while since I’ve posted, so lots of things have been stewing. bear with me.
It’s fashionable to hate the LMS. It’s the poster child for Enterprise Thinking and lazy (online) pedagogy, so it is easy to rail against the LMS as The Cause of All Educational Evil. The LMS is put into the stocks, and we are expected to stand in the town square and throw rotten fruit at it.
We’re pushed into a false binary position – either you’re on the side of the evil LMS, working to destroy all that is beautiful and good, or you’re on the side of openness, love, and awesomeness. Choose. There is no possible way to teach (or learn) effectively in an LMS! It is EVIL and must be rooted out before it sinks its rotting tendrils into the unsuspecting students who are completely and utterly defenseless against its unnatural power!
I feel like I’m cast in the role of an LMS apologist, because I have a more nuanced approach.
I have been an advocate for, a supporter of, and a contributor to open source communities, open content licensing, and generally sharing stuff because why not? I have also played a key role in the recent adoption of a new LMS by my university. But. How on earth can I reconcile these two diametrically opposed world views? Gasp.
It’s almost as if different tools are used for different purposes.
When I think about the LMS, and its role in the enterprise, this is what makes many people’s hair stand on end. THE ENTERPRISE HAS NO BUSINESS IN THE CLASSROOM! etc. Except that’s largely bullshit. Of course classrooms are an Enterprise issue – whether physical (buildings and facilities are expensive to build and maintain, and need to be managed properly etc…) or online.
But, the argument goes, online means there are no rules, no boundaries, no constraints. People should be free to do whatever they want.
That’s great – I think it is truly awesome that people can craft their own online environments, to support whatever online activities they want to do. And that instructors, staff, and even students (gasp!) can do this stuff on their own, with no interference or meddling from The Enterprise.
But. We can’t just abdicate the responsibility of the institution to provide the facilities that are needed to support the activities of the instructors and students. That doesn’t mean just “hey – there’s the internet. go to it.” It means providing ways for students to register in courses. For their enrolment to be automatically processed to provision access to resources (physical classrooms, online environments, libraries, etc…). For students’ grades and records to be automatically pushed back into the Registrar’s database so they can get credit for completing the course. For integration with library systems, to grant access to online reserve reading materials and other resources needed as part of the course.
Anyone who pushes back on this hasn’t had to deal with 31,000 students, and a few thousand instructors. This stuff needs to be automated at this scale. Actually – “scale” is another divisive issue. Why worry about scale? SCALE? WILL IT SCALE? As if scale is irrelevant. If a university needs to deal with tens of thousands of students, I assure you that scale is absolutely relevant. Anyone who thinks we shouldn’t spend time worrying about providing a common and consistent platform as a starting point needs to spend a week helping out at a campus helpdesk, answering questions from instructors and students.
OK. So the LMS is primarily used by institutions to make sure that there is a common starting platform for online courses. That courses are automatically created before a semester. That students, instructors, TAs, etc… are given access with appropriate privileges. That archives and backups are maintained. That records of activities and grades are kept. This is the boring stuff that is supposed to be invisible. But, it’s necessary if we are to responsibly teach online.
If instructors and/or students want or need to, they can of course do anything else they feel like doing online. Providing an LMS doesn’t mean “YOU SHALL NOT USE ANY OTHER TOOL” – there is no mandate to say “ONLY THE LMS SHALL BE USED”. It’s a starting point. And for some (many? most?) courses, it’s sufficient.
GASP! THE LMS IS SUFFICIENT? HOW CAN HE SAY THAT? BURN THE HERETIC!
Calm down. Take a step back, and think about some of the courses at a university. How about, say “Introduction to Chemistry” – yup. An LMS is entirely sufficient for that kind of course. Provide course info, share documents, maybe do some formative or summative assessment, and store some grades. LMS? check.
How about, say, “Calculus III”. Same pattern. LMS? check.
“Introduction to Shakespeare”? Students might want to blog about passages in Othello. Or link to performances of Macbeth. Maybe post photos of a campus production of King Lear. Great! Throw in a blog. Use the LMS for the basics, and do other things where needed. The LMS course becomes a source of links to other resources, and takes care of the boring administrative stuff.
But – why wouldn’t the instructor for the Shakespeare course want to be completely free of the shackles imposed by the LMS? THE SHACKLES! They might. Or, they might want to have a private starting point, before moving out into The Wide Open.
Even if the instructor decides to completely ignore the course shell that’s automatically created in the LMS, and go out on their own – say, using a WordPress mother blog site – they still need to take care of the boring administrative stuff. They’ll need to come up with a system for adding students to the mother blog site (and removing students when they drop the course). They’ll need to come up with a way to store grades (unless they’ve been able to convince administration and students that grades aren’t necessary – I haven’t met anyone who’s had luck there). They need to keep adding features to their custom website, until it starts accumulating lots of bits to handle the boring administrative nonsense.
Eventually, you come up against Norman’s Law of eLearning Tool Convergence:
Any eLearning tool, no matter how openly designed, will eventually become indistinguishable from a Learning Management System once a threshold of supported use-cases has been reached.
The custom platform starts to need care and feeding, maintenance, hacks to import and export data. It starts to smell like an LMS. So now, instead of a single LMS that can be supported by a university, we have an untold number of custom hacks that must all be self-supporting.
And here is where the pushback from the Open camp is strongest – BUT WE DON’T NEED OR WANT SUPPORT. JUST LET US DO OUR THING!
Which is great. Do your thing. But, what about the instructors (and students) who don’t have the time/energy/experience/resources to build and manage their own custom eLearning platform? Do we just tell them “hey – I did it, and it wasn’t that hard. I can’t see any reason why you can’t do it too.”? That starts to smell awfully familiar.
Which brings me back to my personal position on this. There is room for both. Who knew? The LMS is great at providing the common platform, even if it’s just a starting point. And the rest of the internet is awesome at doing those things that internets do. There’s lots of room for both.
“GREAT? NO WAY! THE LMS MAKES PEOPLE TEACH POORLY!”
No. It might make it easy for lazy people to just upload a syllabus and post a PowerPoint and think they’re teaching online. But that’s no different than physical classrooms being used by lazy people to show endless PowerPoint slides punctuated by more slides. Lazy teachers will teach poorly, no matter what tools they have access to. Just like awesome teachers will teach well, no matter what tools they have access to. The LMS is not the problem.
“But – why waste taxpayer dollars on an LMS at all? Just cancel the contracts and use the money for other stuff!” Um. It doesn’t work that way. We have a responsibility to provide a high quality environment to every single instructor and student, and the LMS is still the best way to do that.
And, although the costs have risen rather dramatically in the last decade, and seem ungodly high in comparison to, well, free… universities spend an order of magnitude more on the software that runs the financial systems – stuff that doesn’t have any direct impact on the learning experience. Hell, there are universities who pay their football coaches more than they spend on the LMS for all students to use (thankfully, my campus doesn’t do that). For universities with $1B operational budgets, this kind of investment in online facilities is almost lost as a rounding error.
Anyway. Whew. I’ll try to write some more on this. 1600 words of rambling is a sign that I need to work on this some more…
Looks like the Connected Courses open course thing is shaping up to be kind of awesome. This is a placeholder post to let it sniff out the feed for the #connectedcourses tag here on the old blogstead. Here’s hoping my copious free time will be put to good use.
Fall 2014 Block Week kicked off today, meaning we just pushed into the 2014-2015 academic year. Holy. The last one is basically just a blur. But, we did a surprisingly epic number of major things as a team1:
- Migrating from Blackboard to D2L in about 8 months, including:
  - building and testing the integration with Peoplesoft & Elluminate
  - designing and conducting workshops to support a couple thousand instructors
  - working to help get the 31,000 FTE student body through the move
  - building online resources to help, at the UofC’s elearn website
- Doing an emergency migration from Elluminate to Adobe Connect, in response to the Javapocalypse of January 2014
- Probably a bajillion other things that got forgotten in the blur. What a year.
To get the campus community through the whole thing, I’d been using a diagram to outline the flow and timeline:
The 2 stars indicate (left) when we got access to our D2L server, and (right) when we had to turn off access to the Blackboard servers. Everything was driven by those dates, and mapped out over the academic year with semesters defining the major stages. The surprising/amazing/relieving thing is that we actually stuck to the schedule. I didn’t have to revise that document once, after using it last summer to outline the process. Wow.
On top of that, the shiny new Technology Integration Group in the Taylor Institute for Teaching and Learning’s Educational Development Unit had a bunch of other stuff to do:
- providing instructor training and support for D2L and Adobe Connect (working closely with the Instructional Design team)
- launching the new Teaching Community website
- rebuilding the “team formation tool”, from an old Java-based codebase to a modern application implemented using the D2L Valence API
- producing a pretty awesome student orientation video
- building a new intranet website to manage data within the EDU
- preparing a new website for the new EDU (to be launched later this month)
- building a mobile app for D2L, using the Campus Life framework
- supporting the campus blogging and wiki platforms
- investigating additional tools within D2L to support learning, such as ePortfolios, badging, repositories, etc…
- exploring other learning technologies, including beacons, and a long list of other things we didn’t have nearly enough time to play with…
So, while 2013-2014 was a year of pretty epic and overwhelming changes, I’m looking forward to the big pieces stabilizing this fall, so we can start pushing at the edges a bit more. We’ve got lots of ideas for things we can do, once the major changes are done for a bit. That roadmap will be sorted out later this month, but it’s going to be a really fun year!
- this was a truly multi-department interdisciplinary team, with folks from the Taylor Institute EDU and Information Technologies working flat out together to get stuff done
John sent a link to our loose group of cycling buddies, and I’ve read the article 3 times now. Each time, it feels like it hits closer to home.
I’ve been riding my bike as the primary way of getting around, and have been commuting by bike almost exclusively since 2006. I’ve always ridden, but never really considered myself a cyclist until then. I was never athletic, never good at sports. But I was happy on a bike. Over the years, I actually got pretty good on a bike. I could make it go fast. I could climb hills. I could ride far. It was awesome.
And then it started feeling less awesome. Most recently, with my bad knee. Late last year, I somehow managed to get a stress fracture at the top of my tibia. I didn’t even know it had happened, and only wound up at the doctor because I thought I was dealing with progressive arthritis or something. Nope.
We couldn’t find any specific incident that might have caused it, but the doctors thought it may have been related to repetitive stress and strain while riding ~5,000km/year. Which meant it was self-inflicted. I’d been pushing myself for the last few years to try to keep up that pace. And, while limping around like a 70-year-old, I realized that I hadn’t been doing myself any favours. One knee is already pretty much shot, the other is likely not far behind it. And pushing to hit 5,000km/year wasn’t helping things. I’m largely recovered now – the knee is still sore, and feels weaker than it should, but it works. Physio has helped, but it’s obvious I need to pay attention to it before it gets worse.
I’ve been tracking personal metrics since 2006 – with detailed GPS logs since 2010, thanks to my use of Cyclemeter. Recently, I’ve added Strava to the mix. I really notice that I push myself more when I know a ride will be posted to Strava – either I need to let go of that, or I need to stop posting rides1.
I’m not really sure why I was pushing myself to keep hitting 5,000km/year. I think it was the feeling of accomplishment, of achieving a goal that not many people do. Some kind of macho “I’m not getting old! look what I can do!” thing. Whatever. I’m letting that go. I’m still going to ride as much as I can, but I’m not going to push it. I’m going to slow down, again. And have fun.
I’m registered in the Banff Gran Fondo this weekend. 155km, from Banff to Lake Louise and back2. I had been stressing out, because I lost 6 months of riding – of TRAINING! – and there was no way I’d be able to keep up a competitive pace. But that’s OK. I’m going to go for a nice ride. Stop at the rest stops. Enjoy the mountains. And I’ll finish when I finish.
- but ride data from Strava is now being used to inform policy and decisions about cycling infrastructure and civic planning, so I think I need to keep posting it for now…
- depending on how well the local bear population cooperates
C.G.P. Grey posted this fantastic video on the inevitability of automation, and what it might mean for society at large.
We think of technological change as the fancy new expensive stuff, but the real change comes from last decade’s stuff getting cheaper and faster. That’s what’s happening to robots now. And because their mechanical minds are capable of decision making they are out-competing humans for jobs in a way no pure mechanical muscle ever could.
You may think even the world’s smartest automation engineer could never make a bot to do your job — and you may be right — but the cutting edge of programming isn’t super-smart programmers writing bots, it’s super-smart programmers writing bots that teach themselves how to do things the programmer could never teach them to do.
via a post by Jason Kottke
For an extra-sobering good time, tie this in with Audrey Watters’ writing on robots in education.
The point of a lecture isn’t to teach. It’s to reify, rehearse, assemble and celebrate.
via Stephen’s Web.
Stephen ended his post linking to Tony’s blog post with what appears to be a throwaway line. It’s not. This is where the tension is centred when it comes to teaching. Lectures aren’t teaching, but have been used as a proxy for teaching because how else are you going to make sure 300 students get the appropriate number of contact hours? Butts-in-seats isn’t a requirement anymore. We can do more interesting things. And we can then use lectures for what they are good at. To reify, rehearse, assemble and celebrate.
It’s one of those things that sound unbelievably geeky – it’s like geocaching (a geeky repurposing of multibillion dollar GPS satellites to play hide and seek) combined with capture the flag, combined with realtime strategy games, bundled up as a mobile game app (kind of geeky as well), with a backstory of a particle collider inadvertently leading to the discovery of a new form of matter and energy (particle physics? a little geeky). It’s the kind of thing where people’s faces glaze over on the first description of portals and XM points, and resonators and links and fields.
One thing that’s been stuck in the back of my head as I worked my way up to Level 5 Nerd of the Resistance in the game, is the lack of an apparent business model. It’s a global-scale game, with thousands? millions? of users checking in from all around the world. There don’t appear to be ads in the game – I’ve never seen any – and there appears to be an unwritten rule that portals should be publicly accessible. That unwritten rule largely negates a business model that would have businesses pay for placement in the game in order to draw customers into their stores etc…
Niantic started the game in 2013, and launched it under the “release it free so we build a user base, then sell the company” business plan. It worked, as Google bought the company and ramped the game up. It’s now available for both Android and iOS platforms, free of charge, with no advertising or premium subscriptions or in-game purchases.
So, what is Google getting out of it? I think their largest draw is likely in crowdsourced geolocation of networks. They have every Ingress user actively (collectively) wandering the globe, reporting every wireless SSID and cell tower they come across, along with GPS coordinates. The game gently pushes players to stay at the location of a portal, confirming the geolocation and refining precision over time. It’s kind of a genius plan – it is constantly updating Google’s network geolocation database, which can then be used to more accurately track and target all users of the internet for advertising etc… They’ve turned a bunch of nerds’ nerds into a crowdsourced network geolocation reporting system. And, at Google’s scale, it costs them a pittance to have this system running.
We may collect device-specific information (such as your hardware model, operating system version, unique device identifiers, and mobile network information including phone number). Google may associate your device identifiers or phone number with your Google Account.
When you use a location-enabled Google service, we may collect and process information about your actual location, like GPS signals sent by a mobile device. We may also use various technologies to determine location, such as sensor data from your device that may, for example, provide information on nearby Wi-Fi access points and cell towers.
Common TOS for all Google services, but especially relevant in a geolocation-based game that is actively pushing users to wander their neighbourhoods to gather this data and send it back to Google.
If they’d released the app as a “report network locations to improve google’s ad targeting” tool, it would have gotten huge pushback, and not many people would have downloaded it. But, by hiding that function and wrapping an insanely addictive game over top of it, it’s gone viral.
brb. I need to go recharge the portal at the playground down the street…
If I ever spew anything like this, kill me.
I’ve been trying to get my head around the reasoning for the corporate rebranding to Brightspace1,2, and I’m coming up short. I like the name, but it feels like everything they’ve described here at Fusion could have been done under the previous banner of Desire2Learn. I’m more concerned about signs that the company is shifting to a more corporate Big Technology Company stance.
When we adopted D2L, they felt like a teaching-and-learning company. What made them interesting to us is that they did feel like a company that really got teaching and learning. They were in the trenches. They used the language. They weren’t a BigTechCo. But, they were on a trajectory aspiring toward BigTechCo.
Fusion 2013 was held at almost the exact same time that we had our D2L environment initially deployed to start configuration for our migration process. We were able to send a few people to the conference last year, and we all came away saying that it definitely felt more like a teaching-and-learning conference than a vendor conference. Which was awesome.
We’ve been working hard with our account manager and technical account team, and have made huge strides in the last year. We’ve developed a really great working relationship with the company, and I think we’re all benefiting from it. The company is full of really great people who care and work hard to make sure everyone succeeds. That’s fantastic. Lots of great people working together.
But it feels like things are shifting. The company now talks about “enablement” – which is good, but that’s corporate-speak, not teaching-and-learning speak. That’s data.
Fusion 2014 definitely feels more like a vendor conference. I don’t know if we’re just more sensitive to it this year, but every attendee I’ve talked to about it has noticed the same thing. This year is different. That’s data.
As part of the rebranding, Desire2Learn/Brightspace just rebooted their community site – which was previously run within an instance of the D2L Learning Environment (which was a great example of “eating your own dog food”), and now it’s a shiny new Igloo-powered intranet site. They also removed access to the product suggestion platform, which was full of carefully crafted suggestions for improving the products, provided by their most hardcore users.
The rebranded community site looks great, but years’ worth of user-provided discussions and suggestions didn’t make the journey to the new home. So the community now feels empty – more like a corporate marketing and communication platform than an actual community. I’m hopeful that there is a plan to bring the content from the actual community forward, but at launch the focus was on the branding of the site rather than on the community. That’s data.
And there are other signs. The relaunched community site is listed under “Community” on the new Brightspace website, broken into “Communities of Practice”:
The problem is, those aren’t “communities of practice” – they are corporate-speak categories for management of customer engagement. Communities of Practice are something else entirely. I don’t even know what an “Enablement” community is. That’s data.
It feels like the company is trying to do everything, simultaneously. They’re building an LMS / Learning Environment / Integrated Learning Platform, a big data Analytics platform, media streaming platform, mobile applications, and growing in all directions at once. It feels like the corporate vision is “DO EVERYTHING” rather than something more focused. I’m hoping that’s just a communication issue, rather than anything deeper. Which is also data.
They’re working hard to be seen as a Real Company. They’re using Real Company language. They’re walking and talking like a Real Company. Data.
The thing is – they’ve been working on the rebranding for awhile now, and launched it at the conference. The attendees here are likely the primary target of the rebranding, and everyone I talk to (attendees and staff) are confused by it. It feels like a marketing push, and a BigTechCo RealCo milestone. It feels like the company is moving through an uncanny valley – it doesn’t feel like the previous teaching-and-learning company, and it’s not quite hitting full stride as a BigTechRealCo yet.
I really hope that Brightspace steps back from the brink and returns to thinking like a teaching-and-learning company.
- this isn’t about the name – personally, I like the new name, and wish they’d used it all along. But the company had built an identity around the previous name for 15 years, and it looks like they decided to throw that all away
- and there’s the unfortunate acronym. 30 seconds after the announcement, our team had already planned to reserve bs.ucalgary.ca
Or, how I spent about 15 hours debugging our MediaWiki installation at wiki.ucalgary.ca, trying to figure out why file uploads were mysteriously failing.
We’ve got a fair number of active users on the wiki, and a course in our Werklund School of Education’s grad program is using it now for a collaborative project. Which would be awesome, except they were reporting errors when uploading files. I logged in, tried to upload a file, and BOOM, got this:
Could not create directory "mwstore://local-backend/local-public/c/cf"
um. what? smells like a permissions issue. SSH into the server, check the directories, and yup, they’re all owned and writable by apache (this is on RHEL6). Weird. Maybe the drive’s full?
df -h. Nope. Uh oh. Maybe PHP or Apache have gone south – better check with another site on the server. Login to ucalgaryblogs.ca and upload a file. Works perfectly. So it’s nothing inherent in the server.
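Those first-pass checks – does the upload directory exist, can the web server user write to it, is the disk full – are worth scripting so you don’t have to poke around by hand every time. A rough sketch in Python (rather than the shell commands I actually ran), with a hypothetical `check_upload_dir` helper:

```python
import os
import shutil
import tempfile

def check_upload_dir(path):
    """First-pass upload diagnostics: does the directory exist,
    can this process write to it, and how much disk space is free?"""
    exists = os.path.isdir(path)
    return {
        "exists": exists,
        "writable": exists and os.access(path, os.W_OK),
        # shutil.disk_usage returns (total, used, free) in bytes
        "free_bytes": shutil.disk_usage(path if exists else "/").free,
    }

# on the wiki server, this would point at MediaWiki's images/ directory
print(check_upload_dir(tempfile.gettempdir()))
```

If all three checks come back clean (as they did here), the problem is somewhere further down the stack.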
Lots of searching, reading about LocalSettings.php configuration options. Nothing seems to work. I enable Mediawiki logging, check the apache access and error logs, and find nothing. It should be working just fine. The uploaded file shows up in the /tmp directory, then disappears (as expected) but is never written into the images directory. Weird.
So, I try a fresh install of MediaWiki elsewhere on the server (in a separate directory, with a new database called ‘mediawikitest’). Works like a charm. Dammit. So it’s really nothing wrong with the server. Or with MediaWiki. Maybe there’s some freaky security restriction on the new server1, so I set up a new VirtualHost to spin up the new MediaWiki install in exactly the same way as wiki.ucalgary.ca (using a full hostname running in its own directory, rather than as a subdirectory of the “main” webserver’s public_html directory). And it works like a charm.
Hrm. Searching for the error message turns up mentions of file permission errors, and file repository configs. I mess around with that, but everything looks fine. Except that uploads fail for some reason.
Maybe there’s something funky about the files in the wiki.ucalgary.ca MediaWiki install – it goes back to May 2005, so there’s over 9 years of cruft building up. There’s a chance. So I copy the whole wiki.ucalgary.ca mediawiki directory and use it to host the test instance (still pointing at the mediawikitest database). Works fine. So it’s nothing in the filesystem. It’s not in the Apache or PHP config. Must be in the database.
So, I switch the test instance to use the production mediawiki database (named ‘wiki.ucalgary.ca’). And uploads fail. Dammit. I assume something is out of sync with the latest database schema, so I eyeball the ‘images’ table in both databases. AHAH! Some of the field definitions are out of date – the production database is using int(5) for a few things, while the new test database uses int(11) – maybe the file upload code is trying to insert a value that’s longer than the table is configured to hold. So I manually adjust the field definitions in our production images table. That’ll solve it. Confidence! But no. Uploads still fail. But the problem’s got to be in the database, so I modify my search tactic, and find a blog post from 2013:
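For what it’s worth, that schema eyeballing can be automated instead of done by squinting at two terminal windows. A small sketch – the column definitions below are illustrative, not the full MediaWiki image table:

```python
def schema_diff(prod, test):
    """Report columns whose type definitions differ between two
    copies of the same table (e.g. production vs. a fresh install)."""
    return {col: (prod[col], test[col])
            for col in prod
            if col in test and prod[col] != test[col]}

# a few MediaWiki 'image' table columns, showing the drift described above
prod_images = {"img_size": "int(5)", "img_width": "int(5)", "img_name": "varbinary(255)"}
test_images = {"img_size": "int(11)", "img_width": "int(11)", "img_name": "varbinary(255)"}
print(schema_diff(prod_images, test_images))
```

(In practice you’d feed it the output of `SHOW COLUMNS` from each database.)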
Problem solved. Turns out the new database backend thing in mediawiki doesn’t like database names with dots in them, and doesn’t tell you. Thank you Florian Holzhauer for finding it!
Dafuqbrah? Really? That can’t possibly be it… The wiki’s been working fine all along – it’s up and running and people are actively using it. If the database wasn’t working, surely we’d have noticed earlier…
renames database from wiki.ucalgary.ca to wiki_ucalgary
Son of a wiki. It works.
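For anyone hitting the same wall: MySQL has no RENAME DATABASE statement (it existed briefly and was removed), so the rename means creating the new database and moving each table across with RENAME TABLE, then updating $wgDBname in LocalSettings.php. A sketch that generates the statements – the table list is abbreviated, and this isn’t exactly what I ran:

```python
def rename_database_statements(old, new, tables):
    """MySQL dropped RENAME DATABASE, so the usual workaround is to
    create the new database and move each table with RENAME TABLE
    (a metadata operation, so it's fast and leaves the data in place)."""
    stmts = ["CREATE DATABASE `{}`;".format(new)]
    stmts += ["RENAME TABLE `{}`.`{}` TO `{}`.`{}`;".format(old, t, new, t)
              for t in tables]
    return stmts

# a few of MediaWiki's core tables; the real list is much longer
for stmt in rename_database_statements("wiki.ucalgary.ca", "wiki_ucalgary",
                                       ["page", "revision", "image"]):
    print(stmt)
```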
So. At least 15 hours of troubleshooting, debugging, trial and error, modifying configurations, installing test instances, and being completely unable to figure it out. And it was a freaking . in the database name that was doing it. With no mention of that in any error message or log file. Awesome. An error message that says “could not create directory” actually means “hey – a portion of my code can’t access databases with . characters in the database name – you may want to fix that.”
- we moved recently from an old decrepit server onto a shiny new VM server hosted in our IT datacentre, which is awesome but I’m rusty on my RHEL stuff, so there’s a chance I’m missing something important in configuring the server…
Having spent the last 2+ years of my life working on the LMS selection, implementation and replacement here at UCalgary, I can relate to this awesome new article on a pretty profound level. My life in educational technology has been almost entirely redefined in relation to the LMS. That’s a horrifying realization.
This part weighs particularly heavily…
The demands of sustaining infrastructure have continued to dominate institutional priorities, and the recent promise of Web 2.0 has been unevenly integrated into campus strategies: instances of broad, culture-shifting experimentation along these lines in higher education can be counted on one hand. IT organizations have started outsourcing enterprise systems in the hope of leveraging hosted solutions and the cloud more broadly to free up time, energy, and resources. The practice of outsourcing itself seems to have become the pinnacle of innovation for information technology in higher education. Meanwhile, IT organizations are often defined by what’s necessary rather than what’s possible, and the cumulative weight of an increasingly complex communications infrastructure weighs ever heavier.
and a faint glimmer of hope:
Starting now. A technology that allows for limitless reproduction of knowledge resources, instantaneous global sharing and cooperation, and all the powerful benefits of digital manipulation, recombination, and computation must be a “bag of gold” for scholarship and for learning. It is well within the power of educators to play a decisive role in the battle for the future of the web. Doing so will require the courage to buck prevailing trends. It will require an at-times inconvenient commitment to the fundamental principles of openness, ownership, and participation. It will require hard work, creativity, and a spirit of fun.
It will require reclaiming innovation. Our choice.
This is where I go out on a bit of a limb, but I think it’s important to share this kind of info to see if it’s on the right track, too ambitious, or not ambitious enough.
Basically, the last year has been one of constant change in learning technologies at the UofC. We changed LMS, from an antique version of Blackboard, to the latest version of Desire2Learn1. We replaced Elluminate with Adobe Connect2. We rolled out Top Hat as the campus student response system. It’s been a lot of things changing, some while the academic year was under way. I’m hoping we have these things stabilized by the end of the Fall 2014 semester, so we can move on to more interesting things.
We have had difficulty in keeping our key learning technologies up to date over the years, in a kind of digital parallel to deferred maintenance on our facilities. Then, when we reach a crisis, we have to react and strike Urgent High Priority Projects to enable massive change to respond to impending technology failures. We need to get past that reactive mode, which keeps our resources tied up in emergency projects, and into a more proactive mode that is forward-looking, so that we’re able to plan ahead rather than panicking about averting imminent disaster.
As a university, we offer a set of common tools that form the core learning technologies platform. This is important, because it provides a common starting point for all 14 faculties and various service units. If they can start with a common set of tools, we can provide some cohesive support and enable people to get up and running. It provides a consistent experience for students, so they don’t have to learn one LMS for a class in Sciences, a different one for a class in Arts, another for one in Kinesiology, and so on. That consistency is often downplayed, but it is incredibly important.
Students are under a pretty extreme level of pressure to succeed. They face rising costs, crushing debt, and more competition than ever before. If we can provide them with a consistent experience, they spend less time learning the tools and more time engaging each other and doing more interesting things.
Also, we have students whose success depends on this consistent experience – anyone who uses a screen reader to support their visual challenges will tell you that inconsistent interfaces essentially destroy the learning experience, as they have to battle with various navigation models and learn to find things in each environment.
The University of Calgary’s common learning technology platform currently consists of:
- Online Tools
  - Adobe Connect
  - Top Hat
- Classroom Tools (classroom podium software stack)
  - SMART Notebook
  - MS Office
  - various media players
  - various web browsers
These are common tools that are centrally funded, and are available for use by every instructor, student, and staff member in our community. They provide various levels of flexibility, which cover the most common use cases (and UCalgaryBlogs.ca even lets you pick various themes and enable plugins to really customize your site as needed). And UCalgaryBlogs.ca and wiki.ucalgary.ca probably wouldn’t have been considered part of the core common learning technology platform before I started my new role and basically started telling everyone that’s what they were.
But, these tools don’t cover all common needs. We still need to add a few tools. The most urgent needs are for video hosting and survey management.
Video hosting is important because we currently don’t have a place where we can say “hey, you need to share a video? just use this…”. Today, individuals have to spin up their own YouTube, Vimeo, Flickr, or other accounts and publish their videos there. That works, but what happens when an instructor leaves the university? The videos they published to their accounts on various services disappear. We have no way to provide support for these services. Students, again, get an inconsistent experience (why does video from my Math course work on my iPad, while my Chem course videos don’t?). At the moment, the only campus platform for hosting videos is a static webserver. So, 500MB video files are uploaded to a server, and students are expected to download each video and install whatever player it requires (is it MP4? Will that work in QuickTime, or do I need Windows Media Player? Can I play a WMV file? etc…). And we still have courses with Real Media files. Yeah.
So, by adding a video hosting service to our common platform, instructors and students can host videos in a place that’s managed and consistent, trusting that the videos will work on whatever device they use and won’t disappear when a prof moves to another institution.
Similarly for survey management. We currently rely on individuals to spin up their own Survey Monkey or similar account, learn how to use it (there are many, many online survey platforms in use), and pay for a pro account to export the full data set. This adds up quickly. If we provided a campus survey platform, individuals wouldn’t have to pull out their credit cards, and we’d be able to manage the data and provide a level of support that isn’t possible otherwise.
OK. So we add video hosting and online surveys to the common learning technology platform. That provides the starting point for all faculties to build from. But it still won’t cover unique needs – we have 14 faculties, each with signature pedagogies, and it’s just not realistic to assume that their unique needs can be fully met by a handful of campus-provided common tools. How do we provide a common set of tools and support innovation and unique requirements?
Departments often manage their own platforms – our Faculty of Medicine is involved in an open source consortium to build and maintain an online platform tailored to their needs. They also get to manage the servers and software required to run that platform. Other faculties use other platforms, and are able to have dedicated servers managed by Information Technologies in our campus datacentre, but each one is essentially a standalone project, requiring separate dedicated resources to maintain and monitor the server and its software.
What if we were able to provide something like the Reclaim Hosting model on campus? That would give individuals access to host whatever software they need, mostly through the one-click installers built into cPanel, while running the whole thing within the campus datacentre so that everything is properly managed and backed up. Again, this is stuff that’s possible now, by having individuals go to GoDaddy, MediaTemple, iWeb, or any of a long list of hosting providers where they can set up whatever they want. But they have to find a good provider. They have to learn the unique way of managing the software on that provider’s platform. They have to remember to pay the annual fee charged by the service provider. And they need to back their stuff up. That’s a lot of points of failure. We need to provide a more streamlined way of supporting this kind of innovation, so they’re able to focus more on the innovation and less on the management of the service.
I think this is where we need to go as a university. We need to provide the best core tools as a common platform. We need to provide consistency while not stifling innovation. And we need to provide support for innovation, exploration, and truly unique use cases.
So, my visionary plan3 for campus learning technologies is to finish stabilizing things by the end of the Fall 2014 semester (which means before Christmas 2014), adding in the missing pieces to make sure we have a really solid core platform. And then, to be able to start working on more interesting things, including planning out what would be required to implement a Reclaim Hosting service on campus.
1. that’s probably going to wind up as a blog post or two, but later…
2. still in progress, but it started back in January
3. which isn’t actually that visionary – it feels more like common sense and catch-up in 2014
Just processed a quick time-lapse test, using the camera1 that we installed to monitor construction of the new digs. This’ll work nicely… Now, to test a few video hosting platforms, to see which one mangles the video the least…
original H.263 .mov file (66.3MB)
also, holy smokes does YouTube compress video files, even at “HD” quality. yikes. Facebook looks janky and won’t seem to embed at full width. Vimeo looks decent, but won’t play HD embedded unless I pony up for a premium account (and made me wait in line for 43 minutes before compression began, for reasons). Flickr and MediaCore seem to be the best so far…
1. a Foscam FI9821W V2, installed hanging upside-down at an undisclosed location on campus
We’re working on something that would benefit from being produced in the form of a well-designed online document, so I’m gathering some samples and links…
- Harvard: Beyond the Horizon
- Interactive Documentaries via @cogdog’s iDocs presentation at Skidmore College
- Tim Owens – Community Web Hosting
- New York Times – Snowfall: The Avalanche at Tunnel Creek
- Medium – The Importance of being Satoshi Nakamoto
- The Guardian – NSA Files Decoded
- Smithsonian – Oral History of the March on Washington
- Esquire Magazine – Seven Strange Days With the Young Syrians Fighting Assad
- The Seattle Times – Coffee in India
- The Chicago Tribune – Curtis Duffy’s Saving Grace
- Women & Tech – Nora Young Interview
- ESPN: The Long, Strange Trip of Dock Ellis
- LiveStrong 2012 Annual Report
- Ford Foundation 2011 Annual Report
- Bootstrap responsive design framework (and a sample skeletal mockup)
- ScrollKit – recently acquired by Automattic, to be included with WordPress
- Aesop Story Engine (WordPress plugin, installed on UCalgaryBlogs)
- Pages – produces PDF or ePUB versions of documents
- iBooks Author – author eBook PDF/ePUB/iBook format
- BookCreator – for iPad and Android
Any other awesome examples or useful tools to make this kind of design activity not cost a fortune or rely on a large team?
I was fortunate to be able to present a session at CNIE 2014, to share some of the campus engagement stuff we did as part of our long LMS replacement project. I tried to stay away from the technology itself, and focus on the engagement process. Full slide deck is available online, and fuller reports describing the engagement and findings are still available online, as well as the GitHub repository of LMS RFP requirements1.
Basically, I described the process, which started as a conventional inventory of shiny things. We then realized that we had the opportunity to have a more meaningful discussion as a campus community, and the conversation shifted to more interesting topics such as how people actually teach and learn, and what they actually care about.
I billed this as a hands-on session, and was rewarded with a coveted 90-minute slot. The first activity was to have participants try building a “fishbone diagram”, based on the research of Jeffrey Nyeboer. It’s a useful way of organizing the description of organizational attributes – things that make up the workflow of an organization – in a way that’s more meaningful than simple word clouds.
(photo by the awesome and talented Irwin DeVries)
It’s a process we used with faculty leadership across our campus, to describe what they mean by “teaching and learning”. We provided them with a simplified template as a null hypothesis, and asked each faculty to correct/complete/adapt/recreate it as needed to describe what they care about. The beauty of this kind of diagram is that it’s pretty inclusive – it’s easy to work on with a group, and when there is disagreement about something that’s on it, or something that has been missed, it is easy to hand people markers to hack away at the diagram until they like it. Used that way, it’s an interesting way to build consensus around what an organization cares about, which is something that often triggers conflict and defensive postures. The cool thing about the engagement model is that it has led to some much deeper discussions about things that are much more interesting than what they need from an LMS – it’s opened the door to ongoing discussions about teaching and learning that would have been difficult, impossible, or unavailable otherwise.
Here’s the simplified fishbone we used as a starting point for each faculty on campus:
Here’s one of the fishbones that was adapted by one of our faculties:2
And the fishbones that some of the session participants came up with, to describe various contexts:
In the session, I also talked about how we identified the various types of people/groups that make up our community, which is surprisingly difficult at a complex organization such as a university.
The session went really well, even though it was an “LMS session” at a time when we’re finally getting some movement away from The LMS As All That There Is™ – but this engagement model would work well (and has worked well) for anything – the LMS change on our campus just provided us with the MacGuffin to get the plot moving.
1. but I would strongly recommend that you don’t use the full set – this was far too much for everyone, and with enough items, things basically cancel each other out. pick a subset of items that you really care about, and have the respondents tailor their responses to that, rather than the whole shooting match. at a high level, they’re essentially all the same thing anyway…
2. we provided these via copies of documents in Google Docs, so people could happily add/edit/remove stuff without worrying about access or tools
The results were immediate and powerful. The employees exhibited significantly lower stress levels. Time off actually rejuvenated them: More than half said they were excited to get to work in the morning, nearly double the number who said so before the policy change. And the proportion of consultants who said they were satisfied with their jobs leaped from 49 percent to 72 percent. Most remarkably, their weekly work hours actually shrank by 11 percent—without any loss in productivity. “What happens when you constrain time?” Lovich asks. “The low-value stuff goes away,” but the crucial work still gets done.
I’d love to set this policy up at the office. I’m as guilty of this as anyone.
Update: and… 5 minutes after sending the link to the article, and we have an informal policy in the Taylor Institute to try out prohibiting work-related emails before 8am and after 5pm, and on weekends. Awesome. It’s a start.
Aggregated stats for D2L usage during the Winter 2014 semester (Jan-Apr 2014). Counts number of visits, not pageviews.
The first week of January was the ramp-up to the official semester start. Reading week is visible as the slump in February. Kind of trails off as finals approach…
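For anyone curious about reproducing this kind of chart: the aggregation itself is trivial once you have the raw visit data. Here’s a minimal sketch in Python, counting visits (not pageviews) per ISO week. The CSV format, the `visit_time` column name, and the timestamps are all assumptions for illustration – the actual D2L export format isn’t shown here.

```python
# Hypothetical sketch: count visits per ISO week from a CSV of visit timestamps.
# The "visit_time" column name and timestamp format are assumptions, not the
# actual D2L export schema.
import csv
import io
from collections import Counter
from datetime import datetime

def weekly_visits(csv_text):
    """Return {(year, iso_week): visit_count} from CSV text with a visit_time column."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.strptime(row["visit_time"], "%Y-%m-%d %H:%M:%S")
        year, week, _ = ts.isocalendar()  # group by ISO year/week
        counts[(year, week)] += 1
    return dict(counts)

sample = """visit_time
2014-01-06 09:15:00
2014-01-06 13:40:00
2014-02-18 10:05:00
"""
print(weekly_visits(sample))  # → {(2014, 2): 2, (2014, 8): 1}
```

From there, plotting weekly counts per semester is a one-liner in any charting tool – the dip for reading week and the trail-off before finals fall out of the data on their own.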
News of a new collaboration between UGuelph and D2L, on a major pedagogy research initiative:
The pedagogy research project strives to help schools track and report on learning outcomes across programs over time. Researchers will use D2L’s predictive analytics capabilities to document and discover the effectiveness of assessment tools on specific subjects while working with educators to develop a curriculum that results in greater student success.
2 quick thoughts1 on this:
- awesome! D2L really does play well with others, and invests in improving teaching and learning rather than just polishing shiny baubles.
- surely there is more to this than just predictive analytics. I’d love to see a pedagogical collaboration that was about in-the-trenches teaching (and learning) online, and not just massaging the data gathered about online activities. D2L has been trying to foster an online community of teachers (and others) in their D2L Community site23. It would be really cool to push that community up a few notches and open the doors so anyone can follow along (or join in).
Desire2Learn really feels like they care about teaching and learning – the Fusion conference last year was different from any other vendor conference I’ve been to, and felt decidedly like a good teaching-and-learning conference rather than a buy-our-shiny-products vendor conference.
1. my own thoughts, not the official position of the university or anything
2. which is actually running in the D2L LMS itself
3. but it requires a login to see the stuff that goes on inside it