I did some googling1 on Michael Betzler, who was the director on the previous skateboarding documentary. Looks like he now is/was director at the olympic media consortium. Before that, he was involved in this bit of awesomeness.
I would have been the same age as my son is now, when this footage was shot. Wow. My dad had his insurance agency in the Lougheed Building downtown, so I would have been down there pretty regularly. Amazing, how much the city has changed in just a handful of decades…
- DuckDuckGoing? that’s not a thing yet, is it?
This documentary is awesome.
So many 1986.
I picked up a Swivl robot camera mount to kick off our “tech lending library” here in the EDU. It’s a pretty interesting piece of kit that will let anyone record a session without having to spend $100K retrofitting a classroom with PTZ cameras and switching boards. Slap this thing onto a desk or tripod, drop your iPhone (or iPad, or Android device) into the slot, plug the microphone cable into the mic jack on your device, and hit record. Done. It now automatically tracks the lanyard, which also has a built-in microphone that sends decent audio to the recording device. Nice.
If you want to sign the thing out to experiment with it, let me know.
I’ve been noticing this for a while under iOS 7, but had been hoping it was a storage bug that would be fixed in iOS 8. Nope.
I “cheaped out” by only springing for the 16GB iPhone 5, which means I effectively get 12GB of space for stuff like apps, music, photos, etc… Shouldn’t be a problem, but I’ve been hitting the cap pretty regularly now. I’ve resorted to deleting big apps, and deleting all of the music I’d put on the phone (thankfully the train ride is very short now), but still the danged phone reports no free storage.
Looking at the General > Usage report, it looked like I was chewing up a couple of gigs just for photos. After grabbing them from my iCloud Photostream, they’re all in Aperture anyway, so I figured I’d just turn off Photostream temporarily to flush stuff out. After deleting all local photos manually, and then deleting the “recently deleted” items, I still get:
(3GB of the free space came from nuking the music I’d downloaded onto the phone)
Tethering the phone by USB and firing up Image Capture, it reports the iPhone completely empty of photos/video.
834 MB used for something, somewhere. But I can’t seem to find it, and I can’t seem to free it up. That’s about 7% of the storage space eaten up by mystery ghost media files. Hopefully there’s a way to force-nuke these mystery files so I have enough room for music on my phone (as well as room for software updates without having to delete stuff first).
Looks like when I eventually upgrade my phone, I’ll have to spring for the extra cash to have more than 16GB (well, 12GB) available. The Cloud™ was supposed to make onboard storage less critical, no? Anyway. I’ll keep trying to figure this out, and will post an update if I ever manage to clear up the phantom space…
UPDATE: @poploser recommended PhoneClean. I ran it, and it freed up over a gig of space (awesome!) but didn’t seem to do anything about the phantom media files. Progress, though…
Designing Libraries for the 21st Century
I attended the 3rd annual Designing Libraries for the 21st Century conference on campus. Library-design-folks from around North America (and Australia and the UK) came together to talk about what future libraries need to be. It was my first library conference, and I was struck by 3 things:
- What an amazing, open, inviting group of people. It didn’t matter who you were, or where you were from, people actively welcomed everyone in conversation.
- Librarians are really thinking critically about what a “library” means, and coming at it from how to best support the activities of the people. Books? Necessary but not sufficient. They’re doing some amazing design work on how to deconstruct and redesign library spaces.
- They sure do like to sit and listen to people talk. The presentations were good, but many could have been ably replaced by MP3 files.
I have 10 pages of notes from this, and it’s triggered and reinforced some plans I’m working on for our group in the EDU. Faculty Makerspaces? Hell yeah. Collaboration with the TFDL (and other library) folks? You bet. Technology lending fleet? Yup (already have some cool things to loan out for experimentation by profs). Field trips and site visits? Yeah! And more to come, once plans are worked out a bit more.
Moving D2L from “project” to “sustainment”
We had been running the D2L transition as a full-on Project for the last 15 months. And now we’re moving it into ongoing sustainment mode as a regular production service. We’ll be seeing a different composition of the D2L teams as we figure out the best way to run/support/extend it now that everyone is in the pool together. Lots of planning meetings to figure out that transition, made more fun by a re-org in IT.
John Dawson in the house!
He’s visiting for a few days, and the team got to pick his brain yesterday. We had a really great conversation that covered just about every topic from how to design a multi-year biology program, to how to do quick-and-dirty DIY classroom lecture capture, to how to set up a course in D2L to let students have as much access to their own data as possible. And lots of other stuff. We tried recording the session on the new Swivl camera mount, which worked GREAT!
John wound up his visit by giving a presentation with Natasha on “Using Curriculum Mapping as a Vehicle for Faculty Engagement in Teaching & Learning“. Great discussion of what is involved with the process, with an emphasis that it’s not about the data as much as asking “what are we trying to do? and what are students learning?” etc… Looking forward to seeing these conversations grow on campus.
Planning for Peer Review
We started the early discussions/planning for what might be involved in building/integrating a peer review process into D2L (or offering it as a standalone tool/platform/service). Still too early to even have a timeline, but this is going to be an interesting project. We’ve been looking at options (including native D2L functionality, which is absent, and other tools which don’t appear to be shared or open source), but it looks like we may need to build our own tool. Which will, of course, be made available on our GitHub account when we have something ready to share.
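We haven’t settled on any design yet, but whatever shape the tool takes, the core of a peer review platform is the reviewer-assignment step. A minimal sketch of one common approach (function name and parameters are mine, not anything we’ve built) – shuffle the class, then use a circular shift so nobody reviews their own work and everyone gives and receives the same number of reviews:

```python
import random

def assign_reviewers(students, reviews_per_student=3):
    """Assign each student a set of peers to review, using a circular shift
    over a shuffled roster. Guarantees no self-review, and that every
    submission receives exactly `reviews_per_student` reviews."""
    if reviews_per_student >= len(students):
        raise ValueError("need more students than reviews per student")
    order = students[:]
    random.shuffle(order)  # randomize so assignments differ each run
    n = len(order)
    # student at position i reviews the next k students (wrapping around)
    return {
        order[i]: [order[(i + shift) % n]
                   for shift in range(1, reviews_per_student + 1)]
        for i in range(n)
    }
```

Real tools layer a lot on top of this (rubrics, anonymization, late-add handling), but the shift trick is a simple way to get a balanced, self-review-free assignment.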
This week, we saw a new Technical Account Manager, and a new Account Manager. We seem to burn people out pretty quickly. Not sure if we’re just extra-demanding, or if there’s something else going on…
12! Dang. Almost a freaking teenager. So fast.
I didn’t really (fully) articulate my position(s) in my recent LMS post, either. I kind of ran out of steam at 1600 words. Maybe for the better. (I’m still not fully articulating things yet – more to come later, if I can come up with the energy – but I wanted to respond quickly to Brian)
I am really not a fan of the LMS as an end-state, but it’s a symptom of institutional models, not the illness itself. Unless/until the nature of post-secondary institutions changes pretty radically, the LMS (or something like it) is here to stay. Yeah. I feel it too.
Which leaves me thinking about how to proceed. I’m powerless to change the nature of The Institution™, so I have 2 choices – either give up entirely and write off online learning for anything larger than incubator/startup/pilot scale, or embrace the fact that the LMS (or something smelling awfully similar) will be around for a while. How to turn that around so that we can still do interesting things?
If the LMS is set up to be the institutional plumbing – access control, grades, basic functionality for most users – what if we let it do that, so that the pressure of “scale” is taken off the innovators at the edges? Let the majority of people do their thing in the LMS, and slowly change as that beast evolves, while we work on the awesomeness in the fringes. Or something.
- I’m trying something – if I say people can respond by posting on their own sites and tracking back to here, I need to be doing the same thing. Reclaim all of the comments. or somesuch.
I’m taking a page from Clint Lalonde’s book – he’s been writing “week in review” posts for a while, and it’s been really interesting to see what he’s up to. And of course there’s the weekly recaps by Audrey Watters! I don’t think I’ll be able to recall that level of detail, but having some kind of record of at least the bigger things each week will be helpful to me. So…
New website launched
The new website for the new Educational Development Unit of the new Taylor Institute for Teaching and Learning launched on Monday. It was a highly collaborative design project, built within the Unit. I’m really happy with the website, because we managed to steer completely away from “hey! let’s reproduce an org chart as HTML and call it a website!”. I think it’s a much more usable/useful website, and early feedback from the folks that actually need to find people and stuff has been extremely positive. Awesome.
Teaching and Learning Committee
I was asked to give an update to the General Faculties Council Teaching and Learning Committee (basically, the Associate Deans from all faculties, coming together with Vice Provosts to make decisions and actually work on teaching-and-learning stuff – it’s a really amazing group!). I had to give an accelerated version of the State of D2L and Connect (our 2 key institutional eLearning tools, both of which are new at the institution this semester). I’d forgotten – blocked out, maybe – that for about half of the campus, D2L is new this semester. We’ve been working on the project since last summer, and have been adding faculties and courses since then, but Fall 2014 was the first full-scale semester with 100% of online courses being run in D2L.
I described the semester launch, and that we are now basically done with the transition. It’s time to reduce the number of intensive F2F workshops, and start the shift to a community support model. The EDU will take the lead in coordinating and building capacity, but it’s time to move past the initial orientation-level sessions. Also, it’s finally time for us to move past “OMG LMS”, and get into the more interesting/impactful/meaningful things we can be doing to improve teaching and learning, and the tools we use to support that.
Still getting support emails for this – mostly from people who don’t receive the account notification email, so they have no idea how to log in. I had a new email@example.com address set up so that it would hopefully bypass spam filters that have apparently latched onto the account built into the server. Now, to make sure all sites are able to send mail via SMTP using that account…
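For anyone wrestling with the same thing: the fix is to route notification mail through an authenticated SMTP account rather than the web server’s local sendmail. A quick sketch of how I’d sanity-check such an account, in Python (the addresses, host, and credentials here are placeholders, not our actual setup):

```python
import smtplib
from email.message import EmailMessage

def build_test_message(from_addr, to_addr):
    """Build a minimal test notification message."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = "SMTP test"
    msg.set_content("If you can read this, authenticated SMTP is working.")
    return msg

def send_via_smtp(msg, host, port, username, password):
    """Deliver the message through an authenticated SMTP account – mail sent
    this way is far less likely to be spam-filtered than mail sent directly
    from the web server's built-in account."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()  # encrypt the connection before sending credentials
        server.login(username, password)
        server.send_message(msg)
```

If the test message lands in an inbox instead of a spam folder, the notification emails should start getting through too.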
Technology Integration Group planning session
We’re still a new group, and we need to find our voice. Part of that involves developing a shared vision and mission for the group within the context of the EDU, and within the broader University. We had a really amazing brainstorming session on Monday, and came up with a really great description of the purpose for our group:
To enable innovation and creative integration of learning technologies to continuously enhance the learning experience.
Yeah. Still sounds a little marketroid buzzwordy, but we can work with it. It’s good.
Outcomes and Competencies
Had another great meeting with some folks who need to run a faculty-wide outcomes/competency analysis as part of their accreditation process. We’re looking at the tools within D2L (which appear to come up rather short from what they need, but we’re hopeful that this will eventually help), as well as dedicated tools to manage the data and analysis. Holy. That’s going to be a big project, but it has the potential to really change (and improve) the entire learning experience in the faculty.
Met with the IT Partner for our Faculty of Nursing in Qatar, to talk about how we can better work together to support instructors and students over there. We have lots of great ideas (unfortunately, none of them involved “send D’Arcy to Qatar for a few days”).
Starting to plan our shift to community support model
As mentioned above, it’s time to shift away from intensive face-to-face support – it was absolutely necessary to help people through the transition to D2L and Connect, but it’s just not sustainable. We have essentially 2 FTE dedicated to instructor support, and we need to plan how to change what we do to help people Out There, without them having to come to the mothership for support. Lots of ideas there – we’ll be rebuilding the elearn.ucalgary.ca and open.ucalgary.ca websites, and building many more resources to help out. We’ll also be working with the IT Support Centre to help build capacity there, so instructors and students are able to get many of their questions rapidly handled at that level without needing to be escalated over to the EDU for high-level support.
This is all based on the Strategic Framework for Learning Technologies, which was approved by the Board of Governors over the summer. The Framework lays out a plan to provide staff in each faculty to act as “coaches” or “educational technology specialists” to work within the faculty context to help instructors in a distributed/network/community model. We’ll be working closely with them (once they’re hired), to build capacity in each faculty and then share stuff across the whole campus. It’s going to be a really powerful way to bridge centralized support and services with the various unique needs in each of our 14 faculties (and we have a precedent in the IT Partner model, which has worked really well for the pure IT side of things).
President’s Community Report 2014
Holy cow. What an update! It’s easy to gloss over the whole “Eyes High” vision as just a marketing ploy, but dang have we ever picked up our game as a university! I’ve been (mostly) on campus since 1987 (starting as an undergrad, then an undergrad in another field, then as a consultant, staff member, and now a manager). I’ve never seen this level of activity on campus. We’re aiming high, and are actually pulling it off. Wow.
The biggest takeaway for me is that the University is counting on us making the Taylor Institute the “go-to place in North America” for research and innovation in teaching and learning. That’s a big goal, especially considering the building currently looks like this and we’re just getting started. Pretty amazing, to know we have that level of support at the university!
I’m involved in a proposal for a research project involving theatre, library collections, teaching and learning, and some really high goals. Can’t share details yet, because it’s just starting to get fleshed out, but this collaboration has been absolutely amazing. My brain hurts after our last planning session.
every conversation with faculty members about copyright goes something like this…
long, rambling post alert. it’s been awhile since I’ve posted, so lots of things have been stewing. bear with me.
It’s fashionable to hate the LMS. It’s the poster child for Enterprise Thinking and lazy (online) pedagogy, so it is easy to rail against the LMS as The Cause of All Educational Evil. The LMS is put into the stocks, and we are expected to stand in the town square and throw rotten fruit at it.
We’re pushed into a false binary position – either you’re on the side of the evil LMS, working to destroy all that is beautiful and good, or you’re on the side of openness, love, and awesomeness. Choose. There is no possible way to teach (or learn) effectively in an LMS! It is EVIL and must be rooted out before it sinks its rotting tendrils into the unsuspecting students who are completely and utterly defenseless against its unnatural power!
I feel like I’m cast in the role of an LMS apologist, because I have a more nuanced approach.
I have been an advocate, proponent, supporter, and contributor to open source communities, open content licensing, and generally sharing stuff because why not? I have also played a key role in the recent adoption of a new LMS by my university. But. How on earth can I reconcile these two diametrically opposed world views? Gasp.
It’s almost as if different tools are used for different purposes.
When I think about the LMS, and its role in the enterprise, this is what makes many peoples’ hair stand on end. THE ENTERPRISE HAS NO BUSINESS IN THE CLASSROOM! etc. Except that’s largely bullshit. Of course classrooms are an Enterprise issue – whether physical (buildings and facilities are expensive to build and maintain, and need to be managed properly etc…) or online.
But, the argument goes, online means there are no rules, no boundaries, no constraints. People should be free to do whatever they want.
That’s great – I think it is truly awesome that people can craft their own online environments, to support whatever online activities they want to do. And that instructors, staff, and even students (gasp!) can do this stuff on their own, with no interference or meddling from The Enterprise.
But. We can’t just abdicate the responsibility of the institution to provide the facilities that are needed to support the activities of the instructors and students. That doesn’t mean just “hey – there’s the internet. go to it.” It means providing ways for students to register in courses. For their enrolment to be automatically processed to provision access to resources (physical classrooms, online environments, libraries, etc…). For students’ grades and records to be automatically pushed back into the Registrar’s database so they can get credit for completing the course. For integration with library systems, to grant access to online reserve reading materials and other resources needed as part of the course.
Anyone who pushes back on this hasn’t had to deal with 31,000 students, and a few thousand instructors. This stuff needs to be automated at this scale. Actually – “scale” is another divisive issue. Why worry about scale? SCALE? WILL IT SCALE? As if scale is irrelevant. If a university needs to deal with tens of thousands of students, I assure you that scale is absolutely relevant. Anyone who thinks we shouldn’t spend time worrying about providing a common and consistent platform as a starting point needs to spend a week helping out at a campus helpdesk, answering questions from instructors and students.
OK. So the LMS is primarily used by institutions to make sure that there is a common starting platform for online courses. That courses are automatically created before a semester. That students, instructors, TAs, etc… are given access with appropriate privileges. That archives and backups are maintained. That records of activities and grades are kept. This is the boring stuff that is supposed to be invisible. But, it’s necessary if we are to responsibly teach online.
If instructors and/or students want or need to, they can of course do anything else they feel like doing online. Providing an LMS doesn’t mean “YOU SHALL NOT USE ANY OTHER TOOL” – there is no mandate to say “ONLY THE LMS SHALL BE USED”. It’s a starting point. And for some (many? most?) courses, it’s sufficient.
GASP! THE LMS IS SUFFICIENT? HOW CAN HE SAY THAT? BURN THE HERETIC!
Calm down. Take a step back, and think about some of the courses at a university. How about, say “Introduction to Chemistry” – yup. An LMS is entirely sufficient for that kind of course. Provide course info, share documents, maybe do some formative or summative assessment, and store some grades. LMS? check.
How about, say, “Calculus III”. Same pattern. LMS? check.
“Introduction to Shakespeare”? Students might want to blog about passages in Othello. Or link to performances of Macbeth. Maybe post photos of a campus production of King Lear. Great! Throw in a blog. Use the LMS for the basics, and do other things where needed. The LMS course becomes a source of links to other resources, and takes care of the boring administrative stuff.
But – why wouldn’t the instructor for the Shakespeare course want to be completely free of the shackles imposed by the LMS? THE SHACKLES! They might. Or, they might want to have a private starting point, before moving out into The Wide Open.
Even if the instructor decides to completely ignore the course shell that’s automatically created in the LMS, and go out on their own – say, using a WordPress mother blog site – they still need to take care of the boring administrative stuff. They’ll need to come up with a system for adding students to the mother blog site (and removing students when they drop the course). They’ll need to come up with a way to store grades (unless they’ve been able to convince administration and students that grades aren’t necessary – I haven’t met anyone who’s had luck there). They’ll need to keep adding features to their custom website, until it starts accumulating lots of bits to handle the boring administrative nonsense.
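To make the “boring administrative stuff” concrete: even the simplest DIY course site needs a roster-sync step – comparing the registrar’s current enrolment against the site’s user list, then adding and removing accounts to match. A minimal sketch (hypothetical function, Python) of the diff an LMS quietly runs on every enrolment change:

```python
def diff_roster(registrar_roster, site_users):
    """Compare the registrar's current enrolment list against the course
    site's user list, returning who must be added and who must be removed.
    An LMS does this automatically; a DIY site has to do it by hand or
    grow its own plumbing to handle it."""
    enrolled = set(registrar_roster)
    on_site = set(site_users)
    to_add = sorted(enrolled - on_site)     # newly registered students
    to_remove = sorted(on_site - enrolled)  # students who dropped
    return to_add, to_remove
```

Run that every day during add/drop week, times a few hundred course sites, and the appeal of having it automated institution-wide becomes obvious.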
Eventually, you come up against Norman’s Law of eLearning Tool Convergence:
Any eLearning tool, no matter how openly designed, will eventually become indistinguishable from a Learning Management System once a threshold of supported use-cases has been reached.
The custom platform starts to need care and feeding, maintenance, hacks to import and export data. It starts to smell like an LMS. So now, instead of a single LMS that can be supported by a university, we have an untold number of custom hacks that must all be self-supporting.
And here is where the pushback from the Open camp is strongest – BUT WE DON’T NEED OR WANT SUPPORT. JUST LET US DO OUR THING!
Which is great. Do your thing. But, what about the instructors (and students) who don’t have the time/energy/experience/resources to build and manage their own custom eLearning platform? Do we just tell them “hey – I did it, and it wasn’t that hard. I can’t see any reason why you can’t do it too.”? That starts to smell awfully familiar.
Which brings me back to my personal position on this. There is room for both. Who knew? The LMS is great at providing the common platform, even if it’s just a starting point. And the rest of the internet is awesome at doing those things that internets do. There’s lots of room for both.
“GREAT? NO WAY! THE LMS MAKES PEOPLE TEACH POORLY!”
No. It might make it easy for lazy people to just upload a syllabus and post a Powerpoint and think they’re teaching online. But that’s no different than physical classrooms being used by lazy people to show endless Powerpoint slides punctuated by more slides. Lazy teachers will teach poorly, no matter what tools they have access to. Just like awesome teachers will teach well, no matter what tools they have access to. The LMS is not the problem.
“But – why waste taxpayer dollars on an LMS at all? Just cancel the contracts and use the money for other stuff!” Um. It doesn’t work that way. We have a responsibility to provide a high quality environment to every single instructor and student, and the LMS is still the best way to do that.
And, although the costs have risen rather dramatically in the last decade, and seem ungodly high in comparison to, well, free… universities spend an order of magnitude more on the software that runs the financial systems – stuff that doesn’t have any direct impact on the learning experience. Hell, there are universities who pay their football coaches more than what they spend on the LMS for all students to use (thankfully, my campus doesn’t do that). For universities with $1B operational budgets, this kind of investment in online facilities almost gets lost in the budget as a rounding error.
Anyway. Whew. I’ll try to write some more on this. 1600 words of rambling is a sign that I need to work on this some more…
Looks like the Connected Courses open course thing is shaping up to be kind of awesome. This is a placeholder post to let it sniff out the feed for the #connectedcourses tag here on the old blogstead. Here’s hoping my copious free time will be put to good use.
Fall 2014 Block Week kicked off today, meaning we just pushed into the 2014-2015 academic year. Holy. The last one is basically just a blur. But, we did a surprisingly epic number of major things as a team1:
- Migrating from Blackboard to D2L in about 8 months, including:
- building and testing the integration with Peoplesoft & Elluminate
- designing and conducting workshops to support a couple thousand instructors
- working to help get the 31,000 FTE student body through the move
- building online resources to help, at the UofC’s elearn website
- Doing an emergency migration from Elluminate to Adobe Connect, in response to the Javapocalypse of January 2014
- Probably a bajillion other things that got forgotten in the blur. what a year.
To get the campus community through the whole thing, I’d been using a diagram to outline the flow and timeline:
The 2 stars indicate (left) when we got access to our D2L server, and (right) when we had to turn off access to the Blackboard servers. Everything was driven by those dates, and mapped out over the academic year with semesters defining the major stages. The surprising/amazing/relieving thing is that we actually stuck to the schedule. I didn’t have to revise that document once, after using it last summer to outline the process. Wow.
On top of that, the shiny new Technology Integration Group in the Taylor Institute for Teaching and Learning’s Educational Development Unit had a bunch of other stuff to do:
- providing instructor training and support for D2L and Adobe Connect (working closely with the Instructional Design team)
- launching the new Teaching Community website
- rebuilding the “team formation tool”, from an old java-based codebase to a modern application implemented using the D2L Valence API
- producing a pretty awesome student orientation video
- building a new intranet website to manage data within the EDU
- preparing a new website for the new EDU (to be launched later this month)
- building a mobile app for D2L, using the Campus Life framework
- supporting the campus blogging and wiki platforms
- investigating additional tools within D2L to support learning, such as ePortfolios, badging, repositories, etc…
- exploring other learning technologies, including beacons, and a long list of other things we didn’t have nearly enough time to play with…
So, while 2013-2014 was a year of pretty epic and overwhelming changes, I’m looking forward to the big pieces stabilizing this fall, so we can start pushing at the edges a bit more. We’ve got lots of ideas for things we can do, once the major changes are done for a bit. That roadmap will be sorted out later this month, but it’s going to be a really fun year!
- this was a truly multi-department interdisciplinary team, with folks from the Taylor Institute EDU and Information Technologies working flat out together to get stuff done
John sent a link to our loose group of cycling buddies, and I’ve read the article 3 times now. Each time, it feels like it hits closer to home.
I’ve been riding my bike as the primary way of getting around, and have been commuting by bike almost exclusively since 2006. I’ve always ridden, but never really considered myself a cyclist until then. I was never athletic, never good at sports. But I was happy on a bike. Over the years, I actually got pretty good on a bike. I could make it go fast. I could climb hills. I could ride far. It was awesome.
And then it started feeling less awesome. Most recently, with my bad knee. Late last year, I somehow managed to get a stress fracture at the top of my tibia. I didn’t even know it had happened, and only wound up at the doctor because I thought I was dealing with progressive arthritis or something. Nope.
We couldn’t find any specific incident that might have caused it, but the doctors thought it may have been related to repetitive stress and strain while riding ~5,000km/year. Which meant it was self-inflicted. I’d been pushing myself for the last few years to try to keep up that pace. And, while limping around like a 70-year-old, I realized that I hadn’t been doing myself any favours. One knee is already pretty much shot, the other is likely not far behind it. And pushing to hit 5,000km/year wasn’t helping things. I’m largely recovered now – the knee is still sore, and feels weaker than it should, but it works. Physio has helped, but it’s obvious I need to pay attention to it before it gets worse.
I’ve been tracking personal metrics since 2006 – with detailed GPS logs since 2010, thanks to my use of Cyclemeter. Recently, I’ve added Strava to the mix. I really notice that I push myself more when I know a ride will be posted to Strava – either I need to let go of that, or I need to stop posting rides1.
I’m not really sure why I was pushing myself to keep hitting 5,000km/year. I think it was the feeling of accomplishment, of achieving a goal that not many people do. Some kind of macho “I’m not getting old! look what I can do!” thing. Whatever. I’m letting that go. I’m still going to ride as much as I can, but I’m not going to push it. I’m going to slow down, again. And have fun.
I’m registered in the Banff Gran Fondo this weekend. 155km, from Banff to Lake Louise and back2. I had been stressing out, because I lost 6 months of riding – of TRAINING! – and there was no way I’d be able to keep up a competitive pace. But that’s OK. I’m going to go for a nice ride. Stop at the rest stops. Enjoy the mountains. And I’ll finish when I finish.
- but ride data from Strava is now being used to inform policy and decisions about cycling infrastructure and civic planning, so I think I need to keep posting it for now…
- depending on how well the local bear population cooperates
C.G.P. Grey posted this fantastic video on the inevitability of automation, and what it might mean for society at large.
We think of technological change as the fancy new expensive stuff, but the real change comes from last decade’s stuff getting cheaper and faster. That’s what’s happening to robots now. And because their mechanical minds are capable of decision making they are out-competing humans for jobs in a way no pure mechanical muscle ever could.
You may think even the world’s smartest automation engineer could never make a bot to do your job — and you may be right — but the cutting edge of programming isn’t super-smart programmers writing bots, it’s super-smart programmers writing bots that teach themselves how to do things the programmer could never teach them to do.
via a post by Jason Kottke
For an extra-sobering good time, tie this in with Audrey Watters’ writing on robots in education.
The point of a lecture isn’t to teach. It’s to reify, rehearse, assemble and celebrate.
via Stephen’s Web.
Stephen ended his post linking to Tony’s blog post with what appears to be a throwaway line. It’s not. This is where the tension is centred when it comes to teaching. Lectures aren’t teaching, but have been used as a proxy for teaching because how else are you going to make sure 300 students get the appropriate number of contact hours? Butts-in-seats isn’t a requirement anymore. We can do more interesting things. And we can then use lectures for what they are good at. To reify, rehearse, assemble and celebrate.
It’s one of those things that sound unbelievably geeky – it’s like geocaching (a geeky repurposing of multibillion dollar GPS satellites to play hide and seek) combined with capture the flag, combined with realtime strategy games, bundled up as a mobile game app (kind of geeky as well), with a backstory of a particle collider inadvertently leading to the discovery of a new form of matter and energy (particle physics? a little geeky). It’s the kind of thing where people’s faces glaze over on the first description of portals and XM points, and resonators and links and fields.
One thing that’s been stuck in the back of my head as I worked my way up to Level 5 Nerd of the Resistance in the game, is the lack of an apparent business model. It’s a global-scale game, with thousands? millions? of users checking in from all around the world. There don’t appear to be ads in the game – I’ve never seen any – and there appears to be an unwritten rule that portals should be publicly accessible. That unwritten rule largely negates a business model that would have businesses pay for placement in the game in order to draw customers into their stores etc…
Niantic Labs – an internal startup at Google – launched the game in beta in 2012 under the “release it free so we build a user base” plan. It worked, and Google ramped the game up. It’s now available for both Android and iOS platforms, free of charge, with no advertising or premium subscriptions or in-game purchases.
So, what is Google getting out of it? I think their largest draw is likely in crowdsourced geolocation of networks. They have every Ingress user actively (collectively) wandering the globe, reporting every wireless SSID and cell tower they come across, along with GPS coordinates. The game gently pushes players to stay at the location of a portal, confirming the geolocation and refining precision over time. It’s kind of a genius plan – it is constantly updating Google’s network geolocation database, which can then be used to more accurately track and target all users of the internet for advertising etc. They’ve turned a bunch of nerds’ nerds into a crowdsourced network geolocation reporting system. And, at Google’s scale, it costs them a pittance to have this system running.
We may collect device-specific information (such as your hardware model, operating system version, unique device identifiers, and mobile network information including phone number). Google may associate your device identifiers or phone number with your Google Account.
When you use a location-enabled Google service, we may collect and process information about your actual location, like GPS signals sent by a mobile device. We may also use various technologies to determine location, such as sensor data from your device that may, for example, provide information on nearby Wi-Fi access points and cell towers.
Common TOS for all Google services, but especially relevant in a geolocation-based game that is actively pushing users to wander their neighbourhoods to gather this data and send it back to Google.
If they’d released the app as a “report network locations to improve google’s ad targeting” tool, it would have gotten huge pushback, and not many people would have downloaded it. But, by hiding that function and wrapping an insanely addictive game over top of it, it’s gone viral.
brb. I need to go recharge the portal at the playground down the street…
If I ever spew anything like this, kill me.
I’ve been trying to get my head around the reasoning for the corporate rebranding to Brightspace[1][2], and I’m coming up short. I like the name, but it feels like everything they’ve described here at Fusion could have been done under the previous banner of Desire2Learn. I’m more concerned about signs that the company is shifting to a more corporate Big Technology Company stance.
When we adopted D2L, they felt like a teaching-and-learning company. What made them interesting to us is that they did feel like a company that really got teaching and learning. They were in the trenches. They used the language. They weren’t a BigTechCo. But, they were on a trajectory aspiring toward BigTechCo.
Fusion 2013 was held at almost the exact same time that we had our D2L environment initially deployed to start configuration for our migration process. We were able to send a few people to the conference last year, and we all came away saying that it definitely felt more like a teaching-and-learning conference than a vendor conference. Which was awesome.
We’ve been working hard with our account manager and technical account team, and have made huge strides in the last year. We’ve developed a really great working relationship with the company, and I think we’re all benefiting from it. The company is full of really great people who care and work hard to make sure everyone succeeds. That’s fantastic.
But it feels like things are shifting. The company now talks about “enablement” – which is good, but that’s corporate-speak, not teaching-and-learning speak. That’s data.
Fusion 2014 definitely feels more like a vendor conference. I don’t know if we’re just more sensitive to it this year, but every attendee I’ve talked to about it has noticed the same thing. This year is different. That’s data.
As part of the rebranding, Desire2Learn/Brightspace just rebooted their community site – previously run within an instance of the D2L Learning Environment (a great example of “eating your own dog food”), and now a shiny new Igloo-powered intranet site. They also removed access to the product suggestion platform, which was full of carefully crafted suggestions for improving the products, provided by their most hardcore users.
The rebranded community site looks great, but the years’ worth of user-provided discussions and suggestions didn’t make the journey to the new home. So the community now feels like a corporate marketing and communication platform rather than an actual community, because it’s empty. I’m hopeful that there is a plan to bring the content from the actual community forward, but it wasn’t there at launch; the launch was about the branding of the site rather than the community. That’s data.
And there are other signs. The relaunched community site is listed under “Community” on the new Brightspace website, broken into “Communities of Practice”:
The problem is, those aren’t “communities of practice” – they are corporate-speak categories for management of customer engagement. Communities of Practice are something else entirely. I don’t even know what an “Enablement” community is. That’s data.
It feels like the company is trying to do everything, simultaneously. They’re building an LMS / Learning Environment / Integrated Learning Platform, a big-data analytics platform, a media streaming platform, mobile applications, and growing in all directions at once. It feels like the corporate vision is “DO EVERYTHING” rather than something more focused. I’m hoping that’s just a communication issue, rather than anything deeper. Which is also data.
They’re working hard to be seen as a Real Company. They’re using Real Company language. They’re walking and talking like a Real Company. Data.
The thing is – they’ve been working on the rebranding for a while now, and launched it at the conference. The attendees here are likely the primary target of the rebranding, and everyone I talk to (attendees and staff) is confused by it. It feels like a marketing push, and a BigTechCo RealCo milestone. It feels like the company is moving through an uncanny valley – it doesn’t feel like the previous teaching-and-learning company, and it’s not quite hitting full stride as a BigTechRealCo yet.
I really hope that Brightspace steps back from the brink and returns to thinking like a teaching-and-learning company.
[1] this isn’t about the name – personally, I like the new name, and wish they’d used it all along. But the company had built an identity around the previous name for 15 years, and it looks like they decided to throw that all away.
[2] and there’s the unfortunate acronym. 30 seconds after the announcement, our team had already planned to reserve bs.ucalgary.ca
Or, how I spent about 15 hours debugging our MediaWiki installation at wiki.ucalgary.ca, trying to figure out why file uploads were mysteriously failing.
We’ve got a fair number of active users on the wiki, and a course in our Werklund School of Education’s grad program is using it now for a collaborative project. Which would be awesome, except they were reporting errors when uploading files. I logged in, tried to upload a file, and BOOM, got this:
Could not create directory "mwstore://local-backend/local-public/c/cf"
um. what? smells like a permissions issue. SSH into the server, check the directories, and yup, they’re all owned and writable by apache (this is on RHEL6). Weird. Maybe the drive’s full?
df -h. Nope. Uh oh. Maybe PHP or Apache have gone south – better check with another site on the server. Log in to ucalgaryblogs.ca and upload a file. Works perfectly. So it’s nothing inherent in the server.
Lots of searching, reading about LocalSettings.php configuration options. Nothing seems to work. I enable Mediawiki logging, check the apache access and error logs, and find nothing. It should be working just fine. The uploaded file shows up in the /tmp directory, then disappears (as expected) but is never written into the images directory. Weird.
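For anyone retracing these steps: MediaWiki’s debug logging is switched on with a couple of lines in LocalSettings.php. This is just a sketch – the log paths below are placeholders I picked for illustration, not the ones from our server:

```php
# Hypothetical LocalSettings.php additions for chasing an upload failure.
# Pick log paths somewhere the web server user (apache, on RHEL) can write.
$wgDebugLogFile = "/tmp/mw-debug.log";   // firehose of general debug output
$wgShowExceptionDetails = true;          // full stack traces instead of terse error pages
```

Tailing the debug log while attempting an upload is usually the quickest way to see what the file backend is actually trying to do.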
So, I try a fresh install of MediaWiki elsewhere on the server (in a separate directory, with a new database called ‘mediawikitest’). Works like a charm. Dammit. So it’s really nothing wrong with the server. Or with MediaWiki. Maybe there’s some freaky security restriction on the new server[1], so I set up a new VirtualHost to spin up the new MediaWiki install in exactly the same way as wiki.ucalgary.ca (using a full hostname running in its own directory, rather than as a subdirectory of the “main” webserver’s public_html directory). And it works like a charm.
Hrm. Searching for the error message turns up mentions of file permission errors, and file repository configs. I mess around with that, but everything looks fine. Except that uploads fail for some reason.
Maybe there’s something funky about the files in the wiki.ucalgary.ca MediaWiki install – it goes back to May 2005, so there’s over 9 years of cruft built up. There’s a chance. So I copy the whole wiki.ucalgary.ca MediaWiki directory and use it to host the test instance (still pointing at the mediawikitest database). Works fine. So it’s nothing in the filesystem. It’s not in the Apache or PHP config. Must be in the database.
So, I switch the test instance to use the production MediaWiki database (named ‘wiki.ucalgary.ca’). And uploads fail. Dammit. I assume something is out of sync with the latest database schema, so I eyeball the ‘images’ table in both databases. AHAH! Some of the field definitions are out of date – the production database is using int(5) for a few things, while the new test database uses int(11) – maybe the file upload code is trying to insert a value that’s longer than the table is configured to hold. So I manually adjust the field definitions in our production images table. That’ll solve it. Confidence! But no. Uploads still fail. But the problem’s got to be in the database, so I modify my search tactics, and find a blog post from 2013:
Problem solved. Turns out the new database backend thing in mediawiki doesn’t like database names with dots in them, and doesn’t tell you. Thank you Florian Holzhauer for finding it!
Dafuqbrah? Really? That can’t possibly be it… The wiki’s been working fine all along – it’s up and running and people are actively using it. If the database wasn’t working, surely we’d have noticed earlier…
renames database from wiki.ucalgary.ca to wiki_ucalgary
Son of a wiki. It works.
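For anyone hitting the same wall: MySQL has no RENAME DATABASE statement, so the usual dance is to dump the dotted database and restore it into a dot-free one (or move the tables over one at a time), after which MediaWiki only needs one line changed. A sketch, using the names from this post:

```php
# LocalSettings.php – point MediaWiki at the renamed, dot-free database.
# (The rename itself happens outside PHP, e.g. by dumping "wiki.ucalgary.ca"
# with mysqldump and restoring it into a freshly created "wiki_ucalgary".)
$wgDBname = "wiki_ucalgary";   // was "wiki.ucalgary.ca" – the dots broke uploads
```

The rest of LocalSettings.php stays untouched; once $wgDBname has no dots in it, the file backend stops silently failing.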
So. At least 15 hours of troubleshooting, debugging, trial and error, modifying configurations, installing test instances, and being completely unable to figure it out. And it was a freaking . in the database name that was doing it. With no mention of that in any error message or log file. Awesome. An error message that says “could not create directory” actually means “hey - a portion of my code can't access databases with . characters in the database name - you may want to fix that.”
[1] we moved recently from an old decrepit server onto a shiny new VM server hosted in our IT datacentre, which is awesome, but I’m rusty on my RHEL stuff, so there’s a chance I’m missing something important in configuring the server…