What I’m afraid of is the society we already live in. Where people like you and me, if we stay inside the lines, can enjoy lives of comfort and relative ease, but God help anyone who is declared out of bounds. Those people will feel the full might of the high-tech modern state.
Where everything goes on a permanent record of some sort, and the only dissent allowed involves which colour of avatar to select on Twitter.
Pariser, E. (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think.
I (finally) started reading this on the flight to Toronto. Fascinating take on the whole you-are-the-product thing.
Just as the factory farming system that produces and delivers our food shapes what we eat, the dynamics of our media shape what information we consume. Now we’re quickly shifting toward a regimen chock-full of personally relevant information. And while that can be helpful, too much of a good thing can also cause real problems. Left to their own devices, personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.
The manifesto that helped launch the Electronic Frontier Foundation in the early nineties championed a “civilization of Mind in cyberspace” – a kind of worldwide metabrain. But personalized filters sever the synapses in that brain. Without knowing it, we may be giving ourselves a kind of global lobotomy instead.
I’ve been in the edtech game for a long time. I started as a programmer in 1994, then moved into instructional design, and now am working with an amazing group of folks to integrate learning technologies into the practices of instructors and students.
I just came from a workshop that made it clear that many in the edtech field see innovation as something like “working out creative licensing deals with vendors and/or publishers.”
No. It isn’t.
Edtech is important because it can be transformative.
It can literally change the nature of the learning experience. It can shift people from consume mode, into collaborate and publish mode. It can knock down walls. Evaporate silos. Connect people across campus, across campuses, and across the globe.
None of that has anything to do with the lame excuses for “innovation” being described in the field of educational technology. Where brokering a 50% reduction in the cost of textbooks is an acceptable goal. If the textbook publishing model is broken – and it is demonstrably broken by any conceivable metric – the only acceptable goal is to either opt out of that model, or to toss a grenade into it.
This is akin to negotiating with a buggy salesman to get the best deal, when what we really need is a bicycle. Or a pair of shoes. Or a hovercraft. Or a factory.
Edtech is important not because of the tech, but because of the educational activities enabled by it. Which means that the licensing agreements seen as “innovative” are often necessary, but not sufficient.
This stuff is important because it can change the nature of the educational activities. It can make resources and people accessible to those who would not have had access otherwise. It can amplify the voices of people who would not have been heard otherwise. It can make what you do matter outside of your own isolated context.
And higher education is uniquely positioned in such a way as to lead the development and adoption of educational technology. It is in our mandate to create new knowledge, to disseminate this knowledge, and to share what we learn with as many people as possible. That is a truly awesome responsibility, and one that some would outsource to commercial entities.
To allow that to happen, to outsource educational innovation to commercial interests (or, really, to outsource it at all), is to shirk the responsibility that we have as members of institutions of higher education. It is our job to work in the interest of the public – the people that pay our bills – to build ways to share the research that we conduct, to enhance the learning of our students, and to make learning accessible to as many people as possible.
It is not our job to reorganize our institutions around managing and enforcing DRM that is designed to prop up companies who have built entire industries around bilking our students for every penny they can siphon out of them.
Our job is to provide the best possible learning experience to our students. Full stop. That’s it.
Now, if that happens to be best done through a commercial solution, then let’s do that. Let’s sign the best damned contracts we can sign.
But, there will be times. Many times. When that means going against the interests of commercial entities. To share what we have and do so freely and willingly, despite potentially reducing the direct profits of others. And so we shall. Our mandate is not to serve companies that profit from our students (or our taxpaying supporters). Our job is to provide the best damned experience to our students. That’s the guiding principle that should shape every decision, every project, every action. Is this the best thing for our students’ experience? If so, do it. If not? Don’t. It’s that simple.
Edtech is important because it is transformative. Because it has the potential to amplify (or mitigate) innovations across the field of higher education (and beyond). It is our responsibility to take this seriously, and to do what is best for our students above all else.
Thought fodder for this morning. First, this from Jim Groom:
Oh, how far we have fallen! Just two decades later the LMS, not the web, has become where universities do most of their web-related work with students. University websites are little more than glorified admissions brochures. In a depressing twist of fate, higher ed has outsourced the most astounding innovation in communications history that was born on its campuses. Through a process that started in earnest during the late 1990s—roughly at the same time as the dot.com market boom—universities moved to a market-driven corporate IT logic. Digital communications were understood as services, and the open web got lumped with email, intranets, and the LMS as a business application. Somewhere during this time the internet was confused with efficiency and the web was mistaken for an interactive fact sheet.
Follow that up with this from Jack Hylan:
We have moved from a Read/Write Culture to a Read/Consume and bicker culture. It is time for us to retake our creativity and expand upon our most wildest dreams. Stop consuming and start creating. That is the importance of the internet for future generations.
I left a rambling comment on Jim’s post:
Absolutely. I got onto the internet in 1987, the semester I started as a biology undergrad. I had to get a prof to sign a piece of paper saying that I was worthy of being granted access. I fooled him into signing it anyway. And everything – EVERYTHING – on the internet back then was on higher education servers, with a few governmental ones, and a handful of corporate. The internet, from my n00b undergrad perspective, was owned by SUNY, CUNY, Stanford, and UCalgary. (UCalgary, because that was how I got online via AIX terminals, and accessed the command-line tools to get to the others. SUNY and CUNY were the big Gopher servers back in the day, full of awesomeness).
Over the years, it got more crowded, and then the web hit and the shift to corporate holdings began.
I think we can push innovation from higher education again, by not caring about venture capital and the other nonsense that completely derailed the sense of purposeful design and collaboration. It’s largely lip service now. Web 3.0 is all about collaboration! No. It isn’t. It’s about tricking users into creating accounts on your servers so you can sell the company to Yahoo/Facebook/Google and cash out. Real collaboration is building the tools and platforms together, not just posting our animated gifs on the same servers.
The reality check part of my brain is tingling, whispering something about nostalgia and revisionist history, but I’m ignoring that particular set of voices at the moment.
We can do this. Again. Still.
Also, I realize there is an insane amount of privilege that needs to be unpacked from my description of the early days. It was restricted to those who were worthy due to being able to be at a post-secondary institution, etc… The modern corporate internet is more readily accessible by everyone, so it’s definitely better in that sense. But I’m still not comfortable with delegation of real innovation to corporations who are mandated with leveraging us for profit. That’s diametrically opposed to the culture of the early days – and something we need to try to restore on some level.
I’ve been kind of working both sides of the fence for a few years now – pushing for real collaboration and innovation, while also trying to work from within the IT organization to infuse a sense of purpose and connection to why we’re doing this stuff in the first place. I think I’ve had varying levels of success at that, but it’s important to keep pushing.
I picked up a scanner a couple of months ago, and have been slowly scanning in old photos when I get a chance. A few batches in, and I’ve already done 451 photos. I’m viewing the activity as potentially rescuing family history from fading pieces of paper. I have no idea if JPEG files will still be readable in 100 years, but it’s worth a shot to try to preserve photos going back well over 100 years (the oldest photo is from before 1893).
As I scan, I try to add as much metadata as I have – often it’s just a scrawl of a name on the back of the photo print. I’m struck by how metadata-poor many of these photos are. I can’t even guess at what decade some of them were taken in. Some have a name or two provided, some a year, and some have a wealth of hand-written documentation. And the quality of the photos themselves – there are a handful that I’d describe as not bad. The vast majority are absolute crap, technically. Blurry, poorly exposed, and small – several of the older photos are only available as 1-square-inch prints. I’m guessing the cost of printing photos back in the 1920s-1940s made larger prints prohibitive, but even when cranking up the resolution of the scans to 1200dpi, there’s just not a lot of image to work with.
Some of the oldest photos, of my grandparents and my dad in his childhood, kind of blow me away. Especially, knowing that they’d likely be lost forever without being rescued. The only metadata available for many of the photos is inferred – either through association with other photos that have rudimentary data scribed on the back – or via automated tools like face recognition. It’s interesting to see names get pulled out of group photos, and to see photos of individual family members spanning over several decades.
Contrast that with the almost 30,000 photos in my own family archive (with a few hundred more ready to be scanned in a couple of banker’s boxes in the basement). Most of those photos have at least some metadata, and everything in the last few years has buckets of automated data – location, time, etc… – assuming the Aperture library and image files will somehow be readable in a few decades…
One of the things I had on my 1-year plan for The New Job™ was development of an “Open UCalgary” website, akin to the awesome work done by others1. At the last Teaching & Learning Committee meeting, we were sketching out a revised draft of a memo to faculty members, intended to showcase strategies to reduce costs to students. One of the items was about open education resources and the like, so I floated the idea of the website2. And, just like that, boom. Green light for the website. Which meant I had to throw something together pretty darned quickly, to be online in time for the memo to be finalized and sent out.
So. The early version of Open UCalgary is now online.
It’s super basic at the moment, to serve as a starting point to refer folks to resources and projects available both on-campus and elsewhere. I’ll be building the website up over the next few months, and will be working to showcase the great stuff that’s going on at the UofC, as well as pulling in the inspiring and immediately applicable stuff that’s being done elsewhere.
And, this is just the first of many things I’ll be working on from my 1-year plan. Most of them involve blatantly ripping off the awesome stuff being done by folks I respect and admire3. It’s going to be a fun year!
To start out the new year, I’m moving to a new position at the University of Calgary. I am now “Manager, Technology Integration Group” in the new Taylor Institute for Teaching and Learning. That’s a mouthful.
Basically, I get to work with a great team, building tools to enhance teaching and learning, and supporting instructors and students to integrate these tools effectively. In many ways, it’s a formalization of the kinds of things I’ve been doing in various roles on campus, but with some truly amazing people to work with, and resources to dedicate to the task. The mandate is essentially: support the successful integration of appropriate technologies into teaching and learning, and work with instructors and researchers to build and extend tools to enhance the learning environment.
From the job profile:
- Leadership: Within the vision for the Taylor Institute for Teaching and Learning and in consultation with the Director of Educational Development Unit of the Institute, demonstrate leadership in building leading-edge technology integration capacity across the university.
- Technology Integration Practice and Scholarship: Demonstrate leadership and expertise in developing effective educational technology initiatives that support successful learning and teaching experiences at undergraduate and graduate levels, and across academic disciplines.
- Administration /Management: Provide overall management of staff and affiliated faculty/staff/students, program development, and resources.
- IT Strategy for the Taylor Institute for Teaching and Learning: Serve as a liaison between the Taylor Institute and IT leadership, providing support for strategic decision making and comprehensive planning.1
Sounds like fun to me! (of course, the details of the profile will be changing as we get up and running…)
I’ll be able to share more information once it’s sunk in a bit and I find my footing. Although parts of this role will feel familiar, much of it is also new – it’s a much higher profile role than I’m used to, and will have much more official responsibility. I’ve taken on some high-profile, high-responsibility tasks over the years, but this is the first time it’s been baked into my job profile. And the first time I’ve been part of a new $40M facility intended to radically change the culture of teaching and learning on campus, and to serve as a working research sandbox for innovation. That’s pretty exciting stuff.
The Taylor Institute is going to be a fantastic place to work – it’s starting off as a somewhat virtual organization, with the previous Teaching & Learning Centre forming the heart of it, but many new groups will be added. The construction of the new building makes a nice metaphor – currently, the building site is an empty patch of dirt, with the previous crumbling building removed and blueprints ready to be realized. It’s going to take a couple of years for the building to be completed, and we’re hoping to have the organizational side of the Institute up and running well beforehand, ready to move in.
I’m super excited about being able to work with the awesome people in the Technology Integration Group, and the other groups within the new Taylor Institute – and working closely with great folks across campus. This is going to be a heck of a year, with (hopefully) many more to come.
- this part is essentially my previous role on campus – IT Business Partner – which I will continue to do, within the scope of the Taylor Institute
Here’s a gem that shows what happens when you talk about stuff in the open. Mike Caulfield asked a question about structuring a wiki. John Robertson replied, with a nudge to Brian Lamb. Boom. Here’s your MOOC on structuring a wiki in an academic context.
This is how it’s done, people.
This post might be ephemerally tweeted by dozens of avatars I might or might not recognize, accumulate a number in a database that represents the “hits” it had, and if I’m lucky might even get some comments. But when I get caught up in the randomness of what becomes popular or generates commentary and what doesn’t, it invariably leads me to write less. So blog just for two people.
First, write for yourself, both your present self whose thinking will be clarified by distilling an idea through writing and editing, and your future self who will be able to look back on these words and be reminded of the context in which they were written.
Second, write for a single person who you have in mind as the perfect person to read what you write, almost like a letter, even if they never will – or a person who you’re sure will read it because of a connection you have to them.
I’ve been withdrawing from relying on Google wherever possible, for various reasons. One place where I was still stuck in the Googleverse was with the embedded site search I was using on my self-hosted static file photo gallery site. That was one of the few places where I couldn’t find a decent replacement for Google, so it stayed there. And I wasn’t comfortable with that – I don’t think Google needs to be informed every time someone visits a page I host1. I use that embedded search pretty regularly, and cringe every time the page loads.
There had to be a good search utility that could be self-hosted. I went looking, and tried a few. My requirements were pretty basic – I don’t need multiple administrators, or shards of database replication, or multiple crawling schedulers etc… I don’t want to have to install a new application framework or runtime environment just for a search engine. I want it to be a simple install – ideally either a simple CGI script or something that can trivially drop onto a standard LAMP server.
Today, I installed a website indexer on a fresh new subdomain. Currently, the only website it indexes is darcynorman.net/gallery, but I can add any site to it, and then index and search on my own terms, without feeding data into or out of Google (or any other third party).
The search tool is powered by Sphider and seems pretty decent. It’s a simple installation process, and uses a MySQL database to store the index. Seems pretty fast – on my single-site index, with one user (me).
The biggest flaw I’ve found with Sphider so far is in how it handles relative links. Say you have a site where index.html links to page1.html in the same directory, using a simple relative link like <a href="page1.html">Page 1</a>. Sphider skips that link, unless the index.html page includes a <base> element telling Sphider explicitly how to regenerate full URLs for the relative links. Something like this:
<base href="http://darcynorman.net/gallery/" />
Sphider can then use that to turn relative links into fully resolved absolute links.
But this is strange – I had 2 choices:
- hack the Sphider code to teach it how to behave properly (and then re-hack the code if there’s an update)
- update each gallery menu page to add the <base> element
I chose #2, because I just didn’t have the energy to fix Sphider, and the HTML fix was simple enough. It definitely feels like a bug – there’s no way that editing every page to add a <base> element should be required, but whatever.
Bottom line, Sphider works perfectly for my needs. It’s now powering the site search for my photo gallery site, and works quite well for that. And, it’s going to be available to index any of my other projects if needed.
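For what it’s worth, resolving relative links is well-defined even without a <base> element – a crawler can resolve them against the URL of the page it just fetched. A quick sketch of the expected behavior using Python’s standard library (Sphider itself is PHP; this is just to illustrate the URL resolution, not Sphider’s actual code):

```python
from urllib.parse import urljoin

# A crawler always knows the URL of the page it just fetched:
page_url = "http://darcynorman.net/gallery/index.html"

# A relative link like <a href="page1.html"> should resolve against it:
print(urljoin(page_url, "page1.html"))
# http://darcynorman.net/gallery/page1.html

# With a <base href="..."> element present, links resolve against the
# base instead of the page URL:
base_href = "http://darcynorman.net/gallery/"
print(urljoin(base_href, "page1.html"))
# http://darcynorman.net/gallery/page1.html
```

Same resolved URL either way for a simple case like mine – which is why requiring the <base> element feels like a bug rather than a design decision.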
Tony Bates, on acknowledging being the recipient of the 2013 Downes Prize, suggests that someone needs to bestow a similar honour on Stephen. I concur. So, the inaugural Norman Prize is hereby awarded to Stephen Downes.
I’ve been lucky enough to know Stephen for over a decade – first meeting him as part of the Edusource national learning object repository project back in 2001(?!). Even back then, Stephen had ideas that were years ahead of where everyone else was. We all looked at him like he was crazy, but he persisted. And eventually we realized he was right.
One of my most important, and scary, professional experiences was co-keynoting the BCEdOnline conference back in 2006, along with Stephen and Brian Lamb. It was important (to me) because it was a professional risk – an unkeynote presentation, flipped presentation, Phil Donahue style. The audience was the presentation. We took a huge risk – having no presentation, and having an anonymous chat back channel projected onto the main screen. We had no idea if it would work, and we paced in front of maybe 300? 500? (Felt like 10,000) people. At first, it didn’t work. At all. Scariest moment of my professional career. And then it did. And we had an interesting discussion with the entire conference rather than just blabbing at them. Years ahead.
That’s kind of the history of knowing Stephen. First, you think he’s crazy. Then, he persists, clarifies, elaborates, and keeps true to his vision. And, eventually, the rest of us realize that he was right all along, and that he was (and is) years ahead of us. And also right there playing with us.
What makes Stephen’s work so remarkable and important isn’t that he’s been the source of many foundational ideas, but that he has the energy and persistence to keep pushing the boundaries and to work to bring the rest of us with him.
So, thank you Stephen. Keep on OLDailying!
“If you take a look at the progressive changes that have taken place in the country, say, just in the last 50 years – the civil rights movement, the antiwar movement, opposition to aggression, the women’s movement, the environmental movement and so on – they’re not led by any debate in the media,” Chomsky said. “No, they were led by popular organizations, by activists on the ground.”
Sounds consistent with what we see in education and edtech – progressive changes are made in the trenches. The media (and other parasitic corporate organizations) do not lead the changes – they follow (usually with a poor level of fidelity, and with co-option of ideals through monetization, financialization and other greedmongering urges).
It might be interesting if we recognize the power of in-the-trenches progressive changes led by activists (i.e., us), and use that influence to harness the corporate lapdogs rather than limply ranting against them…
Clint mentioned that he’d disabled adblock, and gave his reasoning. Stephen somewhat disagrees. Here’s my take:
I have been running adblockers as browser extensions, CSS overrides, and .htaccess filters for years now. It’s not bulletproof, but it sure takes care of most of the ads. The web is a much less tacky place with these tools in place.
But, in my role as a lowly edtech geek 1, I’ve been bitten by this before. Case in point: we’d gotten reports from instructors who were seeing ads in our Desire2Learn environment. WTF? I’ve never seen any ads. That’s not possible. They must be mistaken, or have a popup from somewhere else. Then, I checked on my phone, without Flash and without any adblockers, and saw this:
Not only were there ads in our D2L environment, they were incredibly stale. I checked with our D2L contacts, and the ads were not inserted by D2L. They were put there by Adobe, through their “hey! you need flash!” download “helper”. We worked with D2L, and they tried to get Adobe to avoid inserting ads on their clients’ D2L learning environments. Not sure if they’ve succeeded yet, though.2
So, my use of adblockers and flashblockers and privacy enforcement utilities was actually changing my experience (for the better) in such a way as to make it inconsistent with what the people I work with and for were seeing. Now, I could just advocate that everyone must install flashblockers and adblockers etc… but that’s just not realistic. We still have people who insist on using Internet Explorer 6 or 7. They’re not going to install a modern browser, and they’re definitely not going to install any of these other utilities that help make the web suck less.
If I’m going to be deploying, managing, configuring, supporting, integrating and using online tools to support teaching and learning, I need to see what the instructors and students will be seeing, warts and all – if for no other reason than to work with service providers to get ads and their ilk out of our educational environments.
Now, for almost everyone else – please install adblockers. And flashblockers. And privacy enforcement tools. According to the latest neuroscientific research, the web is on average 86% less painful to use with these tools in place.
- integrator? consultant? advocate? evangelist? what do they call people like me now?
- and before you get all smug that your open source LMS would never (NEVER) have such an issue – if anyone ever (EVER) embeds Flash in any of their course content, this same ad will be helpfully inserted by Adobe.
An article on The Guardian that initially seems like an “OH NO KIDS THESE DAYS!” reaction to everyone having a decent phone in their pocket 24/7. I was prepared to read it through, groan, and then ignore it.
Then, this gem from Nick Knight, a fashion photographer:
But doesn’t incessant picture-taking, as psychologists argue, make us forget? “That’s old rubbish,” says Knight. “Like that old nonsense about how sitting too close to the TV will infuse you with x-rays. My dad went around a lot of the time shooting with a video camera when I was a kid. Now we have lots of great old home videos as a result. So what if someone stands in front of a Matisse and takes a picture to look at on the bus home? I think that’s great if they want to.”
Exactly. Sure, someone with a smartphone in their hand isn’t going to replace a Photographer. But, so what? People are capturing stuff that means something to them. That’s awesome.
As an aside – I went to Mexico last year for my niece’s wedding. I was asked to photograph the event, so I used a DSLR with fancy lenses to get Photographs. They were OK. But my favourite photo from the entire set was one I shot with my phone. There’s only so much you can do without staging and lighting, and without managing the entire event to optimize for photography. But that sounds like the kind of invasive un-presentness that photography snobs whine about with kids these days, and their infernal smartphone contraptions…
Smartphones aren’t the death of photography, any more than these were:1
It’s less elitist – the barrier to entry has never been lower – but it’s still kind of elitist – photography snobs lament that any schmuck can take photos. Neither is a new phenomenon.
- Case in point – I just shot both of the photos above – with my phone – using a DIY lightbox in my basement. Is that Photography? Maybe. Maybe not. Who cares? My phone’s camera produces images with the same number of pixels as my DSLR. The lens isn’t as good, but it’s not horrible. And the software automatically geotags the file and uploads it to The Cloud™ over the wi-fis, where I dragged it into MarsEdit on my laptop upstairs, as I wrote this post.
Hugh Howey is the author of some really great science fiction novels. Most famously, his “WOOL” collection (the Silo Series), but also his Molly Fyde series is definitely worth picking up.
He has been an indie author/publisher, hitting the scene through online distribution of his books. Here are his thoughts on that:
Where you once had vanity presses that suckered people out of tens of thousands of dollars for crates of books that would never get sold, you now have the ability to make professional-looking books that are in print forever at a fraction of the cost. And people still want to focus on the fact that “most authors lose money.” No shit. Most musicians lose money. Most painters lose money. Most photographers lose money. It’s art. Nobody is really losing anything. We are creating something. We are expressing ourselves. We are doing something positive and lasting with our free time. There’s no losing here, only winning.
If it costs you a few hundred bucks to make an infinite supply of your book, which will be available until humanity goes extinct, and anyone is going to claim that you lost something in this exchange, tell them to go talk to an amateur photographer. Photographers enjoy a good laugh.
I’m seeing lots of parallels to education and edtech here. Cue Gardner’s Bags of Gold speech. We’re living in a time when it’s never been easier to share what we do, at little or no cost, and people get hung up on how they will need to squeeze their creations through a press to extract every last drop of monetization out of it. That’s not the point. Create because you are creative. Share because you give a shit. Or don’t.
I don’t generate a profit from anything I do outside of my Day Job™. At least, not directly. But being creative and sharing makes me better at my Day Job™, so has likely made me “profit” indirectly. How do you calculate that? Easy. You don’t. Well, I don’t.
Then, go make some art, dammit! If your primary concern is to be worried about what that costs, you’re doing it wrong.
- although probably geared more at a YA audience, but whatever. you’re not the boss of me.
We’re in the middle of our Fall 2013 “Pilot” semester – almost 5,000 students are using D2L1 this semester, with extremely positive feedback from students and instructors. We’re now in the process of setting up for the Winter 2014 semester – when 4 faculties will be moving to use Desire2Learn for 100% of their online and blended courses (with many courses from other faculties thrown in for good measure). Likely 10-12,000 students will be using it next semester. That’s a lot of students. And a lot of courses. We still don’t have automated course creation integrated with PeopleSoft, and are working feverishly on that (the thought of managing course enrolments for 12,000 students using CSV uploads makes me break into a cold sweat).
The basic process for setting up courses for a semester looks something like this:
Create a data feed that triggers course creation. Course Code, Course Title, Department, other key metadata about the courses. This can either be done through the connection with PeopleSoft (which isn’t working for us yet), or via the Bulk Course Create (BCC) tool. Feed BCC the CSV of course info (SFTP it to the D2L server), wait until the scheduled processing job crunches it, and boom. Courses are created. But they’re empty. And nobody can see them.
Enrol users in the courses. This can either be done via scheduled data feed from PeopleSoft (again, not yet), or via another CSV file that associates a user with a course and applies a role. This is done using a second tool, built into the Users admin interface. This Bulk User Management (BUM) tool2 takes the CSV, and crunches it on demand. No scheduled processing job to wait for. The CSV can also be cumulative, so you don’t have to scoop out previous entries. Separate files are needed to handle CREATE, ENROLL, UNENROLL etc… because they all have different columns, in different orders.
The courses are empty. They need to be populated with any default content that a faculty uses. We have set up “template”3 that needs to be copied into each course in a faculty. This uses another separate utility, Copy Course Bulk (CCB), with another CSV format. This utility is different, because it lives inside a special course in our D2L instance. You go to the course, open the “Manage Files” interface, and upload the CSV file (named input.csv) into an “Inbox” folder. Every night, at about 12:30am, a process crunches that file (if it exists), copies the content as specified in it, logs the result, and moves the file to the “Outbox” folder. But, this only copies course content, grade items, assignments, grading schemes, etc…
To have the courses in each faculty use the proper homepage as designed by the key people in each faculty, yet another utility is needed – with yet another CSV format. We haven’t seen the Automated Course Branding Tool (ACBT) yet[4], but I assume it lives as a special course offering within our D2L instance, as the CCB tool does. This tool will set the homepage (the layout of the course – which widgets are visible, where they are, etc.) as well as the NavBar (the navigation menu for the course).
There. That’s all it takes. Of the 4 steps, 2 will eventually be automatable through our connection with PeopleSoft, when that comes online. The other 2 will require semi-manual intervention to create the list of courses for each faculty, tying course codes to “template” courses[5].
Of these tools, the Copy Course Bulk (CCB) and Automated Course Branding Tool (ACBT) require additional licenses, and need to be separately deployed in your D2L environment. This takes time. We weren’t even aware these tools were separate, or that they needed to be licensed and deployed, until we went to use the functionality (assuming it would exist in the core product). Plan ahead. These tools do the job, but take some time to set up (and push through campus purchasing processes…).
[1] I should start an acronym-based drinking game, except my liver wouldn’t survive it.
[2] BUM. giggle. No, just upload it to the BUM. D2L takes it in the BUM. oy. Productive project meetings discussing this tool…
[3] Not the D2L concept of “Template”, which is strictly an administrative thing – all Math 251 courses are set up with the D2L Template of “Math 251″, making it easier to find all instances of a certain course. The “Faculty Template” I’m talking about here is just a Course Offering that is used by key people in each faculty to set up how they want all of their courses to look – they add stuff to the Content area, items to News, preload any content, etc., that will then be copied into each course in their faculty.
[4] Stuck in the fun of University Purchasing, etc.
[5] That aren’t actually D2L templates.
I still can’t figure out why this isn’t baked into the Evernote application as a Service available system-wide, but there’s a way to add a Service to send messages from Mail into Evernote as notes for archive. There was a previous applescript solution, but I hadn’t used it (and it apparently borked on the 10.9 upgrade anyway).
I’d been using the Evernote email address feature to just forward messages I need to archive for automatic importing into Evernote, but it’s a pain. I have to remove the “Fwd: ” prefix on the note title. I need to decrease the indented quote level of messages, etc. It works, but it’s funky.
Some quick searching turned up this post on the Evernote website. It talks about using an Automator app as a GTD workflow. Awesome. Except the app they provide includes a step to move archived messages into an “Archive” mailbox in Mail. I don’t want to do that, so I modified the app ever so slightly, to remove that step.
Now, I have it set up as a Service, and have followed the instructions on the Evernote post to add a keyboard shortcut. Command-option-E sends selected message(s) to Evernote. Done. Awesome.
Here’s my modified Automator app – download the .zip, extract it, and double-click on the app inside. It’ll ask you if you want to install it. If you haven’t already changed the Gatekeeper setting to allow apps from anywhere, you’ll get a warning when you try to run it. Easy fix: open System Preferences, click “Security & Privacy”, then click the lock at the bottom left. Then select “Allow apps downloaded from: Anywhere”. Done.
The one wrinkle I’ve seen so far is that the current version doesn’t pull attachments over. That sucks. Attachments are one of the reasons I archive stuff. Looking into solutions for that now…
Just checked my RSS reader – I subscribe to 101 feeds (up from 79) tagged as “edublogs”. Not all are active – some are still in there in the hopes that the owner of the site comes back to play. It’s also not comprehensive – there are lots of feeds I don’t subscribe to. But these are my go-to reads, with a decent signal:noise ratio and little breathless hype. Likely not everyone’s cup of feed, but I find them useful.
Remember when RSS was dead? And blogging? Me neither.
Update: I ran the OPML through this handy dandy OPML link checker utility. It found 3 bad feeds, so I’ll need to update them ASAP…
Update 2: just added a bunch of new links thanks to a nudge from Alec
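For the curious, the first step of what a link checker does – pulling the feed URLs out of an OPML export – can be sketched in a few lines of Python. (A real checker would then fetch each URL and flag the dead ones; the OPML snippet here is invented for illustration.)

```python
import xml.etree.ElementTree as ET

# Toy OPML export -- the structure matches what most RSS readers emit:
# nested <outline> elements, with feeds carrying an xmlUrl attribute.
opml = """<opml version="1.0">
  <body>
    <outline text="edublogs">
      <outline text="A Blog" type="rss" xmlUrl="http://example.com/feed"/>
      <outline text="B Blog" type="rss" xmlUrl="http://example.org/rss"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Folder outlines have no xmlUrl, so filtering on the attribute keeps
# only the actual feeds.
feed_urls = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(feed_urls)
```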
Reading the post/transcript of Audrey Watters’ presentation from the OpenVA pre-conference, and something struck me.
Compare the predictions of two experts in their fields, extrapolating their personal visions forward a few decades:
“I think there is a world market for maybe 5 computers.” — Thomas Watson, 1943
“In 50 years, there will be only 10 institutions in the world delivering higher education.” — Sebastian Thrun, 2012
I’m carrying 2 computers with me right now, and each would have been considered a high-end workstation-class device only a few years ago. I use several more, as does everyone else. Watson wasn’t wrong – his vision clearly led to giant computers run by governments and giant corporations. Time-sharing systems meant monstrous computers would be tasked with jobs from many client organizations. In 1943, he couldn’t possibly have seen microprocessors and coprocessors and GPU offloading and the miniaturization of devices. Or the internet.
I think Thrun believes he is correct. But his forecast is interesting because it reveals his utter disregard for anything beyond content dissemination, and the entrenchment of control in the hands of the few deemed worthy of such a task.
I think Thrun is on the wrong track, though.
In my mind, Thrun’s model positions The Giant MOOCs as toxic, and diametrically opposed to the real and essential goals of education.
Sure, there will be consolidation. There will be new institutions, different institutions, and institutions that wither and/or die. I think that the proliferation of connections and of the ability for individuals to publish content, connect with others, and to access information means that there will essentially be “universities” for every one of us. 7 billion universities controlled by individuals, rather than just 10 uber-universities controlled by governments and giant corporations. Open education, competency-based badging, and portfolio-based accreditation would naturally lead in that direction.
Does that mean that traditional universities will go away? Absolutely not. I think they will remain essential, perhaps even more so, but that their roles will shift. To what? I have no idea. But we will need to adapt and respond, and be able to enable and support learning and research that grows well outside the traditional boundaries of post-secondary institutions.
I checked the Activity Monitor page[1] for UCalgaryBlogs this morning, and noticed that there had been several thousand attempts by people (or “people”) to log in using the usernames “admin” (the default WordPress admin account, which isn’t what’s used on UCalgaryBlogs) and “siteadmin” (which is the username for our server – scripts must have sniffed it from blog posts on the main site…)
Curious. I’d installed the fantastic Limit Login Attempts plugin to prevent people from brute-forcing logins, but that plugin only kicks in if the same IP address hits the login form repeatedly. This botnet attack was different – each request had a different IP address, and a different user-agent string. So Limit Login Attempts wasn’t blocking them, and my htaccess user-agent filter wasn’t catching them because they were either valid user-agents, or close enough to get through.
Looks like they were using a dictionary attack, starting at aardvark and working through zyzzyva. Thankfully, I don’t use actual words for passwords, but I decided to change the password to something stronger than I’d been using. Thanks to 1Password for making that trivial. I don’t actually know the password now. And it has nothing to do with any word found in a dictionary (except that it might use some of the same ASCII characters).
Some quick googling for “wordpress distributed botnet protect” turned up a new (to me) plugin called Botnet Attack Blocker. Sounds interesting. It was written in response to some recent distributed botnet attacks, and handles logins spread across different IP addresses.
Installed. Activated on the main blog site. And the attack stopped instantly. I can still login from the campus network even if the plugin kicks in and blocks admin logins. But the botnets can’t continue to brute-force passwords.
So, now it’s been over 3 hours since activating the plugin. And the attack has (for now) been blocked.
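The idea behind that kind of blocker is simple: count failed logins globally, across all IP addresses, instead of per-IP. A toy sketch of that logic (this is not the plugin’s actual code – the threshold, window, and whitelist prefix are all hypothetical):

```python
import time

# Sketch of a distributed-attack blocker: track failed logins across ALL
# IPs, and lock the login form once a global threshold is hit within a
# time window -- except for whitelisted networks (e.g. campus).
FAIL_LIMIT = 10           # hypothetical threshold
WINDOW_SECONDS = 3600     # hypothetical window
WHITELIST = {"136.159."}  # hypothetical campus network prefix

failures = []  # timestamps of recent failed attempts, from ANY IP

def record_failure(now=None):
    failures.append(now if now is not None else time.time())

def login_blocked(ip, now=None):
    now = now if now is not None else time.time()
    # Whitelisted networks can always log in, even mid-attack.
    if any(ip.startswith(prefix) for prefix in WHITELIST):
        return False
    recent = [t for t in failures if now - t < WINDOW_SECONDS]
    return len(recent) >= FAIL_LIMIT

# A distributed attack: every attempt comes from a different IP, but each
# one still increments the same global counter.
for i in range(10):
    record_failure(now=1000.0 + i)
```

This is exactly why a per-IP limiter like Limit Login Attempts misses a distributed attack: no single address ever crosses its threshold, while the global count climbs fast.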
- Do not (ever) use actual words in passwords. Ever. Generate something secure, and use a tool to store/retrieve them.
- Keep up to date on the security environment for the tools – including WordPress. I hadn’t been aware of a distributed botnet attack problem recently, nor of a plugin developed specifically to block that.
- Install Limit Login Attempts to stop single-IP-address attacks.
- Install Botnet Attack Blocker to stop distributed botnet attacks.
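On the first point: a tool like 1Password does this for you, but for the record, generating a strong non-dictionary password takes only a few lines of the Python standard library (length and character set here are just reasonable defaults, not a recommendation from any particular standard):

```python
import secrets
import string

# Generate a random password from letters, digits, and punctuation using
# the cryptographically secure `secrets` module (never `random` for this).
def generate_password(length=24):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```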
[1] Using an old version of the WPMUDev Blog Activity plugin.