


Date: Thursday, 07 Aug 2014 13:00

You may have heard the rumblings already—Infragistics is working on a new HTML5 page designer to make developing modern Web apps for line of business much easier. In this post, I’m glad to announce and introduce you to more of the details of that designer.

Check out this full walkthrough video here and/or read on for a quick overview of the highlights. When you're done, just click the BECOME A BETA TESTER button at the bottom of this post to get started!

Video: http://www.youtube.com/embed/8HD6IOfDJ14

1. WYSIWYG for HTML5


Ignite UI Page Designer

Devs who are used to native coding environments may be used to having a What-You-See-Is-What-You-Get (WYSIWYG) interface to help them lay out and configure screens. While there are some options for plain HTML, even some with basic Bootstrap support, there are no developer-oriented WYSIWYG designers that are targeted at more sophisticated modern Web interfaces like what you can build with Infragistics’ advanced Ignite UI controls.

With the Page Designer, we are introducing just that—a drag-n-drop way to lay out and configure advanced modern Web components.

Toolbox Chock Full of Advanced Ignite UI Components


Ignite UI Page Designer Toolbox

A WYSIWYG surface is nice on its own, but it’s not that awesome unless you also have awesome components to use with it. In our Page Designer, the toolbox supplies the many advanced Ignite UI controls so that you can drag and drop them onto the surface and easily configure them just how you want them.

Easy-to-Use Component Editor

Ignite UI Page Designer Component Editor

Drill down into complex properties, get drop downs that enumerate fixed values, easily toggle booleans, even click to add event handlers with the right method signatures! Our component editor makes it much easier to configure powerful components like the Ignite UI chart, grid, and more.
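To give a sense of what the Component Editor saves you from, here is a rough sketch of hand-configuring an Ignite UI grid with an event handler. The column names and data here are invented for illustration; the designer generates and maintains this kind of code for you.

```html
<!-- Illustrative only: the Component Editor produces configuration
     along these lines, with real columns from your data. -->
<table id="grid"></table>
<script>
  $(function () {
    $("#grid").igGrid({
      autoGenerateColumns: false,
      columns: [
        { headerText: "Product", key: "name",  dataType: "string" },
        { headerText: "Price",   key: "price", dataType: "number" }
      ],
      dataSource: [
        { name: "Widget", price: 9.99 },
        { name: "Gadget", price: 19.99 }
      ],
      // Event options take the (evt, ui) signature the
      // designer stubs out for you when you click to add a handler.
      dataBound: function (evt, ui) {
        console.log("grid data bound");
      }
    });
  });
</script>
```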

2. Responsive Web Design (RWD)

One of the challenges with modern Web dev is being able to support a much broader range of devices than most developers were required to in the past. RWD is by far the most popular technique for grappling with that complexity, but it can be difficult to deal with, especially for devs who are new to it.

In addition to the Ignite UI Grid’s built-in RWD support, the Page Designer adds a few more helpful RWD features.

Bootstrap Grid Framework Support

Ignite UI Page Designer Bootstrap Grid

You can drag and drop a Bootstrap row and easily pick from several column layout options. Then just drop your components into the columns to quickly lay out your page in a responsive grid.
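Under the covers this is standard Bootstrap grid markup, something along these lines (a minimal sketch using Bootstrap 3 class names):

```html
<div class="container">
  <div class="row">
    <!-- Two columns that sit side by side on small screens and up,
         and stack vertically below the "sm" breakpoint -->
    <div class="col-sm-8"><!-- e.g. drop a chart component here --></div>
    <div class="col-sm-4"><!-- e.g. drop a grid component here --></div>
  </div>
</div>
```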

RWD CSS Breakpoint Visualizer and Editor

Ignite UI Page Designer RWD

Using our RWD GUI, you can add, edit, visualize, and easily preview what your designs look like for your different CSS breakpoints.

Add/Edit Class for Current Breakpoint

Ignite UI Page Designer RWD Classes

Select the breakpoint you want to preview/edit. Then select a component and double-click its CSS class—the Page Designer will add or take you to the class for that breakpoint in your page.
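In plain CSS terms, a per-breakpoint class boils down to a media-query override like this sketch; the class name and breakpoint value here are invented for illustration:

```css
/* Default (wide-screen) styling */
.sidebar { width: 25%; float: left; }

/* Override for a narrow breakpoint; the designer adds or locates
   the matching rule for whichever breakpoint you have selected. */
@media (max-width: 767px) {
  .sidebar { width: 100%; float: none; }
}
```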

3. Code Editor with Clean Code

No dev tool would be complete without a code editor, and the Page Designer is no exception to that. We’ve leveraged the world-class ACE code editor and augmented it with specialized code completion capabilities as well as integrated API help. We have made clean code a number one priority because we know how important that is and how most WYSIWYG designers fail on this point.

4. Easier Data Access

Connecting components to data normally requires a good bit of hand-written code, but the Ignite UI Data Source helps make that a lot easier. On top of that, the Page Designer helps you configure your data sources and then easily select one for any data-bound Ignite UI component.

Ignite UI Page Designer Data Sources
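For reference, the hand-written wiring looks roughly like this sketch, using the Ignite UI Data Source with a hypothetical endpoint URL:

```html
<table id="grid"></table>
<script>
  $(function () {
    // Hypothetical endpoint; swap in your own service URL.
    var ds = new $.ig.DataSource({
      type: "remoteUrl",
      dataSource: "/api/products"
    });
    // Data-bound components can then simply point at the data source.
    $("#grid").igGrid({ dataSource: ds });
  });
</script>
```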

5. Integrated API Help

One of the biggest challenges with developing against highly capable components and frameworks is that you regularly have to reference the API docs to ensure you are getting things right.
In the Page Designer, our Component Editor integrates Ignite UI’s extensive API help by (1) providing a link to the relevant API docs and by (2) pulling out and showing API information as you hover over individual component events and properties/options. Plus, this same help integration is shown in the code editor when editing component values.

Ignite UI Page Designer Integrated Help

Help Us Test!

As you can see, the Ignite UI Page Designer brings a whole lot to the game, but this is just the beginning. This is the first time we are sharing it with the public, and we are looking for devs who are willing to do some beta testing, report issues, and provide feedback on how we can improve it and make it as useful as it can be for you.

If you’re willing to do that, please step on over, sign up, and start plugging away!

Become a Beta Tester

Author: "ambrogio" Tags: "Ignite UI, Page Designer, jQuery"
Date: Wednesday, 25 Jun 2014 21:31

Hey, do you know about the Infragistics Web Design Council? If not, read on!

What is the Infragistics Web Design Council?
It’s a select group of Infragistics customers who want to have early access to what we are working on to provide early feedback and help shape the Web products to better suit your needs.

Why join?
In this program, you will have access to exclusive information, early-preview software, and avenues of feedback. Under a mutual Non-Disclosure Agreement, we can more freely share information that is not released publicly yet and thereby better integrate you into our design and development process, which means you know what is coming and, more importantly, you get to help shape the future of Infragistics Web tools to maximize your productivity and effectiveness.

How does it work?
The primary mode of involvement is through a private mailing list. Once you are a member, you will be able to send to and receive from that list. When we are ready to share new software builds, we’ll email the list to let you know about it and how to get it, as well as the timeframe for feedback.

If you are interested, you can grab it, try it out, and let us know how you think it could be tweaked to help you and your company more.

We may also just ping you with general questions, maybe even share some prototypes, design docs, etc. Our goal is to get you involved more and earlier so that what we make is as close as it can be to what you need.

How much time does it take?
That’s pretty much up to you. It’s not going to be a high volume list most of the time. Traffic will come in spurts around when we send out questions/things for review. It’s totally up to you if you respond and participate at any given time.

Okay! How do I apply?
Send an email to igniteui [at] infragistics [dot] com with your customer information and asking to join our private Web Design Council. We will then pass along our mutual NDA form that we’ll need you to sign for your company and either scan and email or fax back to us. After that, we will add you to the email list, and you’re in!

(If you think you already are under an NDA with us, let us know who your account rep is so that we can verify.)

That’s it. We hope you will join—we want our customers to feel empowered to shape the future of Infragistics Web tools.

Oh, and by the way, another great way to contribute to the future of our products is by suggesting and voting on product ideas—we use that to help us prioritize what we do for you! Drop by any time—we want to hear from you!

Author: "ambrogio" Tags: "ASP.NET"
Date: Tuesday, 28 Jan 2014 20:39

If you haven’t heard of Yeoman, I heartily recommend you check it out. It’s a nifty little CLI-based scaffolding tool for modern Web apps.


Yeoman

One of the nifty things about it is that their generator (scaffolding) architecture is extensible, so anybody can add to it, and many have, such as the popular Angular generator. And now we have a simple one ready for you to use with Ignite UI.

Because Ignite UI is not an application framework (like Angular, Ember, Backbone, etc.), it didn’t seem to make a lot of sense for us to presume to set up a whole project for you. Instead, you can use whatever your preferred project generator is (such as the basic “webapp” one) to generate your app, and then you can use the Ignite UI generator to augment it. (Or you can use it on its own inside pretty much any project.)

So What Do You Get?

Well, for this initial release, I thought I’d keep it super simple but try to be somewhat helpful, so what you get is a single HTML page that has the Ignite UI boilerplate built in for you, plus a bonus of two nifty samples that illustrate using the Ignite UI Data Chart and the Ignite UI Grid based on some simple JSON data.

The code is well-commented, so you can see which parts you can and should safely gut/modify/replace with your own. As I said, I wanted to keep it simple for this initial release, and let the generator options organically grow from there.

So check it out. The instructions for using it are on the generator-igniteui GitHub repo. Once you have it installed, you can just type yo igniteui wherever you want to add a new page that uses Ignite UI.
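Assuming you already have Node.js and npm set up, the whole flow is just a couple of commands (see the repo for the authoritative instructions):

```shell
# One-time setup: install Yeoman and the Ignite UI generator
npm install -g yo generator-igniteui

# Then, in any folder where you want a new Ignite UI page:
yo igniteui
```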

If you have suggestions for what you’d like to see added as generator options/sub-generators (or have other issues), please report them on the repo.

Enjoy!

Author: "ambrogio" Tags: "Development"
Date: Wednesday, 13 Feb 2013 22:32

Tell Me Again How That Is Not Intuitive

The word "intuitive" as applied to design is a bad word; it is just shorthand for "I like it" or "I find it easy or familiar." As such, it has no place in design critiques and discussions, unless it is heavily qualified. For instance, "our primary audience of professional statisticians will find it intuitive if we represent the normal distribution using this chart." But even in this case, it is just shorthand for "familiar," and it'd be better to just use the more accurate and meaningful word.

So why is calling something "intuitive" so bad? Well, because it provides no useful information on how to improve a design, and no groundwork for an intelligent discussion of design alternatives. For the claim to even be true, someone has to share some significant part of your life experiences.

In short, it is bad because you are falling into the old, lazy trap of imagining that the rest of the world is like you, that what you find intuitive, compelling, etc. is what others find intuitive, compelling, etc. That assumption is an enchanting lie; it leads to all sorts of bad design decisions. Even if it happens to be true in some qualified sense, you need to understand why that would be true. And that requires going beyond simplistic, squishy statements like, "this is intuitive."

Intuitive for whom?

What shared experiences, learning, skills, tools, background, etc. would others need to have for them to find it intuitive?

For whom would it be not intuitive?

Write your answers down. These are assumptions you're making about your users. Next, ensure that at least the vast majority/your primary audience will share the things you identified. By the way, are you sure you know your users so well? Did you actually research them? Or are you piling assumptions on top of assumptions?

Precious little of human understanding is truly intuitive. The vast majority of what we understand is based on observing, trying, failing, learning. Wash. Rinse. Repeat.

Baby Iain Handling a Cup

Walking is intuitive, right? Tell that to your six-month-old. I'm not a child psychologist, but I do have five kids, ranging from ten months to twelve years. I've had the opportunity to watch each of them slowly, painstakingly learn a whole bunch of simple, "intuitive" things. And that's the basic stuff of life; the stuff that really should be intuitive, if anything is.

On the other hand, if you're talking about software, there are layers upon layers of learning for people. I remember back in '96, when I worked for a geophysical logging corporation, watching one of the field guys struggle with a mouse. You think the fact that links are usually underlined meant anything to him? You think he'd find it intuitive that he can click on that underlined bit of text, much less have any expectation of what would happen afterwards if he did? Hopefully you get the idea. What people don't know but that you think is obvious can shock you--because you have these layers upon layers of learning that you're starting from. Problem is that it's basically guaranteed your layers are different from theirs.

We can even zoom up the shared knowledge stack to software professionals. You'd think this group, by and large, would have a lot more common ground (and you'd be right). But when's the last time you tried to learn a new technology or tool and just found everything "intuitive"? Did you perhaps think, "why can't this be just like <insert name of thing I already learned>? Everybody does it that way." Really? Have you done the research to justify that assumption, or are you just basing it on your own experience?

Chances are, there's probably a wide variety of solutions for similar problems, even in our rarified atmosphere. None of them is always the right way or always the wrong way. Certainly, none of them is definitively "intuitive." And until you dig in to analyze the alternatives, to understand their rationales, there's very little ground to be offering a critique, much less to be making sweeping generalizations about something being "intuitive."

The good news is that there are much better, more useful and productive ways to critique and evaluate designs. There are whole branches of knowledge around perception and cognition, much of which has been distilled and applied to software design for you. People have been designing things for many years, learning from those experiences, and have built up good, common design principles and heuristics. You can often leverage well established patterns (and not so well established ones)--but you need to understand them and make sure they apply in your context. You can't and shouldn't start from scratch with every design problem, but you need to be rigorous in the selection of patterns and the application of principles, especially when you diverge from patterns. And last, but certainly not least, test--test as much as you can, as early as you can, with people who are as close as you can get to your actual target users.

So the next time you are tempted to call a design "intuitive" or "not intuitive," just stop. Analyze your assumptions. Discover why, and instead of using that word, just say all the reasons why you find it that way, leveraging the knowledge, principles, patterns, and test results mentioned above. And if you can't find good reasons to justify your feelings, then just say "I don't like it. I don't know why; I just don't." It's okay to say that, because it is honest, and maybe someone with more Design/UX background can help you to tease out the underlying reasons. But whatever you do, don't say "intuitive"; it's a useless, lazy word when discussing design.

P.S. For much of the above, you could swap in "usable" or "user friendly" or "easy to use" for "intuitive." In this context, they are nearly synonymous and all boil down to basically the same thing--you need to be more thoughtful and rigorous in your design critiques.

Author: "ambrogio" Tags: "UX, Design"
Date: Tuesday, 29 Jan 2013 23:10

Responsive Design is Liquid Layout in New Jeans

What is responsive Web design (RWD)? You don't need me to tell you--there are hundreds if not thousands of articles on the Web, not to mention a few books. I've personally read about twenty some odd articles on it, as well as books/booklets. So far, this is my favorite primer, although one of the more thorough and up to date pieces can be found in Smashing's Mobile Book. I don't need to tell you who coined the term, nor that it was coined in 2010, as seems to be the custom in any piece on the subject. ;)

I also don't need to tell you that we're at (possibly--let's hope) the peak of the hype cycle for responsive Web design. And in fact, my intent is to add a little reality and grounding to the conversation to help us get as quickly as possible to the "plateau of productivity." Perhaps more to the point, I hope to offer some advice by way of caveats and considerations in regards to how responsive Web design relates to interaction design and user experience (UX).

"Earth Calling RWD, Come Back Down Here, RWD!"
I'm certainly not the first to attempt this. Just in November, Carin van Vuuren told executives (they're the only ones who read Forbes, right??) about "The Trouble With Using 'Responsive Design'," and one of my preferred puissant pedagogues, Luke Wroblewski, has written extensively about his trials with RWD (note that was written quite some time ago, early on in the hype cycle).

RWD is just a relatively recent, clever implementation technique to address the old problem identified in Liquid Layout, a pattern first catalogued by Jenifer Tidwell in her seminal work on UI patterns, Designing Interfaces (1st Edition). This came out of her research at MIT that began in 1997. As you can see in the examples for the pattern (and elsewhere if you look), people have been responsively adapting their designs for a very long time.

So why the hype now? Well, again, the formula for RWD articles will explain to you that the pressing need is due to the proliferation of non-desktop devices. There is a lot of truth to that, especially as "mobile" Web becomes more important than desktop.

However, as the pattern examples referenced above illustrate, the need for such adaptation has been with us longer. Even one RWD piece I read frankly asserted that we've all been living under a consensual delusion that we could safely design for fixed form factors. We all know that it's not reality, even (or especially) on windowed desktops. Further, we've had mobile for quite some time now, and I remember working with the early builds of ASP.NET's rather advanced control adapter technology well before it was released in 2006.

That "adapter" terminology is apropos for what it was/is. You'll see some folks disambiguate between "adaptive" and "responsive" design by emphasizing that the latter is more active--it happens on the client, so someone who resizes a browser window will see the changes immediately, on the fly--while "adaptive" refers to the pattern being applied elsewhere (such as on the server, as in the ASP.NET control adapters or the server-side components alluded to by LukeW's multi-device article referenced above).

RWD Is More About Technology and Budget, Less About Users and Design
This distinction between "adaptive" and "responsive" illustrates quite well a key caveat that we all should keep in mind: responsive Web design is not particularly concerned with users. It is more concerned with technical details and rather unimportant distinctions--I mean, like, does it really matter to people if a design updates its layout immediately as you resize a window? How many people are even going to do that? How many of your users even can--after all, it's not possible on most non-desktop devices, which is purportedly the catalyst for RWD.

No, RWD is, as said above, an implementation technique. The special sauce that Ethan Marcotte (doh! I said I wouldn't mention him!) brought to the table was not the idea of responsive/adaptive design but the peculiar combination of it applied to Web design using flexible grids, flexible images/videos, and CSS media queries.
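For those who haven't seen it spelled out, that combination reduces to a few lines of CSS, sketched here with illustrative class names and an arbitrary breakpoint:

```css
/* 1. Flexible grid: column widths in percentages, not pixels */
.main    { width: 62.5%; float: left; }
.sidebar { width: 37.5%; float: left; }

/* 2. Flexible media: images scale down with their container */
img { max-width: 100%; height: auto; }

/* 3. Media query: re-flow the layout below a chosen breakpoint */
@media (max-width: 600px) {
  .main, .sidebar { width: 100%; float: none; }
}
```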

Many electrons have flown through the intertubes about RWD, and it is currently the talk of the town in Web design circles. But we need to keep this key thing in mind. RWD does very little to keep us focused on the people using our designs, which is really at the core of what makes a design good or bad. In fact, if these numbers are any indication, the focus on RWD by designers fascinated with the power of media queries may actually be hurting rather than helping users on mobile devices.

Let me be clear: I am not suggesting that applying the Liquid Layout pattern is not helpful. Nor am I arguing that RWD is inherently flawed in any way. On the contrary, where RWD can be a big win for users is chiefly when it makes applying the Liquid Layout pattern feasible--when users would not otherwise get any optimization for their mode of accessing your content due to budget/resource constraints. And let's be honest--this budgetary consideration is probably more than anything the driving factor in RWD's popularity.

Admittedly, there is some user consideration involved in RWD, but it is a bare minimum kind of consideration--ensuring some basic layout optimization for various devices and potentially addressing the multiple URL issue. But that hardly justifies the hype and, more importantly, neglects or obscures more important design considerations that can be impacted by pre-selecting RWD as a baseline for your design efforts.

The Advice
Users - We Really Do Exist! And that is the problem with hype peaks like this one--people start from the solution as an assumed baseline. If you understand design patterns, you'll know that implementations should be customized to fit the context, and that the context should qualify whether or not the pattern is even applied. Please, don't just blindly apply RWD as a solution to your next project. Before you choose it as a solution:

  • Learn about the challenges and trade-offs first.
  • Consider hybrid approaches that leverage both client and server-side.
  • If you're designing an app (that is, your solution is activity oriented rather than content-consumption oriented--a.k.a., a "site"), consider if it is even appropriate at all, or if it wouldn't still be more appropriate to directly optimize for target devices or device classes (phones, tablets, TVs, desktops).
  • Design mobile first, even if you're going with RWD.

Whatever you do, don't get so caught up with the hype or fascination with new implementation techniques such that you give more importance to them than to your actual users:

  • Don't neglect design research, which helps you to know if RWD is the best solution.
  • Don't neglect sketching, interaction prototyping, and testing with users first--to find the best design(s) for your known users, devices, and context.
  • Don't fall into the trap of imagining that your design is "for anybody" so you have to design it "for everybody" on "every device," because this usually equates to designing it for you (the imagined, ambiguous user in your head) on your devices (whatever devices you happen to have access to).

This last caveat is perhaps the most dangerous siren to avoid in relation to RWD, because it sings to you so soothingly:

Come, dear designer. Stop thinking about specific devices. There are too many of them. You cannot possibly anticipate all of the devices that will access your content. Perhaps, you cannot even know your users.

Thus it beckons you to a kind of user agnosticism.

And if you start from thinking you can't know your users, then you easily fall into the we-shouldn't-even-try-to-know rut. And if you don't know your users because you didn't do research, then whom will you design for? And if you don't know whom to design for, then whom will you test your designs with? And if you don't know whom to test your designs with, then how will you know if it is a good design or not?

...

Ahh! Forget all that! This media query stuff is cool! RWD FTW!

Author: "ambrogio" Tags: "Best Practices, UX, Design"
Date: Wednesday, 22 Aug 2012 20:12

Indigo Studio Sneak Peek

Have you ever found yourself in the situation where you had an app idea, but you didn’t have the time, or maybe even the skills, to prototype it? Have you tried using some of the existing app prototyping tools on the market today only to find that they were either too basic in their interactions support, too technical, or just too fiddly and complicated? Have you tried handing off a set of static wireframes or mockups, annotated with descriptions of transitions and animations, only to find that the design you had in your head was very different from what was implemented?

If you answered yes to any of these, or questions like them, then Indigo Studio is a tool for you!

Will the Real Interaction Design Tool Please Stand Up?

There are plenty of tools out there that people use today for interaction design—everything from paper and pen to hand-coding prototypes on the target platform, and everything in between. Indigo Studio is the first tool that takes doing interaction design as its core design problem, and it tries to support not only UI design in itself but also situating UI design in the context of human living.

Indigo Studio Storyboard

As good interaction design starts from real, human stories, so Indigo starts you off with the ability to capture your stories, in storyboard format. You can start out by simply narrating your story/scenario, blocking it out on the storyboard.  Indigo Studio has hundreds of real-world “scenes” (sketches of people in real-world contexts) that you can just drop into your storyboard boxes to provide that context.

Once you get the basic story narrative down and spice it up with some real-world pictorial context, you can just double-click on any particular step and jump right into designing a UI concept for that part of the story. And as you may see in the screenshot above, the scenes that include a visible device screen let you literally show the screens you are designing in place, as if they were on that device in the scene. Indigo creates a link between the state of the screen you design and that box in the storyboard, so it always stays up to date, and you can even run your prototype from any one of your steps. But we’re getting ahead of ourselves.

While you don’t have to use the storyboarding part of Indigo, using a “storyboard first” approach helps keep the focus of design where it should be—on the people and their stories, but that’s just the beginning…

Serious Interactive Prototyping and Animations

What would an interaction design tool be if it didn’t support designing actual interactions? Well, surprisingly, most tools that interaction designers use don’t really support this! At best, most let you create click-through static mockups, but that’s hardly what is expected of rich, high-quality apps with great UX these days, right? We thought so, too, so we made interactions absolutely core to Indigo Studio—not just a simple click-through (though you can do that!), not just tacking some kind of “dynamic” content onto a mostly static design. No, interactions are the bread and butter of Indigo.

Add Interaction

Every element you put on a screen invites you (right there with no extra steps needed) to make it interactive, to respond to a user action with some kind of change in the prototype. You can respond to user actions in a number of ways, such as navigating to new screens, opening URLs, or you can just change the state of the current screen, for instance, by adding an overlay.

Making and Animating Changes

As you make changes to screens, those changes are recorded on the timeline for you. By default, of course, the changes are immediate, but all you have to do is drag or resize them to animate, and voila! you have not only an interactive prototype, but an interaction designed with a meaningful animation (assuming that was your intent, of course!).  All this without ever thinking once about writing a line of code, so you can stay focused on the interactions between the people you’re designing for and the awesome app you have in your head.

It takes just seconds to try out an interaction design concept in Indigo Studio. Of course you can still design static wireframes and just make them click through, but if you’re designing an interactive app, why would you choose to do that when you are empowered to easily and rapidly explore interactions? By getting not just the static screens but also the transitions out of your head and into some designed form, you can quickly try out multiple design ideas, validate if they will work like you think they do, and even give you something to go try with your target users, review with clients, and share with devs.

Interested?

Indigo Studio is currently in a limited private preview, but we are actively seeking out people who are interested in participating.  If it looks like something you’d find helpful, just send an email to indigo at infragistics dot com.  Tell us your name, a little bit about what you do, and how you think Indigo can help you, and we’ll see what we can do to get you onto the private preview.  At a minimum, this will get you on the first notification list for when we release Indigo (which is not far off!).

UPDATE (16 Oct 2012) - The release is very soon, and all the gears are humming here on the Indigo Studio Team!  We are not currently accepting more folks into the private preview; however, if you wish to be notified of the release, feel free to send us an email as above!  Thanks for your interest!

Author: "ambrogio" Tags: "UX, Design, Indigo Studio"
Date: Sunday, 01 Jul 2012 15:12
 

UX for Devs Manifesto

  1. Design from the outside in.
  2. Make time for research.
  3. Keep your hands off the keyboard.
  4. Try many things. Keep none of them.
  5. Test designs with people.
  6. When all else fails, code.
  7. Details make or break it.

 

 

If you're scratching your head at or intrigued by some of these, that's a good sign. If you're feeling a bit indignant, that's even better. If you totally agree--awesome! So lemme splain.

Who's This For?

Well, like the name says, it's for devs. Why is it for devs?  Well, hopefully, if you're a designer, all of this is old hat to you, and you know this and a lot more.  If you're a dev fortunate enough to work with UX designers, then probably this is something you're familiar with already, and you can relax a bit, let your UX folks take the lead, and work with them to make the design effort successful.  But mainly, this is for the vast majority of devs who aren't also designers and don't have the luxury to work with them.

Why?

We are several years into a significant transformation of the software industry. The change is from a technology-centric view of software to a human-centric view of software. The change is all around us, and if you can't see it, I probably can't help you. ;) So rather than beat the "why UX is important" drum more, I'm going to assume you already know that it is and move on to explaining the manifesto. I'm also assuming that you, a dev, will take some time to at least read a book or two on UX and Design. You don't need to become a professional, but just learning and trying to practice some of the fundamentals will help you make software that your clients and bosses appreciate a lot more (and that means good things for you, right?).  And if you have UX folks on staff, this will only help you collaborate with them more effectively.  Without further ado...

Let's Put Some Meat on the Bones

1. Design from the outside in. - This means you don't start your software project thinking about the database, or even the object model. You don't start worrying about how to optimize relationships or transfer data objects efficiently over the wire. You don't just throw forms over database tables and say "done!".  On the contrary, you start by thinking about your software from the outside, i.e., from the perspective of the people who will be using what you make. Put yourself in their shoes as you go throughout design and development--try to forget (compartmentalize) your knowledge of all the inner workings and see what you're making from the outside in. Think about the most important, critical activities and optimize for them--don't make everything equally easy, which really means equally hard. Remember that you are not (usually) the user. Say "I am not the user" three times before starting to design or critique a design.

2. Make time for research. - Do some research before you start designing. Don't be satisfied with a business requirements doc, nor with a basic interview with a stakeholder. Learn about the people who will be using your software. If you can, meet them, observe their work--even for a short time, talk to them about their pain points, look for ways that you could make their lives better with the software you may be building.  Look at what others have done, find patterns. Don't blindly copy--understand why a design works and when it makes sense and doesn't before you use it as inspiration. Capture that research in a way that helps you think from the outside in (e.g., personas, stories, storyboards).

3. Keep your hands off the keyboard. - Resist the urge to start typing code from the get go. Don't even start drawing up diagrams. Stay away from the computer! Start with paper and pen, or a whiteboard if you're with colleagues. Grab those personas and stories, come up with narratives from your users' perspectives (this means you don't start from people sitting in front of your software, as if they rolled out of bed and directly in front of your interface--you start from a real-world context where they might need to use your software). Block the narratives out where you might have interactions with software (make storyboards).  THEN start sketching UIs to fill in the empty boxes on your storyboards.

4. Try many things. Keep none of them. - In your sketching, sketch many (like more than three) alternatives--especially if this is a core activity. Sketch out the obvious one you already have in your head, then do another, completely different one--even if it seems stupid or incredibly difficult to implement. Putting yourself in your users' shoes, how would they like to do that activity? What is the easiest, most delightful way it could be done? Do two more alternatives. Once you have several alternatives, look for what's good in them (get colleagues involved in this if you can). Don't keep any one of them--combine the good from all of them into new alternatives that synthesize the good from the many.  Create higher fidelity design (or two) from them that validates the design in terms of your target devices (resolution, interaction capabilities, etc.), but don't code it yet or worry about styling--use a wireframing or prototyping tool (even the ones that are simply stencils on paper).

5. Test designs with people. - Get your best designs in front of people--ideally the people who will be using the software, but at least someone not involved in the design process. Ask them to do what your narratives describe--don't tell them how to do it; just give them the goal and ask them how they'd do it. You can do this with paper, print outs of your design, or if you have an interactive prototype, with the prototype.

6. When all else fails, code. - Only write code if you can't try your designs without it. Avoid at all costs writing productional code in the design process; certainly do not write unit tests for a prototype. The goal of a prototype should not be with a view to how you can reuse/port the code at the end of it. The goal of the prototype is to test out a design idea (or test technical feasibility). Only code enough to test the idea and no more.

7. Details make or break it. - After you've done all this, THEN you start thinking about how you can design the internals of the software and implementing it to make your UI design a reality, doing your best not to compromise good UX for expediency. The little details--like how you handle unexpected exceptions, input validation & helps, alignment and spacing, judicious use of color, and making a user do something that you could do for them--these all add up to make or break a great experience with the software you're making. 

What Do You Think?

This manifesto touches the surfaces of key aspects in changing the way devs can approach making software. Some of you are already doing bits and pieces; some are doing most. Some might have a more intensive process for a few of the steps, especially if you're working with UX folks already. I'm trying to keep the manifesto lightweight enough to remember and be fairly practicable in most dev environments. I'm trying to keep it fairly process agnostic--only prescribing a high level ordering based on known Design best practices. There are MANY particular design processes, and even more software dev processes. Hopefully you can find a way to adapt some or all of these into whatever process you work in--it can only help make you more successful, and it's only as time intensive as it needs to be in order to solve the problem at hand.

So what do you think?  Does it work? Are there points you would remove? Points you would add? Embellish? Simplify?  Let me know! This is just the first draft, an attempt to codify a simple set of best practices for devs who care about UX.

--

Want to discuss? Feel free to comment here, tweet @ambroselittle, or connect on Google+ or LinkedIn.

P.S. If you need professional UX help, Infragistics UX Services can help you!

 

Update (2 July 2012) - Got feedback saying this was the UX designer's job, so I added the section titles and the section "Who's This For?" to clarify the intended audience. 

Author: "ambrogio" Tags: "Best Practices, UX, Design, Development"
Send by mail Print  Save  Delicious 
Date: Sunday, 01 Jul 2012 15:12
 

UX for Devs Manifesto

  1. Design from the outside in.
  2. Make time for research.
  3. Keep your hands off the keyboard.
  4. Try many things. Keep none of them.
  5. Test designs with people.
  6. When all else fails, code.
  7. Details make or break it.

 

A sketch on paper.

 

If you're scratching your head at or intrigued by some of these, that's a good sign. If you're feeling a bit indignant, that's even better. If you totally agree--awesome! So lemme splain.

Who's This For?

Well, like the name says, it's for devs. Why is it for devs?  Well, hopefully, if you're a designer, all of this is old hat to you, and you know this and a lot more.  If you're a dev fortunate enough to work with UX designers, then probably this is something you're familiar with already, and you can relax a bit, let your UX folks take the lead, and work with them to make the design effort successful.  But mainly, this is for the vast majority of devs who aren't also designers and don't have the luxury to work with them.

Why?

We are several years into a significant transformation of the software industry. The change is from a technology-centric view of software to a human-centric view of software. The change is all around us, and if you can't see it, I probably can't help you. ;) So rather than beat the "why UX is important" drum more, I'm going to assume you already know that it is and move on to explaining the manifesto. I'm also assuming that you, a dev, will take some time to at least read a book or two on UX and Design. You don't need to become a professional, but just learning and trying to practice some of the fundamentals will help you make software that your clients and bosses appreciate a lot more (and that means good things for you, right?).  And if you have UX folks on staff, this will only help you to collaborate with them more effectively.  Without further ado...

Let's Put Some Meat on the Bones

1. Design from the outside in. - This means you don't start your software project thinking about the database, or even the object model. You don't start worrying about how to optimize relationships or transfer data objects efficiently over the wire. You don't just throw forms over database tables and say "done!".  On the contrary, you start by thinking about your software from the outside, i.e., from the perspective of the people who will be using what you make. Put yourself in their shoes as you go through design and development--try to forget (compartmentalize) your knowledge of all the inner workings and see what you're making from the outside in. Think about the most important, critical activities and optimize for them--don't make everything equally easy, which really means equally hard. Remember that you are not (usually) the user. Say "I am not the user" three times before starting to design or critique a design.

2. Make time for research. - Do some research before you start designing. Don't be satisfied with a business requirements doc, nor with a basic interview with a stakeholder. Don’t just throw together a survey and count that as research. Learn about the people who will be using your software. If you can, meet them, observe their work--even for a short time, talk to them about their pain points, look for ways that you could make their lives better with the software you may be building.  Look at what others have done, find patterns. Don't blindly copy--understand why a design works and when it makes sense and doesn't before you use it as inspiration. Capture that research in a way that helps you think from the outside in (e.g., personas, stories, storyboards).

3. Keep your hands off the keyboard. - Resist the urge to start typing code from the get go. Don't even start drawing up diagrams. Stay away from the computer! Start with paper and pen, or a whiteboard if you're with colleagues. Grab those personas and stories, come up with narratives from your users' perspectives (this means you don't start from people sitting in front of your software, as if they rolled out of bed and directly in front of your interface--you start from a real-world context where they might need to use your software). Block the narratives out where you might have interactions with software (make storyboards).  THEN start sketching UIs to fill in the empty boxes on your storyboards.

4. Try many things. Keep none of them. - In your sketching, sketch many (like more than three) alternatives--especially if this is a core activity. Sketch out the obvious one you already have in your head, then do another, completely different one--even if it seems stupid or incredibly difficult to implement. Putting yourself in your users' shoes, how would they like to do that activity? What is the easiest, most delightful way it could be done? Do two more alternatives. Once you have several alternatives, look for what's good in them (get colleagues involved in this if you can). Don't keep any one of them--combine the good from all of them into new alternatives that synthesize the good from the many.  Create a higher-fidelity design (or two) from them that validates the design in terms of your target devices (resolution, interaction capabilities, etc.), but don't code it yet or worry about styling--use a wireframing or prototyping tool (even the ones that are simply stencils on paper).

5. Test designs with people. - Get your best designs in front of people--ideally the people who will be using the software, but at least someone not involved in the design process. Ask them to do what your narratives describe--don't tell them how to do it; just give them the goal and ask them how they'd do it. You can do this with paper, print outs of your design, or if you have an interactive prototype, with the prototype.

6. When all else fails, code. - Only write code if you can't try your designs without it, or if coding will actually help you validate more quickly than other choices. Avoid at all costs writing production code in the design process; certainly do not write unit tests for a prototype. The goal of a prototype is not to produce code you can reuse or port at the end; it is to test out a design idea (or test technical feasibility). Only code enough to test the idea and no more.
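To make that concrete, here's a hypothetical sketch (my own example, not part of the manifesto) of what "only code enough to test the idea" can look like: hard-coded data, no persistence, no error handling--just enough to see whether a search-as-you-type interaction narrows a list in a way that feels useful.

```python
# Throwaway prototype: does "search as you type" narrow a customer list usefully?
# Everything here is deliberately disposable -- hard-coded data, no real UI.

CUSTOMERS = ["Acme Corp", "Ace Hardware", "Globex", "Initech", "Umbrella"]

def matches(query: str) -> list[str]:
    """Return customers whose name contains the query, case-insensitively."""
    q = query.lower()
    return [c for c in CUSTOMERS if q in c.lower()]

# Simulate a few keystrokes and eyeball how the list narrows.
for typed in ["a", "ac", "ace"]:
    print(typed, "->", matches(typed))
```

The point of a sketch like this is to answer one design question quickly and then throw the code away, not to grow it into the real feature.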

7. Details make or break it. - After you've done all this, THEN you start thinking about how you can design the internals of the software and implementing it to make your UI design a reality, doing your best not to compromise good UX for expediency. The little details--like how you handle unexpected exceptions, input validation and help text, alignment and spacing, judicious use of color, and making a user do something that you could do for them--these all add up to make or break a great experience with the software you're making.

What Do You Think?

This manifesto touches the surface of key aspects of changing the way devs can approach making software. Some of you are already doing bits and pieces; some are doing most. Some might have a more intensive process for a few of the steps, especially if you're working with UX folks already. I'm trying to keep the manifesto lightweight enough to remember and be fairly practicable in most dev environments. I'm trying to keep it fairly process agnostic--only prescribing a high-level ordering based on known Design best practices. There are MANY particular design processes, and even more software dev processes. Hopefully you can find a way to adapt some or all of these into whatever process you work in--it can only help make you more successful, and it's only as time intensive as it needs to be in order to solve the problem at hand.

So what do you think?  Does it work? Are there points you would remove? Points you would add? Embellish? Simplify?  Let me know! This is just the first draft, an attempt to codify a simple set of best practices for devs who care about UX.

--

Want to discuss? Feel free to comment here, tweet @ambroselittle, or connect on Google+ or LinkedIn.

P.S. If you need professional UX help, Infragistics UX Services can help you!

 

Update (2 July 2012) - Got feedback saying this was the UX designer's job, so I added the section titles and the section "Who's This For?" to clarify the intended audience.

Update (19 September 2012) - Francis Rowland pointed out that I can't count (that there was a mismatch in the top/bottom numbering).  And "when all else fails" is hyperbole--amended to add the bit about coding if it helps validate an idea more quickly--suggestion from Adrian Howard. Also, added a warning against just doing a survey as research (via Adrian).  Thanks, y'all!

Author: "ambrogio" Tags: "Best Practices, UX, Design, Development"
Date: Friday, 29 Jun 2012 15:22

Skeuomorphism is bad

You've probably heard someone say "skeuomorphic design is bad" by now, or maybe they wrote "skewomorphic" or even "skuomorphic" (who knows, maybe even "skeumorphism")--it's a tough word to spell. However you spell it, you can read the article on skeuomorphism on good ol' Wikipedia to get the more or less official meaning. Also, here's an article that lavishes you with skeuomorph screenshots from the iPad--sometimes it's better to use examples to learn a concept. Of course, with the iPad, it's more than just pretty pictures.  Some of its skeuomorphism uses interactions as well, thanks in large part to its primary touch interface.

My colleague @brentschooley pointed out to me the other day that Apple's new Podcasts app in iOS 6 not only looks like a reel to reel, it has realistic motion, and as the time progresses, the thickness of the tape on each roll inversely changes (as it would if really playing), in addition to a host of minute details that really make it seem pretty magical (bouncing tape guide, for instance). And I guess I'd say that more than anything, that's what skeuomorphism does for digital interfaces--it adds a certain kind of magic. The point is not metaphor. The point is not even strictly usability, although something could be said for that, depending on how well the original source of inspiration was designed.  It's magic--it's taking something you're familiar with, maybe even seen as old and dated, and making it new, and more than it was before.

Some designers are lambasting skeuomorphic designs because they theoretically interfere with usability. But let's think about that for a minute, and not just in terms of initial learnability (which is commonly seen as the only benefit of this kind of design). Many of the physical objects we use were designed. Not only that, they have had years, even centuries, to tinker with and improve their designs for human use.  Consider the book.  The earliest writings (that we know of) were on the walls of caves. Later came clay and stone tablets. These pose many practical challenges, so designs were improved. Papyrus. Scrolls. Parchment. Individual leaves. Still more improvements could be made--bound books.  Not all bindings are created equal--small books that fit in pockets, large books for public use in ritual. The simple, physical act of turning a page. The point is not to argue for books as the ultimate in recording and reading technology, but there are few designs as well tested and well used and well known. 

If you read, for instance, The Design of Everyday Things, you begin to appreciate better the design that did (or didn't) go into all these physical artifacts that surround us. So much of what we take for granted was designed.  Much of it was improved upon after years of use and pressure to improve. Give someone a modern hammer (with the teeth to pull nails on the back) who has never seen one, and they will wonder at what the two prongs are for. They sure look decorative, but no, they were added, designed, honed over many years.  

Consider the reel to reel Podcasts example. The reels moving is feedback that it is playing. The increasing/decreasing tape thickness is feedback on progress. (You don't see a knob for volume but rather a slider, which works better on this device than a knob.) Or take the classic page turner example--the page moving with your finger as you drag it is direct feedback. The design of the calendar, especially the month and week layouts, was settled well before digital technology--these are tried-and-true ways of mapping out time. People don't get anxious that most calendars and pickers in software try to emulate that layout.

"But but!" you'll say, "those aren't technically skeuomorphism, which singles out elements of design that are no longer necessary due to new material/technology." I'd say this is both true and not true. Surely we could come up with novel ways (and have in some cases) to tackle the similar design challenges, provide such feedback in other ways that are more "digitally authentic." And sometimes we can find new ways that are less cumbersome (e.g., tapping the edge of a page or a flip button). Yet that doesn't mean that the old ways have to be discarded, or even that they can't work together (the iPad apps fuse both skeuomorphic and authentically digital designs often quite successfully). That doesn't mean that they don't work at all. That doesn't mean that they are less aesthetically pleasing.  

And this is in addition to the learnability win that such design brings with it. People argue that using skeuomorphs or even just metaphors doesn't bring much more to the table than this initial learnability, but learnability can make or break software's success. Especially in a market flooded with so many little apps--the initial experience people have can mean everything. If you can hook someone in with a familiar design, even a metaphor that only partly works, then let them discover the more efficient "authentic" design elements in time, it can be a win-win. Consider the post above that leans towards non-skeuomorph preference:

It uses a "book" icon next to the blog label. Surely the label is enough, and what place does such an outmoded thing as a book have in digital graphic design--for a blog? Icons are notorious for this kind of "baggage." Even just considering metaphors, it is rarely a simple binary yes or no, as to whether they are employed. We're not babies. We don't need to rediscover the world completely, slowly, building little by little on new experiences. We have a wealth of learned knowledge to draw on. Even what seems "intuitive" (like touching and dragging something on a surface) is not innate knowledge. I have five kids, and while I'm no child clinical psychologist, I have watched them learn and explore their world, first just learning that they have arms and hands, then learning to control them, then picking up more understanding of what they can do and are good for bit by bit year over year--things that we take for granted as adults.

The point is, categorically eschewing a design approach because it relies on metaphor or design elements that are no longer necessary to the material is, to put it simply, naïve.  It is always a question of how much we draw on past experiences in the design and how much we introduce new ones.  I guarantee any new design that tries to be completely innovative first of all will be a non-starter and secondly would be completely unintelligible to those not involved in the design process. 

Good design is not just about functional efficiency (and it's certainly not just about novelty or supposed "authenticity").  This brings me back to magic. Apple has been mocked for using "magical" as a buzzword, but there really is something to it, and the sense of magic is, at least in part, created by their fusing of skeuomorphic design with digital design (and capabilities). I don't need to drag out the cliché Asimov quote on this, do I?  Taking something that seems ordinary and familiar and granting it new, unexpected, and to the uninformed, inexplicable powers is magical.  Sure, familiarity will eventually rub off the initial tingling sensation of awe, but the lingering sense of wonder or at least appreciation will stick with you. And if it is, for instance, a gesture that you used (or visual you have seen) all your life and have many happy memories associated with it, those happy memories will transfer quickly, creating an emotional attachment (a GOOD THING for both usability and general product success), and you'll find yourself lingering on these design elements occasionally, long after the initial amazement wears off.

I don't think any designer would argue that you should always use skeuomorphism, but this meme that "skeuomorphic design" is bad or "weak" is something that designers need to stop and rethink. Maybe your app could use a little magic.

--

Want to discuss? Feel free to comment here, tweet @ambroselittle, or connect on Google+.

Author: "ambrogio" Tags: "UX, Design"
Date: Tuesday, 26 Jun 2012 14:22

John Kolko, a well-respected colleague in the interaction design world, just posted his thoughts on why designers must learn to code.  As he points out, he's not the first to suggest this, and in fact, there have been several conversations in blogs and on IxDA that I've observed over the years. For myself, having been a software developer and architect prior to moving into Design, I can say that having that background and the ability to understand not just code but the capabilities of platforms and the challenges of, for instance, working with asynchronous communication over the Web versus more direct, synchronous connections, latency, etc.--general software architectural considerations--does help. At the very least, it reduces some friction and iteration.

On the other hand, even with my dev knowledge and experience, I work with devs who I consider to be far more knowledgeable--wizards, really. They can code things that I can't, or at least things I'd have to spend a fair bit of time figuring out how to do. They're so good, in fact, that I still find myself asking for "too much" at times, with my faith in their ability to deliver. So the idea that a designer learning the basics of coding will eliminate all of the "asking for the impossible" is not exactly realistic, nor, I suspect, can the average designer become the code wizard they would need to be in order to fully account for the design they want to express. The danger then becomes a designer who thinks he or she understands the limitations of the medium (based on his or her inadequate knowledge) preemptively precluding better design options.

John cites Jared Spool's earlier post on 3 reasons why learning to code makes you a better designer. Jared's first point is most compelling in my view--to better understand the medium you work in. I don't necessarily agree with his suggestion that this makes you "know what's easy" and what's not. What may seem really hard to one coder could be child's play to another--because the latter has experience implementing a design like it. In other words, it's about known versus unknown. I don't think it's helpful for a designer to try to guess at levels of difficulty up front. Even I have been surprised by this in both directions (sometimes I think something is hard that my devs tell me is easy, and vice-versa). It is helpful to know that something would require magic and is totally impossible with current technologies (e.g., mind reading or telekinesis), but it's not so helpful to try to guess, for instance, how long it would take to execute on something that at least seems possible. One of the common complaints I've heard from devs is designers asking for the impossible, so as long as a designer stays within the realm of the possible, a dev can help determine difficulty. Plus, it's important to realize the vast differences in experience and skill and how those can impact any particular dev's ability to execute (or even prototype) a design idea.

To Code

So should a designer learn to code? I'd say without reservation yes--learn to code something. Anything. Having even that much experience with some aspects of programming is helpful, but it's more important to learn the high-level capabilities of the particular platform you're working in. It's not particularly helpful to be able to write a simplistic program in C#, to understand the syntax, how to create a domain model, the differences in type visibility, or the difference between something being on the stack versus on the heap. That kind of knowledge is barely useful to a designer, though it's important for a developer. But just having some experience with flow control and basic data structures will help some.

On the other hand, learning what sensors are available on the devices that you are developing for, what they are capable of, what input modalities are available and which will be more commonly used, whether or not people will be connected all the time or not, how they will be connected, some basic grasp of what is executionally intensive for your environment, what kind of network latency you have to work with--these are important things to know as a designer that can meaningfully inform your design choices, even before anyone writes a single line of code. If you're dealing with existing data entities or services, learning what those are and what they are capable of, those are important and helpful. If you're dealing with browsers, learn about what the particular challenges and capabilities are of the ones you intend to support.  If you're going to spend time understanding your medium, these are the kinds of things that are far more useful than language syntax and "being able to code." 

Another benefit of learning about coding is that reducing the language barrier between you and the developers, and having some idea of the challenges they face, will help smooth discussions. But this one is definitely a two-way street--devs need to step up and learn the language of Design, too: its basic concepts and concerns. I had the T-shirt above made. It can be taken two ways, but the point either way is basically the same:

  1. A dev reads it and chuckles, "that's not real coding--there's a lot more to coding than that. It doesn't make you a developer."
  2. A designer reads it and says, "yeah, that's the point, and being able to draw a wireframe on a whiteboard doesn't make you a designer."

The moral is that learning the basics of your complementary disciplines is helpful, but don't imagine for a second that understanding the basics makes you a professional. At best, it can help communication, and no matter what, a healthy mutual respect is necessary, including a deference to each other in what you are respectively good at (and paid for). A designer who has learned to code shouldn't presume to second-guess her devs, and a dev learning design fundamentals shouldn't ignore his designer's designs in favor of what he thinks is a better design.

Not to Code

Just as there are good reasons to learn to code, it is important to be mindful of the pitfalls. As I touched on above, there is some unavoidable bias that comes with knowing how to code, and it comes in flavors. One is simply eliminating what seems like a good, reasonable design because you don't know how to do it. Another is the inverse--choosing a design just because you are familiar with it and think it is "easy." The latter is especially dangerous when you have to contribute to the code yourself or are under pressure of a deadline (and who isn't, right?). There is a time to make design compromises, and up front during design generation is not the best place for them to occur. Take the time to explore even design concepts that are "hard." Correspondingly, it is easy to get distracted by the "how" of making a design, when that is not appropriate.

Another pitfall that I didn't immediately think of, but Jenifer Tidwell (of UI-patterns-book fame) did, is that when you learn to code, you might be asked to code more than you would if you didn't. It certainly depends on the company, but I can attest from personal experience and from discussions with other designers that, almost without exception, coding is generally perceived as more valuable than design. After all, you can make software without Design, but you can't make software without coding. The outcome of coding is a product (bad though it may be); the outcome of Design, without coding, is essentially just an idea (freakin' awesome though it may be) and "deliverables" of some sort that reflect the idea but are not the end product.


No one who knows him would say that Alan isn't opinionated, but the underlying gist of what he's saying is important to keep in mind--if even "professional" developers are commonly seen to be "criminally incompetent," it pays to remember what was mentioned above: trying to guess at difficulty is not particularly helpful. Ask a developer to estimate something he hasn't done before, and you will usually get a famously inaccurate guess. (How much more inaccurate will it be for a designer who has picked up some basics of coding?) When faced with an unknown, the most important thing to do is to prototype, and to prototype fast and cheaply. If it's a question of technical feasibility, do it on the target platform and get the dev who is going to build it involved. If it's a question of exploring interaction design, use the many tools you have to simulate a design and try it out.

This is where I depart from some of my colleagues in Design--learning to code so that you can prototype is not a great reason to learn to code. There are so many ways to prototype designs before writing code, and surely people will keep creating tools to help with this so that you don't need to code for this purpose. Sure, if no tools suit your needs and you have a very specific, high-fidelity design you need to test, then coding skills can come in handy. But the drawbacks of learning to code for prototyping are that you will be tempted to bypass lower fidelity when it is more appropriate, and to invest too much time and effort coding a simple prototype that you could explore effectively with tools that don't require coding.

If you learn to code (or worse, have a coding background), you have to actively guard against these pitfalls. Trust me, it takes time to break bad habits, and it is a challenge to avoid falling into new ones. If you are going to learn to code (or already know how), just keep these pitfalls in mind and do your best. You don't have to learn to code to be a good designer, but it can certainly help in many cases. One thing I can totally agree with John on is that Design doesn't stop after the initial designs are done. It is crucial for designers and developers to work closely throughout implementation, and who knows--just doing that may begin, after some time, to give you the understanding of your medium (and your colleagues' perspectives, and vice-versa) such that learning to code, as such, will be less important than it may seem at first glance.

First, be true to good Design; second, learn what you need to in order to see it through to completion. A great Design is pointless if it is not realized in the final product.


Author: "ambrogio" Tags: "Design, Development"
Date: Wednesday, 20 Jun 2012 19:15

Creating good software design is hard. You can read all the books. You can go to all the conferences. You can practice practice practice.  But it all comes down to making judgment calls, millions of tiny, tiny judgments that all add up to create the whole design, which becomes so many particular experiences with your software, experiences that you can never fully foresee, kinds of people you never knew, contexts that fall out of the ones you were aware of. Show me a thing that is generally agreed upon as "great design," and I'll show you tons of posts on social media, forums, etc. where people are bashing, complaining, and scratching their heads in confusion. The old maxim really holds true--you can't please everybody, not even a subset of everybodies that fall into your neatly crafted target audiences or personas.

So what are we to do? Well, you just gotta keep trying. You gotta keep learning, keep practicing, keep prototyping, keep testing, keep iterating. Apply those good patterns where they make sense for your design space; keep a tight focus on the users you think you know about, their stories and their contexts. And pay attention to the details. 

When faced with design choices, one thing you have to balance is what I'm loosely framing as a spectrum between "usability" and "aesthetics."  To define the terms in the illustration above:

  • "clever" - by this I mean a design that is unexpected, maybe something that once a user learns and understands it, seems really clever; it solves a problem but is anything but...
  • "obvious" - I put this forward for designs that go to some length to be painfully obvious, holding the user's hands, often with clear affordances, reliance on patterns and conventions, and even textual guidance in one form or another

These two are positioned on a sliding scale of usability (how easily and effectively I can do what I want to do, with minimal frustration, confusion, and error), where increasing "cleverness" requires a corresponding decrease in "obviousness."  

Correspondingly:

  • "minimal" - as in a form of minimalism--the removal of all elements that aren't strictly necessary, with the typical accompaniment of an interface seeming simpler, cleaner, and thus often perceived as more desirable, as opposed to...
  • "bulky" (or "busy") - this would be an interface that can seem at first glance unwieldy, maybe hard to make it look stylish and desirable

These are positioned along the same scale as inversely proportional to the usability values, because in practice, they often have such tension. (Although aesthetics broadly understood can encompass "usability," just stick with me...) As the capabilities of the software increase, so does this tension. It's one thing to design a simple sign-up or check-out form and find a good balance between these relatively easily. But try balancing these for, say, a word processing app, or a system control interface.

It seems to me that these design values do often fight with each other, and the trick is to find the right balance between them. Not all apps need to be dead simple, holding the hand step by step by step, but that doesn't mean you flip a binary switch to the other side either. And even within an app--this or that capability might need the slider moved based on its perceived relevance to the activity at hand, the commonality of need for it, or even how important it is to the business.

Often though (especially at a certain level of detail), the best position on this scale cannot be directly informed by known personas, stories, business cases, design principles, or metrics. It will be a judgment call the designer has to make, and it will have to be informed indirectly by all of these things, but ultimately it is a judgment call, and that is where the rubber meets the road for Design, where Design becomes an art, where talent and skill become more important than knowledge and scientific approaches. It is in all these little judgment calls that the totality of the design is created. It calls for focus and dedication to get it right, and even then, you will get it wrong sometimes. That's why it's so important to try, fail, learn, improve, to not be afraid to fail, but to do it as quickly and cheaply as you can.

So when you're in the throes of design, fighting with your teammates about the "best" design, keep this in mind. There is rarely a single "right" solution in design (even with a nice set of research behind things), and it's even rarer to find the best solution on the first try (no matter who you are), even after surviving a critique. Everybody has a personal design aesthetic that informs where they tend to place things on the scale above. One personal aesthetic isn't inherently better than the other--they need to be conditioned thoroughly by context of use, the people you are designing for, the totality of the app aesthetic, etc. Have patience with each other. Sweat the details, but don't let that paralyze you into not trying things, failing, and trying again. Ultimately you will ship something you know isn't going to be as good as it could be in any case, so do your best to balance but don't expect it to be perfect on the first go.

Author: "ambrogio" Tags: "UX, Design"
Date: Tuesday, 15 May 2012 15:34

Since Microsoft debuted the new Windows 8 "Metro" OS interface, there has been no end of people in the Microsoft dev space opining on both the interface and its applicability or lack thereof. Not long after BUILD last year, one blogger announced that there are "only five Metro style apps in the world," definitely not including LOB. (I'm not linking, so as not to embarrass that person.) I've also often heard a similar sentiment in private discussion lists that I participate on. Clearly, the sentiment goes, Metro is only for consumer apps. No real line of business work could be done using that interface.

I mean, as we all know, LOB apps have to have busy, intimidating, and overly complex, fiddly UIs, right?  LOB implies an unfortunate huddled mass of pitiable information workers who don't deserve usable interfaces.  The Man can keep them down--because they're paid to use these apps, after all.  "You need a new feature?  Okay, here's another menu item."  "What, you think this is hard to use?  Just read the manual!"  And my personal favorite when justifying atrocious LOB app design: "We can just train them." 

Okay, so granted, there are some valid concerns when considering a data-entry-intensive app on a touch-first interface. But for the most part, it seems that, whether cognizant of it or not, these devs have some of the above assumptions in their minds. In addition, they are thinking of their existing, typically developer/BA-structured apps and trying to do a direct port, as if the Metro stack is just a new UI technology calling for yet one more port of outdated, early-90s-era app design. After all, it's all about the technology, right?

Or maybe they just personally don't like the Metro design language. I've certainly heard a lot of grumbling about that, and you can see it in the comments on the Windows 8 blog. It is an extension of the self-referential design that has powered the vast majority of business app design since it became technologically feasible for such apps to proliferate. But good design is not about you or what you like--it's about the people who are going to use it and the businesses they work for. And it's crucial to keep in mind that you are not they.

It seems that one of the key messages Microsoft is trying to get across is being lost on these folks--that we need to "re-imagine" these apps. In fact, as I see it, Microsoft is using this new technology to force the point home. You simply cannot continue the same design approach on the new technology stack. They are in essence trying to wake huge swaths of Microsoft developers up to what real Design is all about. It's not about technology. It's not about features. It's not about back-end systems or services. It is about the user experience; it is all about the user experience. And great user experience is very rarely (with some specific exceptions) about loading an interface with tons of capabilities that require a special, high-precision pointing device (the mouse) to use effectively. Certainly the vast majority of LOB apps do not call for that kind of design.

There is a lot of evidence that focusing on good design in LOB apps provides real ROI. Don't take my word for it--Bing or Google it. Great LOB app design increases employee job satisfaction; it makes team collaboration more effective; it reduces error rates and increases productivity. That means higher retention rates through happier employees, and streamlined business operations. It is, in short, a win-win.

Since the announcement, we've slowly been seeing evidence to confirm that Microsoft does indeed intend for those building on this new technology stack to target LOB (and that others in the space are getting it). Here's a brief sampling of that:


This sampling delves into different aspects that indicate the intent. Some of them are OS features, like portability, security, reliability, and so forth--Microsoft is continuing to focus on these aspects for the Windows 8 OS.  Some of the articles are more telling, though, in relation specifically to Metro-style apps (i.e., apps that run on the WinRT stack and have all the Design requirements we're talking about that supposedly aren't suitable for LOB).

Some prime examples of the latter are the Dynamics examples and the new Office 15 examples.  But even in the current preview, you can see some central LOB apps--what is more LOB than email and calendar? What is more LOB than Office and CRM?

You can also see the direction in the fact that Windows on ARM is not supporting desktop apps. How many execs and business stakeholders want to juggle multiple devices that almost do the same thing, switching back and forth between their lightweight, long-battery-life tablets and a heavier, clunkier laptop just to take care of business? The impetus is toward these lightweight, touch-first devices. There's a whole focus in IT these days on BYOD--bring your own device--because of this.

The Challenge

And that brings me back to the point. Instead of making snap judgments that this or that LOB app is not for Metro, how about seeing it as a challenge? The tech world has been and is changing, moving ever so surely towards more ubiquitous computing, more devices that can access the same information and augment and enhance human endeavors no matter where we are. Technology limitations managed to chain line of business workers to desks and then portable "laptop" desks for years, but we are shifting away from that.  The more concrete bottom line: the days of assuming bigger and bigger screens with a keyboard and mouse-like pointing device are dwindling; the time for multiple screens and multiple modes of interaction is here.

Now is a great time to be a Microsoft dev. Microsoft is carrying forward a lot of the familiar development tooling and experience. You can still use XAML. You can use the ever-present HTML-driven stack. You can even bring forward C++ skills. But as part of the deal, you need to start re-imagining the way you think about application design. Microsoft gives you a lot of the design framework to work within--MSDN has some great design guidance.

Now is not the time to be prejudging the Metro way of things based on your own preferences. It is indeed a strong design language, but that also carries benefits in helping you to design more usable apps that play well in the new world of technology (and to break away from a lot of bad design habits).  And you don't have to rely on just Microsoft, either. You can look outside of what they offer. Luke Wroblewski has been promoting "mobile first" design for quite some time. Much of mobile first is applicable to Metro app design, including the core starting point of re-imagining and re-focusing on people--what they need, when they need it, instead of everything they could need at any time they might need it. Beyond mobile first, simply applying human-centered design in general will help you a lot in Metro app design.

Metro isn't just about whether someone wants to walk around with your app. It's not just about literal mobility. It is about focusing on great design to facilitate great user experiences, even in line of business. If, after re-imagining your app, you find that Metro style still doesn't work for you, so be it. You still have (for now) the option to use the Desktop side of Windows 8 (on non-ARM devices). But please, at least try. This isn't about porting your app as-is to the latest technology; this is about using it as an opportunity to create a better app for real people, one that truly suits the way they can work best.

For what it's worth, I have done some thinking about adapting LOB apps for Metro style. Here are a few questions to ask yourself that might help (in addition to the user-centered mobile first principles):

  • How can I craft new pickers to minimize reliance on the keyboard? (This, I think, is an area ripe for innovation on a touch-first platform.)
  • How can I chunk up my current app into smaller, more task-focused apps?
  • How can I leverage live tiles to create a dashboard for these small apps?
  • How can I design snapped layouts to coordinate meaningful use of two apps at once?
  • How can contracts help easily and effectively share information across these LOB apps?
  • How can I learn more about human-centered design? Should I perhaps hire/advocate for full-time UX staff? (Devs can do a lot to improve UX if they apply themselves to it, but getting "professional help" can certainly take things to the next level.)


As I said, the MSDN site offers much more design guidance for Metro as well; these are just some initial considerations that might help.  But the first step is to break out of your preconceived notions about LOB apps! You can do it! :)

--

Want to discuss? Feel free to comment here, or you can reach me on Twitter (@ambroselittle), G+, or LinkedIn.

Author: "ambrogio" Tags: "UX, Metro, Design, Win8"
Date: Monday, 30 Apr 2012 15:30

Over this last weekend, a couple of Infragistics colleagues and I spoke at TechBash 2012 in NE Pennsylvania. The talk is one I've shared at a number of places over the years, but because it's not technology based, it doesn't really age much (apart from my hair).

The content is all about UX--providing a brief intro, some of the key aspects/disciplines, why devs should care, what they can do to improve UX in their apps even if they don't hire UX folks, and the last chunk dives into select UX patterns. The slides are pretty full of info, so you could read through them and their notes to get a lot of the content I share, though it won't be as much fun, of course. :)

I hope by now all devs are investing in UX in one way or another, so if it's a new topic to you, please do dig in. Feel free to use my slides as a starting point, but there are tons of other good resources out on the Web, such as the MSDN Usability in Practice series I co-authored with Dr. Kreitzberg that focused on helping devs get into UX (a quick Web search will bring up many other awesome resources as well).  Our free UX patterns explorer, Quince, is another great tool.  And of course, Infragistics prioritizes good UX in our own work, so if you leverage our products or even our UX Services, we can help you get where you need to be in improving the UX of your apps.

Author: "ambrogio" Tags: "UX"
Date: Thursday, 26 Jan 2012 22:58

Did you know we CTP'd a new Undo/Redo framework with our last release?  Well, we did!

And we'd love to get feedback from you--do you think it's useful? How would you use it? How would you change it?  That sort of thing.

Here's a quick rundown of the key features:

  • Undo/Redo History Stacks – The UndoManager automatically manages the Undo/Redo history stacks. The collections raise change notifications and therefore may be used as a source for providing a UI that displays the entries in the history stacks.
  • Built-In UndoUnits – The framework includes a number of built-in undo units that support undoing/redoing property and collection changes.
  • Delegate UndoUnit – The framework makes it easy to add a method/delegate as an operation in the undo/redo history.
  • ObservableCollectionWithUndo – The framework includes a derived ObservableCollection that makes it easier to add support for undo/redo of collection changes.
  • Transactions – Provide the ability to group one or more operations into a single history entry. You can also roll back a transaction, undoing the operations that were recorded up to that point.
  • Custom UndoUnits – The framework allows developers to create their own custom UndoUnit to support undo/redo of other types of operations.
  • Commands – A number of built-in commands are provided to make it easier to invoke an Undo/Redo operation from within your UI.
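
If the shape of these features is unfamiliar, the pattern behind them is easy to sketch generically. Below is a minimal, illustrative Python sketch of the undo-unit idea--not the Infragistics API, and every name in it is invented for the example: a manager keeps undo/redo history stacks, a delegate unit wraps a do/undo pair of callables, and a transaction groups several units into a single history entry.

```python
class DelegateUndoUnit:
    """Wraps a do/undo callable pair as one undoable operation."""
    def __init__(self, do, undo):
        self._do, self._undo = do, undo

    def execute(self):
        self._do()

    def rollback(self):
        self._undo()


class Transaction:
    """Groups several units into a single history entry."""
    def __init__(self, units):
        self._units = list(units)

    def execute(self):
        for unit in self._units:
            unit.execute()

    def rollback(self):
        # Undo in the reverse order of execution.
        for unit in reversed(self._units):
            unit.rollback()


class UndoManager:
    def __init__(self):
        self.undo_stack = []  # entries that can be undone
        self.redo_stack = []  # entries that can be redone

    def record(self, unit):
        """Execute a unit and push it onto the undo history."""
        unit.execute()
        self.undo_stack.append(unit)
        self.redo_stack.clear()  # new work invalidates the redo history

    def undo(self):
        unit = self.undo_stack.pop()
        unit.rollback()
        self.redo_stack.append(unit)

    def redo(self):
        unit = self.redo_stack.pop()
        unit.execute()
        self.undo_stack.append(unit)


# Usage: undoable appends to a list.
items = []
mgr = UndoManager()
mgr.record(DelegateUndoUnit(lambda: items.append("a"), items.pop))
mgr.record(DelegateUndoUnit(lambda: items.append("b"), items.pop))
mgr.undo()   # items is now ["a"]
mgr.redo()   # items is back to ["a", "b"]
```

Clearing the redo stack on each new recording mirrors the common convention that performing new work invalidates the redo history; and because the stacks here are plain lists, a UI could enumerate them to display the history, much like the change-notifying collections described above.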

Go download (for WPF or Silverlight) and play with the bits!

Let us know what you think. You can either email me (ambrose at infragistics dot com) or post in our WPF or Silverlight forums. We'd really love to get your feedback, and there's no better time than now!

Author: "ambrogio" Tags: "Silverlight, WPF"