The Myth of the Perfect Sketch

I have a feeling that my group is suffering from a type of paralysis that makes it hard to produce sketches. But sketching is exactly what we need to do. We’re almost halfway through our two-week intensive Design Experience module, where we have to come up with a navigational device for tourists and present our work in the form of a poster. And the only way we’re going to arrive at a design is by sketching it out.

But I suspect some of us feel that a sketch should be a “finalized” output, and that one or two should be enough. So we spend our time discussing what the sketch should look like rather than actually doing it.

Then I stumbled across this post on metacool, and it gave me some comfort that design isn’t about being as good as someone else or something else. Expert designers have probably produced piles of sketches before stumbling upon “the right one”. Sometimes we just need to get over ourselves and get stuff done.

That being said, sketching is an art, not a science. I’ve used it in at least the following ways:

  • to understand high level requirements
  • to visualize types of designs
  • to visualize ways of presenting information
  • to illustrate flow and sequence
  • to illustrate interaction
  • to provide contextual background
  • to highlight portions of a design
  • to consolidate designs together
  • to tell a story

There are probably a million other things you could do with sketching, and that’s the point. It’s a visual language. Think about the million and one ways you could say “hello”, and try to put that as a sketch on paper.

I guess I consider myself lucky that I got into comics at a very young age – not just reading them, but drawing my own. Sketching was just part of the whole process. It does take practice, and after the millionth time you’ve drawn a wireframe with pencil and paper, you’re not going to ask the person next to you – “so, how do I do this?”.

So, I guess I should get on with it.

Blogging Definitions Overload – One to Rule Them All

I started my first web log on Blogger years ago, probably from my little corner of the office as a software developer. At the time it was just a way to post random stuff about life, but over the years I slowly realized the potential it had to touch other people’s lives (as well as mine). By the time I realized that, though, I had put up so much junk on my blog that no one apart from me would ever read it, and I feared no one would ever take me seriously.

So I launched a separate blog to discuss the more serious things I cared about, like jobs and careers. And then, when I got into the masters program, I launched this blog to talk about UX. Now I manage three blogs, plus a food aggregator that caters for two countries I don’t currently reside in, and that can be a lot of work sometimes. That’s when I start to relate to what people say about what blogs are, what they should be, and what they’re not.

So sometimes I think it’s a way to post random junk. Then I think no, it’s a way to inspire others. Then, I think… no, I should make it sell – sell my ideas and make me rich (right).

And the plain fact is – it’s just a tool, dammit. Use it however you want.

*bonk*

Recently, a classmate of mine who is a total news junkie (his own words) introduced me to Dave Winer’s blog, Scripting News. Any self-respecting internet pundit knows Winer’s claim to fame (the invention of RSS). He is someone the NY Times calls “the protoblogger”. Skimming through his articles, I caught glimpses of his “proto-posts”.

Dave’s posts are brief, but packed with insight. They are personal, but not revealing. They are vocal, but not contentious. There’s a lot of variation as you move out into the blogosphere, but Dave’s blog sits smack in the middle.

And if I ever really needed to give a good definition of a blog, Dave’s would be it.

So, there.

Is There Such a Thing as a Lone UX cum Web Developer?

I’ve just spent the last 10 or so hours mucking around with Kohana, Doctrine ORM and jQuery – all of which I really enjoy and think are great, but I’m starting to doubt my own ability to code. Do JavaScript programmers spend more time building functionality and interaction, or wrangling libraries and fussing over browser compatibility? While I think jQuery is a brilliant API, I’m always wary of the quality of the plugins people write. Same goes for WordPress plugins. I guess free does come with a price (like the price of not using Java).
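As a rough illustration of that wariness, here’s a minimal sketch of the defensive wrapping I end up doing around third-party plugins. The plugin name (tooltipify) and its options are invented for the example; only the core jQuery calls are real.

    // Hypothetical defensive use of a third-party jQuery plugin.
    // "tooltipify" is a made-up plugin name, used for illustration only.
    jQuery(function ($) {
      var $targets = $('a.help');

      if (typeof $.fn.tooltipify === 'function') {
        try {
          // Enhanced behaviour if the plugin is present and well behaved.
          $targets.tooltipify({ delay: 200 });
        } catch (e) {
          // A buggy plugin shouldn't take the rest of the page down with it.
          $targets.attr('title', 'Help');
        }
      } else {
        // Plugin missing: fall back to the browser's plain tooltip.
        $targets.attr('title', 'Help');
      }
    });

It’s not glamorous, but it means one sloppy plugin degrades gracefully instead of breaking the whole interaction.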

Which leads me to think – can there be a lone UX expert who also does web development? I’m sure there are folks out there who make a living doing this, but the literature treats the fields so separately it’s hard to see how these experts manage the line between the two.

Even with a team of two – a UX designer and a web developer – how do they interact? Does the UX designer get a head start to come up with all the wireframes and storyboards, and then hand them over to the programmer to make functional? Do they work together in an agile fashion?

I think that, as with most technical work, there’s less and less breathing space for UX designers and web developers to work in very small, efficient teams, unless they are very, very good. I’m not saying that everyone else just sucks, but building websites can take a lot more time than you think, and unless you’re designing mom-and-pop websites all the time, it’s going to be hard to guarantee how much time is required to build good sites.

While plugins and APIs can be useful in increasing speed, they can also lock down the interaction and degrade the user experience if not planned well. Maybe established sites understand this all too well and take a phased approach. Maybe this is why Flickr launches a new (but exciting) feature only once every few months.

I used to think that as technology improved, so would our ability to build products. But I find it isn’t that simple. Despite all the effort being put into building so many plugins, APIs, platforms, patterns, components and so on, it still takes a lot of effort to put things together properly.

Thus, all software is bespoke; it is not the Lego-like mashup of neatly interfacing components we tend to imagine.

My only question is: if we are to get better, how? Apart from being willing to devote ourselves to our tools and simply get our hands dirty.

update: I found this presentation by Leah Buley of Adaptive Path, which she gave at SXSW ’09. It comes quite close to what I was talking about. I’m not sure it always works out so simply, but I like the idea.

Practice Seminars at UCLIC

A List Apart recently published several articles about Web Education, spelling out how hard it is for students to come out prepared for jobs in the web industry. Having been a practitioner for several years myself, I can surely sympathize, but I can also see how academic institutions struggle to reconcile the competing pressures of research, teaching, and learning.

Contrast this with simply “learning on the job”: there’s a huge amount of tacit knowledge you can acquire in a relatively short period of time, but only if the conditions are right. In industry, you do it every single day. But in class it’s quite hard to teach that to students who haven’t yet grasped what it’s like to work on websites or other interactive systems on a daily basis.

The Interaction between Academia and Industry

The HCI program at UCL tries very hard to give students a good flavor of not just the academic and theoretical side of things, but the practical side as well. Because of its strong background in psychology and computer science, there may be a tendency to think it lacks the kind of practicality designers live by on a daily basis – but the people who run the program understand this, and design does have its place in the curriculum.

Certainly, in the academic HCI community, design is often seen as a black box – as though some kind of magic takes place whenever you design a website. But there’s a whole lot that goes on in the design process. I’m glad to be given the opportunity not just to learn the skills required, but to see how industry and academic trends have evolved over the years. That history tells you a lot about the field’s different perspectives, which is probably why we have so many terms for practitioners (e.g. information architect, usability engineer, interface designer, etc.).

However, there have been significant contributions and conversations from both sides. Norman and Nielsen have a long history in the HCI community, and their work is often cited. On the other hand, you have folks like Alan Cooper and Steve Krug who are better known in industry.

Straight from the Horse’s Mouth

One of the things that UCLIC has done from the very beginning (back when it was more strongly associated with human factors and ergonomics) is to have people from industry come in and describe what the ‘real world’ is like. The sessions are optional and we don’t get slapped for skipping them, but they’re such a good way to hear many different perspectives from practitioners.

We’ve had ergonomists who’ve worked on air traffic control centers, information architects who’ve worked on massive knowledge systems as well as simple sites for financial institutions, interaction designers and user experience researchers from Microsoft and Google, all-rounders from small design companies like Clearleft, usability practitioners who do game testing…

It’s just amazing to see the spectrum and application of HCI in industry.

They come in different shapes and sizes

Although this sounds like a plug for the program I’m attending, I really don’t know what other programs are like. I certainly considered a more design-focussed program like the ones offered at the University of the Arts London or the Savannah College of Art and Design. Even my alma mater, the University of Kansas, has begun offering modules in interaction design. But since my background is deeply technical, I went for something more HCI-based and hoped it would give me some exposure to design (it has).

I appreciated that the different terms (IA, UxD, UX researcher, etc.) meant something specific, even if one person seemed to be doing all of them. An information architect may do usability work, but the reverse isn’t always true. Likewise, if you’re a designer, you’re not always equipped to do good qualitative research about user behavior, even though it may be extremely helpful to your work – while anthropologists do this every day. Engineers have insight into how technology works, but psychologists are needed to show how the mind works.

It’s during the practice seminars that I got the sense that I don’t have to box myself into a particular category – it’s about learning to use my skills, presenting them appropriately to whoever is consuming my services, and embracing the constantly changing nature of the field.

Change Blindness and Short-term memory buffers

The flickers (not flickrs) used to demonstrate Change Blindness in the video posted in the link below last only a few milliseconds, but they’re a powerful visual demonstration of just how easy it is to lose a reader or viewer’s attention.

This means that visual clutter, if not used in a purposeful way, can have a major effect on interface design. More so because sites are interactive – is the site designed around the user’s goals? Issues like whether distraction is appropriate, and even branding and immersion, can affect the overall experience for users.

link

Okay, so maybe pages aren’t designed with millisecond lapses of flashing gray blobs, but what if a sidebar that presents new information keeps getting missed? What about ad placements? A good place to start might be theory, so some Gestalt psychology might help:

[Figures: the Gestalt laws of Closure, Proximity, Continuity, Similarity, and Prägnanz]

Basically, these funny shapes just mean that people tend to group things together to form some kind of meaningful unit: the closure pattern looks like a circle, the proximity pattern makes the four blocks look like one unit, the continuity pattern makes the viewer want to fill in the blanks, and the similarity pattern makes one grouping of the elements stand out over another. There are more laws, but the basic gist is that things need to make sense, and these visual arrangements are more likely to be read one way than another.

Of these laws, the law of Prägnanz is sort of the overriding principle – one to rule them all.

Couple this with Change Blindness, and you might wonder how these patterns may help to either diffuse or illuminate particular elements. Visual clutter can be easily achieved by dumping a random collection of these patterns into one thick slush.

Add to that the tendency for users to leave your site within seconds of not finding what they want.

Caveat emptor. Design isn’t just a pretty thing.

Web Frameworks: Key to User Experience

I’ve been working on web applications for a few years now, and while I don’t claim to be an expert on the subject, I have gotten my hands dirty building apps on all sorts of web frameworks (about 8 now). After being exposed to the UX community for nearly a year, I’ve realized just how diverse this group is, though most of us work on UX for websites. I’m curious to know just how well UX people understand web frameworks and their impact on the web development community.

In this post, I’m going to share a bit about my background as a software developer, in the hope of shedding some light on how web frameworks affect UX-related work. My hope is to build some understanding between the two camps (UX folks and programmers) so that we can attain a common goal: building applications and services that are robust, functional, and satisfying.

Since this post is aimed at a general UX audience, I’m going to avoid talking about the philosophy and thinking behind UX; there is enough literature on that subject for software types who are interested. Instead, I’ll talk about the other side of the fence – what are software developers like? What makes them tick? Why do they do the things they do? And why can’t we get along sometimes?

The Philosophy of Features and Functions

In his book The Inmates Are Running the Asylum, Cooper describes developers who commonly make poor assumptions about users, forcing technology on them in ways that cause frustration and confusion. This is partly because the kind of thinking and immersion that goes into building a software application forms a culture of its own, and it’s hard to appreciate users, let alone design applications for them, while you’re focussing on building those applications.

Software developers are extremely technical people, and the work of building software involves not just knowledge and skill but also other factors: the culture of working in software teams, physical environments, system allegiances (open source vs. Microsoft, Oracle vs. MySQL, etc.), organizational structures, the way projects run, and relationships with clients and/or internal staff.

Because of this tacit and specialized knowledge, software people can often wield enormous influence. It is not difficult to find software groups that are isolated or contained, working in the environment they feel they need in order to build software. Usually this involves having a computer to yourself, a space conducive to software development, some degree of flexibility in work schedules, and close proximity to teammates (though this isn’t always the case with certain software practices).

A new recruit who is given his or her desk in the software group will feel removed from marketing, sales, admin, human resources, and management. There are, of course, exceptions to this – but many companies operate in this fashion, and it seems to work fine for a lot of people.

While this kind of environment is good for producing application functionality, it reinforces the idea that functionality is the ultimate goal – hence the effort to provide for and manage the production of “function”, since this dictates the health of the team. Software people are rewarded for producing functionality, after all, and to this end they will deliver. This is why feature creep is so common – software people gain rewards, intrinsic or otherwise, from building features.

Frameworks are Bespoke

This is where web frameworks come in. Because of the increasing demand for software of all kinds, there is tremendous pressure on developers to build applications quickly while still meeting requirements and remaining robust and scalable. Software teams often lean on web frameworks because building applications from scratch is costly and time consuming.

However, while most frameworks are built with enormous flexibility in mind, they are also bespoke (standalone, designed for a specific purpose), and they dictate the kind of thinking and culture that grows up around the building of an application.

One of the first web frameworks I used was Apache Struts, which is based on Java. It was an amazing platform that put me in control of so many aspects of an application: databases, internationalization (multiple language support), file organization, and templating. However, it had a steep learning curve, and once I had managed to integrate all that knowledge and build an application, my time was consumed by making sure everything was built and designed with the framework in mind.

I had to be extremely careful that certain configuration files were in the right place, and that I didn’t misunderstand the “thinking” behind Struts that governed the way applications were meant to be built on it. And because Struts was based on Java, it also had to have a philosophy compatible with “The Java Way” of doing things, which meant you needed to appreciate things beyond the framework itself.

When software platforms change, or when teams migrate to different web frameworks, this thinking needs to change too. It is almost a paradigm shift, because Java does not work like PHP or Ruby, Oracle is quite different from MySQL, and so on. So when technologies change, web frameworks follow suit.

This is why some folks advocate that interaction designers and software developers should not be the same person – building software takes up a lot of effort.

Bridging the Gap

Understanding the differences between web frameworks can be a boon (no pun intended) for UX practitioners who work closely with software people – provided, of course, that they appreciate the thinking and culture behind them. The reason I focus on frameworks rather than software technology in general is that they are a very common way of building web applications today, and they are the enabling platforms for interface developers and designers – who will probably only ever see the tip of the iceberg through template files (e.g. Smarty templates, HTML, CSS), JavaScript and images.

37signals popularized the notion that “the interface is the application”. This can lead to some serious misconceptions about how applications should be built. In the software world, the whole is usually more than the sum of its parts. Interfaces depend not just on images, HTML, JavaScript and CSS, but on the architecture they sit on – databases, servers, and very often the web framework.

Because of this, software developers are constantly aware of the separation between “view” and “logic”, a separation that mirrors the relationship between designer and programmer, interface and framework, user and system. While this keeps things clean and organized, decoupling these relationships can also create an artificial “gap”, which can lead to animosity between designers and programmers.
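To make that split concrete, here’s a minimal sketch in plain JavaScript rather than any particular framework. The function and field names are invented for illustration, but the shape is roughly what MVC-style frameworks enforce: the view knows nothing about the data source, and the logic knows nothing about HTML.

    // Logic layer: knows about data and rules, nothing about HTML.
    function getRecentArticles(db, limit) {
      // In a real framework this would typically be a model or ORM call.
      return db.articles
        .filter(function (a) { return a.published; })
        .sort(function (a, b) { return b.date - a.date; })
        .slice(0, limit);
    }

    // View layer: knows about HTML, nothing about where the data came from.
    function renderArticleList(articles) {
      var items = articles.map(function (a) {
        return '<li><a href="' + a.url + '">' + a.title + '</a></li>';
      });
      return '<ul class="articles">' + items.join('') + '</ul>';
    }

    // Controller: the only place where the two layers meet.
    function articlesController(db, respond) {
      respond(renderArticleList(getRecentArticles(db, 5)));
    }

    // Example usage with an in-memory stand-in for a database.
    var db = { articles: [
      { title: 'Hello', url: '/hello', published: true, date: 2 },
      { title: 'Draft', url: '/draft', published: false, date: 3 },
      { title: 'World', url: '/world', published: true, date: 1 }
    ] };
    articlesController(db, function (html) { console.log(html); });

The designer can live entirely inside renderArticleList and never touch the rest – which is exactly where the “gap” comes from.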

This is the point that is most crucial to both sides. A shared understanding of how interfaces are designed and how frameworks run leads to better software. For example, certain systems handle textual data better than others, certain frameworks are designed for certain kinds of platforms (Struts and Spring for enterprise apps, Django for content-heavy, layout-driven apps), and the way you build an application dictates how it performs at the interface.
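As a small, hypothetical illustration of that last point, compare fetching a list of comments one item at a time with fetching it in a single batch. The endpoints here are made up, but this is exactly the kind of back-end decision the user experiences directly, as a sluggish or a snappy page.

    // Naive approach: one request per item. The user watches the list
    // trickle in, and N network round-trips dictate the perceived speed.
    function loadCommentsOneByOne(ids, done) {
      var comments = [];
      function next(i) {
        if (i >= ids.length) { return done(comments); }
        jQuery.getJSON('/api/comments/' + ids[i], function (c) {
          comments.push(c);
          next(i + 1);
        });
      }
      next(0);
    }

    // Batched approach: one request for the whole list. Same markup,
    // same interface, but a very different experience for the user.
    function loadCommentsBatched(ids, done) {
      jQuery.getJSON('/api/comments?ids=' + ids.join(','), done);
    }

Nothing in the HTML or CSS changes between the two, which is why this kind of decision is so easy for both camps to overlook.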

This is only the beginning

We’re only beginning to scratch the surface here, since a discussion of what a web framework really looks like, and how applications actually get built with one, needs more time than this post can afford. But it’s a good start, and no doubt UX practitioners and programmers will share a partnership around technology for a long time to come. My hope is that we’ll build a better understanding of how this partnership can be formed, strengthened, and communicated.


Safari 4 – initial impressions

I downloaded and installed the Safari 4 Beta and ran it on my laptop running Vista.

Performance and Functionality

It takes a bit of time to load up, but it seems to run faster than the previous version of Safari. Some bits of JavaScript functionality on sites failed to work when I had too many tabs open. I like that the tabs sit flush at the top, similar to the Mac menu bar, but I don’t like how thin and unreadable they are – it’s hard to tell them apart. When I had too many open, they were hard to scroll to; I’ve gotten used to Firefox’s mouse-wheel navigation for windows with too many tabs.

For some odd reason, when I start Safari up, the window isn’t maximized – which is kind of annoying, but I can live with that.

Favourite/Top sites

If you love Apple, you’ll probably love this. Whenever you open a new tab, it displays a grid of your favourite sites (like the image at the top). It was nice eye candy to start with, but then I wasn’t sure what I’d do with it. Then I realized I don’t have to do anything – it kind of ‘remembers’ which sites I’ve been to. I’m not sure I’m ready to switch to Safari, but if I used it on a regular basis this might be a useful feature.

I want X feature

I’m still not convinced it’s blazingly fast, so my bet is still on Firefox. It would be nice to see some hotkey commands that are straight-up easy to use (with key help displayed in menus or whatnot).

Has a lot of potential.

The value of practice in design methodology

I’ve been exposing myself to more of the literature surrounding the way design works. In some ways I could call it design literature, but that’s much too broad. Or I could say it’s about how designers work, but again – not all designers work the same way.

In a way, I’m searching for something about the way design works that’s more tacit – implicit, yet indispensable.

The reason is that working from a methodology or a process and coming up with a really good idea aren’t quite the same thing. When I spoke to a classmate recently about the way design works, he found it hard to articulate why he designs the way he does. But we both agreed that there are bad designs, bad designers, and things designers shouldn’t do. It seems like you could put that into a checklist to tick off, but it’s not quite like that.

I stumbled across an article by Michael Bierut on the Design Observer site about his design experience. The honesty with which he admits that design is not too far from a ‘magic spark’ is indeed revealing about the sort of work that goes on behind the scenes.

He borrows the analogy presented by Rob Austin and Lee Devin regarding theatre production – how stage crew and artists collaborate on minute-by-minute execution within a tightly controlled environment and time span in order to deliver an impact to the audience. That synergy, well executed, is hardly a random exercise, but it isn’t easy to put down in books or spreadsheets.

It’s easy for me to say a piece of software has to be built this way because it requires so-and-so features, but it’s another thing to create something from scratch, and say ‘this is going to work best for the user experience’.

My reading of this is clearly coloured by my own background as a software engineer, and it’s making me more sensitive to how designers learn and apply their “tricks of the trade”. In this sense, Buxton’s Sketching User Experiences has been extremely insightful – but only if you appreciate that what the book is really offering is a peek into the mind of a designer, not just another tool for whacking together a pretty interface.

I’m not discounting the fact that building software relies on tacit and practical knowledge as well, but in the realm of design this seems more magic than method – hence the question put forth on the IxDA discussion board. While good software can most certainly be replicated, can good design be replicated too?

The more I read into it, the more I’m realizing that the answer lies in practice, and the understanding of it.

Serif or not to Serif

I found this article a while back regarding the use of sans-serif and serif fonts.

http://www.alexpoole.info/academic/literaturereview.html

Although sans-serif fonts are widely favoured for webby stuff, that doesn’t mean a serif font can’t work just as well for building highly legible sites.

The conclusion is obvious – use type appropriately based on your needs.


Other issues regarding font-use:

  • legibility vs. readability
  • dyslexia / accessibility
  • aesthetics
  • technical boundaries
  • custom fonts
  • internationalization

Vimeo subtly encourages sign-up through comments


I was watching Don Norman’s talk from UX Week ’08 – I’m not a member of Vimeo or anything – but the blurb in the comments section below the video was really nicely done. It genuinely encouraged me to take part, and it’s an interesting example of “persuasive technology”.

For one, it’s almost static. There’s no “Wizard of Oz” guy behind the system trying to coax potential recruits into interacting with the site.

Then, it’s partly contextual – because the video is a conference talk, it feels even more appropriate to contribute to the conversation (I don’t know if they did that on purpose).

Thirdly, it’s placed appropriately in the comments section, although it doesn’t even say it’s a comments section. How did I know? I just assumed it – most of us are used to seeing comments as a trail below the main content. Vimeo picks up on that convention and uses it very subtly but very aptly.

Although I didn’t sign up immediately (because I wasn’t intending to participate in the conversation), I think someone who was interested in taking part would, and that’s the point – making it easier for users to accomplish their goals, cordially, contextually, and effectively.