Monday, May 4, 2009

The Inmates Are Running The Asylum

Comments:
Comment #1
Comment #2
Comment #3

Summary:
The Inmates Are Running The Asylum points out that programmers are usually the ones running the show when it comes to designing how a user interacts with a program. The author argues that this has negative consequences for the finished product, either because the programmers take shortcuts when designing the interactions or, most likely, because they misjudge what would be best for the users.

Programmers tend to have a different mindset than average computer users. Programmers want power and control, even at the cost of making an application more complex to interact with. Most people would rather give up that power in exchange for a simpler interaction with the computer. Programmers do not understand this, and it is a mistake to let them design the interaction. You need to bring in people who know how to design interactions.

Discussion:
While I like that this book pointed out that programmers are selfish when they program and tend to program what they would want in a program, I did not like that the book did not give much hope to programmers. Looking back, I can see that I usually assumed that what I thought was best for me would tend to be best for everyone. But looking forward, I would like to think that I can approach designs from a casual user's perspective now that I know to do that. I do not like that the author suggests bringing in interaction designers instead of teaching the programmers who are willing to learn.

Journal of Experimental Psychology


Paul M. Fitts

Comments:
Comment #1
Comment #2
Comment #3


Summary:
Paul Fitts, in his Journal of Experimental Psychology paper, is responsible for Fitts' Law. Fitts' Law is a function that predicts the time it will take a person to move to a target area. It takes into account four variables: the start and stop time of the device being used to reach the target (a), the inherent speed of the device (b), the distance from the start point to the target (D), and the width of the target (W). These combine to give the time it takes to reach the target (T): T = a + b*log2(1 + D/W).
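As a quick illustration, the formula can be turned into a small Python function. The constants a and b below are placeholder values I chose for the example, not values from Fitts' experiments; in practice they are fit empirically to a particular pointing device.

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Predicted movement time T = a + b * log2(1 + D/W) (Fitts' Law).

    a, b -- device-dependent constants (placeholder values here);
    distance -- from the start point to the target center (D);
    width -- width of the target along the axis of motion (W).
    """
    return a + b * math.log2(1 + distance / width)

# The same distance with a wider target predicts a shorter movement time:
print(fitts_time(400, 20) > fitts_time(400, 40))  # True
```

Doubling the target's width at the same distance lowers the predicted time, which is the intuition the law captures.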

This formula was discovered by running multiple experiments in which participants were timed as they moved to a target area. Looking at the results from these experiments, Fitts found the trends that occurred and derived the formula.


Discussion:
I do not think I will ever pull out my Fitts' Law formula and use it to see if a button in my user interface is big enough or a menu item is too far away. I will, however, remember the principles that come out of it: the bigger and/or closer a target is, the quicker and easier it will be to reach. It seems like this should not need a paper to be known; it should be obvious to everyone through intuition. It is a very important concept, though, that I do not think I would have ever consciously considered if I had not read this paper.

Human-Centered Design Considered Harmful


Don Norman

Comments:
Comment #1
Comment #2
Comment #3


Summary:
At the time that this paper was written, the CHI community focused on creating tools and interfaces that were centered around the user. This seems like it would be in everyone's best interest; after all, users are the ones who will be using the tools. However, Don Norman suggests that centering a design around a user will not always result in the best design. There are times when the design needs to be centered around activities.

Norman points out some great designs that were created with the task, instead of the user, at the center of the design. One of these examples is the famous automobile. Norman suggests that users will adapt to the tool, and in some cases this is preferable to the tool adapting to the user.


Discussion:
Another Norman reading. I liked it as usual, and am very grateful that it was wrapped up into 6 short pages. Like all of the other things he has written that I have read, I think the information is very useful. His recommendations should never be the sole source of guidance, but it will always be beneficial to consider everything he has had to say in the past.

Sunday, May 3, 2009

Ethnography Considered Harmful


Crabtree et al.

Comments:
Comment #1
Comment #2
Comment #3


Summary:
The CHI community has been increasingly using ethnographies as a method of analyzing people and cultures, with the intent of developing applications and interfaces that better cater to the targeted demographics. The authors of this article suggest that this increased use of ethnographies, along with the straying from traditional methods of carrying out an ethnography, is harmful to the CHI community and CHI projects.


Discussion:
This paper was difficult for me to understand. I had a hard time figuring out where the authors were coming from. It seems to me that it would be beneficial to know who you are developing interfaces and applications for. Even if the interface or application was not written entirely for the user, but for a task (like Don Norman suggests in Human-Centered Design Considered Harmful), I still do not see how it can be detrimental to know your users a little better.

Usability Evaluation Considered Harmful


Saul Greenberg, Bill Buxton

Comments:
Comment #1
Comment #2
Comment #3


Summary:
The CHI community has come to almost require usability evaluations on any project, and the results of these evaluations are used to judge and rate any novel ideas a project presents. The authors of this paper suggest that not only are these evaluations unnecessary at times, they can even be detrimental to a project.

If an idea receives a bad evaluation in a usability study, it is pretty much dead in the water. This can kill a project before any novel ideas have really been fleshed out. There is no need to evaluate an idea before it has been fully developed. It has not reached its full potential yet and, if the evaluations are bad, it never will.

There is also the chance that an idea, even if it is an improvement over the standard conventions, will be rated poorly against those conventions in a usability evaluation simply because users are already comfortable with the standard. This, too, can kill a potentially great project.


Discussion:
I enjoyed this paper and fully understood what the authors were saying. If a project is directed by the numbers that are given to it in a usability evaluation, the developers of that project are prone to give up on it and call it a loss. This can be very detrimental when you consider the fact that, if the developers had spent a few more hours, days, or even weeks enhancing and polishing their novel ideas, it could have been a very successful project. This is not to say that usability evaluations are always harmful, sometimes they are still necessary. It will take discernment on the developers' part to properly decide whether or not an evaluation will be necessary or beneficial.

Wednesday, April 29, 2009

CHI 2009

Comparing Usage of a Large High-Resolution Display to Single or Dual Desktop Displays for Daily Work
Xiaojun Bi, Ravin Balakrishnan


Comments:
Comment #1
Comment #2
Comment #3


Summary:
Previous studies have shown that users prefer large, high-resolution displays over smaller single displays and multi-monitor displays. Studies have also shown that, while the high-resolution displays are preferred, they have some flaws, such as "keeping track of the cursor, distal access to windows and icons, and window management." These flaws can be attributed to the fact that operating systems were not designed with these high-resolution displays in mind. This paper covers a user study of how users manage windows when using a large display.

The authors observed that users managed windows very differently on the large display than on a normal single or dual desktop setup. When using a single display, users would have to do a lot of window switching. On a dual-monitor setup, users would have a focal region and a peripheral region: the focal region would be one entire screen and the peripheral region would be the second screen. The main tasks take place in the focal region, and the user glances over at the peripheral region when information is needed from it.

When using the large display, users would also have a focal region and a peripheral region. The focal region would be in the center of the screen, and the peripheral region would be on the left, top, and right sides of the screen. The peripheral regions are used for passive windows that hold information but are not interacted with. Whenever a window in the peripheral region needed to be interacted with, users tended to grab the window, move it to the center of the screen, and resize it.

Looking at the trends of the users that used the high-resolution display, the authors suggested some improvements to current operating systems so that they can cater to larger displays. First, they suggested replacing the minimize and maximize buttons with a button that would bring a window to the center of the screen, into the focal region. Second, whenever a window is dragged into the peripheral region, it should automatically enlarge so that it can still be easily seen at the perimeter of the screen.


Discussion:
This was an interesting paper. I personally enjoy the larger screens I have used and I am glad that some research is going into catering to those that use these large monitors. I have no good guess on how long it will take any of the major operating system developers to integrate this research into their operating systems. I hope that these developers start taking notice of these new user strategies of window management as computer displays become cheaper and larger.

Multi-monitor displays have been around for a while now, and operating systems do not seem to cater to these setups. To get good window management for this kind of setup, third-party utilities need to be bought. Even though these large desktops are useful, window management for high-resolution displays might get as much attention from operating system developers as multi-monitor window management has gotten: little to none. The only reason I can think of that might lead to more window management utilities at the operating system level for high-resolution displays is that they could become more popular than multi-monitor setups. Setting up multiple monitors may present enough extra difficulty that it deters casual computer users. However, to get a high-resolution display setup, a user just has to throw more money into buying the monitor. There is no added complexity to installing a high-resolution monitor over a standard-resolution monitor.

Tuesday, April 28, 2009

UIST 2007

Dirty Desktops: Using a Patina of Magnetic Mouse Dust to Make Common Interactor Targets Easier to Select
Amy Hurst, Jennifer Mankoff, Anind K. Dey, Scott E. Hudson

Comments:
Comment #1
Comment #2
Comment #3


Summary:
The authors of this paper are attempting to create a platform-independent system that aids users in selecting frequently used buttons by jumping the mouse to these buttons when the mouse is close by. The system is also independent of the applications being used; no code is required from application developers for it to work in their applications.

The whole idea is to treat every mouse click as "magnetic mouse dust." Wherever there was a mouse click on an application's window, it would be recorded. As buttons were repeatedly clicked, magnetic mouse dust would gather over them. Whenever the mouse approached these dirty areas, it would be attracted to the button and land on it. This makes it very easy to land the mouse on a button once the system has learned where a user's clicks congregate. The system can stay independent of the application because system events record only the x and y coordinates of the click relative to the window.

There were two types of dust: dust that accumulated when the mouse was clicked and dust that accumulated when the mouse was dragged. The dust that correlated to dragging the mouse aided in dragging scrollbars and the like. Users found the entire system to be helpful. Some users took longer to grow accustomed to the mouse being controlled by the computer, but after they got the hang of it, they enjoyed it.
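As a rough sketch of my understanding of the click-dust idea (my own simplification, not the authors' implementation; the grid size, attraction radius, and threshold are all assumed values), dust could be accumulated on a coarse grid and the cursor snapped toward a sufficiently dusty cell nearby:

```python
from collections import defaultdict

CELL = 10       # grid cell size in pixels (assumed value)
RADIUS = 2      # attraction radius, in cells (assumed value)
THRESHOLD = 3   # clicks needed before a cell starts attracting (assumed)

dust = defaultdict(int)  # (cell_x, cell_y) -> accumulated click "dust"

def record_click(x, y):
    """Deposit dust at the clicked spot, relative to the window's corner."""
    dust[(x // CELL, y // CELL)] += 1

def attract(x, y):
    """Return the cursor position, snapped toward the dustiest nearby cell."""
    cx, cy = x // CELL, y // CELL
    best, best_count = (x, y), THRESHOLD - 1
    for (gx, gy), count in dust.items():
        if abs(gx - cx) <= RADIUS and abs(gy - cy) <= RADIUS and count > best_count:
            # jump to the center of the heavily clicked cell
            best = (gx * CELL + CELL // 2, gy * CELL + CELL // 2)
            best_count = count
    return best
```

A few clicks on the same button deposit enough dust that a later approach within a couple of cells snaps the cursor onto it, while untouched regions leave the cursor alone.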


Discussion:
This seems like an effective system. I would have to use it to see if it would benefit me at all, but I know that there are some users with physical disabilities that could really benefit from this kind of assistive device.

The one thing the paper did not address, and that I had questions about, is what happens when the window is resized, whether by the user or the computer. The system could either clear all the dust and start over or, since the dust is computed relative to the top left corner of the window, keep the dust at the same coordinates relative to that corner. Both could be useful, depending on the application. Some applications anchor everything to the top left corner, and it would be fine to keep the dust. Some buttons anchor to other sides, though, and after a resize the dust would be congregating over a portion of the screen that is no longer a button. The paper was not clear on what happens to the dust.

Tuesday, March 24, 2009

Emotional Design

By Donald A. Norman

Comments


Summary
In Emotional Design, Norman again discusses the design of the things around us. This time it is focused on the emotions contained within designs and the emotions evoked by designs.

People invariably interpret objects as if they have emotions; it does not matter whether they are animate or inanimate. Designers should keep this in mind when working on their new creations, especially for games, movies, music, and robots. Emotion can easily be added to these mediums and is, in fact, necessary if the product is going to be successful.

In the beginning of the book, Norman discusses the impact a design can have on our emotions. A design impacts our emotions on three different levels: visceral, behavioral, and reflective. The visceral level is the immediate, natural, instinctive impression people get from looking at or using a design. Good behavioral design leads to a functional product; the product might be ugly, but if it gets the job done right, it scores points with the user on the behavioral level. The reflective level comes into play when a person consciously thinks back on using the product. The reflective emotions are not necessarily accessed while the product is in use, but rather when the user is reflecting on it.


Discussion
This was a decent read. It is hard to discuss this book as if it was new though. This is the third book by Donald Norman that has been required reading and I'm growing tired of reading numerous books about design without being able to put any of it into practice.

I think that if we were going to be putting the things that we are learning from the Norman books into practice, it would have been good to do this with the first books before reading this one. Emotional Design focuses on an entirely different part of design and it would have been nice to put the other focal points into practice before moving onto this one. There is a lot of information to take in and it will be hard to utilize it all at once instead of taking it in piece by piece.

CHI 2008 Evaluating Visual Cues for Window Switching On Large Screens


R. Hoffmann, P. Baudisch, D. Weld

Comments:
Comment #1
Comment #2
Comment #3

Summary:
This paper evaluated the effectiveness of different cues for directing a user's attention to a new window to focus on. The intended environment is a setup that includes multiple or large, high-resolution monitors. Because there is so much screen real estate, the user needs this additional help to find the window they are looking for; otherwise time is wasted.

The methods of attracting attention were varied. There were four different frames that would highlight the window, there was a mask that dimmed everything except for the active window, and there were four trails that would lead to the new window.

The most effective method by itself was a trail called CenterSplash, which began in the center of the screen and tapered up to the new window. The tests measured the time it took a user to find the new window. When different frames and trails were combined, the best result was the amalgam of CenterSplash, RedFrame, ShadowFrame, and BubbleFrame. This was only slightly better than CenterSplash by itself when comparing times, but it scored much better on a user preference test.

Wednesday, March 11, 2009

The Man Who Shocked The World: The Life And Legacy Of Stanley Milgram

by Thomas Blass

Comments:
Comment #1
Comment #2
Comment #3


Summary:

The Man Who Shocked The World is a biography of Stanley Milgram. Milgram was a very bright child who always had an interest in science and was known to carry out many experiments. He went to college, and did predictably well. Even though he had a love for the natural sciences as a child, his graduate studies were focused on social psychology.

Milgram's most famous contribution to social psychology is his obedience experiments. In these experiments, he tested how long a subject would obey when put in a situation that contradicted their morals. The subjects were told to shock another person with increasing voltage in order to "teach" him to remember a sequence of words. Unknown to the subjects, the person getting shocked was in on the experiment and was not actually being shocked. The results surprised everyone: a very large percentage of people continued to obey the orders to shock the "learner" even after the shocks were apparently dangerous to his health.

The obedience experiments were very controversial and brought Milgram into the public eye. Even though he started other experiments, he could never fully move past the obedience experiments. His last years consisted of teaching at CUNY while studying city life.


Discussion:

The most discussed aspect of Milgram's life was whether or not his obedience experiments were ethical. It is hard for me to draw a conclusion on how I feel about this issue. The methods could have been detrimental to the subjects. The stress they were put under may have caused health issues if they had heart problems. They could have also been affected mentally if they were unable to accept the fact that they were willing to shock a person just because they were told to. Those are some of the unethical issues that I see with these experiments.

However, the participants reported that they were happy to be in the experiment and I do think the results are significant. So, since the outcome of the experiment was good, I approve of it. If it were to be done again, I think that more prescreening should be done to ensure that the participants can handle it physically and mentally.

Wednesday, February 25, 2009

The Design of Future Things

by Donald A. Norman

Comments

Summary:

The Design of Future Things introduces design ideas that should be kept in mind when designing novel technologies that people are not yet familiar with. The two main illustrations are automobiles that partially or fully take over driving, and smart homes. The smart homes described monitor your living habits around the house and either make suggestions about what to do (e.g., tell you what to eat) or predict what you want to do and take the necessary actions to make that possible (e.g., turn on lights and music when you walk into a room).

Norman has a summary of Design Rules at the end of the book that pretty much summarize the book into a few lines:

Design Rules for Human Designers of "Smart" Machines:
1. Provide rich, complex, and natural signals.
2. Be predictable.
3. Provide good conceptual models.
4. Make the output understandable.
5. Provide continual awareness without annoyance.
6. Exploit natural mappings.

Design Rules Developed by Machines to Improve Their Interactions with People:
1. Keep things simple.
2. Give people a conceptual model.
3. Give reasons.
4. Make people think they are in control.
5. Continually reassure.
6. Never label human behavior as "error."


Discussion:

Even though it was not the focus of the book, my favorite parts were when the author discussed current research projects and the technologies being produced. I was not as interested in their design as I was in what they can do, and I think that is the way most people will approach these future technologies. They will not consciously care about the design, only how cool or useful the product is portrayed to be by its ads. If the design is poor, the product will be frustrating to use, but if it is a novel product, it will still be purchased nonetheless. Most of the time, design will only become a factor when two similar products are released and their utilities are the same. Only then, when good design is what gives a product its advantage over the competition, will design be given priority.

Tuesday, February 24, 2009

UIST 2008 Backward Highlighting: Enhanced Faceted Search


Comments:
Comment #1
Comment #2
Comment #3

Wilson, et al.

Summary:

Faceted searches rely on columns of categories to filter search results. One heavily used example is the column view in iTunes. There are two current methods of faceted search: directional and non-directional. iTunes uses directional browsing, in which every column to the right of the selected column is filtered according to the selection. For example, if a particular Artist (middle column) is chosen, the Albums (right column) are filtered to show only that artist's albums, while the Genre column (left column) is not touched. Non-directional browsing filters results in both directions: if an Artist is selected, both the Genre and Album columns are filtered.

Backward Highlighting (BH) is a new middle ground for faceted searches. BH leans more to the side of directional browsing. If a column is selected in the middle, all results to the right are filtered out as in directional browsing, and all columns to the left retain all data, but the related data is highlighted. The idea is that the added highlights should show every possible combination of selected data that could be used to get to the current filter results.

There were three hypotheses going into the user studies: 1) users will be able to discover more facts, 2) users will remember more facts, and 3) users will use the remembered facts to improve their search behavior. The tests used three different settings: no BH, BH, and BH that grouped all highlighted items at the top. The results supported all three hypotheses and showed no significant difference between grouped and ungrouped BH.
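As a toy illustration of the idea (my own sketch, using the iTunes-style Genre/Artist/Album columns, not code from the paper), selecting an artist filters the column to its right while only highlighting, not filtering, the column to its left:

```python
# Each record is (genre, artist, album); columns are ordered left to right.
records = [
    ("Rock", "Artist A", "Album 1"),
    ("Rock", "Artist B", "Album 2"),
    ("Jazz", "Artist A", "Album 3"),
]

def backward_highlight(records, selected_artist):
    """Directional filtering to the right of Artist, plus BH to the left.

    Albums (right column) are filtered as in directional browsing; the
    Genre column (left) keeps ALL of its values on screen, but the genres
    that can lead to the selected artist are returned for highlighting.
    """
    albums = {album for genre, artist, album in records
              if artist == selected_artist}
    highlighted = {genre for genre, artist, album in records
                   if artist == selected_artist}
    return albums, highlighted
```

Selecting "Artist A" here would show only Albums 1 and 3, while both Rock and Jazz stay visible in the Genre column with a highlight, showing every path that leads to the current results.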

Discussion:

BH seems very logical and it makes me wonder why this has not been done before. It adds metadata to the screen without taking away from the data like non-directional faceted search but, at the same time, it does not overburden the user with this metadata. I do question the usefulness of this metadata. The tests show that users remember the highlighted rows, but what good does that do if the user knows how to get there already? I am sure there is a good application for this, but I cannot think of one at the time of writing.

Taskpose: Exploring Fluid Boundaries in an Associative Window Visualization


Bernstein, et al.

Summary:

Taskpose is a combination of a window manager and a task manager. It determines which windows are used in a common task and groups those windows together when Taskpose is called up (the presentation draws heavily from Mac OS X's Expose). It also tries to determine which windows are the most important and enlarges those windows.

Window importance and window relations are based on window switching. The more a window is switched to, the more important it becomes, and the relationships between windows are developed by monitoring how many times Window A switches to Window B and vice versa. The windows then move around the screen to congregate with related windows. The more important a window is, the larger its thumbnail becomes. The larger, more important windows have a static tendency, and the smaller, related windows are attracted to them.
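A bare-bones sketch of how the switch-based counters could be tracked (my own simplification; the paper's actual model may weight or normalize these differently):

```python
from collections import defaultdict

switch_counts = defaultdict(int)  # window -> number of times switched to
relation = defaultdict(int)       # frozenset({a, b}) -> co-switch count

def record_switch(from_win, to_win):
    """Update importance and relatedness counters on every window switch."""
    switch_counts[to_win] += 1
    if from_win is not None and from_win != to_win:
        relation[frozenset((from_win, to_win))] += 1

def importance(win):
    """Share of all switches going to this window; drives thumbnail size."""
    total = sum(switch_counts.values()) or 1
    return switch_counts[win] / total
```

The layout step would then size each thumbnail by its importance and pull windows with high relation counts toward each other.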

After a week-long user study, Taskpose was found to be useful but to still have some weaknesses. Users tended to like Taskpose; it was very useful when the open windows were too numerous for the Windows Taskbar to handle appropriately. There were two prominent shortcomings. First, Taskpose did not recognize "parent-child" relationships, such as a buddy list and its chat windows. Second, when multiple projects were being worked on simultaneously, the two projects would merge in Taskpose.

Discussion:

Taskpose's presentation takes from Expose, which I really like. However, its usefulness falls short due to the two problems discovered during the user study, especially the second shortcoming. If these issues could be fixed, it could be useful and would make window switching quicker and require less mental work. Another problem, which is mentioned in the paper, is that importance and relationships are solely based on window switching. Importance should also include the time a window is active.

Bringing Physics to the Surface


Wilson, et al.

Summary:

This paper looks into how to effectively add real physics to tabletop touchscreen surfaces. The idea is to reuse physics engines that have already been developed for the game industry, e.g., Nvidia's PhysX. Surfaces already mimic physics when users rotate or stretch pictures with their hands, but this is not real physics computation; these are merely scripts that replicate real physics.

There are three different forces that must be considered when using the physics computations: static friction, kinetic friction, and collisions. There are already methods for computing these forces, but none that compute all three well. Direct force is one method that only detects collisions, based on a single contact point. Virtual joints and springs connect a contact point to a virtual object via a spring or a joint; this results in drag-and-drop functionality but does not work well with collisions. Another method is to use proxy objects, which creates an object, such as a sphere or square, underneath the contact point that can interact with the virtual objects by means of friction and collisions. Although this uses all three forces, the results can be unexpected because the proxy object does not match the contact points.

To compensate for the weaknesses of the previous methods, a new method is introduced: proxy particles. This creates a stream of particles underneath the contact area that accurately represents all contact points. These particles act the same way a proxy object does: if a contact point starts on an object, friction takes over; otherwise, if a contact point starts off an object and runs into one, collision force takes over.

User studies indicate that proxy particles may be the best way of applying physics to touch surfaces. Six participants were given three tests each. The joint method and the proxy particle method had the fastest completion times. The joint method was easy for the users to pick up because it keeps in line with the drag-and-drop mentality that mouse users are accustomed to. However, the participants commented that the joint method was "limiting" and "less satisfying." The joint method also poses problems when creating two separate contact points on an object and pulling in separate directions. When using the proxy particles, users interacted with objects with comparable times to the joint method, but enjoyed it more.

Discussion:

These surface computers are getting popular and the idea of introducing real physics into the technology looks very promising. There are still some kinks to be worked out though and I think the initial surfaces will only contain pseudo-physics. Real physics will have to be introduced later. While the proxy particles worked well for the tests given, it does not look like it is ready to be commercially introduced. Problems will occur once the surface becomes filled with 3D virtual objects and cluttered. There will have to be a logical way to apply 3D physics and move 3D objects around a 2D screen.

Thursday, February 19, 2009

The Mole People

Summary:

The Mole People: Life in the Tunnels Beneath New York City by Jennifer Toth is an ethnography that studies the lives of the people who live underground in New York City. A lot is unknown about these people; it is not even known how many live underneath the city. Jennifer Toth went underground to study them and learn about them, including the culture underground and why the people move underground to begin with.

Discussion:
While entertaining and educational, The Mole People does not seem to have been done like a standard ethnography. Jennifer Toth seems to have an agenda: her goal appears to be to show the world that these mole people really are not as dangerous or different from the average person as most people assume.

The Battalion Ethnography

Summary:
Brad Twitty, Cole Jones, and I spent time watching The Battalion newsstands to see who was taking the paper. Brad spent his time watching Blocker in the morning, Cole watched Bright in the afternoon, and I watched Bright in the morning. I used tally marks to keep track of how many people took the paper, how many people did not take the paper, and whether or not these people were leaving or entering the building when they passed by the newsstand. According to my results, about 30% of the people entering or leaving the building took the paper.

Discussion:
The detailed data recorded can be found in the written report, but the data recorded in the morning at the Bright building leads to the idea that most students take The Battalion if they have arrived to class early and have extra time to kill. Students that enter the building within the last ten minutes before class are less likely to grab a paper. This is probably because they have less time to kill and may even be late for class and are in too much of a hurry to bother themselves with the paper.

Wednesday, February 11, 2009

The Media Equation

Commented on:

Summary:
The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places by Byron Reeves and Clifford Nass is based on studies of how people treat computers and media. The tests they conducted conclude that people treat these new media like they do other people or real life; hence their media equation: media = real life. The results suggest that people who interact with computers and other media respond better when the computer interacts with them the same way they prefer people to interact with them. After each study, the authors suggest an interface change that would benefit the user based on the results. Examples of these suggestions are a friendly spell checker or ensuring that a computer's voice is male.

Discussion:
After reading the book, I initially thought that most of the suggestions were common sense. However, I really had not given any thought to making a computer polite or having it conform to a culture's norms for treating people. Before, if there was an error message, you just relayed it; if a user needed to be informed of anything, you just printed what they needed to know to the screen and got the job done. Now I think there is a reward for turning the dry interactions of computers into something more appealing to most consumers.

Wednesday, January 28, 2009


It is almost harder to open DVD cases that are bound with tabs than it is to open a new, shrink-wrapped CD. The problem with these cases is not that they are overly complicated; they just violate the standard that has been set for DVD cases. I assume that there are no tabs, because that is the norm, and I usually do not find out that the tabs exist until it is too late and the case is broken. At least the tabs break; even though that is not a problem for me, it is very self-defeating on the tabs' part.




Another example of this flaw is the GUI for Internet Explorer 7 and Google Chrome. Even though the menu is simple and straightforward, it is a deviation from the standard menu bar. I cannot get used to the new setup; I have to click on each button and use the process of elimination to find the correct menu.




On a more positive note, the deviation from the standard toolbars to the ribbon in Microsoft Office 2007 was a good change. I might have adapted to the new design of Office's toolbars because the tabs are still labeled, unlike the two browser interfaces, which only use pictures. It might be good if the browser GUIs had an option to add text to the buttons. Then users could remove the text after they are comfortable with the new design.

The Design of Everyday Things

The Design of Everyday Things by Donald A. Norman

(commented on Brian Salato's blog)

Summary

The Design of Everyday Things speaks to two different audiences. First, the author, Donald A. Norman, speaks to users of devices. He wants users to know that if they have trouble using a device, it is most likely the fault of the device's designer, not the user. And even though poor design is probably to blame when people have trouble using a device, most people blame themselves instead of the device.

The other audience the book addresses is designers. Norman points out four essential principles that lead to good design. First, the designer should provide a conceptual model so that the user understands how the device works and is less prone to err. Second, the designer should provide feedback; feedback lets the user know whether he did something correct, did something incorrect, or did not do anything at all. Third, the device should have constraints built in so that the user cannot perform undesirable actions. Fourth, the designer should provide affordances so that the user is given visual cues about what can be done.

Discussion

I do not totally agree with Norman when he puts such a huge burden of the blame on designers instead of users when there are problems operating a device. I think the user can be blamed a little more than Norman says. However, I do like the pressure that this way of thinking puts on designers. 

Of his four design principles, I think feedback and constraints are the most important. Conceptual models and affordances are just as useful as feedback and constraints for novices. However, once a user becomes familiar with a device, he already knows how to operate it, and extra visual cues and conceptual models to direct operation are no longer needed. Feedback and constraints will always be needed when operating a device, even for an expert: feedback so the operator can confirm that his actions were successful, and constraints because, even though an expert knows what would cause an error, slip-ups are inevitable.