Sunday, March 21, 2010

Conversation Clusters: Grouping Conversation Topics through Human-Computer Dialogs (CHI-2009 Round 2)

By: Tony Bergstrom, Karrie Karahalios

Summary:
There are many ways that information can be logged for future reference, and conversations are among the most commonly archived. Searching through them is often difficult, requiring reading the entire record to find what is needed. This paper tries to solve that problem by combining the strengths of machines and people: computers can store large amounts of information, while people can apply judgment based on the context of the message being received. The paper presents two views of a conversation: a topic view, which shows the topics discussed over the course of a meeting, and a historical view, which lets the user see the progression of topics as time passes. Conversation clusters emerged from this research as an attempt to bridge the gap between human verbal language and machines. The clusters use a dynamic visualization on a shared public tabletop to let users see the most recent discussions and the topics associated with them, showing topics of conversation as a thread history. Threads can merge, split, and even die when a topic is no longer discussed. The researchers extracted topic words by having each person in the conversation wear a microphone and recording them. Their research showed that conversation clusters correctly reflected what the conversation was about and made lookups faster and easier.

Discussion:
This paper seemed kinda cool, but kinda not. I can see the importance of it, but it is not something that I would be interested in at all. The paper seemed rather draggy and not very well written, which made it rather dull to read. The concept of conversation clusters, however, was very interesting, and I think it is a good design for archiving things in the future.

TypeRight: A Keyboard with Tactile Error Prevention (CHI-2009 Round 2)

By: Alexander Hoffmann, Daniel Spelmezan, Jan Borchers

Summary:
This paper was about TypeRight, a new input device for text entry that combines the advantages of tactile feedback with the error-prevention methods of word processors by using magnets in the keys. It extends a regular keyboard so that a key whose press would create a spelling error offers more resistance than one that would not. This tactile feedback is very important for letting users know that an input error is about to happen, and it conveys additional information. Typing errors can be addressed in one of three ways: prevention, live correction, and aftercare. Prevention tells the user that an error will be made before it happens, live correction fixes errors while the typing is happening, and aftercare, the most widely used, takes place after the fact. Prevention is the strictest of these. Another way to prevent errors would have been to add a short timeout during which nothing can be pressed, but this is not a good solution because the timing is never right; it would end up being either too slow or too fast to be of use. The study found that TypeRight reduced the number of backspace key presses by 46% and the number of mistyped letters by 87%. For novice users the average typing time was very similar, but for expert TypeRight users typing speed increased by 10%. One thing they would like to work on in the future is making TypeRight quieter.
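
The core decision rule, as I read it, is simple enough to sketch. Here is a toy Python version: keys that extend the current prefix toward a dictionary word keep normal resistance, and all other keys stiffen. The dictionary and resistance values are made up for illustration; the real device does this in hardware with magnets, not software.

```python
# Toy sketch of TypeRight's prevention rule (illustrative, not the
# authors' implementation): stiffen any key that cannot extend the
# current prefix into a dictionary word.
WORDS = {"the", "then", "there", "type", "typing"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def key_resistance(prefix, key, normal=1.0, stiff=3.0):
    """Force needed to press `key` after typing `prefix`."""
    return normal if (prefix + key) in PREFIXES else stiff

assert key_resistance("th", "e") == 1.0   # "the" is a valid prefix
assert key_resistance("th", "x") == 3.0   # no word starts with "thx"
```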

Discussion:
I thought this was an interesting area of research, but I'm not sure how much faster the human body can type without some help from other sources. I think the keyboard has a good design, but to see a significant improvement we are going to need something off the wall to emerge. For now though, TypeRight seems like a reasonable way to help users type faster.

Designing Trustworthy Situated Services: An Implicit and Explicit Assessment of Locative Images’ Effect on Trust (CHI-2009 Round 2 – Assigned one)

By: Vassilis Kostakos, Ian Oakley

Summary:
This paper studies a visual design element called locativeness, the extent to which media presenting a service relates to the physical environment the service is in. Because mobile devices can now wirelessly access many services, the design of those services, and what invokes trust in them, has been largely a matter of speculation. The researchers wanted to discover which design elements add to or take away from a service's trustworthiness. The study was conducted completely online and consisted of three stages. The first was demographics capture, where the researchers collected data such as age, gender, and occupation through a questionnaire. The second was an implicit assessment to determine which words each subject associated with trust and distrust. The last was to show subjects pairs of websites and ask which one they would be more comfortable giving their credit card information to. The researchers found that trust-enhancing words included safe, unshakeable, credible, honest, protected, and loyal, while words that tore down trust included hazard, suspect, deceitful, unsafe, disbelieving, and cautious. They discovered that some people are completely brand-oriented, but most are influenced by quality and locativeness, and for some people locativeness was even the strongest influence.

Discussion:
This paper seemed to touch on a good subject area, but I don't know about the whole locativeness part. It just seemed like a new word that didn't really mean a lot to me. I think that research into what makes websites build credibility with clients is an interesting area, and something we should look into as our society becomes ever more technology-driven.

Predictive Text Input in a Mobile Shopping Assistant: Methods and Interface Design (IUI-2009)

By: Peter Peltonen, Petri Saarikko, Petteri Nurmi, Andreas Forsblom (may have forgotten one or two)

Summary:
Almost everyone goes grocery shopping regularly, and many people use grocery lists to keep track of the items they need to purchase. Even though many people acknowledge the importance of shopping lists, little research has been done on their creation and management. This paper discusses a predictive text input technique that uses association rules and item frequencies in a shopping assistant running on a web-based mobile device. There are two types of shopping assistants: those for malls, which are more concerned with getting the user to the right store, and those for individual shops, which focus on item-level location. This system was limited to grocery items: it sampled products and mapped them onto shopping-list entries, stripping out unit sizes and brand names so that each entry referred to the base item (an 8oz package of Kraft macaroni becomes just "macaroni"). They then calculate a Term Frequency-Inverse Document Frequency (TF-IDF) score for each word in the product name and return the word with the lowest TF-IDF score when a new item is entered. If the association rules do not trigger a text prediction, the system falls back on frequently purchased items.
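
To make the TF-IDF step concrete, here is a minimal sketch under my own assumptions (the corpus, tokenization, and exact weighting are invented; the paper's formula may differ). Generic words like "macaroni" appear in many product names, so they score low and make good base items, while brand and size tokens score high.

```python
import math
from collections import Counter

def tf_idf_scores(product_name, corpus):
    """Score each word in a product name; low TF-IDF means the word is
    common across products and is likely the generic base item."""
    words = product_name.lower().split()
    tf = Counter(words)
    scores = {}
    for w in set(words):
        df = sum(1 for doc in corpus if w in doc.lower().split())
        idf = math.log(len(corpus) / (1 + df))
        scores[w] = (tf[w] / len(words)) * idf
    return scores

corpus = ["8oz pkg of kraft macaroni", "kraft cheddar cheese",
          "whole wheat macaroni", "organic macaroni 16oz"]
scores = tf_idf_scores("8oz pkg of kraft macaroni", corpus)
base_item = min(scores, key=scores.get)   # -> "macaroni"
```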

Users were given two versions of the interface, one with predictive text and one without. After completing a list with their device, users filled out a satisfaction survey. The predictive text input functionality increased input speed by around 5 words per minute on average, reduced error rates by about 80%, and increased user satisfaction.

Discussion:
I thought the ideas behind this topic were interesting and that the paper was well written. I like the idea of a shopping assistant that would keep track of my grocery list, because I know there are a lot of times where I go to the store to get items that are related; when making spaghetti, for example, you go to the store for butter, noodles, spaghetti sauce, mushrooms, beef, etc. I think this is a very useful line of research that could be extended into many other aspects of everyday life.

Multi-Touch Interaction for Robot Control (IUI-2009)

By: Mark Micire, Jill Drury, Brenden Keyes, Holly Yanco

Summary:
This paper discussed how robot interaction is highly limited when using conventional controls such as joysticks, buttons, and sliders. The authors suggested a multi-touch interface that would allow a wider array of possible actions and better control of robots. Their screen had many visual aids for controlling a robot, and they ran a user study on how people used them. The researchers identified three main components of screen interaction: gesture magnitude, gesture alignment, and gesture sequence. Some users treated the magnitude of their gestures as significant (closer to the robot for slower, further away for faster), a concept called proportional velocity. Gesture alignment describes how users' gestures related to the x and y axes, such as whether users moved along the axes or diagonally off them. Gesture sequence is the pattern of primitive actions (defined as touch, drag, and hold). Several emergent behaviors came out of this study: interactions that were not anticipated as initial action sequences, but that users came up with while experimenting with the system. Even though each user was trained in the same way, each interacted with the robot in their own way. The researchers plan to do more research on interaction placement on the screen (localizing commonly used controls) and on the effects of fatigue on user interaction.
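
Proportional velocity is easy to picture with a little code. This is only my sketch of the idea, not the authors' implementation; the parameter names and speed cap are invented. Drag distance from the robot's on-screen position maps to speed, and the drag direction gives the heading.

```python
import math

def drive_command(touch, robot, max_speed=1.0, full_speed_dist=200.0):
    """Proportional velocity: farther drags drive the robot faster,
    capped at max_speed; the drag direction sets the heading."""
    dx, dy = touch[0] - robot[0], touch[1] - robot[1]
    dist = math.hypot(dx, dy)
    speed = max_speed * min(dist / full_speed_dist, 1.0)
    heading = math.atan2(dy, dx)
    return speed, heading

speed, heading = drive_command(touch=(150, 90), robot=(100, 60))
```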

Discussion:
I thought this paper covered an interesting topic, but it was not as innovative or as well written as some of the other papers. Multi-touch is a very popular topic of research, and for good reason. With all of the multi-touch research going on, I think there will be many advances in this area over the next several years.

Crafting an Environment for Collaborative Reasoning (IUI-2009)

By: Susanne Hupfer, Steven Ross, Jamie Rasmussen, James Christensen, Daniel Gruen, and John Patterson

Summary:
Many problems in the world today are challenging because they require diverse fields of expertise and sophisticated reasoning beyond the capacity of any one person. These problems need the effort and knowledge of many different people from many different fields to find an acceptable solution, and they are often complex with rapidly changing situations. This is where the term "sensemaking" comes into play: the motivated, continuous effort to understand connections (among people, places, and events) in order to anticipate their trajectories and act effectively. Many knowledge sources (wikis, etc.) help solve these problems by sharing information, but individually, and without proper use, these tools are still inadequate. Many of the problems may not even have a well-defined solution or ending point. The goal of this paper was to develop an intelligent interface and infrastructure to support people in gathering information, analyzing it, and making decisions based on it. Some important concepts are awareness (knowing who is working on what) and expertise location (where relevant experts are located). The main aspects of a collaborative reasoning environment are collaboration, semantics, and adaptability. Collaboration is working together to gather information; semantics is making sure the information gathered about situations (people, places, events, etc.) is fully understood by all parties; and adaptability lets people adapt to problems whose nature keeps changing.

This led the researchers to develop CRAFT (Collaborative Reasoning and Analysis Framework and Toolkit), which uses the above ideas to support collaborative reasoning, information sharing, and decision making. The toolkit uses nodes on a graph to represent people, groups, locations, activities, questions, hypotheses, and evidence, allowing a centralized collection of data. The study showed that the toolkit was a good way for users to communicate information.
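
A typed node graph like the one described is easy to sketch. The node kinds below come from the paper's list, but the data structure and relation names are my own invention, not the toolkit's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str            # "person", "question", "hypothesis", "evidence", ...
    label: str
    links: list = field(default_factory=list)

def link(src, dst, relation):
    """Record a directed, labeled edge between two nodes."""
    src.links.append((relation, dst))

q = Node("question", "Who has expertise on the supply problem?")
h = Node("hypothesis", "Team A has the relevant logistics experience")
e = Node("evidence", "Team A's prior logistics report")
link(h, q, "addresses")
link(e, h, "supports")
```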

Discussion:
I thought that this paper was rather interesting and well written. I liked the way it first described each piece and then tied everything together; that made it easier to understand the whole concept of what they were discussing. I like the idea of sharing information and collaborating with a group of people through a centralized data structure such as a graph. This could definitely evolve into something used in a real working scenario.

Extending 2D Object Arrangement with Pressure-Sensitive Layering Cues (UIST-2008)

By: Philip L. Davidson, Jefferson Y. Han

Summary:
This paper discusses a pressure-sensitive depth-sorting technique for 2D objects under multi-touch or multi-point control. Direct manipulation encourages grouping objects, especially when rotating, scaling, or translating them, and the paper presents two novel techniques for 2D layering: one using a stack of selected objects and one using a drag-and-drop model. The paper first reviews multiple studies showing that human estimation of pressure is often poor and is more appropriate as a rate control, though people achieve better pressure control when given visual feedback at the point of contact. A tilt calculation was performed alongside the rotation, scaling, and translation (RST) calculations, with the tilt depending on the pressure applied. The system uses a directed acyclic graph (DAG) of pairwise overlap tests to detect overlapping objects and maintain their layering. A combination of rendering cues shows the tilt to the user: tilting an object "out of plane" alters its visible outline. Element-to-element layering turned out to be useful only when the edges of the objects could be seen, and it became less useful as scenes grew more complex. For future work, they talked about inserting prior overlap relationships into the DAG as lower-priority constraints, and about letting the user freeze the layering relationships for groups of elements regardless of their overlap state.
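
Here is one way to picture the DAG idea; this is my own toy reconstruction, not the authors' algorithm, and it ignores details like cycle resolution and per-edge constraints. Overlap relations are stored as "drawn-above" edges, pressing hard on an object drops it beneath its overlapping neighbors, and a topological sort yields the bottom-to-top draw order.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# above[x] = objects drawn before x (i.e., x currently sits on top of them)
above = {"photo": {"map"}, "note": {"photo", "map"}}

def press_below(obj, neighbors, above):
    """Hard pressure on `obj` pushes it beneath its overlapping neighbors."""
    above.pop(obj, None)                  # drop obj's old above-edges
    for n in neighbors:
        above.setdefault(n, set()).add(obj)

press_below("note", ["photo"], above)
draw_order = list(TopologicalSorter(above).static_order())
# e.g. ["map", "note", "photo"]: "note" now renders below "photo"
```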

Discussion:
To be honest, this paper was a little confusing and not very interesting to me. I understood what they were trying to do, but they seemed to jump around in how they talked about things, going from topic to topic with no real link between them. I felt like individual parts of this paper could be pulled out into their own papers or paragraphs and still make sense. It just lacked a sense of cohesiveness.

SideSight: Multi-touch Interaction Around Small Devices (UIST-2008)

By: Alex Butler, Shahram Izadi, Steve Hodges

Summary:
Many problems occur when trying to interact with small devices (such as mobile phones) that have very little screen real estate. Fingers occlude the screen, making it hard to use and hard to see everything that is needed, and for many small devices a touch screen display is impractical or nearly impossible. This paper describes a prototype that embeds infra-red proximity sensors along the sides of a device to detect the movement and distance of fingers next to it, giving the device a larger input area. The stylus has been proposed to solve this problem, but it brings its own issues: it is another object to carry, and it can be lost.

The SideSight prototype works by first thresholding the sensor image and then carrying out a connected-component analysis. It was discovered that when people interact with one finger, their other fingers often get in the way of what they are trying to do. The authors are experimenting with ways to make the device track multiple fingers for greater flexibility. In conclusion, they showed that the input area of a device can be extended beyond its physical bounds, and that the extra room can sometimes make interacting with the device more intuitive.
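
The two processing steps named above, thresholding and connected-component analysis, might look something like this sketch (the threshold value, frame shape, and use of scipy are my assumptions, not the paper's implementation):

```python
import numpy as np
from scipy import ndimage

def find_fingers(ir_frame, threshold=0.9):
    """Threshold the IR reflectance frame, then label connected blobs;
    each blob's centroid approximates one fingertip position."""
    mask = ir_frame > threshold
    labels, n_blobs = ndimage.label(mask)          # connected components
    return ndimage.center_of_mass(mask, labels, range(1, n_blobs + 1))

frame = np.random.rand(10, 64)    # stand-in for one sensor frame
fingertips = find_fingers(frame, threshold=0.95)
```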

Discussion:
I think this is a very important area of study for us to keep looking into, because it addresses a constant problem: cell phones and music players keep getting smaller and smaller, which leaves less and less room to interact with them. Meanwhile, users are getting accustomed to increased functionality and are demanding more features that are easier to use. Any research that goes into this topic is highly important to our technological future.

Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles (UIST-2008)

By: David T. Gallant, Andrew Seniuk, Roel Vertegaal

Summary:
This paper talks about Foldable User Interfaces (FUIs), which combine a 3D GUI with the physical properties of a piece of paper, using Foldable Input Devices (FIDs). A FID is basically a piece of construction paper with IR reflectors embedded in it that are tracked by computer vision. FIDs allow many interaction techniques, including folding, bending, flipping, stacking, thumb slides, scooping, top-corner bends, leafing, squeezing, hovering, and shaking; the devices are heavily influenced by origami. Interactions that can be built from these techniques include, but are not limited to, navigation, browsing, selection, sorting, making origami, and zooming. Navigation is done by picking up the FID and moving it; selection uses a transparency sheet and a thumb slide; origami is achieved by folding the paper into complex shapes through a series of folds, which are made permanent with a shaking gesture; browsing uses the leafing technique to turn to the next page of a document; and zooming uses the hover technique.

Discussion:
At first this paper was a little hard for me to wrap my mind around, but once I got the concept of a FID down, it was a rather interesting implementation of a new input device. I had never thought of using a piece of paper as an input device to control something else, much like a mouse or keyboard would, but now that it has been mentioned it is a unique but sensible invention. I am interested to see where this goes over the years, and how much smaller or thinner our devices get while user interaction keeps improving.

Inferring Player Engagement in a Pervasive Experience (CHI 2009 – Round 1)

By: Joel E. Fischer, Steve Benford

Summary:
This paper was about a game called Day of the Figurines, in which players send and receive messages every day to explore a town, chat with and help other players, and receive missions from the game. Players were usually unhappy when game notifications interrupted normal life, or disappointed when they were not notified soon enough to act on something they cared about. The gaming experience would be greatly improved if the game could notify users more or less often depending on their level of engagement. The authors found that engagement could be inferred, to a degree, from the elapsed time (et) between two player turns or activities, and from the response time (rt) it took a player to respond to a notification. The results showed that a player's level of engagement can be determined from elapsed and response times, and that systems can adapt to the user after detecting it. The advantages are that the user can receive more or fewer messages as wanted, and other players can be notified of their disengagement. Another potential benefit is a summary of what happened while the user was not engaged. Any system that used this would, however, need a mechanism for the user to override the adaptation if it is unwanted or misinterpreted.
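
As a toy illustration of how et and rt might feed such an adaptation, here is a sketch with invented thresholds (the paper's actual model and cutoffs are not reproduced here):

```python
def engagement_level(response_times_s, elapsed_times_s):
    """Classify a player from average response time (rt) and average
    elapsed time between turns (et); thresholds are illustrative."""
    avg_rt = sum(response_times_s) / len(response_times_s)
    avg_et = sum(elapsed_times_s) / len(elapsed_times_s)
    if avg_rt < 60 and avg_et < 3600:
        return "engaged"        # quick replies, frequent turns
    if avg_rt < 600:
        return "casual"         # keep the normal notification rate
    return "disengaged"         # throttle messages, offer a catch-up summary

level = engagement_level([30, 45, 20], [1800, 2400, 3000])  # -> "engaged"
```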

Discussion:
I think that this was an interesting paper because it could be applied to more than just games. I think that a similar response and elapsed time concept could be put into effect on websites and other user applications. It would be useful for websites to limit or increase the number of email notifications received, and could be used in other applications to conserve computer power. There are many other ways this could be applied.

Going My Way: A User-aware Route Planner (CHI 2009 – Round 1)

By: Jaewoo Chung, Chris Schmandt

Summary:
When people give directions, they typically pick a nearby location the other person is familiar with and then describe the route from there. "Going My Way" attempts to do something similar: it first learns the areas you frequently travel, then identifies familiar places close to your destination, and finally presents directions based on those familiar landmarks. The implementation used a phone with a UI for requesting directions and a server that stored and accessed its GPS data. Part of the work was determining which landmarks were memorable and why; to find out, the researchers asked users questions about the locations of different places. They discovered that people remember places near an intersection better, and remember unique places better than chain stores. Next, users tried to find their way to certain locations with "Going My Way." The application collected user-specific landmarks and found that users typically use fewer than a quarter of the landmarks stored in its database. The directions test showed that "Going My Way" is far more useful when the user is traveling in an area they are somewhat familiar with, so that they recognize the landmarks more easily. Some users said it was easier to visualize a location when they got to explore all of the landmarks around it.
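
The core selection step, picking a familiar landmark near the destination, can be sketched in a few lines (the data layout and flat-plane distance metric are my assumptions; the real system works over stored GPS traces):

```python
import math

def nearest_familiar(destination, familiar_landmarks):
    """Return the familiar landmark closest to the destination, to be
    used as the anchor for directions ("behind X", "across from Y")."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(familiar_landmarks, key=lambda lm: dist(lm["pos"], destination))

landmarks = [{"name": "the old water tower", "pos": (2.0, 3.5)},
             {"name": "Joe's Diner", "pos": (0.5, 1.0)}]
anchor = nearest_familiar((2.2, 3.0), landmarks)   # -> the water tower
```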

Discussion:
I thought this was a very interesting paper, because this is the way many people actually give directions. I can't count the number of times I have been going out to eat without being sure where I'm driving to. When I ask for directions, 95 percent of the time the response I get is something like "Right across the street from ..." or "Behind the shopping center that ... is in." I think that if this were further modified so users could rely on it in unfamiliar areas, it would be way more useful.

Tuesday, March 2, 2010

Emotional Design

Summary:
There were three main aspects of product design discussed in this book: visceral, behavioral, and reflective. Visceral design is judged very quickly by the user, usually triggered by one of the senses (sight, sound, smell, taste, or touch). Our mind is made up before we even know why, because this response runs at a subconscious level and is usually inalterable: you either like the way it looks or you don't. It is sometimes summed up in phrases like "this product is me" or "that product is not me."

Behavioral design is based on the way the object is used and how happy the user is while using it. It is usually triggered by a design the user deems "good" or "bad." "Bad" designs will usually lead to an unhappy user and can make the muscles tense up, while "good" designs usually lead to enhanced productivity and the possibility of entering a state of flow, which lets the user operate at optimum efficiency: not too easy, not too hard, just enough to keep the brain fully engaged throughout. Iterative testing can be a real benefit here.

Reflective design is based on each individual and their intellect. It is affected greatly by the user's understanding of the product, how they feel about it, and how they integrate their self-image into it. These judgments are very vulnerable to cultural variability, education, and personal experience.

Discussion:
While I thought this book was a little long winded once again, I feel like the author was not quite as long winded as before, and that it was much better. I thought it was a very interesting concept that he incorporated ideas on reflection, behavioral and visceral ideas into how we view everyday things. I think that he had a lot of good facts in the book, or things that I hadnt thought about. I know that I agree with many things that he said about how we use things, and how our user experience is affected by all of our senses, as well as by what memories are tied to it.