Sunday, March 21, 2010

Conversation Clusters: Grouping Conversation Topics through Human-Computer Dialogs (CHI-2009 Round 2)

By: Tony Bergstrom, Karrie Karahalios

Summary:
There are many ways information can be logged for future reference, and conversations are among the most commonly archived. Searching through them is often difficult, frequently requiring reading an entire record to find what is needed. This paper tries to solve that problem by combining the strengths of machines and people: computers can store large amounts of information, while people can augment that record with judgment based on the context of the message being received. The paper presents two ways to view the stored information: a topic view and a historical view. The topic view shows the topics discussed over the course of a meeting, while the historical view lets the user see the progression of topics as time passes. Conversation Clusters emerged from this research as an attempt to bridge the gap between human verbal language and machines. The clusters use a dynamic visualization on a shared public tabletop to show users the most recent discussions and the topics associated with them, presenting topics of conversation as a thread history. Threads can merge, split, and even die when a topic is no longer discussed. The researchers extracted topic words by having each person in the conversation wear a microphone and recording them. Their research showed that Conversation Clusters correctly reflected what a conversation was about and made lookups faster and easier.

Discussion:
This paper seemed kinda cool, but kinda not. I can see the importance of it, but it is not something that I would be interested in at all. The paper seemed rather draggy and not very well written, which made it dull to read. The concept of conversation clusters, however, was very interesting, and I think it is a good design for archiving things in the future.

TypeRight: A Keyboard with Tactile Error Prevention (CHI-2009 Round 2)

By: Alexander Hoffman, Daniel Spelmezan, Jan Borchers

Summary:
This paper was about TypeRight, a new input device for text entry that combines the advantages of a tactile feedback system with the error prevention methods of word processors. It extends a regular keyboard with magnets so that a key whose press would create a spelling error offers more resistance than one that would not. This tactile feedback is very important for letting users know that an input error needs to be corrected, and it conveys additional information. Typing errors can be addressed in one of three ways: prevention, live correction, and aftercare. Prevention tells users that an error will be made before they make it. Live correction fixes errors while typing is happening. Aftercare, the most widely used, takes place after the fact. Prevention is the strictest of these. Another way to prevent errors would have been to add a short timeout during which nothing can be pressed, but this is not a good solution because the timing is never right: it would end up either too slow or too fast to be of use. The study found that TypeRight reduced the number of backspace key presses by 46% and the number of mistyped letters by 87%. For novice users average typing time was very similar, but expert TypeRight users increased their typing speed by 10%. One thing the authors would like to work on in the future is making TypeRight quieter.
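To make the prevention idea concrete, here is a tiny sketch (not the paper's actual system) of the rule behind varying key resistance: a key gets stiffer when pressing it could not continue any valid word. The dictionary and resistance values are made-up illustrations.

```python
# Hypothetical sketch of TypeRight's error-prevention rule. The word
# list and the LOW/HIGH resistance levels are invented for illustration.
LOW, HIGH = 0, 1  # resistance levels the hardware would apply

WORDS = {"the", "there", "then", "type", "typing"}
# every prefix of every word, so we can test "can this still become a word?"
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def key_resistance(typed_prefix, key):
    """Return LOW if typed_prefix + key can still lead to a valid word,
    HIGH otherwise (the key would be made physically harder to press)."""
    return LOW if (typed_prefix + key) in PREFIXES else HIGH
```

With "th" already typed, pressing "e" meets low resistance, while pressing "x" meets high resistance, since no word in the dictionary starts with "thx".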

Discussion:
I thought this was an interesting area of research, but I’m not sure how much faster the human body can type without some help from other sources. I think the keyboard has a good design, but that to see a significant improvement we are going to have to see something off the wall emerge. For now though, the TypeRight seems like a reasonable solution to allowing users to type faster.

Designing Trustworthy Situated Services: An Implicit and Explicit Assessment of Locative Images’ Effect on Trust (CHI-2009 Round 2 – Assigned one)


By: Vassilis Kostakos, Ian Oakley

Summary:
This paper studies a visual design element called locativeness, the extent to which media presenting a service relates to the physical environment the service is in. Because mobile devices can wirelessly access many services, the design of these services and what invokes trust in them has been highly speculative. The researchers wanted to discover which areas of design either add to or take away from a service's trustworthiness. The study was conducted completely online and consisted of three stages of experimentation. The first stage was demographics capture, where the researchers collected data such as age, gender, and occupation from the subjects through a questionnaire. The second stage gathered data through an implicit assessment to distinguish which words each subject associated with positive and negative feelings. The last part showed subjects pairs of websites and asked which one they would be more comfortable giving their credit card information to. The researchers found the following to be trust-enhancing words: safe, unshakeable, credible, honest, protected, and loyal. Words that tore down trust included: hazard, suspect, deceitful, unsafe, disbelieving, and cautious. They discovered that some people are completely brand-oriented, but that most people are influenced by quality and locativeness; for some, locativeness was even the strongest influence.

Discussion:
This paper seemed to touch on a good subject area, but I don't know about the whole locativeness part. It just seemed like introducing a new word that didn't really mean a lot to me. I think that research into what makes websites build credibility with clients is an interesting area, and something we should look into as our society moves further into a technological age.

Predictive Text Input in a Mobile Shopping Assistant: Methods and Interface Design (IUI-2009)

By: Peter Peltonen, Petri Saarikko, Petteri Nurmi, Andreas Forsblom (may have forgotten one or 2)

Summary:
Everyone goes grocery shopping on a regular basis, and many people use grocery lists to keep track of the items they need to purchase. Even though many people acknowledge the importance of shopping lists, little research has been done into their creation and management. This paper discusses a predictive text input technique that uses association rules and item frequencies to build a shopping assistant into a web-based mobile device. There are two types of shopping assistants: those for shopping in malls and those for shopping in individual shops. The ones for malls are more concerned with getting the user to the right store, while the ones for individual shops focus on item-level location. This device was limited to grocery items, which were sampled and then mapped into a shopping list. The list of items they searched through stripped out unit sizes and brand names and simply referred to the base item (an 8oz pkg of Kraft macaroni becomes "macaroni"). They then calculated a Term Frequency-Inverse Document Frequency (TF-IDF) score for each word in a product name and returned the word with the lowest TF-IDF score when a new item was entered. If the association rules did not trigger a text prediction, they fell back to frequent items for prediction.
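As a rough illustration of the TF-IDF step described above, here is a minimal sketch that treats each (already brand-stripped) product name as a tiny document, scores each word, and picks the lowest-scoring word as the generic base item. The catalog, function names, and exact weighting are my own assumptions, not the paper's code.

```python
import math
from collections import Counter

def tfidf_scores(product_names):
    """Score each word of each product name by TF-IDF, treating every
    product name as one short 'document'."""
    docs = [name.lower().split() for name in product_names]
    n_docs = len(docs)
    # document frequency: how many product names contain each word
    df = Counter()
    for words in docs:
        for w in set(words):
            df[w] += 1
    scored = []
    for words in docs:
        tf = Counter(words)
        scored.append({w: (tf[w] / len(words)) * math.log(n_docs / df[w])
                       for w in words})
    return scored

def base_term(scores):
    # per the summary, the word with the lowest TF-IDF score is returned
    return min(scores, key=scores.get)
```

Words that appear across many product names (like "macaroni" in a pasta aisle) get a low score and are picked as the base item, while rare descriptive tokens score higher.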

Users were given two different versions of the interface: one with predictive text and one without. After completing a list with their device, users filled out a satisfaction survey. The predictive text input functionality increased users' input speed by around 5 words per minute on average. It also reduced error rates by about 80% and increased user satisfaction.

Discussion:
I thought the ideas behind this topic were interesting and that the paper was well written. I like the idea of a shopping assistant that would keep track of my grocery list, because I know there are a lot of times where I go to the store to get items that are related. When making spaghetti, for example, you go to the store for butter, noodles, spaghetti sauce, mushrooms, beef, etc. I think this is a very useful line of research that could be extended into many other aspects of everyday life.

Multi-Touch Interaction for Robot Control (IUI-2009)

By: Mark Micire, Jill Drury, Brenden Keyes, Holly Yanco

Summary:
This paper discussed how robot interaction is highly limited when using conventional controls such as joysticks, buttons, and sliders. The authors suggested a multi-touch interaction scheme that would allow a wider array of possible actions and better control of robots. Their screen had many visual aids for controlling a robot, and they ran a user study on how people would use them. The researchers identified three main components of screen interaction: gesture magnitude, gesture alignment, and gesture sequence. Some users treated the magnitude of their gestures as significant (closer for slower, further away for faster), a concept called proportional velocity. The second component, gesture alignment, describes how users' gestures related to the x and y axes: whether users usually went left and then right, or moved diagonally off the main axes. The third component, gesture sequence, is the pattern of primitive actions (defined as touch, drag, and hold). Several emergent behaviors came out of this study: action sequences that were not anticipated in the system's design, but that users experimenting with it came up with. Even though each user was trained in the same way, they all interacted with the robot in their own way. The researchers plan to do more research on interaction placement on the screen (localizing commonly used controls) and on the effects of fatigue on user interaction.
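To make the third component concrete, here is a toy sketch (not from the paper) of classifying finger contacts into the three primitives the summary names: touch, drag, and hold. The thresholds are invented for illustration.

```python
# Illustrative sketch of the touch/drag/hold primitives; the thresholds
# below are assumptions, not values from the paper.
MOVE_THRESHOLD = 10.0   # pixels of travel before a contact counts as a drag
HOLD_THRESHOLD = 0.5    # seconds of contact before it counts as a hold

def classify(duration_s, distance_px):
    """Map one finger contact onto a primitive action."""
    if distance_px > MOVE_THRESHOLD:
        return "drag"
    if duration_s > HOLD_THRESHOLD:
        return "hold"
    return "touch"

def gesture_sequence(contacts):
    """A gesture's sequence is the list of primitives its contacts produce."""
    return [classify(d, px) for d, px in contacts]
```

A quick tap, a long press, and a long swipe would then read as the sequence touch, hold, drag.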

Discussion:
I thought this paper covered an interesting topic, but it was not really as innovative or as well written as some of the other papers. Multi-touch is a very popular area of research, and for good reason. I think that with all of the multi-touch research there will be many advances in this area over the next several years.

Crafting an Environment for Collaborative Reasoning (IUI-2009)

By: Susanne Hupfer, Steven Ross, Jamie Rasmussen, James Christensen, Daniel Gruen, and John Patterson

Summary:
Many problems in the world today are challenging because they span diverse fields of expertise and require sophisticated reasoning beyond the capacity of any one person. These problems need the effort and knowledge of many different people from many different fields to find an acceptable solution, and they are often complex, with changeable situations. This is where the term "sensemaking" comes into play: the motivated, continuous effort to understand connections (among people, places, and events) in order to anticipate their trajectories and act effectively. Many knowledge sources (wikis, etc.) help solve these problems by sharing information, but individually, and without proper use, these tools are still inadequate. Many of the problems may not even have a well-defined solution or ending point. The goal of this paper was to develop an intelligent interface and infrastructure to support people in gathering information, analyzing it, and making decisions based on it. Some important concepts are awareness (knowing who is working on what) and expertise location (knowing where relevant experts are). The main aspects of a collaborative reasoning environment are collaboration, semantics, and adaptability. Collaboration is working together to gather information. Semantics is making sure the information gathered about situations (people, places, events, etc.) is fully understood by all parties. Adaptability lets people adjust to problems with an ever-changing nature.

This led the authors to develop CRAFT (Collaborative Reasoning and Analysis Framework and Toolkit), which uses the above ideas to enable better collaborative reasoning, information sharing, and decision making. The toolkit uses nodes on a graph to represent people, groups, locations, activities, questions, hypotheses, and evidence, allowing a centralized collection of data. The study showed that the toolkit was a good way for users to communicate information.
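To give a feel for what a graph of typed nodes like CRAFT's might look like, here is a toy sketch. The node kinds come from the summary, but the API, names, and example data are my own assumptions, not the toolkit's actual code.

```python
# A toy typed-node graph, loosely inspired by the summary's description
# of CRAFT; everything below is an invented illustration.
from dataclasses import dataclass, field

KINDS = {"person", "group", "location", "activity",
         "question", "hypothesis", "evidence"}

@dataclass
class Node:
    kind: str
    label: str
    links: list = field(default_factory=list)  # connected Nodes

def link(a, b):
    """Connect two nodes both ways so either side can be explored."""
    a.links.append(b)
    b.links.append(a)

# a question linked to a person working on it and a piece of evidence
q = Node("question", "Who is working on the outage?")
alice = Node("person", "Alice")
log = Node("evidence", "server log excerpt")
link(q, alice)
link(q, log)
```

The appeal of this structure is that everything (people, evidence, hypotheses) lives in one centralized graph, so anyone can walk from a question to the people and data attached to it.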

Discussion:
I thought that this paper was rather interesting and well written. I liked the way it first described everything and then tied it together. This made it easier to tie together all of the pieces to understand the whole concept of what they were discussing. I like the concept of being able to share information and collaborate with a group of people through a centralized data structure such as a graph. This could definitely be evolved to be used in a working scenario.

Extending 2D Object Arrangement with Pressure-Sensitive Layering Cues (UIST-2008)

By: Philip L. Davidson, Jefferson Y. Han

Summary:
This paper discusses a pressure-sensitive depth-sorting technique for 2D objects with multi-touch or multi-point controls. Direct manipulation encourages the grouping of objects, especially when rotating, scaling, or translating them. The paper presents two novel techniques for 2D layering: one using a stack of selected objects and one using a drag-and-drop model. The first thing the paper discusses is the multiple studies showing that a human's estimation of pressure is often poor and that pressure is more appropriate as a rate control; it has also been shown that humans achieve better pressure control with visual feedback at the point of contact. A tilt calculation was performed alongside the rotation, scaling, and translation (RST) calculations, depending on the pressure applied. The system detects overlapping layers using a directed acyclic graph (DAG) of tests, which can reveal intersections. A combination of rendering cues shows the tilt to the user by drawing the object "out of plane," which alters its visible outline. Element-to-element layering was found to be useful only when the edges of the objects could be seen; as scenes became more complex it became less useful. For future work, the authors discussed inserting prior overlap relationships into the DAG as lower-priority constraints, and allowing the user to freeze the layering relationships for groups of elements regardless of their overlap state.
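As a rough sketch of why a DAG is handy for layering: if every "drawn above" relationship is an edge and the graph stays acyclic, a topological sort yields a consistent back-to-front draw order. This is my own minimal illustration of that idea, not the paper's implementation.

```python
# Minimal sketch: resolve a back-to-front draw order from pairwise
# "above" constraints, assuming they form a DAG as the paper describes.
# Object names here are invented for illustration.
from graphlib import TopologicalSorter

def draw_order(above):
    """above maps each object to the set of objects it must be drawn over.
    Returns objects in back-to-front order (raises CycleError on cycles)."""
    return list(TopologicalSorter(above).static_order())
```

For example, with a card above a photo and the photo above the table, the draw order comes back as table, then photo, then card.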

Discussion:
To be honest, this paper was a little confusing and not very interesting to me. I understood what they were trying to do, but they seemed to jump around in how they talked about things, going from topic to topic with no real link between them. I felt like individual parts of this paper could be separated into their own papers or paragraphs and still make sense at times. It just seemed to lack a sense of cohesiveness.