Sunday, March 21, 2010

Conversation Clusters: Grouping Conversation Topics through Human-Computer Dialogs (CHI-2009 Round 2)

By: Tony Bergstrom, Karrie Karahalios

Summary:
There are many different ways that information can be logged for future reference, and conversations are among the most commonly archived. Searching through them is often difficult, frequently requiring reading the entire record to find what is needed. This paper tries to solve that problem by combining the strengths of machines and people: computers can store large amounts of information, while people can apply judgment based on the context of the message being received. The paper presents two different views of the information: a topic view and a historical view. The topic view shows the topics discussed over the course of a meeting, while the historical view lets the user see the progression of topics as time passes. Conversation Clusters emerged from this research as an attempt to bridge communication between human verbal language and machines. The system uses a dynamic visualization on a shared public tabletop to let users see the most recent discussions and the topics associated with them, displaying topics of conversation as a thread history. Threads can merge, split, and even die when a topic is no longer discussed. The researchers extracted topic words from conversations by having each person in the conversation wear a microphone and recording them. Their research showed that Conversation Clusters successfully represented what a conversation was about and made lookups faster and easier.

Discussion:
This paper seemed kinda cool, but kinda not. I can see the importance of it, but it is not something that I would be interested in at all. The paper seemed rather draggy and not very well written. It made it rather dull to read. The concept of conversation clusters however was very interesting and I think that it is a good design for archiving things in the future.

TypeRight: A Keyboard with Tactile Error Prevention (CHI-2009 Round 2)

By: Alexander Hoffman, Daniel Spelmezan, Jan Borchers

Summary:
This paper was about TypeRight, a new input device for text entry that combines the advantages of tactile feedback with the error prevention methods of word processors through magnets. It does this by extending the regular keyboard so that a key whose letter would create a spelling error is harder to press than one that would not. This tactile feedback is important for letting users know that an input error needs to be corrected, and it conveys additional information. Typing errors can be addressed in one of three ways: prevention, live correction, and aftercare. Prevention tells users that an error will be made before they make it. Live correction fixes errors while the typing is happening. Aftercare is the most widely used and takes place after the fact. Prevention is the strictest of these. Another way to prevent errors would have been to add a short timeout during which nothing can be pressed, but this is not a good solution because the timing is never right: it would end up either too slow or too fast to be of use. The study found that the number of backspace key presses was reduced by 46% when using TypeRight, and that TypeRight reduced the number of mistyped letters by 87%. For novice users the average typing time was very similar, but for expert TypeRight users typing speed was shown to increase by 10%. One thing they would like to work on in the future is making TypeRight quieter.

Discussion:
I thought this was an interesting area of research, but I’m not sure how much faster the human body can type without some help from other sources. I think the keyboard has a good design, but that to see a significant improvement we are going to have to see something off the wall emerge. For now though, the TypeRight seems like a reasonable solution to allowing users to type faster.

Designing Trustworthy Situated Services: An Implicit and Explicit Assessment of Locative Images’ Effect on Trust (CHI-2009 Round 2 – Assigned one)

By: Vassilis Kostakos, Ian Oakley

Summary:
This paper studies a visual design element called locativeness: the extent to which media presenting a service relates to the physical environment the service is in. Because mobile devices can wirelessly access many services, the design of these services and what invokes trust in them has been highly speculative. The researchers wanted to discover which areas of design add to, or take away from, a service's trust. The study was conducted completely online and consisted of three stages of experimentation. The first stage was demographics capture, where the researchers gathered data such as age, gender, and occupation from the subjects through a questionnaire. The second stage gathered data through an implicit assessment to distinguish which words each subject associated positively or negatively with trust. The last part showed subjects pairs of websites and asked which one they would be more comfortable giving their credit card information to. The researchers found the following to be trust-enhancing words: safe, unshakeable, credible, honest, protected, and loyal. Words that tore down trust included: hazard, suspect, deceitful, unsafe, disbelieving, and cautious. They discovered that some people are completely brand-oriented, but that most people are influenced by quality and locativeness; for some, locativeness was even the strongest influence.

Discussion:
This paper seemed to touch on a good subject area, but I don’t know about the whole locativeness part. It just seemed like introducing a new word that didn’t really mean a lot to me. I think that research into what makes websites build credibility with clients is an interesting area of research and something that we should look into as our society becomes ever more technological.

Predictive Text Input in a Mobile Shopping Assistant: Methods and Interface Design (IUI-2009)

By: Peter Peltonen, Petri Saarikko, Petteri Nurmi, Andreas Forsblom (may have forgotten one or two)

Summary:
Everyone goes grocery shopping regularly, and many people use grocery lists to keep track of the items they need to purchase. Even though many people acknowledge the importance of shopping lists, little research has been done into their creation and management. This paper discusses a predictive text input technique that uses association rules and item frequencies to integrate a shopping assistant into a web-based mobile device. There are two types of shopping assistants: those for shopping in malls and those for shopping in individual shops. The ones for malls are more concerned with getting the user to the store, while the ones for individual shops focus on item-level location. This device was limited to grocery items, which it sampled and then mapped into a shopping list. The list of items they searched through dropped unit sizes and brand names and simply referred to the base item (8oz pkg of kraft macaroni = macaroni). They then calculate a Term Frequency-Inverse Document Frequency (TF-IDF) score for each word in the product name and return the word with the lowest TF-IDF score when a new item is entered. If the association rules do not trigger a text prediction, they fall back to predicting a frequent item.
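The base-item idea can be sketched roughly like this; the corpus, function name, and scoring details are my own illustrative assumptions, not the paper's actual implementation:

```python
import math
from collections import Counter

def base_item(name, corpus):
    """Return the word in `name` with the lowest TF-IDF score relative to
    `corpus` (a list of product-name strings). The intuition: the most
    generic word across similar products is the base item."""
    n = len(corpus)
    df = Counter()  # document frequency: how many names contain each word
    for doc in corpus:
        for w in set(doc.lower().split()):
            df[w] += 1
    words = name.lower().split()
    tf = Counter(words)

    def score(w):
        # term frequency in this name, scaled by inverse document frequency
        return (tf[w] / len(words)) * math.log(n / df.get(w, 1))

    return min(words, key=score)
```

Against a toy corpus of macaroni products, `base_item("kraft macaroni dinner", corpus)` would pick "macaroni", since a word that appears in every product name gets the lowest IDF.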

Users were given two different versions of the interface: one with predictive text and one without. After completing a list with their device, users filled out a satisfaction survey. The predictive text input functionality increased the user’s input speed by around 5 words per minute on average. It also reduced error rates by about 80% and increased user satisfaction.

Discussion:
I thought the ideas behind this topic were interesting and that the paper was well written. I like the idea of a shopping assistant that would keep track of my grocery list because I know there are a lot of times where I go to the store to get items that are related: when making spaghetti, you go to the store for butter, noodles, spaghetti sauce, mushrooms, beef, etc. I think this is a very useful line of research that could be extended into many other aspects of everyday life.

Multi-Touch Interaction for Robot Control (IUI-2009)

By: Mark Micire, Jill Drury, Brenden Keyes, Holly Yanco

Summary:
This paper discussed how robot interaction is highly limited when using conventional controls such as joysticks, buttons, and sliders. The authors suggested a multi-touch interaction scheme that would allow a better array of possible actions and better control of robots. Their screen had many visual aids for controlling a robot, and they ran a user study on how people would use them. The researchers identified three main components of screen interaction: gesture magnitude, gesture alignment, and gesture sequence. Some users seemed to think the magnitude of their gestures was significant (closer for slower, further away for faster, etc.); this concept is called proportional velocity. The second component, gesture alignment, describes how users' gestures related to the x and y axes: whether users usually went left and then right, or moved diagonally off the main axes. The third component, gesture sequence, is the pattern of primitive actions (defined as touch, drag, and hold). Several emergent behaviors came out of this study: actions that were not anticipated as initial action sequences, but that users experimenting with the system came up with. Even though each user was trained the same way, each interacted with the robot in their own way. The researchers plan to do more research on interaction placement on the screen (localizing commonly used controls) and the effects of fatigue on user interaction.
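Proportional velocity, as described above, amounts to mapping drag distance to speed. A minimal sketch; the scaling constants and function name are illustrative, not from the paper:

```python
def proportional_velocity(start, current, max_speed=1.0, max_dist=200.0):
    """Map the distance of a drag gesture from its start point to a robot
    speed: a longer drag means a faster robot, capped at max_speed."""
    dx = current[0] - start[0]
    dy = current[1] - start[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return min(dist / max_dist, 1.0) * max_speed
```

A drag of 100 pixels with these constants yields half speed; anything past 200 pixels saturates at full speed.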

Discussion:
I thought this paper covered an interesting topic, but it was not really as innovative or as well written as some of the other papers. Multi-touch is a very popular topic of research, and for good reason. I think that with all of the multi-touch research there will be many advances in this area over the next several years.

Crafting an Environment for Collaborative Reasoning (IUI-2009)

By: Susanne Hupfer, Steven Ross, Jamie Rasmussen, James Christensen, Daniel Gruen, and John Patterson

Summary:
Many problems in the world today are very challenging because they span diverse fields of expertise and require sophisticated reasoning beyond the capacity of one person. These problems tend to need the effort and knowledge of many different people from many different fields to find an acceptable solution. Often they are complex and involve changing situations. This is where the term “sensemaking” comes into play: it is defined as the motivated, continuous effort to understand connections (among people, places, and events) in order to anticipate their trajectories and act effectively. There are many knowledge sources (wikis, etc.) that help solve these problems by sharing information, but individually, and without proper use, these tools are still inadequate. Many of the problems may not even have a well-defined solution or ending point. The goal of this paper was to develop an intelligent interface and infrastructure to support people in gathering information, analyzing it, and making decisions based on it. Some important concepts are awareness (knowing who is working on what) and expertise location (where relevant experts are located). The main aspects of a collaborative reasoning environment are “Collaboration, Semantics, and Adaptability.” Collaboration is working together to get information. Semantics is making sure the information gathered about situations (people, places, events, etc.) is fully understood by all parties. Adaptability allows people to adapt to problems that have an ever-changing nature.

This led the researchers to develop CRAFT (Collaborative Reasoning and Analysis Framework and Toolkit), which uses the above ideas to support better collaborative reasoning, information sharing, and decision making. The toolkit uses nodes on a graph to represent people, groups, locations, activities, questions, hypotheses, and evidence, allowing a centralized collection of data. The study showed that the toolkit was a good way for users to communicate information.

Discussion:
I thought that this paper was rather interesting and well written. I liked the way it first described everything and then tied it together. This made it easier to tie together all of the pieces to understand the whole concept of what they were discussing. I like the concept of being able to share information and collaborate with a group of people through a centralized data structure such as a graph. This could definitely be evolved to be used in a working scenario.

Extending 2D object Arrangement with Pressure-Sensitive Layering Cues (UIST-2008)

By: Philip L. Davidson, Jefferson Y. Han

Summary:
This paper discusses a pressure-sensitive depth sorting technique for 2D objects with multi-touch or multi-point controls. Direct manipulation encourages the grouping of objects, especially when rotating, scaling, or translating them. The paper presents two novel techniques for 2D layering, using a stack of selected objects as well as a drag-and-drop model. The first thing the paper discusses is the multiple prior studies showing that human estimation of pressure is often poor, making pressure more appropriate as a rate control; it has also been shown that people achieve better pressure control with visual feedback at the point of contact. A tilt calculation was performed alongside the rotation, scaling, and translation (RST) calculations, depending on the pressure applied. The system uses a directed acyclic graph (DAG) of intersection tests to detect overlapping layers. A combination of rendering cues shows the tilt to the user: moving an object “out of plane” alters its visible outline. Element-to-element layering was found to be useful only when the edges of the objects could be seen; as arrangements became more complex it became less useful. For future work, they talked about inserting prior overlap relationships into the DAG as lower-priority constraints. They also suggested allowing the user to freeze the layering relationships for groups of elements regardless of their overlap state.
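A DAG of overlap relations can be turned into a back-to-front draw order with a topological sort. A minimal sketch, assuming a simple list of (a, b) pairs meaning "a occludes b" (the representation and names are mine, not the paper's):

```python
from collections import defaultdict, deque

def layer_order(above):
    """Given pairs (a, b) meaning 'a is drawn on top of b', return a
    back-to-front order using Kahn's topological sort."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for a, b in above:
        nodes.update((a, b))
        succ[b].append(a)  # b must be drawn before a
        indeg[a] += 1
    q = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while q:
        n = q.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                q.append(m)
    return order
```

Because the overlap graph is acyclic, this always produces a consistent stacking; the lower-priority "prior overlap" constraints the authors mention would only be consulted where the current overlaps leave the order ambiguous.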

Discussion:
To be honest, this paper was a little confusing and not very interesting to me. I understood what they were trying to do, but they seemed to jump around with how they talked about things, going from topic to topic with no real links between them. I felt like individual parts of this paper could be separated into their own papers or paragraphs and still make sense. It just lacked a sense of cohesiveness.

SideSight: Multi-touch Interaction Around Small Devices (UIST-2008)

By: Alex Butler, Shahram Izadi, Steve Hodges

Summary:
There are many problems that can occur when trying to interact with small devices (such as mobile phones) that have little screen real estate. Fingers can occlude the screen, making it hard to use and hard to see everything that is needed. For many small devices a touch screen display is completely impractical, almost impossible. This paper talks about a prototype device that uses infra-red proximity sensors embedded along the sides of a device to detect the movement and distance of fingers, giving the device a larger input area. One approach that has been proposed to solve this problem is the stylus, but it brings its own problems (another object to carry, and one that can be lost).

The SideSight prototype works by first thresholding the sensor image and then carrying out a connected component analysis. It was discovered that when people interact with one finger, their other fingers often get in the way of what they are trying to do. The researchers are currently experimenting with ways to make the device track multiple fingers for greater flexibility of interaction. In conclusion, it was shown that the input area of a device can be extended beyond its physical footprint, and that this can sometimes be a more intuitive way to interact with the device because of the increased utility.
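Threshold-then-label is a standard vision pipeline. A toy sketch of the idea on a tiny 2D "proximity image" (the real prototype's processing surely differs):

```python
from collections import deque

def label_fingers(frame, threshold):
    """Threshold a 2D intensity grid, then label 4-connected components;
    each component is a candidate finger. Returns (count, label grid)."""
    h, w = len(frame), len(frame[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and labels[y][x] == 0:
                count += 1
                q = deque([(y, x)])  # flood-fill this blob via BFS
                labels[y][x] = count
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and frame[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return count, labels
```

Two separated bright blobs in the frame come out as two labeled components, i.e. two tracked fingers.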

Discussion:
I think this is a very important area of study to continue looking into because it addresses a constant problem in society: cell phones and music players keep getting smaller and smaller, which leaves less and less room for interaction. At the same time, users are getting more acclimated to increased functionality and are demanding more features that are easier to use. Any research into this topic is highly important to our technological future.

Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles (UIST-2008)

By: David T. Gallant, Andrew Seniuk, Roel Vertegaal

Summary:
This paper talks about Foldable User Interfaces (FUIs), which combine a 3D GUI with the physical properties of a piece of paper. The interface is driven by a Foldable Input Device (FID): basically a piece of construction paper with embedded IR reflectors that are tracked by computer vision. FIDs allow many interaction techniques, including folding, bending, flipping, stacking, thumb slides, scooping, top corner bends, leafing, squeezing, hovering, and shaking. These foldable input devices were highly influenced by origami. Interactions that can be achieved with these techniques include, but are not limited to, navigation, browsing, selection, sorting, making origami, and zooming. Navigation is done by picking up the FID and moving it. Selection is achieved with a transparency sheet and a thumb slide. Origami is achieved by folding the paper into complex shapes through a series of folds, which are made permanent with a shaking technique. Browsing is accomplished with the leafing technique to go to the next page in a document. Lastly, zooming is achieved with the hover technique.

Discussion:
At first this paper was a little hard for me to wrap my mind around, but once I got down the concept of a FID it was a rather interesting implementation of a new input device. I had never thought of using a piece of paper as an input device to control something else, much like a mouse or keyboard would, but now that it has been mentioned it is a unique but sensible invention. I am interested to see where this goes over the years and how much smaller or thinner our devices get while keeping good user interaction.

Inferring Player Engagement in a Pervasive Experience (CHI 2009 – Round 1)

By: Joel E. Fischer, Steve Benford

Summary:
This paper was about a game called Day of the Figurines, which required users to send and receive messages every day so that they could explore a town, chat with and help other players, and receive missions from the game. Players were usually unhappy when game notifications interrupted normal life, or disappointed when they were not notified soon enough to act on something they cared about. The gaming experience would be greatly improved if the game could notify users more or less depending on their level of engagement with the game. It was discovered that engagement could be inferred to a certain degree from the elapsed time (et) between two player turns or activities, and from the response time (rt): the time it took a player to respond to a notification. The results of the experiment showed that a player's level of engagement can be determined from elapsed and response times, and that systems can adapt to the user after detecting it. Some advantages are that the user can receive more or fewer messages as wanted, and that other players can be notified of their disengagement. Another potential benefit is a summary of what happened while the user was not engaged. Any system using this would, however, need a mechanism for the user to override the adaptation if it is unwanted or improperly interpreted.
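The response-time idea can be sketched as a simple classifier; the thresholds and labels below are illustrative assumptions, not the paper's actual model:

```python
def engagement_level(response_times, fast=60.0, slow=600.0):
    """Classify a player's engagement from recent response times (seconds
    from notification to reply). Quick average replies suggest an engaged
    player; very slow ones suggest disengagement."""
    if not response_times:
        return "unknown"
    avg = sum(response_times) / len(response_times)
    if avg <= fast:
        return "engaged"
    if avg >= slow:
        return "disengaged"
    return "neutral"
```

A game could then throttle its message rate for "disengaged" players and raise it for "engaged" ones, while still letting the player override the adaptation.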

Discussion:
I think that this was an interesting paper because it could be applied to more than just games. I think that a similar response and elapsed time concept could be put into effect on websites and other user applications. It would be useful for websites to limit or increase the number of email notifications received, and could be used in other applications to conserve computer power. There are many other ways this could be applied.

Going My Way: A User-aware Route Planner (CHI 2009 – Round 1)

By: Jaewoo Chung, Chris Schmandt

Summary:
When people get directions, they typically find a nearby location they are familiar with and then get directions to the desired location from there. “Going My Way” attempts to do something similar: it first learns the areas you typically travel through, then identifies familiar spots close to your desired destination, and finally presents a set of directions based on familiar landmarks. The implementation ran on a phone backed by a server that stored GPS data, with a UI for requesting directions. Part of the work in this paper was determining which landmarks are memorable and why; to do this, they asked users sets of questions about the locations of different places. They discovered that people remember places near an intersection better, and remember unique places better than chain stores. Next, users used “Going My Way” to find their way to certain locations. The application collected user-specific landmarks and found that users typically use less than a quarter of the landmarks stored in its database. The results of the directions test showed that “Going My Way” is far more useful when the user is traveling in an area they are somewhat familiar with, so that they recognize the landmarks more easily. Some users said it was easier to visualize a location when they got to explore all of the landmarks around it.

Discussion:
I thought this was a very interesting paper because this is the way that many people get directions. I can’t count the number of times where I have been going out to eat and not been sure where I was driving to. When I ask for directions, 95 percent of the time the response I get is similar to “Right across the street from ” or “Behind the shopping center that is in.” I think that if this were further modified so that users could use it in unfamiliar areas, it would be even more useful.

Tuesday, March 2, 2010

Emotional Design

Summary:
There were three main aspects of product design discussed in this book: visceral, behavioral, and reflective. Visceral design is judged very quickly by the user. It is usually triggered by one of the senses (sight, sound, smell, taste, or touch). This occurs in such a way that our mind is made up before we even know how, because this response usually runs on a subconscious level and is usually inalterable. You either like the way it looks or you don't. Sometimes it is associated with phrases such as "The product is me" or "That product is not me."

Behavioral design is based on the way the object is used and how happy the user is while using it. It is usually triggered by a design that the user deems "good" or "bad." "Bad" designs will usually lead to an unhappy user and can make the muscles tense up. "Good" designs usually lead to enhanced productivity as well as the possibility of entering a state of flow, which allows the user to operate at optimum efficiency: not too easy, but not too hard either, just enough to keep the brain fully engaged throughout. Iterative testing can benefit here.

Reflective design is based on each individual and their intellect. It is affected greatly by the user's understanding of the product, how they feel about the product, and how they integrate their self-image into it. It is very vulnerable to cultural variability, education, and personal experience.

Discussion:
While I thought this book was a little long-winded once again, I feel like the author was not quite as long-winded as before, and that it was much better. I thought it was a very interesting concept that he incorporated ideas on reflective, behavioral, and visceral design into how we view everyday things. I think he had a lot of good facts in the book, and things I hadn't thought about. I agree with many things he said about how we use things, and how our user experience is affected by all of our senses, as well as by what memories are tied to them.

Thursday, February 25, 2010

Bringing Physics to the Surface (UIST-2008 Assigned Reading)

By: Andrew Wilson, Shahram Izadi, Otmar Hilliges, Armando Garcia-Mendoza, David Kirk

Summary:
This study explored the intersection of surface technologies with advanced game physics engines. The surface technologies discussed are those capable of sensing multiple contacts as well as some shape information. A lot of work has been done recently on interactive surfaces, along with the need to provide richer and more realistic 3D interactions. For this to be possible, a physics engine was needed that allowed appropriate interaction with virtual objects. They used the following strategies to describe these interactions: direct force (the contact point applies force directly to a virtual object), virtual joints and springs (the contact is connected to the object by a link, so the object is dragged behind it), proxy objects (objects near the contact that receive friction forces), particles (where shape information is available), and deformable meshes (an approach to model 2D or 3D shapes). There are two strategies for moving objects: applying an external force directly and updating the position of the object to show what is being done, or attaching a rope to the object and moving it indirectly. To further understand the utility of these techniques, a study was done in which participants performed three simple physics tasks while the researchers analyzed behavioral and experiential aspects of the interaction. The results showed that even though the more familiar approaches offered more predictable control, the particle proxy approach can also offer good performance, along with new modes of interaction (for example, cupping a ball). One problem with the system is that there is no way to determine how hard or soft the user's interaction with the objects is (no way to establish the desired magnitude of force). It was also discovered that virtually grabbing an object based on contacts at its edges is very difficult.
In the future the researchers wish to improve the basic sensing techniques of the system so that things like grabbing an object are easier.

Discussion:
I thought this was a very drawn-out and dull paper. It is an interesting area of research, but the paper was not very well organized and seemed to jump around a bit. I think the idea of incorporating interactive surface input into a real-time physics simulation is cool, and that it could be extended to make advances toward virtual reality.

Do You Know? Recommending People to Invite into your Social Network (IUI-2009 Assigned Reading)

By: Ido Guy, Inbal Ronen, Eric Wilcox

Summary:

This paper talks about a UI and system that recommends people it thinks are good candidates for your social network. The system is based on aggregated information about people's relationships, retrieved using SONAR, a system for collecting and aggregating social network information across an organization. SONAR extracts an employee's social network from information about relationships between people, including organizational charts, paper co-authorships, patent co-authorships, direct connections, and several other things associated with IBM. The UI allows scrolling through recommended people one by one while showing the “relationship evidence” for why each person was selected as a candidate. People are recommended based on an aggregated set of social network information. The interface, called the “Do You Know” (DYK) widget, is a new addition to the homepage of IBM's next-generation employee directory. The recommender shows a profile of each person as well as a list of things the user and candidate have in common, and recommends people you may be familiar with but are not yet connected to.
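Aggregating relationship evidence into a ranked candidate list can be sketched as a weighted sum; the evidence types, weights, and function name here are illustrative assumptions, not SONAR's actual model:

```python
def recommend(evidence, connected, top_k=20):
    """Score candidates by summing weighted relationship evidence and
    drop people the user is already connected to. `evidence` is a list
    of (person, evidence_type) pairs."""
    weights = {"coauthor": 2.0, "org_chart": 1.0, "patent": 1.5}
    scores = {}
    for person, etype in evidence:
        if person in connected:
            continue  # already in the network, nothing to recommend
        scores[person] = scores.get(person, 0.0) + weights.get(etype, 0.5)
    # highest aggregated evidence first, truncated to the shortlist size
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

The `top_k` cutoff echoes the paper's finding that users rarely looked past the first 20 results, so ranking quality at the top of the list matters most.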

The evaluation consisted of two parts. The first was a field study in which the use of DYK was monitored for four months. The second was a qualitative user study that included interviews and surveys.

There were some important discoveries about the usage of the DYK system. Many users said that even though the links between them and a candidate were already known to them, showing the evidence was still useful because it invoked trust in the application. They also found that even though users were sometimes given over 100 results, users of DYK rarely searched through more than the first 20.

The evaluation showed that people recommendations can be highly effective in increasing the number of connections between users as well as the overall number of users on a social network site. Greater ease in finding potential connections was also found to enhance user utilization of the site.

Discussion:

This paper seemed very similar to an expert recommender system paper that I read earlier in the semester. As I said then, I think this is an interesting area of study. Being able to access data about people and then draw links between them is very valuable; it is important to be able to contact people when needed, especially for work. With the advancement of social networking systems, I feel we are going to see a lot of progress in this area. I thought this paper was a little drawn out, but for the most part well written.

Tuesday, February 16, 2010

The Inmates are running the Asylum (Part 1)

Summary:
This book talked a lot about how the use of technology in the world around us is ever increasing. It also argued that the current design process used by many software developers is based on what they feel the user wants, on what is easy to code, or on features the developers themselves would want, and not necessarily on what the user actually wants. The book also discusses the difference between people: there are "Homo logicus," who want to know how things work and are typically programmers, and "Homo sapiens," who just want to use the system and don't care about its inner workings.

Discussion:
I thought that some of the points presented in the first half of this book were very intriguing. I particularly enjoyed the discussion about "Homo Logicus vs Homo Sapiens." The more I read, the more I realized that what the author was saying was completely true. There are people who like to see how things work, and people who don't care how they work, but just want to use the system. I thought for the most part the first half of this book had some good points, although some of it was rather drawn out.

Tuesday, February 9, 2010

WikiFolders: Augmenting the Display of Folders

Summary:
Traditional file systems let users see and modify their file hierarchies manually, but it may be difficult to find files, sort them, or remember the relationships between them. The normal means of editing are very limited and often do not provide enough information for users to complete their tasks. People often use readme files to work around having to remember all the intricacies of a given file system arrangement. This article discussed WikiFolders, a hybrid system for annotating file systems that builds wiki-like functionality on top of the regular file system while removing its weaknesses. The end view of a WikiFolder is similar to a traditional file view with some modifications: the user can see annotations for each file, the icons have added functionality, and these changes require no wholesale changes to the underlying system. Any existing folder can be converted into a WikiFolder with the click of a button.

Discussion:
I thought that this was a very interesting paper to read. I know that having a more descriptive file system would definitely help with my everyday work on the computer. I would like to see more research into better ways to display files, as well as more flexibility with the icons.

Monday, February 8, 2010

Expert Recommender Systems in Practice: Evaluating

Summary:
This article dealt with knowledge management (KM), a field that has advanced by leaps and bounds in recent years. A second wave of KM applications is now being postulated that would share knowledge among social networks of human actors (those seeking the knowledge). The paper focuses on expert recommender systems (ERS), which find appropriate knowledge carriers based on expertise profiles. The difficult part of building expertise profiles is designing them to be quick to create, effective, and useful within the ERS. This paper discusses an ERS that combines self-reported information with keywords mined from a user's files. The basic job of this system, as with any ERS, is to take a description of a needed piece of knowledge as input and output a list of sources (users with profiles) most likely to possess that knowledge. The system was studied in a European industrial association, called the NIA, which offers services such as networking among member companies, legal regulation, and standardization to its members. The NIA has a highly decentralized organizational structure with many gaps in the transfer of knowledge between sections, and the study was aimed at increasing knowledge sharing between departments and members of the association. The ERS tested at the NIA was called ExpertFinding, and it was created by studying the organizational needs, designing a prototype to meet those needs, and then rolling out and evaluating the system. ExpertFinding's main purpose is to redirect question requests to the person in the association most likely to be able to answer them competently. Profiles for ExpertFinding were created from two sources. First, a large-scale keyword list mined from arbitrary text documents helped establish a user's competencies. Second, a listing of contact information and other facts about the user (education, job description, etc.) helped personalize the profile. A pattern that emerged among the testers was that they needed to feel adequately represented by the system in order to want to use it. Some testers wished there were a filter to remove irrelevant terms from the searches. Some of the problems encountered were that the LSI matching algorithm was not the most efficient and that the selection mechanisms were not well suited to the file system that emerged. One proposed solution was an NIA-specific thesaurus to help filter out irrelevant words and phrases. The findings showed that this ERS could generate profiles that were accurate for the job, though not always complete.
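The core matching step (query description in, ranked experts out) can be sketched with simple cosine similarity over keyword counts. This is not the paper's LSI implementation, just an illustrative stand-in; the profile and query data are made up.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two keyword-count dictionaries."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def find_experts(query, profiles):
    """profiles: {name: Counter of profile keywords}; rank names by match."""
    q = Counter(query.lower().split())
    return sorted(profiles, key=lambda n: cosine(q, profiles[n]), reverse=True)

profiles = {
    "ann": Counter({"polymer": 3, "coating": 2}),
    "ben": Counter({"logistics": 5}),
}
ranked = find_experts("polymer coating question", profiles)
```

A domain-specific thesaurus, as the paper proposes, would slot in before the `Counter` step to drop or normalize irrelevant terms.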

Discussion:
I thought this was a rather interesting article because the idea seems genuinely useful in the working world. At my job there are times when I find out someone has already solved a problem I was working on, but only after I have solved it myself. It would be a great help to be able to look up who has worked on something similar and talk to them about their findings. I also think a company-specific thesaurus would be a good way to improve the system.

Comment:
http://computerhumaninteractionblog.blogspot.com/2010/02/social-computing-privacy-concerns.html#comment-form

http://shauntgo.blogspot.com/2010/02/uist-predicting-tie-strength-with.html#comment-form

Thursday, February 4, 2010

"Pimp My Roomba”: Designing for Personalization

Summary:
The customization of personal objects and technological devices, such as cell phones and mp3 players, is becoming more and more important. Existing studies find that personalizing objects increases users' sense of ownership and satisfaction. This article set out to study whether personalization led to positive outcomes in people's experience with the Roomba. It was a six-month study involving 30 households: 15 were given personalization options for the Roomba, while the other 15 had no idea personalization was even possible. The four effects of personalization identified were perceived ease of use, recognizing one's own device among others, reflection of personal identity, and a feeling of control. The researchers reported that people want to personalize things out of a motivation for self-expression, or when they use the technology frequently enough to be comfortable with how it works and where they want it customized. Households that customized their Roombas reported that they felt more connected to it and that it felt like theirs, not just a robot. Some people also personalized it to reflect their own preferences. The researchers further showed that personalization does not just occur naturally; it can be encouraged through the appliance's design and the choices made available. Users also brought to the researchers' attention that any designs need to endure daily wear and tear to remain pleasing.

Discussion:
Honestly, I read this article because of the title. It was very interesting to see how users felt about personalizing a home appliance such as the Roomba. I think this is a good area of study because it will allow users to be happier with their purchases. The next step should be to extend the study to more homes and to other appliances, perhaps the coffee machine or dishwasher.

“My Dating Site Thinks I’m a Loser”: Effects of Personal Photos and Presentation Intervals on Perceptions of Recommender Systems

Summary:
When interacting human to human, people perform actions and behave in certain ways to show everyone who they are. When interacting with a computer, this kind of self-representation is very difficult to achieve, and it can have negative outcomes when the computer makes inferences about what to show the user based on that representation. When the computer makes bad suggestions, the user will not only think poorly of the system; it can also trigger behavioral changes as the user tries to achieve the representation they want. Personalized recommendation systems can present their results in two ways: intermittently, as if responding to the user's input as it comes in, or all at once at the end of the user's input. This article tested a web-based dating site called MetaMatch. Participants answered a set of questions, but no matter what their answers were, everyone received the same predetermined set of results, which were designed to be undesirable. Users filled out a dating questionnaire, viewed the results, and then filled out a post-questionnaire about their experience with the site. Half of the users were shown a resulting photo after every 10 questions, while the other half were shown all the resulting photos at the end of the questionnaire. The results showed that the way data is gathered and presented online has profound consequences for how users interact with a site. Frustration was highest among users who received intermittent results that did not match their preferences.

Discussion:
I thought this was an interesting paper because I agree that the way information is presented greatly affects users' perception of a site. It goes to show that there are correct ways to present and gather data, and incorrect ways that foster frustration and unhappiness with the system. I think follow-up studies should look not only at web dating but at website layout in general.

Tuesday, February 2, 2010

The Design of Everyday Things by Donald Norman


Summary:
This book tries to explain some of the many flaws in the design of things we use every day, goes into ways those designs could be improved, and describes a process for designing them better. Norman uses POET (the Psychology of Everyday Things) to frame a user-centered design process. The principles of design he lays out include: use both knowledge in the world and knowledge in the head, simplify the structure of tasks, make things visible, get the mappings right, exploit the power of constraints, design for error, and standardize. Using knowledge in the world means that the clues needed to use a design can be found in the environment, while knowledge in the head is knowledge the user carries with them. Simplifying the structure of tasks means the design should make things as simple as possible; if things are too complicated, restructure them so they are not. One way to achieve this is by keeping tasks consistent and providing mental aids to guide the user through the design. Making things visible means the design should bridge both the gulf of evaluation (the gap between the system's state and the user's ability to perceive and interpret it) and the gulf of execution (the gap between the user's intentions and the actions the system allows). It is also very important to get the mappings right: what the user wants to do needs to match up with the controls and how they behave. Exploiting the power of constraints can greatly limit the errors a user can make, because it makes many wrong options impossible to even attempt. There are several types of constraints: physical constraints (it can't physically be done), semantic constraints (it only makes sense to do it this way), cultural constraints (culture dictates how something works based on a standard or accepted norm), and logical constraints (the conclusion we reach by reasoning about the problem). Designing for error means the design should take into account that errors will be made, because we are human; by making errors easy to fix, the overall design is improved.

Another topic the book discussed was memory and how short-term and long-term memory work. We can only hold about five to seven unrelated items in short-term memory at a time. The problem with long-term memory is that it is harder to access and sometimes contains errors. We remember things by several different means. There is memory for arbitrary things, where the items have no relationship to each other whatsoever. There are memories of meaningful relationships, which connect to each other and to things already known, making them easier to remember. Lastly, there are memories through explanation, which are derived from an explanation rather than learned directly.

Discussion:
Overall I found a lot of this book repetitive. It had some interesting points about the design process and about how things we use every day really do have faulty designs, but it kept repeating the process and the reasons things were faulty over and over, which made the book rather monotonous to read. I feel like all the useful information and examples could have been discussed thoroughly in about half the space. The book also felt a bit outdated, since it was first published so long ago. It had valid points, but in a few years I feel some of the examples will go over students' heads because they will not have been exposed to the devices described.

Ethnography Idea

My ethnography idea is to glue several different coin denominations to the ground in high-traffic areas and see how many people try to pick them up. The goal is to see whether there is a significant break between the number of people who would pick up a nickel and the number willing to pick up a quarter (or other similar breaks).

Tuesday, January 26, 2010

Optically Sensing Tongue Gestures for Computer Input

Summary:
Spinal cord injuries and other medical conditions often leave patients severely paralyzed. Many of these patients are still capable of higher-level thinking and could communicate with other people if given the opportunity. In this paper, optical sensors were embedded into an orthodontic dental retainer, providing the possibility of communication through tongue movement. A retainer is a simple solution to design for each user, as well as a low-profile one, so the user does not have a large apparatus drawing further attention to their disability. To build the retainer, a physical impression of the mouth is made first. A tinfoil separating layer is then placed on the mold so the retainer can be released when complete. Four proximity sensors (left, right, front, back) are then added to the retainer and embedded in acrylic. Desktop software was created to recognize tongue gestures and provide real-time feedback; the system recognizes a left swipe, right swipe, tap up, and hold up gesture. An experiment was then done to see how users would react to the retainer; several laboratory tasks as well as a game of Tetris were designed to test the device. It was discovered that the shape of the tongue at various points in the mouth is often not under the user's control. The current prototype is wired, which would not work for real-world use, so a wireless version is now being designed. Additional work is going into monitoring other mouth activity such as jaw tension, movement, and even chemical changes in the saliva.
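A rough sketch of how the four gestures might be recognized from the proximity sensors: swipes are detected from the order in which the left and right sensors trigger, and a front contact is split into tap vs. hold by its duration. The event format and the hold threshold are my assumptions, not details from the paper.

```python
def classify(order, duration, hold_threshold=0.5):
    """order: sequence of sensors triggered, from ('L','R','F','B');
    duration: seconds the tongue stayed in contact with the last sensor."""
    if order[:2] == ("L", "R"):
        return "swipe_right"      # left sensor fires first, then right
    if order[:2] == ("R", "L"):
        return "swipe_left"       # right sensor fires first, then left
    if order and order[0] == "F":
        # single front contact: distinguish tap from hold by duration
        return "hold_up" if duration >= hold_threshold else "tap_up"
    return "none"
```

A real recognizer would have to cope with the finding the authors report, that the tongue's shape at various points in the mouth is not fully under the user's control, so thresholds would likely need per-user calibration.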

Discussion:
I thought this was a very interesting area of research. Although my own use would involve playing video games in class with my tongue, I do believe this is worth developing further because it would allow disabled people to communicate in a new way. I think this was a well-thought-out paper because it included several promising ideas for future work.

A Practical Pressure Sensitive Computer Keyboard


Summary:
Although there have been many successful advancements in computing, computer keyboards have seen only minor changes since their invention. The problem with many of the technologies that could replace keyboards is that they carry significant cost barriers, preventing mass adoption. This paper addresses the problem with a pressure-sensitive keyboard whose sensors are based on pressure-sensitive ink placed under each key. A conventional keyboard uses a flexible membrane and rubber dome that provide tactile feedback: when a key is pressed, the stack of three membrane sheets deforms through a hole in the spacer layer, and the top sheet makes contact with the bottom sheet, signaling a key press. In the new design, when a key is pressed, the top contact deforms through the spacer and makes more or less contact with the bottom layer depending on the pressure applied. This is achieved using piezoresistive material, a special material whose resistance changes with pressure. The pressure-sensitive keyboard is a matrix of variable resistors, each connected to a unique row and column, which lets the keyboard measure each resistor independently.
This design also fixes the problem of "ghosting," where pressing several keys at once causes a phantom press to be registered at another row-and-column intersection. The basic idea of a pressure-sensitive keyboard is to measure the intensity of the pressure applied to each key and extract extra information from it. Pressure sensitivity would allow users to express emotion through typing: pressing hard to make bigger letters or bold font, or pressing harder to run faster in a game. Some advantages of this design are that it can be manufactured at a modest price, it looks and feels like a regular keyboard, and pressure sensitivity can be enabled in software, letting users adopt the new capability at their own pace.
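The "express emotion through typing" idea can be sketched very simply: map each key's pressure reading to an emphasis level. The 0-255 reading range and the thresholds below are invented for illustration; they are not values from the paper.

```python
def emphasize(char, pressure):
    """Map a hypothetical 0-255 per-key pressure reading to emphasis."""
    if pressure > 200:          # very hard press -> bold capital
        return f"**{char.upper()}**"
    if pressure > 120:          # firm press -> capital
        return char.upper()
    return char                 # light press -> plain character

# Typing "hey" with increasing force on each key:
word = "".join(emphasize(c, p) for c, p in zip("hey", [50, 130, 220]))
```

The same threshold idea applies to the gaming example in the paper: a game could read the raw pressure value directly and scale the character's running speed with it.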

Discussion:
I think this is a very interesting idea and an important area of research, because its potential uses would make the keyboard more expressive and intuitive. I would especially like to see pressure sensitivity used in a game; it is only natural, when a monster is chasing you, to want to press the keys harder to run faster. Overall I thought this was an interesting paper.

Thursday, January 21, 2010

Abracadabra: Wireless, High-Precision, and Unpowered


Summary:
Abracadabra is a magnetically driven input technique that allows wireless, unpowered finger input for any mobile device with a very small screen. The advantage is that it is a powerful yet inexpensive way to interact with the device. The technology uses multi-axis magnetometers to determine the orientation of a finger-worn magnet, which overrides the Earth's magnetic field in a small area. The major disadvantage of the technique is that the user has to interact with an additional object. One advantage is that the sensor that picks up the magnet's location can be mounted behind the display, so no screen functionality is lost; a second is that, because it sits behind the screen, the sensor can be placed at the center of the device. The technique was tested with small handheld and wrist-sized devices and a magnet worn on the finger.
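The core computation can be sketched as follows: because the finger magnet dominates the local field, the finger's angular position around the display can be estimated from the magnetometer's x/y components after subtracting a baseline (the ambient field measured with the finger far away). This is a simplified illustration of the idea, with the baseline handling assumed by me.

```python
import math

def finger_bearing(mx, my, baseline=(0.0, 0.0)):
    """Estimate the finger's angle around the display, in degrees [0, 360).

    mx, my: raw magnetometer x/y readings; baseline: the same components
    measured with the magnet out of range (ambient/Earth field)."""
    dx = mx - baseline[0]
    dy = my - baseline[1]
    return math.degrees(math.atan2(dy, dx)) % 360

angle = finger_bearing(0.0, 5.0)   # magnet straight "up" relative to sensor
```

A radial-menu UI on a tiny watch screen could then quantize this angle into, say, eight selectable slices without the finger ever occluding the display.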
Discussion:
I believe this is another very important area of research, because today's technology keeps getting smaller and smaller, which makes interacting with these devices harder and harder. One problem with this technique is that it requires an extra object, something the user could lose, which would render the input method unusable. I believe the next step is to figure out a way for the device to track finger movements without the need for an external accessory (the finger ring).

Contact Area Interaction with Sliding Widgets

Summary:
The problem with today's touchscreens is that they are based on a mouse-and-cursor model that registers only single-pixel selections, which makes selecting a button difficult when multiple buttons are present. The solution is to use a selection region rather than a single-pixel selection, which resolves the ambiguity about which button is selected by increasing the width of allowable interaction with each button. The problem with wider controls, though, is that they limit how much can fit on a device with a small surface area. Sliding widgets are the proposed solution to both of these problems (together known as the "fat finger" problem). With sliding widgets, every button near the touch responds to the contact. Some advantages of area-based selection with sliding widgets: it allows easy targeting of small targets, removes ambiguity since everything under the finger responds, is resilient to parallax errors (hitting the wrong button because the perceived and actual touch points differ), is compatible with drag-based widgets, allows manipulating multiple controls at the same time, and can make use of the contact information (the size and shape of the touch on the screen). The biggest disadvantage is that it is possible to activate multiple buttons when the user only wants one, which requires a further disambiguation mechanism.

Sliding widgets can also assign multiple meanings to the same screen area: touching a spot and dragging left can mean something different from touching the same spot and dragging right. This lets designers associate a direction with a meaning (flick left for forward and flick right for backward). Sliding widgets are a promising solution for improved accuracy on small touchscreen targets, but they require more thought and design work to implement correctly.
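The disambiguation idea above can be sketched as: collect every widget overlapping the finger's contact area, then let the drag direction decide which action fires. The widget selection rule and action names here are my own simplifications, not the paper's implementation; I follow the post's example of flick left for forward and flick right for backward.

```python
def resolve(touched_widgets, drag_dx):
    """touched_widgets: widgets overlapping the finger's contact area
    (assumed pre-sorted, e.g. nearest to the contact centroid first);
    drag_dx: horizontal drag distance in pixels (+right / -left)."""
    if not touched_widgets:
        return None
    widget = touched_widgets[0]
    # Direction carries the meaning: flick left -> forward, right -> back.
    action = "forward" if drag_dx < 0 else "back"
    return (widget, action)
```

This shows why sliding widgets need more design work than plain buttons: the mapping from contact area plus gesture direction to a single action has to be specified for every widget on the screen.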

Discussion:
This paper was interesting because it touches on a subject that is very important to the advancement of technology. With all of the new iPods and iPhones coming out that use touchscreens, it is imperative that advancements in accuracy and ease of use be made in this area. The problem with sliding widgets is that they are far more complex than regular buttons, because all the buttons respond to the touch. I believe the next step in this field will be a way to make sliding widgets easier to implement.