The basic idea is that such tools analyse the user interface implementation and build the corresponding underlying model; an example is described in Bellucci et al. The dialogue expressions are connected using CTT operators in order to define their temporal relationships. Further requirements are the ability to support user interfaces including complex and Ajax scripts, which can continually update fields by invoking external functions (possibly implemented as Web services) without an explicit user request, and dynamic sets of user interface elements, which can be obtained through conditional connections between presentations or the possibility of changing only a part of the UI.
It is notable that HTML 5 is evolving in the same direction by introducing a number of more semantic tags, such as nav, article, etc. However, HTML 5 is mainly limited to graphical, form-based user interfaces, and is thus unable to address the increasing availability of various interaction modalities. Figure 12 shows how the tool supports editing a logical description: on the left is the interactive tree representing the structure of the application; in the central area is the graphical representation of the selected presentation, with the possibility to drag-and-drop the relevant elements, which are dynamically shown on the right.
In automatic adaptation we can identify three main phases: device identification, interaction resources identification, and adaptation. Device identification can be performed either server-side or client-side. In the client-side case, some identification of the main features of the current device can be performed through the markup (for example, the srcset attribute indicates which version of an image to use depending on the main features of the device), through the stylesheets associated with different devices by means of media queries, or through scripts.
Interaction resources identification is applied when it is necessary to have more detailed information on the currently available interaction resources.
One format for DDRs is given by UAProf (User Agent Profile), which describes the capabilities of a mobile handset, including screen size and multimedia capabilities; its production for a device is voluntary. The information in a DDR is derived from different sources: UAProf, when available; public documentation; developer reports; actual testing. It has a hierarchical, extensible structure. In general, device properties can be classified as either static, which cannot change during application execution (such as operating system, RAM size, available storage, display size, input devices, markup support, CSS support, image format support, and script support), or dynamic, which can change at run-time (such as battery level or network connectivity).
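As an illustration of how an application might consult such a repository of static properties, here is a minimal sketch in Python; the device records, User-Agent tokens, and property names are hypothetical, and a real DDR exposes a far richer vocabulary and proper User-Agent parsing.

```python
# A minimal sketch of a Device Description Repository (DDR) lookup.
# The records and tokens below are hypothetical illustrations.
DDR = {
    "ExamplePhone/1.0": {          # hypothetical User-Agent token
        "os": "Android",
        "display": (412, 915),     # width x height, in CSS pixels
        "input": ["touch"],
        "markup": ["HTML5"],
        "script": True,
    },
    "ExampleTV/2.0": {
        "os": "WebOS",
        "display": (1920, 1080),
        "input": ["remote"],
        "markup": ["HTML5"],
        "script": True,
    },
}

def identify_device(user_agent: str) -> dict:
    """Return the static properties of the first matching device record."""
    for token, properties in DDR.items():
        if token in user_agent:
            return properties
    return {}  # unknown device: an application would fall back to a default profile

props = identify_device("Mozilla/5.0 (Linux) ExamplePhone/1.0")
```

In practice the lookup would be performed server-side on the incoming request, and the returned properties would drive the choice of adapted content.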
Media queries are able to detect a limited set of media features: width, height, device-width, device-height, orientation, aspect-ratio, device-aspect-ratio, color, color-index, monochrome, resolution. The third phase is adaptation.
There can be various approaches to automatic re-authoring. The problem of automatic adaptation from a desktop to a mobile version, changing the user interface structure, can be addressed by first calculating the cost, in terms of screen space, of the various user interface elements. Next, calculating the space required by the user interface on the target device should also consider how much tolerance in scrolling should be allowed, how much additional space should be available for tables, and similar aspects.
If the result is higher than the cost sustainable by the target device, then adaptation of the individual user interface elements should be considered. If the resulting overall cost is still excessive for the target device screen, then splitting the user interface into multiple presentations should be considered. In order to decide how splitting into multiple presentations should be performed, the user interface can be considered as a set of groups of elements, which cannot be split internally.
Thus, the decision is how to distribute such groups in order to obtain presentations sustainable by the target device. Splitting can be implemented either by creating separate mobile presentations or by showing the relevant elements dynamically. This adaptation process can be customized according to certain parameters and rules, such as how much scrolling should be allowed on the target device or what policy to follow in distributing the groups of elements.
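The splitting step described above can be sketched as follows. This is a minimal greedy first-fit distribution in Python, with hypothetical element groups and screen-space costs; a real adaptation engine would apply configurable distribution policies rather than this fixed one.

```python
def split_into_presentations(groups, sustainable_cost):
    """Distribute unsplittable groups of UI elements into presentations,
    each within the screen-space budget of the target device.
    `groups` is a list of (name, cost) pairs; the costs are hypothetical
    screen-space estimates for each group."""
    presentations, current, used = [], [], 0
    for name, cost in groups:
        if current and used + cost > sustainable_cost:
            presentations.append(current)   # budget exceeded: start a new presentation
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        presentations.append(current)
    return presentations

# Hypothetical desktop page decomposed into unsplittable groups:
groups = [("header", 120), ("search form", 300), ("results table", 500),
          ("details", 400), ("footer", 80)]
mobile_presentations = split_into_presentations(groups, 600)
```

With a budget of 600, the greedy policy yields three mobile presentations; a different policy (e.g. balancing their sizes) could be substituted without changing the overall structure of the process.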
In this adaptation process, tables are sometimes critical elements because, when shown on a small-screen device, they are too large. Another interesting adaptation technique is page summarization, whose purpose is the automatic reduction of content in order to make it more suitable for small screens. There are two types of approach to this issue. The abstraction-based approach uses sentence manipulation techniques such as reduction, compression, and reformulation.
The extraction-based approach assigns scores to sentences in order to select those which best represent the whole text; it can be, for example, feature-based. An example of summarization is that supported by PowerBrowser (Buyukkokten et al.). The basic idea was that the importance of a keyword depends on the frequency with which it occurs in a text and in a larger collection.
A word within a given text is considered most important if it occurs frequently within the text, but infrequently in the larger collection. The significance factor of a sentence is derived from an analysis of its constituent words.
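A minimal sketch of this kind of word-significance scoring (in the spirit of TF-IDF) is shown below; the words and corpus statistics are hypothetical, and PowerBrowser's actual formula may differ.

```python
import math
from collections import Counter

def keyword_significance(text_words, collection_freq, collection_size):
    """Score words so that those frequent in the text but infrequent in
    the larger collection score highest. `collection_freq` maps a word
    to the number of documents containing it (hypothetical statistics)."""
    tf = Counter(text_words)
    return {
        w: tf[w] * math.log(collection_size / (1 + collection_freq.get(w, 0)))
        for w in tf
    }

def sentence_significance(sentence_words, word_scores):
    """Derive a sentence's significance from its constituent words."""
    return sum(word_scores.get(w, 0.0) for w in sentence_words)

word_scores = keyword_significance(
    ["the", "vocal", "interface", "the", "vocal", "menu"],
    {"the": 990, "menu": 120, "vocal": 9, "interface": 40},
    collection_size=1000)
# 'vocal' scores highest: frequent in this text, rare in the collection
```

Sentences would then be ranked by `sentence_significance` and the top-ranked ones kept for the small-screen summary.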
The sentences in which the greatest number of frequently occurring distinct words are found in closest proximity are probably important. Crowd-sourcing techniques are based on the idea of allocating some tasks to perform through an open call. These techniques are acquiring increasing importance and can be applied to adaptation as well.
For example, Nebeling and Norrie have applied them to adaptation of Web pages. The goal is to support developers in specifying Web interfaces that can adapt to the range and increased diversity of devices. For this purpose they have introduced a tool that augments Web pages to allow users to customize the layout of Web pages for specific devices. Devices are classified in terms of window size, screen resolution, and orientation.
It is then possible to share adaptations so that others with the same device and with similar preferences can directly benefit. The same group (Nebeling et al.) has developed a tool, W3Touch, whose purpose is to support adaptation for touch according to metrics. The tool produces analytics of the user interaction in order to help designers detect and locate potential design problems for mobile touch devices. For this purpose two metrics are considered: missed links ratio, which keeps track of how often touches miss an intended target; and zoom level, which considers how much users on average need to zoom into different components of the Web interface.
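These two metrics can be sketched as follows over a hypothetical touch-interaction log; W3Touch's actual instrumentation and aggregation are more elaborate.

```python
# Each hypothetical log entry records whether a touch hit its intended
# target, which interface component was touched, and the active zoom.
def missed_links_ratio(touches):
    """Fraction of touches that missed their intended target."""
    missed = sum(1 for t in touches if not t["hit"])
    return missed / len(touches)

def average_zoom_level(touches, component):
    """Average zoom users needed on a given component of the interface."""
    zooms = [t["zoom"] for t in touches if t["component"] == component]
    return sum(zooms) / len(zooms)

log = [
    {"component": "menu",    "hit": True,  "zoom": 1.0},
    {"component": "menu",    "hit": False, "zoom": 2.5},
    {"component": "article", "hit": True,  "zoom": 1.5},
    {"component": "menu",    "hit": False, "zoom": 3.0},
]
overall_missed = missed_links_ratio(log)
menu_zoom = average_zoom_level(log, "menu")
```

A high missed-links ratio or average zoom level on a component would flag it as a candidate for touch adaptation (e.g. larger targets).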
Another important aspect to consider is how to evaluate adaptation; this is addressed, for example, in Manca et al. Vocal interfaces can play an important role in various contexts: with vision-impaired users; when users are on the move; and, more generally, when the visual channel is busy. Examples of possible applications are booking services, airline information, weather information, telephone directories, and news.
However, vocal interactive applications have specific features that make them different from graphical user interfaces: they are linear and non-persistent, while graphical interfaces support concurrent interactions and are persistent. The advantage of vocal interfaces is that they can be fast and natural for some operations. Recently there has been increasing interest in vocal interfaces, since vocal technology is improving: it is becoming more robust and immediate, without the need for long training, and various applications of it have thus been proposed in the mass market.
This has been made possible by allowing vocal input to be recorded locally as audio and then sent to the server for speech recognition. Vocal menu-based navigation must be carefully designed: there is a need for continuous feedback in order to check the application state; prompts and option lists should be short, to reduce memory effort; and management of specific events (no-input, no-match, help) should be supported.
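A minimal sketch of one turn of such a vocal menu, covering the three events just mentioned, might look as follows; the prompts and options are hypothetical.

```python
# One dialogue turn of a vocal menu: returns (spoken feedback, selection).
# `utterance` is None when the recognizer reported no input at all.
def vocal_menu_step(utterance, options, prompt="Say one of: "):
    option_list = ", ".join(options)
    if utterance is None:                      # no-input event
        return ("I did not hear you. " + prompt + option_list, None)
    if utterance == "help":                    # help event
        return ("You can choose an option or say 'back'. "
                + prompt + option_list, None)
    if utterance not in options:               # no-match event
        return ("Sorry, I did not understand. " + prompt + option_list, None)
    return ("You selected " + utterance + ".", utterance)  # feedback on success

feedback, selection = vocal_menu_step("news", ["news", "weather", "sports"])
```

Note how every event path re-reads a short option list, providing the continuous feedback and memory support discussed above.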
Although the logical structure of a graphical page is a tree, its depth and width are too large for vocal browsing.
Figure 13 shows an example of a graphical user interface and represents its logical structure by using polygons with solid borders to indicate the main areas, and dashed borders to indicate sub-areas inside them. Figure 14 shows on the left a corresponding vocal menu, automatically derived according to an algorithm (Paternò and Sisti) in which the texts of the vocal menu items are derived either from element ids or from the section contents.
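A simplified sketch of this derivation, assuming a hypothetical tree representation of the page structure, could look like this; the actual algorithm in Paternò and Sisti is more sophisticated.

```python
# Derive one level of a vocal menu from the logical structure of a page:
# an item's name is taken from the element's id when present, otherwise
# from the beginning of its content. The tree below is hypothetical.
def vocal_menu(section):
    items = []
    for child in section.get("children", []):
        label = child.get("id") or " ".join(child.get("content", "").split()[:2])
        items.append(label)
    return "Select one of: " + ", ".join(items)

page = {
    "id": "home",
    "children": [
        {"id": "news", "children": []},
        {"id": None, "content": "Contact information and opening hours"},
        {"id": "search", "children": []},
    ],
}
menu_prompt = vocal_menu(page)
```

Selecting an item would then recurse into the corresponding sub-tree, yielding the hierarchical vocal navigation shown in Figure 14.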
On the right of Figure 14 there is an example of vocal dialogue that can be obtained from such a vocal interface. Multimodality concerns the identification of the most effective combination of various interaction modalities; an approach is described in Manca et al. In the case of interaction elements, it is possible to decompose them further into three parts: prompt, input, and feedback, which can be associated with different CARE properties.
In this approach, equivalence can be applied only to input elements, since only for input can the user choose through which modality to enter a value, while redundancy can be applied to prompt and feedback but not to input, since once an input has been entered through one modality it does not make sense to enter it again through another. Figure 15 shows a general architecture for supporting adaptive multimodal user interfaces. There is a context manager able to detect events related to the user, the technology, the environment, and social aspects.
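These constraints can be sketched as a simple validity check; the representation below is hypothetical and only captures which CARE properties (Complementarity, Assignment, Redundancy, Equivalence) each part of an interaction element may take.

```python
# Which CARE properties each part of an interaction element may take,
# following the constraints described above: equivalence only for input,
# redundancy only for prompt and feedback. Hypothetical representation.
ALLOWED = {
    "prompt":   {"assignment", "redundancy", "complementarity"},
    "input":    {"assignment", "equivalence", "complementarity"},
    "feedback": {"assignment", "redundancy", "complementarity"},
}

def valid_care_assignment(element):
    """Check that each part of the element uses an allowed CARE property."""
    return all(prop in ALLOWED[part] for part, prop in element.items())

element = {"prompt": "redundancy",    # prompt given both graphically and vocally
           "input": "equivalence",    # user chooses the input modality
           "feedback": "assignment"}  # feedback assigned to one modality
ok = valid_care_assignment(element)
bad = valid_care_assignment({"input": "redundancy"})  # entering input twice makes no sense
```

An authoring tool could run such a check while the designer associates CARE properties with the parts of each interaction element.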
Then, the adaptation engine receives the descriptions of the user interface and the possible adaptation rules. The descriptions of the user interfaces can be obtained through authoring environments at design time or generated automatically through reverse engineering tools at run-time. When events associated with any adaptation rule occur, the corresponding action part should be executed; for this purpose three options are possible. It is now possible to obtain multimodal applications in the Web as well, although the mark-up language originally proposed for this purpose is no longer supported by current browsers.
This implementation is still not possible in the mobile version of Chrome. In the first empirical tests associated with this solution for context-dependent multimodal adaptation, the results are encouraging. User feedback pointed out that users like to have control over modality distribution, to support personal preferences. It also turned out that the choice of modalities should take into account the tasks to support, beyond the current context of use: for example, long query results are inherently preferable to present graphically, since the vocal modality is not persistent and, by the time the last results are presented vocally, the user may have forgotten the initial ones.
Another aspect is that mixing modalities at the granularity of parts of single UI elements is not always considered appropriate; for example, for a single text field which has to be selected graphically, it is not perceived as meaningful to then enter the value vocally.
Distributed UIs and migratory UIs are two independent concepts: there may exist distributed UIs which are also able to migrate, but there are also distributed user interfaces which do not migrate at all, and migratory UIs that are not distributed across multiple devices. Multi-device support is emerging in various environments.
OS X Lion (footnote 3) provides a 'Resume' feature, which lets users pick up their applications, along with their user interfaces, where they left off. Chrome-to-Phone (footnote 4) enables users to send links from their Chrome desktop browser to an app on their Android device.
Firefox (footnote 6) synchronizes bookmarks, tabs, and web history between desktop and mobile Firefox clients. At the research level, an example is Myngle (Sohn et al.). When considering specifically distributed user interfaces, it is important to note that there are three types of information that it is important to specify (Frosini et al.). An example of distribution obtained through dynamic customization tools is presented in Manca and Paternò: when the application is generated, it is still possible for the end user to customise its distribution across various devices through an interactive tool, in order to address needs not foreseen at design time.
Figure 16 shows an example: at the beginning the user interface is completely assigned to a mobile device; then, through the interactive customization tool, some elements are assigned to the large screen and others are made redundant across the two devices.
In Designing for Interaction, usability academic Gillian Crampton Smith offered a concept of four dimensions, or languages, of interaction design.
Designers use them to analyze current interactions and ask questions within each dimension. Words: this is the language we use to describe interactions and the meaning behind every button, label, or signifier; words should be clear and familiar to end users, and used consistently and appropriately to the setting. Visual representations: typography, imagery, icons, and a color palette, which users perceive involuntarily.
Duolingo uses simple pre-established paths and reassuring messages. While the first three dimensions make up the presentation layer of an interaction, time and behavior define the interaction itself. In virtual interfaces, the technique is applied when the system asks you to retype a new password. Don Norman has also described the seven stages of action that each person goes through in everyday life. These stages occur on three levels: goals, execution, and evaluation.
Stage 1. Forming the goal: what do I want to do? Book a hotel room.
Stage 2. Forming the intention: what should I do to meet this goal? Find a hotel room I like on a booking website.
Stage 3. Specifying the action sequence: how exactly do I achieve this intention? Open up a browser. Log into Booking. Specify my parameters (location, dates, number of guests, other filters). Scroll through the search results. Open the results I like in a new tab to save them for later.
Compare the chosen results and find the best option.