Predicting Trends in Tech Design

Gene Lee
Media Design Practice, Art Center College of Design,
950 S Raymond Ave, Pasadena, CA 91105.



Abstract

We are embodied and embedded creatures, and this shapes the way we interact with the world and with computational artifacts. Not long ago, a small keypad was the only input modality for controlling a phone; today, users can control and interact with mobile devices through voice, touch, gesture, and eye movement. As this multimodal trend expands, it allows us to communicate with computers more naturally, more efficiently, and more engagingly. Providing a robust channel of communication requires less direct intervention between the device and the user. Recently, wearable devices such as Google Glass and the Samsung Galaxy Gear have been making their way into everyday consumer life. In this paper I examine the brain-computer interface as a new way of communicating between human and computer, revisiting the history of interfaces from a designer's point of view.

Introduction

The Brain-Computer Interface (BCI) has high potential to become a new channel of communication between humans and computers. It is essentially the most direct means of access to a person's intentions: with a BCI, a person needs none of the common output pathways of peripheral nerves and muscles. This is the main argument for a BCI system, which provides a completely new output pathway. This year, for example, the Obama administration officially launched an ambitious multi-year project to probe the human brain in action. The main purpose of the project¹ is to understand the activity of nerve cells in the brain related to autism and Alzheimer's disease and to establish a foundation for cures, but its investigation of the mechanisms of human cognition is also expected to contribute significantly to brain-computer interface development.

Aging statistics from the Department of Health and Human Services² show that older people are expected to make up 19% of the population by 2030: about 72.1 million older persons, more than twice their number in 2000. These demographic changes are producing a growing population of older people, many of whom have significant wealth and disposable income. Other, less fortunate older people require access to the many government services that are planned to be offered digitally. Legislation also provides guarantees for older people with disabilities, approximately 50% of those over 65.

A paper by Robert Prueckl and Christoph Guger³ introduces a brain-computer interface for controlling a robot. In their experiments, subjects successfully steered a tiny robot forward, backward, left, and right, and stopped its movement, using brainwave frequencies alone. These frequencies are measured with the well-known electroencephalogram (EEG), which in the past was used primarily for clinical purposes; the signals are amplified and fed into a personal computer which, under certain circumstances and with appropriate algorithms, can process them to give the person a new kind of communication channel. A BCI system is perhaps the only way my grandmother, after a brainstem stroke that impaired the common output pathways responsible for muscle control, can express herself.
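The processing chain in Prueckl and Guger's steady-state visual evoked potential (SSVEP) setup can be approximated in a few lines: take a window of EEG samples recorded over the visual cortex, estimate its power spectrum, and pick whichever stimulation frequency dominates. The sketch below is my own simplification of that idea, not the authors' code; the sampling rate, window handling, and frequency-to-command mapping are illustrative assumptions.

```python
import numpy as np

# Flicker frequencies (Hz) mapped to robot commands, following the
# 10-13 Hz LED stimulation paradigm described by Prueckl and Guger.
COMMANDS = {10.0: "forward", 11.0: "backward", 12.0: "left", 13.0: "right"}
FS = 256  # assumed EEG sampling rate, Hz

def classify_window(eeg_window):
    """Pick the command whose flicker frequency carries the most power.

    eeg_window: 1-D numpy array of EEG samples from an occipital channel.
    Returns None (no command) when no frequency clearly dominates,
    i.e. the subject is not looking at any of the stimulation LEDs.
    """
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    # Power at the FFT bin nearest each stimulation frequency.
    powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in COMMANDS}
    best = max(powers, key=powers.get)
    # Crude null classification: require the winner to stand out.
    if powers[best] < 2.0 * np.median(list(powers.values())):
        return None
    return COMMANDS[best]
```

The real system uses eight channels, minimum-energy feature extraction, and linear discriminant analysis with a change-rate check for robustness; the single-channel peak-picking above only conveys the shape of the idea.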

While the complete brain map is expected to take a decade, improved brain-computer interface products are likely to emerge within two years. Some brain-computer interfaces that monitor brainwaves through EEG have already made their way to market. NeuroSky's headset, for instance, uses EEG readings together with electromyography to pick up signals about a person's level of concentration and uses them to control toys and games. Emotiv Systems sells a headset that reads EEG and facial expressions to enhance the gaming experience, and has signed up IBM to exploit brain-computer interfaces. Paul Ledak,⁴ vice president of digital convergence at IBM, said that BCI technology represents a potential breakthrough in human-machine interfaces, changing the realm of possibilities not only for games but in the way humans and computers interact. As interactions in virtual environments become more complex, mouse and keyboard alone may soon be inadequate; BCI is an important component of the 3D internet and the future of virtual communication. This paper introduces a brief history of the interface, together with the current practices of leading innovation labs that stimulate the human senses to provide a rich HCI experience as a new way of communication between humans and computers.

Sensation and Perception in HCI Design

In HCI design, it is important to understand human information processing as it relates to cognitive architecture, memory, perception, and motor skills. The French philosopher Étienne Bonnot de Condillac (1715–80) imagined a statue that had in working order what we would call the "mental hardware" and "software" of a normal human being, but no senses. He believed that such a being could have no mental life: no ideas are possible in the absence of sensation. In his Treatise on the Sensations (1754),⁵ he imagines that the statue has only the sense of smell and can perceive nothing but odors; ideas of extension, form, sound, or color therefore cannot be acquired. Suppose a rose is placed before the statue. From our perspective, the statue smells a rose; in relation to itself, however, it is nothing but the odor of this flower. The statue cannot possess the slightest notion of an object; it is nothing but the scent of the rose, and that sensation is the whole content of its mind. Now we take the rose away. The statue retains a trace, an echo, of the smell it perceived: this trace is memory. If we then present a violet, a jasmine, and an asafetida for comparison, other impressions and other sensations arise against that trace. Each flower leaves its unique trace, which the statue compares with its memory images. From the sensations the flowers produce, and from their comparison, arise the passions, desires, and volition.

The statue cannot compare two ideas without perceiving some difference or resemblance between them. Some odors it experiences will retain in memory the ideas of pleasure and pain: pleasure is the quality associated with the sensations of the rose, the violet, and the jasmine; pain with the odor of asafetida, or with a decaying memory. From these arise the abstract notions of pleasure and pain. Abstraction itself is thus a modification of sensation, and the highest function of understanding what is out there in the universe. It depends on our ability to find a correspondence between current input from our senses and earlier input that we have organized and stored in memory.

Responding through stored memory, we still think in terms of a superficial game of static things, not realizing that there are no solids, no surfaces, no straight lines, but only waves, and we are forever frustrated by the wave-system realities of Universe. As Fuller wrote in his introduction to Expanded Cinema (1970),⁶ "We are in a womb of complex frequencies. Some of those frequencies man identifies ignorantly with such words as sight, sound, touch, and smell." In this view, everything we perceive and experience is waves carrying information: some frequencies we hear as sound, far faster ones we see as light, slower ones we feel as solid matter, and somewhere in between are the scents we smell. Waves of various frequencies create the patterns our senses pick up. Humans, in short, perceive the world through their senses and act on it through the motor control of their effectors.

Through waves of various frequencies creating patterns, and through improving technology, IBM's R&D labs announced the Five in Five (2012),⁷ a list of innovations based on cognitive systems that have the potential to change the way people work, live, and interact during the next five years. One of the five predicted innovations concerns sight. Today's computers do not have the ability to understand the contents or context of images; instead they must rely on tags, titles, and other information provided by humans. In five years, humans will no longer need to identify images for computers. Instead, we will teach them by example, and they will learn to recognize new images for themselves by colors, shapes, patterns, densities, textures, contextual relationships, and so on.
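The teach-by-example idea can be made concrete with the simplest possible learner: describe each image by a color histogram and label a new image by its nearest stored example. This is a toy sketch of the principle only, nowhere near IBM's cognitive systems; the class and function names are mine.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Reduce an RGB image (H x W x 3 uint8 array) to a normalized
    color histogram, a crude stand-in for 'colors and textures'."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

class ExampleTaughtClassifier:
    """Learns categories from labeled example images (1-nearest-neighbor)."""

    def __init__(self):
        self.features, self.labels = [], []

    def teach(self, image, label):
        # "Teaching by example": store the example, no hand-made tags.
        self.features.append(color_histogram(image))
        self.labels.append(label)

    def recognize(self, image):
        # Label a new image by its closest remembered example.
        query = color_histogram(image)
        distances = [np.linalg.norm(query - f) for f in self.features]
        return self.labels[int(np.argmin(distances))]
```

Real systems learn shapes, textures, and contextual relationships as well, but the workflow is the same: show examples, then let the machine generalize.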

The Five in Five scenario requires your computer to share your senses. Processing sight requires eyes and, most importantly, a brain. For decades, neuroscientists probing with electrodes have learned a great deal about the human brain. In our interactions with our environment, we constantly refer to past experiences stored as memories to guide everyday behavioral decisions. But how memories are formed, stored, and then retrieved to assist decision-making remains a mystery.

Typing, clicking, and touching through time

The invention of the remote control for the ordinary television was a great evolution of the interface. Early PC owners, by contrast, usually had to write their own programs to do anything useful with their machines; the lack of an operating system limited the market, since the general public had not studied machine language. While typewriters were widely used from the 1950s through the 1970s, computers began to emerge as consumer-friendly products with the keyboard⁸ as the primary input device. In the 1980s, with the introduction of the first commercial mouse, the interface changed significantly from what had been the domain of a few specialists for more than thirty years. When the mouse⁹ was first demonstrated in 1968, it set the core of the graphical user interface on its successful path. From an expert's perspective, the command-based interface is still a powerful and flexible tool, but the icon culture the mouse represents, an intuitive graphical interface, lowered the barriers for public users. Thirty years later, small joystick mounts have been tried in consumer products and small changes made to the shape and function of the mouse, but no significant change has emerged since.

As desktops met homeowners, ubiquitous computing introduced laptops with a new interface: the touchpad. Most laptops support a full set of gestures, giving users a new way to control and interact with what is on screen. The touchpad did not completely replace the convenience of a mouse, but its multi-touch¹⁰ gestures proved natural to adopt; it is now common for laptop manufacturers to include multi-touch trackpads, and tablet computers respond to touch input rather than a traditional mouse. Mainstream exposure to multi-touch technology came in 2007, when smartphones gained popularity. Apple did not invent the touchscreen, but it innovated on it, making the technology more useful and commercially available to a widespread audience. Touchscreen technology has in fact been around for nearly half a century: it is used in ATMs, GPS systems, cash registers, medical monitors, game consoles, computers, and phones, and continues to appear in newer technologies. One of the earliest commercialized touchscreen computers, the HP-150,¹¹ was made in 1983; its touch feature consisted of a series of vertical and horizontal infrared light beams that crossed just in front of the screen.

The boom in the touchscreen market spread past smartphones and onto other devices, such as gaming consoles and tablets. Today, nearly anything can be turned into an interactive surface. Optical touch technology works as follows: when a finger or an object touches the surface, it scatters light; the reflection is caught by sensors or cameras, which send the data to software that dictates a response to the touch.
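That scatter-and-sense loop reduces, in software, to thresholding a sensor frame and reporting each bright region as a touch point. A minimal sketch, assuming the frame arrives as a grayscale numpy array; the threshold value is an illustrative guess.

```python
import numpy as np
from scipy import ndimage

def detect_touches(frame, threshold=200):
    """Locate touch points in an optical-touch sensor frame.

    frame: 2-D numpy array of brightness values (0-255); light
    scattered by a fingertip shows up as a bright blob.
    Returns a list of (row, col) blob centroids, one per touch.
    """
    bright = frame > threshold                # pixels lit by scattered light
    labeled, n_blobs = ndimage.label(bright)  # group pixels into blobs
    return ndimage.center_of_mass(frame, labeled, list(range(1, n_blobs + 1)))
```

Multi-touch falls out for free: each separate blob is a separate finger.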

Troubled speech and gesture recognition systems

According to a report from the research firm NPD In-Stat,¹² only about 12 million U.S. households have their Web-capable TVs connected to the Internet, although In-Stat estimates that about 25 million U.S. TV households own a set with built-in network capability. In-Stat also predicts that by 2016, 100 million homes in North America and Western Europe will own television sets that blend traditional programs with Internet content. These statistics show the growing number of smart TVs in our society, and smart TVs need hybrid interfaces. Speech recognition and gesture recognition systems were applied to the smart TV¹³ before wearable devices. The interface is one of the key elements of a smart TV because of the viewing distance: the environment demands ease of operation rather than a continuation of the push-button remote control. The traditional TV interface, especially the push-button remote, has the highest accuracy and efficiency in use, and consumers have been accustomed to it for decades, so switching users to another interface is not an easy process. There was an attempt to popularize early smart TV remote controls equipped with QWERTY keyboards, but the response was poor. This led to remote controls that operate a wireless mouse pointer, and a few companies released "magic remote" controls. These have somewhat improved the TV environment, but speech and gesture recognition can be applied to a far wider variety of operations, so the effort will continue.

The first speech recognition systems¹⁴ could understand only digits. Given the complexity of human language, it makes sense that inventors and engineers focused first on numbers. Bell Laboratories designed the "Audrey" system in 1952, which recognized digits spoken by a single voice. Ten years later, IBM demonstrated its "Shoebox" machine at the 1962 World's Fair, which could understand sixteen words spoken in English. Speech recognition technology made major strides in the 1970s, and Apple, Microsoft, Google, and other major IT companies have studied the field for years in order to commercialize it. For consumers, however, it is still slow and awkward, and errors occur frequently in use. In particular, interference and ambient noise in a room mean that only loud commands reach the machine, and this unnatural usability has been pointed out as a major obstacle. A personal space such as the inside of a car is, in my opinion, the best setting for this technology, for instance when asking for driving directions, so there it has a relatively brighter outlook.

A facial recognition system¹⁵ on a smartphone can detect the movement of the device itself as a person makes a phone call. The next step is for the device to determine, as Microsoft's Kinect gaming applications do, whether the user's hand or body movements are performing intended commands. Basic device motion can be detected with sensors such as a magnetometer, an accelerometer, and a gyroscope, which are relatively simple to implement; a more sophisticated interface detects the user's movements with a camera or infrared sensing and processes a wider variety of commands from them. The intuitive gestures and voice commands a system can understand are, however, very limited in number. If generalizing the algorithms and sophisticating the hardware means that users must learn and remember a long vocabulary of actions, the interface will not be accepted.
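The sensor-only stage is genuinely simple. As an illustration, a raise-to-ear detector needs little more than two thresholds on the motion sensors; the values below are illustrative guesses, not figures from any shipping phone.

```python
def is_raised_to_ear(accel_z, gyro_pitch_rate):
    """Guess whether the phone was just lifted to the user's ear.

    accel_z: gravity component along the screen normal, in g
    gyro_pitch_rate: rotation speed around the horizontal axis, deg/s
    A lift shows up as a brisk pitch rotation that ends with the
    screen roughly vertical (little gravity along the screen normal).
    """
    rotated_quickly = abs(gyro_pitch_rate) > 120.0  # brisk wrist motion
    screen_vertical = abs(accel_z) < 0.3            # screen facing sideways
    return rotated_quickly and screen_vertical
```

The camera-based stage is where the hard problems live, which is exactly why the command vocabulary stays small.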

Now the trend is the multimodal interface

Across the many interfaces of computing, communications, and entertainment devices, there is a vigorous trend of applying multiple interfaces to a single device. Your laptop offers a touchscreen, a keyboard, and a trackpad, at your choice. Your smart TV at home can recognize your voice and motion as a remote control. Your smartphone sits at the top of them all. By stimulating the tactile sense, a haptic touchscreen also integrates into the user interface to provide a richer, purposeful experience: the simple vibrating touch button is already widely applied in products, and haptic screen surfaces are expected to develop further in order to express the texture of things. Among the next generation of multimodal inputs, emotion recognition is interesting: an eye tracker follows where people's eyes go, in what order they read information, and how long their gaze dwells.
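Dwell time, the basic gaze measurement just mentioned, is simple to compute once an eye tracker streams fixation samples. A sketch, assuming timestamped (x, y) gaze samples and rectangular regions of interest; all names here are hypothetical.

```python
def dwell_times(gaze_samples, regions):
    """Accumulate how long the gaze rested in each region of interest.

    gaze_samples: list of (timestamp_s, x, y) tuples from an eye tracker
    regions: dict mapping name -> (x_min, y_min, x_max, y_max)
    Returns a dict mapping name -> total seconds of gaze in that region.
    """
    totals = {name: 0.0 for name in regions}
    # Each sample "lasts" until the next sample arrives.
    for (t0, x, y), (t1, _, _) in zip(gaze_samples, gaze_samples[1:]):
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += t1 - t0
    return totals
```

Reading order falls out of the same data by recording when each region first accumulates time.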

In January 2014, Intel introduced "perceptual computing,"¹⁶ which aims to provide intuitive uses of personal computers. Intel senior vice president Mooly Eden claims that touch is neither intuitive nor natural; as proof, he points to how humans interact in the real world, where we do not touch everything all the time. Smartphones today are touch-based interfaces, and they are a good example of why the trend is now toward multimodal interfaces. This matters because the growing number of mobile devices with no lack of computing power is lowering public demand for personal computers; Intel, accordingly, is applying multiple modalities to its desktop systems to improve the user's environment.

Imagine, for example, looking around an image by turning your head, or looking up and down, rather than using a mouse or flicking the computer screen. A key to popularizing gesture recognition is building 3D cameras into computers, which allow tricks like separating the person looking at the camera from the background around them. Eden gave the example of a Skype video call in which the person he spoke with kept changing backgrounds, showing a beach or ski slopes rather than the office.
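The separation trick itself is straightforward once the camera delivers per-pixel depth: keep the pixels nearer than a cutoff and composite everything else over the new backdrop. A hedged sketch, assuming aligned color and depth frames of equal size; the cutoff is an illustrative value.

```python
import numpy as np

def replace_background(color, depth, new_background, cutoff_mm=1200):
    """Composite the near person over a new backdrop using a depth map.

    color, new_background: H x W x 3 uint8 frames
    depth: H x W array of distances in millimeters from a 3D camera
    Pixels nearer than cutoff_mm are treated as the person.
    """
    person = depth < cutoff_mm       # foreground mask from depth alone
    out = new_background.copy()
    out[person] = color[person]      # keep the person's own pixels
    return out
```

This is why the 3D camera matters: with color alone, the same segmentation is a hard computer-vision problem.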

This approach is similar to Microsoft's Kinect, but with perceptual computing Intel wants to demonstrate a notion of the computing device rather than sell a product or platform. To push that vision ahead, Intel is packaging an infrared sensor, a depth HD webcam, and a microphone into a perceptual computing development kit, so that devices can be operated by keyboard, mouse, touchscreen, or voice recognition as the user needs, while simultaneously sensing the device's environment.

Following the trend toward a hybrid technological world, Intel's position is shifting toward the value of providing users a unique experience: more intuitive actions will make you feel the need for computers. Industry experts, for their part, see a future operating environment that includes a plurality of interface devices used simultaneously and in harmony, a direction their research evaluates positively.

Simple and easy interfaces for the future

In 2013, thanks to Google Glass, wearable devices got their respective spotlights. A device worn on the body has to be small and light enough to carry around; under these constraints, physically embedding equipment on the body while keeping a natural and comfortable look is not easy. Consequently, speech recognition, together with a touch panel located on the side of Google Glass, covers the whole role of the device. The projection screen in front of the surface of the glass, for example, displays information from the smartphone immediately, without our taking the smartphone out of a bag or pocket and pressing buttons several times to get the information we need. With this technology, we can acquire information almost automatically, at our convenience.

At the MIT Media Lab, Professor Pattie Maes¹⁷ investigates ways to augment the everyday objects and spaces around us, making them responsive to our attention and actions. She summarizes the limitations of current interfaces in three words: blind, passive, and disruptive. The device knows nothing about the current situation, and the user must issue every command; its passivity limits interaction with the machine, and it inconveniently demands the user's close attention. The device of the future, she believes, must instead be context-aware, proactive, and integrated in form. Shopping at a bookstore today, for example, a person takes out a smartphone, enters the book title in the search box, reviews the information, and then decides whether to buy the book. A few years from now, the same person, wearing a head-mounted device, walks into a large bookstore and picks up a book that catches his gaze of interest; his device scans the book cover and automatically displays information about it, with the selection tailored to his tastes.
At TED, Pattie Maes and Pranav Mistry demonstrated several tools that help the physical world interact with the world of data, including a deep look at Mistry's SixthSense.¹⁸ SixthSense is essentially a mini-projector coupled with a camera and a cellphone, which acts as the computer and the connection to the cloud, all the information stored on the web. The system augments the physical objects the user is interacting with by projecting more information about those objects onto them, as in the movie Minority Report.

Integrating HCI into the real world

Coincidentally, in 2009 the Hollywood films Avatar and Surrogates both had main characters in dangerous or inaccessible environments using robots to take on all the physical risk of performing actions for them. The human brain is not so different from computing devices and robots; brain-computer technology connects the brain's mechanisms to the control of machines. No artifice such as voice or motion is required to interact with a machine, which makes it extremely simple and convenient. In fact, simple BCI systems have already been commercialized in medical and gaming applications.

There are three approaches to controlling a device through the user's intention in the brain. The first uses the EEG frequency method, reading brain activity that differs depending on mental state. The second takes electrical signals from nerve cells in specific areas of the brain. The third uses magnetic resonance imaging to detect, in video form, where blood is driven in the brain.
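The first approach, the EEG frequency method, can be sketched as computing the power in the classical EEG frequency bands and treating the resulting profile as an estimate of mental state. The band boundaries below are the conventional textbook ones; the sampling rate is an assumption.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}  # conventional EEG bands, Hz
FS = 256  # assumed EEG sampling rate, Hz

def band_powers(eeg_window):
    """Return the power in each classical EEG band for one window of
    samples; differences in this profile across mental states are
    what the EEG frequency method exploits."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```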

In the late 1990s, BCI technology was at the level of moving a cursor on a computer screen to enable communication. Since 2000, BCI research has been quite active at the performance level: patients with paralysis can output simple electrical signals to read and write messages on a computer screen. The purpose of commercializing BCI is to improve the quality of life of the disabled, for instance by driving a wheelchair or operating a robotic arm. A number of organizations, typically MIT and Brown University, are collaborating with BrainGate; they have developed a semiconductor chip, implanted in a patient's brain, that can be operated by thought to check e-mail or to direct a robotic arm to move a coffee cup on the table after drinking.

In recent years, the application of BCI has extended beyond the medical field to general audiences. Game applications are still in their early stages, but Emotiv and NeuroSky have manufactured unique Bluetooth headsets that read brain waves related to feelings and emotions. I had a chance to hack an EEG toy called Mindflex and found that, with training, commands can be issued at a certain frequency. Canada's InteraXon launched an EEG headband that connects wirelessly to a smartphone or tablet in real time, analyzes your stress level, and helps the brain control movement or increase concentration.

Even a simple value extracted from a proper analysis of current brain activity can be useful. Automobile manufacturers could detect drowsy drivers from the state of the brain and automatically flash the emergency lights to prevent accidents. In a pilot study from DARPA (the U.S. Defense Advanced Research Projects Agency), cognitive technology detects a pilot's situation and automatically switches additional information from headquarters between audio and graphic modes. An MIT lab likewise introduced the idea of measuring the user's stress level so that, when information becomes overwhelming, the computer interface automatically adjusts the application's workload. These ideas envision a mental-health-detecting system that can increase the efficiency of work.
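As an illustration of the drowsy-driver idea: a rising ratio of slow theta activity to relaxed-wakefulness alpha activity is a common textbook proxy for drowsiness. The sketch below is a hedged toy with an illustrative threshold; a real system would calibrate per driver before flashing any lights.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate, Hz

def looks_drowsy(eeg_window, ratio_threshold=1.2):
    """Flag possible drowsiness when theta power (4-8 Hz) starts to
    dominate alpha power (8-13 Hz) in a window of EEG samples."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    theta = spectrum[(freqs >= 4) & (freqs < 8)].sum()
    alpha = spectrum[(freqs >= 8) & (freqs < 13)].sum()
    return theta > ratio_threshold * alpha
```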

Generalizing HCI with non-invasive sensors

To expand the brain-computer interface into general applications, the hardware must improve. Ways of measuring brain activity divide broadly into invasive and non-invasive types. The invasive method measures brainwaves from inside the scalp by surgically placing electrodes or microchips; the non-invasive method simply requires wearing a headset or headband that measures brain waves from outside. Compared with the invasive method, the non-invasive one is easy to use, but in practice a mix of noise limits the accuracy of brain-frequency measurement. Implanting electrodes or sensors in the brain, however, will not be welcomed by typical, able-bodied users who can already issue commands to their bodies intuitively. Winning mainstream customers means making the device as small as possible and extending development toward sensing the brain wirelessly. And, as mentioned earlier, BCI too belongs to the primary trend of multimodal interfaces, taking a complementary role alongside other interfaces.
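In software, the noise problem of non-invasive sensing is usually attacked first with filtering: a band-pass filter keeps the EEG range, and a notch filter removes mains hum. A sketch using scipy; the filter orders, cutoffs, and 60 Hz mains frequency are common defaults rather than values from any particular headset.

```python
from scipy import signal

FS = 256  # assumed sampling rate, Hz

def clean_eeg(raw):
    """Suppress the two dominant noise sources in scalp EEG:
    slow drift and high-frequency muscle artifacts outside 1-45 Hz,
    and 60 Hz interference from mains power."""
    b, a = signal.butter(4, [1.0, 45.0], btype="bandpass", fs=FS)
    eeg = signal.filtfilt(b, a, raw)           # zero-phase band-pass
    b, a = signal.iirnotch(60.0, 30.0, fs=FS)  # notch out mains hum
    return signal.filtfilt(b, a, eeg)
```

Filtering only goes so far; eye blinks and muscle tension overlap the EEG band itself, which is one reason consumer headsets remain less accurate than implanted electrodes.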

Conclusion

Oscar Wilde said that when bankers get together they talk about art, and when artists get together they talk about money. To me, it is a little awkward to talk about the future of human-computer interaction as a media designer: I am not a scientist inventing technologies, a manufacturer making products, or an engineer making them function. But I believe that only designers can combine insights into something that is desirable, viable, successful, and adds value to people's lives. By flipping through past innovations and the technology trends of the mainstream market, I could forecast the potential of the human-computer interface to bring radical changes to the interface design field; yet in fact nearly all of these technical concepts were introduced in the 1960s, and their technical perfection and practical application took half a century to be recognized in general public use.
While researching, an idea came to my mind, related to the world of René Laloux's cult classic Fantastic Planet (1973). In the film, a girl and her father, members of the massive, blue-skinned Draag species, take as their new pet a tiny human boy. The boy, Terr, witnesses the uncanny meditative practices that are somehow central to Draag society: the Draag practice of meditation, whereby they commune psychically with each other and with different species, is shown as transformations of their shapes and colors. Laloux's vision provokes me to ask what it is like to be connected with others and to communicate through the mind; he illustrates bodies intersecting and transforming in different shapes and colors as a way of witnessing the physical activity of brain-to-brain communication. Furthermore, I want to know what we can communicate with hacked EEG headsets by experiencing different sets of communal events in groups.

Selected References with Annotations

¹ "President Obama pitches $100 million investment in human brain research ." NY Daily News. http://www.nydailynews.com/news/politics/obama-pitches-100m-investment-brain-research-article-1.1305354 (accessed January 14, 2014).
This article reported that President Barack Obama, on Tuesday, April 2, 2013, asked Congress to spend $100 million the following year on a new project to map the human brain, in hopes of eventually finding cures for disorders like Alzheimer's, epilepsy, and traumatic injuries.

² Newell, Alan. "HCI and Older People." http://www.dcs.gla.ac.uk/utopia/workshop/newell.pdf (accessed January 14, 2014).
The inclusion of older people within the design cycle for information technology is discussed, and the successful development of a prototype email and web browser described. This is followed by a discussion of the use of theatrical techniques to educate designers in the requirements of older people for technology.

³ Prueckl, Robert, and Christoph Guger. "A Brain-Computer Interface Based on Steady State Visual Evoked Potentials for Controlling a Robot." http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CC0QFjAA&url=http%3A%2F%2Fwww.gtec.at%2Fcontent%2Fdownload%2F1822%2F11397%2Fversion%2F2%2F&ei=9RbVUtyTFtjioATtzoHoAw&usg=AFQjCNE3BVoswj0g6xgb_PkkT8eUErS23A&bvm=bv.59378465,d.cGU (accessed January 13, 2014).
In this paper a brain computer interface (BCI) based on steady state visual evoked potentials (SSVEP) is presented. For stimulation a box equipped with LEDs (for forward, backward, left and right commands) is used that flicker with different frequencies (10, 11, 12, 13 Hz) to induce the SSVEPs. Eight channels of EEG were derived mostly over visual cortex for the experiment with 3 subjects. To calculate features and to classify the EEG data Minimum Energy and Fast Fourier Transformation with linear discriminant analysis was used. Finally the change rate (fluctuation of the classification result) and the majority weight were calculated to increase the robustness and to provide a null classification. As feedback a tiny robot was used that moved forward, backward, to the left and to the right and stopped the movement if the subject did not look at the stimulation LEDs.

⁴ "Neural input devices could bring thought control to hardware." Computerworld. http://www.computerworld.com/s/article/9063938/Neural_input_devices_could_bring_thought_control_to_hardware (accessed January 14, 2014).
This article contains a statement from Paul Ledak, vice president of digital convergence at IBM, that the use of BCI technology represents a potential breakthrough in human-machine interfaces, changing the realm of possibilities not only for games, but in the way that humans and computers interact.

⁵ Condillac, Etienne Bonnot De. Condillac's Treatise on the Sensations. Los Angeles: School of Philosophy, University of Southern California, 1930. Print.
Étienne Bonnot, Abbé de Condillac, was the chief exponent of a radically empiricist account of the workings of the mind that has since come to be referred to as “sensationism.” Whereas John Locke's empiricism followed upon a rejection of innate principles and innate ideas, Condillac went further and rejected innate abilities as well. On his version of empiricism, experience not only provides us with “ideas” or the raw materials for knowledge, it also teaches us how to focus attention, remember, imagine, abstract, judge, and reason. It forms our desires and teaches us what to will. Moreover, it provides us with the best lessons in the performance of these operations, so that a study of how we originally learn to perform them also tells us how those operations ought to be performed. The pursuit of this tenet led Condillac to articulate an early developmental psychology, with explicit pedagogical and methodological implications. His concerns also led him to focus on the theory of perception, and to advance important and original views on our perception of spatial form. He offered a more searching, careful, and precise account of what exactly is given to us by each of the sense organs than any that had been offered up to his day, and presented a highly nuanced account of how this raw data is worked up into our beliefs about the world around us.

⁶ Youngblood, Gene. “Introduction by R. Buckminster Fuller.” Expanded Cinema. New York: Dutton, 1970. 15-35. Print.
This is the Introduction by R. Buckminster Fuller for the book Expanded Cinema. Fuller said that it is the most brilliant conceptioning of the objectively positive use of the Scenario-Universe principle, which must be employed by humanity to synchronize its senses and its knowledge in time to ensure the continuance of humanity now installed by evolution aboard our little Space Vehicle Earth.

⁷ "The 5 in 5." IBM technology advances that will customize our lives. http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_future/ideas/ (accessed January 14, 2014).
IBM researchers are exploring the idea that everything will learn – driven by a new era of cognitive systems where machines will learn, reason and engage with us in a more natural and personalized way. These innovations are beginning to emerge enabled by cloud computing, big data analytics and learning technologies all coming together.

⁸ Wikimedia Foundation. "Computer keyboard." Wikipedia. http://en.wikipedia.org/wiki/Computer_keyboard (accessed January 13, 2014).

⁹ Wikimedia Foundation. "Computer mouse." Wikipedia. http://en.wikipedia.org/wiki/Computer_mouse (accessed January 14, 2014).

¹⁰ Wikimedia Foundation. "Multi touch." Wikipedia. http://en.wikipedia.org/wiki/Multi_touch (accessed January 14, 2014).

¹¹ Wikimedia Foundation. "HP-150." Wikipedia. http://en.wikipedia.org/wiki/HP-150 (accessed January 14, 2014).

¹² "100 million TVs will be Internet-connected by 2016." - latimes.com. http://latimesblogs.latimes.com/entertainmentnewsbuzz/2012/03/100-million-tvs-will-be-internet-connected-by-2016.html (accessed January 14, 2014).

¹³ Wikimedia Foundation. "Smart TV." Wikipedia. http://en.wikipedia.org/wiki/Smart_TV (accessed January 14, 2014).

¹⁴ Wikimedia Foundation. "Speech recognition." Wikipedia. http://en.wikipedia.org/wiki/Speech_recognition (accessed January 14, 2014).

¹⁵ Wikimedia Foundation. "Facial recognition system." Wikipedia. http://en.wikipedia.org/wiki/Facial_recognition_system (accessed January 14, 2014).

¹⁶ "The Wall Street Journal." Digits RSS. http://blogs.wsj.com/digits/2014/01/06/intel-shows-perceptions-of-perceptual-computing/ (accessed January 14, 2014).

¹⁷ "Fluid Interfaces | MIT Media Lab." Fluid Interfaces | MIT Media Lab. http://www.media.mit.edu/research/groups/fluid-interfaces (accessed January 14, 2014).

¹⁸ "SixthSense - a wearable gestural interface (MIT Media Lab)." SixthSense - a wearable gestural interface (MIT Media Lab). http://www.pranavmistry.com/projects/sixthsense/ (accessed January 14, 2014).