2nd International Conference on Animal Computer Interaction

The 2nd International Conference on Animal Computer Interaction was held in Iskandar, Malaysia, on the 16th of November 2015, as part of the 12th International Conference on Advances in Computer Entertainment Technology. This year, the conference lasted a whole day and featured a wide variety of content: paper presentations, a postgraduate consortium, a panel discussion and video posters.

Paper presentations were divided into three main sessions by topic: For the Wild, Body that Talks, and Perspectives. This year, 10 paper contributions were accepted, and the quality of the work was remarkable. All the papers can be found here, and below is a brief summary of the sessions.

Paper session 1: For the Wild

Naturalism and ACI: Augmenting Zoo Enclosures with Digital Technology (short paper)
Author(s): Marcus Carter, Sarah Webber and Sally Sherwen

This work, presented by Marcus Carter, described the potential of ACI technology in zoo environments to improve both animal welfare and the visitor experience, demonstrating that naturalism and technology are not always opposing concepts. Zoo enclosures have evolved from limited concrete spaces in which animal welfare was questionable to immersive, naturalistic habitats that give visitors a sense of how animals live in the wild. While naturalistic features in these enclosures help visitors form positive impressions of their visit, animal welfare does not necessarily depend directly on the naturalism of the environment, but rather on the animals' freedom to express natural behaviors, and technology could be used to encourage such behaviors. Moreover, technological enrichment could also be used to stimulate the animals' intelligence and capabilities, providing visitors with a new perspective on zoo technologies. ACI could significantly improve the welfare of zoo animals while at the same time raising visitors' awareness of wildlife conservation. This evolution towards a fifth generation of zoos should include the zoo visitor in the design process, given the importance of visitors' perception of, and relationship with, the animals.

Playful Rocksalt System: Animal-Computer Interaction Design in Wild Environments (short paper)
Author(s): Hiroki Kobayashi, Kazuhiko Nakamura, Kana Muramatsu, Akio Fujiwara, Junya Okuno, Kaoru Saito

In this paper, an experimental system allowing interactions with wild animals was presented. The system consists of two subsystems: a Panorama viewer of the forest and a remote animal-sensing system. The Panorama viewer is an application on the human side which lets the user look into the forest using a PDA, with its gyroscope controlling the camera. The sensing system consists of a theremin capable of detecting the arrival and presence of a wild deer. When the theremin detects that a deer is nearby, the PDA alerts the user, who is in a remote location, by emitting slight vibrations. Real-time images are also displayed on the PDA, and the user can interact with the deer by remotely moving a piece of deer cracker (made of rock salt) located in the forest. The authors highlight the potential of technology to raise awareness of ecosystem preservation and to provide tourists with simulated experiences which do not interfere with wild nature and do not expose wild animals to danger or collateral damage from tourist activities.
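
As a rough illustration of the event flow (not the authors' implementation), the sensing-and-alerting loop might look like the following sketch, where the theremin signal and the PDA vibration are simulated with hypothetical stand-in functions:

```python
import random
import time

def theremin_detects_deer():
    """Stand-in for the theremin's proximity signal; simulated here with
    a random draw rather than real sensor input."""
    return random.random() < 0.2

def vibrate_pda():
    """Stand-in for the PDA's slight vibration alert to the remote user."""
    print("PDA vibrating: a deer has approached the rock-salt cracker")

def monitoring_loop(polls=20, interval=0.1):
    """Poll the forest-side sensor and alert the remote user on detection."""
    for _ in range(polls):
        if theremin_detects_deer():
            vibrate_pda()
            # At this point the user would watch the live panorama images
            # and could remotely move the rock-salt deer cracker.
        time.sleep(interval)

monitoring_loop()
```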

Towards the Non-Visual Monitoring of Canine Physiology in Real-Time by Blind Handlers (long paper)
Author(s): Sean Mealin, Mike Winters, Ignacio X. Domínguez, Michelle Marrero-García, Alper Bozkurt, Barbara Sherman and David Roberts

David Roberts presented in this work the design of two non-visual interfaces, a vibrotactile one and an audio one, aimed at allowing blind handlers to recognize changes in their dogs’ heart and respiratory rates. In this way, blind handlers, who cannot identify some physiological changes in their dogs as easily as sighted handlers, would be able to receive this information in real time and in a non-distracting way. The dog’s heart and respiratory rates can be monitored using a small, unobtrusive wearable device developed in the researchers’ prior work. To make the two studies reproducible, they used simulated data of a dog’s heart and respiratory rates in different situations during a typical walk with the owner. Both interfaces had two modes for presenting the information to the handlers: absolute mode, in which every heartbeat and breath is indicated, and relative mode, in which only increases or decreases in the heart and respiratory rates are indicated. For the audio interface, absolute mode uses heartbeat and panting sounds, respectively, while relative mode uses high (heart) and low (breath) pitches, which rise or fall depending on the information to be transmitted. The vibrotactile interface gives similar cues, this time using two vibration motors located on the right and left sides of the dog’s handle. Two studies were conducted in which users had to report when they detected changes in the dog’s heart or respiratory rates using all interfaces in all modes. Results indicate that participants felt more comfortable using the audio-based interface, even though accuracy was higher with the vibrotactile interface. Relative mode in both interfaces seemed more effective when the user had to focus on other tasks, while audio cues were more difficult to detect when conducting other audio-based tasks simultaneously. A detailed discussion and further interesting results can be found in their paper.
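
As a concrete (and purely illustrative) reading of the relative mode, the sketch below emits an "up" or "down" cue only when the monitored rate drifts past a threshold; the threshold value and the simulated heart-rate samples are assumptions, not taken from the paper. In the audio interface such events would map to rising or falling pitches, and in the vibrotactile one to the left and right motors:

```python
def relative_cues(samples, threshold=5.0):
    """Yield ('up' | 'down', delta) events whenever the rate moves more
    than `threshold` units away from the last value reported to the user."""
    last_reported = samples[0]
    for value in samples[1:]:
        delta = value - last_reported
        if abs(delta) >= threshold:
            yield ("up" if delta > 0 else "down", delta)
            last_reported = value  # re-anchor on the newly reported value

# Simulated heart-rate samples (bpm) during a walk; values invented here:
heart_rate = [80, 82, 90, 96, 95, 88, 80, 79]
for direction, delta in relative_cues(heart_rate):
    print(f"heart rate {direction} by {delta:+.0f} bpm")
```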

Paper session 2: Body that Talks

Knowledge Engineering for Unsupervised Canine Posture Detection from IMU Data (long paper)
Author(s): Mike Winters, Rita Brugarolas Brufau, John Majikes, Sean Mealin, Sherrie Yuschak, Barbara Sherman, Alper Bozkurt and David Roberts

David Roberts presented a comparison between supervised and unsupervised classification for canine posture detection using wearable accelerometers. Building on prior work, they reduced the size of the wearable harness and aimed to cut down the time-consuming task of labeling the acquired data to build the posture model. For this purpose, they proposed taking skeletal measurements from the animal to build unsupervised models of canine postures. These skeletal models were compared against traditional models learned from the manually labeled data of the training set. Three decision schemes were evaluated for the two approaches, as sketched below: window match (each sensor votes for a posture if the input value falls into the model's range of values for that sensor), clustering (each sensor votes for the posture with the closest average value for that sensor) and fuzzy clustering (each sensor assigns floating-point scores to the most likely postures). Finally, the posture with the majority of votes (or points) is selected. Results showed that classification using supervised models was more time-consuming but more accurate than unsupervised classification on skeletal measurements. Still, the results were very promising and encouraging, as these systems could be used to automate canine training processes. All data, tables and results can be found in the paper, which is a really interesting and recommended read.
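
To make the per-sensor voting concrete, here is a minimal sketch of two of the schemes (window match and closest-mean clustering) followed by a majority vote. The sensor names, postures and model numbers are invented for illustration and are not from the paper:

```python
from collections import Counter

# Hypothetical per-sensor posture models: (low, high) value range per posture.
MODELS = {
    "neck":  {"sitting": (0.2, 0.5), "standing": (0.6, 0.9), "lying": (0.0, 0.2)},
    "back":  {"sitting": (0.4, 0.7), "standing": (0.7, 1.0), "lying": (0.0, 0.3)},
    "chest": {"sitting": (0.3, 0.6), "standing": (0.5, 0.8), "lying": (0.1, 0.3)},
}

def window_match_vote(sensor, value):
    """Window match: vote for every posture whose modeled range contains the reading."""
    return [p for p, (lo, hi) in MODELS[sensor].items() if lo <= value <= hi]

def closest_mean_vote(sensor, value):
    """Clustering: vote for the posture whose modeled mean is closest to the reading."""
    means = {p: (lo + hi) / 2 for p, (lo, hi) in MODELS[sensor].items()}
    return [min(means, key=lambda p: abs(means[p] - value))]

def classify(readings, scheme):
    """Tally one vote (or more) per sensor and return the majority posture."""
    tally = Counter()
    for sensor, value in readings.items():
        tally.update(scheme(sensor, value))
    return tally.most_common(1)[0][0]

# One invented set of sensor readings:
print(classify({"neck": 0.45, "back": 0.55, "chest": 0.40}, closest_mean_vote))  # sitting
```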

Developing a depth-based tracking system for interactive playful environments with animals (long paper)
Author(s): Patricia Pons, Javier Jaen and Alejandro Catala

Patricia Pons presented a depth-based tracking system capable of detecting a cat’s location, posture and field of view. In this work, they use a Microsoft Kinect sensor placed on the ceiling, in a top-down position, to cover a wide play area. In this way, no wearable device is required, and therefore the agility and natural behavior of the animal is not constrained. This matters greatly for animals that are not used to wearing a harness and are extremely sensitive, such as cats, or animals for whom a harness with technological devices is unfamiliar and could pose a threat, such as orangutans. Several sessions with cats were recorded using the Kinect sensor, in which cats played and moved freely around the play area, interacting with everyday toys, humans or even small robots. These recordings were used to inform the development of the tracking system, which uses only the depth information from the Kinect sensor. First, it extracts the cats’ contours from the image; the depth information clearly shows, even to the human eye, the differences in depth values between the cat’s body parts. Next, a k-means clustering algorithm is applied to each detected cat contour, separating the cat’s body parts into different clusters. These clusters are then classified into head, body and tail using information such as the cluster’s average depth, its size and the size of the cat’s contour, as sketched below. Moreover, basic postures such as sitting, walking, jumping or turning can also be recognized from this information. The authors are working on improving the accuracy of the algorithm and on testing its performance with machine learning algorithms for the classification step. This system could be of great interest for automatic posture and behavior recognition, especially in the development of intelligent playful environments which adapt to the animals’ interactions.
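
For intuition, the clustering-and-labeling step could look roughly like the sketch below, which clusters the depth pixels of one detected contour with k-means and then labels the clusters with a simple size-and-depth heuristic. The heuristic rule is an assumption for illustration, not the authors' published classifier:

```python
import numpy as np
from sklearn.cluster import KMeans

def label_body_parts(points):
    """Cluster one cat contour into three body parts.

    `points` is an (N, 3) array of (x, y, depth) pixels belonging to a single
    detected cat contour. Returns a dict mapping part names to point arrays.
    """
    km = KMeans(n_clusters=3, n_init=10).fit(points)
    clusters = [points[km.labels_ == k] for k in range(3)]
    # Assumed heuristic: the largest cluster is the body; of the remaining
    # two, the one with the smaller mean depth (i.e. closer to the
    # ceiling-mounted, downward-facing Kinect) is the head, the other the tail.
    clusters.sort(key=len, reverse=True)
    body, a, b = clusters
    head, tail = (a, b) if a[:, 2].mean() < b[:, 2].mean() else (b, a)
    return {"head": head, "body": body, "tail": tail}
```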

Towards a Canine-Human Communication System Based on Head Gestures (long paper)
Author(s): Giancarlo Valentin, Joelle Alcaidinho, Melody Moore Jackson, Ayanna Howard and Thad Starner

Joelle Alcaidinho explained how different kinds of barriers between dogs and humans can hinder working dogs when they try to communicate with a human. These barriers can be perceptual (humans cannot sense what the dog is sensing, e.g. blind people), contextual (the human does not know the dog well and is unable to interpret its signaling) or distance barriers (signaling takes place beyond line of sight or hearing). To overcome the perceptual and distance barriers, the authors are developing a system capable of detecting representative, natural head gestures from dogs using a motion sensor on the dog’s collar. They used the accelerometer and gyroscope of a WAX9 unit, and defined seven characteristics that the selected gestures should maximize: generalizability across subjects, low false positives, high true positives, physical ease, conceptual ease, ease of training and ease of remembering. In a first experiment, they evaluated different gestures and gesture sequences to see which were the most promising. Vertical gestures were discarded from the experiment because they require differentiating between a posture in which the dog is looking up and the movement of looking up. They finally identified four gestures (sequences) that deserved further exploration: spin, twirl, reaching the right rib cage twice, and reaching the left rib cage twice. The authors then evaluated the system’s accuracy in recognizing those gestures. According to the observational findings, rotational gestures seem to fit the seven requirements best. The way a dog was trained also affected the system’s accuracy: a dog rewarded after every single movement would look down for a treat after each movement, breaking the sequence of recognizable gestures.

Sensing the Shape of Canine Responses to Cancer (short paper)
Author(s): Olivia Johnston-Wilder, Clara Mancini, Brendan Aengenheister, Joe Mills, Rob Harris, Claire Guest

In this presentation, Clara Mancini presented a study of the pressure patterns of cancer detection dogs as they search for positive samples. Cancer detection dogs are trained to walk along a set of samples and sit in front of the positive one, i.e. the one containing cancer cells. Given the spontaneous nature of dogs, and their inability to verbally explain what happened, their signaling is sometimes ambiguous. The researchers built a canine-centered interface consisting of a modified stand, similar to the ones used to present the samples, with a pressure sensor that records the pressure the dog applies while sniffing the sample. Previous work demonstrated that dogs produce different patterns for positive and negative samples, while uncertain cases produce patterns matching neither of the two. In the present study, they used amyl acetate so they could control the amount of the compound present in each sample. They observed pattern variations depending on the concentration level of the compound. The study also involved two dogs with different training and behaviours, and pattern variations were found between them: one produced more energetic, stronger patterns, while the other was less energetic and produced less characteristic patterns. However, there were pressure patterns common to both dogs across concentration levels: in the positive samples, there was always an initial, intense spike, then a second, usually longer one, and then a series of short, decreasing spikes produced by the bounces of the plate as the dog leaves. In the negative samples, there was no second spike. The duration of the first and second spikes in the positive samples varies with the concentration level. This promising work could help automate the training of these dogs and avoid ambiguous signaling situations.
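
A toy version of the reported spike pattern could be sketched as follows; the threshold, the positive/negative rule and the sample traces are illustrative assumptions, not the study's actual signal processing:

```python
def find_spikes(trace, threshold=0.5):
    """Return (start, end) index pairs where the pressure stays above threshold."""
    spikes, start = [], None
    for i, value in enumerate(trace):
        if value >= threshold and start is None:
            start = i
        elif value < threshold and start is not None:
            spikes.append((start, i))
            start = None
    if start is not None:  # trace ended mid-spike
        spikes.append((start, len(trace)))
    return spikes

def looks_positive(trace):
    """Positive pattern: an initial spike followed by a second, at least
    as long, sustained spike (negative samples lack the second spike)."""
    spikes = find_spikes(trace)
    return (len(spikes) >= 2
            and spikes[1][1] - spikes[1][0] >= spikes[0][1] - spikes[0][0])

# Invented pressure traces for illustration:
positive = [0.0, 0.9, 0.8, 0.1, 0.7, 0.7, 0.6, 0.1, 0.3, 0.2, 0.1]
negative = [0.0, 0.9, 0.8, 0.1, 0.2, 0.1, 0.0]
print(looks_positive(positive), looks_positive(negative))  # True False
```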

Paper session 3: Perspectives

Smelling, Pulling, and Looking: Unpacking Similarities and Differences in Dog and Human City Life (long paper)
Author(s): Fredrik Aspling, Oskar Juhlin and Elisa Chiodo

Fredrik Aspling argued for the necessity of understanding human-dog communication in shared spaces such as urban environments. He presented an ethnomethodological video analysis of several clips of a human walking down the street with two leashed dogs. In this analysis, they look for indicators that reveal the differing interests of the dogs and the human at several points during the walk. This is done by observing the leash and how it strains when a dog wants to reach a particular point or take a different direction than the human. Negotiations occur in these situations, as a strained leash signals a conflict between what the dog wants (smelling other dogs’ scents) and what the human wants (moving along). The authors propose supporting the dogs’ wants and needs in these situations by giving the handler information about their dogs’ curiosities when walking in an urban environment.

Cross-Disciplinary Perspectives on Animal Welfare Science and Animal-Computer Interaction (short paper)
Author(s): Jean-Loup Rault, Sarah Webber and Marcus Carter

Sarah Webber presented this paper discussing the symbiotic relationship between the fields of animal welfare science and animal computer interaction, drawing on real examples and experiences at Melbourne Zoo. She explained that there are three major schools of thought on how to assess animal welfare: biological functioning (relying on physical indicators), affective states (focusing on mental health, feelings, etc.), and natural living (animals should be able to behave in natural ways). However, ACI is starting to show how interactions that are not natural for animals, such as the use of technology, could improve their welfare in the same way technology has benefited human beings. As in animal welfare science, ACI could use behavior observation and identification to understand animals’ interactions, considering not only the moment a device is used but also the long-term consequences these interactions cause. In addition, possible contributions of ACI to animal welfare were highlighted: technology could allow animals to control their environment, thereby reducing stress, for example in zoos where areas crowded with visitors can disturb the animals. Technology also has the potential to enhance social interaction between animals, or between animals and humans, even remotely, while enriching their environments. For these reasons, animal welfare scientists and ACI researchers should work closely together to design systems, grounded in animal welfare science, that effectively improve animal wellbeing.

Designing for intuitive use for non-human users (long paper)
Author(s): Hanna Wirman and Ida Kathrine H. Jorgensen

Hanna Wirman talked about TOUCH, a project with Bornean orangutans in rescue centers in which she participates. The project aims to enrich orangutans’ play using technology, focusing on extending existing playful practices by digital means. They started with digital interfaces such as touch-screen computers, but soon realized the difficulties orangutans had in understanding the intended use of human-oriented interfaces, which are easy for us only because of our prior knowledge and abilities. They then began to explore what an “intuitive interface” means when we speak about technology for orangutans, and how to design playful interactions which are truly intuitive from the animals’ perspective. Tangible user interfaces (TUIs) seem a promising way of providing the orangutans with intuitive interfaces that are easy to use without any training. Manipulating a physical object lets the orangutan explore how the object is supposed to be used and what can be done with it. By studying how an animal behaves in its natural environment, how it uses the objects around it and how it plays with them, we may be able to identify some of its mental schemas and how it perceives the objects it is using, and this could inform the development of suitable interfaces. The natural interactions that emerge when the animal uses the object should provoke recognizable and predictable responses from the system, so that the whole interaction makes sense to the animal. As part of the TOUCH project, the researchers have been observing how orangutans play naturally, with and without technology, in order to develop guidelines for interfaces which orangutans might find intuitive or which accommodate their play preferences. As an example, orangutans like to poke people and objects with sticks or branches, so a possible TUI could use branches and the action of poking to trigger a reaction on the system side.

Postgraduate consortium

After lunch, the first postgraduate consortium in ACI was held. It was a really interesting session which gave PhD students the opportunity to present their work and receive insightful feedback and comments from experts in the field. Each of the two PhD students with an accepted consortium contribution prepared their presentation with the advice of a professional in the field, to make it clear and fruitful for discussion. Sarah Webber had David Roberts as advisor for her presentation, while Clara Mancini was the advisor for Fredrik Aspling.

Sarah Webber’s PhD focuses on technology at the zoo, and she plans to investigate the impact of digital technologies on human-animal encounters in these environments. She proposes to provide guidelines and methods for zoo technology design and evaluation, helping to create new technological experiences at the zoo that enhance the wellbeing of captive animals while supporting the work of zoos as educational environments for nature and animal conservation. She has studied the impact of five existing interactive systems for human-animal encounters at Melbourne Zoo, and is now analyzing a new system for playful collaboration they have developed for orangutans and zoo personnel.

Fredrik Aspling’s PhD focuses on the empirical exploration of different forms and types of multispecies computer-mediated interactions involving humans, animals and plants. He is currently focusing on three case studies: an ethnography of the use and experience of mobile proximity-sensor cameras in hunting, an ethnomethodological study of the negotiations between two leashed dogs and their handler during a walk, and a multispecies ethnography of people’s use of mobile phone cameras and social photography applications.

Video posters

This year, an illustrative way of presenting ACI work was introduced at the conference: video posters. This type of submission allowed authors to illustrate a domain problem or show potential applications of their work. Two video posters were accepted and displayed throughout the whole ACE conference, so not only could ACI researchers discuss them, but people from outside the ACI field could also envision how technology might help improve animals’ wellbeing. The video posters were accompanied by explanatory postcards with the title of the work, the authors’ contact info and an abstract describing the content of the video.

A depth-based tracking system for cats (VIDEO)

This work presents a tracking system capable of detecting a cat’s location, body posture and orientation using only the depth information provided by a Microsoft Kinect placed on the ceiling facing down. The video explains how the tracking system works and its potential to improve animals’ wellbeing by supporting the development of intelligent playful activities in which the system adapts to the animals’ interactions and body postures.

HABIT: Horse Automated Behavior Identification Tool (VIDEO)

This video presents a tool for the automatic analysis and recognition of horse-to-horse and horse-to-human behaviours. The system will be trained to recognise behavioral signatures manually annotated from video clips; with this ethogram database, it aims to provide feedback for the evaluation of horse behaviors, automatically detecting unnatural behaviors and reducing human bias.

Panel discussion

A panel discussion was held at the end of the day, with Clara Mancini, Marcus Carter and Hanna Wirman as panelists and David Roberts as moderator. The panel was a really exciting opportunity to share concerns and opinions now that the ACI community is growing rapidly and drawing the attention of more and more people.

One of the questions focused on how to publish ACI work in HCI-dominated conferences and journals. The general opinion was that we have to find and highlight the benefits that ACI work can provide to both humans and non-humans; however, this sometimes only works if we strongly justify the human side of the study, as animal welfare alone is not always enough to serve as an entry point to some venues. A related question was whether the researchers present in the room considered ACI a part of HCI, or rather HCI a part of the more general field of ACI. The agreement among all present was that ACI should indeed be considered the general field, with HCI as an area focusing on specific members of the animal kingdom, simply because human beings are themselves animals. However, people from the HCI field would probably answer the opposite, and this is the main source of difficulty in publishing or obtaining funding for ACI research.

Another question that emerged was therefore how to obtain funding for research. Some general comments followed: the difficulty usually resides in the ethics surrounding the animal species proposed for the study, whether it is a domestic or a wild animal, whether it is regional or foreign, etc. This can condition the type of organizations or people interested in funding ACI studies on a specific species, and can guide our search. Other interesting questions concerned when, and to what extent, we should put boundaries on ACI studies, whether parallels can be drawn between the evolution of studies of children using technology and of animals using technology, and how to collaborate with experts in animal behavior.

Before closing the conference, Patricia Pons presented a joint initiative undertaken by several PhD students in ACI: a website on playful technological interactions with animals. The researchers running the website are Sofya Baskina, Annika Geurtsen, Ilyena Hirskyj-Douglas, Patricia Pons, Michelle Westerlaken and Anna Zamansky. The aim of the initiative is to create a reference forum for researchers interested in exploring playful interactions with animals within ACI from different perspectives. It is intended to be an open and collaborative space, so all researchers are welcome to collaborate and share their work, write posts on the area, and share their publications, events or interests. The idea is to spread the word and make more people inside and outside the field aware of the possibilities, potential applications and collaborations. The website also features a world map of ACI research groups working on playful technology for animals, with brief information on their research, as well as a calendar of ACI events in general. For those who wish to collaborate or get updates on this initiative, here are the details:

Finally, Clara Mancini announced that next year’s ACI conference will be held at the Open University in Milton Keynes, as an independent conference. We really look forward to it! 🙂
