ABC Robotics: Completed Research and Development Projects
Raziel Riemer, Industrial Engineering & Management
Jeffrey A. Reinbolt, Department of Biomedical Engineering, University of Tennessee
Exoskeleton technologies constitute a growing field with applications in assisting individuals with disabilities, aiding robotic rehabilitation, and augmenting normal human capabilities in industrial or military settings. Despite the significant amount of work devoted to designing and controlling exoskeletons, there remains a lack of knowledge about the interaction between exoskeletons and their human users. Understanding this interaction is critical to exoskeleton design, since several interconnected parameters (e.g., actuator type, gear ratio) must be considered. Because these components critically influence one another, the design procedure is by no means a trivial task.
The primary method for exoskeleton design is to build and rebuild several prototypes until the desired parameters are reached [1]. This is a very costly and time-consuming process. A recent step forward has been the introduction of design methods in which researchers adjust exoskeleton parameters and test their effect on human performance online. Examples include adjusting moment profiles [2], hip assistance force with a soft exosuit during walking [3], and controller signals [4]. Yet even with this approach, multiple experiments and complex, expensive devices are still required.
A methodology that might aid exoskeleton design is optimization-based motion prediction, which assumes that human motions optimize performance measures such as jerk [5] and energy expenditure [6]. The motion can thus be determined by solving an optimization problem with constraints, such as maximum joint torque or angle. We hypothesize that combining optimization-based motion prediction with experimental models will lead to a better design tool for exoskeletons.
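As an illustration of the optimization-based prediction idea, the classical minimum-jerk model [5] admits a closed-form solution for a point-to-point movement. The sketch below (plain NumPy; an illustration of the general approach, not the project's actual model) computes it for a single joint:

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Closed-form minimum-jerk trajectory for a single joint: the unique
    polynomial minimizing integrated squared jerk, given zero velocity and
    acceleration at both endpoints."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

# Example: a 1-second reach from angle 0 to angle 1 (arbitrary units).
t, x = minimum_jerk(0.0, 1.0, 1.0)
```

Constraints such as torque or angle limits would turn this into a numerical optimization problem rather than a closed-form one.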
This research aims to develop a method, based on theoretical and empirical models, for efficiently designing a functioning exoskeleton by using a mathematical model to represent the device, the human using the device, and the device–human interactions.
David Zarrouk, Mechanical Engineering
Shai Arogeti, Mechanical Engineering
This multidisciplinary project proposes a new, simple, and inexpensive system and algorithm to improve the accuracy of simultaneous localization and mapping (SLAM) beyond the state of the art. The novel SLAM system combines two robots fitted with rotating cameras and laser sensors that collaborate to measure their position and orientation with very high accuracy in unstructured environments. The research also includes developing an intelligent navigation algorithm that plans the robots' motion so as to improve the accuracy of the mapping. Throughout the research, we will perform numerical simulations and develop an experimental setup to demonstrate the accuracy of the system. We believe that, using low-cost (less than $1000 per robot) off-the-shelf components, we can achieve an accuracy (positioning error / distance travelled) of nearly 1/1000.
Avinoam Borowsky, Industrial Engineering & Management
Shai Arogeti, Mechanical Engineering
Vehicles equipped with varying levels of automation are starting to share the roads with conventional (non-automated) vehicles. Still, existing traffic is mostly composed of conventional vehicles, and this situation is expected to last for at least the next 20 years. Past studies have shown the benefits of automated vehicles for overall traffic performance. However, managing mixed traffic is challenging, as mixed traffic behavior represents a coupled mixture of human and automated machine dynamics.
Modern automated vehicles can be equipped with communication devices that allow their cooperative control. Vehicles can then be controlled based on a shared goal derived from traffic considerations. Platooning is a multi-vehicle system whose goal is a configuration in which all vehicles keep some desired speed and spacing. Tight vehicle platoons can improve highway traffic flow and road capacity. Nevertheless, all past developments have been dedicated to fully automated platoons and did not consider mixed traffic scenarios. This study extends the concept of a fully automated platoon to a more general setting, where the controlled platoon includes a mixture of automated and manually driven vehicles.
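The platooning goal of converging to a desired speed and spacing can be sketched with a simple linear feedback law on spacing and speed errors. This is illustrative only, with assumed gains and a fully automated platoon; the project's mixed-platoon controller and its human-driver model are not specified here:

```python
import numpy as np

def simulate_platoon(n=4, d_des=10.0, kp=0.5, kv=1.0, dt=0.05, steps=2000):
    """Toy constant-spacing platoon: each follower accelerates according to
    linear feedback on its spacing and speed errors relative to its
    predecessor. A real mixed platoon would model the manually driven
    vehicles separately, e.g., with a car-following model."""
    pos = np.array([-d_des * i * 2.0 for i in range(n)])  # start too spread out
    vel = np.zeros(n)
    v_lead = 20.0                                         # leader cruises at 20 m/s
    for _ in range(steps):
        vel[0] = v_lead
        acc = kp * ((pos[:-1] - pos[1:]) - d_des) + kv * (vel[:-1] - vel[1:])
        vel[1:] += acc * dt
        pos += vel * dt
    return pos[:-1] - pos[1:]                             # final inter-vehicle gaps

gaps = simulate_platoon()
```

With these gains each spacing error behaves like a well-damped oscillator, so the gaps converge to the desired 10 m.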
Thus, the main objective of this research is to extend the concept of a fully automated platoon by developing a cooperative control approach for mixed platoons driving on highways. The cooperative element arises from the fact that the automated vehicles may share information needed for control, while the mixed-platoon term reflects the fact that the target criterion to be fulfilled (or optimized) involves a mixed group of controlled and uncontrolled vehicles. No communication is assumed between manually driven and automated vehicles. The developed approach, based on vehicle simulations and driving simulator experiments, should minimize possible disturbances caused by the human driver to the overall performance of the mixed platoon, as well as minimize the influence of the platoon on the driving comfort of the human driver.
David Zarrouk, Mechanical Engineering
In this project we intend to develop an amphibious robot, A-STAR, capable of crawling over water similarly to the basilisk lizard (Basiliscus). The robot draws inspiration from cockroaches and the basilisk lizard. Besides crawling over water, the A-STAR robot will also be capable of crawling over ground surfaces. The robot is similar to our STAR robot but fitted with tilted propellers at the bottom, which provide lift and thrust. The main applications are in search and rescue robotics, for example in flooded urban areas, and in transportation and excavation in unstructured environments.
Gera Weiss, Computer Science
Achiya Elyasaf, Software and Information Systems Engineering
Shai Arogeti, Mechanical Engineering
Abstract
Robots that act in complex environments must effectively combine a dynamic set of advanced control strategies. Additionally, software architectures for such robots must allow for an incremental and decentralized development process, in which different stakeholders can integrate their engineering artifacts in a natural and intuitive way. To accomplish this, two lines of research should be combined: one concerning control theory for the design of advanced control laws capable of achieving complex goals robustly, and another focusing on software architectures that explicitly address the need to acquire and use partial specifications and integrate them into a robust decision-making mechanism. The combination of these two disciplines has given rise to the field of behavioral robotics: a multidisciplinary field that draws on control theory as well as software engineering, cognitive science, artificial intelligence, and physics.
In this research, we develop new behavior-based control strategies with which robotic tasks can be decomposed into elementary behaviors that are simultaneously managed at run-time. Specifically, we build on state-of-the-art techniques for behavior integration, using a modeling and design tool called Behavioral Programming (BP). Our goal is to extend BP with control-theory idioms to allow for better integration of sub-controllers. The applicability of the newly developed method will then be studied through a variety of case studies, including the control of smart roads, smart buildings, nano-satellites, and drones.
Shlomi Dolev, Computer Science
Michael Rosenblit, Ilse Katz Institute for Nanoscale Science and Technology
Abstract
We intend to research the computation and communication abilities of non-organic, bio-compatible nano-robots to be used in a physiological medium. These robots can harvest energy from glucose in the blood and use the harvested energy to actuate a response identifiable by the immune system. The robots can also synchronize among themselves on a commonly decided signal. Synchronization of particles and devices is observed in insects (fireflies), flocking birds, etc. In machines and quantum devices, radio frequency (RF) and molecular-excitation-based synchronization are implemented in applications such as common timing circuits and phase-locked loops (PLLs). One similarity across all models of synchronization is that the process involves finitely many particles exhibiting non-linear oscillations (timing signals) that eventually approach coherence with themselves and their neighbors through a shared medium of transmission (an electromagnetic or acoustic field, molecular sensing with an eventual change of state, etc.). Specifically, we would like to design programmable nano-robots that can travel in the blood and target malicious entities. A related theoretical concept in computer science is programmable matter: particles of low computational power that can collectively mimic and amplify a central system. Recent research has shown the possibility of producing such programmable matter and controlling it through expansion, contraction, and movement; it can also serve for detection and for communicating the detection via radio signals.
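The convergence to coherence through a shared medium described above can be illustrated with the standard Kuramoto model of coupled phase oscillators. This is our choice of textbook illustration, with assumed parameters, not the project's actual synchronization mechanism:

```python
import numpy as np

def kuramoto(n=50, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Phase oscillators coupled through a shared field (the mean phase).
    The order parameter r in [0, 1] measures synchrony; r near 1 means
    the population oscillates coherently."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    omega = rng.normal(1.0, 0.05, n)       # slightly heterogeneous frequencies
    for _ in range(steps):
        field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(field), np.angle(field)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))  # final order parameter

r_final = kuramoto()
```

With coupling well above the critical value for this frequency spread, the initially incoherent population locks into a nearly common phase.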
Jessica Cauchard, Industrial Engineering and Management
Ehud Sharlin, Computer Science, University of Calgary
Abstract
The last few years have seen a revolution in aerial robotics, with personal drones becoming pervasive in our environments. As they become ubiquitous and increasingly autonomous, it is crucial to understand how they are perceived and understood by people. The robotics community has extensively theorized and quantified how robotic agents are perceived as social creatures and how this affects people. However, drones present different form factors that are yet to be systematically explored. This project aims to fill this gap by understanding people's perceptions of and expectations from drones and how these affect Human-Drone Interaction (HDI) design. In particular, we focus on three aspects of HDI: Awareness, Physicality, and Interaction.
What can facial information tell us about drivers' readiness for take-over requests in autonomous vehicles (SAE Level 3)?
Galia Avidan, Psychology
Tal Oron-Gilad, Industrial Engineering & Management
Carmel Sofer, Psychology
The driver of a semi-autonomous car (SAE Level 3) is partially involved in the driving experience, as they are required to take control of the vehicle in certain situations. In these situations, the driver must be alert and prepared to take control during a take-over request (TOR), and the vehicle should be able to ascertain their readiness for the shift from automatic to manual control, considering road conditions as well as the driver's state of mind and ability. Our overarching goal is to systematically investigate facial measures (eye movements, facial expressions, and head movements) indicative of the driver's state of mind, as an indication of their readiness to take over control of the vehicle. To do so, in the first year we conducted several experiments aimed at exploring eye movement patterns in response to participants' basic emotional states, such as disgust, neutrality, sadness, happiness, and fear. These emotional states were intended to serve as a reference for the driver's state of mind. We used IAPS images and emotional faces for emotion induction, while tracking eye movements in a well-controlled environment. This direction yielded a preliminary proof of feasibility, revealing distinctive eye movement behavior for the different IAPS images. We also used emotional faces to induce the emotional states and found that these faces lacked sufficient power to produce distinctive eye movement behavior. Based on our preliminary study, we decided to modify the experimental paradigm and turn to a procedure that better resembles an emotional driving situation and yields more data, using realistic driving videos. To cope with the rich sensory data, we will use inter-subject correlation (ISC) and machine learning algorithms that allow detecting subtle changes in head, face, and eye movement behavior.
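A minimal sketch of the ISC computation, assuming one facial measure per subject sampled on a common timeline while all subjects watch the same video (an illustration; the actual features and pipeline are not specified here):

```python
import numpy as np

def inter_subject_correlation(signals):
    """Mean pairwise Pearson correlation across subjects.
    `signals` is an (n_subjects, n_timepoints) array of one measure
    (e.g., a gaze coordinate) recorded during a shared stimulus."""
    z = signals - signals.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)          # z-score each subject
    n, t = signals.shape
    corr = (z @ z.T) / t                       # subject-by-subject correlations
    off_diag = corr[~np.eye(n, dtype=bool)]    # drop each subject's self-correlation
    return off_diag.mean()

# Identical responses give ISC = 1; independent noise gives ISC near 0.
t = np.linspace(0, 10, 500)
isc_high = inter_subject_correlation(np.vstack([np.sin(t)] * 5))
```

High ISC indicates that the stimulus, rather than idiosyncratic behavior, drives the measured responses.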
Sigal Berman, Industrial Engineering & Management
Shelly Levy-Tzedek, Physical Therapy
Mindy Levin, Physical & Occupational Therapy, McGill
Stroke is a leading cause of long-term sensorimotor disability, with deficits in upper limb function persisting into the chronic stage in a large proportion of stroke survivors. This may partly be due to the limited effectiveness of current upper limb rehabilitation interventions. A stronger focus on training that addresses the underlying causes of motor control deficits, rather than only behavioral output, may be essential for significantly improving treatment outcomes. Robotic augmented therapy can repetitively deliver patient-specific, high-dosage training. Yet, to date, training approaches integrating robotics have not been highly successful. Robots have been used to administer high-dosage repetitive training, which leads to improvement in the trajectories practiced. There is evidence that learned movement patterns generalize across different movements made within the practiced regions. However, whether such training leads to improvement in upper limb functional ability remains controversial. Robots have also been used to administer upper limb treatment based on error augmentation (EA). In EA treatment, subjects are provided with feedback that enhances their motor errors, usually through distorted visual feedback and, using robots, through haptic feedback based on motion error. EA has yielded positive effects in lower limb locomotor training. The current project will investigate the effects of EA treatment for upper limb rehabilitation in subjects with stroke. The innovation in our approach is the development of robotic rehabilitation training protocols based on motor control theory to remediate underlying movement deficits. Our goal is to investigate error augmentation treatment for upper limb rehabilitation in subjects with stroke and to adapt treatment patterns for attaining substantial functional improvement.
Oren Shriki, Brain and Cognitive Sciences
Multirobot systems, or robotic swarms, can display complex collective behavior and perform tasks that cannot be performed by single robots. However, due to the large number of degrees of freedom, their control by a human operator poses new challenges. In the proposed study, we will develop a novel control interface for robotic swarms, combining conventional control means for low-level motion commands with a brain-machine interface (BMI) for higher-level commands. This setting will allow us to explore the interaction between conventional control and BMI-based control. A major focus of the study will be on exploring the utility of novel BMI paradigms for mental imagery, in particular using spatio-temporal cascades of activity termed neuronal avalanches. To personalize the BMI training process, we will monitor changes in the cognitive workload of subjects during skill acquisition. A successful training process should lead to automaticity and low workload. Conversely, high workload would indicate that more training is required.
Armin Biess, Industrial Engineering & Management
Jan Peters, Intelligent Autonomous Systems, Technische Universitaet Darmstadt
Learning by imitation is a versatile and rapid mechanism for transferring motor skills from one intelligent agent (human, animal, or robot) to another. In this project we focus on the correspondence problem in imitation learning: how can one agent (the learner) produce behavior similar to that it perceives in another agent (the expert), given that the agents have different embodiments (body morphology, degrees of freedom, constraints, joints and actuators, torque limits)? Existing imitation learning algorithms, such as behavioral cloning (BC) and inverse reinforcement learning (IRL), do not explicitly address the correspondence problem. We propose a metric-based approach to the correspondence problem by analyzing spatiotemporal distance and similarity measures across the different state and action spaces of the embodiments. These distance measures will be used in a reinforcement learning and optimal control framework to derive a "best matching" policy between learner and expert. In cases where pairs of matching states between learner and expert are provided, supervised learning methods can be applied to learn the correspondence map. We intend to apply our metric-based imitation learning algorithms to non-goal-oriented motor tasks (ballet) as well as to goal-oriented tasks (sports striking movements such as tennis, table tennis, and golf) in simulation and in real robotic environments.
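One candidate spatiotemporal similarity measure is dynamic time warping (DTW), sketched below. This is an illustration under the assumption that learner and expert states have already been mapped into a common feature space; it is not the project's final metric:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between trajectories a (n, d) and b (m, d):
    a spatiotemporal distance that tolerates differences in timing
    between the expert's and the learner's executions."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

# Hypothetical example trajectory: position along a sinusoidal path.
t = np.linspace(0, 1, 50)
traj = np.column_stack([t, np.sin(2 * np.pi * t)])
```

A trajectory has zero DTW distance to itself, and the distance grows as the two executions diverge in space.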
Itshak Melzer, Physical Therapy
Simona Bar-Haim, Physical Therapy
Amir Shapiro, Mechanical Engineering
Raziel Riemer, Industrial Engineering & Management
Lior Shmuelof, Brain & Cognitive Sciences
Tripping over an object or slipping on a smooth surface are very common causes of falls, which are failures of the postural control system to regain balance in the face of unforeseen environmental challenges. Trip- and slip-related falls are a major contributor to injury, disability, and death in older adults. We do not, however, regard this as an inevitable consequence of either biological ageing or brain injury, since our extensive research indicates that (harness-protected) exposure to unexpected gait perturbations over quite brief training periods can markedly enhance balance. We believe that translating our balance rehabilitation protocols into clinical interventions will reduce the incidence of falls in a highly efficient and cost-effective manner, especially for high-risk populations. The present study investigates adaptive responses for recovery from slipping and tripping, i.e., the ability of the central nervous system to learn to recover after an unexpected perturbation. In the first stage, a Perturbation Stationary Bicycles Robotic system (PerStBiRo) was designed and built: a system that exposes participants to unexpected perturbations during bicycling. The PerStBiRo software can learn the participant's balance reactions and provide higher levels of perturbation in the next training session. The second stage of this project will study the degree to which the PerStBiRo perturbation training protocol transfers positively to the ability to recover from unexpected loss of balance during standing and walking, and the effectiveness of the perturbation protocols in reducing real-life falls.
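One plausible way to escalate perturbation levels across sessions is a simple up-down rule, sketched below. This is an assumed illustration of adaptive difficulty, not the actual PerStBiRo learning scheme, and the step sizes and level range are hypothetical:

```python
def adapt_level(level, recovered, step_up=1, step_down=2, max_level=10):
    """Illustrative up-down rule: raise the perturbation level after a
    successful balance recovery, lower it (more conservatively) after
    a failed recovery. Levels are clamped to [1, max_level]."""
    if recovered:
        return min(level + step_up, max_level)
    return max(level - step_down, 1)

# A participant who recovers three times and then fails once:
level = 3
for outcome in (True, True, True, False):
    level = adapt_level(level, outcome)
```

Asymmetric steps keep training challenging while backing off quickly after failures, a common pattern in adaptive staircase procedures.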
Walking is a complex behavior, involving multiple organs and systems and controlled by multiple brain structures. To accommodate internal and external requirements, this control system must be able to employ various walking strategies (i.e., flexibility) and to modify existing ones (i.e., adaptability).
Impaired walking after brain damage is multifaceted. Common rehabilitation technologies ignore some of these facets, in particular visual information, a key component in guiding locomotion.
Our long-term goal is to develop VIsually Guided NeurOmuscular Rehabilitation (VIGNOR), a multidimensional approach and system for the rehabilitation of walking. The aim of the proposed study is therefore to investigate the role of vision in the anticipatory nature of flexible and adaptive gait, and to establish the theoretical and empirical basis for developing a multidimensional system for the rehabilitation of walking in stroke survivors.
Tzvi Ganel, Psychology
Sigal Berman, Industrial Engineering & Management
Our project examines perception-action interactions within interfaces of 3D VR environments and telerobotic systems in grasping, obstacle avoidance, and ball catching. We intend to examine the influence of the tactile feedback type, the size of the target object and the obstacle, and the transmission delay on system transparency. Based on our gained experience, we now aim to develop a vibration-based glove and virtual environment, along with a training regime, that will facilitate increased transparency of the interface. We are also developing and conducting an experimental paradigm within a physical environment designed to study the effect of obstacle size on pointing trajectories. This experiment will serve as a baseline for measuring the effect of obstacle size on pointing movements within the virtual experiment. The current project could serve as an important step in the development of transparent interfaces for 3D VR environments and telerobotic systems.
Aryeh Kontorovich, Computer Science
Armin Biess, Industrial Engineering and Management
Abstract
Motivated by an application from Robotics -- imitation learning -- we propose to study the problem of metric-to-metric regression. In this setting, the learner is presented with body configurations (say, parametrized by limb-joint angles) of the demonstrator robot paired with the "correct" configurations of the target imitator robot. The goal is to learn this demonstrator-to-imitator mapping. Our modeling assumption is that the configuration space (for both the teacher and the learner) is endowed with some natural metric, and further that the "imitation mapping" satisfies some smoothness (e.g., Lipschitz) condition. Given this framework, the notion of Lipschitz extension as a means of generalizing from observed examples to the rest of the configuration space seems quite natural. Lipschitz extension of real-valued functions is a relatively well-understood problem, with numerous deep results in analysis and even some learning-theoretic results [1,2]. In addition to generalization bounds in [1], we also presented very efficient techniques for performing real-valued Lipschitz extension in doubling metric spaces.
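For the real-valued case mentioned above, the classical McShane construction gives an explicit Lipschitz extension. A minimal sketch (the toy data are hypothetical):

```python
import numpy as np

def mcshane_extend(X, y, L):
    """Given sample points X (n, d) with values y that are consistent with
    Lipschitz constant L, return the McShane extension
        f(x) = max_i ( y_i - L * ||x - x_i|| ),
    which agrees with y on the samples and is L-Lipschitz everywhere."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    def f(x):
        return float(np.max(y - L * np.linalg.norm(X - np.asarray(x, float), axis=1)))
    return f

# Hypothetical toy data: two 1-D configurations with 1-Lipschitz values.
f = mcshane_extend([[0.0], [1.0]], [0.0, 0.5], L=1.0)
```

Each evaluation costs a pass over the samples; the efficient doubling-space techniques of [1] improve on this naive form.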
Unfortunately, metric-valued Lipschitz extension is not always possible -- even in Banach spaces. In Hilbert spaces, however, Kirszbraun's theorem guarantees the existence of such extensions. Working jointly with Yury Makarychev from TTIC and Armin Biess from the BGU Department of Industrial Engineering and Management, we propose to (1) obtain demonstrator-imitator data from real robots (2) develop fast and practical algorithms implementing Kirszbraun between Hilbert (Euclidean) spaces and (3) develop novel generalization bounds for this regression setting.
Prof. Ohad Ben-Shahar, Department of Computer Science
Prof. Steve Rosen, Department of Archeology
Our goal in this project is to develop the underlying science for a ground-breaking technology that virtually eliminates one of the most labor-intensive and frustrating steps in archaeological research: the physical reconstruction of shattered artworks. This task is first and foremost a cognitive challenge reminiscent of solving a visual puzzle. However, an integrated robotic system that achieves this goal requires at least two more components: an automated intelligent scanning process (through which the physical fragments are represented digitally in the system) and a physical reconstruction process (in which the physical fragments are manipulated into a complete artifact). Each of the three phases poses unique and hitherto unsolved challenges, where human intervention should be minimized and higher-level domain-specific knowledge must be integrated (for example, to facilitate puzzle solving even when fragments are damaged, worn, or missing appearance or 3D geometry entirely). Toward this end, this project will develop an algorithmic approach for a polygonal visual puzzle solver (puzzles whose pieces are polygons) and implement a robotic puzzle solver accordingly.
Amir Shapiro, Mechanical Engineering
Ronen Brafman, Computer Sciences
Nowadays robots perform a variety of tasks, and robotic manipulation is one of them. Although many tasks have been automated, the solutions are mostly for a given object and a given repetitive task. Designing and programming a robot to perform various tasks autonomously remains a challenge, especially in the field of human-robot interaction. The Amazon Robotics Challenge presents an opportunity for teams across the world to present their state-of-the-art solutions for autonomous picking and placing of various items; even though the challenge has been held for a third year in a row, no universal solution approaching human performance yet exists. A robotic system capable of automated manipulation tasks can be adapted for various fields, such as education, rehabilitation, and aid for the elderly.
Our goal is to develop a set of primitive operation modules for grasping and manipulating various (including previously unknown) objects, and to use these modules to build a system for automated manipulation planning that, for a given job, operates by suitably combining the primitive operations. To achieve this, the first step will be to provide primitive functional modules for a wide variety of scenarios, such as: anthropomorphic grasping of assorted items, motion planning for item manipulation, object recognition and localization in cluttered environments with subsequent motion planning, manipulation of soft and deformable objects, manipulation of multiple objects simultaneously, and so on. Once these primitives are designed and implemented, algorithms for global task planning can be developed and implemented. The solutions will use machine learning to achieve the desired results and perform essential adaptations.
The current research deals with the task of manipulating a set of objects in a given configuration using a flexible robotic arm designed for confined spaces. Our goal is to move this set of objects from its initial configuration to a desired configuration using a set of primitive manipulations. The work is divided into two main parts: global task planning and manipulation planning. On the task planning side, we developed a system for solving the rearrangement problem for a set of objects located on a planar surface. The plane is divided into a two-dimensional discrete grid, and each object is represented as a single point on the grid. Each configuration of the objects on the surface is represented as a node of a graph, and the edges connecting each node to its neighbors are defined by the robotic arm's capability to manipulate the set from one configuration to another. We use the A* graph-search algorithm to find the minimum set of manipulations needed to complete the task. On the manipulation planning side, we are working on the design of a flexible robotic arm for confined spaces. We also developed motion planning algorithms for the robotic arm in order to define its set of capabilities. The first algorithm solves the problem of an object located in a configuration that is inaccessible to the robotic arm: to grasp the object, the algorithm guides the arm through a set of manipulations on the object until it becomes graspable. Another algorithm determines the required relative configuration of multiple objects that we would like to grasp or manipulate simultaneously. The global task planning system uses these algorithms as primitives when planning the set of manipulations.
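The configuration-graph search described above is an instance of A*. A minimal sketch on a toy occupancy grid (here, for brevity, each node is a single grid cell rather than a full multi-object configuration, and the cost of every edge is one move):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = blocked) with the Manhattan
    heuristic; returns the length of a shortest path, or None if the goal
    is unreachable. In the rearrangement setting, each node would instead
    be a full object configuration and each edge a primitive manipulation."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]        # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

Because the Manhattan heuristic never overestimates the remaining moves, A* is guaranteed to return a minimum-length plan.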
Boaz Lerner, Industrial Engineering & Management
Amir Shapiro, Mechanical Engineering
Shelly Levy-Tzedek, Physical Therapy
We are developing a closed-loop socially assistive robotic (SAR) system for post-stroke rehabilitation, using the humanoid robot Pepper. We combine psychophysical in-lab and in-clinic behavioral studies with machine-learning algorithms and methodology from the fields of motor-control and human-robot interaction. The endeavor includes four PIs, one lab engineer, five students (three of whom are graduate students), and a research assistant. This project brings together the expertise from the Levy-Tzedek, Lerner, Shapiro, and Rokach labs.
Hadar Ben-Yoav, Biomedical Engineering
Ilana Nisky, Biomedical Engineering
Teleoperated robot-assisted minimally invasive surgery (RAMIS) has transformed many surgical disciplines, such as urology, general surgery, gynecology, thoracic surgery, and otolaryngology. However, an active debate remains regarding the safety, efficacy, and cost-effectiveness of RAMIS, and many other clinical fields do not incorporate robot-assisted surgery due to increased risks. For example, hepatic surgery entails a high risk of vascular injuries that can lead to significant bleeding and endanger the patient. In this project, we aim to develop novel sensors and novel cognitive algorithms for RAMIS systems for smart extraction of tissue damage information and for prevention of unnecessary damage to the patient. We focus on: 1) developing three kinds of sensors – biomechanical force, blood-vessel proximity, and biochemical blood-vessel damage; 2) smart integration of their functional information; and 3) using this information to design shared control for the surgical robot that enables an immediate automatic response to damage by the robot, together with communicating the information to the surgeon. Our project addresses these gaps by developing novel sensors and by providing users with assistance and combined force and tissue damage feedback. We take advantage of the cognitive abilities in RAMIS and use machine learning techniques to predict excessive tissue damage. This is challenging because some tissue damage is necessary for achieving the purposes of surgical intervention. We provide automatic assistance in case of excessive damage and present the processed information to the surgeon in a useful and intuitive manner via physical and visual information layers. Such a next generation of intelligent interactions and feedback during surgery will provide surgeons with a unique perception of the complex bio-micro-environment, i.e., a "sixth sense", and will help decrease major surgical risks.
Ilana Nisky, Biomedical Engineering
Yael Rafaely, Thoracic Surgery, Soroka
Part of the motivation for this project's focus on general MIS skill, rather than on robot-assisted surgery (which is more relevant to ABC Robotics' mission), was the fact that Soroka Medical Center (SMC) did not have a surgical robot. Soon after the submission of the proposal, and concurrently with PI Nisky's preparation of an ERC proposal focusing on understanding complex surgical movement control and complex skill acquisition, SMC purchased a surgical robot (da Vinci Xi). Very quickly, co-PI Refaely as well as several other surgeons went through the process of robotic surgical skill acquisition and started operating robotically at an intense pace. At the same time, the Nisky lab worked on integrating our da Vinci Research Kit (dVRK), which is now fully operational and has been tested in two studies. It was therefore natural and timely to shift the focus of this study to understanding complex robotic surgical skill acquisition with the dVRK, rather than to deployment of the virtual reality rig to the hospital and the study of adaptation to delay and perturbing forces. Accordingly, in this report we describe our work on understanding complex robotic surgical skills. The results of these studies were already used in this year's submissions to the ERC and ISF on understanding the control of complex surgical movements and long-term skill acquisition, and resulted in several submitted and in-preparation papers.
Ronen Segev, Life Sciences
Opher Donchin, Biomedical Engineering
Ohad Ben-Shahar, Computer Science
The first year was devoted to producing the first prototype of the goldfish biobot. This included identifying the electronics we can use to stimulate the fish brain and developing the implant that will be used to stimulate the fish brain directly. The first target in the fish brain will be the DM region, which is responsible for emotional learning. This region was selected because of the possibility of observing an influence of the stimulation on the fish's behavior. In the second year, we will explore the following additional regions: the VTA, a reward center; the optic tectum, a sensory integration center; and the DLV, a navigation center in the fish brain.
Harnessing art and animation to bridge and build trust between technology and users’ perceptions and attitudes, in order to design robots and interactions that could better serve the elderly population
Vardit Fleischman, Industrial Engineering & Management
Nea Erlich, Arts
Tal Oron-Gilad, Industrial Engineering & Management
Yael Edan, Industrial Engineering & Management
Socially assistive robots (SARs) are being designed for a variety of purposes in order to promote independent living for the elderly and enhance their wellbeing. To date, most of the population is not familiar with robots beyond what they have heard or seen in other contexts, such as film and literature. It is therefore important to take into account cultural references that the target population may be familiar with, in order to convey to potential users the possibilities that robotics may hold. This exploratory study aims to discover the elderly's preferences regarding the design of robots and to improve the trustworthiness and acceptance of robots at home, harnessing art and animation to bridge and build trust between the technology and the perceptions and attitudes of older users.
During the first year of the project, the research infrastructure was developed. Efforts were made to create a shared terminology that associates elements from art and animation with components of human-robot interaction. This was achieved in several ways: a) recruiting graduate students from both the art and HRI areas; b) establishing research collaborations with the Bezalel Academy of Arts and Design; c) producing an international conference on the topic of Art, Design and Human-Robot Interaction at BGU; and d) submitting grant proposals to Israel's Ministry of Science and Technology and to the Multidisciplinary Project Award of the Humanities, in collaboration with the Department of Industrial Engineering and Management, Ben-Gurion University of the Negev. Both submissions were approved.