Browsing by Author "Amirshirzad, Negin"
Now showing 1 - 6 of 6
Conference paper (Metadata only)
Adaptive shared control with human intention estimation for human agent collaboration (IEEE, 2022)
Amirshirzad, Negin; Uğur, E.; Bebek, Özkan; Öztop, Erhan (Computer Science; Mechanical Engineering)
In this paper, an adaptive shared control framework for human-agent collaboration is introduced. In this framework, the agent predicts the human intention with a confidence factor that also serves as the control blending parameter used to combine the human and agent control commands that drive a robot or manipulator. While a given task is performed, the blending parameter is dynamically updated as a result of the interplay between human and agent control. In a scenario where additional trajectories need to be taught to the agent, either new human demonstrations can be generated and given to the learning system, or the aforementioned shared control system can be used to generate new demonstrations. The simulation study conducted here shows that the latter approach is more beneficial: it creates improved collaboration between the human and the agent by decreasing the human effort and increasing the compatibility of the human and agent control commands.

Conference paper (Metadata only)
Context based echo state networks for robot movement primitives (IEEE, 2023)
Amirshirzad, Negin; Asada, M.; Öztop, Erhan (Computer Science)
Reservoir computing, and in particular Echo State Networks (ESNs), offers a lightweight solution for time series representation and prediction. An ESN is based on a discrete-time random dynamical system that is made to output a desired time series through a learned linear readout weight vector. The simplicity of this learning suggests that an ESN can serve as a lightweight alternative for movement primitive representation in robotics. In this study, we explore this possibility, develop Context-based Echo State Networks (CESNs), and demonstrate their applicability to robot movement generation. CESNs are designed to generate joint or Cartesian trajectories based on a user-definable context input. The context modulates the dynamics represented by the underlying ESN, and the linear readout weights can then pick up the context-dependent dynamics to generate different movement patterns for different contexts. To achieve robust movement execution and generalization over unseen contexts, we introduce a novel data augmentation mechanism for ESN training. We show the effectiveness of our approach in a learning-from-demonstration setting: we teach the robot reaching and obstacle avoidance tasks in simulation and in the real world, showing that CESNs provide a lightweight movement primitive representation that facilitates robust task execution and generalizes to unseen contexts, including extrapolated ones.
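The abstract describes the CESN idea (a fixed random reservoir whose dynamics are modulated by a context input, with only a linear readout trained) but gives no implementation details. As a rough illustration only, a minimal NumPy sketch might look like the following; all hyperparameters and names are hypothetical, and the paper's data augmentation mechanism is omitted.

```python
import numpy as np

class CESN:
    """Minimal sketch of a context-based echo state network (illustrative)."""

    def __init__(self, n_reservoir=200, n_context=2, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random recurrent weights, rescaled to the desired spectral radius
        W = rng.standard_normal((n_reservoir, n_reservoir))
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        # Context enters the reservoir through fixed random input weights
        self.W_ctx = rng.standard_normal((n_reservoir, n_context))
        self.W_out = None  # linear readout, the only learned part

    def _run(self, context, T):
        # Roll out the reservoir; the context modulates the dynamics each step
        x = np.zeros(self.W.shape[0])
        states = np.empty((T, x.size))
        for t in range(T):
            x = np.tanh(self.W @ x + self.W_ctx @ context)
            states[t] = x
        return states

    def fit(self, contexts, trajectories, ridge=1e-6):
        # Collect reservoir states for all demonstrations, then solve one
        # ridge regression for the readout weights
        X = np.vstack([self._run(c, len(y)) for c, y in zip(contexts, trajectories)])
        Y = np.vstack(trajectories)
        A = X.T @ X + ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y)

    def generate(self, context, T):
        # Produce a T-step trajectory for a (possibly unseen) context
        return self._run(context, T) @ self.W_out
```

Because only the readout is trained, fitting reduces to a single linear solve, which is what makes the representation lightweight compared to gradient-trained recurrent networks.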
Article (Metadata only)
Human adaptation to human–robot shared control (IEEE, 2019-04)
Amirshirzad, Negin; Kumru, Asiye; Öztop, Erhan (Computer Science; Psychology)
Human-in-the-loop robot control systems naturally provide the means for synergistic human-robot collaboration through control sharing. The expectation in such a system is that the strengths of each partner are combined to achieve a task performance higher than what either partner could achieve alone. However, there is no generally established rule to ensure a synergistic partnership. In particular, it is not well studied how humans adapt to a nonstationary robot partner whose behavior may change in response to human actions. If the human is not given the choice to turn the control sharing on or off, the robot-human system can even become unstable, depending on how the shared control is implemented. In this paper, we instantiate a human-robot shared control system with the "ball balancing task," where a ball must be brought to a desired position on a tray held by the robot partner. The experimental setup is used to assess the effectiveness of the system and to uncover the differences in human sensorimotor learning when the robot is a control-sharing partner, as opposed to a passive teleoperated robot. The results of the four-day, 20-subject experiments show that 1) after a short human learning phase, task execution performance is significantly improved when both human and robot are in charge, and 2) even though the subjects are not instructed about the role of the robot, they learn faster despite the nonstationary behavior of the robot caused by its built-in goal estimation mechanism.

Master Thesis (Metadata only)
Human-robot collaboration for synergistic task execution (2017-05)
Amirshirzad, Negin; Öztop, Erhan; Kumru, Asiye; Uğur, E. (Department of Computer Science)
There is great potential for humans and robots to work together as a team, since such collaboration can take advantage of both human and robot capabilities, cover their weaknesses, and yield higher performance. We propose and implement a human-robot collaboration framework in which, while the human tries to perform a task, the robot infers the human intention and assists the human in achieving the inferred goal. We explore how the human is influenced by interacting with machine autonomy, and whether there is any advantage in task performance when the human shares control with an autonomous agent. In particular, we investigate whether interacting with autonomy can help humans improve their performance in a shorter time. We realized this collaboration system by designing a ball balancing task in which the goal is to move and balance a ball at a target position on a tray held by a robotic arm. The human performs the task by controlling the robotic arm through an interface that tilts the tray and moves the ball, while the robot infers the target ball position by observing the ball's trajectory and augments the human control commands to assist in task execution. The length of the ball's movement trajectory, the completion time, and the positional error were chosen as the measures of task performance. To assess the impact of our system on human learning and task execution, experiments were conducted under two conditions: a human control condition, where the human performs the task alone, and a shared control condition, where both human and robot are involved in performing the task. Twenty naive volunteers performed the experiment on four consecutive days. The results of these experiments suggest not only that task execution improves in collaboration with the robot compared to when humans perform the task alone, but also that this collaboration system makes human learning progress faster.
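The two entries above describe the control sharing only at a high level: the robot estimates the human's goal and augments the human's commands. One simple way to realize this, sketched below under the assumption of a convex mixture weighted by the agent's goal-estimate confidence, is a blending function; the exact blending law used in these works is not given in the abstracts, so this is an illustration rather than the authors' formulation.

```python
import numpy as np

def blend_commands(u_human, u_agent, confidence):
    """Mix human and agent control commands (illustrative sketch).

    `confidence` in [0, 1] is the agent's belief in its current goal
    estimate and doubles as the blending weight: low confidence defers
    to the human, high confidence lets the agent contribute more.
    """
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return alpha * np.asarray(u_agent) + (1.0 - alpha) * np.asarray(u_human)

# Hypothetical tray-tilt commands for the ball balancing task
u_net = blend_commands(u_human=[0.10, -0.05], u_agent=[0.04, 0.02], confidence=0.7)
```

Updating `confidence` online as the goal estimate sharpens or degrades is what makes such a scheme adaptive, and it is also why the robot partner appears nonstationary to the human.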
Conference paper (Metadata only)
Learning medical suturing primitives for autonomous suturing (IEEE, 2021)
Amirshirzad, Negin; Sunal, Begüm; Bebek, Özkan; Öztop, Erhan (Computer Science; Mechanical Engineering)
This paper focuses on a learning-from-demonstration approach to autonomous medical suturing. A conditional neural network is used to learn and generate suturing primitive trajectories conditioned on desired context points. Using our GUI, a user can plan and select suturing insertion points. Given an insertion point, our model generates joint trajectories in real time that satisfy this condition. The generated trajectories, combined with a kinematic feedback loop, were used to drive an 11-DOF robotic system, which showed a satisfactory ability to learn and perform suturing primitives autonomously from only a few demonstrations of the movements.

Conference paper (Metadata only)
Synergistic human-robot shared control via human goal estimation (IEEE, 2016)
Amirshirzad, Negin; Kaya, Osman; Öztop, Erhan (Computer Science)
In this paper, we propose and implement a synergistic human-robot collaboration framework in which the robotic system estimates the intent or goal of the human operator while being controlled by the human in real time. Given an estimate of the operator's goal, the system augments the human control signals with its own autonomous control output based on that estimate; consequently, the net control command that drives the robot is a mixture of human and robot commands. The motivation for such a collaborative system is to obtain improved task execution that surpasses the performance levels each party could achieve alone. This is possible if the developed system can exploit the individual skills of each party so as to cover the weaknesses of the other. To test and validate the proposed system, we realized the framework with the "ball balancing task," in which an anthropomorphic robot arm is required to bring a ball on a tray attached to its end effector to a desired position. Task execution performance was quantified by completion time and positional accuracy. To test the validity of the framework, experiments were conducted under three conditions: full autonomous control, human-in-the-loop control, and shared control. Full autonomous control did not require any human subjects, whereas for the latter two conditions, 10 subjects each were employed to measure the task performance of naive solo operators and naive human-robot partners. The performance results indicate that, across different performance measures, the task is completed more effectively by the human-robot system than by the human alone or by autonomous robot execution.
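The suturing paper above conditions trajectory generation on a user-selected insertion point. As an illustrative sketch only (the abstract does not specify the conditional network's architecture), a small PyTorch model mapping an insertion point plus a time phase to 11 joint targets could look like this; all dimensions, names, and values are hypothetical.

```python
import torch
import torch.nn as nn

class SuturePrimitiveNet(nn.Module):
    """Hypothetical conditional trajectory generator for suturing primitives."""

    def __init__(self, ctx_dim=3, n_joints=11, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints),
        )

    def forward(self, insertion_point, phase):
        # phase in [0, 1] parameterizes progress along the primitive
        return self.net(torch.cat([insertion_point, phase], dim=-1))

# Generating a 100-step trajectory for one insertion point (untrained model)
model = SuturePrimitiveNet()
point = torch.tensor([[0.42, -0.10, 0.05]])      # desired insertion point (x, y, z)
phases = torch.linspace(0, 1, 100).view(-1, 1)   # 100 time steps
traj = model(point.expand(100, -1), phases)      # (100, 11) joint targets
```

Trained on a few demonstrations, a network of this kind can interpolate to new insertion points, with the kinematic feedback loop mentioned in the abstract correcting execution errors on the real robot.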