Browsing by Author "Kaya, Osman"
Now showing 1 - 4 of 4
Master Thesis (Metadata only)
Dexterous manipulation with a robotic hand (2017-06)
Kaya, Osman; Öztop, Erhan; Uğurlu, Regaip Barkan; Bebek, Özkan; Uğur, E. (Department of Computer Science)

In robotics, flexible and dexterous manipulation is one of the most desired skills. To this end, we investigate dexterous manipulation skills on an anthropomorphic robot hand. In the first part of the study, a sensorless grasping method is described. Although high-precision sensing is highly relevant for precise grasps, such precision is often not necessary for power grasps. An alternative approach to robotic grasping is proposed based on external force estimation. Estimation accuracy is confirmed against a force sensor, and the estimates are found to be useful for creating soft/power grasp behavior. In the second part, human-in-the-loop heterogeneous control for dexterous manipulation is investigated on a setup with a robotic hand and a robotic arm. The goal of the study is to experimentally verify that, in tasks where manual and explicit trajectory tuning is not possible, autonomous movement can be learned by giving a basic policy to a robotic system, after which a human can learn and transfer an orthogonal, complex part of the policy. The approach is demonstrated on a ball swapping task in which the robotic arm is controlled by the human while the robotic hand runs an initial basic policy. We experimentally show that, in certain tasks, complex autonomous policies can be constructed by delegating the complex learning part to a human and the simple part to an autonomous agent, finally creating an autonomous control policy by recombining the parts.

Conference Object (Metadata only)
Effective robot skill synthesis via divided control (IEEE, 2018-07-02)
Kaya, Osman; Öztop, Erhan (Computer Science)

Learning from demonstration is a powerful method for obtaining task skills, aiming to eliminate the need for explicit robot programming. Classically, tasks are demonstrated to the robot by means of recorded human motion, direct kinesthetic teaching, or manual interfaces, which may not be applicable for tasks that involve dynamics. In such cases, human-in-the-loop robot learning with anthropomorphic and intuitive teleoperation may be more suitable. In this paper, we propose a divide-and-conquer approach for a human-in-the-loop robot learning framework to improve the efficacy of skill synthesis. Usually, a straightforward division of control between the human and the robot can be designed for effective skill transfer. With such a division, not only is human learning sped up, but the design of the autonomous part of the control policy is also simplified by exploiting the human capability to adapt to robot operation. In this study, the proposed approach is realized using the 'ball swapping task' on an anthropomorphic robotic arm-hand setup, where the balls must be swapped over the fingers without being dropped. In the current implementation, control is divided between the arm and the hand: the human learns to control the position and orientation of the hand to swap the balls, while the hand runs a periodic finger movement autonomously. Our results indicate that complex autonomous policies can be easily obtained by distributing control over the human operator and the robot in a human-in-the-loop control setup. In particular, we show that the human operator quickly learns to control the arm in such a way that the simple finger movements of the hand become effective ball swapping actions. The combination of human and robot control then yields an autonomous ball swapping skill, which can be further improved for speed.
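As a rough illustration of the divided-control idea in the abstract above, the following sketch splits one control tick between a human-teleoperated arm and an autonomous periodic finger pattern. The `arm`/`hand` interfaces, the five-finger phase offsets, and the amplitude/frequency values are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def autonomous_finger_policy(t, amplitude=0.3, freq_hz=1.0, phases=None):
    """Periodic open-loop finger pattern (the 'simple' half of the
    divided policy). Amplitude, frequency, and phase offsets are
    illustrative values, not the paper's parameters."""
    if phases is None:
        phases = np.linspace(0.0, np.pi, 5)  # one phase offset per finger
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t + phases)

def divided_control_step(t, human_arm_command, arm, hand):
    """One control tick: the human teleoperates the arm pose while the
    hand autonomously runs its periodic finger movement. `arm` and
    `hand` are hypothetical robot interfaces."""
    arm.set_pose_target(human_arm_command)                # human-controlled part
    hand.set_joint_targets(autonomous_finger_policy(t))   # autonomous part
```

The point of the division is that the human only has to learn the arm motion that makes the fixed finger pattern effective, which is the 'complex' part delegated to human adaptation.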
Conference Object (Open Access)
Environmental force estimation for a robotic hand: compliant contact detection (IEEE, 2015)
Kaya, Osman; Yıldırım, Mehmet Can; Kuzuluk, Nisan; Çiçek, Emre; Bebek, Özkan; Öztop, Erhan; Uğurlu, Regaip Barkan (Computer Science; Mechanical Engineering)

This paper presents a model-based compensation method to enable environmental force estimation for a robotic hand with no tactile or force sensors. To this end, we utilize multi-joint robot dynamics and disturbance-observer-based friction identification to account for forces that arise due to Coriolis effects, gravity, stiction, and viscous friction. With these forces effectively compensated, disturbance observer units implemented for each joint allow us to estimate environmental interaction forces. To validate the effectiveness of the force estimation method, experiments were conducted on an anthropomorphic robot with no haptic sensing capability. The results showed that the force estimates were in good agreement with actual sensor measurements. To further demonstrate the effectiveness of the method, a compliant contact detection task was implemented on the robot. The result of this experiment indicated that the environmental force estimation performance was sufficient to facilitate the task; as such, our method may eliminate the need for expensive force sensors at the fingertips or the joints for dexterous manipulation.
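The per-joint disturbance observer described above can be sketched as follows for a single joint. This is a generic first-order disturbance observer under assumed names and parameters, not the paper's exact implementation; the multi-joint Coriolis and gravity terms are folded into a hypothetical `friction_gravity_model` callable here.

```python
class DisturbanceObserver:
    """First-order disturbance observer for a single joint (a sketch).

    Produces a low-pass-filtered estimate of the total disturbance
    torque acting on the joint, using only the commanded torque and the
    measured joint velocity (no velocity differentiation needed).
    """

    def __init__(self, inertia, cutoff, dt):
        self.J = inertia    # identified joint inertia
        self.g = cutoff     # observer cutoff frequency [rad/s]
        self.dt = dt        # control period [s]
        self.z = 0.0        # internal low-pass filter state

    def update(self, tau_cmd, omega):
        # Low-pass filter (tau_cmd + g*J*omega), then subtract g*J*omega;
        # the result equals the filtered disturbance torque, avoiding a
        # noisy numerical derivative of omega.
        u = tau_cmd + self.g * self.J * omega
        self.z += self.dt * self.g * (u - self.z)
        return self.z - self.g * self.J * omega


def environmental_torque(dob, tau_cmd, omega, friction_gravity_model):
    """Subtract the identified friction + gravity torque (hypothetical
    model callable) from the disturbance estimate to isolate the
    environmental contribution, mirroring the compensation step in
    the abstract."""
    tau_dis = dob.update(tau_cmd, omega)
    return tau_dis - friction_gravity_model(omega)
```

Contact detection then reduces to thresholding the estimated environmental torque, which is what makes sensorless soft grasping possible.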
Conference Object (Metadata only)
Synergistic human-robot shared control via human goal estimation (IEEE, 2016)
Amirshirzad, Negin; Kaya, Osman; Öztop, Erhan (Computer Science)

In this paper, we propose and implement a synergistic human-robot collaboration framework in which the robotic system estimates the intent or goal of the human operator while being controlled by the human in real time. Given an estimate of the operator's goal, the system augments the human control signals with its own autonomous control output based on that estimate. Consequently, the net control command that drives the robot becomes a mixture of human and robot commands. The motivation for such a collaborative system is to obtain improved task execution, surpassing the performance levels that each party could achieve alone. This is possible if the system can exploit each party's individual skills so as to cover the weaknesses of the other. To test and validate the proposed system, we realized the framework using the 'ball balancing task', in which an anthropomorphic robot arm was required to bring a ball on a tray attached to its end effector to a desired position. Task execution performance was quantified by completion time and positional accuracy. To test the validity of the framework, experiments were conducted in three conditions: full autonomous control, human-in-the-loop control, and shared control. Full autonomous control did not require any human subjects, whereas for the latter two conditions, 10 subjects per condition were employed to measure the task performance of naive solo operators and naive human-robot partners. The performance results indicate that the task can be completed more effectively by the human-robot system than by the human alone or by autonomous robot execution, across the different performance measures.
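A minimal sketch of the command-mixing scheme described above: blend the human's command with an autonomous command driven toward the estimated goal. The proportional controller, the direction-based goal scoring, and the fixed blending weight `alpha` are illustrative assumptions; the paper's actual goal estimator and mixing rule are not reproduced here.

```python
import numpy as np

def autonomous_controller(goal, pos, k_p=1.0):
    """Simple proportional command toward the estimated goal
    (an assumed controller, not the paper's)."""
    return k_p * (np.asarray(goal) - np.asarray(pos))

def estimate_goal(candidate_goals, pos, u_human):
    """Pick the candidate goal most aligned with the human's current
    command direction; a naive stand-in for the paper's estimator."""
    scores = []
    for g in candidate_goals:
        d = np.asarray(g) - np.asarray(pos)
        scores.append(np.dot(u_human, d) / (np.linalg.norm(d) + 1e-9))
    return candidate_goals[int(np.argmax(scores))]

def shared_control_command(u_human, candidate_goals, pos, alpha=0.5):
    """Net command: a weighted mixture of the human's command and the
    autonomous command toward the estimated goal."""
    goal = estimate_goal(candidate_goals, pos, u_human)
    u_auto = autonomous_controller(goal, pos)
    return alpha * np.asarray(u_human) + (1.0 - alpha) * u_auto
```

A fixed `alpha` is the simplest possible mixing rule; the weight could also be scheduled by the confidence of the goal estimate, so the autonomous share grows as the system becomes surer of the operator's intent.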