Graduate School of Engineering and Science

Permanent URI for this collection: https://hdl.handle.net/10679/9877


Recent Submissions

Now showing 1 - 20 of 58
  • PhD Dissertation
    Applications of robust optimization in logistics and production planning
    Avishan, Farzad; Yanıkoğlu, İhsan; Yanıkoğlu, İhsan; Özener, Okan Örsan; Özener, Başak Altan; Yakıcı, E.; Yavuz, T.; Department of Industrial Engineering
We analyze three applications of the robust optimization approach in this thesis. The first part addresses a dairy production and distribution planning problem under uncertain demand. The second part presents an adjustable robust optimization approach for relief distribution in a post-disaster scenario where travel times are uncertain. Lastly, in the third part, we tackle the electric bus scheduling problem that incorporates uncertainty in both travel times and energy consumption. The first part of this thesis investigates a robust dairy production and distribution planning problem that considers the complexities of dairy production, including perishability, sequence dependence, and demand uncertainty. To tackle this uncertainty, we introduce an adjustable robust optimization approach that generates a robust and Pareto-efficient production and distribution management plan. This approach provides decision-making flexibility by allowing for adjustments based on the actual demand observed over a multi-period planning horizon. The effectiveness of the proposed method is evaluated through extensive Monte Carlo simulation experiments. Additionally, we conduct a case study to demonstrate how the adjustable approach outperforms the static robust approach in terms of the objective function value and solution performance. In the second part, we present an adjustable robust optimization approach for relief supply distribution in the aftermath of a disaster. The approach generates routes for relief logistics teams and determines the service times for visited sites to distribute supplies, taking into account the uncertainty of travel times. The model allows for adjustments to service decisions based on real-time information, yielding solutions that are robust to the worst-case scenario of travel times yet more flexible and less conservative than those of static robust optimization.
Due to the computational complexity of solving the resulting models, we propose heuristic algorithms as an alternative solution approach. Using 2011 earthquake data from the Van province of Turkey, we also demonstrate the effectiveness of our approach. In the last part, we investigate a scheduling problem for electric fleets faced with uncertain travel times and energy consumption. We propose a mixed-integer linear programming model to optimize electric fleet purchasing and charging operation costs, utilizing robust optimization to address uncertainty. A new uncertainty set is introduced to control the robustness of the solution. The model determines the required number of buses to cover all trips, schedules the trips, and designs charging plans for the buses. We evaluate the effectiveness of the model through extensive Monte Carlo simulations. Additionally, we present a case study on the off-campus transport network at Binghamton University to demonstrate the real-world applicability of the model and solution approach.
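The robust models above hedge against a budget-limited worst case over the uncertain parameters. As a minimal illustration of that idea (a generic budgeted-uncertainty sketch, not the thesis's actual formulation), the following assumes each travel time has a known nominal value and maximum deviation, and an adversary may push at most `gamma` of them to their maxima:

```python
def worst_case_duration(nominal, deviation, gamma):
    """Worst-case total duration when at most `gamma` travel times deviate
    to their maxima (a Bertsimas-Sim-style budget of uncertainty)."""
    # The adversary spends its budget on the largest deviations first.
    extra = sorted(deviation, reverse=True)[:gamma]
    return sum(nominal) + sum(extra)

# Three legs with nominal times 10, 20, 30 and deviations 5, 2, 8;
# a budget of 2 lets the adversary pick the two largest deviations.
print(worst_case_duration([10, 20, 30], [5, 2, 8], 2))  # 73
```

Raising `gamma` from 0 (nominal plan) to the number of legs (fully conservative) trades optimism against conservatism, which is the dial the static and adjustable robust models tune.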
  • PhD Dissertation
Generalization of deep neural networks to transformations through novel and hybrid architectures
    Özcan, Barış; Kıraç, Mustafa Furkan; Kıraç, Mustafa Furkan; Uğurdağ, Hasan Fatih; Aydoğan, Reyhan; Gökberk, B.; Uğur, E.; Department of Computer Science
Object recognition is a foundational pillar for many computer vision tasks such as searching, tracking, navigating, scene understanding, or information retrieval that require some kind of category knowledge at various levels. Even with the major advances in these tasks with data-driven deep learning methods such as convolutional neural networks (CNNs), generalization to geometric variations and embedding part-whole relationships have yet to be achieved when compared to human-level recognition. CNNs particularly fail to generalize to unseen viewpoints of a learned object even with substantial samples, and are easily confused because the pooling operations lose the relations between existing entities in the input. Recently emerged capsule networks outperform CNNs in novel viewpoint generalization tasks even with significantly fewer parameters. Capsule networks group neuron activations to represent higher-level attributes and their interactions, achieving equivariance to visual transformations. Capsules are designed to represent the pose of an existing visual entity, and learned transformations are essentially pose transformations, which are matrices. However, capsule networks have a high computational cost for learning the interactions of capsules in consecutive layers via the so-called routing algorithm, in addition to training stability problems. In this thesis, we propose to represent the pose information and transformations with quaternions in Quaternion Capsule Networks (QCNs). Quaternions are immune to gimbal lock, allow straightforward regularization of the rotation representation for capsules, and require a smaller number of parameters than matrices. QCNs directly inherit the existing EM-Routing for a fair comparison of the benefits of using quaternions instead of matrices.
Experimental results show that QCNs generalize better to novel viewpoints with fewer parameters, and achieve performance on par with or better than state-of-the-art capsule architectures on well-known benchmarking datasets. Building on this proposal, we aimed to reduce the computational burden and embed feature vectors into the capsules in addition to pose information. In this context, we propose Alleviated Pose Attentive Capsule Agreement (ALPACA), which is tailored for capsules that contain pose, feature, and existence probability information together to enhance novel viewpoint generalization of capsules on 2D images. For this purpose, we have created the Novel ViewPoint Dataset (NVPD), a viewpoint-controlled, texture-free dataset that has 8 different setups where training and test samples are formed from different viewpoints. In addition to NVPD, we have conducted experiments on the iLab2M dataset, where the dataset is split in terms of object instances. Experimental results show that ALPACA outperforms its capsule network counterparts and state-of-the-art CNNs on the iLab2M and NVPD datasets. Moreover, ALPACA is 10 times faster than routing-based capsule networks. It also outperforms attention-based routing algorithms of the domain while keeping the inference and training times comparable.
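The parameter argument for quaternions is easy to see concretely: a unit quaternion encodes a 3-D rotation with four numbers instead of a rotation matrix's nine, and composing rotations never hits gimbal lock. A minimal pure-Python sketch (independent of the QCN implementation, which operates on learned capsule poses):

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * v * conj(q)."""
    qv = (0.0, *v)
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, qv), qc)[1:]

# A 90-degree rotation about z takes the x-axis to the y-axis.
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print([round(c, 6) + 0.0 for c in rotate(q, (1.0, 0.0, 0.0))])  # [0.0, 1.0, 0.0]
```

Regularizing a capsule's pose here amounts to renormalizing four numbers to unit length, which is simpler than keeping a 3x3 (or 4x4) matrix orthogonal.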
  • PhD Dissertation
    Control of solenoid-based injectors with different nozzle types to achieve soft-landing and mass flow rates
    Qureshi, Muhammad Sarmad; Bebek, Özkan; Bebek, Özkan; Adam, Evşen Yanmaz; Uğurlu, Regaip Barkan; Erbatur, K.; Şafak, K. K.; Department of Electrical and Electronics Engineering
Electromagnetic (EM) solenoid actuators have recently gained attention in research because of their cost-effectiveness, compact size, and low heat dissipation. Owing to these advantages, such devices are used in a variety of applications, including injector systems in automobiles. However, a major drawback is that their design leaves little room for attaching physical sensors, which makes them difficult to control. Due to their discrete actuation properties, solenoid actuators generate unwanted noise at closing and suffer mechanical wear and tear, and this lack of control over the actuator reduces its practical adaptability. The main focus of this dissertation is the development of open-loop control approaches to achieve low seating velocities, referred to as soft-landing, and to regulate mass flow rates using solenoid-based injectors. Prior to further research, different control algorithms were evaluated for tracking performance, and the better-performing control algorithm was selected. A novel open-loop control methodology to achieve lower seating velocities (soft-landing) for solenoid-based injector systems is proposed to reduce impact noise and mechanical deterioration. Moreover, the attributes required to achieve soft-landing are quantified so that the control law can be applied generically to any solenoid-based injector without sensory feedback. In addition, mass flow rate control for the solenoid-based injector in an open-loop setting, that is, without any physical sensor feedback, is investigated. A novel robust control approach for tracking mass flow rate reference profiles using a solenoid-based injector is presented. The proposed control approach successfully tracked the desired mass flow rate reference profiles, to be used in various automotive and chemical injection applications.
  • PhD Dissertation
    Experimental characterization of underwater visible light communications
    Yıldız, Samet; Uysal, Murat; Uysal, Murat; Durak, Kadir; Demiroğlu, Cenk; Yarkan, S.; Çolak, S. A.; Department of Electrical and Electronics Engineering
As human activities such as collecting scientific data, monitoring the environment, exploring offshore oilfields, archaeology, harbor security, and surveillance increase, the demand for underwater communication systems has also increased. Wire-line systems can provide real-time underwater communication in numerous ways. However, their inflexibility, high costs, and operational disadvantages limit their underwater use in most real-life situations. Thus, underwater wireless communication has attracted considerable attention from researchers. Underwater wireless transmission is typically performed using acoustic signaling due to its capability to support distances of up to tens of kilometers. However, due to its low data rate (tens of kilobits per second (kb/s)), it is unsuitable for high-bandwidth underwater applications such as image or real-time video transmission. Optical communication therefore offers a high-capacity alternative to low-capacity acoustic communication. With the transparency of water to blue and green light (450 nm – 550 nm), Laser Diodes (LDs) or Light Emitting Diodes (LEDs) can potentially serve as underwater wireless transmitters with data rates up to tens of gigabits per second (Gb/s) per wavelength. Theoretical studies of any technology must be validated through experimental investigation. However, experimental verification of Underwater Visible Light Communication (UVLC) systems is still lacking. In addition, path loss, a bottleneck caused by massive attenuation in underwater environments, must be addressed. Therefore, experimental verification is needed for semi-collimated optical signals propagating underwater.
Building on an experimentally verified attenuation model, we investigate a UVLC system with a semi-collimated source to determine its performance constraints, how to further improve the received optical signal via a reflector, and how to optimize system performance in the presence of solar noise. Motivated by these goals, we aim to experimentally validate a collimated and semi-collimated UVLC attenuation model. For this reason, we first confirm the accuracy of the collimated measurements after measuring the extinction coefficient of the water experimentally. Then, after generating a semi-collimated Gaussian beam using various optical lenses, we measure the beam divergence angle. Afterward, considering different water types in an indoor aquarium at the OKATEM Research Center and both collimated and semi-collimated laser sources with Gaussian beam shapes, we perform measurements to determine the UVLC path loss channel coefficient. Finally, we confirm experimentally the theoretically proposed aggregate channel model that incorporates both attenuation and geometric loss. Having gained confidence in the conventional UVLC semi-collimated channels, which are verified experimentally, we then investigate the performance constraints of the UVLC system in the absence of turbulence and pointing errors. For a targeted Bit Error Rate (BER) performance, we derive closed-form expressions for the maximum achievable link distance, maximum acceptable beam divergence angle, minimum acceptable receiver aperture diameter, and maximum acceptable water turbidity. Using these expressions, we investigate each system parameter's performance in the simulation environment in pure sea, clear ocean, and coastal water to meet the desired BER performance of M-ary Unipolar Pulse-Amplitude Modulation (M-UPAM) UVLC systems.
After confirming the aggregate path loss channel coefficient for Line of Sight (LOS) links, we observe that scattered rays can contribute to the channel gain if a reflector redirects them toward the photodetector. In the literature, the reflector-aided transmission technique has been explored only in terms of receiving both direct and scattered rays for higher received signal strength. Therefore, we propose a closed-form expression for the underwater path loss assuming Non-Line-of-Sight (NLOS) transmission through the water surface and a man-made reflector (e.g., a mirror) in addition to the LOS link. First, utilizing the derived expression, we quantify the achievable NLOS gain, defined as the ratio between the maximum achievable channel coefficient from reflection and the overall channel coefficient. Then, we validate our findings experimentally by utilizing the water surface and the mirror as the reflecting surfaces in an aquarium. Our results reveal that gains up to around 3 dB can be observed due to reflections. In the last part of this dissertation, we examine the impact of solar energy on the performance of vertical UVLC links in an underwater communication deployment scenario, both analytically and experimentally. We show that solar energy has a destructive impact on communication quality depending on the sun's position, which varies with the time of day, day of the year, and location on Earth, across the visible light spectrum. Furthermore, to optimize the operation of this UVLC system, we propose optimization algorithms that tune the system parameters to achieve the maximum Signal-to-Noise Ratio (SNR).
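The aggregate channel model combines two multiplicative terms: Beer–Lambert attenuation governed by the extinction coefficient, and geometric loss from the beam diverging beyond the receiver aperture. The following toy link budget illustrates the structure (the extinction value, divergence, and aperture below are illustrative assumptions, not the thesis's measured parameters):

```python
import math

def received_power(pt_w, c_per_m, d_m, theta_rad, aperture_m):
    """Toy LOS underwater link budget: Beer-Lambert attenuation times the
    geometric loss of a diverging beam. All parameter values are assumed."""
    attenuation = math.exp(-c_per_m * d_m)               # absorption + scattering
    beam_diam = aperture_m + 2.0 * d_m * math.tan(theta_rad)
    geometric = min(1.0, (aperture_m / beam_diam) ** 2)  # fraction caught by aperture
    return pt_w * attenuation * geometric

# Clear-ocean-like extinction (~0.15 1/m), 10 m link, 1 mrad half-divergence,
# 5 cm receiver aperture, 100 mW transmit power.
p = received_power(0.1, 0.15, 10.0, 0.001, 0.05)
print(round(10 * math.log10(p / 0.1), 1), "dB total loss")
```

Because the attenuation term is exponential in distance while the geometric term is only polynomial, turbidity dominates the loss budget at long range, which is why the closed-form link-distance expressions hinge on the extinction coefficient.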
  • PhD Dissertation
    Quantum advances in imaging systems
    Kuniyil, Hashir Puthiyapurayil; Durak, Kadir; Durak, Kadir; Akgiray, Ahmed Halid; Parlak, Mehmet; Müstecaplıoğlu, Ö.; Kiraz, A.; Department of Electrical and Electronics Engineering
Quantum mechanics-based systems are increasingly used for advancing existing technologies. One of the technological frontiers where features of quantum mechanics are shown to have high potential is the imaging industry. Quantum imaging, a quantum version of the conventional imaging system, is increasingly explored for imaging applications. This method of imaging takes advantage of unique spatial and temporal quantum correlations to enhance its figures of merit, including the modulation transfer function. This thesis discusses the implementation procedure of a photonic quantum imaging scheme with the principal aim of achieving an improved imaging system. In this direction, the efficient generation of quantum sources, as well as their distribution and accurate analysis, are of paramount importance, particularly regarding the practicability of many quantum imaging applications. A quantum source based on the nonlinear process of spontaneous parametric downconversion (SPDC) is well-known and increasingly used for this task. We theoretically study the suitability of the critically phase-matched SPDC process, and a detailed analysis of it is given in this thesis that will help in engineering a quantum source fit for quantum imaging applications. From our theoretical studies and basic experiments, it is well understood that the photon pairs, called signal and idler, from an SPDC process are capable of showing entanglement in time and position correlations, parameters suitable for quantum imaging. These temporal and spatial correlations are used for quantum imaging, where the spatial correlation supports the extraction of the complete 2D features of the sample under study and the temporal correlation allows us to enhance the SNR of the system, a parameter that affects the modulation transfer function.
To assess the temporal correlation's capability to suppress noise, and hence enhance SNR, a separate study based on a quantum illumination experiment driven by continuous-wave laser-pumped SPDC is conducted. Our results show that a quantum correlation in the time domain has increased resilience to noise. In a novel method, we demonstrate that, with the assistance of polarization correlation, we can further improve the SNR of a quantum illumination system. As we use temporal correlation in the imaging scheme in addition to the spatial correlation, this result has implications for suppressing noise in a quantum imaging system. The advancement of camera technologies has offered ample room for experimenting with quantum spatial features, especially in SPDC-based spatial-correlation imaging. The spatial correlation in SPDC stands out from classical systems, as it shows EPR-type correlations. The task of reconstructing the spatial features of an object sample with the assistance of SPDC's spatial quantum correlation has been demonstrated in this thesis. Our study exploited the transverse spatial correlation in the continuous basis of the SPDC mode to realize 2D quantum imaging. Our proof-of-concept experimental results show that spatial correlation can be used for reconstructing an image. As this imaging scheme is based on the quantum-correlation behavior of the signal and idler photons of the SPDC process, we could suppress the noise in the system by employing the coincidence imaging technique. The demonstrated experimental imaging method is diffraction-limited; thus the resolution of the system, at best, is equal to that of its classical counterparts. Our simulation work shows that the performance of the quantum imaging scheme can be affected by input parameters such as the pump beam waist, crystal length, and the locations of the sample object and the imaging instrument.
Therefore, carefully selecting the input parameter values is critical for the system’s performance. The study concludes that quantum-based imaging schemes, like the one presented in this thesis, have high resilience to noise.
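The noise suppression behind coincidence imaging rests on a simple counting argument: signal and idler photons arrive within a very narrow time window of each other, while background photons arrive at random, so gating detections on coincidences rejects most of the background. A toy model (all rates, jitters, and window widths below are assumptions for illustration):

```python
import random

def coincidence_counts(pair_times, noise_times, window):
    """Count detector-B events within `window` of a detector-A event.
    Correlated pairs survive the gate; uncorrelated noise rarely does."""
    hits = sum(1 for ta, tb in pair_times if abs(ta - tb) <= window)
    a_times = [ta for ta, _ in pair_times]
    false_hits = sum(1 for tn in noise_times
                     if any(abs(tn - ta) <= window for ta in a_times))
    return hits, false_hits

random.seed(0)
# 100 SPDC pairs ~1 ns apart in time, plus 100 uncorrelated noise photons
# flooding the same 100 us span; a 5 ns coincidence window gates them.
pairs = [(t, t + random.gauss(0.0, 1e-9)) for t in (i * 1e-6 for i in range(100))]
noise = [random.uniform(0.0, 1e-4) for _ in range(100)]
true_c, false_c = coincidence_counts(pairs, noise, 5e-9)
print(true_c, false_c)
```

The gate keeps essentially all pairs while the expected number of accepted noise events scales with the ratio of total window time to span (about 1% here), which is the SNR advantage the quantum illumination experiments quantify.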
  • PhD Dissertation
    Off-design performance of micro-scale solar Brayton cycle
    Akba, Tufan; Mengüç, Mustafa Pınar; Mengüç, Mustafa Pınar; Önal, Mehmet; Güler, M. G.; Department of Mechanical Engineering
A novel methodology to design a micro-scale, solar-only Brayton cycle and assess its on- and off-design performance is presented. The method is applied to generate and assess six thermodynamic layouts over a range of solar irradiation levels. All plants have the same on-design requirements to create a baseline for comparing their off-design performance. PyCycle, a thermodynamic cycle modeling library for jet engine performance, is revised to model solar thermal plant performance instead and is used to create a volumetric receiver component. Initially, a gradient-based receiver design methodology is proposed. Although gradient calculation is the longest step in this methodology, gradient-based optimization iterates over 77% fewer designs than the design-of-experiments study. The final result is a 6% more efficient receiver design compared to the best design-of-experiments result, which is 62% efficient. For an efficient receiver design process, surrogate-model algorithms are tested, and surrogate-based design optimizations are performed using the design-of-experiments results as training data. A response surface surrogate model of the receiver is then selected for design optimization to maximize component-level efficiency. Because of the surrogate's simplicity, the optimization process was completed with fewer designs in a shorter time and reached a better objective than the gradient-based optimization of the base model. For the plant design phase, the compressor and turbine maps are scaled for the balance of the plant. Off-design efficiency, mass flow rate, operation range, turbomachinery maps, and maximum power output are presented. Since the methodology can be adapted to all plant sizes, the results are normalized to the on-design condition.
The outcome of this study demonstrates the impact of the thermodynamic configuration on off-design performance and provides a methodology to design plants that are more robust across a range of solar irradiation levels and can be operated more flexibly. For the design conditions given in the thesis, the solar radiation operating envelope can be extended by 5% with 6% less mass flow, and the plant operates more efficiently than the benchmark case over 85% of the operating regime.
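The surrogate-based design step works by fitting a cheap response surface to a handful of expensive simulation samples and optimizing the surface instead of the simulator. A minimal one-dimensional sketch (the quadratic stand-in for the receiver model and all numbers are assumptions, not the thesis's surrogate):

```python
import numpy as np

def expensive_model(x):
    # Stand-in for the costly receiver simulation; assumed peak near x = 0.6.
    return 0.68 - 1.2 * (x - 0.6) ** 2

# Design-of-experiments samples become training data for the surrogate.
x_train = np.linspace(0.0, 1.0, 7)
y_train = expensive_model(x_train)

# Response-surface surrogate: least-squares quadratic y ~ a x^2 + b x + c.
a, b, c = np.polyfit(x_train, y_train, 2)

# Optimizing the surrogate is now analytic: the vertex of the parabola.
x_opt = -b / (2 * a)
y_opt = a * x_opt**2 + b * x_opt + c
print(round(x_opt, 3), round(y_opt, 3))
```

Every surrogate evaluation is a polynomial instead of a full simulation, which is why the surrogate loop finishes with far fewer expensive designs than either the gradient-based search or a dense design-of-experiments sweep.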
  • PhD Dissertation
    Cyclic and corrosion behavior of nickel titanium shape memory alloys modified with various biocompatible coatings
    Şimşek, Görkem Muttalip; Yapıcı, Güney Güven; Yapıcı, Güney Güven; Başol, Altuğ Melik; Bebek, Özkan; Yılmazer, H.; İpekoğlu, M.; Department of Mechanical Engineering
Metallic materials including stainless steels, cobalt-chromium based alloys, commercial titanium, Ti6Al4V, and nickel-titanium (NiTi) shape memory alloys (SMA) have long been considered to be the dominant source of implant materials in the medical industry. Among all, the practical use of NiTi SMAs is fascinating due to their extraordinary behaviors, which are entirely new compared to other conventional metallic materials. However, a major problem associated with the use of NiTi for in-vivo applications is the potential risk of Ni release due to the highly corrosive environment of the human body. There have been many attempts to overcome such difficulties and to understand the corrosion mechanisms for conventional NiTi implant materials during the last decade through simulations or in-vivo and in-vitro experimental studies. Within this context, the effect of heat treatment parameters on the mechanical properties of NiTi shape memory alloy in wire form is investigated, since heat treatment can strongly influence the mechanical properties of shape memory alloys. Detailed experiments were planned and utilized to examine the following properties as a function of heat treatment condition: phase transformation temperature, movement and repeatability, cyclic behavior, corrosion resistance, and biocompatibility. All experimental setups were custom designed and manufactured based on specific test requirements. Corrosion and cyclic experiments were performed in Ringer solution and Simulated Body Fluid (SBF) to better understand the response of NiTi in a human body environment. Besides heat treatment parameters, the effect of a biocompatible layer on the functional behavior of NiTi was also investigated, since the method was found highly promising by several research groups. In this dissertation, CaP and PVA based hydrogel coatings were applied on NiTi SMAs in wire form via the dip coating method.
However, bioceramics and biopolymers possess poor mechanical properties, which constitutes a drawback. The main contribution of the dissertation is combining the superior mechanical properties of NiTi with the excellent biocompatibility of certain polymers and ceramics to develop a new type of implant for various medical applications. The present work also demonstrates for the first time the effect of NaOH pre-treatment on wire-form HA-coated NiTi. These efforts show that deposition of a biocompatible layer on metallic surfaces may act as a physical or chemical barrier and inhibit ion release from the surface in a highly corrosive environment. Besides, the influence of the biocompatible layer on the cyclic behavior of NiTi was also investigated with CaP and hydrogel coated NiTi samples. Cyclic experiments were performed in different environmental conditions, including dry conditions, SBF, and Ringer's solution, at different frequencies to better understand the NiTi response in a human body environment. Finally, this dissertation investigates the mathematical modeling of NiTi corrosion mechanisms. Even though the biomaterials were tested for corrosion in the experiments, it is hard to experimentally predict all the situations that can occur in the body. The reasons include the stochastic nature of the corrosion process, the need for long-term data collection, and the different responses of different patients to the same biomaterials. Accordingly, mathematical modeling of the corrosion process was structured with selected parameters such as pH, temperature, and the potential difference between the metal and the solution, utilizing Cellular Automata (CA) to simulate the corrosion behavior of uncoated and coated NiTi wires. It was concluded that the developed model accurately captures the corrosion progress in response to changes in environmental parameters.
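A cellular automaton treats the wire cross-section as a grid of cells and applies a local rule each step: intact cells touching corroded (electrolyte-filled) cells may themselves corrode with some probability. A minimal sketch of that rule (the grid size, corrosion probability, and surface rule below are illustrative assumptions; in the thesis the probability would be derived from pH, temperature, and the metal-solution potential difference):

```python
import random

def corrode_step(grid, p_corrode):
    """One CA update on a square grid: an intact cell (1) corrodes (becomes 0)
    with probability p_corrode if it touches a corroded neighbor; the top row
    is assumed to face the solution and is always exposed."""
    n = len(grid)
    nxt = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1:
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                exposed = i == 0 or any(
                    0 <= a < n and 0 <= b < n and grid[a][b] == 0
                    for a, b in nbrs)
                if exposed and random.random() < p_corrode:
                    nxt[i][j] = 0
    return nxt

random.seed(1)
grid = [[1] * 8 for _ in range(8)]       # pristine 8x8 cross-section
for _ in range(10):
    grid = corrode_step(grid, 0.3)       # assumed per-step corrosion probability
corroded = sum(row.count(0) for row in grid)
print(corroded, "of 64 cells corroded")
```

A coating is modeled naturally in this framework by giving surface cells a much lower corrosion probability until the barrier is breached, which is how such models compare coated and uncoated wires.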
  • PhD Dissertation
    Optimal trajectory generation and adaptive control of an underactuated and self-balancing lower body exoskeleton
    Soliman, Ahmed Adel Ahmed Fahmy; Uğurlu, Regaip Barkan; Uğurlu, Regaip Barkan; Bebek, Özkan; Ünal, Ramazan; Erbatur, K.; Şafak, K. K.; Department of Mechanical Engineering
    This thesis presents an approach for developing three-dimensional (3D) dynamic walking capabilities in a bipedal exoskeleton with underactuated legs. The proposed framework consists of a trajectory generator and an optimized inverse kinematics algorithm designed to handle underactuation. To achieve feasible task velocities despite the underactuated legs, the inverse kinematics algorithm utilizes a task prioritization method by exploiting the null space. This approach allows lower-priority tasks, such as swing foot orientation, to be accomplished to the greatest extent possible without interfering with higher-priority tasks like the Center of Mass trajectory. Simultaneously, the trajectory generator analytically incorporates the zero moment point concept, ensuring continuous acceleration throughout the entire walking period, regardless of changes in contact and phase. Furthermore, three locomotion controllers were developed to complement the proposed task prioritization algorithm and enhance its robustness against significant parameter uncertainty and external disturbances. These controllers include the zero moment point impedance feedback controller, along with two other state-of-the-art locomotion controllers: admittance control and centroidal momentum control. The objective is to integrate these controllers with the task prioritization algorithm, collectively improving the system's ability to handle challenging conditions and uncertainties. A series of simulation experiments were conducted using a 3D simulator to verify the validity and robustness of these controllers for thorough benchmarking. A human-robot coupled model is considered, including a 40 kg underactuated exoskeleton and 12 distinct anthropomorphic subjects. When combined with the proposed task priority-based optimization algorithm, all three controllers demonstrate adequate performance in addressing balanced locomotion behavior. 
The proposed zero moment point impedance controller shows statistically significant results, indicating a comparatively more robust performance. As the proposed locomotion controllers are model-based, a real-time-applicable inertial parameter identification algorithm is desirable to improve the locomotion controller's adaptability to inertial parameter variations. A semidefinite programming algorithm is developed to perform the identification recursively while guaranteeing the complete physical consistency of the identified inertial parameters. A recursive algorithm to update the identifiability projection matrix is developed to manage the inclusion of newly acquired samples and reconcile them with the previously identifiable parameters. The idea of the filtered regressor is used to mitigate the effect of contact transitions and noise without losing information about the identifiable parameters. To verify the validity of the proposed identification algorithm, a series of simulation experiments is conducted using a 3D simulator with the human-robot coupled model. As a result, the algorithm shows feasible performance in terms of accuracy, computation time, and the complete physical consistency of the identified parameters. To verify the proposed algorithm, an exoskeleton prototype was constructed. The prototype was equipped with eight series elastic actuators, sixteen force-sensitive resistors for measuring contact force, and sixteen absolute encoders for measuring motor angular displacements and series elastic actuator spring deflections. Communication with the series elastic actuators and sensors was enabled through a set of interface circuits and a desktop PC. Real-time operation was achieved by employing Ubuntu 18.04 and Xenomai 3.1. The required Cartesian and joint-level controllers were programmed in C with the GNU Scientific Library.
Simultaneous resolution of both algorithms was facilitated through parallel programming. Experimental development of squat, sway, and sagittal walk motions was carried out. The real-time applicability and feasibility of the proposed algorithms, as well as the developed hardware, were demonstrated by the experiments.
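The null-space task prioritization described above has a compact standard form: solve the primary task with the Jacobian pseudoinverse, then resolve the secondary task inside the null space of the primary Jacobian so it cannot disturb the higher-priority motion. A small numeric sketch with hypothetical task Jacobians (not the exoskeleton's actual kinematics):

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-level task-priority inverse kinematics: the primary task (J1, dx1)
    is met exactly; the secondary task (J2, dx2) is satisfied as far as the
    null space of J1 allows (Maciejewski-Klein-style formulation)."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # null-space projector of J1
    dq = J1p @ dx1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1p @ dx1)
    return dq

# Toy 3-DoF system: primary task drives the first coordinate (e.g., CoM),
# secondary task drives a sum of the other two (e.g., swing-foot orientation).
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
dq = prioritized_velocities(J1, np.array([0.5]), J2, np.array([1.0]))
print(np.round(J1 @ dq, 6), np.round(J2 @ dq, 6))
```

Because the secondary correction is built from `J2 @ N1`, it lies entirely in the null space of `J1`, so the primary task velocity is achieved exactly even when the lower-priority task is only partially feasible.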
  • PhD Dissertation
    Developing a methodology for the design and optimization of the pressure-swirl atomizers
    Nural, Ozan Ekin; Ertunç, Özgür; Ertunç, Özgür; Mengüç, Mustafa Pınar; Başol, Altuğ Melik; Güngör, A. G.; Uzol, O.
Atomization is the process of disintegration of bulk liquid into smaller droplets and has been a research topic of fluid dynamics for almost a century. Devices used for atomization are called atomizers, and many different types of atomizers using different strategies have been developed. The pressure-swirl atomizer is one of the most widely used types due to its simplicity and its ability to achieve a wide range of droplet sizes and coverage areas. Even though the geometry of the pressure-swirl atomizer is simple, its internal flow field includes complex phenomena such as turbulence, liquid/gas interfaces, recirculation zones, and instabilities. Modeling the performance of pressure-swirl atomizers has been tackled by many researchers, and different models have been developed. Out of these models, 2D and 3D numerical simulations were proposed to be the most accurate, even though the accuracy of 2D simulations has been questioned by many researchers. Other models, such as 1D models and semi-empirical correlations, also exist in the literature, yet the accuracy of the latter is reported to be low. Models that could be used for the optimization of pressure-swirl atomizers, where thousands of calculations must be performed with low error values, are lacking in the literature. Even though numerical modeling can produce accurate predictions, its computation time prevents it from being used in optimization calculations. This study presents models developed for such calculations and assesses their accuracy against conducted experiments and simulations. In this study, first, inlet modeling of the 2D simulations is inspected. It is shown that the accuracy of the 2D simulations can be improved drastically by adjusting the inlet velocity components to include the effect of the flow deformation that occurs due to the tangential ports.
It is shown that a model developed in an earlier study can be used to describe this deformation and calculate the inlet velocity components. Next, existing semi-empirical models in the literature, along with the experimental data, are presented and inspected. In total, 1,777 experimental data points, obtained from 34 different studies, are provided and cataloged. It is seen that, when the accuracy of the semi-empirical correlations is evaluated over a global range rather than the range in which they were developed, error values increase significantly. A new set of semi-empirical correlations that are globally more accurate is obtained using the experimental data. However, even with the developed correlations, error values in the calculation of pressure difference and droplet sizes remain higher than 50%. Because semi-empirical modeling lacks the desired level of accuracy, a boundary-layer modeling approach is adopted to describe the internal flow field of the pressure-swirl atomizer. Two different models are obtained: one for the straight sections (swirl chamber and orifice) and another for the convergent or divergent sections (convergent section or trumpet). The model for the straight sections is evaluated by comparison with experimental data from open-end pressure-swirl atomizers. The model for the convergent or divergent sections is compared with a 3D simulation of the unique atomizer geometry developed in the framework of this dissertation. Finally, the accuracy of the combined model is evaluated with experimental data from closed-end pressure-swirl atomizers. A large number of 3D full-geometry simulations are conducted to explain the coupling of the internal flow dynamics with the uniformity of the pressure-swirl atomizer spray. These simulations use atomizer geometries with a unique shape, and three of the geometries are later selected for manufacturing. 
Of the selected geometries, two have uniform sprays, while the third has a non-uniform spray. Due to the unique shape of the inlets, these atomizers are manufactured with the method called Laser Lithography, and the manufacturing tolerance of the geometries is less than 1 $\mu$m. The atomizers are characterized with the Laser-Induced Fluorescence method at the radial plane, and the results show that a non-uniform internal flow produces a spray containing clusters of large-diameter droplets. A semi-empirical model describing the non-uniformity is obtained and compared with the existing model and experimental data in the literature. The spray of the pressure-swirl atomizer takes different shapes as the pressure and flow rate through the atomizer are increased, and the final form of the spray is called the fully-developed spray. The development of the spray of pressure-swirl atomizers is also modeled in this study. For this purpose, both experiments conducted in the framework of this study and experimental data obtained from the literature are used. A model based on the bulk Reynolds and Weber numbers is obtained using the gradient-descent method, and comparisons with existing correlations in the literature are made. Droplet diameter modeling is done with the Linear Instability Sheet Atomization (LISA) model, which is often used by commercial CFD codes. This model is analyzed by examining its equations, and eight different correlations, differing in their simplifications, are obtained. These correlations are evaluated against the experimental data of this study, which is obtained with the Shadowgraphy method. All of the droplet diameter experiments are conducted in the primary break-up region of the atomization. The comparisons show that when the break-up length of the atomizer can be modeled accurately, the LISA model can be used to estimate the representative droplet diameter. 
For the description of the droplet diameter distribution, a new model is obtained for calculating the spreading parameter. This parameter is often taken as constant in commercial CFD codes, yet it is shown to significantly affect the obtained distributions. A comparison of the obtained model with the model used in commercial CFD codes is also presented. Finally, the combination of the obtained models and their order of use for performing an optimization study is presented. The obtained model can compute a single geometry in 20.0 seconds on a single core, whereas the 2D and 3D simulations require 208 core-hours and 2,880 core-hours, respectively. The developed model is therefore 37,440 times faster than the 2D simulation and 518,400 times faster than the 3D simulation.
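As a sanity check on the reported speedups, converting the simulation costs from core-hours to core-seconds and dividing by the 20.0 s single-core model runtime reproduces the quoted factors (a small illustrative script; the core-hour figures are taken from the abstract above):

```python
# Speedup of the reduced-order model over a CFD simulation:
# core-hours are converted to core-seconds and divided by the
# 20.0 s single-core runtime of the combined model.

MODEL_RUNTIME_S = 20.0  # single-core runtime of the combined model

def speedup(core_hours: float) -> float:
    """Speedup factor of the model over a simulation costing `core_hours`."""
    return core_hours * 3600.0 / MODEL_RUNTIME_S

print(speedup(208))    # 2D simulation -> 37440.0
print(speedup(2880))   # 3D simulation -> 518400.0
```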
  • PhD Dissertation (Publication)
    Enabling techniques for next generation secure underwater optical communications
    Kebapci, Burak; Uysal, Murat; Uysal, Murat; Durak, Kadir; Edemen, Çağatay; Levent, V. E.; Erdoğan, E.; Department of Electrical and Electronics Engineering
    As threats in the maritime domain diversify, securing data transmission becomes critical for underwater wireless networks designed for the surveillance of critical infrastructure and maritime border protection. This has sparked interest in underwater Quantum Key Distribution (QKD). In this study, a fully functional BB84 QKD system is developed as a solution to the emerging security needs of underwater wireless communication systems. The QKD unit is built on a hybrid computation system consisting of a Field-Programmable Gate Array (FPGA) and an on-board computer (OBC) interfaced with optical front-ends. A real-time photon-counting module is implemented on the FPGA. The transmitter and receiver units are powered by external UPS units, and all system parameters can be monitored from the connected computers. The system is equipped with a visible laser and an alignment indicator to validate successful manual alignment. Secure key distribution at a rate of 100 qubits per second was successfully tested over a link distance of 7 meters.
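The sifting step of the BB84 protocol implemented by the system can be illustrated with a toy simulation (an idealized sketch assuming no channel loss, noise, or eavesdropper; this is not the hardware implementation described above):

```python
import secrets

def bb84_sift(n_qubits: int):
    """Toy BB84 sifting: Alice sends random bits in random bases, Bob measures
    in random bases; only positions where the basis choices agree are kept."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_qubits)]
    alice_basis = [secrets.randbelow(2) for _ in range(n_qubits)]  # 0 = rectilinear, 1 = diagonal
    bob_basis   = [secrets.randbelow(2) for _ in range(n_qubits)]
    # With matching bases (and an ideal channel) Bob recovers Alice's bit
    # deterministically; mismatched bases are discarded during sifting.
    return [b for b, ab, bb in zip(alice_bits, alice_basis, bob_basis) if ab == bb]

key = bb84_sift(1000)
# On average about half of the transmitted qubits survive sifting.
print(len(key))
```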
  • PhD Dissertation (Publication)
    Timing side channel issues and photon budget optimization in QKD
    Pahalı, Melis; Durak, Kadir; Durak, Kadir; Akgiray, Ahmed Halid; Uğurdağ, Hasan Fatih; Turgut, S.; Müstecaplıoğlu, Ö. M.; Department of Electrical and Electronics Engineering
    Prepare-and-measure and entanglement-based quantum key distribution (QKD) protocols are vulnerable to a side-channel attack that exploits the time difference in the responses of the detectors used to obtain key bits. There is a correlation between the timing histograms of the detectors and the values of the bits they generate, and the information leakage to an eavesdropper is quantified by the mutual information between them. The recommended defense against the timing side-channel attack is to use a large time-bin width instead of high-resolution timing information in the QKD system. A common notion is that a large bin width reduces the resolution of the detectors' responses and hence supposedly minimizes the information leakage to an eavesdropper. We challenge this conventional wisdom and demonstrate that increasing the bin width does not monotonically reduce the mutual information between the key bits and the detectors' responses. Second, we identify the start time of binning as a parameter and show the characteristic behaviour of the mutual information with respect to it. Third, we examine the effect of the full width at half maximum of the detectors' responses on the mutual information. As a result, although QKD is theoretically secure from a cryptographic point of view, it must still be protected against side-channel attacks, and our findings on these three points, detailed in the body of the thesis, should be taken into account in order to estimate the information leaked to a possible eavesdropper correctly. Additionally, there is a huge effort to improve the data transfer capacity of QKD technologies to make them deployable in establishments of various sizes and on a global scale. We propose a method to increase the raw key rate in discrete-variable entanglement-based QKD applications in which a Bell inequality is employed for the security check. We focus on a standardized E91 QKD system. 
This method relies on optimizing the photon budget allocation among the different types of bits that are generated purposely or occur unavoidably in the QKD system. Additionally, we examine the optimized values of the core variables for various photon budgets. We also remind the reader of the dependencies and limitations of the photon-counting process.
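The leakage metric at the heart of the timing side-channel analysis can be sketched as a plug-in estimate of the mutual information between the bit value and the binned detector response. The Gaussian timing model, the 0.3 ns delay, and the jitter below are hypothetical stand-ins for real detector histograms; only the binning parameters (bin width, start time) mirror the quantities discussed above:

```python
import math, random

def mutual_information(samples, bin_width, t0=0.0):
    """Plug-in estimate of I(bit; binned arrival time) in bits, from a list
    of (bit, arrival_time) pairs. Binning starts at t0."""
    joint, p_bit, p_bin = {}, {}, {}
    n = len(samples)
    for bit, t in samples:
        k = math.floor((t - t0) / bin_width)          # time-bin index
        joint[(bit, k)] = joint.get((bit, k), 0) + 1
        p_bit[bit] = p_bit.get(bit, 0) + 1
        p_bin[k] = p_bin.get(k, 0) + 1
    mi = 0.0
    for (bit, k), c in joint.items():
        pxy = c / n
        # pxy * log2( pxy / (p(bit) * p(bin)) ), with counts instead of probs
        mi += pxy * math.log2(c * n / (p_bit[bit] * p_bin[k]))
    return mi

random.seed(1)
# Hypothetical detectors: the bit-1 detector clicks 0.3 ns later on average,
# both with 0.5 ns timing jitter.
data = [(b, random.gauss(0.3 * b, 0.5)) for b in [random.randrange(2) for _ in range(20000)]]
for w in (0.1, 0.5, 2.0, 8.0):
    print(f"bin width {w:4.1f} ns -> leakage {mutual_information(data, w):.4f} bit")
```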
  • PhD Dissertation (Publication)
    Blockchain-based authentication and authorization for software defined networks
    Latah, Majd; Kalkan, Kübra; Çakmakçı, Kübra Kalkan; Arı, İsmail; Alagöz, F.; Levi, A.; Department of Computer Science
    Software-defined networking (SDN) is a novel networking paradigm that allows simple and flexible management of the underlying forwarding devices through a centralized controller. However, SDN suffers from security issues that may paralyze the whole network when the controller is under attack. Blockchain (BC) is a new technology that provides a decentralized distributed ledger, which can be used to protect the SDN controller from other malicious components in the network. In this thesis, we investigate the integration of SDN and BC technology, focusing on BC-enabled authentication and authorization for SDNs. First, we propose DPSec, a blockchain-based data plane authentication protocol for SDNs. Second, we improve the performance of BC-enabled SDN by proposing a component-wise waiting time approach. We also utilize lattice-based signatures and Key Encapsulation Mechanisms (KEMs) to improve the security of BC-SDN. Third, we introduce HostSec, a blockchain-based approach that provides mutual host-controller, Packet-In/Packet-Out, and host-host authentication for SDNs. Fourth, we propose SDN-API-Sec, a blockchain-based access control method for cross-domain SDNs that utilizes BC smart contracts. The results suggest a trade-off between security and latency.
  • PhD Dissertation (Publication)
    Nano-scale chemically modified thin film characterization for chemical mechanical planarization applications
    Karagöz, Ayşe; Başım, Gül Bahar; Mengüç, Mustafa Pınar; Yaralıoğlu, Göksenin; Erkol, Güray; Akgün, B.; Department of Mechanical Engineering; Karagöz, Ayşe
    The aim of the microelectronics industry has historically been to achieve increasing functionality through decreasing device sizes while simultaneously reducing unit manufacturing costs. This objective has been achieved by the implementation of multilevel metallization (MLM), based on the development of advanced photolithography processes and the chemical mechanical planarization (CMP) process, which enabled successful patterning through photolithography by planarizing the wafer surfaces. The projected targets of Integrated Circuit (IC) manufacturing are facing physical barriers given the current and forthcoming needs of the semiconductor industry to develop future metal oxide semiconductor field effect transistors (MOSFETs). These challenges entail the introduction of new and more difficult materials to achieve better device performance, such as germanium, whose higher electron mobility enables faster microprocessors, as well as III-V semiconductors such as GaN, GaAs, or InAs, which are being tested for high-power device applications. Furthermore, new ideas such as reduced power consumption and energy harvesting have introduced ferroelectric and magnetic memories, as well as piezoelectric transducers, which involve a variety of materials that are harder to integrate into conventional semiconductor manufacturing. The chemical mechanical planarization process is one of the key enablers for the integration of these new materials into current semiconductor fabrication processes. CMP functions on the principle of chemically modifying the surface to be polished while this surface is continuously abraded mechanically by the nano-particles homogeneously suspended in the slurry environment. 
The development of new CMP processes requires a robust slurry formulation that can provide high material removal rates (MRR) to promote high-volume manufacturing throughput, together with low dissolution rates (DR) to achieve topographic selectivity and global planarity, in addition to minimal surface defectivity and surface roughness. Hence, it is important to understand the chemical and mechanical nature of the CMP process in order to better control it and to design future processes for new-generation materials. This dissertation focuses on the characterization of the chemically modified thin films that form during CMP applications, through the exposure of the surface to be polished to the slurry chemicals, with the goal of optimizing process performance at the nano-scale. The overall study is presented in two main sections: the first part focuses on the findings on chemically formed metal oxide thin films in metal CMP applications, and the second part focuses on chemically modified thin film characterization in non-metal CMP applications. In metal CMP applications, the chemically modified thin films are required to be protective oxides to achieve topographic selectivity. Therefore, the chemically modified thin films of the metal (tungsten) substrates were characterized for thickness and composition, as well as for their protective nature, by calculating their Pilling-Bedworth (P-B) ratio, which compares the volume of the oxide to the volume of the metal underneath. The analyses showed a layered oxide film in which the topmost oxide film was a hydroxyl compound of tungsten, followed by W/WOx combinations in the lower layers until the pure W substrate is reached. Furthermore, it was also demonstrated that the surface topography of the tungsten wafers tends to change as a function of the oxidizer concentration. The observed changes in surface topography were also found to affect the wettability and the total surface energy. 
Hence, both the protective nature and the surface nano-topography of the metal oxide thin films need to be studied to assess metal CMP performance at the nano-scale. Changes in surface roughness and topography with the oxidizer concentration were also studied through a mathematical modeling approach using a Cahn-Hilliard Equation (CHE) approximation. The CHE explains the formation of surface nano-structures in terms of the reverse diffusion principle and is expected to shed light on the changes in the material removal rate mechanisms as a function of the CMP process variables. In the second half of the dissertation, slurry formulations are evaluated to characterize the chemically modified thin films that enhance selectivity in germanium/silica (Ge/SiO2) CMP systems. The chemically modified thin films are mainly the hydroxyl layers formed by the dissolution of materials such as silicon, silica, or germanium in front-end applications of CMP. To evaluate the impact of the chemically modified thin films in the Ge/SiO2 system, it is necessary to modify these films with oxidizers and surface-active agents (surfactants), since the function of the chemically modified layers in these applications is to achieve optimal removal rate selectivity. Therefore, selectivity analyses were initially conducted by measuring wear rate responses at the single particle-surface interaction level through Atomic Force Microscopy (AFM), with and without the use of surfactants. Both anionic (sodium dodecyl sulfate, SDS) and cationic (cetyl trimethyl ammonium bromide, C12TAB) surfactants were evaluated at their sub-micelle and above critical micelle concentrations (CMC) as a function of pH and oxidizer concentration. The CMP performances of Ge and SiO2 wafers were evaluated in terms of material removal rates, selectivity, and surface quality. This dissertation is composed of seven chapters. 
The first chapter discusses the importance of nano-scale chemically modified thin films for the chemical mechanical planarization process through an introduction to the CMP process. Chapter 2 reviews the main components of the CMP process, its integration in front-end, middle-section, and back-end-of-the-line applications, as well as the importance of the characterization of chemically modified thin films in CMP development. Chapter 3 discusses metal oxide protective thin film characterization for metal CMP applications, focusing on the growth and protective nature of the metal oxide thin films on CVD-deposited tungsten wafers pre- and post-polishing in the presence of an oxidizer. In Chapter 4, the CMP performance of the tungsten wafers is evaluated based on material removal rate and surface quality analyses as a function of changes in the slurry solids loading and the concentration of H2O2 as an oxidizer. Chapter 5 focuses on germanium/silica CMP through wear rate testing as a preliminary predictive approach, as well as through standard CMP MRR evaluations, to define a suitable slurry formulation with sufficient selectivity and removal rate performance. Chapter 6 extends the current knowledge base on the use of surfactant systems from standard shallow trench isolation (STI) CMP to Ge-based STI CMP. CMP results are reported for slurries made of silica particles with 3 wt% solids loading and 200-300 nm particle size in the presence of surfactants and 0.1 M H2O2 as an oxidizer. The optimal slurry formulations are presented as a function of pH and oxidizer concentration for Ge/SiO2 selectivity, determined statistically through design of experiments (DOE). Finally, Chapter 7 provides a summary of the reported findings of this dissertation and the suggested future work.
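The Pilling-Bedworth ratio used above to assess the protective nature of the tungsten oxide can be computed directly from molar masses and densities. The W/WO3 values below are handbook figures used for illustration, not the thesis's measurements:

```python
# Pilling-Bedworth (P-B) ratio: volume of oxide formed per volume of metal
# consumed, PBR = (M_oxide * rho_metal) / (n * M_metal * rho_oxide), where n
# is the number of metal atoms per oxide formula unit.

def pilling_bedworth(m_oxide, rho_oxide, m_metal, rho_metal, n_metal_atoms=1):
    """PBR > 1 means the oxide occupies more volume than the metal it replaces."""
    return (m_oxide * rho_metal) / (n_metal_atoms * m_metal * rho_oxide)

# Handbook values for tungsten oxidizing to WO3 (M in g/mol, rho in g/cm^3)
pbr = pilling_bedworth(m_oxide=231.84, rho_oxide=7.16,
                       m_metal=183.84, rho_metal=19.25)
print(f"PBR(W -> WO3) = {pbr:.2f}")   # roughly 3.4
```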
  • PhD Dissertation (Publication)
    Application of large-scale optimization methods in scheduling and routing problems
    Elyasi, Milad; Özener, Okan Örsan; Özener, Okan Örsan; Yanıkoğlu, İhsan; Ekici, Ali; Yakıcı, E.; Duran, S.; Department of Industrial Engineering; Elyasi, Milad
    In this thesis, we consider three different applications of large-scale optimization methods. The first problem addresses blood donation tailoring under uncertain demand. In the second, we propose a model for hybrid manufacturing consisting of flexible manufacturing systems and conventional manufacturing machines. In the last, we consider a two-echelon vehicle routing problem for last-mile delivery of groceries. In the first part of the thesis, we propose a stochastic scenario-based reformulation of the blood donation management problem that adopts multi-component apheresis and segments the donor pool into here-and-now and wait-and-see donors. The donor pool segmentation enables more flexible donation schedules than the orthodox donation approach, because wait-and-see donors may adjust their donation schedules according to the demand values realized over time. We propose a column generation approach to solve the associated multi-stage stochastic donation tailoring problem for realistically sized instances. The second part considers a flexible/hybrid manufacturing setting with conventional dedicated machinery to satisfy regular demand and a flexible manufacturing system to handle surges in demand. We model the uncertainty in demand using a scenario-based approach and allow the business to make here-and-now and wait-and-see decisions, exploiting the cost-effectiveness of standard production and the responsiveness of the flexible manufacturing systems. We propose a branch-and-price algorithm as the solution approach. Our computational analysis shows that this hybrid production setting provides a highly robust response to demand uncertainty, even with high fluctuations. In the third part, we propose a \textit{two-echelon vehicle routing problem} (2E-VRP) that considers a heterogeneous fleet of vehicles and different customer types. 
In our model, unlike previous studies in the literature, the large vehicles not only visit pre-assigned points, called satellites, to refill the smaller vehicles, but also deliver items to the customers. The smaller vehicles, in turn, are responsible for customers with small demands and can be refilled either at the depots or at the satellites. We propose a branch-and-price algorithm as the solution approach and obtain promising results in comprehensive numerical studies that demonstrate its versatility.
  • PhD Dissertation (Publication)
    Automated maintenance support for data-tier software
    Ersoy, Ersin; Sözer, Hasan; Sözer, Hasan; Özener, Okan Örsan; Kıraç, Mustafa Furkan; Aktaş, M. S.; Kaya, K.; Department of Computer Science; Ersoy, Ersin
    Data-tier software includes the data model and business logic of enterprise systems, and it is subject to long-term maintenance. Even though the user interface of these systems can be completely replaced, data-tier software usually evolves for decades. The number of domain experts with extensive knowledge about the overall software diminishes over time, and applying extensions or changes becomes increasingly effort-consuming and error-prone for new developers. In this thesis, we introduce techniques and tools to provide automated maintenance support for data-tier software. These techniques and tools aim at reducing the effort and the number of errors for three challenging maintenance tasks: i) correct placement of a new object, such as a stored procedure, in data-tier software, ii) evaluating the impact of changing database tables on software modules, and iii) evaluating the impact of table extensions on other tables of the same database. The first task is important because introducing a new object to data-tier software should not hamper its modular structure. This structure is defined by the allocation of objects among a set of schemas. Therefore, we introduce an approach and a tool to automatically predict the correct placement of new objects. We extract dependencies among the various types of objects (database types, sequences, tables, procedures, functions, packages, and views) that are already placed in schemas. These dependencies are used for training an artificial neural network model, which is then used for prediction. Our industrial case studies show that our approach can reach an accuracy of 89%, whereas the baseline approach using coupling and cohesion metrics reaches 57.4% accuracy at most. There are already several techniques and tools supporting the second task of analyzing the impact of changes in the data model on the source code. 
However, they fall short in analyzing dynamically created SQL statements, queries on multiple tables, and other types of statements that allow data manipulation in PL/SQL, a commonly used language for developing data-tier software. We introduce techniques and a tool to parse both the data model and the source code (i.e., PL/SQL functions and procedures) in all the schemas of a given database. A dependency model is then created based on queries on, and manipulations of, database tables. Unlike prior studies, our tool can analyze queries that are created dynamically, that involve multiple tables, and that use PL/SQL-specific features. We use the derived dependency model to estimate the effort of two common refactoring types on real systems and observe high consistency between the automated and manual estimations. The third task concerns the impact of changing tables on other tables of the same database. Only a few studies focus on this concern, and they consider only the impact of deletion and modification of columns in database tables. To address this limitation, we introduce an approach and a tool for automatically detecting the impact of data model extensions on the data model itself. We employ Siamese networks to detect similarities among database tables and, as such, to learn implicit relations among them. Table similarities are used as the basis for identifying potential impact. We developed another tool as the baseline, which employs the cosine similarity metric to measure similarity among database tables. The results obtained with Siamese networks turned out to be better than the baseline, achieving a mean F1 score of 96.1%.
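A cosine-similarity baseline of the kind described above can be sketched as follows. The bag-of-tokens feature extraction from column names, and the example tables, are simplified, hypothetical stand-ins for whatever features the actual tool uses:

```python
import math
from collections import Counter

def table_vector(columns):
    """Bag-of-tokens vector for a table, built from its column names
    (a simplified stand-in for real feature extraction)."""
    tokens = [tok for col in columns for tok in col.lower().split('_')]
    return Counter(tokens)

def cosine_similarity(va, vb):
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tables: two schema-similar tables and one unrelated table.
orders   = table_vector(["order_id", "customer_id", "order_date", "total"])
invoices = table_vector(["invoice_id", "customer_id", "invoice_date", "total"])
products = table_vector(["product_id", "name", "unit_price"])

print(cosine_similarity(orders, invoices))  # related tables score higher
print(cosine_similarity(orders, products))  # unrelated tables score lower
```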
  • PhD Dissertation (Publication)
    Server and client-side algorithms for enhancing adaptive streaming
    Akçay, Mehmet Necmettin; Beğen, Ali Cengiz; Beğen, Ali Cengiz; Arı, İsmail; Civanlar, Mehmet Reha; Sayıt, M.; Akgül, T.; Department of Computer Science; Akçay, Mehmet Necmettin
    HTTP adaptive video streaming is a technique widely used on the internet today to stream live and on-demand content. Server and client-side algorithms play an important role in improving the user experience in terms of metrics such as latency, rebufferings, and rendering quality. After explaining the commonly used metrics, we analyze four main aspects of video streaming: (i) bandwidth prediction accuracy, (ii) utilization of playback speed, (iii) adaptive streaming for content-aware-encoded videos, and (iv) head motion awareness for 360-degree videos. 360-degree video streaming requires much higher bandwidth than conventional video streaming. We demonstrate that most of the algorithmic improvements achieved for video streaming can also be applied to Viewport Dependent Streaming (VDS) for 360-degree videos. Importantly, in 360-degree video streaming, a Head Mounted Display (HMD) device is available that can track the viewport orientation of the user. We therefore also investigate and improve the rate-adaptation algorithms for 360-degree videos by developing several new algorithms that make use of the HMD. The new algorithms proposed in this thesis are Low-on-Latency (LoL), Low-on-Latency+ (LoL+), Bang-on-Bandwidth (BoB), Size-aware Rate Adaptation (SARA), Content-aware Playback Speed Control (CAPSC), and Head-motion-aware Viewport Margins (HMAVM). We evaluate the proposed algorithms using the objective metrics discussed in detail and show significant contributions, including up to a 91% decrease in rebuffering duration for on-demand streaming; a 61.9% decrease in rebuffering duration and an 8.1% decrease in latency compared to L2A for low-latency live streaming; 81.3% bandwidth prediction accuracy for interactive streaming; and, lastly, a 20% improvement in viewport quality and a 50% reduction in motion-to-high-quality delay for 360-degree video streaming.
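As an illustration of the bandwidth-prediction aspect (a textbook estimator, not the thesis's BoB algorithm), a common conservative throughput predictor used by HTTP adaptive streaming clients is the harmonic mean of recent segment-download throughputs:

```python
def harmonic_mean_bandwidth(samples_kbps, window=5):
    """Throughput estimate over the last `window` segment downloads.
    The harmonic mean is dominated by the slow samples, which helps an
    ABR client pick a safe bitrate after a throughput drop."""
    recent = samples_kbps[-window:]
    return len(recent) / sum(1.0 / s for s in recent)

downloads = [8000, 7500, 9000, 2000, 2200]   # per-segment throughput, kbps
estimate = harmonic_mean_bandwidth(downloads)
print(f"{estimate:.0f} kbps")
# The estimate sits well below the arithmetic mean (5740 kbps), so the
# client would choose a lower bitrate after the drop to ~2000 kbps.
```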
  • PhD Dissertation (Publication)
    Set-covering based heuristic approaches for the problems from the printing industry
    Çankaya, Emre; Ekici, Ali; Ekici, Ali; Özener, Okan Örsan; Göktürk, Elvin Çoban; Duran, S.; Yakıcı, E.; Department of Industrial Engineering; Çankaya, Emre
    In this thesis, we focus on two different production planning problems from the printing industry. The first is the label printing problem, and the second is a variant of the cover printing problem. Both problems seek the best assignment of products to different templates in order to meet demand requirements. In the first problem, we focus on minimizing waste, whereas in the second the goal is to minimize the total production cost. In these problems, each template can contain a fixed number of products, and a suitable assignment of products to templates reduces product waste and improves the efficiency of the printing production. We address the first problem in two different cases: in the first case, each product can be assigned to a single template, whereas in the second case, each product can be assigned to all templates. In the second problem, we consider only the second case due to organizational constraints. Since the studied problems are hard, we propose two-phase heuristic algorithms to solve them. We conduct extensive computational studies on real-world and randomly generated instances in order to assess the performance of the proposed algorithms and compare them, in terms of solution quality, with existing solution algorithms in the literature.
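The set-covering structure behind such template-assignment problems can be illustrated with the classic greedy covering heuristic (a textbook sketch on a made-up instance, not the two-phase algorithm proposed in the thesis):

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy heuristic: repeatedly pick the subset that covers the
    most still-uncovered elements. Returns indices of the chosen subsets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda i: len(uncovered & subsets[i]))
        if not uncovered & subsets[best]:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical instance: products 1..6, candidate templates as product sets.
templates = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5, 6}, {1, 6}]
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, templates))  # -> [0, 2, 3]
```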
  • PhD Dissertation (Publication)
    Adaptive MIMO free space optical communication systems
    Nouri, Hatef; Uysal, Murat; Uysal, Murat; Demiroğlu, Cenk; Durak, Kadir; Baykaş, T.; Güçlüoğlu, T.; Department of Electrical and Electronics Engineering; Nouri, Hatef
    Free-space optical (FSO) communication enjoys the high data rates of the optical spectrum together with the flexibility of RF links. FSO systems bring many advantages to line-of-sight wireless communication technology. With the recent increase of interest in this promising technology, there is a need for a comprehensive understanding of its system limitations, which are mainly due to atmospheric conditions. In the first part of this research, we use our custom-designed atmospheric channel emulator to evaluate FSO systems in a controlled environment and experimentally investigate the performance of FSO links. Specifically, we investigate geometric loss, absorption loss, different weather conditions (such as fog and rain), different beam shapes, and atmospheric turbulence using the atmospheric chamber. Atmospheric turbulence is a significant impairment in FSO channels and results in random fluctuations of the received signal level. By generating a desired level of atmospheric turbulence in the chamber, we investigate the effects of wavelength and aperture averaging on the performance of FSO systems. Aperture averaging extracts inherent receive diversity gains and can be used as an effective fading mitigation technique. Furthermore, multiple-aperture systems are also adopted in practical FSO systems to mitigate turbulence-induced fading effects, offering dramatic performance improvements in terms of link reliability (via diversity gain) and data rates (via multiplexing gain). On the other hand, turbulence-induced fading is very slow-varying; hence reliable feedback is possible, and adaptive transmission can be implemented in practical FSO systems, bringing a noticeable performance improvement. Although the literature on adaptive transmission for RF systems is mature, it has only recently been applied to SISO FSO systems, and its direct application to MIMO FSO systems is challenging. 
Aiming to fill research gaps in this growing field, this work develops a framework for practical FSO systems with adaptive MIMO architectures. A MIMO system over a frequency-flat, log-normal or Gamma-Gamma slow-fading channel is considered. In MIMO FSO systems, the space-time transmission strategy can also be adjusted, introducing a new dimension for adaptation. This means that practical MIMO link adaptation algorithms must also provide dynamic adaptation between the diversity and multiplexing modes of operation, which requires a fundamental understanding of the diversity-multiplexing tradeoff (DMT) under log-normal fading channels. Although there has been growing interest in the study of the DMT, the existing works are mostly restricted to results reported for Rayleigh, Rician, and Nakagami fading channels. In the next part of this research, we investigate the optimal tradeoff in the presence of log-normal fading channels. We derive the outage probability expression and then present the asymptotic DMT expression. We further investigate the DMT for finite SNRs and demonstrate convergence to the asymptotic case. Next, we suggest a framework for practical MIMO FSO systems with adaptive architectures and show how to use this framework to increase either link reliability (via diversity gain) or data rates (via multiplexing gain). To illustrate our approach, we consider three MIMO transmission mapping matrices: Matrix A (multiplexing), employing only spatial multiplexing; Matrix B (diversity), exploiting only diversity; and Matrix C (hybrid), combining diversity and spatial multiplexing. We first obtain expressions for the outage capacity of these matrices as the metric to maximize the system rate for a fixed target outage probability. Limiting the adaptation modes to a small subset is the key to the adaptive strategy. 
Spatial adaptation can be combined with conventional adaptive modulation and coding (AMC) to obtain optimal system performance. In particular, we consider multiple-input single-output (MISO) and single-input multiple-output (SIMO) FSO systems with pulse position modulation (PPM) and pulse amplitude modulation (PAM). We propose three adaptive algorithms in which the modulation size and/or transmit power are adjusted according to the channel conditions. We formulate the design of the adaptive algorithms to maximize spectral efficiency under peak and average power constraints while maintaining a target outage probability. In conclusion, this work makes promising progress toward overcoming the main impairments of FSO links (fog attenuation and turbulence-induced fading) in four ways: 1) by examining the channel and proposing novel models and characterizations of atmospheric attenuation; 2) by exploiting the tradeoff between aperture averaging and wavelength dependency; 3) by investigating and proposing spatial adaptation in MIMO FSO links; and 4) by employing adaptive modulation and power control schemes and demonstrating the promising performance of the adaptive system.
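The mode-selection core of such an adaptive algorithm can be sketched as follows. The per-mode SNR thresholds here are placeholder values; in the thesis they would follow from the derived outage expressions under the peak/average power constraints:

```python
def required_snr_db(bits):
    """Illustrative SNR threshold (dB) for each modulation mode to meet
    the target outage probability -- placeholder values, not derived."""
    return {1: 8.0, 2: 12.0, 3: 16.0, 4: 20.0}[bits]

def select_mode(channel_snr_db, modes=(4, 3, 2, 1)):
    """Pick the largest modulation size (bits/symbol) whose threshold
    the current channel SNR meets; return 0 (no transmission) if even
    the smallest mode would violate the outage target."""
    for bits in modes:
        if channel_snr_db >= required_snr_db(bits):
            return bits
    return 0

# Spectral efficiency steps up as the slow-varying channel improves,
# which is what makes feedback-driven adaptation viable in FSO.
for snr in (5, 10, 14, 18, 25):
    print(f"channel SNR {snr:2d} dB -> {select_mode(snr)} bits/symbol")
```

Keeping the mode set small, as noted above, keeps both the feedback overhead and the threshold table manageable.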
  • Placeholder
    PhD DissertationPublication
    Development of a single-chip visible light communication receiver
    Kısacık, Rifat; Uysal, Murat; Uysal, Murat; Durak, Kadir; Poyrazoğlu, Göktürk; Pusane, A. E.; Altunbaş, İ.; Department of Electrical and Electronics Engineering; Kısacık, Rifat
    Visible light communication (VLC), which is based on data transmission over visible light (400 nm-700 nm), has drawn attention in the last decade. It is considered an alternative to RF-based technologies and offers an unlicensed frequency spectrum to users. Despite the widely available optical bandwidth, the LEDs used as light sources in VLC impose a limitation on the data rate due to their limited bandwidth, which ranges between a few hundred kHz and a few MHz. At this point, equalization is one of the most widely used methods to achieve higher data rates in VLC systems. A growing body of work has focused on equalization for VLC systems. However, most of these works offer a data rate improvement only at the discrete-component level; they are not integrated and consume more power. Works that offer a solution at the chip level are limited. Additionally, a monolithic optoelectronic receiver that can operate with LEDs of different bandwidths and perform equalization for the employed LED had not been investigated. This work concentrates on the design and implementation of a monolithic optoelectronic receiver that includes a photodiode with an area of 300 µm × 300 µm, a transimpedance amplifier (TIA), an adjustable equalizer controlled by a switching mechanism, and an output buffer. The designed optoelectronic receiver is implemented in 130 nm CMOS technology and tested in an experimental setup. It is employed with three different phosphorescent white LEDs, and for each of the employed LEDs, a data rate improvement of around 20 times the LED bandwidth is achieved at a distance of 2 meters. The implemented chip has an area of 0.6 mm^2 and consumes around 2.05 mW.
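The bandwidth-extension idea behind such an equalizer can be sketched with a first-order frequency-response model. Modeling the LED as a single-pole low-pass and placing an equalizer zero at that pole is a textbook simplification; the corner frequencies below are hypothetical, not the chip's measured values:

```python
import numpy as np

def lowpass_mag(f, f3db):
    """Magnitude response of a first-order low-pass (simplified LED model)."""
    return 1.0 / np.sqrt(1.0 + (f / f3db) ** 2)

def equalized_mag(f, f_led, f_eq_pole):
    """LED response cascaded with a first-order equalizer whose zero
    cancels the LED pole; a new pole at f_eq_pole sets the extended
    bandwidth (an adjustable zero enables use with different LEDs)."""
    zero = np.sqrt(1.0 + (f / f_led) ** 2)       # equalizer zero at LED pole
    pole = 1.0 / np.sqrt(1.0 + (f / f_eq_pole) ** 2)
    return lowpass_mag(f, f_led) * zero * pole

f_led, f_eq = 2e6, 40e6   # hypothetical: 2 MHz LED, 20x extended corner
for fi in (1e5, 1e6, 2e6, 1e7, 4e7):
    print(f"{fi:9.0f} Hz  raw {lowpass_mag(fi, f_led):.3f}"
          f"  equalized {equalized_mag(fi, f_led, f_eq):.3f}")
```

At the original LED corner the raw response has already dropped to about 0.707, while the equalized cascade stays essentially flat out to the new pole, which is the mechanism behind the roughly 20x data rate improvement reported above.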
  • Placeholder
    PhD DissertationPublication
    Performance evaluation and experimental verification of vehicular visible light communication
    Mohamed, Bassam Aly Abdelrahman; Uysal, Murat; Uysal, Murat; Durak, Kadir; Demiroğlu, Cenk; Güçlüoğlu, T.; Baykaş, T.; Department of Electrical and Electronics Engineering; Mohamed, Bassam Aly Abdelrahman
    Vehicle development increasingly relies on data communication to support its future applications. Vehicle platooning and autonomous vehicles (AVs) are among the applications expected to become commercially available within this decade. Consequently, intelligent transportation systems (ITS) have become one of the most attractive areas for wireless communication, in which decentralized environmental notification messages (DENM), cooperative awareness messages (CAM), or any application-specific messages can be exchanged between vehicles. These messages can further be used by traffic management to reduce both travel time and environmental pollution. This motivates automotive manufacturers to rely on vehicular connectivity to capture market growth through technology adoption. ITS applications require a highly reliable, high-data-rate, low-latency communication link. Dedicated short-range communication (DSRC) and cellular vehicle-to-everything (C-V2X) are the conventional radio frequency (RF) technologies for ITS applications. Owing to the bandwidth limitations of the available RF spectrum and electromagnetic interference (EMI) problems, visible light communication (VLC) has emerged as a powerful wireless access technology, offering a wide unlicensed bandwidth and immunity to EMI. VLC performs particularly well in user-dense environments and is considered a complementary technology to WiFi through data offloading. Furthermore, VLC can be used as a safe alternative in RF-restricted areas such as hospital intensive care units and nuclear power plants. Automotive manufacturers have recently increased the use of light-emitting diodes (LEDs) in vehicle bodies. Since VLC allows the dual use of LEDs for illumination and communication, this makes VLC a complementary approach to vehicular connectivity. 
A VLC system modulates the intensity of the light source by superimposing the desired information signal onto the driving direct current (DC). The DC value is selected according to the desired operating point, taking amplitude constraints into account. Since the frequency of the modulating signal is very high, the flicker of the light intensity cannot be perceived by the human eye. Typical modulation choices for VLC systems are on-off keying (OOK) and pulse modulation techniques, while other modulation techniques such as orthogonal frequency division multiplexing (OFDM) have also been proposed to support ultra-high rates at the Gbps level. Several concerns remain for the practical implementation of a VLC link for V2X applications, for example, the practical implementation aspects. Despite the scarcity of commercially available products for vehicular VLC, the front end must be designed and its impact considered in the system design. Furthermore, the effect of the outdoor environment, such as sunlight, must be characterized. Moreover, as in any communication system, channel modeling plays a critical role in vehicular VLC systems and must be carried out experimentally, especially given the asymmetric pattern of the vehicle light source. Another critical aspect is the effect of mobility: the position of the photodetector on the vehicle must be properly identified to maintain a robust vehicular VLC link on different road geometries (i.e., straight and curved roads). As a benchmark, the performance of vehicular VLC should also be compared against RF. Motivated by these considerations, we first investigate baseband processing and the experimental implementation of different modulation techniques for VLC systems. We then design a front end that supports the high transmit power of vehicular LEDs and experimentally investigate its impact on system performance. Next, we characterize the impairments of the outdoor environment and the proper selection of a lens combination suited to the vehicular environment. 
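The DC-biased intensity modulation described above can be sketched in a few lines. The bias point, modulation depth, and normalized current range below are illustrative assumptions, not values used in the thesis:

```python
import numpy as np

def dc_biased_ook(bits, dc_bias=0.6, mod_depth=0.3, i_min=0.0, i_max=1.0):
    """Map a bit stream to LED drive levels: the OOK waveform is
    superimposed on the DC operating point and clipped to the LED's
    allowed amplitude range (non-negativity and maximum drive)."""
    signal = mod_depth * (2 * np.asarray(bits) - 1)   # bits -> +/- mod_depth
    return np.clip(dc_bias + signal, i_min, i_max)

levels = dc_biased_ook([1, 0, 1, 1, 0])
print(levels)
```

Because the receiver detects intensity, the signal must stay non-negative; choosing the bias and depth so that clipping never triggers in normal operation is exactly the amplitude-constraint consideration mentioned above.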
Another critical aspect for vehicular VLC systems is channel modeling, which we address by experimentally modeling the path loss of the asymmetric vehicular light source. This model is further extended to a special case, the truck-to-truck VLC system. Building on these results, we investigate the critical aspect of vehicular VLC link performance in the presence of mobility for both passenger cars and heavy vehicles (i.e., trucks). For benchmarking purposes, we also implement a hybrid VLC/RF system together with a switching algorithm to satisfy a given quality of service (QoS).
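The QoS-driven switching logic of such a hybrid system can be sketched as a simple threshold rule. The thesis does not specify its algorithm here; this is a minimal sketch assuming an SNR-based QoS target and a hysteresis margin (both values hypothetical) to avoid rapid ping-ponging between links:

```python
def hybrid_link_select(vlc_snr_db, rf_snr_db, qos_snr_db=10.0,
                       current="VLC", hysteresis_db=2.0):
    """Prefer the VLC link while it meets the QoS SNR target; otherwise
    fall back to RF. When currently on RF, require an extra hysteresis
    margin before switching back, so a VLC SNR hovering near the
    threshold does not cause constant link flapping."""
    threshold = qos_snr_db + (hysteresis_db if current == "RF" else 0.0)
    return "VLC" if vlc_snr_db >= threshold else "RF"

# VLC comfortably above target -> stay on VLC; deep fade -> use RF;
# marginal recovery while on RF -> hysteresis keeps us on RF.
for vlc_snr, current in ((15, "VLC"), (8, "VLC"), (11, "RF"), (13, "RF")):
    print(f"VLC SNR {vlc_snr:2d} dB (on {current}) -> "
          f"{hybrid_link_select(vlc_snr, 20, current=current)}")
```

A production scheme would also weigh latency and handover cost, but the threshold-plus-hysteresis core is the standard starting point for such VLC/RF switching.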