Photonic Tensor Machine and Multi-level Encoding and Decoding in Wavelength-Multiplexed Photonic Processors
Authors
Guo, Zhimu
Date
Type
thesis
Language
eng
Keyword
Silicon Photonics, Optical Computing, High Performance Computing, Wavelength Division Multiplexing, Machine Learning, Artificial Intelligence
Alternative Title
Abstract
The rapid development of machine learning algorithms and applications has placed increasing emphasis on specialized hardware for heavily parallel computation. Large dense matrix computations, also known as tensor operations, have been the major concern because of their intense demand for fast and efficient computation. Facing this challenge, conventional solutions such as Central Processing Units (CPUs) and field-programmable gate arrays (FPGAs) fall short because of their limited parallel processing capability and numerical computation power. Therefore, several companies have ventured into developing Application-Specific Integrated Circuits (ASICs) to alleviate the burden placed on current hardware, such as Google’s Tensor Processing Unit and NVIDIA’s Tensor Core. These attempts have shown significant improvements in computation speed and efficiency compared to contemporary CPUs and FPGAs, and have proved to be highly programmable and adaptable to various machine learning tasks.
In view of recent developments in photonic neural networks, this project aims to design a photonic tensor machine that exploits the parallel processing power of photonic integrated circuits. The photonic tensor machine focuses on the fast tensor operations required in most machine learning and deep learning tasks, such as feedforward networks and backpropagation. In addition, its simple but effective functionality gives the specialized device very high compatibility with various machine learning algorithms and potential applications in High Performance Computing tasks.
The design of the photonic tensor machine is based on an on-chip two-dimensional array of microring resonators (MRRs), called MRR weight banks, together with on-chip balanced photodetectors, an off-chip wavelength division multiplexer, and an off-chip arrayed waveguide grating. Input information to the photonic tensor machine takes one of two forms: electrical or optical. The optical signal is generated by off-chip tunable laser sources, with the information encoded in the amplitude of the laser. The electrical signal encodes information in the magnitude of the modulation current supplied to a specific MRR. The multiplication between two inputs is achieved by the electrical signal modulating the optical signal, with the modulation magnitude determined by the electrical input. Different optical inputs are distinguished by different wavelengths. Because different wavelengths can propagate in the same waveguide, the photonic tensor machine can process information from multiple channels simultaneously; this parallel processing capability gives the photonic tensor machine its speed advantage.
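A minimal numerical sketch (not taken from the thesis) of the operation the abstract describes, assuming each wavelength channel carries one optical input amplitude, each MRR in a weight-bank row applies a signed weight in [-1, 1] (the sign realized by the balanced photodetector), and each photodetector sums all weighted channels into a single photocurrent. The function and variable names are illustrative only.

import numpy as np

def mrr_row_output(optical_inputs, mrr_weights):
    # One weight-bank row: every wavelength channel is weighted by its MRR
    # and the balanced photodetector sums the channels into one photocurrent.
    return float(np.dot(mrr_weights, optical_inputs))

def photonic_tensor_machine(optical_inputs, weight_matrix):
    # A 2-D array of weight-bank rows sharing the same wavelength-multiplexed
    # input yields a matrix-vector product, one photocurrent per row.
    return np.array([mrr_row_output(optical_inputs, row) for row in weight_matrix])

# Example: 4 wavelength channels feeding 3 weight-bank rows.
x = np.array([0.2, 0.5, 0.8, 0.1])           # optical input amplitudes
W = np.random.uniform(-1, 1, size=(3, 4))    # MRR weight settings per row
print(photonic_tensor_machine(x, W))         # photocurrents proportional to W @ x

In this idealized model the multiply happens where an MRR's transmission scales a channel's amplitude, and the accumulate happens in the photodetector; noise, crosstalk between resonances, and the finite precision of the multi-level encoding studied in the thesis are deliberately omitted.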
Description
Citation
Publisher
License
Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
ProQuest PhD and Master's Theses International Dissemination Agreement
Intellectual Property Guidelines at Queen's University
Copying and Preserving Your Thesis
This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
CC0 1.0 Universal
