Our Guest Speakers
March 28th, 2018 | An Information-Theoretic Framework for Multi-Hop All-Cast Networks, Jonathan Ponniah, Assistant Professor, EE Department, San Jose State University |
Channels with three or more nodes are not fully understood despite decades of research in network information theory. One particular strategy called decode-forward is examined, in which messages generated by sources are sequentially forwarded by intermediate nodes (relays) until reaching their destinations. The decode-forward strategy reflects the dynamics of multi-hop wireless networks, which are of practical interest, and captures the combinatorial challenges of the networking problem. At the heart of this challenge is a fundamental tension in which each node in the network has an incentive to be the last node to decode the source messages. This tension creates many different multi-hop strategies, each of which is contingent on the channel statistics, and all of which seem to lack a unifying structure that makes sense of the possibilities. We propose a framework to characterize the fundamental limits of decode-forward in multi-source, multi-relay, all-cast channels with independent input distributions. This work directly extends Shannon’s result for the single-source single-destination channel to the multi-hop setting.
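For orientation, the point-to-point baseline that the abstract says this work extends is Shannon's channel coding theorem; a minimal statement (background, not a result of the talk) is

```latex
C = \max_{p(x)} I(X;Y),
```

the largest rate at which reliable communication is possible over a single-source, single-destination channel. The framework above asks what replaces this quantity when several sources, relays, and destinations interact.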
February 28th, 2018 | From Differential Privacy to Generative Adversarial Privacy, Peter Kairouz, postdoctoral scholar at Stanford University |
Video |
The explosive growth in connectivity and data collection is accelerating the use of machine learning to guide consumers through a myriad of choices and decisions. While this vision is expected to generate many disruptive businesses and social opportunities, it presents one of the biggest threats to privacy in recent history. In response to this threat, differential privacy (DP) has recently surfaced as a context-free, robust, and mathematically rigorous notion of privacy. The first part of my talk will focus on understanding the fundamental tradeoff between DP and utility for a variety of unsupervised learning applications. Surprisingly, our results show the universal optimality of a family of extremal privacy mechanisms called staircase mechanisms. While the vast majority of works on DP have focused on using the Laplace mechanism, our results indicate that it is strictly suboptimal and can be replaced by a staircase mechanism to improve utility. Our results also show that the strong privacy guarantees of DP often come at a significant loss in utility. The second part of my talk is motivated by the following question: can we exploit data statistics to achieve a better privacy-utility tradeoff? To address this question, I will present a novel context-aware notion of privacy called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to arrive at a unified framework for data-driven privacy that has deep game-theoretic and information-theoretic roots. I will conclude my talk by showcasing the performance of GAP on real-life datasets.
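As background for the DP discussion, a minimal sketch of the classical Laplace mechanism the talk contrasts with staircase mechanisms (the staircase mechanism itself is not reproduced here); the query, dataset, sensitivity, and epsilon values are illustrative assumptions:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy.

    Adds Laplace noise of scale sensitivity/epsilon, the classical
    context-free mechanism contrasted with staircase mechanisms in the talk.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a counting query over a toy binary attribute.
data = np.array([1, 0, 1, 1, 0, 1])      # hypothetical dataset
true_count = int(data.sum())             # a counting query has sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, noisy_count)
```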
November 15th, 2017 | DNA Sequencing: From Information Limits to Genome Assembly Software, Ilan Shomorony, Ph.D., Human Longevity, Inc. |
Emerging long-read sequencing technologies promise to enable near-perfect reconstruction of whole genomes. Assembly of long reads is usually accomplished using a read-overlap graph, in which the true sequence corresponds to a Hamiltonian path. As such, the assembly problem becomes NP-hard under most formulations, and most of the known algorithmic approaches are heuristic in nature. In this talk, we show that by focusing on the informational limits of this problem, rather than the computational ones, one can design assembly algorithms with correctness guarantees. We begin with a basic feasibility question: when does the set of reads contain enough information to allow unambiguous reconstruction of the genome? We show that in most instances of the problem where the reads contain enough information for assembly, the read-overlap graph can be sparsified, allowing the problem to be solved in linear time. To study the remaining information-infeasible instances, we formulate the partial assembly problem from a rate-distortion perspective. We introduce a notion of assembly graph distortion, and propose an algorithm that seeks to minimize this quantity. Finally, we describe how these ideas formed the theoretical foundation of our long-read assembly software HINGE, which outperforms existing tools and is currently being employed by genomics research groups and companies.
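To make the read-overlap graph concrete, here is a toy sketch (not the HINGE implementation, and ignoring sequencing errors) that records exact suffix-prefix overlaps above a threshold as directed, weighted edges between reads:

```python
from itertools import permutations

def suffix_prefix_overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that equals a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def read_overlap_graph(reads, min_len=3):
    """Directed weighted edges (u, v, overlap_length) between distinct reads."""
    edges = []
    for u, v in permutations(range(len(reads)), 2):
        ov = suffix_prefix_overlap(reads[u], reads[v], min_len)
        if ov:
            edges.append((u, v, ov))
    return edges

# Toy example: reads sampled (with overlap) from a short sequence.
reads = ["ACGTAC", "TACGGA", "GGATTC"]
print(read_overlap_graph(reads))   # [(0, 1, 3), (1, 2, 3)]
```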
October 25th, 2017 | Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning, Nariman Farsad, Postdoctoral Fellow, EE Department, Stanford |
Molecular communication is a new and bio-inspired field, where chemical signals are used to transfer information instead of electromagnetic or electrical signals. In this paradigm, the transmitter releases chemicals or molecules and encodes information on some property of these signals such as their timing or concentration. The signal then propagates through the medium between the transmitter and the receiver by different means such as diffusion, until it arrives at the receiver, where the signal is detected and the information decoded. This new multidisciplinary field can be used for in-body communication, secrecy, networking microscale and nanoscale devices, infrastructure monitoring in smart cities and industrial complexes, as well as for underwater communications. Since these systems are fundamentally different from telecommunication systems, most techniques that have been developed over the past few decades to advance radio technology cannot be applied to them directly. In this talk, we first explore some of the fundamental limits of molecular communication channels, evaluate how capacity scales with respect to the number of particles released by the transmitter, and characterize the optimal input distribution. Finally, since the underlying channel models for some molecular communication systems are unknown, we demonstrate how techniques from machine learning and deep learning can be used to design components such as detection algorithms, directly from transmission data, without any knowledge of the underlying channel models.
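As a toy illustration of the channel described above (a sketch under simplified 1-D drift-diffusion assumptions, not the models analyzed in the talk), the following simulates molecules released by the transmitter and the random times at which they reach the receiver; all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def arrival_times(n_molecules, distance=10.0, drift=1.0, diff=1.0, dt=0.01, t_max=100.0):
    """First-passage times of 1-D drift-diffusion particles past `distance`.

    Each released molecule takes Brownian steps with drift until it crosses
    the receiver boundary; molecules that have not arrived by t_max are lost.
    """
    times = []
    n_steps = int(t_max / dt)
    for _ in range(n_molecules):
        x, t = 0.0, 0.0
        for _ in range(n_steps):
            x += drift * dt + np.sqrt(2.0 * diff * dt) * rng.standard_normal()
            t += dt
            if x >= distance:
                times.append(t)
                break
    return np.array(times)

# On-off keying: bit 1 releases molecules, bit 0 releases none; the receiver
# counts arrivals within the symbol interval and thresholds the count.
t = arrival_times(n_molecules=100)
symbol_interval = 20.0
count = int(np.sum(t <= symbol_interval))
print("arrivals in first symbol interval:", count, "-> decide bit", int(count > 10))
```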
June 28th, 2017 | OTFS: A New Generation of Modulation Addressing the Challenges of 5G, Dr. Anton Monk, VP Strategic Alliances and Standards, Cohere Technologies |
A new two-dimensional modulation technique called Orthogonal Time Frequency Space (OTFS) modulation, designed in the delay-Doppler domain, is introduced as a waveform ideally suited to new 5G use cases. Through this design, which exploits full diversity over time and frequency, OTFS coupled with equalization converts the fading, time-varying wireless channel experienced by modulated signals such as OFDM into a time-independent channel with a complex channel gain that is roughly constant for all symbols. Thus, transmitter adaptation is not needed. This extraction of the full channel diversity allows OTFS to greatly simplify system operation and significantly improves performance, particularly in systems with high Doppler, short packets, and large antenna arrays. Simulation results indicate at least several dB of block error rate performance improvement for OTFS over OFDM in all of these settings, which translates to significant spectral efficiency improvements. In addition, these results show that even at very high Doppler (500 km/h), OTFS approaches channel capacity, whereas the performance of OFDM under typical design parameters breaks down.
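A minimal sketch of the mapping at the heart of OTFS, the inverse symplectic finite Fourier transform (ISFFT) from the delay-Doppler grid to the time-frequency grid; the grid sizes and normalization convention here are assumptions, not Cohere's implementation:

```python
import numpy as np

M, N = 8, 4                      # assumed delay bins x Doppler bins
x_dd = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)

# ISFFT: inverse FFT along the Doppler axis, FFT along the delay axis,
# mapping symbols placed on the delay-Doppler grid to time-frequency.
X_tf = np.fft.fft(np.fft.ifft(x_dd, axis=1), axis=0) * np.sqrt(N / M)

# SFFT (the inverse operation at the receiver) recovers the grid exactly.
x_rec = np.fft.ifft(np.fft.fft(X_tf, axis=1), axis=0) / np.sqrt(N / M)
print(np.allclose(x_rec, x_dd))   # True
```

Because the transform spreads each delay-Doppler symbol over the whole time-frequency plane, every symbol sees the full channel diversity, which is the mechanism behind the roughly constant effective channel gain described above.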
June 22nd, 2017 | Computational Microscopy, Prof. Laura Waller, UC Berkeley, Sponsored by the Santa Clara Valley Chapter of the IEEE Signal Processing Society |
Slides |
Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. This talk will describe new methods for computational microscopy with coded illumination, based on a simple and inexpensive hardware modification of a commercial microscope, combined with advanced image reconstruction algorithms. In conventional microscopes and cameras, one must trade off field-of-view and resolution. Our methods allow both simultaneously by using multiple images, resulting in Gigapixel-scale reconstructions with resolution beyond the diffraction limit of the system. Our algorithms are based on large-scale nonlinear non-convex optimization procedures for phase retrieval, with appropriate priors.
Visit laurawaller.com for related publications and projects
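The phase retrieval step mentioned in the abstract can be written, in generic form (a simplification, not the talk's exact formulation), as a nonconvex least-squares problem over the unknown complex field x given coded-illumination intensity measurements y_i:

```latex
\hat{x} \;=\; \arg\min_{x}\; \sum_{i} \bigl\| \, |A_i x|^2 - y_i \,\bigr\|_2^2 \;+\; \lambda\, R(x),
```

where A_i models the i-th illumination pattern together with the microscope optics, and R is a prior (regularizer) on the object.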
May 24th, 2017 | Regenerating codes for distributed storage, Prof. Mary Wootters, EE Department, Stanford University, Co-sponsored with the Santa Clara Valley Chapter of the IEEE Magnetics Society |
Slides |
In distributed storage systems, large amounts of data are distributed across many nodes, which are prone to failure. In this talk, I’ll survey regenerating codes, which are a type of error correcting code designed to protect data in distributed systems from failures, while at the same time enabling extremely efficient repair of missing data. I’ll give the basic framework of regenerating codes, and discuss some recent research aimed at establishing theoretical limitations of regenerating codes from an algebraic perspective.
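The basic framework referred to above is usually summarized by the cut-set bound of Dimakis et al. (background, not a result of the talk): for an (n, k, d) regenerating code storing α symbols per node and downloading β symbols from each of d helper nodes during repair, a file of size B is recoverable only if

```latex
B \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\; (d-i)\beta\},
```

whose extreme points give the minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating codes on the storage-repair-bandwidth tradeoff.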
May 16th, 2017 | A Universal Low-latency Real-time Optical Flow based Stereoscopic Panoramic Video Communication System for AR/VR, Dr. Jiangtao Wen, Tsinghua University, China. Hosted by IEEE SPS Chapter and co-sponsored by IEEE CES/ITS/SSCS Chapters & Intel Latino Network. |
Slides |
This talk introduces an optimized system for real-time, low-latency stereoscopic panoramic video communications that is camera agnostic. After intelligent camera calibration, the system is capable of stitching inputs from different cameras using a real-time, low-latency optical-flow-based algorithm that intelligently learns input video features over time to improve stitch quality. Depth information is also extracted in the process. The resulting stereoscopic panoramic video is then encoded with content-adaptive temporal and/or spatial resolution to achieve low bitrate while maintaining good video quality. Various aspects of the system, including the optimized stitching algorithm, parallelization and task scheduling, as well as encoding, will be introduced with demos with conventional (non-panoramic) professional and consumer grade cameras as well as integrated panoramic cameras.
April 26th, 2017 | Spacetime replication of continuous variable quantum information, Grant Salton, Stanford Institute for Theoretical Physics, Co-sponsored with the Santa Clara Valley Chapter of the IEEE Photonics Society |
Are there fundamental restrictions on the flow of information through space and time? What about the flow of quantum information? It is well known that no information can be transmitted faster than light, and it is also known that quantum information cannot be cloned or copied arbitrarily. These two “laws” place restrictions on the transmission of information through spacetime, but are there other limitations? The answer is no: the only such restrictions are (1) no-signalling (faster-than-light communication), and (2) no-cloning of quantum information. This talk will first provide a brief introduction to some fundamentals of quantum information theory (including the no-cloning theorem and quantum error correction), and then will show that any process that transmits information through spacetime without violating (1) or (2) is physically realizable as a so-called spacetime information replication task. In particular, this talk will describe how one can succeed at distributing information in seemingly impossible ways using quantum error correction, and it will showcase new, continuous variable quantum error correcting codes that can efficiently replicate information in spacetime. If time permits, a proposal will be outlined for an optical experiment to realize information replication in the lab.
March 22nd, 2017 | Foundations of Energy Harvesting and Remotely Powered Communication Systems, Prof. Ayfer Ozgur Aydin, Stanford EE Department, Co-sponsored with the IEEE SCV Consumer Electronics Society and the IEEE SCV Communications Society |
The next exponential growth in connectivity is projected to be no longer in access between people but in connecting objects and machines in the age of the “Internet of Everything” (IoE). Projections show sensor demand growing from billions in 2012 to trillions within the next decade. This has led to significant recent interest in building tiny and low-cost wireless radios that can form the fabric of smart technologies and cyberphysical systems, enabling a plethora of exciting applications from in-body health monitoring to smart homes and transportation systems. However, achieving orders of magnitude reduction in the cost and size of wireless radios often requires eliminating external components such as batteries and crystal oscillators. In this talk, we will discuss the information and communication theoretic foundations for such radios, including communication with energy harvesting and remotely powered wireless devices and, time permitting, also with crystal-free radios.
Epilepsy is one of the most common neurological disorders, affecting about 1% of the world population. While in most cases treating epilepsy with antiepileptic drugs (AED) is successful, about a third of the patients cannot be adequately treated with AEDs. The main treatment for such patients is a surgical procedure for removal of the seizure onset zone (SOZ), the area in the brain from which the seizures originate. The main tool for accurately identifying the SOZ is electrocorticography (ECoG) recordings, taken from grids of electrodes placed on the cortex to allow a direct measurement of the brain’s electric activity. In this talk we will present a novel SOZ localization algorithm based on ECoG recordings. Our underlying hypothesis is that seizures start in the SOZ and then spread to surrounding areas in the brain. Thus, signals recorded at electrodes close to the SOZ should have a relatively large causal influence on the rest of the recorded signals. To evaluate the statistical causal influence between the recorded signals, we represent the set of electrodes using a directed graph, where the edges’ weights are the pair-wise causal influence, quantified via the information theoretic functional of directed information. The directed information is estimated from the ECoG recording using the nearest-neighbor estimation paradigm. Finally, the SOZ is inferred from the obtained network via a variation of the famous PageRank algorithm. Testing the proposed algorithm on 15 ECoG recordings of epileptic patients, listed in the iEEG portal, shows a close match with the SOZ estimated by expert neurologists.
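A minimal sketch of the pipeline described above, assuming a hypothetical `directed_information` placeholder in place of the talk's nearest-neighbor estimator, and a plain reversed-graph PageRank standing in for the talk's PageRank variant; the graph and ranking use networkx:

```python
import numpy as np
import networkx as nx

def directed_information(x, y):
    """Placeholder for a pairwise directed-information estimate I(x -> y).

    Stands in for the nearest-neighbor estimator used in the talk; here a
    crude lagged-correlation proxy is returned purely so the sketch runs.
    """
    return max(float(np.corrcoef(x[:-1], y[1:])[0, 1]), 0.0)

def rank_electrodes(recordings):
    """recordings: array of shape (n_electrodes, n_samples)."""
    n = recordings.shape[0]
    g = nx.DiGraph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i != j:
                w = directed_information(recordings[i], recordings[j])
                if w > 0:
                    g.add_edge(i, j, weight=w)
    # Reverse the edges so electrodes exerting strong outgoing causal
    # influence rank highest (an assumption, not the talk's exact variant).
    return nx.pagerank(g.reverse(), weight="weight")

rng = np.random.default_rng(1)
toy = rng.standard_normal((4, 500))     # 4 hypothetical ECoG channels
toy[1, 1:] += 0.8 * toy[0, :-1]         # channel 0 causally drives channel 1
print(rank_electrodes(toy))             # channel 0 should score highest
```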
Jan 29th, 2017 | Towards transparent and scalable computational integrity, Prof. Eli Ben-Sasson, Computer Science Department, Technion - Israel Institute of Technology, Co-sponsored with the Silicon Valley Ethereum Meetup |
December 13th, 2016 | Super-resolution Image Reconstruction – Methods and Lessons Learned, Prof. Sally Wood, Thomas J. Bannan Professor in Electrical Engineering, Santa Clara University, Hosted by the IEEE Signal Processing Society Santa Clara Valley Chapter |
Slides |
Although there is some variation in the interpretation of the term “super-resolution” in different imaging application contexts, for computational methods it typically refers to the use of multiple images acquired at a low spatial resolution to compute a single image with increased spatial resolution. The motivation for this may be to improve the perceptual quality of the image content or to derive more accurate information from the image content, such as the location of features. This may be attractive in situations where a higher resolution camera cannot be used because of size or cost, for example. A potential application, which may be fixed or mobile, is monitoring and surveillance. The additional information used to improve the spatial resolution may be some combination of a-priori assumptions and multiple passively acquired images in which the desired high frequency information is present, but aliased. Performance measures of super-resolution algorithms may be based on measures of image accuracy, measures of image quality, computational efficiency, or robustness in the presence of measurement noise and image acquisition model error. While computational efficiency is relatively unambiguous, the metrics for accuracy and robustness may be debated. This talk will provide an introduction to super-resolution methods and applications, explore the effects of noise and model error on resolution improvement, describe one specific project application, and discuss general lessons learned.
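The multi-image setting described above is commonly written with a linear acquisition model (a generic formulation, not the specific project's): each low-resolution frame y_k is a warped, blurred, downsampled, noisy view of the high-resolution image x,

```latex
y_k \;=\; D\,H\,W_k\,x + n_k, \qquad
\hat{x} \;=\; \arg\min_{x}\; \sum_{k} \|y_k - D H W_k x\|_2^2 + \lambda\, R(x),
```

where W_k is the k-th frame's motion warp, H the blur, D the downsampling operator, and R encodes the a-priori assumptions mentioned in the abstract; the aliased high-frequency content in the y_k is what makes resolution recovery possible.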
Nov 30th, 2016 | Information Theoretic Security, Dr. Ashish Khisti, University of Toronto, Co-sponsored with the IEEE Communications Society Santa Clara Valley Chapter |
A variety of approaches to secure communication have been used since ancient civilization. Claude Shannon introduced the notion of perfect secrecy, using an information theoretic approach, in 1949. This talk will introduce the framework of Information Theoretic Security (ITS) and discuss various applications. Our first application will pertain to wireless networks. We will discuss how principles of ITS inspire new approaches for securing wireless networks at the physical layer. Our second application will pertain to biometric systems. We will discuss the need for hash functions robust against measurement noise, and present a solution based on error correction codes. Our final application will pertain to smart-metered systems. We will discuss how a rechargeable battery can be used to mask the instantaneous electricity load from a utility company, and discuss information theoretic measures for privacy in these systems.
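As a concrete anchor for the physical-layer discussion, the classical Wyner wiretap result (background, not a result of the talk): with legitimate receiver output Y and eavesdropper output Z, the secrecy capacity of the degraded wiretap channel is

```latex
C_s \;=\; \max_{p(x)} \bigl[\, I(X;Y) - I(X;Z) \,\bigr],
```

so positive secure rates are possible whenever the legitimate receiver's channel is statistically better than the eavesdropper's, which is the principle the wireless application builds on.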
Oct 6th, 2016 | Indoor and Outdoor Image-based Localization for Mobile Devices, Prof. Avideh Zakhor, EECS Department, UC Berkeley and Indoor Reality, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
Image geo-location has a wide variety of applications in GPS-denied environments such as indoors, as well as error-prone outdoor environments where the GPS signal is unreliable. Besides accuracy, an inherent advantage of image-based localization is recovery of orientation as well as position. This could be important in applications such as navigation and augmented reality. In this talk, I describe a number of indoor and outdoor image-based localization approaches and characterize their performance in a variety of scenarios. I start with a basic divide and conquer photo-matching strategy for large area outdoor localization and show its superior performance over compass and GPS on today’s cell phones; I characterize the performance of this system for a 30,000 image database for Oakland, CA as well as a 5 million image database for a 10,000 square km area in Taiwan. Next I describe a fast, automated methodology for Simultaneous Multi-modal fingerprinting And Physical mapping (SMAP) of indoor environments to be used for indoor positioning. The sensor modalities consist of images, WiFi, and magnetic signals. I show that one-shot, static image-based localization has a 50th percentile error of less than 1 meter and an 85th percentile error of less than 2 meters. Finally, I describe the associated multi-modal indoor positioning algorithms for dynamic tracking of users and show that they outperform uni-modal schemes based on WiFi alone. Future work consists of demonstrating the scheme on wearable devices such as the Glass and the Watch.
June 9th, 2016 |
Noncoherent communications in large antenna arrays, Mainak Chowdhury, Wireless Systems Laboratory, Stanford University, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
Coherent schemes with accurate channel state information are considered to be important to realizing many benefits from massive multiple-input multiple-output (massive MIMO) cellular systems involving large antenna arrays at the base station. In this talk we introduce and describe noncoherent communication schemes, i.e., schemes which do not use any instantaneous channel state information, and find that they have the same scaling behavior of achievable rates as coherent schemes with the number of antennas. This holds true not only for Rayleigh fading, but also for ray tracing models. Analog signal processing architectures for large antenna arrays based on our analyses will be described. We also consider wideband large antenna systems and identify a bandwidth limited regime where having channel state information does not increase scaling laws, and outside of which there is a clear rate penalty. This talk is based on joint work with Alexandros Manolakos, Andrea Goldsmith, Felipe Gomez-Cuba, and Elza Erkip.
March 23rd, 2016 |
When your big data seems too small: accurate inferences beyond the empirical distribution, Prof. Gregory Valiant, Computer Science Dept., Stanford, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
We discuss two problems related to the general challenge of making accurate inferences about a complex distribution, in the regime in which the amount of data (i.e. the sample size) is too small for the empirical distribution of the samples to be an accurate representation of the underlying distribution. The first problem is the basic task of inferring properties of a discrete distribution, given access to independent draws. We show that one can accurately recover the unlabelled vector of probabilities of all domain elements whose true probability is greater than 1/(n log n). Stated differently, one can learn, up to relabelling, the portion of the distribution consisting of elements with probability greater than 1/(n log n). This result has several curious implications, including leading to an optimal algorithm for “de-noising” the empirical distribution of the samples, and implying that one can accurately estimate the number of new domain elements that would be seen given a new larger sample, of size up to n log n. (Extrapolation beyond this sample size is provably impossible information-theoretically, without additional assumptions on the distribution.) While these results are applicable generally, we highlight an adaptation of this general approach to some problems in genomics (e.g. quantifying the number of unobserved protein coding variants). The second problem we consider is the task of accurately estimating the eigenvalues of the covariance matrix of a (high-dimensional real-valued) distribution: the “population spectrum”. (These eigenvalues contain basic information about the distribution, including the presence or lack of low-dimensional structure in the distribution and the applicability of many higher-level machine learning and multivariate statistical tools.) As we show, even in the regime where the sample size is linear or sublinear in the dimensionality of the distribution, and hence the eigenvalues and eigenvectors of the empirical covariance matrix are misleading, accurate approximations to the true population spectrum are possible. This talk is based on three papers, which are joint works with Paul Valiant, James Zou, and Weihao Kong.
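The first problem revolves around the sample's "fingerprint" (how many domain elements appear exactly i times), which also underlies the classical Good-Turing estimate of unseen probability mass; a toy sketch of these background quantities, not the talk's algorithm:

```python
from collections import Counter

def fingerprint(sample):
    """F[i] = number of distinct elements observed exactly i times."""
    counts = Counter(sample)
    return Counter(counts.values())

def good_turing_unseen_mass(sample):
    """Classical Good-Turing estimate of the total probability of unseen elements."""
    f = fingerprint(sample)
    return f[1] / len(sample)

sample = ["a", "b", "a", "c", "d", "a", "b", "e"]
print(fingerprint(sample))              # Counter({1: 3, 2: 1, 3: 1})
print(good_turing_unseen_mass(sample))  # 0.375: estimated mass on unseen symbols
```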
April 6th, 2016 |
A Signal-Processing Approach to Modeling Vision, and Applications, Dr. Sheila S. Hemami, Chair, Electrical and Computer Engineering, Northeastern University, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
Current state-of-the-art algorithms that process visual information for end use by humans treat images and video as traditional signals and employ sophisticated signal processing strategies to achieve their excellent performance. These algorithms also incorporate characteristics of the human visual system (HVS), but typically in a relatively simplistic manner, and achievable performance is reaching an asymptote. However, large gains are still realizable with current techniques by aggressively incorporating HVS characteristics to a much greater extent than is presently done, combined with a good dose of clever signal processing. Achieving these gains requires HVS characterizations which better model natural image perception ranging from sub-threshold perception (where distortions are not visible) to suprathreshold perception (where distortions are clearly visible). In this talk, I will review results from our lab characterizing the responses of the HVS to natural images, and contrast these results with ‘classical’ psychophysical results. I will also present several examples of signal processing algorithms which have been designed to fully exploit these results.
Feb 24th, 2016 |
Interleaved direct bandpass sampling for software defined radio/radar receivers, Prof. Bernard Levy, Department of Electrical and Computer Engineering, UC Davis, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
Slides |
Due to their low hardware complexity, direct bandpass sampling front ends have become attractive for software defined radio/radar applications. These front ends require three elements: a tunable filter to select the band of interest, a wideband sample and hold to acquire the bandpass signal, and finally an analog to digital converter (ADC) to digitize the signal. Unfortunately, due to the overlap of aliased copies of the positive and negative signal spectrum components, if a single ADC is employed, depending on the exact position of the band where the signal is located, it is not always possible to sample a signal of occupied bandwidth B at a sampling rate f_s just above the 2B Nyquist rate. Sometimes, much higher rates are needed. For software radio applications, this represents a significant challenge, since one would normally prefer to use a single ADC with a fixed sampling rate to sample all possible signals of interest. A solution to this problem was proposed as early as 1953 by Kohlenberg, who showed that Nyquist rate sampling can be achieved by using time-interleaved sampling, where two sub-ADCs sample the signal at a rate f_s/2 each, but with a relative timing offset d (such that 0 < d < 1 if the offset is measured relative to the sub-ADC sampling period). However, certain offsets are forbidden, since for example d = 1/2 would result in a uniform overall ADC. In this presentation, a method will be described to simultaneously sample and demodulate the bandpass signal of interest. The sampled complex envelope of the bandpass signal is computed entirely in the DSP domain by passing the sub-ADC samples through digital FIR filters, followed by a digital demodulation operation. However, as the quality factor (ratio of the carrier frequency f_c to the signal bandwidth B) of the front-end selection filter increases, the performance of the envelope computation method becomes progressively more sensitive to mismatches between the nominal offset d0 and the actual offset d of the two sampling channels. To overcome this problem, a blind calibration technique to estimate and correct mismatches is presented.
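The band-position constraint alluded to above is the classical uniform bandpass sampling condition (background, not the interleaved scheme itself): for a signal occupying [f_L, f_H] with bandwidth B = f_H - f_L, a single ADC at rate f_s avoids aliasing only if, for some integer n with 1 ≤ n ≤ ⌊f_H / B⌋,

```latex
\frac{2 f_H}{n} \;\le\; f_s \;\le\; \frac{2 f_L}{n-1},
```

which is why a rate just above 2B works only for favorable band positions, and interleaved (Kohlenberg) sampling is needed to reach the Nyquist rate in general.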
Feb 18th, 2016 |
JPEG Emerging Standards, Prof. Dr. Touradj Ebrahimi, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, Co-sponsored with the IEEE Signal Processing Society Santa Clara Valley Chapter |
Video |
The JPEG standardization committee has played an important role in the digital revolution of the last quarter of a century. The legacy JPEG format, which became an international standard 21 years ago, is the dominant picture format in many imaging applications. This dominance does not seem to be slowing down: the number of JPEG images uploaded to social networks alone surpassed 2 billion per day in 2014, compared to less than 1 billion the year before. JPEG 2000, which became an international standard 15 years ago, has been the format of choice in a number of professional applications, among which contribution in broadcast and digital cinema are two examples. This talk starts by providing an overview of a recently developed image format to deal with High Dynamic Range content called JPEG XT. JPEG XT has been defined to be backward compatible with the legacy JPEG format in order to facilitate its use in the current imaging ecosystem. We will then discuss JPEG PLENO, a recent initiative by the JPEG committee to address an emerging modality in imaging, namely, plenoptic imaging. “Pleno” is a reference to “plenoptic”, a mathematical representation which not only provides color information of a specific point in a scene, but also how it changes when observed from different directions and distances. “Pleno” is also the Latin word for “complete”, a reference to the vision of the JPEG committee, which believes future imaging will provide a more complete description of scenes, well beyond what is possible today. The talk will conclude with a quick overview of two potential standardization initiatives under investigation. The first, referred to as JPEG Privacy & Security, facilitates protection and security in legacy JPEG images, such as coping with privacy concerns. The second, called JPEG XS, puts an emphasis on low latency, low complexity and transparent quality as well as low cost, desirable in a number of applications, including broadcasting and high bandwidth links between devices and displays.
January 15th, 2015 | Hebbian-LMS Learning Algorithm, Dr. Bernard Widrow, Stanford University, Joint meeting with the IEEE Signal Processing Society Chapter |
Slides |
Hebbian learning is widely accepted in the fields of psychology, neurology, and neurobiology. It is one of the fundamental premises of neuroscience. The LMS (least mean square) algorithm of Widrow and Hoff is the world’s most widely used adaptive algorithm, fundamental in the fields of signal processing, control systems, pattern recognition, and artificial neural networks. These are very different learning paradigms. Hebbian learning is unsupervised. LMS learning is supervised. However, a form of LMS can be constructed to perform unsupervised learning and, as such, LMS can be used in a natural way to implement Hebbian learning. Combining the two paradigms creates a new unsupervised learning algorithm that has practical engineering applications and may provide insight into learning in living neural networks.
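For readers unfamiliar with LMS, a minimal adaptive-filter sketch of the classical supervised Widrow-Hoff update (the unsupervised Hebbian-LMS variant from the talk is not reproduced here); the system, signals, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms(x, d, n_taps=4, mu=0.01):
    """Classical Widrow-Hoff LMS: w <- w + mu * e * x, one update per sample."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # [x[k], x[k-1], ..., x[k-n_taps+1]]
        y = w @ xk                           # adaptive filter output
        err[k] = d[k] - y                    # error against the desired signal
        w = w + mu * err[k] * xk             # LMS weight update
    return w, err

# Toy supervised use: identify an unknown 4-tap FIR system from noisy data.
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _ = lms(x, d)
print(w_hat)   # should be close to h_true
```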
February 27th, 2015 | Shannon-inspired Statistical Computing, Prof. Naresh R. Shanbhag, University of Illinois at Urbana-Champaign, Joint meeting with the IEEE Solid State Circuits Society Chapter |
Moore’s Law has been the driving force behind the exponential growth in the semiconductor industry for the past five decades. Today, energy efficiency and reliability challenges in nanoscale CMOS (and beyond-CMOS) processes threaten the continuation of Moore’s Law. This talk will describe our work on developing a Shannon-inspired statistical information processing paradigm that seeks to address this issue by treating the problem of computing on unreliable devices and circuits as one of information transfer over an unreliable/noisy channel. Such a paradigm seeks to transform computing from its von Neumann roots in data processing to Shannon-inspired information processing. Key elements of this paradigm are the use of statistical signal processing, machine learning principles, equalization and error-control for designing error-resilient on-chip computation, communication, storage, and mixed-signal analog front-ends. The talk will provide a historical perspective and demonstrate examples of Shannon-inspired designs of on-chip subsystems. This talk will conclude with a brief overview of the Systems On Nanoscale Information fabriCs (SONIC) Center, a multi-university research center based at the University of Illinois at Urbana-Champaign, focused on developing a Shannon/brain-inspired foundation for information processing on CMOS and beyond-CMOS nanoscale fabrics.
March 25th, 2015 | How to estimate mutual information with insufficient sampling, Jiantao Jiao, Stanford University |
Mutual information emerged in Shannon’s 1948 masterpiece as the answer to the most fundamental questions of compression and communication. Since that time, however, it has been adopted and widely used in a variety of other disciplines. In particular, its estimation has emerged as a key component in fields such as machine learning, computer vision, systems biology, medical imaging, neuroscience, genomics, economics, ecology, and physics. In practical applications, the underlying distribution is usually unknown, so it is of utmost importance to obtain accurate mutual information estimates from empirical data for inference. We discuss a new approach to the estimation of mutual information between random objects with distributions residing in high-dimensional spaces (e.g., large alphabets), as is the case in increasingly many applications. We will discuss the shortcomings of traditional estimators, and suggest a new estimator achieving essentially optimum worst-case performance under L2 risk (i.e., achieves the minimax rates). We apply this new estimator in various applications, including the Chow–Liu algorithm and the Tree-Augmented Naive Bayes (TAN) classifier. Experiments with these and other algorithms show that replacing the empirical mutual information by the proposed estimator results in consistent and substantial performance boosts on a wide variety of datasets.
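For contrast with the estimator proposed in the talk, the traditional plug-in ("empirical mutual information") estimator looks like this; it is exactly the quantity the talk argues is suboptimal when samples are scarce relative to the alphabet size (the example data and alphabet sizes are made up):

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(xs, ys):
    """Empirical (plug-in) estimate of I(X;Y) in nats from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) log[ p(x,y) / (p(x) p(y)) ] with all probabilities empirical
        mi += (c / n) * np.log(c * n / (px[x] * py[y]))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 50, size=200)            # large alphabet, few samples
y = (x + rng.integers(0, 5, size=200)) % 50  # noisy copy of x
print(plugin_mutual_information(x.tolist(), y.tolist()))
```

In this undersampled regime the plug-in value is heavily biased, which is the motivation for the minimax-rate-optimal estimator discussed in the talk.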
November 17th, 2015 | Rate-distortion of sub-Nyquist sampled processes, Alon Kipnis, Ph.D. Candidate, Stanford EE Department |
Consider the task of analog to digital conversion, in which a continuous time random process is mapped into a stream of bits. The optimal trade-off between the bitrate and the minimal average distortion in recovering the waveform from its bit representation is described by the Shannon rate-distortion function of the continuous-time source. Traditionally, in solving for the optimal mapping and the rate-distortion function, we assume that the analog waveform has a discrete time version, as in the case of a band-limited signal sampled above its Nyquist frequency. Such an assumption, however, may not hold in many scenarios due to wideband signaling and A/D technology limitations. A more relevant assumption in such scenarios is that only a sub-Nyquist sampled version of the source can be observed, and that the error in analog to digital conversion is due to both sub-sampling and finite bit representation. This assumption gives rise to a combined sampling and source coding problem, in which the quantities of merit are the sampling frequency, the bitrate and the average distortion. In this talk we will characterize the optimal trade-off among these three parameters. The resulting rate-distortion-sampling frequency function can be seen as a generalization of the classical Shannon-Kotelnikov-Whittaker sampling theorem to the case where a finite bit rate representation is required. This characterization also provides us with a new critical sampling rate: the minimal sampling rate required to achieve the rate-distortion function of a Gaussian stationary process for a given rate-distortion pair. That is, although the Nyquist rate is the minimal sampling frequency that allows perfect reconstruction of a band-limited signal from its samples, relaxing perfect reconstruction to a prescribed distortion allows sampling below the Nyquist rate while achieving the same rate-distortion trade-off.
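As a point of reference for the trade-off described above, the classical rate-distortion function of a memoryless Gaussian source with variance σ² under mean-squared error (background, not the talk's sampled-process result) is

```latex
R(D) \;=\; \tfrac{1}{2}\log\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2,
```

and for Gaussian stationary processes the analogous expression follows from reverse water-filling over the power spectral density; the talk generalizes this picture to sources observed only through sub-Nyquist samples.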
December 9th, 2015 | Ph.D. Elevator Pitch to Professionals Co-sponsored with IEEE SCV Signal Processing Chapter |
Looking for new and local talent? Want to keep yourself and your company up-to-date on the latest hot topics and technical contributions in Signal Processing? You can do both by attending the “Ph.D. Elevator Pitch to Professionals” event. The IEEE Signal Processing Chapter of Santa Clara Valley is organizing an event to connect Ph.D. candidates close to graduation and newly graduated Ph.D.s to local companies looking for talent and new technologies. A panel of students will explain their Ph.D. contributions and results in the form of an elevator pitch, followed by Q&A and a social event to continue the conversations into a poster session.
Guest Speaker:
- Alex Acero, President, IEEE Signal Processing Society (Slides)
- David Held, Computer Science Department, Stanford University (Slides) Video
- Amin Kheradmand, Department of Electrical Engineering, University of California at Santa Cruz (Slides)
- Koji Seto, Department of Electrical Engineering, Santa Clara University (Slides) Video
April 23rd, 2014 | Information flow in Wireless Networks: How similar is it to water flowing in pipes? Adnan Raja, Ph.D., Fastback Networks |
A wired network is modeled as a flow network, which is a directed graph where each edge has a capacity and the flow on each edge cannot exceed the capacity. This is similar to a commodity network, like traffic in a road system or fluid in pipes. The very well-known max-flow min-cut theorem characterizes the maximum flow from a source terminal to a destination terminal in such a network and also gives an algorithm to schedule an optimal flow. But what about a wireless network, with, say, one radio sending information to another distant radio with the help of a multitude of relay nodes? There are no edges here. Wireless communication is inherently characterized by broadcast of the signal from the transmitters and interference of signals at the receiver. In this talk, I will present our research, which characterizes the maximum information flow in a wireless relay network. Our research shows that for wireless networks, too, there is an analogue to the max-flow min-cut theorem of the wired network. Our research also gives an approximately optimal scheme for the relay network, called the compress-and-forward scheme, where each relay node only forwards optimal information to aid the end-to-end communication.
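The wired-network baseline in the first sentences is easy to make concrete; a toy max-flow/min-cut computation with networkx on a small directed graph (the edge capacities are made up):

```python
import networkx as nx

# A small flow network: source "s", sink "t", capacities on directed edges.
g = nx.DiGraph()
g.add_edge("s", "a", capacity=3.0)
g.add_edge("s", "b", capacity=2.0)
g.add_edge("a", "b", capacity=1.0)
g.add_edge("a", "t", capacity=2.0)
g.add_edge("b", "t", capacity=3.0)

flow_value, flow_dict = nx.maximum_flow(g, "s", "t")
cut_value, (reachable, non_reachable) = nx.minimum_cut(g, "s", "t")

# Max-flow min-cut theorem: the two values coincide (both 5.0 here).
print(flow_value, cut_value)
```

The wireless result described above replaces edge capacities with information-theoretic cut values over the broadcast-and-interference channel, which is what makes the analogue nontrivial.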
May 7th, 2014 | Let’s Not Dumb Down the History of Computer Science, Dr. Donald Knuth, Professor Emeritus, Stanford University |
For many years the history of computer science was presented in a way that was useful to computer scientists. But nowadays almost all technical content is excised; historians are concentrating rather on issues like how computer scientists have been able to get funding for their projects, and/or how much their work has influenced Wall Street. We no longer are told what ideas were actually discovered, nor how they were discovered, nor why they are great ideas. We get only a scorecard. Similar trends are occurring with respect to other sciences. Historians generally now prefer external history to internal history, so that they can write stories that appeal to readers with almost no expertise. Historians of mathematics have thankfully been resisting such temptations. In this talk the speaker will explain why he is so grateful for the continued excellence of papers on mathematical history, and he will make a plea for historians of computer science to get back on track.
August 6th, 2014 |
IEEE Tutorial on LDPC Decoding: VLSI Architectures and Implementations |
September 24th, 2014 | Information theory and signal processing for the world’s smallest computational video camera, Dr. David G. Stork, Rambus |
We describe a new class of computational optical sensors and imagers that do not rely on traditional refractive or reflective focusing but instead on special diffractive optical elements integrated with CMOS photodiode arrays. Images are not captured, as in traditional imaging systems, but rather computed from raw photodiode signals. Because such imagers forgo the use of lenses, they can be made unprecedentedly small, as small as the cross-section of a human hair. In such a computational imager, signal processing takes on much of the burden of optical processing done by optical elements in traditional cameras, and thus information theoretic and signal processing considerations become of central importance. In fact, these new imaging systems are best understood as information channels rather than as traditional image forming devices. As such, these systems present numerous challenges in information theory and signal processing: How does one optimize the effective electro-optical bandwidth given the constraints of optical components? What is the tradeoff in computational complexity and image quality or other metrics? What is the proper electro-optical representation and basis function set? The talk will end with a list of important research opportunities afforded by this new class of computational imager.
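The "computed, not captured" pipeline described above is typically posed as a linear inverse problem (a generic formulation, not Rambus's specific algorithm): with diffractive-element response matrix Φ, photodiode readings y, and scene x,

```latex
y \;=\; \Phi x + n, \qquad
\hat{x} \;=\; \arg\min_{x}\; \|y - \Phi x\|_2^2 + \lambda \|x\|_2^2,
```

so the achievable resolution and noise behavior are governed by the conditioning of Φ, which is where the information-theoretic questions raised in the abstract enter.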