Cloaking Communication

Wednesday, May 13, 2009 - 12:00pm

Cell phone antennas, radio receivers and GPS devices may one day go incognito. In a paper to appear in Physical Review Letters, Andrea Alù and Nader Engheta propose a new cloaking method that cancels out the electromagnetic waves bouncing off an object. The concept may ultimately lead to surreptitious sensors that can collect and send messages without detection.

“This is a fascinating paper addressing a very important challenge,” says physicist Nikolay Zheludev at the University of Southampton in England. “The result could have metrological, environmental and defense applications when the idea is developed as a practical device.”

The new cloak manipulates electromagnetic waves — including light — not by blocking out the waves, but by working with them. Previous cloaks worked by diverting waves around an object. “We are asking the question, ‘Is it possible to put a layer around an object such that when a wave hits the object, the wave scatters less?’” says Engheta, of the University of Pennsylvania in Philadelphia.

Separately, the object and the cloak would both be visible. But the cloak would be designed so that when the two are together, the waves scattering off of both objects would add destructively. “If the cloak is designed properly, the effect is reduced, like a cancelling effect,” Engheta says. “It puts in balance the object and the cloak.”
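The destructive addition described above can be illustrated with a toy calculation — not the paper's actual scattering model. Treat the fields scattered by the object and by the cloak as complex amplitudes, with the cloak designed to be nearly equal in magnitude and opposite in phase to the object; all numbers below are purely illustrative.

```python
import cmath

# Hypothetical complex scattering amplitudes (illustrative numbers only).
# A well-designed cloak scatters a field nearly equal in magnitude and
# opposite in phase to the object's scattered field.
a_object = 1.0 * cmath.exp(1j * 0.0)        # object alone
a_cloak = 0.98 * cmath.exp(1j * cmath.pi)   # cloak alone, ~180 degrees out of phase

scattered_power_object = abs(a_object) ** 2
scattered_power_total = abs(a_object + a_cloak) ** 2  # amplitudes add destructively

print(f"object alone:   {scattered_power_object:.4f}")
print(f"object + cloak: {scattered_power_total:.4f}")  # ~0.0004, a large reduction
```

Even with imperfect cancellation (98% amplitude matching here), the residual scattered power falls by more than three orders of magnitude, which is the sense in which the pair is "put in balance."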

The key is that electromagnetic waves still enter the cloaked area and hit the object — a requirement for an antenna to pick up a signal, for example. This feature is what separates the new work from other cloaking methods that completely isolate objects from the environment. “Some cloaks divert waves around an object. If you have a sensor in there, it won’t be able to measure the field,” says Engheta.

The cloak could also be designed to not interfere with certain types of outgoing waves (other than the ones being dampened), allowing the sensor to still send unperturbed messages.

“If you want to communicate, this is the cloak for you,” says John Pendry, a theoretical physicist at Imperial College London. In 2006, Pendry and his colleagues created the type of cloak that directs microwaves around an object. But he notes that isolating the object in this way would effectively rule out the option of sending or receiving signals.

A material with this new selective shielding ability “would be of great practical use,” Pendry says. Wrapping this type of cloak around devices perched on top of military vehicles could minimize telltale scattering while still allowing the sending and receiving of crucial messages.

Researchers have a long way to go before these cloaks become a reality. Each cloak has to be fine-tuned to suit the object it’s cloaking, and will probably work for only a small range of electromagnetic waves, Engheta says.

But the blueprint for the cloaks could work for any kind of electromagnetic waves, including light waves, radio waves and microwaves. “The concept is to scatter waves,” he says. “Whether you deal in microwaves or optics, the concept is similar.”

;2009-05-13 Digital Identity Management;

University of Texas researcher, Dr. Suzanne Barber, is bringing industry, government, and academic identity management experts together to solve a basic problem: in a digital world, how do we know who you are?
The plethora of answers to that question shows how difficult it is to manage identities online.

Barber’s Center for Excellence in Distributed Global Environments (EDGE) recently partnered with the Center for Applied Identity Management Research (CAIMR) to co-sponsor an Identity Management Summit entitled “The Digital Identity: A Double-Edged Sword.”

“The issue that came up over and over at the summit was the lack of standardization,” says Barber. “Commercial entities tend to develop ad hoc solutions to their specific needs. Even the government can’t agree on a single method of authentication. In a perfect world, you would be able to access your federal income tax information, your college transcript, and your water bill using the same method.

“In the meantime, we are developing software agents that can assess the trustworthiness of information from different sources. These ‘intelligent agents’ operate in complex environments with massive amounts of conflicting information. They evaluate the reliability of the source, coordinate information exchange, and can take into account cost considerations and timeliness as well. We are very excited by the rapid progress we are making in this area.”
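As a rough illustration of this kind of trust assessment — and not the EDGE agents' actual algorithm — a source's reliability, timeliness, and cost can be combined into a single weighted score. All source names, attribute values, and weights below are hypothetical.

```python
def trust_score(reliability, timeliness, cost, w_rel=0.6, w_time=0.3, w_cost=0.1):
    """Combine source attributes (each normalized to [0, 1]) into one trust
    score; higher is better. Weights are illustrative, chosen for this sketch."""
    return w_rel * reliability + w_time * timeliness + w_cost * (1.0 - cost)

# Two hypothetical sources reporting conflicting information.
sources = {
    "gov_registry": {"reliability": 0.9, "timeliness": 0.4, "cost": 0.7},
    "social_feed":  {"reliability": 0.3, "timeliness": 0.95, "cost": 0.1},
}

# Rank sources so an agent can prefer the most trustworthy report.
ranked = sorted(sources, key=lambda s: trust_score(**sources[s]), reverse=True)
print(ranked)  # the reliable registry outranks the fast-but-unreliable feed
```

A real agent would update these attributes over time as a source's reports are confirmed or contradicted; the point of the sketch is only that conflicting reports can be resolved by scoring their provenance.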

The EDGE partnership with CAIMR aids progress by providing software engineering researchers unparalleled access to industrial and governmental expertise and data. This access means that previously unavailable information sources can be used to investigate the most significant ID and security management problems in real-world terms. Researchers interested in this rapidly growing field of research are encouraged to contact either Dr. Barber or her Assistant Director, Rick Scott, for more information.

;2009-05-19 Novel Graphene Device Could Revolutionize Chip Design Industry;

UT-ECE Professors Sanjay Banerjee, Frank Register, and Emanuel Tutuc, along with Prof. Allan MacDonald of the UT Physics Department and ECE graduate student Dharmendar Reddy, designed a novel graphene-based BiSFET device that could revolutionize the chip design industry. The device is discussed in more detail in a recent IEEE Spectrum article.

“The BiSFET, described by Sanjay Banerjee and Leonard Franklin Register and their colleagues at UT Austin, is in the earliest research phase but offers tremendous potential. The BiSFET could substitute for a MOSFET transistor in logic and memory applications. Like a MOSFET transistor, it can switch and it can amplify. Where the BiSFET stands alone, however, is in its phenomenal power parsimony: It needs only one-hundredth to one-thousandth the power of a standard MOSFET, mainly because it would switch at much lower voltages than a MOSFET.”
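The quoted power advantage follows largely from the quadratic dependence of dynamic switching energy on supply voltage. A back-of-the-envelope sketch — the capacitance and supply voltages below are illustrative assumptions, not measured BiSFET figures:

```python
def switching_energy(capacitance_f, supply_v):
    """Dynamic switching energy per transition, E = C * V^2 (CMOS-style estimate)."""
    return capacitance_f * supply_v ** 2

C = 1e-15  # 1 fF load capacitance, illustrative

e_mosfet = switching_energy(C, 0.8)    # ~0.8 V, a typical CMOS supply
e_bisfet = switching_energy(C, 0.025)  # ~25 mV, an assumed low-voltage operating point

print(f"energy ratio: {e_mosfet / e_bisfet:.0f}x")  # ~1000x
```

Because energy scales as V squared, cutting the switching voltage by a factor of ~30 yields roughly a 1000-fold energy reduction, consistent with the "one-hundredth to one-thousandth" range quoted above.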

;2009-10-05 Defect/Fault Tolerant Systems and Design for Testability;

By Rudrajit Datta, Graduate Research Assistant, Prof. Nur Touba’s Group

With the increasing complexity of modern electronic systems, fault tolerance is becoming ever more important for guaranteeing reliable operation under all operating conditions. Dr. Nur Touba and his graduate students have been focusing on the design of fault-tolerant systems. Fault tolerance finds application in a wide variety of scenarios, from satellites to modern microprocessors. Fault-tolerant systems can withstand defects and continue to provide the specified output despite faults that occur or have occurred. Similarly, design for testability (DFT) is a set of techniques that make complicated electronic systems easier to test. Researchers at the Computer Aided Testing (CAT) Lab at CERC have been developing modern techniques for DFT, as well as for the design of reliable systems. Neither of these is just an academic concept anymore. Our research often leads to the exchange of ideas with leading companies in our field, such as Intel Corporation. Ideas and techniques developed at the CAT Lab have been implemented by companies including Intel Corporation and LogicVision (recently acquired by Mentor Graphics), and have led to joint publications at top conferences.

Fault tolerance is usually achieved by means of redundancy, which comes in several forms, such as information redundancy and hardware redundancy. One of the most common methods of fault tolerance through information redundancy is the use of parity bits. Parity computed over a set of bits can be used to detect and correct erroneous data. This is particularly useful for protecting data stored in semiconductor memory. Transient events such as radiation strikes and power-supply noise can cause bit flips in memory, so an error-correcting code (ECC) is employed to protect the memory's data integrity. ECCs range from simple single-error detection to complex multi-error detection and correction. We have been developing newer and more effective types of ECC to counter the growing reliability issues that accompany continued voltage scaling. In some cases, we have augmented the reliability of ECC with hardware redundancy, using spare rows and columns in the memory array. As operating power becomes a growing concern, our work can help maintain the data integrity of memories at low power by improving upon existing ECC.

In DFT, we have developed new techniques to mitigate the problem of X's, or unknowns, which arise in testing from uninitialized memory elements, bus contention, and similar sources. Compacting output streams that contain unknown 'X' values is a major issue for test compression and built-in self-test (BIST), because X's corrupt the final signature and render it unknown. At the CAT Lab, we have developed techniques shown to achieve better X-compaction than existing methods. Our proposed schemes have also been implemented for industrial designs in conjunction with Intel Corporation.
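The single-error-correcting codes mentioned above can be sketched with the classic textbook Hamming(7,4) code — a standard construction, not one of the CAT Lab's own schemes. Three parity bits protect four data bits, and the parity-check syndrome points directly at any single flipped bit:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword that can correct any
    single-bit error. Bit layout (1-based positions): [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Return the 4 data bits, correcting one flipped bit if present."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                          # simulate a transient bit flip in memory
assert hamming74_decode(code) == word # single error located and corrected
```

Production memory ECCs are typically stronger (e.g. SEC-DED, which adds double-error detection), but they follow the same principle: redundant parity bits turn a silent corruption into a correctable event.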

DFT and the design of fault-tolerant systems face further challenges as more and more transistors are packed onto a chip, with the count now well beyond a billion. There is tremendous scope for active research in both of these topics and in related work on reliability and test compression. Some of the most important test challenges are now centered on the more subtle historical missions of manufacturing testing: reliability and yield learning. These challenges affect not only the manufacturing test process itself but the entire semiconductor business, both in enabling the timely delivery of future processes and cost-effective products and in meeting customer expectations for reliability.


;2010-01-29 The Pharos Project: Testing Complex Cyber-Physical Systems;

By Drs. Christine Julien and Sriram Viswanath (Principal Investigators)
Drew Stovall (Post-Doctoral Fellow)
Nicholas Paine (Ph.D. Student)

Cyber-physical systems are those that require the combination and coordination of many different computational elements to interact with the physical world. These systems are not only complicated but also complex and unpredictable. Examples of cyber-physical systems include autonomous automotive systems (real-time traffic monitoring, collision avoidance, intersection management), medical monitoring (elderly, Alzheimer's, and Parkinson’s patients), process control systems, distributed robotics, and wireless sensor networks.

The Pharos Project at The University of Texas at Austin is creating a pervasive computing testbed to support the study of these complex cyber-physical systems. The centerpiece of the Pharos Project is the Proteus mobile node. Each node is afforded autonomous behavior by both a Linux server and a microcontroller. Each node’s mobility is provided by one of three different platforms: a Roomba robotics platform, a modified remote controlled car, or a Segway robotic mobility platform. Finally, nodes are provided with a wide variety of sensors and actuators that the nodes can use to interact with the environment and each other. These Proteus nodes have been specifically designed to be inexpensive, easy to manufacture, and simple to control. These individually controlled mobile platforms simulate the complex, distributed, and heterogeneous environment that characterizes cyber-physical systems.

The Pharos testbed consists of a number of these Proteus mobile nodes and a variety of additional embedded sensing capabilities. It is designed for use as an in situ emulation environment for the study of complex cyber-physical systems. Since the challenges encountered in these environments are inherently interdisciplinary, the Pharos Project also takes an interdisciplinary approach to the study of cyber-physical systems. The testbed supports research in sensor management, network coding, wireless protocols, localization, multi-agent coordination, fault-tolerant systems, and software simulation validation. Eventually, we will also support research in distributed teams, project management, and maintenance of software and hardware artifacts. The Pharos testbed has already provided support for a wealth of both undergraduate and graduate research projects in control and communications. The broad range of topics available to the Pharos Project research allows for advancement in a host of different fields, each highly visible and well funded.

More Information:
The Mobile and Pervasive Computing Group:
The Pharos Project:


;2010-02-09 Femtocells: A Revolution in Urban Wireless Broadband Networking;

By Prof. Jeff Andrews

Femtocells are poised to assume a major role in expanding the capabilities of today’s cellular networks, enabling them to satisfy people’s increasing demand for anytime, anywhere data. A femtocell is essentially a small base station that people install in their homes and attach to their wired internet connection, much like a wireless LAN access point. However, it works just like a base station, seamlessly supporting roaming, voice calls, and the growing number of things people like to do with their cell phones.

Femtocells pose major technical issues and, if not designed and deployed correctly, could flood the already-strained cellular network with interference. For example, if a normal cell phone user is located close to a femtocell (say, 100 feet) but transmitting to a base station that is farther away (typically 1,000-5,000 feet), the phone’s large transmission power causes crippling interference to the femtocell and renders it unusable. Perhaps even more crucially, the mirror-image effect — femtocell transmissions interfering with nearby macrocell phones — can knock out those phones and hence cause service interruptions.
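This near-far problem can be sketched with a simple log-distance path-loss model. The transmit powers, distances, and path-loss parameters below are illustrative assumptions, not values from the group's published models:

```python
import math

def received_power_dbm(tx_power_dbm, distance_m, path_loss_exp=3.5,
                       d0=1.0, pl0_db=40.0):
    """Simplified log-distance path loss: received power falls off as
    10 * n * log10(d) dB beyond a reference distance d0. Parameters illustrative."""
    return tx_power_dbm - pl0_db - 10 * path_loss_exp * math.log10(distance_m / d0)

# A macrocell phone 30 m from a femtocell, transmitting at full power
# (23 dBm) to reach its own base station roughly a kilometer away.
interference_at_femto = received_power_dbm(23, 30)

# The femtocell's own user transmits at low power (0 dBm) from 10 m away.
signal_at_femto = received_power_dbm(0, 10)

sir_db = signal_at_femto - interference_at_femto
print(f"SIR at femtocell: {sir_db:.1f} dB")  # negative: the femto link is swamped
```

Even though the interfering phone is three times farther away, its much higher transmit power leaves the femtocell's own uplink with a negative signal-to-interference ratio — exactly the scenario that interference avoidance and cancellation techniques must address.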

Prof. Andrews’s group has been at the forefront of femtocell research since 2005 and has worked jointly with the CTO of Texas Instruments to model the interference issues and develop practical solutions. Together, they have authored the earliest and most-cited papers on the topic, including two “best papers” in top-flight IEEE conferences and a number of patents at UT and TI that develop interference avoidance and cancellation techniques at the physical and network layers.

For more information, please consult the following documents and links:

IEEE Communications Magazine

IEEE Transactions on Wireless Communications

Wireless Networking & Communications Group

;2010-02-18 High-precision Laser Beam Shaping and Image Projection;

By Jinyang Liang
Professor: Michael F. Becker
Affiliation: Optical Signal Processing Laboratory (OSPLab)

The Optical Lattice Emulator (OLE) program, led by Nobel Laureate Professor Wolfgang Ketterle of MIT and funded by DARPA, strives to build a solid-state material emulator using ultracold atoms in an optically defined lattice, in collaboration with eight other prestigious universities in the US and Europe. UT ECE Professor Michael Becker and graduate student Jinyang Liang are researching high-precision laser beam shaping and image projection in order to form a controllable optical lattice that will be used to program the emulator. A flat-top beam has a uniform intensity distribution at its center, created through high-precision control of the amplitude and phase of light. The flat-top beam will be used to produce a standing wave, which creates a uniform optical lattice potential. In atomic physics, a flat-top beam can improve the sensitivity of interferometric gravitational-wave detectors and be used in ultracold atom experiments. UT ECE, in collaboration with the UT Department of Physics, plans to contribute to the OLE program by applying this technique to form a three-dimensional optical lattice and control the states of ultracold atoms in Bose-Einstein Condensate (BEC) experiments.

Using the state-of-the-art facilities in Professor Becker’s Optical Signal Processing Laboratory (OSPLab), we have achieved high-precision laser beam shaping by employing a binary-amplitude spatial light modulator, the Texas Instruments Digital Micromirror Device (DMD), followed by an imaging telescope that contains a pinhole low-pass filter (LPF) (Figure 1). This experiment demonstrates the ability to shape raw quasi-Gaussian laser beams into beams with precisely controlled profiles that have an unprecedentedly low level of RMS error with respect to the target profile. We have shown that our iterative refinement process can improve the light intensity uniformity to around 1% RMS error in a raw camera image for both 633 nm and 1064 nm laser beams (Figure 2). The digital LPF of the camera image matches the performance of the pinhole filter in the experimental setup. The digital LPF results reveal that the actual optical beam profiles have an RMS error down to 0.23%.
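The uniformity figure of merit here is the RMS deviation of the measured intensity profile from the flat-top target. A minimal sketch with made-up sample values — not the actual measured profiles:

```python
import math

def rms_error(profile, target=1.0):
    """RMS deviation from a uniform (flat-top) target, as a fraction of the target."""
    n = len(profile)
    return math.sqrt(sum((p - target) ** 2 for p in profile) / n) / target

# Hypothetical normalized intensity samples across the flat-top region,
# before and after iterative refinement (numbers are illustrative only).
raw_beam = [0.90, 1.08, 0.95, 1.05, 0.92, 1.10]
refined  = [0.99, 1.01, 1.00, 0.99, 1.01, 1.00]

print(f"raw:     {100 * rms_error(raw_beam):.1f}% RMS")
print(f"refined: {100 * rms_error(refined):.1f}% RMS")  # under 1%, like the reported result
```

The iterative refinement loop in the real system measures the camera image, computes exactly this kind of error map, and adjusts the DMD's binary mirror pattern to push the deviation down on the next pass.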


Figure 1: Optical layout of the beam-shaping system producing a one-dimensional optical lattice.

[Image: /images/research/laser-beam-2.png]