Telecommunications | News

The Eight-Year Leap Second Delay Might Not Be As Bad As It Seems
The UTC decision has been put off until 2023, but changes might happen fast at that point

Rachel Courtland | 02 Dec 2015 | 3 min read
Photo: The Yomiuri Shimbun/AP Photo

After I posted a curtain-raiser about the debate over the fate of the leap second at the World Radiocommunication Conference in Geneva last month, I settled in for a wait. The leap second, if you haven’t come across it before, is the stray second that is added intermittently to atomic-clock–based Coordinated Universal Time (UTC) to keep it in sync with the unsteady rotation of the Earth. The question of whether to keep or drop the leap second from UTC has a long and contentious history, and several people I interviewed said they expected negotiations to last through most of the four-week-long meeting.
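For readers who haven’t dealt with one directly, here is a minimal sketch of what a positive leap second looks like in a stream of UTC timestamps, and of the “smearing” approach some large operators use to avoid the 23:59:60 label entirely. The smear function and its 24-hour window are illustrative assumptions, not any particular system’s implementation.

```python
# Illustration only: how a positive leap second shows up in UTC, and how a
# linear "leap smear" absorbs it gradually instead. Not a real timekeeping API.

# On a leap-second day the last UTC minute has 61 seconds:
last_ticks = [f"23:59:{s:02d}" for s in range(58, 61)] + ["00:00:00"]
print(last_ticks)  # ['23:59:58', '23:59:59', '23:59:60', '00:00:00']

def smeared_offset(seconds_into_day: float, window_s: float = 86_400.0) -> float:
    """Hypothetical 24-hour linear smear: stretch every tick slightly so the
    extra second is absorbed over the whole day rather than all at midnight."""
    return seconds_into_day / window_s  # fraction of the leap second applied so far

print(smeared_offset(43_200.0))  # 0.5 -> half of the extra second absorbed by midday
```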
Instead, “everything was really settled at the end of the second week,” says Vincent Meens of France’s National Center for Space Studies. And the decision was to delay the decision: the question was placed on hold until the 2023 World Radiocommunication Conference, which will be the meeting after the next WRC meeting.

That might sound like kicking the proverbial can down the road—and especially bad news for those who think that adding leap seconds threatens modern networks and systems. But the eight-year delay might not be as bad as it sounds. If the leap second had been dropped this year, there would likely have been a grace period to allow systems to adjust to the new order; the proposal submitted this year by the Inter-American Telecommunication Commission, for example, would have waited until 2022 to make the change to UTC active.

Meens expects that if a decision is made to eliminate the leap second in 2023, it would be accompanied by swift action. “The idea is not to wait. So if it’s decided [to eliminate the leap second] it should be right when the new radio regulation is put into force. The new time scale would be in the beginning of 2024,” Meens says. So what looks like an eight-year delay right now might only wind up being a couple of years.

Of course, that outcome will likely depend on what’s done in the meantime (i.e., a good amount of consensus-building and legwork). There is a long list of organizations (see paragraph five in that link) that are expected to take part in studies leading up to WRC-23. And in the midst of all that, Nature’s Elizabeth Gibney reports, responsibility for the definition of UTC will be shifting away from the International Telecommunication Union and toward the international body that already manages International Atomic Time as well as the SI units of measure. She says the change in responsibility is unlikely to accelerate the decision.

In fact, says Brian Patten of the U.S. National Telecommunications and Information Administration, the International Telecommunication Union can’t make the change by itself. “The ITU cannot alone make a decision about leap seconds,” he says, as the organization is responsible for distributing the time scale, not making it. As for a speedy resolution in 2023, Patten says it’s too early to call: “We will have to see what happens in the joint work and discussions,” he says. “We can’t speculate on what the outcome will be when a report is delivered to WRC-23 on the status of the work.”

Although Meens predicts swift implementation if the leap second is eliminated, he can’t predict which way the decision will go. He’s had a role for years in international deliberations over the leap second, but even he was surprised by the outcome of this meeting. “I thought this was going to go until the end of the conference,” Meens says. “This was a particular subject where it was hard to find gray between white and black.”

He theorizes the decision to delay might have come about in part because the international participants of the WRC wanted to focus on other difficult subjects—in particular, the allocation of radio-frequency bands for mobile devices. It’s hard to imagine we won’t be demanding even more spectrum in eight years’ time. But perhaps it will be less of a distraction the next time around.

The Final Acts (PDF) of the conference are now available (the UTC decision is in Resolution COM5/1).

Rachel Courtland, an unabashed astronomy aficionado, is a former senior associate editor at Spectrum.

Telecommunications | Magazine | November 2024 | Feature | Special Reports

Startups Squeeze Room-Size Optical Atomic Clocks Into a Briefcase
Compact clocks enable GPS with centimeter-scale accuracy

Dina Genkina | 15 Oct 2024 | 15 min read

One of the most precise clocks in the world—the optical atomic clock in Boulder, Colo.—is composed of strontium atoms in a vacuum chamber, with seven different lasers orchestrated in precise patterns to cool, trap, and detect the atoms. Matthew Jonas/Boulder Daily Camera

Walking into Jun Ye’s lab at the University of Colorado Boulder is a bit like walking into an electronic jungle. There are wires strung across the ceiling that hang down to the floor. Right in the middle of the room are four hefty steel tables with metal panels above them extending all the way to the ceiling. Slide one of the panels to the side and you’ll see a dense mesh of vacuum chambers, mirrors, magnetic coils, and laser light bouncing around in precisely orchestrated patterns. This is one of the world’s most precise and accurate clocks, and it’s so accurate that you’d have to wait 40 billion years—or three times the age of the universe—for it to be off by one second.

This article is part of our special report, “Reinventing Invention: Stories From Innovation’s Edge.”

What’s interesting about Ye’s atomic clock, part of a joint venture between the University of Colorado Boulder and the National Institute of Standards and Technology (NIST), is that it is optical, not microwave, like most atomic clocks.
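As a quick sanity check on that “40 billion years” figure: one second of drift over that interval corresponds to a fractional frequency error of roughly 8 × 10⁻¹⁹. The arithmetic below is just that conversion, an illustration rather than a metrological statement about the clock itself.

```python
# Back-of-the-envelope: 1 second of drift over 40 billion years, expressed as a
# fractional frequency error. Illustrative arithmetic only.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 s
interval_s = 40e9 * SECONDS_PER_YEAR         # ~1.26e18 s
print(f"fractional error ~ {1.0 / interval_s:.1e}")   # ~ 7.9e-19
```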
The ticking heart of the clock is the strontium atom, and it beats at a frequency of 429 terahertz, or 429 trillion ticks per second. It’s the same frequency as light in the lower part of the red region of the visible spectrum, and that relatively high frequency is a pillar of the clock’s incredible precision. Commonly available atomic clocks beat at frequencies in the gigahertz range, or about 10 billion ticks per second. Going from the microwave to the optical makes it possible for Ye’s clock to be tens of thousands of times as precise. The startup Vector Atomic uses a vapor of iodine molecules trapped in a small glass cell as the ticking heart of its optical atomic clock. Will Lunden One of Ye’s former graduate students, Martin Boyd, cofounded a company called Vector Atomic, which has taken the idea behind Ye’s optical-clock technology and used it to make a clock small enough to fit in a box the size of a large briefcase. The precision of Vector Atomic’s clock is far from that of Ye’s—it might lose a second in 32 million years, says Jamil Abo-Shaeer, CEO of Vector Atomic. But it, too, operates at an optical frequency, and it matches or beats commercial alternatives. In the past year, three separate companies have developed their own versions of compact optical atomic clocks—besides Vector Atomic, there’s also Infleqtion, in Boulder, Colo., and QuantX Labs, based in Adelaide, Australia. Freed from the laboratory, these new clocks promise greater resilience and a backup to GPS for military applications, as well as for data centers, financial institutions, and power grids. And they may enable a future of more-precise GPS, with centimeter-positioning resolution, exact enough to keep self-driving cars in their lanes, allow drones to drop deliveries onto balconies, and more. And even more than all that, this is a story of invention at the frontiers of electronics and optics. Getting the technology from an unwieldy, lab-size behemoth to a reliable, portable product took a major shift in mind-set: The tech staff of these companies, mostly Ph.D. atomic physicists, had to go from focusing on precision at all costs to obsessing over compactness, robustness, and minimizing power consumption. They took an idea that pushed the boundaries of science and turned it into an invention that stretched the possibilities of technology. HOW DOES AN ATOMIC CLOCK WORK? Like any scientist, Ye is motivated by understanding the deepest mysteries of the universe. He hopes his lab’s ultraprecise clocks will one day help glean the secrets of quantum gravity, or help understand the nature of dark matter. He also revels in the engineering complexity of his device. “I love this job because everything you’re teaching in physics turns out to matter when you’re trying to measure things at such a high-precision level,” he says. For example, if someone walks into the lab, the minuscule thermal radiation emanating from their body will polarize the atoms in the lab ever so slightly, changing their ticking frequency. To maintain the clock’s precision, you need to bring that effect under control. Inside the briefcase-size optical atomic clock. A laser (1) shines into a glass cell containing atomic vapor (2). The atoms absorb light at only a very precise frequency. A detector (3) measures the amount of absorption and uses that to stabilize the laser at the correct frequency. A frequency comb (4) gears down from the optical oscillation in the terahertz to the microwave range. The clock outputs an ultraprecise megahertz signal (5). 
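The “tens of thousands of times” figure follows directly from the ratio of the two tick rates. A small illustrative calculation, using the cesium microwave transition (the one that defines the SI second) as the point of comparison, plus the same drift-to-fraction conversion for the briefcase clock:

```python
# Ratio of optical to microwave tick rates, and the portable clock's stability.
# Illustrative arithmetic only.
f_strontium_hz = 429e12        # strontium clock transition, ~429 THz
f_cesium_hz = 9.192631770e9    # cesium microwave transition that defines the SI second

print(f"tick-rate ratio ~ {f_strontium_hz / f_cesium_hz:,.0f}")   # ~46,700

SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(f"1 s in 32 million years ~ {1 / (32e6 * SECONDS_PER_YEAR):.0e}")  # ~1e-15
```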
Chris Philpot In an atomic clock, the atoms act like an extremely picky Goldilocks, identifying when a frequency of electromagnetic radiation they are exposed to is too hot, too cold, or just right. The clock starts with a source of electromagnetic radiation, be it a microwave oscillator (like the current commercial atomic clocks) or a laser (like Ye’s clock). No matter how precisely the sources are engineered, they will always have some variation, some bandwidth, and some jitter, making their frequency irregular and unreliable. Unlike these radiation sources, all atoms of a certain isotope of a species—rubidium, cesium, strontium, or any other—are exactly identical to one another. And any atom has a host of discrete energy levels that it can occupy. Each pair of energy levels has its own energy gap, corresponding to a frequency. If an atom is illuminated by radiation of the exact frequency of one such gap, the atom will absorb the radiation, and the electrons will hop to the higher energy level. Shortly after, the atom will re-emit radiation as those electrons hop back down to the lower energy levels. During clock operation, a maximally stable (but inevitably still somewhat broadband-jittery) source illuminates the atoms. The electrons get excited and hop energy levels only when the source’s frequency is just right. A detector observes how much of the radiation the atoms absorb (or how much they later re-emitted, depending on the architecture) and reports whether the incoming frequency is too high or too low. Then, active feedback stabilizes the source’s frequency to the atoms’ frequency of choice. This precise frequency feeds into a counter that can count the crests and troughs of the electromagnetic radiation—the ticks of the atomic clock. That stabilized count is an ultra-accurate frequency base—a clock, in other words. There are a plethora of effects that can affect the precision of the clock. If the atoms are moving, the frequency of radiation from the atoms’ reference point is altered by the Doppler effect, causing different atoms to select for different frequencies according to their velocity. External electric or magnetic fields, or even heat radiating from a human, can tweak the atoms’ preferred frequency. A vibration can knock a source laser’s frequency so far off that the atoms will stop responding altogether, breaking the feedback loop. Ye chose one of the pickiest atoms of them all, one that would offer very high precision—strontium. To minimize the noisemaking effects of heat, Ye’s team uses more lasers to cool the atoms down to just shy of absolute zero. To better detect the atoms’ signal, they corral the atoms in a periodic lattice—a trap shaped like an egg carton and made by yet another laser. This configuration creates several separate groups of atoms that can all be compared against one another to get a more precise measurement. All in all, Ye’s lab uses seven lasers of different colors for cooling, trapping, preparing the clock state, and detection, all defined by the atoms’ particular needs. The lasers enable the clock’s astounding precision, but they are also expensive, and they take up a lot of space. Aside from the light source itself, each laser requires a bevy of optical control elements to coax it to the right frequency and alignment—and they are easily misaligned or knocked slightly away from their target color. “The laser is a weak link,” Ye says. “When you design a microwave oscillator, you put a waveguide around it, and they work forever. 
Lasers are still very much more gentle or fragile.” The lasers can be knocked out of alignment by someone lightly knocking on one of Ye’s massive tables. Waveguides, meanwhile, being enclosed and bolted down, are much less sensitive. The lab is run by a team of graduate students and postdocs, bent on ensuring that the laser’s instabilities do not deter them from making the world’s most precise measurements. They have the luxury of pursuing the ultimate precision without concern for worldly practicalities. THE MIND-SET SHIFT TO A COMMERCIAL PRODUCT While Ye and his team pursue perfection in timing, Vector Atomic, the first company to put an optical atomic clock on the market, is after an equally elusive objective: commercial impact. “Our competition is not Jun Ye,” says Vector Atomic’s Abo-Shaeer. “Our competition is the clocks that are out there—it’s the commercial clocks. We’re trying to bring these more modern timekeeping techniques to bear.” To be commercially viable, these clocks cannot be thrown off by the bodily heat of a nearby human, nor can they malfunction when someone knocks against the device. So Vector Atomic had to rethink the whole construction of its device from the ground up, and the most fragile part of the system became the company’s focus. “Instead of designing the system around the atom, we designed the system around the lasers,” Abo-Shaeer says. First, they drastically reduced the number of lasers used in the design. That means no laser cooling—the clock has to work with atoms or molecules in their gaseous state, confined in a glass cell. And there is no periodic lattice to group the atoms into separate clumps and get multiple readings. Both of these choices come with hits to precision, but they were necessary to make robust, compact devices. Then, to choose their lasers, Abo-Shaeer and his coworkers asked themselves which ones were the most robust, cheap, and well-engineered. The answer was clear—infrared lasers used in mature telecommunications and machining industries. Then they asked themselves which atom, or molecule, had a transition that could be stimulated by such a laser. The answer here was an iodine molecule, whose electrons have a transition at 532 nanometers—conveniently, exactly half the wavelength of a common industrial laser. Halving the wavelength could be achieved by means of a common optical device. “We have all these Ph.D. atomic physicists, and it takes as much or more creativity to get all this into a box as it did when we were graduate students with the ultimate goal of writing Nature and Science papers,” Abo-Shaeer says. Vector Atomic couldn’t get away with just one laser in its system. Having a box that outputs a very precise laser, oscillating at hundreds of terahertz, sounds cool but is completely useless. No electronics are capable of counting those ticks. To convert the optical signal into a friendly microwave one, while keeping the original signal’s precision, the team needed to incorporate a frequency comb. Frequency combs are lasers that emit light in regularly spaced bursts in time. Their comblike nature becomes apparent if you look at the frequencies—or colors—of the light they emit, regularly spaced like the teeth of a comb. The subject of the 2005 Nobel Prize in Physics, these devices bridge the optical and microwave domains, allowing laser light to “gear down” to lower frequency range while preserving precision. 
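Two frequency tricks carry most of the weight in these compact designs: doubling a rugged infrared laser to reach the atomic or molecular transition, and using a frequency comb, whose teeth sit at f_n = f_ceo + n × f_rep, to transfer the optical stability down to a countable microwave rate. The sketch below uses round, illustrative numbers; the 1,064-nanometer pump, 20 MHz offset, and 1 GHz repetition rate are assumptions, not any company’s specifications.

```python
# Illustrative numbers: second-harmonic generation and a frequency-comb "gear-down".
C = 299_792_458.0  # speed of light, m/s

# 1. Frequency doubling halves the wavelength of a rugged industrial IR laser:
pump_nm = 1064.0
doubled_nm = pump_nm / 2                      # 532 nm, matching iodine's transition
print(doubled_nm, f"{C / (doubled_nm * 1e-9) / 1e12:.1f} THz")   # 532.0, ~563.5 THz

# 2. A comb tooth sits at f_n = f_ceo + n * f_rep. Locking tooth n to the
#    stabilized optical frequency ties the microwave repetition rate f_rep
#    (which ordinary electronics can count) to the atoms' stability.
f_optical = 563.5e12    # Hz, stabilized clock laser (rounded)
f_ceo = 20e6            # Hz, carrier-envelope offset (assumed)
f_rep = 1e9             # Hz, repetition rate (assumed)
n = round((f_optical - f_ceo) / f_rep)
print(n)                # ~563,500: the comb tooth that bridges optical and microwave
```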
In the past decade, frequency combs underwent their own transformation, from lab-based devices to briefcase-size commercially available products (and even quarter-size prototypes). This development, as much as anything else, unleashed a wave of innovation that enabled the three optical atomic clocks and this nascent market today. HIGH TIME FOR OPTICAL TIME Inventions often happen in a flurry, as if there were something in the air making conditions ripe for the new innovation. Alongside Vector Atomic’s Evergreen-30 clock, Infleqtion and QuantX Labs have both developed clocks of their own in short order. Infleqtion has made seven sales to date of their clock, Tiqker (yes, quantum-tech companies are morally obligated to put a q in every name). QuantX Labs, meanwhile, has sold the first two of their Tempo clocks, with delivery to customers scheduled before the end of this year, says Andre Luiten, cofounder and managing director of QuantX Labs. (A fourth company, Vescent, based in Golden, Colo., is also selling an optical atomic clock, although it is not integrated into a single box.) Vector Atomic, QuantX Labs, and Infleqtion all have plans to send prototypes of their clocks into space. QuantX Labs has designed a 20-liter engineering model of their space clock [left]. QuantX Labs Independently, all three companies have made surprisingly similar design choices. They all realized that lasers were the limiting factor, and so chose to use a glass cell filled with atomic vapor rather than a vacuum chamber and laser cooling and trapping. They all opted to double the frequency of a telecom laser. Unlike Vector Atomic, Infleqtion and QuantX Labs chose the rubidium atom. The energy gap in rubidium, around 780 nm, can be addressed by a frequency-doubled infrared laser at 1,560 nm. QuantX Labs stands out for using two such lasers, very close to each other in frequency, to probe through a clever two-tone scheme that requires less power. They all managed to fit their clock systems into a 30-liter box, roughly the size of a briefcase. All three companies took great pains to ensure that their clocks will operate robustly in realistic environments. At the lower level of precision compared with lab-based optical clocks, the radiation coming from a nearby person is no longer an issue. However, by doing away with laser cooling, these companies have heightened the possibility that temperature and motion could affect the atoms’ internal ticking frequency. “You’ve got to be smart about the way you make the atomic cell so that it’s not coupled to the environment,” says Luiten. OPTICAL CLOCKS SET SAIL AND TAKE FLIGHT In mid-2022, to test the robustness of their design, Vector Atomic and QuantX Labs’ partners in its venture, the University of Adelaide and Australia’s Defence Science and Technology Group, took their clocks out to sea. They brought their clocks to Pearl Harbor, in Hawaii, to participate in the Alternative Position, Navigation and Timing Challenge at Rim of the Pacific, a defense collaboration among the Five Eyes nations—Australia, Canada, New Zealand, the United Kingdom, and the United States. “They were playing touch rugby with the New Zealand sailors. So that was an awesome experience for atomic physicists,” Abo-Shaeer says. After 20 days aboard a naval ship, Vector Atomic’s optical clocks maintained a performance that was very close to that of their measurements under lab conditions. 
“When it happened, I thought everyone should be standing up and shouting from the rooftops,” says Jonathan Hoffman, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA), which cofunded Vector Atomic’s work. “People have been working on these optical clocks for decades. And this was the first time an optical clock ran on its own without human interference, out in the real world.” Vector Atomic and QuantX affiliate University of Adelaide installed their optical atomic clocks on a ship [top] to test their robustness in a harsh environment. The performance of Vector Atomic’s clocks [bottom] remained basically unchanged despite the ship’s rocking, temperature swings, and water sprays. The University of Adelaide’s clock degraded somewhat, but the team used the trial to improve their design. Will Lunden The University of Adelaide’s clock did suffer some degradation at sea, but a critical outcome of the trial was an understanding of why that happened. This has allowed the team to amend its design to avoid the leading causes of noise, says Luiten. In May 2024, Infleqtion sent its Tiqker clock into flight, along with its atom-based navigation system. A short-haul flight from MoD Boscombe Down, a military aircraft testing site in the United Kingdom, carried the quantum tech along with the U.K.’s science minister, Andrew Griffith. The company is still analyzing data from the flight, but at a minimum the clock has outperformed all onboard references, according to Judith Olson, head of the atomic clock project at Infleqtion. All three companies are working on yet more compact versions of their clocks. All are confident they will be able to get their briefcase-size boxes down from a volume of about 30 liters to 5 L, about the size of an old-school two-slice toaster, say Olson, Luiten, and Abo-Shaeer. “Mostly those boxes are still empty space,” Luiten says. During the sea trials, Vector Atomic’s and the University of Adelaide’s clocks were exposed to the elements. Jon Roslund Infleqtion also has designs for an even smaller, 100-mL version, which leverages integrated photonics to make such tight packaging possible. “At that point, you basically have a clock that can fit in your pocket,” says Olson. “It might make a very warm pocket after a while, because the power draw will still be high. But even with the large power draw, that’s something we perceive as being potentially extremely disruptive.” All three companies also plan to send their designs into space, aboard satellites, in the next several years. Under their Kairos mission, QuantX will launch a component of their Tempo clock into space in 2025, with a full launch scheduled for 2026. PRECISION TIMING TODAY So why would someone need the astounding precision of an optical atomic clock? The most likely immediate use cases will be in situations where GPS is unavailable. When most people think of GPS, they picture that blue dot on a map on their smartphone. But behind that dot is a sophisticated network of remarkable timing devices. It starts with Coordinated Universal Time (UTC), the standard established by averaging together about 400 atomic clocks of various kinds all over the world. “UTC is known to be some factor of 1 million more stable than any astronomical sense of time provided by Earth’s rotation,” says Jeffrey Sherman, a supervisory physicist at NIST who works on maintaining and improving UTC clocks. UTC is transmitted to satellites in the GPS network twice a day. 
Each satellite carries an onboard clock of its own, a microwave atomic clock usually based on rubidium. These clocks are precise to about a nanosecond during the half-day they are left to their own devices, Sherman says. From there, satellites provide the time for all kinds of facilities here on Earth, including data centers, financial institutions, power grids, and cell towers.

Precise timing is what allows the satellites to locate that blue dot on a phone map, too. A phone must connect to three or more GPS satellites and receive precise time from each. The times will differ because the signals travel different distances from the satellites. Using those differences, and knowing the positions of the satellites, the phone triangulates its own position. So the precision of timing aboard the satellites directly relates to how precisely the location of any phone can be determined—currently about 2 meters in the nonmilitary version of the service.

THE PRECISELY TIMED FUTURE

Optical atomic clocks can usefully inject themselves into multiple stages of this worldwide timing scheme. First, if they prove reliable enough over the long term, they can be used in defining the UTC standard alongside—and eventually instead of—other clocks. Currently, the majority of the clocks that make up the standard are hydrogen masers. Hydrogen masers have a precision similar to that of the new portable optical clocks, but they are far from portable: They are roughly the size of a kitchen refrigerator and require a room-size thermally and vibrationally controlled environment. “I think everyone can agree the maser is probably at the end of its technological evolution,” Sherman says. “They’ve stopped really getting a lot better, while on day one, the first crop of optical clocks are comparable. There’s a hope that by encouraging development, they can take over, and they can become much better in the near future.”

The global timing infrastructure. A collection of precise clocks, including hydrogen masers and atomic clocks, is used to create Coordinated Universal Time (UTC). A network of satellites carries atomic clocks of their own, synced to UTC on a regular basis. The satellites then send precise timing to data centers, financial institutions, the power grid, cell towers, and more. Four or more satellites are used to determine your phone’s GPS position. An optical atomic clock can be included in UTC, sent aboard satellites, or used as backup in data centers, financial institutions, or cell towers. Chris Philpot

Second, optical clocks can come in handy in situations where GPS isn’t available. Although many people experience GPS as extremely reliable, jammed or spoofed GPS is very common in times of war or conflict. (To see a daily map of where GPS is unavailable due to interference, check out gpsjam.org.) This is a big issue for the U.S. Department of Defense, because losing access to GPS-based time compromises military communications. “For the DOD, it’s very important that we can put this on many, many different platforms,” DARPA’s Hoffman says. “We want to put it on ships, we want to put it on aircraft, we want to put it on satellites and vehicles.”

It can also be an issue in financial markets, data centers, and 5G communications. All of these use cases require timing precise to about 1 microsecond to function properly and meet regulatory requirements. That means the source of timing for these applications must be at least an order of magnitude better, or roughly a 100-nanosecond resolution.
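Because radio signals travel at the speed of light, every timing figure in this chain maps directly onto a distance. The conversion below is plain arithmetic (ignoring satellite geometry and other error sources) and simply restates the numbers quoted in this story.

```python
# Timing error -> ranging error at the speed of light (geometry ignored).
C = 299_792_458.0  # m/s

def range_error_m(timing_error_s: float) -> float:
    return C * timing_error_s

print(f"{range_error_m(1e-9):.2f} m")         # ~0.30 m : 1 ns, a satellite clock's half-day drift
print(f"{range_error_m(6.7e-9):.2f} m")       # ~2.01 m : the quoted civilian GPS accuracy
print(f"{range_error_m(1e-12) * 1000:.2f} mm")  # ~0.30 mm: the picosecond, "GPS 2.0" regime
```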
GPS provides this with room to spare, but if these industries rely solely on GPS, jamming or spoofing puts them at great risk. A local microwave atomic clock can provide a backup, but these clocks lose several nanoseconds a day even in controlled-temperature environments. Optical atomic clocks can give these industries security, so that even if they lose access to GPS for extended periods of time, their operations will continue unimpeded. “By having this headroom in performance, it means that we can trust how well our clocks are ticking hours and days or even months later,” says Infleqtion’s Olson. “The lower-performing clocks don’t have that.”

Finally, portable optical atomic clocks open up the possibility of a future where the entire timing fabric goes from nanosecond to picosecond resolution. That means sending these clocks into space to form their own version of a more-precise GPS. Among other things, this would enable location precision of several millimeters instead of 2 meters. “We call it GPS 2.0,” says Vector Atomic’s Abo-Shaeer. He argues that millimeter-scale location resolution would allow autonomous vehicles to stay in their lanes, or make it possible for delivery drones to land on a New York City balcony.

Perhaps most exciting of all, this invention promises to open the door to many inventions in a variety of fields. Having the option of superior timing will enable applications that have not yet been envisioned. “A lot of applications are built around the current limitations of GPS. In other words, it’s sort of a catch-22,” says David Howe, former leader of NIST’s time and frequency metrology group, now emeritus. “You get into this mode where you don’t ever cross over to something better because the applications are designed for what’s available. So, it’ll take a larger vision to say, ‘Let’s see what we can do with optical clocks.’”

This article appears in the November 2024 print issue as “Squeezing an Optical Atomic Clock Into a Briefcase.”

AI | Energy | News | Transportation | Consumer Electronics

Trump’s Second Term Will Change AI, Energy, and More
Here’s your guide to how the incoming administration will impact tech

IEEE Spectrum | 27 Nov 2024 | 5 min read
Gluekit

U.S. presidential administrations tend to have big impacts on tech around the world. So it should be taken as a given that when Donald Trump returns to the White House in January, his second administration will do the same. Perhaps more than usual, even, as he staffs his cabinet with people closely linked to the Heritage Foundation, the Washington, D.C.–based conservative think tank behind the controversial 900-page Mandate for Leadership (also known as Project 2025).

The incoming administration will affect far more than technology and engineering, of course, but here at IEEE Spectrum, we’ve dug into how Trump’s second term is likely to impact those sectors. Read on to find out more, or click to navigate to a specific topic.
This post will be updated as more information comes in. * Artificial Intelligence * Consumer Electronics * Cryptocurrencies * Energy * Transportation ARTIFICIAL INTELLIGENCE During Trump’s campaign, he vowed to rescind President Joe Biden’s 2023 executive order on AI, saying in his platform that it “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” Experts expect him to follow through on that promise, potentially killing momentum on many regulatory fronts, such as dealing with AI-generated misinformation and protecting people from algorithmic discrimination. However, some of the executive order’s work has already been done; rescinding it wouldn’t unwrite reports or roll back decisions made by various cabinet secretaries, such as the Commerce secretary’s establishment of an AI Safety Institute. While Trump could order his new Commerce secretary to shut down the institute, some experts think it has enough bipartisan support to survive. “It develops standards and processes that promote trust and safety—that’s important for corporate users of AI systems, not just for the public,” says Doug Calidas, senior vice president of government affairs for the advocacy group Americans for Responsible Innovation. As for new initiatives, Trump is expected to encourage the use of AI for national security. It’s also likely that, in the name of keeping ahead of China, he’ll expand export restrictions relating to AI technology. Currently, U.S. semiconductor companies can’t sell their most advanced chips to Chinese firms, but that rule contains a gaping loophole: Chinese companies need only sign up for U.S.-based cloud computing services to get their computations done on state-of-the-art hardware. Trump may close this loophole with restrictions on Chinese companies’ use of cloud computing. He could even expand export controls to restrict Chinese firms’ access to foundation models’ weights—the numerical parameters that define how a machine learning model does its job. —Eliza Strickland Back to top CONSUMER ELECTRONICS Trump plans to implement hefty tariffs on imported goods, including a 60 percent tariff on goods from China, 25 percent on those from Canada and Mexico, and a blanket 10 or 20 percent tariff on all other imports. He’s pledged to do this on day 1 of his administration, and once implemented, these tariffs would hike prices on many consumer electronics. According to a report published by the Consumer Technology Association in late October, the tariffs could induce a 45 percent increase in the consumer price of laptops and tablets, as well as a 40 percent increase for video game consoles, 31 percent for monitors, and 26 percent for smartphones. Collectively, U.S. purchasing power for consumer technology could drop by US $90 billion annually, the report projects. Tariffs imposed during the first Trump administration have continued under Biden. Meanwhile, the Trump Administration may take a less aggressive stance on regulating Big Tech. Under Biden, the Federal Trade Commission has sued Amazon for maintaining monopoly power and Meta for antitrust violations, and worked to block mergers and acquisitions by Big Tech companies. Trump is expected to replace the current FTC chair Lina Khan, though it remains unclear how much the new administration—which bills itself as anti-regulation—will affect the scrutiny Big Tech is facing. 
Executives from major companies including Amazon, Alphabet, Apple, Meta, Microsoft, OpenAI, Intel, and Qualcomm congratulated Trump on his election on social media, primarily X. (The CTA also issued congratulations.) —Gwendolyn Rak

CRYPTOCURRENCIES

On 6 November, the day the election was called for Trump, Bitcoin jumped 9.5 percent, closing at over US $75,000—a sign that the cryptocurrency world expects to boom under the next regime. Donald Trump marketed himself as a pro-crypto candidate, vowing to turn America into the “crypto capital of the planet” at a Bitcoin conference in July. If he follows through on his promises, Trump could create a national bitcoin reserve by holding on to bitcoin seized by the U.S. government.

Trump also promised to remove Gary Gensler, the chair of the Securities and Exchange Commission, who has pushed to regulate most cryptocurrencies as securities (like stocks and bonds), subjecting them to more government scrutiny. While it may not be within Trump’s power to remove him, Gensler is likely to resign when a new administration starts. It is within Trump’s power to select the new SEC chair, who will likely be much more lenient on cryptocurrencies. The evidence lies in Trump’s pro-crypto cabinet nominations: Howard Lutnick as Commerce secretary, whose finance company oversees the assets of the Tether stablecoin; Robert F. Kennedy Jr. as secretary of Health and Human Services, who has said in a post that “Bitcoin is the currency of freedom”; and Tulsi Gabbard as director of national intelligence, who had holdings in two cryptocurrencies back in 2018. As Trump put it at that Bitcoin conference, “the rules will be written by people who love your industry, not hate your industry.” —Kohava Mendelsohn

ENERGY

Trump’s plans for the energy sector focus on establishing U.S. “energy dominance,” mainly by boosting domestic oil and gas production and deregulating those sectors. To that end, he has selected oil services executive Chris Wright to lead the U.S. Department of Energy. “Starting on day 1, I will approve new drilling, new pipelines, new refineries, new power plants, new reactors, and we will slash the red tape,” Trump said in a campaign speech in Michigan in August.

Trump’s stance on nuclear power, however, is less clear. His first administration provided billions in loan guarantees for the construction of the newest Vogtle reactors in Georgia. But in an October interview with podcaster Joe Rogan, Trump said that large-scale nuclear builds like Vogtle “get too big, and too complex and too expensive.” Trump periodically shows support for the development of advanced nuclear technologies, particularly small modular reactors (SMRs).

As for renewables, Trump plans to “terminate” federal incentives for them. He vowed to gut the Inflation Reduction Act, a signature law from the Biden administration that invests in electric vehicles, batteries, solar and wind power, clean hydrogen, and other clean energy and climate sectors. Trump trumpets a particular distaste for offshore wind, which he claims will end “on day 1” of his next presidency. The first time Trump ran for president, he vowed to preserve the coal industry, but this time around, he rarely mentioned it. Coal-fired electricity generation has steadily declined since 2008, despite Trump’s first-term appointment of a former coal lobbyist to lead the Environmental Protection Agency.
For his next EPA head, Trump has nominated former New York Representative Lee Zeldin—a pick expected to be central to Trump’s campaign pledges for swift deregulation. —Emily Waltz

TRANSPORTATION

The incoming administration hasn’t laid out many specifics about transportation yet, but Project 2025 has lots to say on the subject. It recommends the elimination of federal transit funding, including programs administered by the Federal Transit Administration (FTA). This would severely impact local transit systems—for instance, the Metropolitan Transportation Authority in New York City could lose nearly 20 percent of its capital funding, potentially leading to fare hikes, service cuts, and project delays. Kevin DeGood, director of infrastructure policy at the Center for American Progress, warns that “taking away capital or operational subsidies to transit providers would very quickly begin to result in systems breaking down and becoming unreliable.” DeGood also highlights the risk to the FTA’s Capital Investment Grants, which fund transit expansion projects such as rail and bus rapid transit. Without this support, transit systems would struggle to meet the needs of a growing population.

Project 2025 also proposes spinning off certain Federal Aviation Administration functions into a government-sponsored corporation. DeGood acknowledges that privatization can be effective if well structured, and he cautions against assuming that privatization inherently leads to weaker oversight. “It’s wrong to assume that government control means strong oversight and privatization means lax oversight,” he says.

Project 2025’s deregulatory agenda also includes rescinding federal fuel-economy standards and halting initiatives like Vision Zero, which aims to reduce traffic fatalities. Additionally, funding for programs designed to connect underserved communities to jobs and services would be cut. Critics, including researchers from Berkeley Law, argue that these measures prioritize cost-cutting over long-term resilience. Trump has also announced plans to end the US $7,500 tax credit for purchasing an electric vehicle. —Willie D. Jones

Telecommunications | Sensors | Climate Tech | Climate Change | Sponsored Article

This Startup Is Building the Internet of Underwater Things
WSense’s innovative networking systems are transforming how we explore ocean environments

LEMO | 06 Sep 2023 | 6 min read
LEMO is a global manufacturer of custom precision connection and cable solutions. Based in Écublens, Switzerland, the company is known for its push-pull connectors used in medical, industrial, audio/visual, telecommunications, military, and research applications.

Italian startup WSense develops software and hardware for underwater data collection and communication. WSense

This is a sponsored article brought to you by LEMO.

Science thrives on data. As such, the emergence of the Internet of Things (IoT) brought about a fantastic revolution. Billions of “intelligent objects” packed with sensors are connected to each other and to servers, capturing and exchanging, in real time, huge amounts of data.
Analyzed, accessible, and shareable worldwide, these data enable researchers to observe and understand our planet like never before. Well, not all of our planet: IoT does not connect us to seas and oceans. This blind spot is rather striking. Water covers 72 percent of the Earth’s surface, its volumes host 80 percent of biodiversity and play a pivotal role in global phenomena, such as climate change. It is impossible to claim a global vision without integrating the oceans. PIONEERING UNDERWATER NETWORK TECHNOLOGY There are a few marine research stations scattered around the globe (like needles in algal stacks). An increasing number of intelligent marine objects have also been created (sensors, buoys, autonomous vehicles, probes). The foundations of an underwater wireless network are also being set up, which should be as accessible and reliable as the IoT, the Internet of Underwater Things (IoUT). A pioneer in the field, Italian company WSense has had favorable currents this year. The adventure of the startup began at the University of Sapienza in Rome, where Professor Chiara Petrioli is in charge of a research laboratory. “We started looking into underwater networks 10 years ago,” she says. “We wanted to find a way to transmit information reliably with elements like routers in large areas.” This research resulted in solutions “achieving levels of reliability and performance previously not possible” and several international patents were filed. Potential applications supported the creation of a spin-off: WSense launched in 2017 with a handful of PhDs and engineers with backgrounds in acoustics, network architecture, signal processing, among other areas. Today, the startup employs a staff of 50 people with offices located in Italy, U.K., and Norway. It has about 20 customers — “Blue economy” companies and scientific institutions. Its innovations have been honored in 2022 by a Digital Challenge of the European Institute of Innovation and Technology and by a Blueinvest prize from the European Commission. HOW WSENSE IS HELPING PROTECT ITALY'S UNDERWATER ARCHEOLOGICAL TREASURES DEPLOYING ACOUSTICS, OPTICAL SYSTEMS, AND AI As you can imagine, “wireless network” and “underwater” are not made for each other. In fact, anything that makes aerial Wi-Fi function does not work underwater. Radio waves are significantly attenuated, light or sound communication vary a lot depending on the temperature, salinity level, background noise — everything had to be reconsidered and that’s exactly what WSense has done. Their solution is based on an innovative combination of acoustic communication for medium-range distances and optical LED technologies for short distances, with a hint of artificial intelligence. More specifically, underwater “nodes” are deployed. Data transfer between the nodes is permanently optimized by AI: Whenever sea conditions change, algorithms modify the path followed by byte packets. The system, explains Petrioli, can send data to 1000 meters at the speed of 1 kbit/s and up to several Mbit/s over shorter distances. This bandwidth can’t be compared to those of aerial networks “but we are working on enlarging it.” However, it is sufficient for transmitting environmental data collected by the sensors. “We are in the process of developing autonomous robotic systems. 
We can allow teams of robots to communicate and collaborate, to send data, get instructions, and change their mission in real time.” —Chiara Petrioli, WSense Founder & CEO

The resulting network is stable, reliable, and open: A plurality of devices (sensors, probes, vehicles) of various types and brands can be connected. WSense designed its platform first for shallow water (up to 300 meters deep), but it now asserts that the system is operational down to 3,000 meters, opening the door wider to the oceans. On the surface, floating gateways (or gateways posted on nearby land) connect this local network to the cloud, and so to the rest of the world—the IoUT joins the IoT.

WSense designs all the software in-house (from network software to data processing) as well as all the necessary hardware: nodes, probes, modems, and gateways. WSense’s devices are packed with sensors. “They measure parameters such as temperature, salinity, pH, chlorophyll, methane, ammonium, phosphate, CO2, waves and tide, background noise,” explains Petrioli. In a nutshell: everything required for real-time follow-up and extensive surveillance of submarine environments.

Aquaculture was one of the first sectors to show an interest in WSense (and remains a sector with key customers). Deploying a wireless network over the rearing cages, without bulky cabling, connects everything needed to monitor the biotope and control the fish farm: cameras and sensors, as well as robots. “We are in the process of developing autonomous robotic systems,” says Petrioli. “We can allow teams of robots to communicate and collaborate, to send data, get instructions, and change their mission in real time.”

STUDYING HOW ANIMALS ADAPT TO CLIMATE CHANGE

Following a request from a Norwegian customer, WSense’s R&D team recently developed an ultraminiature wearable for fish. It makes it possible to closely observe the life and health of the animals while monitoring water quality. “All this goes in the same direction: supplying tools to go further in the direction of more sustainable fish farming,” Petrioli says. Similarly, WSense’s platform can make it considerably easier to survey and work around offshore stations, as well as underwater infrastructure such as gas and oil pipelines.

AN OUT-OF-THE-BOX DIVING EXPERIENCE

This summer, WSense launched a miniature device: a “micronode” that could considerably enhance the submarine diving experience, just as smartphone applications have enriched our daily lives. The size of a pack of cigarettes, the device is linked by cable (and LEMO W Series connectors) to a watertight tablet. Thanks to the solution, divers can communicate with the surface and with each other much better than by sign language. “It also makes it possible for them to receive real-time information about what they see around themselves,” explains WSense founder and CEO Chiara Petrioli. At the submerged Roman ruins of Baiae, for instance, the tablet could show “diving tourists” the reconstituted buildings in augmented reality.

In addition, the micronode is equipped with GPS, “which increases safety, since the divers will always be precisely located. This option also opens new ways of exploring archaeological sites. It will be possible, for instance, to guide visitors along predefined itineraries,” Petrioli says. “There are endless possibilities!” The new device adds interactivity, augmented reality, and much more for the divers.
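For a feel of what the 1 kbit/s acoustic rate quoted earlier means in practice, here is a rough estimate of the delay and transfer time for one sensor report over a single 1-kilometer hop; the packet size and sound speed are assumptions made for the example, not WSense figures.

```python
# Rough illustration of an underwater acoustic hop (assumed values).
SOUND_SPEED_SEAWATER = 1500.0   # m/s, typical order of magnitude
distance_m = 1000.0             # medium-range hop cited in the article
bitrate_bps = 1000.0            # ~1 kbit/s at that range

one_way_delay = distance_m / SOUND_SPEED_SEAWATER      # ~0.67 s of propagation

# Hypothetical report: a dozen 32-bit readings (temperature, pH, CO2, ...) plus framing.
payload_bits = 12 * 32 + 160
transfer_time = payload_bits / bitrate_bps              # ~0.54 s to send the packet

print(f"one-way delay ~{one_way_delay:.2f} s, transfer ~{transfer_time:.2f} s")
```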
This new product was presented at the finish of the prestigious Ocean Race (a round-the-world sailing challenge), held in late June in Genoa, Italy. But the platform is just as effective in more natural environments. The startup has deployed its network in sensitive sites and environmental hotspots. Scientists use it, for instance, to study how algae, corals, and animals adapt to climate change—in the field and continuously, “which is much more precise than what we could do from the surface or satellites,” according to Petrioli. The solution also monitors sites that represent major risks for human populations, such as volcanic areas.

The WSense platform is also deployed in archaeological and cultural sites, such as the submerged luxurious Roman city of Baiae, near Naples, Italy, which is listed among the UNESCO World Heritage Sites. By measuring pollution, the effects of climate change, and potential damage caused by visitors, it contributes to their protection the same way it has long done for on-land archaeological sites. Just like webcams placed around the world, those connected by WSense can also promote these sites. They open windows for education and tourism, providing access to a larger audience than just scientists, companies, or authorities.

DEFINING THE STANDARD FOR IOUT

The startup is also about to launch a “micronode” that, connected to a watertight tablet, would enhance the diving experience. This appealing new product does not, however, embody WSense’s true ambitions. Unlike others, the Italian company does not just offer “smart devices.” It doesn’t want to be just one more component in our already too-fragmented knowledge of the oceans. On the contrary, it wants to unite all the components. With this in mind, WSense has ensured the interoperability of its submarine network. For the same reason, it has also been working hard on making deployment simple and reducing costs, both prerequisites for its true purpose: to define the standard for the IoUT.

Underwater wireless networks give continuous access to an unprecedented wealth of data about our oceans.

For this purpose, WSense must raise its profile as well as build out its platform. In January, it got a great boost from a place that hasn’t seen an ocean for the last 200 million years: Davos, in the heart of the Swiss Alps. At its latest edition, the prestigious World Economic Forum (WEF) rewarded 10 companies, including WSense, winner of its Ocean Data Challenge, an event for identifying the most promising technologies in data collection and management for ocean protection. The award gives access to the WEF network, an ideal platform for finding people who could support a global scale-up. There was an immediate effect: WSense spent the following weeks answering a flood of inquiries. “It was huge,” says Petrioli. “We were able to talk to political and scientific leaders, top managers, who were often unaware of the possibilities. We could explain to them that the Internet of Underwater Things was not deep tech, but a solution ready to be implemented.”

Positioning itself early in the submarine communications market is attractive (Forbes estimated the market at US $3.5 billion, growing 22 percent per year). However, the urgency lies elsewhere, insists Petrioli. “We cannot delay applying these solutions. We must not go on ignoring so many things about the exploitation of the oceans or climate change.
We must understand today, because it may be too late tomorrow.” From Your Site Articles * MIT Makes Low-Power Underwater Communication Practical - IEEE Spectrum › * Cheap Underwater Acoustic Communication Is Now Possible - IEEE Spectrum › Related Articles Around the Web * Wsense – Underwater Wireless Communication for Blue Econom › * WSENSE | LinkedIn › * Contribution: Building the internet of things underwater to provide ... › Keep Reading ↓ Show less Telecommunications Whitepaper EM SIMULATIONS IN AUTOMOTIVE INDUSTRY HOW SCENARIOS TYPICAL TO THE AUTOMOTIVE INDUSTRY CAN BE SUCCESSFULLY ANALYZED USING WIPL-D EM SIMULATION SOFTWARE SUITE WIPL-D WIPL-D develops commercial EM simulation software and provides consulting services in the field of electromagnetism. Established in 2002, the company is headquartered in Belgrade, Serbia. 19 Nov 2024 1 min read 1 This whitepaper shows how electromagnetic simulations can be applied to various scenarios in the automotive industry. The scenarios encompassed 77 GHz anti-collision radar mounted on a car bumper, an EM obstacle detection also at 77 GHz, a radar antenna mounted on a car bumper operating at 24 GHz and 40 GHz, and vehicle-to-vehicle communication at 5.9 GHz. It is shown that the rapid increase of requirements for efficient EM simulations in automotive applications could be addressed successfully by applying several sophisticated techniques in WIPL-D 3D full-wave simulation software. Transportation AI News AI DASH CAMS GIVE WAKE-UP CALLS TO DROWSY DRIVERS INNOVATIVE TECH DETECTS DRIVER FATIGUE AND SIGNALS THEM TO TAKE A BREAK Willie D. Jones Willie Jones is an associate editor at IEEE Spectrum. In addition to editing and planning daily coverage, he manages several of Spectrum's newsletters and contributes regularly to the monthly Big Picture section that appears in the print edition. 26 Nov 2024 5 min read AI-powered dash cams could detect when a commercial driver is weary and prompt them to pull over. Samsara artificial intelligence driving advanced driver assistance systems monitoring Increasingly, vehicles with advanced driver assistance systems are looking not only at the road but also at the driver. And for good reason. These systems can, paradoxically, make driving less safe as drivers engage in more risky behaviors behind the wheel under the mistaken belief that electronic equipment will compensate for lack of caution. Attempting to ward off such misuse, automakers have for years used camera-based systems to monitor the driver’s eye movement, posture, breathing, and hand placement for signs of inattention. Those metrics are compared with baseline data gathered during trips with drivers who were fully alert and focused on the road. The point is to make sure that drivers appear alert and ready to take control of the driving task if the suite of electronic sensors and actuators gets overwhelmed or misjudges a situation. Now, several companies targeting commercial vehicle fleet operators, especially long-haul trucking companies, are introducing AI-enabled dashcam technology that takes driver monitoring a step further. These new dash cams use machine learning to pick up on the subtle behavioral cues that are signs of drowsiness. “Long-haul truckers are particularly at risk of driving drowsy because they often work long hours and drive lengthy routes,” says Evan Welbourne, Vice president for AI and Data at Samsara, which recently introduced its drowsiness detection solution. 
The driver-monitoring tech developed by Samsara and Motive, both based in San Francisco, and Nauto, headquartered in nearby Sunnyvale, Calif., delivers real-time audio alerts to a drowsy driver, prompting them to take a break and reduce the risk of a fatigue-related accident. All are configured so that if a dash cam detects a driver continuing to operate the vehicle while displaying signs of drowsiness after the in-cab alert, it can directly notify fleet managers so they can coach the driver and reinforce safety measures. Each of the systems is trained to pick up on a different combination of signs that a driver is drowsy. For example, Motive's AI, introduced in July 2024, tracks yawning and head movement: "Excessive" yawning, or head posture indicating that the driver has taken their gaze off the roadway for five seconds, triggers an alert. Nauto's drowsiness-detection feature, introduced in November 2021, follows an individual driver's behavior over time, monitoring yawning and other indicators such as blink duration and frequency and changes in the driver's overall body posture. Nauto's AI is trained so that when these signs of drowsiness accumulate to a level associated with unacceptable risk, it issues an alert to the driver. Samsara's driver-monitoring tech triggers an audio alert when it detects a combination of more than a dozen drowsiness symptoms, including prolonged eye closure, head nodding, yawning, eye rubbing, and slouching, which are telltale signs that the driver is dozing off.

IMPROVING DETECTORS' EFFECTIVENESS

According to the Foundation for Traffic Safety, 17 percent of all fatal crashes involve a drowsy driver. The earliest generation of driver-monitoring tech accounted for only one or two signs that a driver might be drifting off to sleep. Later developments, such as the Percentage of Eyelid Closure Over Time (PERCLOS) methodology for measuring driver drowsiness, introduced by the U.S. National Highway Traffic Safety Administration (NHTSA) in the mid-1990s, gave system developers a direct physiological indicator to home in on. "But drowsiness is more than a single behavior, like yawning or having your eyes closed," says Samsara's Welbourne. Welbourne notes that the new generation of drowsiness-detection tools is based on the Karolinska Sleepiness Scale (KSS). He explains that "KSS is a nine-point scale for making an assessment based on as many as 17 behaviors including yawning, facial contortions, and sudden jerks" that occur when a driver snaps back awake after briefly nodding off. "The KSS score accounts for all of them and gives us a quantitative way to assess holistically, Is this person drowsy?" Stefan Heck, Nauto's CEO, says his company's AI is tuned to intervene at Karolinska Level 6. "We let the very early signs of drowsiness go because people find it annoying if you alert too much. At Level 1 or 2, a person won't be aware that they're drowsy yet, so alerts at those levels would just come across as a nuisance." By the time a driver's drowsiness reaches Level 5 or 6, Heck says, they're starting to be dangerous because they exhibit long periods of inattention. "And at that point, they know they're drowsy, so the alert won't come as a surprise to them." Samsara's Welbourne asserts that his company has good reason to be confident that its AI models are solid and will avoid the false positives or false negatives that would diminish the tool's usefulness to drivers and fleet operators.
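The vendors' production detectors are proprietary machine-learning models trained on fleet video, but the thresholding logic Welbourne and Heck describe can be illustrated with a toy example. The sketch below is a hypothetical illustration only, not any vendor's code: the cue names, weights, and linear scoring are assumptions, and a real system would learn them from labeled data. It fuses a few of the cues mentioned above into a rough KSS-style score and alerts only once a Level-6-like threshold is crossed.

```python
# Illustrative sketch only -- not Samsara's, Motive's, or Nauto's actual model.
# Idea: fuse several drowsiness cues into one KSS-like (1-9) score and alert
# once a tuned threshold (roughly "Karolinska Level 6") is crossed.
from dataclasses import dataclass

@dataclass
class CueWindow:
    """Hypothetical per-minute cue summary from a driver-facing camera."""
    eye_closure_fraction: float   # PERCLOS-style fraction of time eyelids are mostly closed
    yawns: int                    # yawns detected in the window
    head_nods: int                # head-nodding events
    gaze_off_road_seconds: float  # longest continuous off-road gaze

def kss_like_score(w: CueWindow) -> float:
    """Map cues to a rough 1-9 sleepiness score (weights are invented for illustration)."""
    score = 1.0
    score += 8.0 * min(w.eye_closure_fraction, 0.5)         # eye closure dominates
    score += 0.8 * min(w.yawns, 4)
    score += 1.2 * min(w.head_nods, 3)
    score += 0.4 * min(w.gaze_off_road_seconds / 5.0, 2.0)  # 5 s off-road gaze, per Motive's rule
    return min(score, 9.0)

ALERT_LEVEL = 6.0  # per the article, alerting below roughly Level 5-6 mostly annoys drivers

def should_alert(window: CueWindow) -> bool:
    return kss_like_score(window) >= ALERT_LEVEL

if __name__ == "__main__":
    drowsy = CueWindow(eye_closure_fraction=0.35, yawns=3, head_nods=2, gaze_off_road_seconds=6.0)
    print(kss_like_score(drowsy), should_alert(drowsy))  # high score -> alert
```

A production system replaces hand-tuned weights like these with a model trained on labeled fleet video, which is exactly where Samsara's data effort comes in.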
"Accurate detection is only as good as the data that feeds and trains AI models," he notes. With that in mind, the Samsara AI team trained a machine-learning model to predict the Karolinska Sleepiness Scale score associated with a driver's behavior using more than 180 billion minutes of video footage (depicting 220 billion miles traveled). The footage came from the dash cams in its customers' fleet vehicles. A big challenge, Welbourne recalls, was spotting instances of drowsiness-linked behavior amid that mountain of data. "It's kind of rare, so getting enough examples to train a big model requires poring over an enormous amount of data." Just as challenging, he says, was creating labels for all that data, "and through several iterations, coming up with a model aligned with the clinical definition of drowsiness."

That painstaking effort has already begun to pay dividends in the short time since Samsara made the drowsiness-detection feature available in its dash cams this past October. According to Welbourne, Samsara has found that the focus on multiple signs of drowsiness was indeed a good idea: More than three-fourths of the drowsy-driving events its dash cams have flagged since October were detected through behaviors other than yawning alone. He also shares an anecdote about an oilfield-services company that uses Samsara dash cams in its vehicles. The firm, which had previously experienced two drowsy-driver events a week on average, went the entire first month after drivers started getting drowsiness alerts without any such events occurring.

To drivers concerned that the introduction of this technology foreshadows a further erosion of privacy, Samsara says that its driver-monitoring feature is intended strictly for use within commercial vehicle fleets and that it has no intention of seeking mass adoption in consumer vehicles. Maybe so, but drowsiness detection is already being incorporated as a standard safety feature in a growing number of passenger cars. Automakers such as Ford, Honda, Toyota, and Mercedes-Benz have vehicles in their lineups that deliver audible and/or visual alerts encouraging distracted or drowsy drivers to take a break. And it's possible that government agencies like NHTSA will eventually mandate the technology in all vehicles whose advanced driver-assistance systems provide Level 2 or Level 3 autonomy.

Privacy concerns notwithstanding, drowsiness detection and other driver-monitoring technologies have been generally well received by fleet drivers so far. Truck drivers are mostly amenable to having dash cams aboard when they're behind the wheel: When accidents occur, dash cams can exonerate drivers blamed for collisions they didn't cause, saving them and freight companies a great deal of money in liability claims. Now, systems capable of monitoring what's going on inside the cab will help keep the subset of drivers most likely to fall asleep at the wheel—those hauling loads at night, driving after a bout of physical exertion, or affected by an undiagnosed medical condition—from putting themselves and others in danger.

From Your Site Articles * The Next Generation of AI-Enabled Cars Will Understand You › * Seatbelt Sensors to Fight Drowsy Driving › Related Articles Around the Web * A contextual and temporal algorithm for driver drowsiness detection ...
› * Driver drowsiness detection - Wikipedia ›

Telecommunications Magazine Feature Top Tech 2024 Computing Special Reports January 2024

WI-FI 7 SIGNALS THE INDUSTRY'S NEW PRIORITY: STABILITY

MULTI-LINK OPERATIONS AND THE 6-GHZ BAND PROMISE MORE RELIABILITY THAN BEFORE

Michael Koziol 29 Dec 2023 4 min read 14 Giacomo Bagnara

Wi-Fi is one of the most aggravating success stories. Despite how ubiquitous the technology has become in our lives, it still gives us reasons to grumble: The service is spotty or slow, for example, or the network keeps cutting out. Wi-Fi's reliability has an image problem. When Wi-Fi 7 arrives this year, it will bring with it a new focus on repairing that image. Every Wi-Fi generation brings new features and areas of focus, usually related to throughput—getting more bits from point A to point B. The new features in Wi-Fi 7 will result in a generation of wireless technology that is more focused on reliability and reduced latency, while still finding new ways to continue increasing data rates. This article is part of our special report Top Tech 2024.

"The question that we posed ourselves was, 'What do we do now?'" says Carlos Cordeiro, an Intel fellow and the company's chief technology officer of wireless connectivity. "What Wi-Fi really needed to do at that point was become more reliable…. I think it's the time that we should be looking more at latency and becoming more deterministic." The renewed focus on reliability is motivated by emerging applications. Imagine a wireless factory robot in a situation where a worker suddenly steps in front of it and the robot needs to make an immediate decision. "It's not so much about throughput, but you really want to make sure that your [data] packet gets across the first time that you send it," says Cordeiro. Beyond industrial automation and robotics, augmented- and virtual-reality technologies as well as gaming stand to benefit from faster, more reliable wireless signals.

MULTI-LINK OPERATIONS WILL MAKE WI-FI MORE RELIABLE

The key to a future Wi-Fi you can depend on is something called multi-link operations (MLO). "It is the marquee feature of Wi-Fi 7," says Kevin Robinson, president and CEO of the Wi-Fi Alliance. MLO comes in two flavors. The first—and simpler—of the two allows Wi-Fi devices to spread a stream of data across multiple channels in a single frequency band. The technique makes the collective Wi-Fi signal more resilient to interference at a specific frequency. Where MLO really makes Wi-Fi 7 stand apart from previous generations, however, is a version that allows devices to spread a data stream across multiple frequency bands. For context, Wi-Fi utilizes three bands—2.4 gigahertz, 5 GHz, and, as of 2020, 6 GHz. Whether MLO spreads signals across multiple channels in the same frequency band or across channels in two or three bands, the goals are the same: dependability and reduced latency.
Devices will be able to split up a stream of data and send portions across different channels at the same time—which cuts down on the overall transmission time—or beam copies of the data across diverse channels, in case one channel is noisy or otherwise impaired. “[Multi-link operations are] the marquee feature of Wi-Fi 7.” —Kevin Robinson, president and CEO of the Wi-Fi Alliance MLO is hardly the only feature new to Wi-Fi 7, even if industry experts agree it’s the most notable. Wi-Fi 7 will also see channel size increase from 160 megahertz to a new maximum of 320 MHz. Bigger channels means more throughput capacity, which means more data in the same amount of time. That said, 320-MHz channels won’t be universally available. Wi-Fi uses unlicensed spectrum—and in some regions, contiguous 320-MHz chunks of unlicensed spectrum don’t exist because of other spectrum allocations. In cases where full channels aren’t possible, Wi-Fi 7 includes another feature, called puncturing. “In the past, let’s say you’re looking for 320 MHz somewhere, but right within, there’s a 20-MHz interferer. You would need to look at going to either side of that,” says Andy Davidson, senior director of technology planning at Qualcomm. Before Wi-Fi 7, you’d functionally be stuck with about a 160-MHz channel either above or below that interference. “With Wi-Fi 7, you can just notch out the interference…. You’ve still got an effective 300-MHz channel,” says Davidson. WHEN DO I GET MY WI-FI 7? The closest thing that a Wi-Fi generation has to a “release date” is when the Wi-Fi Alliance releases its certification, which is a process for ensuring that wireless products meet the industry’s agreed-upon standards for security, interoperability, and device protocols. Wi-Fi Certified 7—slated for the first quarter of 2024—is the culmination of years of collaborative work by the wireless industry to determine what features should be included in the new generation. After agreement on features, there is months of validation work on early implementations of those features to ensure they all work, separately and together, according to Robinson. Early Wi-Fi 7 implementations are tested at the organization’s R&D lab in Santa Clara, Calif. Finally, the new features are locked in and the Wi-Fi Alliance releases its certification program. Separate from the Wi-Fi Alliance’s certification process, the IEEE will ratify a new version of the 802.11 standard. The two are not entirely equivalent—not everything specified in the standard makes it into the Wi-Fi Alliance certification. Regardless, the new version—802.11be—should be ratified later this year as well, after the Wi-Fi 7 certification release. When Wi-Fi Certified 7 is released, manufacturers will bring their devices to one of 20 authorized test labs around the world to confirm that their devices conform to the specs laid out by the Wi-Fi Alliance. Most importantly, certified devices are guaranteed to work together properly. Wi-Fi 7 routers, chips, and other devices are already available, ahead of Wi-Fi Certified 7’s release. This is standard practice: Companies release their Wi-Fi 7–compatible products and undergo the official certification when it becomes available. Qualcomm’s Davidson explains that it’s common for companies to work from earlier IEEE draft standards once it becomes clear what features and requirements the next wireless generation will include. Meanwhile, work is already underway on what will become Wi-Fi 8. “Think of it as a pipeline,” says Robinson. 
“While the Wi-Fi Alliance is putting the finishing touches on commercializing a new generation of Wi-Fi, standards organizations like the IEEE are already looking forward to what is going to go into the next generation.” This article appears in the January 2024 print issue as “Wi-Fi’s Big Bet on Reliability.” From Your Site Articles * Wi-Fi 7 Stomps on the Gas › * What Is Wi-Fi 7? › * Low-Power Wi-Fi Extends Signals up to 3 Kilometers - IEEE Spectrum › * Qualcomm’s Newest Chip Brings AI to Wi-Fi - IEEE Spectrum › Related Articles Around the Web * IEEE 802.11be - Wikipedia › Keep Reading ↓ Show less Consumer Electronics News THIS “LOLLIPOP” BRINGS TASTE TO VIRTUAL REALITY LICKABLE DEVICES COULD MAKE FOR FLAVORFUL EXTENDED-REALITY ENVIRONMENTS Gwendolyn Rak Gwendolyn Rak is an assistant editor at IEEE Spectrum covering consumer electronics and careers. She holds a master’s degree in science journalism from New York University and a bachelor’s degree in astrophysics and history from Swarthmore College. 25 Nov 2024 3 min read 1 Future extended reality environments could include tastes and smells, courtesy of devices like lickable “lollipops.” Liu et al./PNAS virtual reality interfaces taste extended reality Virtual- and augmented-reality setups already modify the way users see and hear the world around them. Add in haptic feedback for a sense of touch and a VR version of Smell-O-Vision, and only one major sense remains: taste. To fill the gap, researchers at the City University of Hong Kong have developed a new interface to simulate taste in virtual and other extended reality (XR). The group previously worked on other systems for wearable interfaces, such as haptic and olfactory feedback. To create a more “immersive VR experience,” they turned to adding taste sensations, says Yiming Liu, a coauthor of the group’s research paper published today in the Proceedings of the National Academy of Sciences. The lollipop-shaped lickable device can produce nine different flavors: sugar, salt, citric acid, cherry, passion fruit, green tea, milk, durian, and grapefruit. Each flavor is produced by food-grade chemicals embedded in a pocket of agarose gel. When a voltage is applied to the gel, the chemicals are transported to the surface in a liquid that then mixes with saliva on the tongue like a real lollipop. Increase the voltage, and get a stronger flavor. Initially, the researchers tested several methods for simulating taste, including electrostimulating the tongue. The other methods each came with limitations, such as being too bulky or less safe, so the researchers opted for chemical delivery through a process called iontophoresis, which moves chemicals and ions through hydrogels and has a low electrical-power requirement. With a 2-volt maximum, the device is well within the human safety limit of 30 V, which is considered enough to deliver a substantial shock in some situations. Delivering the chemical stimuli of taste and smell is one of the main challenges for XR systems, says Alessandro Tonacci, a biomedical engineer for Italy’s National Research Council, who chairs the IEEE Consumer Systems for Healthcare and Wellbeing technical committee. XR systems “are capable of providing feedback consisting of physical stimulations (sight, touch, hearing), but, due to technological constraints, still fail when dealing with chemical stimuli,” Tonacci says. 
The researchers’ approach has been prototyped by others, but they have made the technology more usable by improving the taste quality and consistency, and providing a portable, user-friendly interface, Tonacci says. At this point, he adds, the major challenge will be user acceptance. A TASTE FOR VR The researchers imagine three possible uses for “tasteful” extended reality: standardized gustation tests, similar to a hearing or vision test; online shopping in virtual grocery stores; and mixed-reality environments where, for example, a child could explore the flavors of different foods. To further enhance the taste experience in these scenarios, the researchers drew on the strong connection between smell and taste by adding an olfactory component. In addition to the taste generating gels, they added seven channels for odors. Motion sensors are used to pair the lollipop’s location in the virtual and physical worlds, creating a more immersive experience. Liu et al./PNAS For better usability, it’s also important for the device to be small and portable. The researchers used ultrathin printed circuit boards and a 3D-printed nylon exterior to keep the weight down. Once loaded with all nine gels, the lollipop weighs about 15 grams—about the same as a Tootsie Pop. (The researchers also tested versions with fewer gels, which allows for a greater volume of each gel and therefore more intense flavor. The trade-off is between intensity and complexity of possible flavors.) One of the major limitations of the current interface is that it can be used for only one hour before the chemical-infused gels effectively run out. The gels continuously shrink during use, so after an hour, the flavor-generation rate will be extremely low and the gel should be replaced, says Liu. Going forward, the research group plans to further develop the system to address the short operation time, as well as the limited number of flavor channels and constraints on how it is used. In other words, consider this just a taste of XR interfaces to come. From Your Site Articles * VR System Hacks Your Nose to Turn Smells Into Temperatures › * Virtual-Reality Scent System Fools Flavor Sense › Related Articles Around the Web * VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste ... › * VR: When Will We Get Taste, Touch, and Smell? › Keep Reading ↓ Show less {"imageShortcodeIds":[]} Telecommunications Engineering Resources COMSOL Resource Library Sponsored Article DESIGNING A SILICON PHOTONIC MEMS PHASE SHIFTER WITH SIMULATION ENGINEERS AT EPFL USED SIMULATION TO DESIGN PHOTONIC DEVICES FOR ENHANCED OPTICAL NETWORK SPEED, CAPACITY, AND RELIABILITY Alan Petrillo 16 Jan 2023 4 min read 2 EPFL This sponsored article is brought to you by COMSOL. The modern internet-connected world is often described as wired, but most core network data traffic is actually carried by optical fiber — not electric wires. Despite this, existing infrastructure still relies on many electrical signal processing components embedded inside fiber optic networks. Replacing these components with photonic devices could boost network speed, capacity, and reliability. To help realize the potential of this emerging technology, a multinational team at the Swiss Federal Institute of Technology Lausanne (EPFL) has developed a prototype of a silicon photonic phase shifter, a device that could become an essential building block for the next generation of optical fiber data networks. 
LIGHTING A PATH TOWARD ALL-OPTICAL NETWORKS Using photonic devices to process photonic signals seems logical, so why is this approach not already the norm? “A very good question, but actually a tricky one to answer!” says Hamed Sattari, an engineer currently at the Swiss Center for Electronics and Microtechnology (CSEM) specializing in photonic integrated circuits (PIC) with a focus on microelectromechanical system (MEMS) technology. Sattari was a key member of the EPFL photonics team that developed the silicon photonic phase shifter. In pursuing a MEMS-based approach to optical signal processing, Sattari and his colleagues are taking advantage of new and emerging fabrication technology. “Even ten years ago, we were not able to reliably produce integrated movable structures for use in these devices,” Sattari says. “Now, silicon photonics and MEMS are becoming more achievable with the current manufacturing capabilities of the microelectronics industry. Our goal is to demonstrate how these capabilities can be used to transform optical fiber network infrastructure.” Optical fiber networks, which make up the backbone of the internet, rely on many electrical signal processing devices. Nanoscale silicon photonic network components, such as phase shifters, could boost optical network speed, capacity, and reliability. The phase shifter design project is part of EPFL’s broader efforts to develop programmable photonic components for fiber optic data networks and space applications. These devices include switches; chip-to-fiber grating couplers; variable optical attenuators (VOAs); and phase shifters, which modulate optical signals. “Existing optical phase shifters for this application tend to be bulky, or they suffer from signal loss,” Sattari says. “Our priority is to create a smaller phase shifter with lower loss, and to make it scalable for use in many network applications. MEMS actuation of movable waveguides could modulate an optical signal with low power consumption in a small footprint,” he explains. HOW A MOVABLE WAVEGUIDE HELPS MODULATE OPTICAL SIGNALS The MEMS phase shifter is a sophisticated mechanism with a deceptively simple-sounding purpose: It adjusts the speed of light. To shift the phase of light is to slow it down. When light is carrying a data signal, a change in its speed causes a change in the signal. Rapid and precise shifts in phase will thereby modulate the signal, supporting data transmission with minimal loss throughout the network. To change the phase of light traveling through an optical fiber conductor, or bus waveguide, the MEMS mechanism moves a piece of translucent silicon called a coupler into close proximity with the bus. Figure 1. Two stages of motion for the MEMS mechanism in the phase shifter. The design of the MEMS mechanism in the phase shifter provides two stages of motion (Figure 1). The first stage provides a simple on–off movement of the coupler waveguide, thereby engaging or disengaging the coupler to the bus. When the coupler is engaged, a finer range of motion is then provided by the second stage. This enables tuning of the gap between the coupler and bus, which provides precise modulation of phase change in the optical signal. “Moving the coupler toward the bus is what changes the phase of the signal,” explains Sattari. “The coupler is made from silicon with a high refractive index. 
When the two components are coupled, a light wave moving through the bus will also pass through the coupler, and the wave will slow down.” If the optical coupling of the coupler and bus is not carefully controlled, the light’s waveform can be distorted, potentially losing the signal — and the data. DESIGNING AT NANOSCALE WITH OPTICAL AND ELECTROMECHANICAL SIMULATION The challenge for Sattari and his team was to design a nanoscale mechanism to control the coupling process as precisely and reliably as possible. As their phase shifter would use electric current to physically move an optical element, Sattari and the EPFL team took a two-track approach to the device’s design. Their goal was to determine how much voltage had to be applied to the MEMS mechanism to induce a desired shift in the photonic signal. Simulation was an essential tool for determining the multiple values that would establish the voltage versus phase relationship. “Voltage vs. phase is a complex multiphysics question. The COMSOL Multiphysics software gave us many options for breaking this large problem into smaller tasks,” Sattari says. “We conducted our simulation in two parallel arcs, using the RF Module for optical modeling and the Structural Mechanics Module for electromechanical simulation.” The optical modeling (Figure 2) included a mode analysis, which determined the effective refractive index of the coupled waveguide elements, followed by a study of the signal propagation. “Our goal is for light to enter and exit our device with only the desired change in its phase,” Sattari says. “To help achieve this, we can determine the eigenmode of our system in COMSOL.” Figure 2. Left: Light passes from left to right through a path composed of an optical bus and a coupled movable waveguide. Right: Cross-sectional slices of a simulated light waveform as it passes through the coupled device. By adjusting the distance between the two optical elements in their simulation, the EPFL team could determine how that distance affected the speed, or phase, of the optical signal. Images courtesy EPFL and licensed under CC BY 4.0 Figure 3. Simulation showing deformation of the movable waveguide support structure. The thin elements that suspend the movable waveguide will flex in response to an applied voltage. Image courtesy EPFL and licensed under CC BY 4.0 Figure 4. Optical simulation (left) established the vertical distance between the coupler and waveguide that would result in a desired phase shift in the optical signal. Electromechanical simulation (right) determined the voltage that, when applied to the MEMS mechanism, would move the coupler waveguide to the desired distance away from the bus. Images courtesy EPFL and licensed under CC BY 4.0 Along with determining the physical forms of the waveguide and actuation mechanism, simulation also enabled Sattari to study stress effects, such as unwanted deformation or displacement caused by repeated operation. “Every decision about the design is based on what the simulation showed us,” he says. ADDING TO THE FOUNDATION OF FUTURE PHOTONIC NETWORKS The goal of this project was to demonstrate how MEMS phase shifters could be produced with existing fabrication capabilities. The result is a robust and reliable design that is achievable with existing surface micromachined manufacturing processes, and occupies a total footprint of just 60 μm × 44 μm. 
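The quantity the two simulation arcs ultimately connect is easy to state: the extra phase a wave accumulates over the coupling region is proportional to the change in effective refractive index, with Δφ = (2π/λ) · Δn_eff · L. The sketch below is a back-of-the-envelope illustration of the gap-to-index-to-phase chain only; the coupling length, the index values, and the exponential gap dependence are invented placeholders, not results from the EPFL simulations.

```python
# Back-of-the-envelope illustration of the gap -> effective index -> phase chain described above.
# All numbers are invented placeholders, not values from the EPFL/COMSOL simulations.
import math

WAVELENGTH_M = 1.55e-6      # telecom C-band wavelength
COUPLING_LENGTH_M = 50e-6   # assumed interaction length of the movable coupler

def effective_index_shift(gap_nm: float) -> float:
    """Toy model: the closer the coupler sits to the bus, the larger the index shift."""
    return 0.02 * math.exp(-gap_nm / 150.0)  # made-up exponential decay with gap

def phase_shift_rad(gap_nm: float) -> float:
    """delta_phi = (2 * pi / wavelength) * delta_n_eff * coupling_length."""
    return (2 * math.pi / WAVELENGTH_M) * effective_index_shift(gap_nm) * COUPLING_LENGTH_M

if __name__ == "__main__":
    for gap_nm in (600, 300, 150, 75):  # MEMS-tuned gap, in nanometers
        print(f"gap {gap_nm:4d} nm -> phase shift {phase_shift_rad(gap_nm):.2f} rad")
```

The electromechanical half of the problem supplies the missing link, the gap as a function of applied voltage, which is what the structural-mechanics simulations were used to determine.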
Now that they have an established proof of concept, Sattari and his colleagues look forward to seeing their designs integrated into the world’s optical data networks. “We are creating building blocks for the future, and it will be rewarding to see their potential become a reality,” says Sattari. REFERENCES 1. H. Sattari et al., “Silicon Photonic MEMS Phase-Shifter,” Optics Express, vol. 27, no. 13, pp. 18959–18969, 2019. 2. T.J. Seok et al., “Large-scale broadband digital silicon photonic switches with vertical adiabatic couplers,” Optica, vol. 3, no. 1, pp. 64–70, 2016. Keep Reading ↓ Show less {"imageShortcodeIds":["32366883","32366901","32366913"]} Telecommunications Webinar MULTIBAND ANTENNA SIMULATION AND WIRELESS KPI EXTRACTION EXPLORE HOW TO LEVERAGE THE STATE-OF-THE-ART HIGH-FREQUENCY SIMULATION CAPABILITIES OF ANSYS HFSS TO INNOVATE AND DEVELOP ADVANCED MULTIBAND ANTENNA SYSTEMS. Ansys Ansys engineering simulation and 3D design software delivers product modeling solutions with unmatched scalability and a comprehensive multiphysics foundation. 29 Oct 2024 1 min read 2 In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. OVERVIEW This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform wireless system design and network efficiency approach. WHAT ATTENDEES WILL LEARN * How to design interleaved multiband antenna systems using the latest capabilities in HFSS * How to extract Network Key Performance Indicators * How to run and extract RF Channels for the dynamic environment WHO SHOULD ATTEND This webinar is valuable to anyone involved in antenna, R&D, product design, and wireless networks. Register now for this free webinar! Keep Reading ↓ Show less The Institute News NEW IEEE SCHOLARSHIP HONORS SPACE-MAPPING PIONEER INAUGURAL GRANT FOR PH.D. CANDIDATES AND POSTDOCS TO BE AWARDED NEXT YEAR David Whyte David Whyte is president and chair of the IEEE Canadian Foundation board of directors. 25 Nov 2024 4 min read IEEE Life Fellow John Bandler was known for pioneering space-mapping technology. Beth Bandler type:ti ieee news ieee canadian foundation ieee scholarships space mapping engineering design optimization students This year the IEEE Canadian Foundation established the Dr. John William Bandler Graduate Scholarship in Engineering Design in honor of the world-renowned engineer, teacher, and innovator. Bandler, an IEEE Life Fellow, died on 28 September 2023. He was known for pioneering space-mapping technology, which enables optimal, high-fidelity design of devices, circuits, and systems at a cost of only a few high-fidelity simulations. The scholarship—funded by a donation from Bandler’s wife, Beth—provides an annual award of about US $3,550 (CAD $5,000) to a Ph.D. 
student or postdoctoral fellow at a Canadian university who is conducting research in electromagnetic optimization for microwave or millimeter-wave engineering, microwave or millimeter-wave imaging and inverse scattering, engineering design optimization, or space mapping. The scholarship is to be awarded for the first time next year.

Bandler was an entrepreneur as well as a professor. In 1983 he founded Optimization Systems Associates in Hamilton, Ontario, to commercialize his methodology and algorithms. His award-winning research over a 50-year career revolutionized the engineering and computer-assisted design of microwave circuitry. His practical application of space mapping, device modeling, and optimization theory led to significant reductions in the development costs of a wide variety of electronic systems. His research appeared in more than 500 publications. Bandler served as dean of the faculty of engineering at McMaster University, also in Hamilton, and taught electrical engineering there from 1969 until his death. "He was a trusted teacher, advisor, and friend to McMaster," says Heather Sheardown, the university's current dean of the faculty of engineering. "His innovations truly transformed engineering design optimization."

A FOCUS ON SIMULATION, OPTIMIZATION, AND CONTROL

Bandler was born in Jerusalem during World War II, and his family moved to Nicosia, Cyprus, when he was a youngster. When he was a teenager, the family moved to England, where he completed high school and attended Imperial College London, which at the time was part of the University of London. He received three engineering degrees from Imperial: a bachelor's in 1963, a Ph.D. in 1967, and a doctor of science in 1976. After earning his Ph.D., he worked briefly at Mullard Research Laboratories, in Redhill, England, before accepting a postdoctoral fellowship at the University of Manitoba, in Winnipeg, Canada. He completed the fellowship in 1969 and joined McMaster University as an engineering professor. During his almost 55-year career there, he served as the 1978–1979 chair of the electrical engineering department and as dean of the faculty of engineering from 1979 to 1981. In 1973 he established a research group focused on simulation, optimization, and control, later named the Simulation Optimization Systems Research Laboratory.

It was in that lab that Bandler developed his space-mapping technology and other optimization algorithms, which he commercialized through Optimization Systems Associates. The company was sold in 1997 to Hewlett-Packard, and the business later became part of Keysight Technologies of Santa Rosa, Calif. Bandler received several awards for his work, including the 2023 IEEE Electromagnetics Award, the 2013 IEEE Microwave Theory and Technology Society (IEEE MTT-S) Microwave Career Award, and the 2012 IEEE Canada McNaughton Gold Medal. He was appointed an officer of the Order of Canada in 2016 for his scientific contributions, which helped position the country at the forefront of microwave engineering.

ELEVATING THE NEXT GENERATION OF ENGINEERS

In addition to conducting research and teaching, Bandler mentored students and young professionals.
He volunteered for the IEEE MTT-S International Microwave Symposium’s Three Minute Thesis, a program that connects graduate students in engineering with mentors to help them better explain their research. At the end of the event, participants present their work in less than three minutes, using only one slide to a panel of judges who are not engineers. Starting this year, the program has been renamed the John Bandler Memorial Three Minute Thesis Competition. In his acceptance speech for the IEEE Electromagnetics Award in 2023, Bandler spoke directly to students and young professionals and said: “Just about everything I’m known for one expert or another has discouraged me from doing. So students and young professionals, let naysayers say no. Especially if they say ‘No, go for it.’ “It took me 30 years to discover common sense hidden in plain sight, and electromagnetic optimization took off,” he said. “What are you waiting for? Your own breakthrough is staring at you.” BLENDING CREATIVITY WITH TECHNICAL EXPERTISE Bandler was a multifaceted individual with a rich artistic background. “One day I found myself in my mother-in-law’s studio with a paintbrush in hand and a canvas on an easel, so I started painting,” he said in a Toronto Globe and Mail interview. “Until then I didn’t even know I had an interest in art. I was strictly an engineer, an academic, and a committed entrepreneur.” His love of painting turned into a passion for art history, writing plays, and making films. Several of his plays were performed at the Hamilton Fringe Festival and theaters in the Canadian city. Many of his plays, films, and writings can be viewed on YouTube as well as his website. Bandler’s legacy is greater than his disruptive innovations in microwave design and optimization. His life journey encompassed art, theater, fiction, entrepreneurial activities, and academia, leaving a lasting impact on those who experienced his work and spent time with him. His ability to blend creativity with technical expertise made him a remarkable figure in both artistic and engineering circles. To donate to—or nominate a candidate for—the Bandler Graduate Scholarship in Engineering Design, visit the IEEE Canadian Foundation website. 
From Your Site Articles * Double Your Impact This Giving Tuesday › * Why These Members Donate to the IEEE Foundation › * Meet the Teens Whose Tech Reduces Drownings and Fights Air Pollution ›