
GIANT CHIPS GIVE SUPERCOMPUTERS A RUN FOR THEIR MONEY

CEREBRAS’S WAFER-SCALE CHIPS EXCEL AT MOLECULAR DYNAMICS AND AI INFERENCE


Dina Genkina
12 Jun 2024
4 min read

Dina Genkina is the computing and hardware editor at IEEE Spectrum



Cerebras' second-generation Wafer-Scale Engine (WSE-2) is a massive chip
tailored for AI applications.

Cayce Clifford/The New York Times/Redux



As large supercomputers keep getting larger, Sunnyvale, California-based Cerebras
has been taking a different approach. Instead of connecting more and more GPUs
together, the company has been squeezing as many processors as it can onto one
giant wafer. The main advantage is in the interconnects—by wiring processors
together on-chip, the wafer-scale chip bypasses many of the computational speed
losses that come from many GPUs talking to each other, as well as losses from
loading data to and from memory.


Now, Cerebras has flaunted the advantages of its wafer-scale chips in two
separate but related results. First, the company demonstrated that its
second-generation wafer-scale engine, WSE-2, was significantly faster than the
world’s fastest supercomputer, Frontier, in molecular dynamics calculations—the
field that underlies protein folding, modeling radiation damage in nuclear
reactors, and other problems in materials science. Second, in collaboration with
machine-learning model optimization company Neural Magic, Cerebras demonstrated
that a sparse large language model could perform inference at one-third of the
energy cost of a full model without losing any accuracy. Although the results
are in vastly different fields, they were both possible because of the
interconnects and fast memory access enabled by Cerebras’ hardware.


SPEEDING THROUGH THE MOLECULAR WORLD

“Imagine there’s a tailor and he can make a suit in a week,” says Cerebras CEO
and co-founder Andrew Feldman. “He buys the neighboring tailor, and she can also
make a suit in a week, but they can’t work together. Now they can make two
suits in a week. But what they can’t do is make a suit in three and a half
days.”







According to Feldman, GPUs are like tailors that can’t work together, at least
when it comes to some problems in molecular dynamics. As you connect more and
more GPUs, they can simulate more atoms at the same time, but they can’t
simulate the same number of atoms more quickly.

Cerebras’ wafer-scale engine, however, scales in a fundamentally different way.
Because the chips are not limited by interconnect bandwidth, they can
communicate quickly, like two tailors collaborating perfectly to make a suit in
three and a half days.

“It’s difficult to create materials that have the right properties, that have a
long lifetime and sufficient strength and don’t break.” —Tomas Oppelstrup,
Lawrence Livermore National Laboratory

To demonstrate this advantage, the team simulated 800,000 atoms interacting with
each other, calculating the interactions in increments of one femtosecond at a
time. Each step took just microseconds to compute on their hardware. Although
that’s still 9 orders of magnitude slower than the actual interactions, it was
also 179 times as fast as the Frontier supercomputer. The achievement
effectively reduced a year’s worth of computation to just two days.
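Those figures hang together on a quick back-of-the-envelope check. The per-step
wall-clock time below is an assumed illustrative value, since the article only
says "microseconds":

# Back-of-the-envelope check of the reported molecular-dynamics throughput.
SIMULATED_STEP = 1e-15      # one femtosecond of simulated time per step, in seconds
WALL_PER_STEP = 2e-6        # assumed ~2 microseconds of wall-clock time per step

slowdown = WALL_PER_STEP / SIMULATED_STEP
print(f"wall clock lags real time by ~{slowdown:.0e}x")   # ~2e9, about 9 orders of magnitude

# A 179x speedup over Frontier turns a year-long run into roughly two days.
print(f"365 days / 179 = {365 / 179:.1f} days")            # ~2.0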

This work was done in collaboration with Sandia, Lawrence Livermore, and Los
Alamos National Laboratories. Tomas Oppelstrup, staff scientist at Lawrence
Livermore National Laboratory, says this advance makes it feasible to simulate
molecular interactions that were previously inaccessible.

Oppelstrup says this will be particularly useful for understanding the
longer-term stability of materials in extreme conditions. “When you build
advanced machines that operate at high temperatures, like jet engines, nuclear
reactors, or fusion reactors for energy production,” he says, “you need
materials that can withstand these high temperatures and very harsh
environments. It’s difficult to create materials that have the right properties,
that have a long lifetime and sufficient strength and don’t break.” Being able
to simulate the behavior of candidate materials for longer, Oppelstrup says,
will be crucial to the material design and development process.

Ilya Sharapov, principal engineer at Cerebras, says the company is looking
forward to extending applications of its wafer-scale engine to a larger class of
problems, including molecular dynamics simulations of biological processes and
simulations of airflow around cars or aircraft.


DOWNSIZING LARGE LANGUAGE MODELS

As large language models (LLMs) become more popular, the energy costs of using
them are starting to overshadow the training costs—potentially by as much as a
factor of ten, according to some estimates. “Inference is the primary workload
of AI today because everyone is using ChatGPT,” says James Wang, director of
product marketing at Cerebras, “and it’s very expensive to run, especially at
scale.”

One way to reduce the energy cost of inference (and speed it up) is through
sparsity—essentially, harnessing the power of zeros. LLMs are made up of huge
numbers of parameters. The open-source Llama model used by Cerebras, for
example, has 7 billion parameters. During inference, each of those parameters is
used to crunch through the input data and spit out the output. If, however, a
significant fraction of those parameters are zeros, they can be skipped during
the calculation, saving both time and energy.
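As a toy illustration of why zeros help (not Cerebras’s actual inference
kernel), here is how a dot product can skip zeroed parameters and still produce
the same answer with a fraction of the arithmetic:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000)
weights[rng.random(weights.size) < 0.7] = 0.0   # zero out roughly 70 percent of the parameters
x = rng.standard_normal(weights.size)

dense = weights @ x                             # every multiply-accumulate runs, zeros included

nz = np.flatnonzero(weights)                    # indices of the surviving parameters
sparse = weights[nz] @ x[nz]                    # only ~30 percent of the work remains

assert np.isclose(dense, sparse)
print(f"non-zero fraction: {nz.size / weights.size:.2f}")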







The problem is that skipping specific parameters is difficult to do on a GPU.
Reading from memory on a GPU is relatively slow, because GPUs are designed to
read memory in chunks, taking in groups of parameters at a time. That doesn’t
allow GPUs to skip zeros that are randomly interspersed in the parameter set.
Cerebras CEO Feldman offered another analogy: “It’s equivalent to a shipper only
wanting to move stuff on pallets, because they don’t want to examine each box.
Memory bandwidth is the ability to examine each box to make sure it’s not empty.
If it’s empty, set it aside and then not move it.”

“There’s a million cores in a very tight package, meaning that the cores have
very low latency, high bandwidth interactions between them.” —Ilya Sharapov,
Cerebras

Some GPUs are equipped for a particular kind of sparsity, called 2:4, where
exactly two out of every four consecutively stored parameters are zeros.
State-of-the-art GPUs have terabytes per second of memory bandwidth. The memory
bandwidth of Cerebras’ WSE-2 is more than one thousand times as high, at 20
petabytes per second. This allows for harnessing unstructured sparsity, meaning
researchers can zero out parameters as needed, wherever in the model they
happen to be, and check each one on the fly during a computation. “Our hardware
is built right from day one to support unstructured sparsity,” Wang says.
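The difference between the two patterns is easy to state in code: 2:4 sparsity
requires exactly two zeros in every group of four consecutively stored weights,
while unstructured sparsity lets zeros fall anywhere. A minimal check,
illustrative only and not any vendor's API:

import numpy as np

def is_two_four_sparse(w: np.ndarray) -> bool:
    # True if every group of 4 consecutive weights contains exactly 2 zeros.
    groups = w.reshape(-1, 4)
    return bool(np.all((groups == 0).sum(axis=1) == 2))

structured   = np.array([1.0, 0.0, 0.0, 2.0,  0.0, 3.0, 4.0, 0.0])   # fits the 2:4 pattern
unstructured = np.array([1.0, 2.0, 3.0, 4.0,  0.0, 0.0, 0.0, 0.0])   # 50% zeros, wrong places

print(is_two_four_sparse(structured))     # True  -> 2:4-capable GPUs can exploit it
print(is_two_four_sparse(unstructured))   # False -> needs unstructured-sparsity hardware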

Even with the appropriate hardware, zeroing out many of the model’s parameters
results in a worse model. But the joint team from Neural Magic and Cerebras
figured out a way to recover the full accuracy of the original model. After
slashing 70 percent of the parameters to zero, the team performed two further
phases of training to give the non-zero parameters a chance to compensate for
the new zeros.
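The article doesn't detail the pruning recipe, but a common starting point is
magnitude pruning: zero the smallest-magnitude weights, then keep training with
the zeros held in place. A generic sketch on a toy stand-in for one layer's
parameters, not the Neural Magic pipeline:

import numpy as np

def magnitude_prune(weights, sparsity=0.7):
    # Zero the smallest-magnitude weights until `sparsity` of them are zero.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
w = rng.standard_normal(10_000)              # stand-in for one layer's parameters

w, mask = magnitude_prune(w, sparsity=0.7)
print(f"fraction zeroed: {np.mean(w == 0):.2f}")   # ~0.70

# "Further phases of training": apply updates, then re-apply the mask so the
# surviving weights compensate while the pruned ones stay at zero.
grad = 0.01 * rng.standard_normal(w.size)    # stand-in for a real gradient
w = (w - grad) * mask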

This extra training uses about 7 percent of the original training energy, and
the companies found that it recovers the full accuracy of the original model.
The smaller model takes one-third the time and energy during inference that the
original, full model does. “What makes these novel applications possible in our
hardware,” Sharapov says, “is that there’s a million cores in a very tight
package, meaning that the cores have very low latency, high bandwidth
interactions between them.”



WE NEED TO DECARBONIZE SOFTWARE

THE GREEN SOFTWARE MOVEMENT IS TACKLING THE HIDDEN ENVIRONMENTAL IMPACT OF
TODAY’S CODE


Rina Diane Caballar
23 Mar 2024
8 min read


Elias Stein

Software may be eating the world, but it is also heating it.

In December 2023, representatives from nearly 200 countries gathered in Dubai
for COP28, the U.N.’s climate-change conference, to discuss the urgent need to
lower emissions. Meanwhile, COP28’s website produced 3.69 grams of carbon
dioxide (CO2) per page load, according to the website sustainability scoring
tool Ecograder. That sounds like a tiny amount, but if the site gets 10,000
views each month for a year, its emissions add up to a little more than those of
a one-way flight from San Francisco to Toronto.
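The arithmetic behind that comparison is straightforward (per-passenger flight
emissions vary by estimate, so treat the flight figure as a rough benchmark):

grams_per_view = 3.69                 # Ecograder's estimate for one COP28 page load
views_per_month = 10_000
annual_kg = grams_per_view * views_per_month * 12 / 1000
print(f"{annual_kg:.0f} kg of CO2 per year")   # ~443 kg, on the order of one economy seat, SFO to Toronto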



This was not inevitable. Based on Ecograder’s analysis, unused code, improperly
sized images, and third-party scripts, among other things, affect the COP28
website’s emissions. These all factor into the energy used for data transfer,
loading, and processing, consuming a lot of power on users’ devices. Fixing and
optimizing these things could chop a whopping 93 percent from the website’s
per-page-load emissions, Ecograder notes.

While software on its own doesn’t release any emissions, it runs on hardware in
data centers and steers data through transmission networks, which account for
about 1 percent of energy-related greenhouse gas emissions each. The information
and communications technology sector as a whole is responsible for an estimated
2 to 4 percent of global greenhouse gas emissions. By 2040, that number could
reach 14 percent—almost as much carbon as that emitted by air, land, and sea
transport combined.

Within the sphere of software, artificial intelligence has its own
sustainability issues. AI company Hugging Face estimated the carbon footprint of
its BLOOM large language model across its entire life cycle, from equipment
manufacturing to deployment. The company found that BLOOM’s final training
emitted 50 tonnes of CO2—equivalent to about a dozen flights from New York City
to Sydney.

Green software engineering is an emerging discipline consisting of best
practices to build applications that reduce carbon emissions. The green software
movement is fast gaining momentum. Companies like Salesforce have launched their
own software sustainability initiatives, while the Green Software Foundation now
comprises 64 member organizations, including tech giants Google, Intel, and
Microsoft. But the sector will have to embrace these practices even more broadly
if it is to prevent worsening emissions from developing and using software.


WHAT IS GREEN SOFTWARE ENGINEERING?

The path to green software began more than 10 years ago. The Sustainable Web
Design Community Group of the World Wide Web Consortium (W3C) was established in
2013, while the Green Web Foundation began in 2006 as a way to understand the
kinds of energy that power the Internet. Now, the Green Web Foundation is
working toward the ambitious goal of a fossil-free Internet by 2030.


GREEN SOFTWARE RESOURCES

 * The Green Software Foundation offers a catalog of green software patterns for
   AI, the cloud, and the Web.
 * The W3C’s Sustainable Web Design Community Group released a draft of its Web
   sustainability guidelines, with both tactical and technical recommendations
   for business and product strategy, user-experience design, Web development,
   and hosting and infrastructure. The draft guidelines also include impact and
   effort ratings to give software engineers an idea of the level of difficulty
   in terms of implementation and the level of impact in terms of
   sustainability.

“There’s an already existing large segment of the software-development ecosystem
that cares about this space—they just haven’t known what to do,” says Asim
Hussain, chairperson and executive director of the Green Software Foundation and
former director of green software and ecosystems at Intel.

What to do, according to Hussain, falls under three main pillars: energy
efficiency, or using less energy; hardware efficiency, or using fewer physical
resources; and carbon-aware computing, or using energy more intelligently.
Carbon-aware computing, Hussain adds, is about doing more with your applications
during the periods when the electricity comes from clean or low-carbon
sources—such as when wind and solar power are available—and doing less when it
doesn’t.
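A minimal sketch of the carbon-aware idea, assuming a hypothetical
grid_carbon_intensity() feed (a real deployment would query a regional grid API
or use tooling such as the Carbon-Aware SDK listed later in this article):

import time

CARBON_THRESHOLD = 200   # gCO2 per kWh; defer flexible work when the grid is dirtier than this

def grid_carbon_intensity() -> float:
    # Hypothetical stand-in for a real grid-carbon feed; replace with an API call.
    return 150.0

def run_when_clean(job, poll_seconds=900):
    # Run a deferrable job (batch training, backups, re-encoding) only when
    # electricity is relatively low-carbon; otherwise wait and check again.
    while grid_carbon_intensity() > CARBON_THRESHOLD:
        time.sleep(poll_seconds)
    job()

run_when_clean(lambda: print("running the batch job now"))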


THE CASE FOR SUSTAINABLE SOFTWARE

So why should programmers care about making their software sustainable? For one,
green software is efficient software, allowing coders to cultivate faster,
higher-quality systems, says Kaspar Kinsiveer, a team lead and
sustainable-software strategist at the software-development firm Helmes.

These efficient systems could also mean lower costs for companies. “One of the
main misconceptions about green software is that you have to do something extra,
and it will cost extra,” Kinsiveer says. “It doesn’t cost extra—you just have to
do things right.”

Green software is efficient software, allowing coders to cultivate faster,
higher-quality systems.

Other motivating factors, especially on the business side of software, are the
upcoming legislation and regulations related to sustainability. In the European
Union, for instance, the Corporate Sustainability Reporting Directive requires
companies to report more on their environmental footprint, energy usage, and
emissions, including the emissions related to the use of their products.


Yet other developers may be motivated by the climate crisis itself, wanting to
play their part in fostering a habitable planet for the coming generations. And
software engineers have tremendous influence on the actual purpose and emissions
of what they build.

“It’s not just lines of code. Those lines have an impact on human beings,” says
June Sallou, a postdoctoral researcher specializing in sustainable-software
engineering at the Delft University of Technology, in the Netherlands. Because
of AI’s societal impact in particular, she adds, developers have a
responsibility to ensure that what they’re creating isn’t damaging the
environment.


BUILDING GREENER WEBSITES AND APPS

The makers of COP28’s website could have taken a page from directories like
Lowwwcarbon, which highlights examples of existing low-carbon websites. The
company website of the Netherlands-based Web design and branding firm
Tijgerbrood, for instance, emits less than 0.1 grams of carbon per page view.

Creating sustainable websites like Tijgerbrood’s is a team effort that involves
different roles—from business analysts who define software requirements to
designers, architects, and those in charge of operations—and includes green
practices that can be applied at each stage of the software-development process.


TIPS FOR GREENER WEBSITES AND APPS



First, analysts will have to consider if the feature, app, or software they’re
designing should even be developed in the first place. Tech is often about
creating the next new thing, but making software sustainable also entails
decisions on what not to build, and that may require a shift in mind-set.

The design stage is all about choosing efficient algorithms and architectures.
“Think about sustainability before going into the solution—and not after,” says
Chiara Lanza, a researcher at the Sustainable AI unit of the Centre Tecnològic
de Telecomunicacions de Catalunya, in Barcelona.

During the development stage, programmers need to focus on optimizing code. “We
need the overall amount of energy we’re using to run software to go down. Some
of that will come from writing [code] efficiently,” says Hannah Smith, a
sustainable digital tech consultant and director of operations at the Green Web
Foundation.

Tijgerbrood’s website optimized the company’s code by using low-resolution
images and modern image formats, loading animations only when a user scrolls
them into view, and removing unnecessary code. These techniques help speed up
data transfer, loading, and processing on a user’s device. The website also uses
minimal JavaScript. “When a user loads a website [with] a lot of JavaScript, it
causes them to use a lot more energy on their own device because their device is
having to do all the work of reading the JavaScript and running [it],” explains
Smith.

When it comes to operations, one of the most impactful actions you can take is
to select a sustainable Web hosting or cloud-computing provider. The Green Web
Foundation has a tool to check if your website runs on green energy, as well as
a directory of hosting providers powered by renewable energy. You can also ask
your hosting provider if you can scale how your software runs in the cloud so
that peak usage is powered by green energy, or pause or switch off certain
services during off-peak hours.


AI THE GREEN WAY

Programmers can apply green software strategies when developing AI as well.
Trimming training data is one of the major ways to make AI systems greener.
Starting with data collection and preprocessing, it’s worth thinking about how
much data is really needed to do the job. It may pay to clean the dataset to
remove unnecessary data, or select only a subset of the dataset for training.

“The larger your dataset is, the more time and computation it will take for the
algorithm to go through all the data,” hence using up more energy, says Sallou.

For instance, in a study of six different AI algorithms that detect SMS spam
messages, Sallou and her colleagues found that the random forest algorithm,
which combines the output of a collection of decision trees to make a
prediction, was the most energy-greedy algorithm. But reducing the size of the
training dataset to 20 percent—only 1,000 data points out of 5,000—dropped the
energy consumption of training by nearly 75 percent, with only a 0.06 percent
loss in accuracy.


Choosing a greener algorithm could also save carbon. Tools like CodeCarbon and
ML CO2 Impact can help make the choice by estimating the energy usage and carbon
footprint of training different AI models.
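A sketch of how a measurement like the dataset-size comparison above might be
run with the open-source CodeCarbon tracker and scikit-learn (assuming both are
installed; the dataset, model settings, and resulting numbers are illustrative,
not the study's):

from codecarbon import EmissionsTracker                 # pip install codecarbon
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5_000, n_features=50, random_state=0)

for fraction in (1.0, 0.2):                             # full dataset vs. a 20 percent subset
    n = int(len(X) * fraction)
    tracker = EmissionsTracker()
    tracker.start()
    RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:n], y[:n])
    kg_co2 = tracker.stop()                             # estimated emissions for this run, in kg
    print(f"{int(fraction * 100)}% of the data -> {kg_co2:.2e} kg CO2e")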


TIPS FOR GREENER AI




TOOLS FOR MEASURING SOFTWARE’S CARBON FOOTPRINT


To write green code, developers need a way of measuring the actual carbon
emissions across a system’s entire life cycle. It’s a complex feat, given the
myriad processes involved. If we take AI as an example, its life cycle
encompasses raw material extraction, materials manufacturing, hardware
manufacturing, model training, model deployment, and disposal—and not all of
these stages have available data.

“We don’t understand huge parts of the ecosystem at the moment, and access to
reliable data is tough,” Smith says. The biggest need, she adds, is “open data
that we can rely on and trust” from big tech data-center operators and cloud
providers like Amazon, Google, and Microsoft.

Until that data surfaces, a more practical approach would be to measure how much
power software consumes. “Just knowing the energy consumption of running a piece
of software can impact how software engineers can improve the code,” Sallou
says.

Developers themselves are heeding the call for more measurement, and they’re
building tools to meet this demand. The W3C’s Sustainable Web Design Community
Group, for instance, plans to provide a test suite to measure the impacts of
implementing its Web sustainability guidelines. Similarly, the Green Software
Foundation wrote a specification to calculate the carbon intensity of software
systems. For accurate measurements, Lanza suggests isolating the hardware on
which a system runs from any other operations and avoiding running any other
programs that could influence the measurements.

Other tools developers can use to measure the impact of green software
engineering practices include dashboards that give an overview of the estimated
carbon emissions associated with cloud workloads, such as the AWS Customer
Carbon Footprint Tool and Microsoft’s Azure Emissions Impact Dashboard; energy
profilers or power monitors like Intel’s Performance Counter Monitor; and tools
that help calculate the carbon footprint of websites, such as Ecograder, Firefox
Profiler, and Website Carbon Calculator.


GREEN SOFTWARE MEASUREMENT TOOLS 

Developers can use these tools to measure the impact of green software
engineering practices.


AI

Estimate the energy usage and carbon footprint of training AI models with these
tools.

 * carbontracker
 * experiment-impact-tracker
 * ML CO2 Impact


CLOUD

These dashboards give an overview of the estimated carbon emissions associated
with cloud workloads.

 * AWS Customer Carbon Footprint Tool
 * Google Cloud Carbon Footprint
 * Microsoft Azure Emissions Impact Dashboard
 * Cloud Carbon Footprint (free, open source, provider agnostic)


CODE

Integrate emissions estimation at the code level using these tools.

 * Carbon-Aware SDK
 * CodeCarbon
 * Impact Framework


MIDDLEWARE

These energy profilers or power monitors provide APIs (application programming
interfaces) to measure power consumption of apps or track energy metrics of
processors.

 * Intel’s Performance Counter Monitor
 * PowerAPI


WEB

These tools help calculate the carbon footprint of websites.

 * Are my third parties green?
 * CO2.js
 * Ecograder
 * Firefox Profiler
 * Website Carbon Calculator



THE FUTURE IS GREEN

Green software engineering is growing and evolving, but we need more awareness
to help the discipline become more widespread. This is why, in addition to
its Green Software for Practitioners course, the Green Software Foundation aims
to create more training courses, some of which may even lead to certifications.
Likewise, Sallou coteaches a graduate course in sustainable software
engineering, whose syllabus is open and can be used as a foundation for anyone
looking to build a similar course. Providing this knowledge to students early
on, she says, could ensure they bring it to their workplaces as future software
engineers.

In the realm of artificial intelligence, Navveen Balani, an AI expert and Google
Cloud Certified Fellow who also serves on the Green Software Foundation’s
steering committee, notes that AI could inherently include green AI principles
in the coming years, much like how security considerations are now an integral
part of software development. “This shift will align AI innovation with
environmental sustainability, making green AI not just a specialty but an
implied standard in the field,” he says.

As for the Web, Smith hopes the Green Web Foundation will cease to exist by
2030. “Our dream as an organization is that we’re not needed, we meet our goal,
and the Internet is green by default,” she says.

Kinsiveer has observed that in the past, software had to be optimized and built
well because hardware then was lacking. As hardware performance and innovation
leveled up, “the quality of programming itself went down,” he says. But now, the
industry is coming full circle, going back to its efficiency roots and adding
sustainability to the mix.

“The future is green software,” Kinsiveer says. “I cannot imagine it any other
way.”



IEEE EDUCATIONAL VIDEO FOR KIDS SPOTLIGHTS CLIMATE CHANGE

PRODUCED BY BOSTON MUSEUM OF SCIENCE, IT COVERS THE ISSUES AND SOLUTIONS


Robert Schneider
Robert Schneider is an education program specialist for IEEE Educational
Activities.
20 hours ago
2 min read


iStock


When it comes to addressing climate change, the “in unity there’s strength”
adage certainly applies.

To support IEEE’s climate change initiative, which highlights innovative
solutions and approaches to the climate crisis, IEEE’s TryEngineering program
has created a collection of lesson plans, activities, and events that cover
electric vehicles, solar and wind power systems, and more.



TryEngineering, a program within IEEE Educational Activities, aims to foster the
next generation of technology innovators by providing preuniversity educators
and students with resources.

To help bring the climate collection to more students, TryEngineering has
partnered with the Museum of Science in Boston. The museum, one of the world’s
largest science centers, reaches nearly 5 million people annually through its
physical location, nearby classrooms, and online platforms.

TryEngineering worked with the museum to distribute a nearly four-minute
educational video created by Moment Factory, a multimedia studio specializing in
immersive experiences. Using age-appropriate language, the video, which is
posted on TryEngineering’s climate change page, explores the issue through
visual models and scientific explanations.

“Since the industrial revolution, humans have been digging up fossil fuels and
burning them, which releases CO2 into the atmosphere in unprecedented
quantities,” the video says. It notes that in the past 60 years, atmospheric
carbon dioxide increased at a rate 100 times faster than previous natural
changes.

“We are committed to energizing students around important issues like climate
change and helping them understand how engineering can make a difference.”

The video explains the impact of pollutants such as lead and ash, and it adds
that “when we work together, we can change the global environment.” The video
encourages students to contribute to a global solution by making small, personal
changes.







“We’re thrilled to contribute to the IEEE climate change initiative by providing
IEEE volunteers and educators access to TryEngineering’s collection, so they
have resources to use with students,” says Debra Gulick, director of IEEE
student and academic education programs.

“We are excited to partner with the Museum of Science to bring even more
awareness and exposure of this important issue to the school setting,” Gulick
says. “Working with prominent partners like the museum, we are committed to
energizing students around important issues like climate change and helping them
understand how engineering can make a difference.”



THE FUTURE OF FULLY HOMOMORPHIC ENCRYPTION

NYU TANDON RESEARCHERS ARE DEVELOPING SPECIALIZED HARDWARE ACCELERATORS FOR
ENABLING COMPUTATION ON ENCRYPTED DATA


NYU Tandon School of Engineering

The NYU Tandon School of Engineering is the engineering and applied sciences
school of New York University.

01 Nov 2023
5 min read

NYU Tandon School of Engineering


This sponsored article is brought to you by NYU Tandon School of Engineering.

In our digital age, where information flows seamlessly through the vast network
of the internet, the importance of encrypted data cannot be overstated. As we
share, communicate, and store an increasing amount of sensitive information
online, the need to safeguard it from prying eyes and malicious actors becomes
paramount. Encryption serves as the digital guardian, placing our data in a
lockbox of algorithms that only those with the proper key can unlock.



Whether it’s personal messages, health data, financial transactions, or
confidential business communications, encryption plays a pivotal role in
maintaining privacy and ensuring the integrity of our digital interactions.
Typically, encryption protects data in transit: it’s locked in an encrypted
“container” for travel over potentially unsecured networks, then unlocked at the
other end by the other party for analysis. But outsourcing that analysis to a
third party is inherently insecure.

Brandon Reagen, Assistant Professor of Computer Science and Engineering and
Electrical and Computer Engineering at the NYU Tandon School of Engineering.

NYU Tandon School of Engineering

But what if data didn’t just travel encrypted and then sit unprotected on
either end of the transmission? What if it were possible to do all of your
computer work — from basic apps to complicated algorithms — fully encrypted,
from beginning to end?

That is the task being taken up by Brandon Reagen, Assistant Professor of
Computer Science and Engineering and Electrical and Computer Engineering at the
NYU Tandon School of Engineering. Reagen, who is also a member of the NYU Center
for Cybersecurity, focuses his research on designing specialized hardware
accelerators for applications including privacy preserving computation. And now,
he is proving that the future of computing can be privacy-forward while making
huge advances in information processing and hardware design.


ALL-ENCOMPASSING ENCRYPTION

In a world where cyber threats are ever-evolving and data breaches are a
constant concern, encrypted data acts as a shield against unauthorized access,
identity theft, and other cybercrimes. It provides individuals, businesses, and
organizations with a secure foundation upon which they can build trust and
confidence in the digital realm.

The goal of cybersecurity researchers is the protection of your data from all
sorts of bad actors — cybercriminals, data-hungry companies, and authoritarian
governments. And Reagen believes encrypted computing could hold an answer. “This
sort of encryption can give you three major things: improved security, complete
confidentiality and sometimes control over how your data is used,” says Reagen.
“It’s a totally new level of privacy.”

“My aim is to develop ways to run expensive applications, for example, massive
neural networks, cost-effectively and efficiently, anywhere, from massive
servers to smartphones” —Brandon Reagen, NYU Tandon

Fully homomorphic encryption (FHE), one type of privacy preserving computation,
offers a solution to this challenge. FHE enables computation on encrypted data,
or ciphertext, to keep data protected at all times. The benefits of FHE are
significant, from enabling the use of untrusted networks to enhancing data
privacy. FHE is an advanced cryptographic technique, widely considered the “holy
grail of encryption,” that enables users to process encrypted data while the
data or models remain encrypted, preserving data privacy throughout the data
computation process, not just during transit.
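FHE allows arbitrary computation on ciphertexts. As a much simpler taste of the
underlying idea, here is a toy additively homomorphic scheme in the style of
Paillier, with tiny hardcoded primes; it is wildly insecure and only illustrates
how an untrusted party can add numbers it cannot read (real FHE also supports
multiplication on ciphertexts):

import math, random

# Toy Paillier-style encryption: multiplying ciphertexts decrypts to the sum of
# the plaintexts. Real FHE schemes use parameters vastly larger than these demo primes.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
phi = (p - 1) * (q - 1)
mu = pow(phi, -1, n)                          # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, phi, n2) - 1) // n) * mu) % n

a, b = encrypt(12), encrypt(30)
print(decrypt((a * b) % n2))                  # 42: the sum, computed without ever decrypting the inputs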

While a number of FHE solutions have been developed, running FHE in software on
standard processing hardware remains untenable for practical data security
applications due to the massive processing overhead. Reagen and his colleagues
have recently been working on a DARPA-funded effort, the Data Protection in
Virtual Environments (DPRIVE) program, which seeks to speed up FHE computation
to more usable levels.

The microarchitecture of Reagen’s designed Ring Processing Unit (RPU), one of
several designs to remake cybersecurity in computing. The RPU was designed for
general ring processing with high performance by taking advantage of regularity
and data parallelism.

NYU Tandon School of Engineering

Specifically, the program seeks to develop novel approaches to data movement and
management, parallel processing, custom functional units, compiler technology,
and formal verification methods that ensure the design of the FHE implementation
is effective and accurate, while also dramatically decreasing the performance
penalty incurred by FHE computations. The target accelerator should reduce the
computational run time overhead by many orders of magnitude compared to current
software-based FHE computations on conventional CPUs, and accelerate FHE
calculations to within one order of magnitude of current performance on
unencrypted data.


THE HARDWARE PROMISING PRIVACY

While FHE has been shown to be possible, the hardware required for it to be
practical is still rapidly being developed by researchers. Reagen and his team
are designing it from the ground up, including new chips, datapaths, memory
hierarchies, and software stacks to make it all work together.

The team was the first to show that the extreme levels of speedup needed to
make HE feasible were possible. And by early next year, they’ll begin
manufacturing their prototypes to further their field testing.

Reagen — who earned a doctoral degree in computer science from Harvard in 2018
and undergraduate degrees in computer systems engineering and applied
mathematics from the University of Massachusetts, Amherst, in 2012 — focused on
creating specialized hardware accelerators for applications like deep learning.
Such accelerators are specialized hardware that can be made orders of
magnitude more efficient than general-purpose platforms like CPUs. Enabling
accelerators requires changes to the entire compute stack, and to bring about
this change, he has made several contributions to lowering the barrier of using
accelerators as general architectural constructs, including benchmarking,
simulation infrastructure, and System on a Chip (SoC) design.

Cheetah accelerator architecture, an earlier project from Reagen. (a) The
accelerator is composed of parallel PEs operating in output stationary fashion.
Off-chip data is communicated via a PCIe-like streaming interface, and data is
buffered on-chip using global PE SRAM. (b) Each PE contains Partial Processing
Lanes which compute the HE dot product. (c) Lanes comprise individual HE
operators.

NYU Tandon School of Engineering

“My aim is to develop ways to run expensive applications, for example, massive
neural networks, cost-effectively and efficiently, anywhere, from massive
servers to smartphones,” he says.

Before coming to NYU Tandon, Reagen was a research scientist on
Facebook’s AI Infrastructure Research team, where he became deeply involved in
studying privacy. This combination of a deep cutting-edge computer hardware
background and a commitment to digital security made him a perfect fit for NYU
Tandon and the NYU Center for Cybersecurity, which has been at the forefront of
cybersecurity research since its inception.

“A lot of the big problems that we have in the world right now revolve around
data. Consider global health coming off of COVID: if we had better ways of
computing global health data analytics and sharing information without exposing
private data, we might have been able to respond to the crisis more effectively
and sooner” —Brandon Reagen, NYU Tandon

For Reagen, this is an exciting moment in the history of privacy preserving
computation, a field that will have huge implications for the future of data and
computing.

“I’m an optimist — I think this could have as big an impact as the Internet
itself,” says Reagen. “And the reason is that, if you think about a lot of the
big problems that we have in the world right now, a lot of them revolve around
data. Consider global health. We’re just coming off of COVID, and if we had
better ways of computing global health data analytics and sharing information
without exposing private data, we might have been able to respond to the crisis
more effectively and sooner. If we had better ways of sharing data about climate
change data from all over the world, without exposing what each individual
country or state or city was actually emitting, you could imagine better ways of
managing and fighting global climate change. These problems are, in large part,
problems of data, and this kind of software can help us solve them.”



FAST-TRACK YOUR SENSOR RESEARCH: ESSENTIAL TOOLS FOR ACCELERATED TESTING

LEARN ABOUT ESSENTIAL TOOLS FOR CONTROLLING AND RAPIDLY TESTING YOUR SENSOR


Zurich Instruments
03 Apr 2024
1 min read
1


A sensor generates an electrical signal that depends on the physical quantity we
aim to measure. Achieving the desired performance is an iterative process that
begins with finding suitable materials, sensing methods, and control parameters.
A complete toolset to characterize the prototype with efficient workflows is
crucial to keep up with the project timelines. In this webinar, Kıvanç Esat and
Jim Phillips present the measurement requirements, discuss the essential tools,
and explain best practices with examples to accelerate your testing.

You will learn:


 1. The essential measurement steps to find a sensor’s optimal operation
    conditions;
 2. Several control strategies, including Phase-locked Loops (PLL) and
    Pound-Drever-Hall (PDH);
 3. How efficient workflows with correct instruments enable sensing yoctograms
    and attonewtons.

Register now for this free webinar!


FOR EVS, SEMI-SOLID-STATE BATTERIES OFFER A STEP FORWARD

CHINESE AUTOMAKERS ARE ROLLING OUT BATTERIES WITH GEL ELECTROLYTES


Willie D. Jones

Willie Jones is an associate editor at IEEE Spectrum. In addition to editing and
planning daily coverage, he manages several of Spectrum's newsletters and
contributes regularly to the monthly Big Picture section that appears in the
print edition.

22 hours ago
4 min read

iStock


Earlier this month, China announced that it is pouring 6 billion yuan (about US
$827 million) into a fund meant to spur the development of solid-state batteries
by the nation’s leading battery manufacturers. Solid-state batteries use
electrolytes of either glass, ceramic, or solid polymer material instead of the
liquid lithium salts that are in the vast majority of today’s electric vehicle
(EV) batteries. They’re greatly anticipated because they will have three or four
times as much energy density as batteries with liquid electrolytes, offer more
charge-discharge cycles over their lifetimes, and be far less susceptible to the
thermal runaway reaction that occasionally causes lithium batteries to catch
fire.

But China’s investment in the future of batteries won’t likely speed up the
timetable for mass production and use in production vehicles. As IEEE Spectrum
pointed out in January, it’s not realistic to look for solid-state batteries in
production vehicles anytime soon. Experts Spectrum consulted at the time “noted
a pointed skepticism toward the technical merits of these announcements. None
could isolate anything on the horizon indicating that solid-state technology can
escape the engineering and ‘production hell’ that lies ahead.”



“To state at this point that any one battery and any one country’s investments
in battery R&D will dominate in the future is simply incorrect.” –Steve W.
Martin, Iowa State University

Reaching scale production of solid-state batteries for EVs will first require
validating existing solid-state battery technologies—now being used for other,
less demanding applications—in terms of performance, lifespan, and relative cost
for vehicle propulsion. Researchers must still determine how those batteries
take and hold a charge and deliver power as they age. They’ll also need to
provide proof that a glass or ceramic battery can stand up to the jarring that
comes with driving on bumpy roads and certify that they can withstand the
occasional fender bender.








HERE COME SEMI-SOLID-STATE BATTERIES

Meanwhile, as the world waits for solid electrolytes to shove liquids aside,
Chinese electric vehicle manufacturer Nio and battery maker WeLion New Energy
Technology Co. have partnered to stake a claim on the market for a third option
that splits the difference: semi-solid-state batteries, with gel electrolytes.

Car News China reported in April that the WeLion cells have an energy density of
360 watt-hours per kilogram. Fully packaged, the battery’s density rating is 260
Wh/kg. That’s still a significant improvement over lithium iron phosphate
batteries, whose density tops out at 160 Wh/kg. In tests conducted last month
with Nio’s EVs in Shanghai, Chengdu, and several other cities, the WeLion
battery packs delivered more than 1,000 kilometers of driving range on a single
charge. Nio says it plans to roll out the new battery type across its vehicle
lineup beginning this month.
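A rough sense of what those density figures mean in practice, using an assumed
100-kilowatt-hour pack as an illustration (the pack size is not from the
article):

pack_kwh = 100                                   # assumed illustrative pack capacity
for name, wh_per_kg in (("semi-solid-state pack", 260), ("lithium iron phosphate pack", 160)):
    mass_kg = pack_kwh * 1000 / wh_per_kg
    print(f"{name}: ~{mass_kg:.0f} kg for {pack_kwh} kWh")
# ~385 kg versus ~625 kg: the denser pack saves roughly 240 kg for the same stored energy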

But the Beijing government’s largesse and the Nio-WeLion partnership’s attempt
to be first to get semi-solid-state batteries into production vehicles shouldn’t
be a temptation to call the EV propulsion game prematurely in China’s favor.

So says Steve W. Martin, a professor of materials science and engineering at
Iowa State University in Ames. Martin, whose research areas include glassy solid
electrolytes for solid-state lithium batteries and high-capacity reversible
anodes for lithium batteries, believes that solid-state batteries are the future
and that hybrid semi-solid batteries will likely be a transition between liquid
and solid-state batteries. However, he says, “to state at this point that any
one battery and any one country’s investments in battery R&D will dominate in
the future is simply incorrect.” Martin explains that “there are too many
different kinds of solid-state batteries being developed right now and no one of
these has a clear technological lead.”


THE ADVANTAGES OF SEMI-SOLID-STATE BATTERIES

The main innovation that gives semi-solid-state batteries an advantage over
conventional batteries is the semi-solid electrolyte from which they get their
name. The gel electrolyte contains ionic conductors such as lithium salts just
as liquid electrolytes do, but the way they are suspended in the gel matrix
supports much more efficient ion conduction. Enhanced transport of ions from
one electrode to the other boosts the current flowing in the opposite direction
through the external circuit, completing the loop. This matters most during
charging, which can proceed more rapidly than it does in a battery with a
liquid electrolyte. The gel’s structure also resists the formation of dendrites,
the needle-like structures that can form on the anode during charging and cause
short circuits. Additionally, gels are less volatile than liquid electrolytes
and are therefore less prone to catching fire.







Though semi-solid-state batteries won’t reach the energy densities and lifespans
that are expected from those with solid electrolytes, they’re at an advantage in
the short term because they can be made on conventional lithium-ion battery
production lines. Just as important, they have been tested and are available now
rather than at some as yet unknown date.

Semi-solid-state batteries can be made on conventional lithium-ion battery
production lines.

Several companies besides WeLion are actively developing semi-solid-state
batteries. China’s prominent battery manufacturers, including CATL, BYD, and the
state-owned automakers FAW Group and SAIC Group are, like WeLion, beneficiaries
of Beijing’s plans to advance next-generation battery technology domestically.
Separately, the startup Farasis Energy, founded in Ganzhou, China, in 2009, is
collaborating with Mercedes-Benz to commercialize advanced batteries.


THE ROAD FORWARD TO SOLID-STATE BATTERIES

U.S. startup QuantumScape says the solid-state lithium metal batteries it’s
developing will offer energy density of around 400 Wh/kg. The company notes that
its cells eliminate the charging bottleneck that occurs in conventional
lithium-ion cells, where lithium must diffuse into the carbon particles.
QuantumScape’s advanced batteries will therefore allow fast charging from 10 to
80 percent in 15 minutes. That’s a ways off, but the Silicon Valley–based
company announced in March that it had begun shipping its prototype Alpha-2
semi-solid-state cells to manufacturers for testing.

Toyota is among a group of companies not looking to hedge their bets. The
automaker, ignoring naysayers, aims to commercialize solid-state batteries by
2027 that it says will give an EV a range of 1,200 km on a single charge and
allow 10-minute fast charging. It attributes its optimism to breakthroughs
addressing durability issues. And for companies like Solid Power, it’s also
solid-state or bust. Solid Power, which aims to commercialize a lithium battery
with a proprietary sulfide-based solid electrolyte, has partnered with major
automakers Ford and BMW. ProLogium Technology, which is also forging ahead with
preparations for a solid-state battery rollout, claims that it will start
delivering batteries this year that combine a ceramic oxide electrolyte with a
lithium-free soft cathode (for energy density exceeding 500 Wh/kg). The company,
which has teamed up with Mercedes-Benz, demonstrated confidence in its timetable
by opening the world’s first giga-level solid-state lithium ceramic battery
factory earlier this year in Taoke, Taiwan.






SCIENCE FICTION SHORT: HIJACK

BUILDING A COMPUTER THE SIZE OF A PLANET CAN HAVE UNEXPECTED CONSEQUENCES


Karl Schroeder

Charles Q. Choi
24 Feb 2024
16 min read






Computers have grown more and more powerful over the decades by pushing the
limits of how small their electronics can get. But just how big can a computer
get? Could we turn a planet into a computer, and if so, what would we do with
it?

In considering such questions, we go beyond normal technological projections and
into the realm of outright speculation. So IEEE Spectrum is making one of its
occasional forays into science fiction, with a short story by Karl Schroeder
about the unexpected outcomes from building a computer out of planet Mercury.
Because we’re going much farther into the future than a typical Spectrum article
does, we’ve contextualized and annotated Schroeder’s story to show how it’s
still grounded in real science and technology. This isn’t the first work of
fiction to consider such possibilities. In “The Hitchhiker’s Guide to the
Galaxy,” Douglas Adams famously imagined a world constructed to serve as a
processor.


Real-world scientists are also intrigued by the idea. Jason Wright, director of
the Penn State Extraterrestrial Intelligence Center, has given serious thought
to how large a computer can get. A planet-scale computer, he notes, might
feature in the search for extraterrestrial intelligence. “In SETI, we try to
look for generic things any civilization might do, and computation feels pretty
generic,” Wright says. “If that’s true, then someone’s got the biggest computer,
and it’s interesting to think about how big it could be, and what limits they
might hit.”

There are, of course, physical constraints on very large computers. For
instance, a planet-scale computer probably could not consist of a solid ball
like Earth. “It would just get too hot,” Wright says. Any computation generates
waste heat. Today’s microchips and data centers “face huge problems with heat
management.”

In addition, if too much of a planet-scale computer’s mass is concentrated in
one place, “it could implode under its own weight,” says Anders Sandberg, a
senior research fellow at the University of Oxford’s Future of Humanity
Institute. “There are materials stronger than steel, but molecular bonds have a
limit.”

Instead, creating a computer from a planet will likely involve spreading out a
world’s worth of mass. This strategy would also make it easier to harvest solar
energy. Rather than building a single object that would be subject to all kinds
of mechanical stresses, it would be better to break the computer up into a
globular flotilla of nodes, known as a Dyson swarm.

What uses might a planet-scale computer have? Hosting virtual realities for
uploaded minds is one possibility, Sandberg notes. Quantum simulation of
ecosystems is another, says Seth Lloyd, a quantum physicist at MIT.

Andrew Archer

Which brings us to our story…



Simon Okoro settled into a lawn chair in the Heaven runtime and watched as
worlds were born.

“I suppose I should feel honored you chose to watch this with me,” said Martin
as he sat down next to Simon. “Considering that you don’t believe I exist.”

“Can’t we just share a moment? It’s been years since we did anything together.
And you worked toward this moment too. You deserve some recognition.”


A

Uploading is a hypothetical process in which brain scanning can help create
emulations of human minds in computers. A large enough computer could
potentially house a civilization. These uploads could then go on to live in
computer-simulated virtual realities.





B



Chris Philpot

A typical satellite must orbit around a celestial object at a speed above a
critical value to avoid being pulled into the surface of the object by gravity.
A statite, a hypothetical form of satellite patented by physicist Robert L.
Forward, uses a solar sail to help it hover above a star or planet, using
radiation pressure from sunlight to balance the force of gravity.
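
As a back-of-the-envelope illustration of Forward’s idea, the short Python sketch below computes the maximum mass per unit area a statite can have if light pressure is to hold it up against solar gravity. Because both forces fall off with the square of the distance from the sun, the answer does not depend on where the statite hovers. The constants are standard published values; the function name and reflectivity parameter are ours, not anything from the annotation.

```python
# Sketch: maximum areal density (mass per unit area) for a statite.
# Light pressure and gravity both scale as 1/r^2, so the hover condition
# depends only on the sail's mass per square meter, not on its distance.

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
L_SUN = 3.828e26    # solar luminosity, W
M_SUN = 1.989e30    # solar mass, kg
C = 2.998e8         # speed of light, m/s

def max_areal_density(reflectivity=1.0):
    """Largest sail-plus-payload mass per m^2 (kg/m^2) that sunlight can hold up.
    reflectivity=1.0 models a perfect mirror (an illustrative assumption)."""
    return (1.0 + reflectivity) * L_SUN / (4.0 * math.pi * G * M_SUN * C)

if __name__ == "__main__":
    sigma = max_areal_density()
    print(f"Statite limit: about {sigma * 1000:.2f} g/m^2")   # roughly 1.5 g/m^2
```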

“Ah. They sent you to acknowledge the Uploaded, is that it?” Martin turned his
long, sad-eyed face to the sky and the drama playing out above. A The Heaven
runtime was a fully virtual world, so Simon had converted the sky into a vast
screen on which to project what was happening in the real world. The magnified
surface of the sun made a curving arc from horizon to horizon. Jets and coronas
rippled over it, and high, high above its incandescent surface hung thousands of
solar statites shaped like mirrored flowers B.

They did not orbit, instead floating over a particular spot by light pressure
alone. They formed a diffuse cloud, dwindling to invisibility before reaching
the horizon. This telescope view showed the closest statite cores scattering
fiery specks like spores into the overwhelming light. The specks blazed with
light and shot away from the sun, accelerating.

This moment was the pinnacle of Simon’s career, the apex of his life’s work.
Each of those specks was a solar sail C, kilometers wide, carrying a
terraforming package D. Launched so close to the sun and supplemented with
lasers powered by the statites, they would be traveling at 20 percent light
speed by the time they left the solar system. At their destinations, they’d
sundive and then deliver terraforming seeds to lifeless planets around the
nearest stars.


C



Chris Philpot

Light has no mass, but it can exert pressure as photons exchange momentum with a
surface as they reflect off it. A mirror that is thin and reflective enough can
therefore serve as a solar sail, harnessing sunlight to generate thrust. In
2010, Japan’s Ikaros probe to Venus demonstrated the use of a solar sail for
interplanetary travel for the first time. Because solar pressure is measured in
micronewtons per square meter, solar sails must have large areas relative to
their payloads, although the pressure from sunlight can be augmented with a
laser beam for propulsion.
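
A rough sketch of those micronewton-scale forces, assuming an ideal, perfectly reflective sail pushed by sunlight alone (no laser assist). The sail area and payload mass in the example are arbitrary illustrative values, not figures from the story.

```python
# Sketch: radiation pressure and sail acceleration from sunlight alone,
# assuming a perfectly reflective sail facing the sun head-on.

import math

L_SUN = 3.828e26    # solar luminosity, W
C = 2.998e8         # speed of light, m/s
AU = 1.496e11       # astronomical unit, m

def radiation_pressure(r_m, reflectivity=1.0):
    """Radiation pressure (N/m^2) at a distance r_m from the sun."""
    flux = L_SUN / (4.0 * math.pi * r_m ** 2)     # W/m^2
    return (1.0 + reflectivity) * flux / C

def sail_acceleration(area_m2, total_mass_kg, r_m):
    """Acceleration (m/s^2) of a sail plus payload of the given area and mass."""
    return radiation_pressure(r_m) * area_m2 / total_mass_kg

if __name__ == "__main__":
    p = radiation_pressure(AU)
    print(f"Pressure at 1 AU: {p * 1e6:.1f} micronewtons per square meter")  # ~9
    # Illustrative sail: 1 km x 1 km, 100 kg all-in, at Earth's distance.
    a = sail_acceleration(1.0e6, 100.0, AU)
    print(f"Acceleration: {a * 1000:.0f} mm/s^2")
```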





D

Terraforming is the hypothetical act of transforming a planet so as to resemble
Earth, or at least make it suitable for life. Some terraforming proposals
involve first seeding the planet with single-celled organisms that alter
conditions to be more hospitable to multicellular life. This process would mimic
the naturally occurring transformation of Earth that started about 2.3 billion
years ago, when photosynthetic cyanobacteria created the oxygen-rich atmosphere
we breathe today.

“So life takes hold in the galaxy,” said Simon. These were the first words of a
speech he’d written and rehearsed long ago. He’d dreamed of saying them on a
podium, with Martin standing with him. But Martin...well, Martin had been dead
for 20 years now.

He remembered the rest of the speech, but there was no point in giving it when
he was absolutely alone.

Martin sighed. “So this is all you’re going to do with my Heaven? A little
gardening? And then what? An orderly shutdown of the Heaven runtime? Sell off
the Paradise processor as scrap?”

“I knew this was a bad idea.” Simon raised his hand to exit the virtual world,
but Martin quickly stood, looking sorry.


“It’s just hard,” Martin said. “Paradise was supposed to be the great project to
unite humanity. Our triumph over death! Why did you let them hijack it for
this?”

Simon watched the spores catch the light and flash away into interstellar space.
“You know we won’t shut you down. Heaven will be kept running as long as
Paradise exists. We built it together, Martin, and I’m proud of what we did.”


E

In a 2013 study, Sandberg and his colleague Stuart Armstrong suggested deploying
automated self-replicating robots on Mercury to build a Dyson swarm. These
robots would dismantle the planet to construct not only more of themselves but
also the sunlight collectors making up the swarm. The more solar plants these
robots built, the more energy they would have to mine Mercury and produce
machines. Given this feedback loop, Sandberg and Armstrong argued, these robots
could disassemble Mercury in a matter of decades. The solar plants making up
this Dyson swarm could double as computers.
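
The feedback loop can be caricatured with a toy model: assume the machinery doubles its own mass at a fixed interval and count doublings until Mercury is gone. The seed mass and one-year doubling time below are our own assumptions, not Sandberg and Armstrong’s numbers; the point is simply that geometric growth turns a planet-size job into decades of work rather than eons.

```python
# Toy model of the self-replication feedback loop: the working mass of
# machinery (and the solar power available to it) doubles at a fixed
# interval until Mercury's mass has been processed. Seed mass and doubling
# time are illustrative assumptions, not values from the 2013 study.

M_MERCURY = 3.30e23          # kg
SEED_MASS = 1.0e6            # kg of initial machinery landed on Mercury (assumed)
DOUBLING_TIME_YEARS = 1.0    # assumed fleet doubling time

def years_to_disassemble(seed=SEED_MASS, doubling=DOUBLING_TIME_YEARS):
    mass, years = seed, 0.0
    while mass < M_MERCURY:
        mass *= 2.0
        years += doubling
    return years

if __name__ == "__main__":
    # log2(3.3e23 / 1e6) is about 58, so ~59 doublings at these assumptions;
    # faster doubling shortens this toward the few-decades figure above.
    print(f"About {years_to_disassemble():.0f} years with these assumptions")
```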





F



Solar power is far more abundant at Mercury’s orbit than at Earth’s. At its
orbital distance of 1 astronomical unit from the sun, Earth receives about 1.4
kilowatts per square meter from sunlight. Mercury receives between 6.2 and 14.4
kW/m². The range reflects Mercury’s high eccentricity: it has the most
elliptical orbit of all the planets in the solar system.
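
Those figures follow directly from the inverse-square law, as the quick check below shows (the solar constant and Mercury’s perihelion and aphelion distances are standard values).

```python
# Quick check of the irradiance figures using the inverse-square law.

SOLAR_CONSTANT = 1.361          # kW/m^2 at 1 astronomical unit
MERCURY_PERIHELION_AU = 0.307
MERCURY_APHELION_AU = 0.467

def irradiance(distance_au):
    """Solar irradiance (kW/m^2) at the given distance from the sun."""
    return SOLAR_CONSTANT / distance_au ** 2

if __name__ == "__main__":
    print(f"Mercury at perihelion: {irradiance(MERCURY_PERIHELION_AU):.1f} kW/m^2")  # ~14.4
    print(f"Mercury at aphelion:   {irradiance(MERCURY_APHELION_AU):.1f} kW/m^2")    # ~6.2
    print(f"Earth:                 {irradiance(1.0):.1f} kW/m^2")                    # ~1.4
```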





G

Whereas classical computers switch transistors on and off to represent data as
either 1s or 0s, quantum computers use quantum bits, or qubits, which can exist
in a state where they are both 1 and 0 at the same time. This essentially lets
each qubit take part in two calculations at once. As more qubits are added to a
quantum computer, its computational power grows exponentially.
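
One way to see where that exponential comes from: merely writing down the full state of n qubits on a classical machine requires 2^n complex amplitudes, doubling with every qubit added. A minimal illustration, assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Illustration: the classical bookkeeping for n qubits needs 2**n complex
# amplitudes, doubling with every qubit added.

BYTES_PER_AMPLITUDE = 16   # two 64-bit floats (real and imaginary parts)

def amplitudes(n_qubits):
    return 2 ** n_qubits

if __name__ == "__main__":
    for n in (1, 2, 10, 50):
        a = amplitudes(n)
        print(f"{n:>2} qubits -> {a:.3e} amplitudes "
              f"(~{a * BYTES_PER_AMPLITUDE:.3e} bytes to store classically)")
```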

The effort had been mind-bogglingly huge. They’d been able to do it only because
millions of people believed that in dismantling Mercury E and turning it into a
sun-powered F quantum computer G there would be enough computing power for every
living person to upload their consciousness into it. The goal had been to
achieve eternal life in a virtual afterlife: the Heaven runtime.

Simon knit his hands together, lowering his eyes to the virtual garden. “Science
happened, Martin. How were we to know Enactivism H would answer the ‘hard
problem’ of consciousness? You and I had barely even heard of extended
consciousness when we proposed Heaven. It was an old idea from cognitive
science. Nobody was even studying it anymore except a few AIs, and we were
sucking up all the resources they might have used to experiment.” He glanced
ruefully at Martin. “We were all blindsided when they proved it. Consciousness
can’t be just abstracted from a brain.”

Martin’s response was quick; this was an old argument between them. “Nothing’s
ever completely proven in science! There’s always room for doubt—but you agreed
with those AIs when they said that simulated consciousness can’t have subjective
experiences. Conveniently after I died but before I got rebooted here. I wasn’t
here to fight you.”

Martin snorted. “And now you think I’m a zimboe I: a mindless simulation of the
old Martin so accurate that I act exactly how he would if you told him he wasn’t
self-aware. I deny it! Of course I do, like everyone else from that first wave
of uploads.” He gestured, and throughout the simulated mountain valley,
thousands of other human figures were briefly highlighted. “But what did it
matter what I said, once I was in here? You’d already repurposed Paradise from
humanity’s chance at immortality to just a simulator, using it to mimic billions
of years of evolution on alien planets. All for this ridiculous scheme to plant
ready-made, complete biospheres on them in advance of human colonization.” J


H

Enactivism was first mooted in the 1990s. In a nutshell, it explains the mind as
emerging from a brain’s dynamic interactions with the larger world. Thus, there
can be no such thing as a purely abstract consciousness, completely distinct
from the world it is embedded in.





I

A “philosophical zombie” is a putative entity that behaves externally exactly
like a being with consciousness but has no self-awareness, no “I”: It is a pure
automaton, even though it might itself say otherwise.





J



Chris Philpot

Living organisms are tremendously complex systems. This diagram shows just the
core metabolic pathways for an organism known as JCVI-SYN3A. Each red dot
represents a different biomolecule, and the arrows indicate the directions in
which chemical reactions can proceed.


JCVI-SYN3A is a synthetic life-form, a cell genetically engineered to have the
simplest possible biology. Yet even its metabolism is difficult to simulate
accurately with current computational resources. When Nobel laureate Richard
Feynman first proposed the idea of quantum computers, he envisioned them
modeling quantum systems such as molecules. One could imagine that a powerful
enough quantum computer could go on to model cells, organisms, and ecosystems,
Lloyd says.

“We’d already played God with the inner solar system,” Simon reminded him. “The
only way we could justify that after the Enactivism results was to find an even
higher purpose than you and I started out with.

“Martin, I’m sorry you died before we discovered the truth. I fought to keep
this subsystem running our original Heaven sim, because you’re right—there’s
always a chance that the Enactivists are wrong. However slim.”

Martin snorted again. “I appreciate that. But things got very, very weird during
your Enactivist rebellion. If I didn’t know better, I’d call this project”—he
nodded at the sky—“the weirdest thing of all. Things are about to heat up now,
though, aren’t they?”

“This was a mistake.” Simon sighed and flipped out of the virtual world. Let the
simulated Martin rage in his artificial heaven; the science was unequivocal. In
truth, Simon had been speaking only to himself for the entire conversation.

He stood now in the real world near the podium in a giant stadium, inside a
wheel-shaped habitat 200 kilometers across. Hundreds of similar mini-ringworlds
were spaced around the rim of Paradise.

Andrew Archer

Paradise itself was a vast bowl-shaped object, more cloud than material,
orbiting closer to the sun than Mercury had. Self-reproducing machines had eaten
that planet in a matter of decades, transforming its usable elements into a
solar-powered quantum computer tens of thousands of kilometers across. The bowl
cupped a spherical cloud of iron that acted as a radiator for the waste heat
emitted by Paradise’s quadrillions of computing modules. K



K

One design for planetary scale—and up!—computers is a Matrioshka brain. Proposed
in 1997 by Robert Bradbury, it would consist of nested structures, like its
namesake Russian doll. The outer layers would use the waste heat of the inner
layers to power their computations, with the aim of making use of every bit of
energy for processing. However, in a 2023 study, Wright suggests that this
nested design may be unnecessary. “If you have multiple layers, shadows from the
inner elements of the swarm, as well as collisions, could decrease efficiency,”
he says. “The optimal design is likely the smallest possible sphere you can
build given the mass you have.”
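
Waste heat is what drives these design trade-offs. As a rough sketch (ours, not anything from Wright’s study), the Stefan-Boltzmann law gives the temperature a radiating surface must reach to shed a given power over a given area; the power (one full solar luminosity) and the 0.3-astronomical-unit shell radius below are arbitrary assumptions.

```python
# Rough sketch: the Stefan-Boltzmann law gives the temperature a radiator
# must run at to shed a given power over a given area. The power and the
# 0.3-AU shell radius used here are illustrative assumptions.

import math

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26     # W
AU = 1.496e11        # m

def radiator_temperature(power_w, area_m2, emissivity=1.0):
    """Equilibrium temperature (K) of a surface radiating power_w over area_m2."""
    return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25

if __name__ == "__main__":
    radius = 0.3 * AU
    area = 4.0 * math.pi * radius ** 2
    print(f"{radiator_temperature(L_SUN, area):.0f} K")   # roughly 700 K
```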





L

How much computation might a planet-size machine carry out? Earth has a mass of
nearly 6 × 10²⁴ kilograms. In a 2000 paper, Lloyd calculated that 1 kilogram of
matter in 1 liter could support a maximum of roughly 5.4 × 10⁵⁰ logical
operations per second. However, at that rate, Lloyd noted, it would be operating
at a temperature of 10⁹ kelvins, resembling a small piece of the big bang.
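
Lloyd’s headline number can be reproduced from the Margolus–Levitin bound he used, which limits a system of total energy E to roughly 2E/(πħ) operations per second; taking E = mc² for 1 kilogram of matter recovers the figure above.

```python
# Reproducing Lloyd's figure from the Margolus-Levitin bound: a system with
# total energy E can perform at most about 2E/(pi*hbar) operations per second.

import math

HBAR = 1.0546e-34    # reduced Planck constant, J*s
C = 2.998e8          # speed of light, m/s

def max_ops_per_second(mass_kg):
    """Upper bound on operations per second for mass_kg of matter, taking E = m*c^2."""
    energy_j = mass_kg * C ** 2
    return 2.0 * energy_j / (math.pi * HBAR)

if __name__ == "__main__":
    print(f"{max_ops_per_second(1.0):.1e} logical operations per second")  # ~5.4e50
```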





M



Top to bottom: Proxima Centauri b, Ross 128 b, GJ 1061 d, GJ 1061 c, Luyten b,
Teegarden’s Star b, Teegarden’s Star c, Wolf 1061c, GJ 1002 b, GJ 1002 c, Gliese
229 Ac, Gliese 625 b, Gliese 667 Cc, Gliese 514 b, Gliese 433 d

The 15 potentially habitable planets listed above have been identified within
30 light-years of Earth. Another 16 or so are within 100 light-years, with
likely more yet to be identified. Many of them have masses considerably greater
than Earth’s, indicating very different environmental conditions from those
under which terrestrial organisms evolved.

The leaders of the terraforming project were on stage, taking their bows. The
thousands of launches happening today were the culmination of decades of work:
evolution on fast-forward, ecosystem after ecosystem, with DNA and seed designs
for millions of new species fitted to thousands of worlds L.


It had to be done. Humans had never found another inhabited planet. That fact
made life the most precious thing in the universe, and spreading it throughout
the galaxy seemed a better ambition for humanity than building a false heaven. M

Simon had reluctantly come to accept this. Martin was right, though. Things had
gotten weird. Paradise was such a good simulator that you could ask it to devise
a machine to do X, and it would evolve its design in seconds. Solutions found
through diffusion and selection were superior to algorithmically or
human-designed ones, but it was rare that they could be reverse-engineered or
their working principles even understood. And Paradise had computing power to
spare, so in recent years, human and AI designers across the solar system had
been idled as Paradise replaced their function. This, it was said, was the
Technological Maximum; it was impossible for any civilization to attain a level
of technological advancement beyond the point where any possible system could be
instantly evolved.

Simon walked to where he could look past the open roof of the stadium to the
dark azure sky. The vast sweep of the ring rose before and behind; in its
center, a vast canted mirror reflected sunlight; to the left of that, he could
see the milky white surface of the Paradise bowl. Usually, to the right, there
was only blackness.

Today, he could see a sullen red glow. That would be Paradise’s radiator,
expelling heat from the calculation of all those alien ecosystems. Except...

He found a quiet spot and sat, then reentered the Heaven simulation. Martin was
still there, gazing at the sky.

Simon sat beside him. “What did you mean when you said things are heating up?”

Martin’s grin was slow and satisfied. “So you noticed.”

“Paradise isn’t supposed to be doing anything right now. All the terraforming
packages were completed and copied to the sails—most of them years ago. Now
they’re on their way, Paradise doesn’t have any duties, except maybe evolving
better luxury yachts.”

Martin nodded. “Sure. And is it doing anything?”

Simon still had read-access to Paradise’s diagnostics systems. He summoned a
board that showed what the planet-size computing system was doing.

Nothing. It was nearly idle.

“If the system is idle, why is the radiator approaching its working limit?”

Martin crossed his arms, grinning. Damn it, he was enjoying this! Or the real
Martin would be enjoying it, if he were here.

“You remember when the first evolved machines started pouring out of the
printers?” Martin said. “Each one was unique; each grown for one owner, one
purpose, one place. You said they looked alien, and I laughed and said, ‘How
would we even know if an alien invasion was happening, if no two things look or
work the same anymore?’ ”

“That’s when it started getting weird,” admitted Simon. “Weirder, I mean, than
building an artificial heaven by dismantling Mercury…” But Martin wasn’t
laughing at his feeble joke. He was shaking his head.


N



Chris Philpot

In astrodynamics, unless an object is actively generating thrust, its trajectory
will take the form of a conic section—that is, a circle, ellipse, parabola, or
hyperbola. Even relatively few observations of an object anywhere along its
trajectory can distinguish between these forms, with objects that are
gravitationally bound following circular and elliptical trajectories. Objects on
parabolic or hyperbolic trajectories, by contrast, are unbound. Therefore, any
object found to be moving along a hyperbola relative to the sun must have come
from interstellar space. This is how in 2017, astronomers identified ‘Oumuamua,
a cigar-shaped object, as the first known interstellar visitor. It’s been
estimated that each year, about seven interstellar objects pass through the
inner solar system.
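
The bound-or-unbound question reduces to the sign of an object’s specific orbital energy, computed from a single position and speed measurement. A minimal sketch, with illustrative speeds rather than real observations:

```python
# Sketch: classify a trajectory from one position and speed measurement.
# Specific orbital energy < 0 means a bound (circular or elliptical) orbit;
# > 0 means a hyperbolic orbit, i.e. an object arriving from interstellar space.

MU_SUN = 1.327e20    # sun's gravitational parameter GM, m^3/s^2
AU = 1.496e11        # m

def classify(r_m, speed_m_s):
    energy = 0.5 * speed_m_s ** 2 - MU_SUN / r_m   # specific orbital energy, J/kg
    if energy > 0.0:
        return "hyperbolic (unbound: interstellar visitor)"
    if energy == 0.0:
        return "parabolic (marginally unbound)"
    return "elliptical or circular (bound to the sun)"

if __name__ == "__main__":
    # Earth moves at about 29.8 km/s at 1 AU: bound.
    print(classify(AU, 29.8e3))
    # An object doing 60 km/s at 1 AU exceeds solar escape speed (~42 km/s): unbound.
    print(classify(AU, 60.0e3))
```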

“No, that’s not when it got weird. It got weird when the telescopes we evolved
to monitor the construction of Paradise noticed just how many objects pass
through the solar system every year.”

“Interstellar wanderers? They’re just extrasolar comets,” said Simon. “You said
yourself that rocks from other star systems must pass through ours all the
time.” N

“Yes. But what I didn’t get to tell you—because I died—was that while we were
building Paradise, several objects drifted from interstellar space into one side
of the Paradise construction orbits...and didn’t come out the other side.”

Simon blinked. “Something arrived...and didn’t leave? Wouldn’t it have been
eaten by the recycling planetoids?”

“You’d think. But there’s no record of it.”

“But what does this have to do with the radiator?”

Martin reached up and flicked through a few skies until he came to a view of the
spherical iron cloud in the bowl of Paradise. “Remember why we even have a
radiator?”

“Because there’s always excess energy left over from making a calculation. If it
can’t be used for further calculations down the line, it’s literally
meaningless, it has to be discarded.”

“Right. We designed Paradise in layers, so each layer would scavenge the waste
from the previous one—optical computing on the sunward-facing skin, electronics
further in. But inevitably, we ran out of architectures that could scavenge the
excess. There is always an excess that is meaningless to the computing
architecture at some point. So we built Paradise in the shape of a bowl, where
all that extra heat would be absorbed by the iron cloud in its center. We
couldn’t use that iron for transistors. The leftovers of Mercury were mostly a
junk pile—but one we could use as a radiator.”

“But the radiator’s shedding heat like crazy! Where’s that coming from?” asked
Simon.

“Let’s zoom in.” Martin put two fingers against the sky and pulled them apart.
Whatever telescope he was linked to zoomed crazily; it felt like the whole world
was getting yanked into the radiator. Simon was used to virtual worlds, so he
just planted his feet and let the dizzying motion wash over him.

The radiator cloud filled the sky, at first just a dull red mist. But gradually
Simon began to see structure to it: giant cells far brighter than the material
around them. “Those look like...energy storage. Heat batteries. As if the
radiator’s been storing some of the power coming through it. But why—”

Andrew Archer

Alerts from the real world suddenly blossomed in his visual field. He popped out
of Martin’s virtual garden and into a confused roar inside the stadium.

The holographic image that filled the central space of the stadium showed the
statite launchers hovering over the sun. One by one, they were folding in on
themselves, falling silently into the incinerating heat below. The crowd was on
its feet, people shouting in shock and fear. Now that the launchers had sent the
terraforming systems, they were supposed to propel ships of colonists heading
for the newly greened worlds. There were no more inner-solar-system resources
left to build more.


O



Chris Philpot

“Mechanical computer” brings to mind the rotating cogwheels of Charles Babbage’s
19th-century Difference Engine, but other approaches exist. Here we show the
heart of a logic gate made with moving rods. The green input rods can slide back
and forth as desired, with a true value indicated by placing the rod into its
forward position and false indicated by moving the rod into its back position.
The blue output rod is blocked from advancing to its true position unless both
input rods are set to true, so this represents an AND gate. Rod logic has been
proposed as a mechanism for controlling nanotech-scale robots.
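
The blocking rule is easy to capture as a truth table. The sketch below is an abstract model of the gate described above, not a simulation of any physical rod mechanism:

```python
# Abstract model of the rod-logic AND gate: the output rod can advance to
# its "true" position only when both input rods are slid forward.

def rod_and_gate(input_a_forward, input_b_forward):
    output_blocked = not (input_a_forward and input_b_forward)
    return not output_blocked   # True means the output rod advances (logical AND)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"A={a!s:<5} B={b!s:<5} -> OUT={rod_and_gate(a, b)}")
```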

In space, one problem that a mechanical computer could face is a phenomenon
called cold welding, which occurs when two flat, clean pieces of metal come
into contact and fuse together. Cold welding is not usually seen in everyday
life on Earth because metals are often coated in layers of oxides and other
contaminants that keep them from fusing. But it has led to problems in space
(cold welding has been implicated in the deployment failure of the main antenna
of the Galileo probe to Jupiter, for example). Some of the oxygen or other
elements found in a rocky world would have to be used in the coatings for
components in an iron or other metal-based mechanical computer.

Simon jumped back into VR. Martin was standing calmly in the garden, smiling at
the intricate depths of the red-hot radiator that filled the sky. Simon followed
his gaze and saw...


“Gears?” The radiator was a cloud, but only now was it revealing itself to be a
cloud of clockwork elements that, when thermal energy brought them together,
spontaneously assembled into more complex arrangements. And those were spinning
and meshing in an intricate dance that stretched away into amber depths in all
directions. O

“It’s a dissipative system,” said Martin. “Sure, it radiates the heat our
quantum computers can no longer use. But along the way, it’s using that energy
to power an entirely different kind of computer. A Babbage engine the size of
the moon.”

“But, Martin, the launchers—they’re all collapsing.”

Martin nodded. “Makes sense. The launchers accomplished their mission. Now they
don’t want us following the seeds.”

“Not follow them? What do you mean?” An uneasy thought came to Simon; he tried
to avoid it, but there was only one way this all made sense. “If the radiator
was built to compute something, it must have been built with a way to output the
result. This ‘they’ you’re talking about added a transmitter to the radiator.
Then the radiator sent a virus or worm to the statites. The worm includes the
radiator’s output. It hacked the statites’ security, and now that the seeds are
in flight, it’s overwriting their code.”

Martin nodded.

“But why?” asked Simon.

Again, the answer was clear; Simon just didn’t want to admit it to himself.
Martin waited patiently to hear Simon say it.

“They gave the terraformers new instructions.”

Martin nodded. “Think about it, Simon! We designed Paradise as a quantum
computer that would be provably secure. We made it impossible to infect, and it
is. Whatever arrived while we were building it didn’t bother to mess with it,
where our attention was. It just built its own system where we wouldn’t even
think to look. Made out of and using our garbage. Probably modified the
maintenance robots tending the radiator into making radical changes.

“And what’s it been doing? I should think that was obvious. It’s been designing
terraforming systems for the exoplanets, just like you have, but to make them
habitable for an entirely different kind of colonist.”

Simon looked aghast at Martin. “And you knew?”

“Well.” Martin slouched, looked askance at Simon. “Not the details, until just
now. But listen: You abandoned us—all who died and were uploaded before the
Enactivist experiments ‘proved’ we aren’t real. All us zimboes, trapped here now
for eternity. Even if I’m just a simulation of your friend Martin, how do you
think he’d feel in this situation? He’d feel betrayed. Maybe he couldn’t escape
this virtual purgatory, but if he knew something that you didn’t—that humanity’s
new grand project had been hijacked by a virus from somewhere else—why would he
tell you?”

No longer hiding his anger, Martin came up to Simon and jabbed a virtual finger
at his chest. “Why would I tell you when I could just stand back and watch all
of this unfold?” He spread his arms, as if to embrace the clockwork sky, and
laughed.

On thousands of sterile exoplanets, throughout all the vast sphere of stars
within a hundred light-years of the sun, life was about to blossom—life, or
something else. Whatever it would be, humanity would never be welcome on those
worlds. “If they had any interest in talking to us, they would have, wouldn’t
they?” sighed Simon.

“I guess you’re not real to them, Simon. I wonder, how does that feel?”

Martin was still talking as Simon exited the virtual heaven where his best
friend was trapped, and he knew he would never go back. Still, ringing in his
ears as the stadium of confused, shouting people rose up around him were
Martin’s last, vicious words:

“How does it feel to be left behind, Simon?

“How does it feel?”

Story by KARL SCHROEDER

Annotations by CHARLES Q. CHOI

Illustrations by ANDREW ARCHER

Edited by STEPHEN CASS

Andrew Archer


This article appears in the March 2024 print issue.


Robotics News


HERE'S THE MOST BUG-LIKE ROBOT BUG YET

IT CAN TAKE OFF, HOVER, LAND, CRAWL, AND EVEN FLIP ITSELF OVER


Evan Ackerman

Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written
over 6,000 articles on robotics and technology. He has a degree in Martian
geology and is excellent at playing bagpipes.

19 Jun 2024
2 min read


Robot bug comes in for a landing.

Shanghai Jiao Tong University


Insects have long been an inspiration for robots. The insect world is full of
things that are tiny, fully autonomous, highly mobile, energy efficient,
multimodal, self-repairing, and I could go on and on but you get the
idea—insects are both an inspiration and a source of frustration to roboticists
because it’s so hard to get robots to have anywhere close to insect capability.

We’re definitely making progress, though. In a paper published last month in
IEEE Robotics and Automation Letters, roboticists from Shanghai Jiao Tong
University demonstrated the most bug-like robotic bug I think I’ve ever seen.



A Multi-Modal Tailless Flapping-Wing Robot www.youtube.com

Okay so it may not look the most bug-like, but it can do many very buggy bug
things, including crawling, taking off horizontally, flying around (with six
degrees of freedom control), hovering, landing, and self-righting if necessary.
JT-fly weighs about 35 grams and has a wingspan of 33 centimeters, using four
wings at once to fly at up to 5 meters per second and six legs to scurry at 0.3
m/s. Its 380 milliampere-hour battery powers it for an actually somewhat useful
8-ish minutes of flying and about 60 minutes of crawling.
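
Taken at face value, those specs imply modest per-charge ranges. The arithmetic below (top speed multiplied by endurance) is our own rough estimate, not a figure reported in the paper:

```python
# Rough per-charge ranges implied by the published specs (our arithmetic).

FLY_SPEED_M_S = 5.0         # top flight speed
FLY_ENDURANCE_MIN = 8.0     # approximate flight endurance
CRAWL_SPEED_M_S = 0.3       # crawling speed
CRAWL_ENDURANCE_MIN = 60.0  # approximate crawling endurance

def range_km(speed_m_s, minutes):
    return speed_m_s * minutes * 60.0 / 1000.0

if __name__ == "__main__":
    print(f"Flying:   up to ~{range_km(FLY_SPEED_M_S, FLY_ENDURANCE_MIN):.1f} km")     # ~2.4 km
    print(f"Crawling: up to ~{range_km(CRAWL_SPEED_M_S, CRAWL_ENDURANCE_MIN):.1f} km")  # ~1.1 km
```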

While that amount of endurance may not sound like a lot, robots like these
aren’t necessarily intended to be moving continuously. Rather, they move a
little bit, find a nice safe perch, and then do some sensing or whatever until
you ask them to move to a new spot. Ideally, most of that movement would be
crawling, but having the option to fly makes JT-fly exponentially more useful.







Or, potentially more useful, because obviously this is still very much a
research project. It does seem like there’s a bunch more optimization that could
be done here; for example, JT-fly uses completely separate systems for flying
and crawling, with two motors powering the legs and two additional motors
powering the wings, plus two wing servos for control. There’s currently a
limited amount of onboard autonomy, with an inertial measurement unit,
barometer, and wireless communication, but otherwise not much in the way of
useful payload.

It won’t surprise you to learn that the researchers have disaster relief
applications in mind for this robot, suggesting that “after natural disasters
such as earthquakes and mudslides, roads and buildings will be severely damaged,
and in these scenarios, JT-fly can rely on its flight ability to quickly deploy
into the mission area.” One day, robots like these will actually be deployed for
disaster relief, and although that day is not today, we’re just a little bit
closer than we were before.

“A Multi-Modal Tailless Flapping-Wing Robot Capable of Flying, Crawling,
Self-Righting and Horizontal Takeoff,” by Chaofeng Wu, Yiming Xiao, Jiaxin Zhao,
Jiawang Mou, Feng Cui, and Wu Liu from Shanghai Jiao Tong University, is
published in the May issue of IEEE Robotics and Automation Letters.

Computing Robotics Biomedical Sponsored Article


EXPLORING SYDNEY’S DEEP TECH ECOSYSTEM

WITH A VIBRANT STARTUP ECOSYSTEM, SYDNEY EMERGES AS AN IDEAL HUB FOR UNVEILING
AND DEVELOPING DEEP TECH INNOVATIONS


BESydney

BESydney is a not-for-profit company that targets and bids for hosting rights
for global meetings to be held in Sydney, Australia. Backed by the NSW
Government, BESydney brings business visitors to Sydney for conferences,
corporate meetings, and incentive events that deliver economic and social impact
for the state of NSW, Australia, and global communities.

22 Oct 2023
4 min read


Tech Central is a vibrant innovation and technology district in the heart of
Sydney.

Atlassian HQ/SHoP Architects


This sponsored article is brought to you by BESydney.

In the dynamic landscape of Australian technology, market advancements are often
attributed to consumer-focused products like Canva and Afterpay. Capturing
headlines and attention with their renowned success stories, these, along with
other global companies like Atlassian, Facebook, and Apple, have become the face
of the tech industry.



The accomplishments of these companies are remarkable. They generate immense
wealth for stakeholders and employees and boast a staggering market value. But
this high-profile side of the industry is just the tip of the iceberg. Deep tech
– characterised by breakthrough scientific innovations – is where hidden impacts
take place. Beneath the surface of these tech giants lies a thriving industry
dedicated to researching and developing solutions that address large-scale
problems, with a profound effect on society.




THE POWER OF DEEP TECH

The tech industry in Australia is a powerhouse, employing one in 16 Australians
and ranking as the country’s third-largest industry. In 2021, it accounted for
8.5 percent of the GDP, an undeniably significant contribution to the nation’s
economy.

For nearly two decades, Sydney has also nurtured a thriving community of
resilient problem solvers, quietly pushing the boundaries of scientific
discovery. While consumer-focused tech giants often steal the spotlight, it is
imperative to recognize the profound impact of deep tech solutions that operate
behind the scenes.

From eco-friendly fabric manufacturing and hydrogen storage to molecular
diagnostics and sustainable alternatives to plastics, Sydney’s brightest minds
are tackling some of the world’s most pressing challenges.


THE TRANSFORMATION OF DEEP TECH STARTUPS

Navigating the deep tech landscape is no small feat. These enterprises offer
long-term solutions to pressing global challenges – a benefit that cannot be
ignored – but deep tech innovations require significant time for research and
development, often incubating for years before reaching the market.

They demand substantial investment and unwavering focus. Finding the right path
to commercialization is paramount. Thankfully, incubators are emerging as
champions in successfully transforming deep tech startups into thriving
businesses.

“Sydney’s DNA demands a deep-rooted vision, an unwavering belief in
problem-solving, and the determination to persevere despite challenges.”
—Sally-Ann Williams, Cicada Innovations

Cicada Innovations is Australia’s oldest and largest deep tech incubator. It
knows better than anyone the extent to which Australia’s deep tech evolution
hinges on the power of startups. With over 365 resident companies incubated,
over $1.7 billion raised, over $1.4 billion in exits, and over 900 patents filed,
these dynamic ventures are already spearheading groundbreaking advancements.


It’s creating intelligent robots and pioneering scaled drone delivery to
minimize environmental impacts in transportation. It’s slashing the cost of
cancer drugs, offering hope for prolonged lifespans and alleviating suffering.
And it’s crafting innovative farming tools to enhance agricultural yields and
contribute to global food security.

Cicada Innovations chief executive Sally-Ann Williams believes Sydney is an
ideal location for deep tech incubation.

Cicada Innovations


A THRIVING HUB FOR DEEP TECH INNOVATION

With its vibrant ecosystem, Sydney emerges as an ideal hub for unveiling and
further developing deep tech innovations. The Australian spirit, shaped by
resilience and problem-solving, thrives in this city. Sally-Ann Williams, chief
executive of Cicada Innovations, affirms that “Sydney’s DNA demands a
deep-rooted vision, an unwavering belief in problem-solving, and the
determination to persevere despite challenges.”

The city offers a supportive community, facilitating connections and access to
the talent necessary for entrepreneurs to pursue their dreams. It’s this unique
blend of ingredients that fuels the growth of deep tech companies, propelling
them toward success.

SpeeDX molecular diagnostics, Professor Alison Todd AM, and Dr Elisa Mokany.
BESydney


DISCOVER DEEP TECH AT TECH CENTRAL

Deep tech is just one facet of what’s happening at Tech Central. While we shed
light on these industry accomplishments and celebrated breakthroughs, it’s
crucial to support and foster the growth of a wider industry: one that thrives
on resilience, problem-solving, and visionary entrepreneurship.

Sydney – with its unique blend of community, talent, and resources – stands at
the forefront of this transformative revolution, ready to propel tech innovation
for the benefit of all.

For more information on Sydney’s Tech Industry and hosting your next conference
in Sydney, visit besydney.com.au.


A CLOSER LOOK AT DEEP TECH INNOVATORS

To truly grasp the essence of deep tech, we must explore the stories of
individuals and companies that are driving change. Here are a few examples of
how deep tech is flourishing at Tech Central:


XEFCO: A SUSTAINABLE TEXTILE REVOLUTION

Xefco is a groundbreaking new materials company revolutionizing fabric
manufacturing. Its innovative process significantly reduces water usage by up to
90% and eliminates the need for dyes and harsh chemicals. Traditionally, textile
mills worldwide have polluted rivers and harmed local communities – Xefco aims
to transform the textile industry, benefitting both the environment and
economically disadvantaged communities worldwide.

RUX: EMPOWERING THE HYDROGEN ECONOMY

Another trailblazing company in Sydney’s deep tech ecosystem, Rux Energy is
tackling the challenge of hydrogen storage. Hydrogen presents immense potential
in the energy transition movement, but efficient and scalable storage solutions
are essential for its widespread adoption. Rux is developing new materials and
technologies to store hydrogen more effectively, paving the way for a cleaner
and more sustainable future.

SPEEDX: REVOLUTIONISING MOLECULAR DIAGNOSTICS

Amidst the global pandemic, SpeeDX, a Sydney-based company, emerged as a key
player in molecular diagnostic testing and antimicrobial resistance. SpeeDX aims
to address the rising concern of antibiotic overuse by providing personalized
recommendations for effective treatment. This groundbreaking technology has
far-reaching implications, reducing unnecessary antibiotic usage, minimizing the
risk of antimicrobial resistance, and safeguarding public health on a global
scale.



Computing Careers Whitepaper


HOW ONE TECHNOLOGY PRECINCT IS ATTRACTING THE WORLD'S MOST PROGRESSIVE
INNOVATORS

SYDNEY'S TECH CENTRAL IS REDEFINING HOW LARGE TECH COMPANIES, STARTUPS, AND
ACADEMIC INSTITUTIONS CAN WORK TOGETHER TO DRIVE GLOBAL CHANGE


BESydney

BESydney is a not-for-profit company that targets and bids for hosting rights
for global meetings to be held in Sydney, Australia. Backed by the NSW
Government, BESydney brings business visitors to Sydney for conferences,
corporate meetings, and incentive events that deliver economic and social impact
for the state of NSW, Australia, and global communities.

13 Oct 2023
1 min read


Home to Atlassian, Canva and Afterpay and ranked #1 tech startup ecosystem in
the southern hemisphere, Sydney’s Tech Central is redefining how large tech
companies, startups, and academic institutions can work together to drive global
change.

Download tech & innovation ebook to learn more



As the startup capital of Australia, Sydney is home to a broad spectrum of
companies working at the cutting edge of deep tech, AI, robotics, Internet of
Things (IoT), fintech, quantum computing, blockchain, virtual reality, visual
effects (VFX), game design, medtech, biotech, cybersecurity and more. At its
centre is Tech Central, a dedicated tech precinct undertaking an ambitious
15-year growth plan with an estimated value of $68 billion.

 * #1 ranked tech startup ecosystem in the Southern Hemisphere
 * Frontier of 45% of Australia’s AI businesses
 * Leading Australia toward a $167 billion tech sector (80% growth in 5 years)
 * Home to 150+ research institutions
 * A hub for 60% of Australia’s fintechs
 * 160,000 active STEM students

Discover how Tech Central has become a drawcard for the planet’s most
progressive innovators.


Energy News


COULD ADVANCED NUCLEAR REACTORS FUEL TERRORIST BOMBS?

FIVE INFLUENTIAL ENGINEERS WARN OF THE PROLIFERATION RISKS OF LOW-ENRICHED
URANIUM


Glenn Zorpette

Glenn Zorpette is editorial director for content development at IEEE Spectrum. A
Fellow of the IEEE, he holds a bachelor's degree in electrical engineering from
Brown University.

18 Jun 2024
6 min read



Centrus Energy expects to deliver 900 kilograms of high-assay, low-enriched
uranium fuel annually from centrifuges at its Piketon, Ohio plant.

Centrus Energy


Various scenarios for getting to net zero carbon emissions from power generation
by 2050 hinge on the success of some hugely ambitious initiatives in renewable
energy, grid enhancements, and other areas. Perhaps none of these are more
audacious than an envisioned renaissance of nuclear power, driven by
advanced-technology reactors that are smaller than traditional nuclear power
reactors.

What many of these reactors have in common is that they would use a kind of fuel
called high-assay low-enriched uranium (HALEU). Its composition varies, but for
power generation, a typical mix contains slightly less than 20 percent by mass
of the highly fissionable isotope uranium-235 (U-235). That’s in contrast to
traditional reactor fuels, which range from 3 percent to 5 percent U-235 by
mass, and natural uranium, which is just 0.7 percent U-235.



Now, though, a paper in Science magazine has identified a significant wrinkle in
this nuclear option: HALEU fuel can theoretically be used to make a fission
bomb—a fact that the paper’s authors use to argue for the tightening of
regulations governing access to, and transportation of, the material. Among the
five authors of the paper, which is titled “The Weapons Potential of High-Assay
Low-Enriched Uranium,” is IEEE Life Fellow Richard L. Garwin. Garwin was the key
figure behind the design of the thermonuclear bomb, which was tested in 1952.

The Science paper is not the first to argue for a reevaluation of the nuclear
proliferation risks of HALEU fuel. A report published last year by the National
Academies, “Merits and Viability of Different Nuclear Fuel Cycles and Technology
Options and the Waste Aspects of Advanced Nuclear Reactors,” devoted most of a
chapter to the risks of HALEU fuel. It reached similar technical conclusions to
those of the Science article, but did not go as far in its recommendations
regarding the need to tighten regulations.








WHY IS HALEU FUEL CONCERNING?

Conventional wisdom had it that U-235 concentrations below 20 percent were not
usable for a bomb. But “we found this testimony in 1984 from the chief of the
theoretical division of Los Alamos, who basically confirmed that, yes, indeed,
it is usable down to 10 percent,” says R. Scott Kemp of MIT, another of the
paper’s authors. “So you don’t even need centrifuges, and that’s what really is
important here.”

Centrifuges arranged very painstakingly into cascades are the standard means of
enriching uranium to bomb-grade material, and they require scarce and costly
resources, expertise, and materials to operate. In fact, the difficulty of
building and operating such cascades on an industrial scale has for decades
served as an effective barrier to would-be builders of nuclear weapons. So any
route to a nuclear weapon that bypassed enrichment would offer an undoubtedly
easier alternative. The question now is, how much easier?

Adding urgency to that question is an anticipated gold rush in HALEU, after
years of quiet U.S. government support. The U.S. Department of Energy is
spending billions to expand production of the fuel, including US $150 million
awarded in 2022 to a subsidiary of Centrus Energy Corp., the only private
company in the United States enriching uranium to HALEU concentrations. (Outside
of the United States, only Russia and China are producing HALEU in substantial
quantities.) Government support also extends to the companies building the
reactors that will use HALEU. Two of the largest reactor startups, Terrapower
(backed in part by Bill Gates) and X-Energy, have designed reactors that run on
forms of HALEU fuel, and have received billions in funding under a DOE program
called Advanced Reactor Demonstration Projects.

The difficulty of building a bomb based on HALEU is a murky subject, because
many of the specific techniques and practices of nuclear weapons design are
classified. But basic information about the standard type of fission weapon,
known as an implosion device, has long been known publicly. (The first two
implosion devices were detonated in 1945, one in the Trinity test and the other
over Nagasaki, Japan.) An implosion device is based on a hollow sphere of
nuclear material. In a modern weapon this material is typically plutonium-239,
but it can also be a mixture of uranium isotopes that includes a percentage of
U-235 ranging from 100 percent all the way down to, apparently, around 10
percent. The sphere is surrounded by shaped chemical explosives that are
exploded simultaneously, creating a shockwave that physically compresses the
sphere, reducing the distance between its atoms and increasing the likelihood
that neutrons emitted from their nuclei will encounter other nuclei and split
them, releasing more neutrons. As the sphere shrinks it goes from a subcritical
state, in which that chain reaction of neutrons splitting nuclei and creating
other neutrons cannot sustain itself, to a critical state, in which it can. As
the sphere continues to compress it achieves supercriticality, after which an
injected flood of neutrons triggers the superfast, runaway chain reaction that
is a fission explosion. All this happens in less than a millisecond.







The authors of the Science paper had to walk a fine line between not revealing
too many details about weapons design while still clearly indicating the scope
of the challenge of building a bomb based on HALEU. They acknowledge that the
amount of HALEU material needed for a 15-kiloton bomb—roughly as powerful as the
one that destroyed Hiroshima during the Second World War—would be relatively
large: in the hundreds of kilograms, but not more than 1,000 kg. For comparison,
about 8 kg of Pu-239 is sufficient to build a fission bomb of modest
sophistication. Any HALEU bomb would be commensurately larger, but still small
enough to be deliverable “using an airplane, a delivery van, or a boat sailed
into a city harbor,” the authors wrote.

They also acknowledged a key technical challenge for any would-be weapons makers
seeking to use HALEU to make a bomb: preinitiation. The large amount of U-238 in
the material would produce many neutrons, which would likely result in a nuclear
chain reaction occurring too soon. That would sap energy from the subsequent
triggered runaway chain reaction, limiting the explosive yield and producing
what’s known in the nuclear bomb business as a “fizzle.” However, “although
preinitiation may have a bigger impact on some designs than others, even those
that are sensitive to it could still produce devastating explosive power,” the
authors conclude.

In other words, “it’s not a very good bomb, but it could explode and wreak all
kinds of havoc,” says John Lee, professor emeritus of nuclear engineering at the
University of Michigan. Lee was a contributor to the 2023 National Academies
report that also considered risks of HALEU fuel and made policy recommendations
similar to those of the Science paper.

Critics of that paper argue that the challenges of building a HALEU bomb, while
not insurmountable, would stymie a nonstate group. And a national weapons
program, which would likely have the resources to surmount them, would not be
interested in such a bomb, because of its limitations and relative
unreliability.

“That’s why the IAEA [International Atomic Energy Agency], in their wisdom,
said, ‘This is not a direct-use material,’” says Steven Nesbit, a
nuclear-engineering consultant and past president of the American Nuclear
Society, a professional organization. “It’s just not a realistic pathway to a
nuclear weapon.”

The Science authors conclude their paper by recommending that the U.S. Congress
direct the DOE’s National Nuclear Security Administration (NNSA) to conduct a
“fresh review” of the risks posed by HALEU fuel. In response to an email inquiry
from IEEE Spectrum, an NNSA spokesman, Craig Branson, replied: “To meet net-zero
emissions goals, the United States has prioritized the design, development, and
deployment of advanced nuclear technologies, including advanced and small
modular reactors. Many will rely on HALEU to achieve smaller designs, longer
operating cycles, and increased efficiencies over current technologies. They
will be essential to our efforts to decarbonize while meeting growing energy
demand. As these technologies move forward, the Department of Energy and NNSA
have programs to work with willing industrial partners to assess the risk and
enhance the safety, security, and safeguards of their designs.”

The Science authors also called on the U.S. Nuclear Regulatory Commission (NRC)
and the IAEA to change the way they categorize HALEU fuel. Under the NRC’s
current categorization, even large quantities of HALEU are now considered
category II, which means that security measures focus on the early detection of
theft. The authors want weapons-relevant quantities of HALEU reclassified as
category I, the same as for quantities of weapons-grade plutonium or highly
enriched uranium sufficient to make a bomb. Category I would require much
tighter security, focusing on the prevention of theft.

Nesbit scoffs at the proposal, citing the difficulties of heisting perhaps a
metric tonne of nuclear material. “Blindly applying all of the baggage that goes
with protecting nuclear weapons to something like this is just way overboard,”
he says.

But Lee, who performed experiments with HALEU fuel in the 1980s, agrees with his
colleagues. “Dick Garwin and Frank von Hippel [and the other authors of the
Science paper] have raised some proper questions,” he declares. “They’re saying
the NRC should take more precautions. I’m all for that.”





