writings.stephenwolfram.com
Submitted URL: http://writings.stephenwolfram.com/
Effective URL: https://writings.stephenwolfram.com/
Submission: On October 27 via api from US — Scanned from CA
Form analysis: 2 forms found in the DOM — both GET search forms (text input named "s") submitting to https://writings.stephenwolfram.com/.
Text Content
STEPHEN WOLFRAM Writings

ON THE NATURE OF TIME
October 8, 2024

THE COMPUTATIONAL VIEW OF TIME

Time is a central feature of human experience. But what actually is it? In traditional scientific accounts it’s often represented as some kind of coordinate much like space (though a coordinate that for some reason is always systematically increasing for us). But while this may be a useful mathematical description, it’s not telling us anything about what time in a sense “intrinsically is”.

We get closer as soon as we start thinking in computational terms. Because then it’s natural for us to think of successive states of the world as being computed one from the last by the progressive application of some computational rule. And this suggests that we can identify the progress of time with the “progressive doing of computation by the universe”.

But does this just mean that we are replacing a “time coordinate” with a “computational step count”? No. Because of the phenomenon of computational irreducibility. With the traditional mathematical idea of a time coordinate one typically imagines that this coordinate can be “set to any value”, and that then one can immediately calculate the state of the system at that time. But computational irreducibility implies that it’s not that easy. Because it says that there’s often essentially no better way to find what a system will do than by explicitly tracing through each step in its evolution.
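The “progressive doing of computation” can be made concrete with a tiny sketch. The post itself contains no code; the following is a hedged Python illustration using rule 30 (one of Wolfram’s canonical simple programs, mentioned later on this page). The width, boundary handling, and step count are arbitrary illustrative choices. The point is that to learn the state at step t, the loop has no shortcut but to perform all t updates — the essence of computational irreducibility.

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right), with cyclic boundaries
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def state_at(seed, t):
    # No known general shortcut: we trace through every intermediate step
    cells = seed
    for _ in range(t):
        cells = rule30_step(cells)
    return cells

width = 31
seed = [0] * width
seed[width // 2] = 1   # a single "on" cell in the middle
later = state_at(seed, 15)
```

The cyclic (wrap-around) boundary is just a convenience for a finite list; for small step counts the growing pattern never reaches the edges, so it behaves like an infinite tape.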
Continue reading

NESTEDLY RECURSIVE FUNCTIONS
September 27, 2024

YET ANOTHER RULIOLOGICAL SURPRISE

Integers. Addition. Subtraction. Maybe multiplication. Surely that’s not enough to be able to generate any serious complexity. In the early 1980s I had made the very surprising discovery that very simple programs based on cellular automata could generate great complexity. But how widespread was this phenomenon? At the beginning of the 1990s I set about exploring this. Over and over I would consider some type of system and be sure it was too simple to “do anything interesting”. And over and over again I would be wrong. And so it was that on the night of August 13, 1993, I thought I should check what could happen with integer functions defined using just addition and subtraction.

Continue reading

FIVE MOST PRODUCTIVE YEARS: WHAT HAPPENED AND WHAT’S NEXT
August 29, 2024

SO… WHAT HAPPENED?

Today is my birthday—for the 65th time. Five years ago, on my 60th birthday, I did a livestream where I talked about some of my plans. So… what happened? Well, what happened was great. And in fact I’ve just had the most productive five years of my life. Nine books. 3,939 pages of writings (1,283,267 words). 499 hours of podcasts and 1,369 hours of livestreams. 14 software product releases (with our great team). Oh, and a bunch of big—and beautiful—ideas and results.

It’s been wonderful. And unexpected. I’ve spent my life alternating between technology and basic science, progressively building a taller and taller tower of practical capabilities and intellectual concepts (and sharing what I’ve done with the world). Five years ago everything was going well, and making steady progress. But then there were the questions I never got to. Over the years I’d come up with a certain number of big questions. And some of them, within a few years, I’d answered. But others I never managed to get around to.
And five years ago, as I explained in my birthday livestream, I began to think “it’s now or never”. I had no idea how hard the questions were. Yes, I’d spent a lifetime building up tools and knowledge. But would they be enough? Or were the questions just not for our time, but only perhaps for some future century?

Continue reading

WHAT’S REALLY GOING ON IN MACHINE LEARNING? SOME MINIMAL MODELS
August 22, 2024

THE MYSTERY OF MACHINE LEARNING

It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do all kinds of impressive and sometimes almost magical things. But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of what’s going on inside them.

The basic structure of neural networks can be pretty simple. But by the time they’re trained up, with all their weights, etc., it’s hard to tell what’s going on—or even to get any good visualization of it. And indeed it’s far from clear even what aspects of the whole setup are actually essential, and what are just “details” that have perhaps been “grandfathered” all the way from when computational neural nets were first invented in the 1940s.

Well, what I’m going to try to do here is to get “underneath” this—and to “strip things down” as much as possible. I’m going to explore some very minimal models—that, among other things, are more directly amenable to visualization. At the outset, I wasn’t at all sure that these minimal models would be able to reproduce any of the kinds of things we see in machine learning. But, rather surprisingly, it seems they can.
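One way to see the “strip things down” spirit in miniature: learning can happen with no gradients at all, just small random parameter changes kept only when the fit doesn’t get worse. This is a hedged sketch, not the post’s actual models (which use discrete rule arrays and mesh nets); the quadratic model form, the target function, and the step sizes below are all illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed for a reproducible illustration

xs = [i / 10 for i in range(11)]        # sample points in [0, 1]
target = [x * (1 - x) for x in xs]      # function to be learned (arbitrary choice)

def model(params, x):
    # A deliberately tiny model: a quadratic with 3 adjustable weights
    a, b, c = params
    return a + b * x + c * x * x

def loss(params):
    # Sum of squared errors over the sample points
    return sum((model(params, x) - t) ** 2 for x, t in zip(xs, target))

params = [0.0, 0.0, 0.0]
best = loss(params)
for _ in range(5000):
    trial = list(params)
    trial[random.randrange(3)] += random.uniform(-0.1, 0.1)  # one random mutation
    if loss(trial) <= best:              # keep only changes that don't hurt
        params, best = trial, loss(trial)
```

No derivative is ever computed; all improvement comes from accepted random mutations, which is the kind of minimal adaptive process the post investigates.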
Continue reading

YET MORE NEW IDEAS AND NEW FUNCTIONS: LAUNCHING VERSION 14.1 OF WOLFRAM LANGUAGE & MATHEMATICA
July 31, 2024

In this post:
* For the 36th Time… the Latest from Our R&D Pipeline
* There’s Now a Unified Wolfram App
* Vector Databases and Semantic Search
* RAGs and Dynamic Prompting for LLMs
* Connect to Your Favorite LLM
* Symbolic Arrays and Their Calculus
* Binomials and Pitchforks: Navigating Mathematical Conventions
* Fixed Points and Stability for Differential and Difference Equations
* The Steady Advance of PDEs
* Symbolic Biomolecules and Their Visualization
* Optimizing Neural Nets for GPUs and NPUs
* The Statistics of Dates
* Building Videos with Programs
* Optimizing the Speech Recognition Workflow
* Historical Geography Becomes Computable
* Astronomical Graphics and Their Axes
* When Is Earthrise on Mars? New Level of Astronomical Computation
* Geometry Goes Color, and Polar
* New Computation Flow in Notebooks: Introducing Cell-Linked %
* The UX Journey Continues: New Typing Affordances, and More
* Syntax for Natural Language Input
* Diff[ ] … for Notebooks and More!
* Lots of Little Language Tune-Ups
* Making the Wolfram Compiler Easier to Use
* Even Smoother Integration with External Languages
* Standalone Wolfram Language Applications!
* And Yet More…

FOR THE 36TH TIME… THE LATEST FROM OUR R&D PIPELINE

Today we celebrate the arrival of the 36th (x.x) version of the Wolfram Language and Mathematica: Version 14.1. We’ve been doing this since 1986: continually inventing new ideas and implementing them in our larger and larger tower of technology. And it’s always very satisfying to be able to deliver our latest achievements to the world.

We released Version 14.0 just half a year ago. And—following our modern version scheduling—we’re now releasing Version 14.1. For most technology companies a .1 release would contain only minor tweaks. But for us it’s a snapshot of what our whole R&D pipeline has delivered—and it’s full of significant new features and new enhancements.
If you’ve been following our livestreams, you may have already seen many of these features and enhancements being discussed as part of our open software design process. And we’re grateful as always to members of the Wolfram Language community who’ve made suggestions—and requests. In fact, Version 14.1 contains a particularly large number of long-requested features, some of which involved development that took many years and required many intermediate achievements.

Continue reading

RULIOLOGY OF THE “FORGOTTEN” CODE 10
June 1, 2024

MY ALL-TIME FAVORITE SCIENCE DISCOVERY

June 1, 1984—forty years ago today—is when it would be fair to say I made my all-time favorite science discovery. As with basically all significant science discoveries (despite the way histories often present them), it didn’t happen without several long years of buildup. But June 1, 1984, was when I finally had my “aha” moment—even though in retrospect the discovery had actually been hiding in plain sight for more than two years.

My diary from 1984 has a cryptic note that shows what happened on June 1, 1984. There’s a part that says “BA 9 pm → LDN”, recording the fact that at 9pm that day I took a (British Airways) flight to London (from New York; I lived in Princeton at that time). “Sent vega monitor → SUN” indicates that I had sent the broken display of a computer I called “vega” to Sun Microsystems. But what’s important for our purposes here is the little “side” note:

Take C10 pict. R30 R110

What did that mean? C10, R30 and R110 were my shorthand designations for particular, very simple programs of types I’d been studying: “code 10”, “rule 30” and “rule 110”. And my note reminded me that I wanted to take pictures of those programs with me that evening, making them on the laser printer I’d just got (laser printers were rare and expensive devices at the time).

Continue reading

WHY DOES BIOLOGICAL EVOLUTION WORK?
A MINIMAL MODEL FOR BIOLOGICAL EVOLUTION AND OTHER ADAPTIVE PROCESSES
May 3, 2024

THE MODEL

Why does biological evolution work? And, for that matter, why does machine learning work? Both are examples of adaptive processes that surprise us with what they manage to achieve. So what’s the essence of what’s going on? I’m going to concentrate here on biological evolution, though much of what I’ll discuss is also relevant to machine learning—but I plan to explore that in more detail elsewhere.

OK, so what is an appropriate minimal model for biology? My core idea here is to think of biological organisms as computational systems that develop by following simple underlying rules. These underlying rules in effect correspond to the genotype of the organism; the result of running them is in effect its phenotype. Cellular automata provide a convenient example of this kind of setup. Here’s an example involving cells with 3 possible colors; the rules are shown on the left, and the behavior they generate is shown on the right.

Note: Click any diagram to get Wolfram Language code to reproduce it.

We’re starting from a single cell, and we see that from this “seed” a structure is grown—which in this case dies out after 51 steps. And in a sense it’s already remarkable that we can generate a structure that neither goes on forever nor dies out quickly—but instead manages to live (in this case) for exactly 51 steps.

Continue reading

WHEN EXACTLY WILL THE ECLIPSE HAPPEN? A MULTIMILLENNIUM TALE OF COMPUTATION
March 29, 2024

See also: “Computing the Eclipse: Astronomy in the Wolfram Language” »

Updated and expanded from a post for the eclipse of August 21, 2017.

PREPARING FOR APRIL 8, 2024

On April 8, 2024, there’s going to be a total eclipse of the Sun visible on a line across the US. But when exactly will the eclipse occur at a given location? Being able to predict astronomical events has historically been one of the great triumphs of exact science.
But how well can it actually be done now? The answer is: well enough that even though the edge of totality moves at just over 1000 miles per hour, it’s possible to predict when it will arrive at a given location to within perhaps a second. And as a demonstration of this, for the total eclipse back in 2017 we created a website to let anyone enter their geo location (or address) and then immediately compute when the eclipse would reach them—as well as generate many pages of other information.

Continue reading

COMPUTING THE ECLIPSE: ASTRONOMY IN THE WOLFRAM LANGUAGE
March 29, 2024

See also: “When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation” »

BASIC ECLIPSE COMPUTATION

It’s taken millennia to get to the point where it’s possible to accurately compute eclipses. But now—as a tiny part of making “everything in the world” computable—computation about eclipses is just a built-in feature of the Wolfram Language. The core function is SolarEclipse. By default, SolarEclipse tells us the time of the next solar eclipse from now.

Continue reading

CAN AI SOLVE SCIENCE?
March 5, 2024

Note: Click any diagram to get Wolfram Language code to reproduce it. Wolfram Language code for training the neural nets used here is also available (requires GPU).

WON’T AI EVENTUALLY BE ABLE TO DO EVERYTHING?

Particularly given its recent surprise successes, there’s a somewhat widespread belief that eventually AI will be able to “do everything”, or at least everything we currently do. So what about science? Over the centuries we humans have made incremental progress, gradually building up what’s now essentially the single largest intellectual edifice of our civilization. But despite all our efforts, there are still all sorts of scientific questions that remain open. So can AI now come in and just solve all of them? To this ultimate question we’re going to see that the answer is inevitably and firmly no.
But that certainly doesn’t mean AI can’t importantly help the progress of science. At a very practical level, for example, LLMs provide a new kind of linguistic interface to the computational capabilities that we’ve spent so long building in the Wolfram Language. And through their knowledge of “conventional scientific wisdom” LLMs can often provide what amounts to a very high-level “autocomplete” for filling in “conventional answers” or “conventional next steps” in scientific work.

Continue reading

© Stephen Wolfram, LLC