www.innovationendeavors.com
34.249.200.254
Public Scan
Submitted URL: http://www.innovationendeavors.com/
Effective URL: https://www.innovationendeavors.com/
Submission: On November 25 via api from US — Scanned from DE
Form analysis
1 form found in the DOM

Name: email-form — POST https://innovationendeavors.us9.list-manage.com/subscribe/post?u=ca016a0243b68052ec84e7418&id=0da0c8ec03&f_id=000e2de1f0
<form id="email-form" name="email-form" data-name="Email Form" action="https://innovationendeavors.us9.list-manage.com/subscribe/post?u=ca016a0243b68052ec84e7418&id=0da0c8ec03&f_id=000e2de1f0" method="post" class="form"
data-wf-page-id="639c7fd2aec4fd802ef9e4bb" data-wf-element-id="066a0a44-4570-b467-f710-a1c76f829219" aria-label="Email Form"><input type="email" class="text-field w-input" maxlength="256" name="EMAIL" data-name="EMAIL" placeholder="Email Address"
id="EMAIL-3" required=""><input type="submit" value="Subscribe" data-wait="Please wait..." class="white-btn form-btn w-button"></form>
Text Content
* Home * Portfolio * Team * About * Insights * Curiosity Camp * Contact FOLLOW US

THE SUPER EVOLUTION IS HERE View our Portfolio

INNOVATION ENDEAVORS IS AN EARLY‑STAGE VENTURE FIRM INVESTING IN THE SUPER EVOLUTION — A NON‑LINEAR APPROACH TO INNOVATION THAT DRIVES GENERATIONAL CHANGE IN ORDER TO SOLVE PROBLEMS THAT MATTER. We partner with Agents of Change: highly technical founders advancing industries with transformative solutions at scale. View Portfolio

Planet AlphaSense Afresh Gatik Atom Computing Eikon Therapeutics Plenty

THERE’S NEVER BEEN A MORE URGENT TIME TO SOLVE PROBLEMS THAT MATTER

THE SUPER EVOLUTION IS MADE POSSIBLE BY THE CONVERGENCE OF THREE TECHNICAL DEVELOPMENTS:

SENSE Physical-world sensors by the billions are giving us high-resolution data that was previously unavailable.

COMPUTE Machine learning and edge computing make computational power stronger and more affordable than ever, allowing us to discover complex patterns and make better predictions.

ENGINEER Advances in engineering, robotics, 3D printing, and CRISPR make it possible to translate insights into physical and biological action quickly, effectively, and affordably.

This phenomenon is giving rise to an exponential increase in the rates of experimentation, iteration, and progress. Learn More
Areas of Focus 1 Physical Economy 2 Intelligent Software 3 Computing Infrastructure 4 Engineering Health 5 Climate View Portfolio

LATEST NEWS AND TOP INSIGHTS

UNVEILING RDI: HOW IDEAS FOR IMPACTFUL STARTUPS ARE DISCOVERED Innovation Endeavors Read More

INSIDE THE AI GOLD RUSH Recode Media Read More

A FOUNDATION MODEL PRIMER Davis Treybig Read More

2023: THE ENERGY TRANSITION TAKES FLIGHT DESPITE SOME TURBULENCE Innovation Endeavors Read More

Investing in visionary founders, transformational technology and emergent ecosystems for a new world.

USING AI TO CREATE INCREDIBLY EFFICIENT SOLAR CELLS: OUR INVESTMENT IN COSMOS INNOVATION Dror Berman · Published in Innovation Endeavors · 5 min read · Nov 10

By Dror Berman and Josh Rapperport

Technological progress is often described in terms of platform shifts. Personal computing. The internet. Mobile. As these new paradigms gain steam, the innovation that follows changes the course of our society. Our core thesis — the Super Evolution — is focused on how platform shifts in data, compute, and engineering converge to produce rapid progress in domains where innovation has long been driven by human experts. We continue to be fascinated by the way this intersection of technological and scientific methods creates combinatorial magic.

Two of the most important platform shifts of the coming decades will be artificial intelligence and the energy transition to carbon-free power. It goes without saying that the current rate of improvement in AI makes this one of the most exciting moments in modern technology. Similarly, the talent and capital flowing into the energy transition are only accelerating and will need to keep doing so if we’re to avoid dangerous climate change. We know AI can transform how science gets done (as Eric Schmidt recently laid out), and we know we need profound innovation in the way we produce electrons if our climate goals are to become reality. This intersection of AI and renewable energy is where our latest investment, Cosmos Innovation, is building. We’re excited to have been part of the team’s journey from the very beginning as they apply the latest in AI to create a better, more efficient physical world.

In concrete terms, Cosmos Innovation is radically accelerating and improving the process by which semiconductor development reaches optimal performance. For semiconductors, as for all processes of experimentation, development has historically been human-driven: researchers’ design of experiments (DoE) is guided by their understanding of the underlying science, and they make changes through each iteration of an experiment to push toward some desired performance.
We can think of this process as traversing a design universe — essentially a matrix of possible process conditions and material compositions, or a “recipe” — in the hopes of arriving at the optimal design. The various process parameters provide the “knobs” that researchers can turn, and they can only change one or two at a time. Cosmos Innovation is using machine learning and iterative learning to turn all the knobs at once. In other words, they are using AI-driven experimental design to reach optimized outcomes with far fewer experiments and much better target performance. They have demonstrated these capabilities with some of the largest semiconductor companies in the world, and the results are extraordinary — Cosmos Innovation can accelerate process optimization by 10x. The technology allows semiconductor manufacturers to develop novel, optimized recipes in a fraction of the time and cost of conventional methods.

Which brings us to the critical application of their technology: producing the most efficient solar panels ever made. Specifically, unlocking the full potential of perovskite-silicon tandem solar cells, a uniquely promising but not-yet-commercialized design. Perovskite-silicon tandem is the ideal candidate for AI-guided design because it is incredibly complex. While the underlying science and performance of crystalline silicon (over 90% of today’s global solar market) are well understood, adding a perovskite layer on top creates significant challenges. Perovskite recipe optimization is hard because of its very high combinatorial complexity, somewhere around 5^72 possible permutations. Perovskite tandem architectures can have 12 or more layers and over 100 input knobs, and there are no performant, physics-based models that researchers can rely on. Furthermore, the “goodness” factors — the relationships between the parameters that drive ultimate performance — are also not well understood. This has led to fundamental challenges around degradation of produced solar cells, with stability and efficiency deteriorating at elevated temperatures and humidity. In short, making sense of the overwhelming number of combinations is impossible — it’s clear that human-driven approaches are too slow and won’t yield optimal results.

Why is it so important that perovskite-silicon reach its full potential, and why is the market opportunity so big? Because of the incredible efficiency potential of perovskites. Delivering cheap, abundant power from the sun (often measured as levelized cost of energy, or LCOE) requires optimizing two primary levers: solar cell cost and solar cell efficiency. While the last decade has seen an order-of-magnitude decline in costs, crystalline silicon efficiency has increased by only a few percentage points, going from roughly 18% to 25%, with a theoretical limit of 29%. Building on all the gains of silicon to date, the theoretical limit of perovskite-silicon is 43%. This remarkable potential is why the solar industry is so excited about perovskites.

The timing is ideal. Fundamental research in perovskites has been gaining momentum for the last decade, with record efficiencies nearing 34%. The fundamental science has been largely derisked, and now Cosmos Innovation is ready to use AI to close the gap to reality. Not only is the team addressing major challenges in designing solar cells, but they are also bringing self-learning to the manufacturing stage. As First Solar showed, it takes incredible execution and manufacturing prowess to produce novel solar architectures at scale.
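As an aside, the “turn all the knobs at once” idea above can be made concrete. Cosmos Innovation’s actual system is proprietary, so the following is only a minimal sketch of a generic AI-guided design-of-experiments loop: fit a surrogate model to the recipes measured so far, then choose the next experiment by scoring candidate recipes on predicted performance plus uncertainty. The knob count, the ranges, and the measure_efficiency function are hypothetical placeholders.

```python
# Minimal sketch of AI-guided design of experiments (NOT Cosmos Innovation's
# actual method): a surrogate model plus an upper-confidence-bound rule that
# varies all knobs simultaneously instead of one or two at a time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
N_KNOBS = 8  # hypothetical; real perovskite stacks have 100+ knobs

def measure_efficiency(recipe: np.ndarray) -> float:
    """Stand-in for fabricating a cell and measuring its efficiency.
    Here: a made-up smooth objective with measurement noise."""
    return float(-np.sum((recipe - 0.6) ** 2) + rng.normal(scale=0.01))

# Seed with a handful of random recipes (each knob normalized to [0, 1]).
X = rng.uniform(size=(5, N_KNOBS))
y = np.array([measure_efficiency(x) for x in X])

for _ in range(20):  # each pass = one (simulated) lab experiment
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(size=(2000, N_KNOBS))  # all knobs vary at once
    mean, std = surrogate.predict(candidates, return_std=True)
    best = candidates[np.argmax(mean + 2.0 * std)]  # explore + exploit
    X = np.vstack([X, best])
    y = np.append(y, measure_efficiency(best))

print(f"Best efficiency found: {y.max():.4f}")
```

In a real lab-in-the-loop system the measurement step would be a physical fabrication-and-characterization cycle and candidate generation would respect process constraints, but the fit, score, select loop is the core pattern behind AI-driven experimental design.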
In this manufacturing stage, too, AI can be a game-changer. Cosmos Innovation intends to use next-gen capabilities to drive producibility, quality control, and root-cause analysis, allowing for repeatable production of cells with commercial stability and performance. Automated, self-learning process control and tuning — long the ideal among visionaries in solar and semiconductors — has been largely out of reach over the last 50 years of solar manufacturing. The first version will be ready for experimentation beginning this fall at the Singapore-based fab.

We first partnered with Cosmos after hearing incredible things about the founders from Demis Hassabis, the CEO of DeepMind, and Tomaso Poggio, the renowned MIT academic. It became clear to us that the founders, Vijay and Joel, are world-class technologists bringing together decades of AI, semiconductor, and solar expertise. Vijay led the AI effort at the Institute for Infocomm Research at the Agency for Science, Technology and Research (A*STAR), where he managed a portfolio of more than 50 AI projects across over 10 domains, with a key focus on semiconductors. Joel was formerly a group head at the Solar Energy Research Institute of Singapore, where he led the development of an award-winning PV manufacturing technology. He also served as an AI team lead at A*STAR, where he led the development and deployment of AI solutions for Tier-1 manufacturers and R&D institutes in the semiconductor, material, and chemical domains. It’s rare to see a team of such multidisciplinary depth, and we’re proud to be supporting them alongside Xora and Two Sigma.

Solar energy is the single most impactful lever available to reduce planetary warming in this decade. That is the conclusion of a highly anticipated report from the Intergovernmental Panel on Climate Change (IPCC), which assessed the various mitigation pathways to curb climate change between now and 2030. AI provides an unprecedented opportunity to meet this moment and change the way we produce electricity. We look forward to the journey! View on Medium

S3 AS THE UNIVERSAL INFRASTRUCTURE BACKEND Davis Treybig · Published in Innovation Endeavors · 12 min read · Oct 24

TL;DR

1. Traditionally, infrastructure services such as databases have built their own storage layer on top of local disk storage (e.g. EBS volumes). This is partially a holdover from the pre-cloud era.
2. Increasingly, S3 is being used as the core persistence layer for infrastructure services (e.g. Snowflake, Neon, BigQuery, and WarpStream), rather than simply as a backup or tiered-storage layer.
3. This “S3 as a storage layer” architecture gives you so many advantages (especially as a startup) that it is likely to become a standard architecture for most cloud services moving forward.
4. There is a huge opportunity for startups using these ideas to disrupt large cloud infrastructure categories (especially databases and data systems).

Traditionally, cloud infrastructure services have primarily relied on local disk storage as their source-of-truth storage layer. If you take a look at the average cloud infrastructure service, you’re almost guaranteed to see a storage model based around storing data on local SSDs such as EBS volumes (e.g. see Elastic, Kafka, RDS, MongoDB, AWS Neptune).
Most of these services use local I/O to read and write data to local disk, coupling compute and storage in a way that creates huge issues around autoscaling and cost. A few, such as AWS Aurora, disaggregate by having compute workers make networked RPC calls to a separate storage service, which then reads and writes to local disk. But in either case, the service provider is writing a custom storage layer and dealing with all the complexities of distributed cloud storage, including durability, availability, fault tolerance, and the like. Often, a lot of this complexity then leaks to the user of the service.

Much of this storage architecture is a holdover from historical on-premise deployments, where infrastructure was static and pre-provisioned and customers were not subject to the pricing model of the cloud. Yet the cloud has not only changed these dynamics but also offers a new storage primitive that is effectively infinitely scalable, available, and elastic: BLOB stores. I am now seeing many infrastructure services build around these cloud BLOB stores as their durable storage backend (not just as a backup layer). This “S3 as a storage layer” architecture gives you so much for free as an infrastructure service — separation of storage and compute, time travel, fault tolerance, effectively infinite read concurrency, fast recovery, and a better developer experience for your users — that I think it will become the default architecture for a large percentage of cloud infrastructure services over the next decade. So, let’s explore what this architecture looks like, its benefits, and some of the early examples of products built with this architecture in mind.

> “I mentioned in a talk in 2020 about building a cloud-native database. There’s a point: how well S3 could be leveraged would be key. I think this point is still valid today.” — Building a Database in the 2020s

THE S3 AS A PERSISTENCE LAYER ARCHITECTURE

The original post illustrates the architecture I am describing with a simplified diagram. The core idea is fairly simple: S3 is used as the primary storage of the application, rather than local disk. There is then a stateless compute layer, which often includes local caching. Sometimes there is a “memory layer” that acts as a sort of hot data layer on top of the BLOB store (though, importantly, it is not the source-of-truth persistence layer). Typically, there is also a disaggregated control plane, which both manages secondary metadata storage and controls other jobs (e.g. background processing of the BLOB files). Often the data and compute planes reside in a customer’s cloud, simplifying the deployment of a system like this, while the control plane resides in the vendor’s cloud. You can see reference implementations of this from Neon (slide 12), Snowflake (page 220), Warpstream, Dremio, and Datadog. “Building a Database on S3” is also a canonical read here.

BENEFITS OF S3 AS A BACKEND

SEPARATION OF STORAGE AND COMPUTE

The first and most foundational advantage of this architecture is that it creates true separation of storage and compute, allowing for efficient and simple autoscaling. If you need to scale reads, you can just spin up new compute workers. Since they are stateless, this takes almost zero time and requires no copying, repartitioning, or rebalancing of data across workers. This means seconds-scale autoscaling. If you need to scale writes, you don’t need to wait for repartitioning or reshuffling of data across disks in order to properly balance load.
Failure recovery is easy because there is no need to rehydrate data when a compute worker goes down. The size of your compute layer can scale purely as a function of incoming traffic, independent of the amount of data being stored. This means compute can scale to zero, you pay for exactly the compute and storage you are using (versus one or the other always being over-provisioned in a disk architecture), and you never need to think about things like upscaling a cluster that is about to run out of disk space. Coordination requirements are also massively reduced in the worker pool — you don’t need special “leader” nodes responsible for coordination or consensus, because the compute layer is stateless. This ties into a broader point: this architecture lets you offload a lot of distributed-system and storage concerns to your cloud vendor.

> “On the cloud, computing is much more expensive than storage, and if computing and storage are tied, there is no way to take advantage of the price of storage, plus for some specific requests, the demand for computing is likely to be completely unequal to the physical resources of the storage nodes (think heavy OLAP requests for reshuffle and distributed aggregation).” — PingCAP CEO

OFFLOAD DISTRIBUTED-SYSTEM & STORAGE CONCERNS

Large cloud vendors like Amazon have spent billions of dollars making their BLOB stores effectively infinitely available, infinitely durable, and infinitely elastic. Using them as a persistent storage layer means you get all of this for free. This reduces the time and effort needed to solve a large class of problems traditionally central to infrastructure products, such as quorum and coordination (e.g. ZooKeeper, Raft) as well as storage logic (e.g. replication across availability zones, file management), because Amazon has already solved them for you (likely better than you would have). Note that this architecture does not fully obviate the need to consider these things — e.g. Neon still implemented Paxos, since they buffer writes to S3. Cloud object stores also offer a lot of rich storage “features.” For example, since BLOB stores use an immutable file structure where changes are simply appended as new files, Neon was able to offer branching via a copy-on-write architecture, as well as “time travel” queries, almost out of the box.

BUSINESS MODEL & COST

The S3-as-a-persistence-layer architecture also creates profound advantages from a cost perspective. This takes shape in a few key ways. The first is related to the decoupling of storage and compute — since you are no longer over-provisioned for one or the other, you will by definition pay less, all else being equal. The second, more nuanced point is that this architecture is much better suited to the business model of the cloud. Cloud vendor pricing charges a huge premium on certain actions (such as data copies across availability zones) over others (such as reading and writing to S3), in a way that imposes immense rent on the local-disk storage architecture (e.g. see Warpstream’s blog). When you use S3 as the “networking” layer that replicates across availability zones, you are in some ways arbitraging the pricing model of the cloud vendors. Third, cloud BLOB storage is exceptionally cheap (though there is a caveat here, which I return to later: you need to be careful about how you manage this architecture for lower-latency or high-throughput systems that require tons of reads and writes).
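To make the pattern concrete, here is a minimal sketch of the “S3 as a storage layer” idea, my own illustration under stated assumptions rather than any vendor’s implementation. A stateless worker persists batches of records as immutable, timestamp-keyed segment objects in S3; because segments are never mutated, a “time travel” read is just a replay of segments up to a cutoff. The bucket name and key scheme are hypothetical.

```python
# Minimal sketch of "S3 as a storage layer": stateless workers, with
# immutable segment files in S3 as the source of truth. Illustrative only.
import json
import time

import boto3

s3 = boto3.client("s3")
BUCKET = "example-db-segments"  # hypothetical bucket name

def append_segment(records: list[dict]) -> str:
    """Persist a batch of records as a new immutable segment object.
    Timestamp-ordered keys make segments replayable in write order."""
    key = f"segments/{time.time_ns():020d}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records))
    return key

def read_state(as_of_ns: int | None = None) -> list[dict]:
    """Rebuild state by replaying segments. Any worker can do this, which
    is what makes compute stateless; as_of_ns gives a 'time travel' read."""
    records: list[dict] = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix="segments/"):
        for obj in page.get("Contents", []):
            ts = int(obj["Key"].rsplit("/", 1)[1].removesuffix(".json"))
            if as_of_ns is not None and ts > as_of_ns:
                continue
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            records.extend(json.loads(body))
    return records
```

A real system would layer caching, compaction, and a metadata store for segment discovery on top of this; those are exactly the concerns covered in the caveats section below.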
DEPLOYMENT

Another elegant benefit of this architecture is that, as a managed-service vendor, it solves a lot of deployment issues out of the box. In particular, using S3 as a storage layer makes it very easy to have your data and compute planes run in your customer’s cloud (on top of their S3). Because the data is stored in the customer’s own S3 buckets rather than by you, you immediately solve a large number of data-security issues a customer might raise. Even better, you can still keep your control plane and metadata plane in your own cloud if you would like. An analogous dynamic can be seen in many of the software vendors of the past few years who use Snowflake as a backend, such as Panther and Eppo. It is much easier for such vendors to deploy to larger, more security-conscious customers as a result of this architecture.

DEVELOPER EXPERIENCE

The last thing worth calling out is that using S3 as a backend can greatly improve the developer experience of a product. Local-disk storage architectures tend to create a lot of complexity by requiring the developer to reason about a stateful storage service. Managed services can partially hide this, but the abstraction tends to leak. In general, a system that offloads all storage and distributed-system concerns to S3, and that has a stateless pool of compute workers, requires far fewer abstractions and a far smaller API surface for a developer to reason through. Products architected in this way tend to be far simpler as a result — Snowflake being a fantastic example versus Redshift.

RECENT EXAMPLES OF THIS IN ACTION

1. Neon is a serverless Postgres offering that separates compute and storage by using S3 as its persistence layer.
2. Warpstream is a Kafka-compatible streaming service that uses S3 as its backend, rather than local-disk log storage.
3. LanceDB is a new vector database vendor that uses a custom storage format (Lance) and a disk-based approximate-nearest-neighbors algorithm, allowing for a serverless vector DB offering that runs on top of S3.
4. Motherduck uses DuckDB as an in-memory query engine that can run on top of S3 as the storage layer.
5. Husky is an internal log-storage engine used at Datadog that runs on S3. KalDB is a similar library out of Slack.
6. Basically all modern cloud data warehouses and lakehouses use this architecture, including Snowflake, BigQuery, and Databricks.
7. Serverless query engines such as Dremio and Bauplan.

CAVEATS & CHALLENGES

Importantly, the goal of this article is not to say that these benefits cannot be achieved by building an infrastructure service with a custom storage layer. For example, you can certainly achieve separation of storage and compute without building on S3. Rather, I think there are two key takeaways: 1. Building on top of BLOB stores as a backend gives you all of these things for free (mostly — see the challenges below). This gives you much higher velocity as a startup and, as a result, opens up a new class of startup ideas that would otherwise have required insane amounts of money and time just to build the initial service (e.g. see how fast Neon has come to market with a serverless Postgres offering). 2. It is hard to compete with the durability, availability, and scalability of BLOB stores, so unless you have a very good reason to design your storage system differently, doing so is likely a suboptimal tradeoff. Of course, this architecture is not a panacea.
Indeed, there is a good reason why all the initial adopters of this architecture (Snowflake, BigQuery, Procella) were analytics-oriented, more “offline” systems — S3 is not optimized for high IOPS and, if used naively, is very expensive to read from and write to constantly on the scale of seconds. This is part of why it is so interesting to now see very operational product categories such as event streaming (Warpstream) and OLTP databases (Neon) adopt this architecture. Making such product categories work requires additional effort, particularly in the following areas:

CACHING AND MEMORY

Typically, a sophisticated caching or “hot storage” design is required to make an architecture like this work well. For example, Snowflake discusses caching heavily in their original paper, and Neon’s PageServer layer acts as a hot-storage/cache layer. See also this CMU presentation by Neon. All of these designs leverage an in-memory cache, a local-disk “cache” (essentially a higher-tier temporary store that is not treated as a durable source of truth), or both, as a way to offset these issues. Sometimes this is coupled with the compute layer (e.g. Snowflake compute VMs have an in-memory cache). Other times it is an independent layer (e.g. Neon’s PageServer), distinct from the compute workers.

READ/WRITE STRATEGY

You can’t naively map the read/write strategy you would use in a local-disk design onto an S3-backed design. The volume of reads and writes would lead to insane costs or create an I/O bottleneck in processing. As such, careful consideration is required around how often, and when, you access S3 under this architecture. For example: do you bundle or batch requests, and in which situations? Assuming you have a caching or hot-storage layer, how do you maintain cache coherence, and when do you go to the BLOB store versus not? How do you minimize the number of times you need to query S3 while maintaining sufficient freshness or consistency guarantees? Often this is about leaning into S3’s strengths (e.g. pseudo-infinite parallelism) while mitigating its weaknesses (relatively high query latency, etc.).

STORAGE LAYOUT

How storage is laid out and organized within S3 also often requires a dramatic rethinking relative to what was optimal with a local-disk architecture. For example, you may not want to partition files in the same way, or you may not be able to make the same assumptions about sequential disk access. Warpstream provides an interesting example of this — completely changing the way topics and partitions are stored relative to what Kafka has traditionally done on disk, in a way that removes a lot of the cost and latency barriers S3 introduces.

METADATA MANAGEMENT & OFFLINE PROCESSING

Proper metadata management is critical to making architectures like this work. Key ways this takes shape include: 1. Optimizing how S3 is scanned. 2. Guiding offline processing of the data in order to continuously optimize its layout for the online system to perform well (e.g. compaction, file restructuring). 3. Optimizing when data is queried from S3 versus secondary sources (e.g. a cache on a compute worker). When to store metadata in S3 or in a third-party metadata storage layer, and whether to cache metadata, are also important questions to consider.
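The batching and caching points above are easiest to see in code. Below is a minimal sketch (my own illustration, not any product’s design) of a write buffer that coalesces many small writes into a single S3 PUT, plus a small read-through LRU cache so hot segments are fetched from S3 only once. The bucket name, batch thresholds, and cache size are hypothetical.

```python
# Minimal sketch of two S3-cost mitigations discussed above: batching many
# small writes into one PUT, and a read-through LRU cache for hot objects.
import json
import time
from collections import OrderedDict

import boto3

s3 = boto3.client("s3")
BUCKET = "example-db-segments"  # hypothetical

class BatchingWriter:
    """Accumulate records and flush them as one object once the batch is
    big or old enough: thousands of records can cost a single PUT request."""

    def __init__(self, max_records: int = 1000, max_age_s: float = 0.5):
        self.buf: list[dict] = []
        self.opened = time.monotonic()
        self.max_records, self.max_age_s = max_records, max_age_s

    def write(self, record: dict) -> None:
        self.buf.append(record)
        too_big = len(self.buf) >= self.max_records
        too_old = time.monotonic() - self.opened >= self.max_age_s
        if too_big or too_old:
            self.flush()

    def flush(self) -> None:
        if not self.buf:
            return
        key = f"segments/{time.time_ns():020d}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(self.buf))
        self.buf, self.opened = [], time.monotonic()

class ReadThroughCache:
    """LRU cache in front of S3 GETs. Immutable segments never change, so
    entries never go stale, sidestepping most cache-coherence questions."""

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.entries: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str) -> bytes:
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        self.entries[key] = body
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return body
```

Note the tradeoff encoded in max_age_s: a write is not durable in S3 until its batch flushes, so a real system must either accept that window or first buffer writes durably elsewhere. This is exactly why, as noted above, Neon still implemented Paxos for its write path.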
COST

I touched on cost above, but it is worth calling out directly, since a lot of the preceding points relate closely to it. While S3 is cheap as a storage layer, and using S3 as a “networking” layer for availability-zone replication is very cheap, naively making thousands of read/write API calls to S3 will create a huge cost burden. As such, this architecture is not inherently more cost-efficient unless you think carefully about how to implement it.

THE STARTUP OPPORTUNITY

What I find particularly interesting is that, in spite of the immense benefits of this architectural approach, it still represents a relatively small portion of cloud databases and data systems. The products I have mentioned thus far, plus a few others — Snowflake, Databricks, BigQuery, Procella, Warpstream, LanceDB, Neon, Motherduck, Husky, Bauplan, Quickwit, Earthmover, and Dremio — are the main products I am aware of that fit this architectural paradigm. There are so many huge infrastructure categories where these ideas could allow a disruptive new entrant to emerge — search, graph databases, log analysis, time-series databases, OLAP (e.g. Clickhouse, Druid), etc. Doing many of these right will require thoughtful consideration of the drawbacks of S3. But, if done correctly, you often have the opportunity to be the first truly “serverless” offering in the category.

As the composable data stack continues to flourish, and open formats such as Iceberg/Delta Lake (table formats), Parquet/Lance (file formats), and Arrow (memory format) continue to improve, it is only going to get easier to design systems this way. As this pattern becomes more commonplace, there will likely also be interesting second-order effects. For example, if most infrastructure becomes a query layer on S3, how will the role of ETL change? It will be a lot less important to move or replicate data between N different specialized storage systems (e.g. Elastic, Druid, etc.), but it may become more important to transform across data formats within S3 to optimize for different workload characteristics (e.g. Parquet to Lance). Building on object storage also drastically lowers the bar for building a new data system, which should allow for the rise of more “vertical” infrastructure startups that differentiate more on developer experience than on pure performance. Neon is a really good example of this.

I am deeply interested in investing in companies leveraging this architectural pattern of S3 as a backend. If you are working on something in this space, I would be exceptionally interested in talking to you. Shoot me a note at davis (at) innovationendeavors.com. Thanks to Chris Riccomini, Ciro Greco, Jacopo Tagliabue, and Chang She for feedback on this. View on Medium

MEDICINE REIMAGINED: PHILIP JENG, CO-FOUNDER AND COO, THINK BIOSCIENCE Innovation Endeavors · Published in Innovation Endeavors · 5 min read · Oct 5

Innovation Endeavors was founded on the thesis of the Super Evolution: that a proliferation of data, computational capacity, and advanced engineering would converge and translate into significant, fast changes for the world. We’ve seen this firsthand. But it’s people who make this era of innovation possible: changemakers working to revolutionize computing infrastructure, engineering biology, climate, intelligent software, and the physical economy.
This interview series highlights the stories of the standout founders and entrepreneurs bringing the Super Evolution to life, giving you a firsthand look at the lessons they’ve learned along the way and how they hope to change our world. We hope their candid insights are helpful for anyone tackling meaningful problems.

What initially attracted you to entrepreneurship?

As a third-culture kid, I’ve always lived at the intersection of different worlds — a sometimes challenging but always exhilarating experience. Looking back, I realize I’ve always chased that feeling, and it led me to the entrepreneurial journey I’m on today. I chose to major in bioengineering and was often described as a “jack of all trades” because I enjoyed every subject of science. On top of that, I always knew I wanted to build a career that combines science and business, despite not knowing exactly what that meant. After finishing college, I was fortunate to start my career in a rotational program at Genentech and saw how all the different fields of science come together to make a drug. But I also saw that science alone wasn’t enough to create a drug and run an organization to support it. Eager to learn about the industry’s business side, I became a management consultant, where I had to learn a way of thinking quite different from my training as an engineer. I then pursued an MBA at Harvard Business School, where I was exposed to venture capital, startups, and the entrepreneurial spirit of Kendall Square. After HBS, to the surprise of my peers, I created a role at a biotech startup in China because I was obsessed with learning about the biotech ecosystem there. There was plenty of culture shock, to say the least. With each of these experiences, I got to see a new facet of building a company and inched closer to putting it all together. Whether cultural or technical, I’ve always enjoyed bridging very different ways of thinking, and I realized entrepreneurship was the perfect way to operate at the nexus of many different fields.

What inspired you to co-found Think Bioscience?

To be honest, serendipity. During COVID, I was introduced to Jerome Fox, a professor at CU Boulder. Remarkably, it was through six degrees of separation. At the time, I was part of the Harvard Business School Blavatnik Fellowship, a program that supports alumni in life science entrepreneurship. In his academic lab, Jerome was working on an idea he had first conceived during his post-doc: an elegant process of letting nature solve our toughest drug design challenges. After all, nature has evolved powerful machinery to access diverse, bioactive molecules to solve its own ecological challenges. The science was innovative, to say the least. From my internships in VC, I saw how important it was to have the right team to give innovative science its best shot as a life science business. Our backgrounds complemented each other, and our work styles matched well — it was the perfect opportunity.

What’s the big problem you’re hoping to solve at Think Bioscience?

We’re working to develop small molecule drugs for challenging targets such as protein tyrosine phosphatases, which have no FDA-approved drugs. In short, we solve two major small molecule discovery challenges. First, accessing novel chemical matter: historical libraries are biased toward historical drug targets and their associated active sites, and they cover only a small fraction of the bioactive chemical space.
And while natural products constitute many FDA-approved small molecules, they were designed by nature for ecological purposes and can pose synthesis challenges. Second, discovering functional binding sites: proteins are complex. They operate in signaling networks and exhibit dynamic motion. Despite significant advances in computational approaches, identifying novel functional binding sites is incredibly challenging. Think Bioscience’s platform addresses both challenges by leveraging nature’s machinery. We encode a therapeutic objective into a microbe and instruct it to “find a way to inhibit this target or die.” The target is expressed in a cellular environment where more complex binding behaviors can be captured (not just simulated). Instead of telling the platform how to solve the problem, we simply define the desired functional activity. In parallel, we endow the population of microbes with unique biosynthetic pathways that encode natural product scaffolds and let natural selection run its course. Those that survive reveal both a bioactive starting point and a functional pocket from which to begin new drug discovery campaigns.

What excites you the most about biotechnology?

The combination of intellectual stimulation and greater purpose. It’s a complex and captivating industry that blends cutting-edge science, multidisciplinary teams, and careful capital allocation. There’s always so much to learn. It’s exciting to work in an industry where the ultimate goal is to help patients and where scientific advancements become the foundation for future work to be built upon. Working daily with people who share these values is icing on the cake.

The biotech space is rapidly accelerating. What breakthroughs do you think will be developed in the next ten years?

Not a scientific breakthrough per se, but I’m excited about new company-formation models, particularly in new geographic hubs. Using NSF funding as a proxy for scientific innovation, the substrate of innovative science is spread relatively democratically across the country. Yet funding is vastly disproportionately allocated to the coasts, both for understandable historical reasons and out of habitual pattern-following. To me, there is an immense opportunity to catalyze these regions with the right teams and capital. I’ve seen this first-hand in the Colorado biotech community. Array Biopharma (acquired by Pfizer in 2019) has created fertile ground for the next generation of startups, and Think Bioscience wouldn’t be where we are today without the expertise of ex-Array veterans and the tight-knit biotech community here. From my vantage point, an important rate-limiting step in biotech is coordinating the complex orchestra of science, people, processes, and capital. In the next ten years, I believe more innovation can be industrialized through continued advancements in enabling technologies, flexible work models, founder training, and more evenly distributed investment across regions.

Do you have any takeaways from the fundraising process?

If you had asked me before Think Bioscience, I would have described fundraising as a discrete evaluation of a polished deck after a long period of being heads-down. Now, I see it as a more collaborative and continuous process. Conversations with investors have helped us refine our company strategy based on their portfolio companies’ experiences, and have helped us network with other investors and pharma companies. Perhaps even more surprising is how human the process is.
There are a lot of calls and messages sharing ideas, questions, and feedback — which works best on a foundation of trust.

Lastly, what piece of advice do you have for entrepreneurs looking to break into engineering biology?

Build a support system with other founders going through the same journey. While there is a lot of written material online, there are still countless details about getting off the ground and operating day to day. I’m incredibly grateful to all the folks who’ve been a great sounding board (and, at times, a therapy session). View on Medium

BRIDGING TECHNOLOGY, DEFENSE, AND PROGRESS: JOSH BERGLUND JOINS INNOVATION ENDEAVORS Dror Berman · Published in Innovation Endeavors · 3 min read · Sep 22

In the ever-evolving landscape of technology, geopolitics, and economic shifts, we at Innovation Endeavors are always on the lookout for leaders who can guide our vision into the future. Today, we are thrilled to announce a new addition to our team, someone who has charted new paths in leveraging technology to bolster American progress: Josh Berglund, our newest Venture Partner.

THE GLOBAL CANVAS

As we stand at a crossroads of history, the world is witnessing unparalleled geopolitical tensions. The Russia-Ukraine conflict and escalating US-China tensions have led to a renewed “Space Race,” in which nations vying for the forefront of cutting-edge technologies like AI, quantum computing, and semiconductors are just the tip of the iceberg. Concurrently, the urge for nations, especially the US and several European countries, to be self-reliant is more prominent than ever, and the need to develop infrastructure that promotes self-sufficiency in crucial areas like energy, manufacturing, and supply chains is paramount. This pivot toward independence has been the catalyst for government-led investments, evident in initiatives like the Inflation Reduction Act and the CHIPS Act. As a result, a growing number of startups are looking to partner with the Department of Defense alongside other enterprise customers.

WELCOMING JOSH

In such a transformative era, who better to navigate these waters than Josh Berglund? Josh spent seven years as a non-commissioned officer in Army special operations, with deployments to combat zones in Syria, Iraq, South America, and Eastern Europe. His role in the military included bridging connections between In-Q-Tel, DIU, DARPA, various national labs, and the combat development directorate of his unit. Before donning the uniform, he thrived in the business realm as both an entrepreneur and an investor, playing pivotal roles in the creation and success of some of the most recognized companies of the last two decades. Josh’s multifaceted expertise spanning government, technology, and finance makes him an invaluable ally for companies setting their sights on government and defense contracts. At Innovation Endeavors, we’re not just about backing companies; we aim to drive generational change and shape global policy through the transformative solutions we support. Josh’s leadership and the trust he has garnered over the years will undoubtedly be an incredible asset to our growing team.
SHARED BONDS

One of the things that strengthens our synergy with Josh is our shared experience beyond the business world. Both of us have served in the Special Forces — Josh with the U.S. Army and I with the Israel Defense Forces. This shared history instills in us an intrinsic understanding of the gravity behind the missions of our companies. Serving in these elite units has not only honed our sense of duty and purpose but also ingrained in us the ethos of navigating seemingly insurmountable challenges. It informs how we approach the challenges of the business realm, bringing a perspective that blends military strategy with entrepreneurial tenacity. Startups, in many ways, mirror the relentless, ever-changing terrain of Special Forces missions. CEOs face a gauntlet of challenges: navigating unpredictable landscapes, making tough decisions on the fly, and continually adapting to reach their objectives. Our combined experiences have cemented our commitment to guiding and supporting these CEOs, leveraging our shared backgrounds to help them build truly meaningful and impactful companies.

ON WORKING TOGETHER

Josh will be a linchpin of our team as he mentors our portfolio companies, shapes our investment strategy in government and defense, and expands our network in these sectors, setting a new trajectory for our firm. He is ready to strengthen partnerships with companies that share our ambition of bolstering American resilience and crafting a future that is not just technologically advanced but also safe and sustainable. As we begin this exciting journey with Josh, we extend a warm and heartfelt welcome to him. Together, we’ll make significant progress in a world rife with challenges and opportunities. Welcome aboard, Josh! View on Medium

STAY UP TO DATE Subscribe to our newsletter for insights, news, and more.

NAVIGATE * Portfolio * Team * About * Curiosity Camp * Insights * Contact FOLLOW US