20230711






I began my scientific career at a multidisciplinary research institute, Starlab, located deep in the serene and secluded forests outside Brussels, Belgium. The lab’s principal base of operations was housed in a historic landmark — an imposing 19th-century manor, remarkable in both scale and magnificence. In a previous incarnation, the palatial grounds served as the official embassy of the First Republic of Czechoslovakia. Its nearest neighbor, the legendary Pasteur Institute, was one of but a handful of highly-secured Biosafety Level 4 labs in the world. 

Cofounded by MIT Media Lab founder Nicholas Negroponte and serial entrepreneur Walter de Brouwer, and established in partnership with MIT, Oxford, and Ghent University, Starlab was created as a “Noah's Ark” to bring together the world's most brilliant and creative scientists to work on far-ranging multidisciplinary projects that hold the potential to have a profound and positive impact on future generations. 

Starlab was born as an incubator for long-term and basic research in the spirit of Bell Labs, the MIT Media Lab, Xerox PARC, and Interval Research. Its research mantras were “Deep Future” and “A place where one hundred years means nothing.” Approximately 130 scientists of thirty-seven different nationalities — each an established leader in their respective research field — lived and worked at the lab. 
A second base of operations, Starlab DF-II (Deep Future II), was established at the Royal Observatory in Spain, on a mountaintop perch overlooking the city of Barcelona. With a more tightly-focused mission scope of space-borne and neuroscience research, Starlab DF-II has continued to innovate and grow to the present day. 

Discovery Channel Special
Onsite research spanned artificial intelligence, biophysics, consciousness, emotics, intelligent clothing, materials science, protein folding, neuroscience, new media, nanoelectronics, quantum computation, quantum information, robotics, stem cell research, theoretical physics (including the possibility of time travel), transarchitecture, and wearable computing. 

Our custom-built supercomputer, the CAM-Brain Machine, was supported in part by a one million euro grant from the European Union. As powerful as 10,000 Pentium II PCs, the machine harnessed Xilinx field-programmable gate array (FPGA) evolvable hardware to grow a massively-parallel artificial neural network of seventy-five million neurons, instantiated directly in silico and shaped by evolutionary genetic algorithms. On each clock tick it updated hundreds of millions of cellular automaton cells in parallel, amounting to billions of cell updates per second. 
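
To make the mechanism concrete, below is a minimal, purely illustrative Python sketch of the core idea: a genetic algorithm evolving the update rule of a tiny binary cellular automaton toward a target activity pattern. This is not the CAM-Brain design itself; the grid size, rule encoding, fitness target, and GA parameters are all hypothetical stand-ins chosen for readability.

```python
import random

# Illustrative sketch only: evolve a cellular automaton (CA) update rule with
# a genetic algorithm (GA) so that a small binary grid settles into a target
# pattern. All sizes and parameters here are hypothetical.

GRID = 16            # grid is GRID x GRID cells (CAM-Brain worked at vastly larger scale)
STEPS = 8            # synchronous CA updates per fitness evaluation
POP, GENS = 30, 50   # GA population size and number of generations

# Target activity pattern: a simple checkerboard.
TARGET = [[(x + y) % 2 for x in range(GRID)] for y in range(GRID)]

def step(grid, rule):
    """One synchronous CA update. Each cell's next state is a table lookup
    indexed by its own state (0/1) and the sum of its four neighbors (0-4)."""
    new = [[0] * GRID for _ in range(GRID)]
    for y in range(GRID):
        for x in range(GRID):
            s = grid[y][x]
            n = (grid[(y - 1) % GRID][x] + grid[(y + 1) % GRID][x] +
                 grid[y][(x - 1) % GRID] + grid[y][(x + 1) % GRID])
            new[y][x] = rule[s * 5 + n]  # 2 states x 5 neighbor sums = 10-entry rule table
    return new

def fitness(rule):
    """Run the CA from a fixed seed pattern and count cells matching the target."""
    grid = [[(x * y) % 2 for x in range(GRID)] for y in range(GRID)]
    for _ in range(STEPS):
        grid = step(grid, rule)
    return sum(grid[y][x] == TARGET[y][x] for y in range(GRID) for x in range(GRID))

# Standard GA loop: rank selection, uniform crossover, single-bit mutation.
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
        if random.random() < 0.3:                            # occasional mutation
            i = random.randrange(len(child))
            child[i] ^= 1
        children.append(child)
    pop = survivors + children

pop.sort(key=fitness, reverse=True)
print("best fitness:", fitness(pop[0]), "out of", GRID * GRID)
```

The real machine performed the inner update loop in FPGA hardware rather than software, advancing millions of cells per clock tick; the sketch above only illustrates, in miniature, the evolutionary search that shaped those circuits.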

When our laboratory came up short on research grants, I made my appeal directly to the President himself when fate brought us together at the same time and place on his first trip overseas after the election. The Commander in Chief impressed me with both his immediate familiarity with our work and his enthusiasm in response to my earnest request for $1M from a budget allocated to national-security-priority scientific research through a grant program newly created by Clinton in his last act in office: the 2001 National Nanotechnology Initiative.

For my contributions to the program, the US Government selected me at Salishan as one of three graduate students most likely to impact the future of the field, an honor shared with John Carmack and Bill Butera. I was sponsored to attend conferences and senior administrator briefings at Fort Meade, the National Security Agency headquarters outside Washington, DC; attended the World Technology Summit in London; served as an invited delegate to the French Sénat, providing testimony on the future of technology and how it will transform our lives over the coming decades; and more. 

Following three days of impassioned debate with senior politicians, senators, and international diplomats at the French Sénat hearing on artificial intelligence in Paris, the world's first senate hearing on the topic, Starlab's principal investigator and AI program lead predicted I'd one day be elected President myself. Far sooner than that, however, he drew my attention to the threat of assassination from technology Luddites who stand in opposition to the rapid pace of progress in artificial intelligence.

My living arrangements at the lab consisted of an expansive three-bedroom master suite and fully-stocked library, typically reserved for visiting prime ministers and senior diplomats, shared with none other than the project's principal investigator, Hugo de Garis. de Garis came up with the idea of obtaining a life-size replica of Fat Man — the solid-plutonium-core, 21-kiloton, 10,300-pound nuclear bomb dropped on Nagasaki — and mounting it precariously from the vaulted ceiling of my apartment, with the bomb hanging directly over my bed.

The sheer audacity of the proposal made an unforgettable impression and seemed more than a little bit crazy at the time. Intended as a dramatic and powerful reminder of “the weight of my responsibility to the future of humanity,” the message was one to remember. de Garis had just finished filming a Discovery Channel documentary on the future of artificial intelligence that featured the potential for future conflict between humanity and artificial intelligence. The Volkswagen Beetle-sized replica of the bomb had served as a prop for the documentary; he purchased it outright from the film studio and arranged expedited shipping and delivery direct to Starlab's headquarters in Brussels.

With the contemporary advent of generative artificial intelligence (GAI) we see all around us today — and with artificial general intelligence (AGI) and artificial superintelligence (ASI) seemingly just around the corner — one could say that de Garis, though radical and exceedingly unconventional in his approach, was just a few decades ahead of his time. OpenAI CEO Sam Altman, xAI founder Elon Musk, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have each claimed that AI poses an extinction risk on par with nuclear war. The work and life experiences and the lessons learned from living at Starlab in such a unique and remarkable environment are priceless, growing ever more timely and relevant with each passing day. 

A recent article in Sifted, a Financial Times (FT) spinoff magazine, highlights my background going back to Starlab, AI, and time travel research; travels across East Asia to create national quantum roadmaps for US national research funding and IC agency directors; foundational research fellowships in quantum mechanics with Nobel laureate Anton Zeilinger’s group in Austria and across Europe; manned spaceflight training at NASA; and field expeditions employing state-of-the-art sensors, leading multidisciplinary teams of scientists, researchers, special forces domain experts, and engineers to field-test next-generation technologies in austere environments — all under the shared overarching objective of contributing to large-scale global initiatives that hold the potential to have a profound and positive impact on the future of humanity: our children, our children’s children, and the generations yet to come. 






Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light — not our darkness — that most frightens us. We ask ourselves, ‘Who am I to be brilliant, gorgeous, talented, fabulous?’ Actually, who are you not to be? You are a child of God. Your playing small does not serve the world. There is nothing enlightened about shrinking so that other people won’t feel insecure around you. We were born to make manifest the glory of God that is within us. It’s not just in some of us; it’s in everyone — and as we let our own light shine, we unconsciously give other people permission to do the same. As we are liberated from our own fear, our presence automatically liberates others. 


 Marianne Williamson




20230710

Two hundred years ago, if you suggested people would comfortably travel in flying machines—reaching any destination in the world in a few hours’ time—instantly access the world's cumulative knowledge by speaking to something the size of a deck of cards, or travel to the Moon, or Mars, you'd be labeled a madman. The future is bound only by our imagination. 

Someday very soon we may look back on the world today in much the same way as we now regard those who lived in the time of Galileo, when everyone held with such great certainty and self-assuredness that the Earth was the center of the universe. The time is now. A profound shift in consciousness is long overdue. The universe is teeming with life. We're all part of the same human family. 

This is potentially the single most momentous moment in our known history—not just for us as a nation, or as humanity, but as a planet. The technological leaps that could come from developing open contact with nonhuman intelligence are almost beyond our comprehension. That is why this is such a monumental moment for us as a collective whole. It could literally change every single one of the eight billion human lives on this planet. 


We stand on the shores of a vast cosmic ocean, with untold continents of possibility to explore. As we continue our collective journey, scaling the cosmic ladder of evolution and expanding our reach outwards in the transition to a multiplanetary species, Earth will soon be a destination, not just a point of origin. 

20230708

 
Time Magazine, June 2023: How Artificial Intelligence Could Save the Day. The threat of extinction and how AI can help protect biodiversity in Nature.

Allianz Global Investors 

The Conversation: If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen. “As a professor of AI, I am also in favor of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.” 

CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Dozens of AI industry leaders, academics and even some celebrities called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.

NYT: AI Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons. 

BBC: Experts warn of artificial intelligence risk of extinction. Artificial intelligence could lead to the extinction of humanity, experts — including the heads of OpenAI and Google DeepMind — have warned.

PBS: Artificial intelligence raises risk of extinction, experts warn. Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind. 

NPR: Leading experts warn of a risk of extinction from AI. Experts issued a dire warning on Tuesday: artificial intelligence models could soon be smarter and more powerful than us, and it is time to impose limits to ensure they don't take control over humans or destroy the world. 

CBC: Artificial intelligence poses 'risk of extinction,' tech execs and experts warn. More than 350 industry leaders sign a letter equating potential AI risks with pandemics and nuclear war.

CBS: AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say. Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI. 

USA Today: AI poses risk of extinction, 350 tech leaders warn in open letter. CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.

CNBC: AI poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn. Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft, were among those who supported and signed the short statement. 

Wired: Runaway AI Is an Extinction Risk, Experts Warn. A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic. 

Forbes: Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. “The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems. “I used to think it was a long way off, but I now think it’s serious and fairly close.”

The Guardian: Risk of extinction by AI should be global priority, say experts. Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war.

The Associated Press: Artificial intelligence raises risk of extinction, experts say in new warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Al Jazeera: Does artificial intelligence pose the risk of human extinction? Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.

The Atlantic: We're Underestimating the Risk of Human Extinction. An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Sky News: AI poses a similar extinction risk to nuclear war and pandemics, say industry experts. The warning comes after the likes of Elon Musk and Prime Minister Rishi Sunak also sounded significant notes of caution about AI in recent months.

80,000 Hours: The Case for Reducing Existential Risk. Concerns of human extinction have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.

The Washington Post: AI poses ‘risk of extinction’ on par with nukes, tech leaders say. Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology.

TechCrunch: OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. In a Twitter thread accompanying the launch of the statement, CAIS director Dan Hendrycks expands on the aforementioned statement, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI — not simply risk of extinction.”