Alexandr Wang, founder of Scale AI, on dropping out of school to start a company, U.S.-China competition, and the dark forest

In the interview, Alexandr Wang, founder of Scale AI, shared his views on brain-computer integration, AI development, and U.S.-China competition. He argues that brain-computer fusion is the only way for humans to avoid being marginalized by AI, while also discussing the attendant risks and challenges.

Last week, foreign media reported that Meta is negotiating a strategic investment of about $14 billion in data infrastructure platform Scale AI, and is inviting its founder Alexandr Wang to join the company to lead its future superintelligence strategy.

If completed, the deal would be one of Meta's largest external investments ever, and would mark the social giant's attempt to regain dominance in the AI race through an outside bet.

As this deep partnership advanced, Wang also gave his first systematic account, in a three-hour interview on the Shawn Ryan Show, of his judgments on the evolution of AI, brain-computer fusion, great-power competition, and the future of civilization.

He noted that human biological evolution is slowing while AI is advancing at an exponential rate. To avoid being completely marginalized, he argues, the only feasible path is to connect the human brain directly to AI and the Internet, building a new system of intelligent collaboration. But with great potential come equally profound risks: he sees brain-computer interfaces as the start of a massive expansion of the attack surface of consciousness.

Once the brain becomes a platform, the cost of manipulating or attacking it will be lower than for any previous device. From advertising injection and memory tampering to emotional manipulation and belief reconstruction, technology is penetrating the final boundary of consciousness. Neuralink and related companies are moving from perceptual access toward volitional regulation, which means "cognitive integrity" itself will become something to be defended.

Even so, he remains committed to exploring the path. He believes brain-computer fusion will usher in an era of "to think is to retrieve," letting humans touch an encyclopedic structure of common knowledge for the first time. But after that, the world will no longer be as it is now, and human sovereignty will no longer belong to the body alone.

From MIT dropout to YC's "Hunger Games"

Wang traced his full path: taking leave from high school, interning at Quora, briefly attending MIT, and founding Scale AI at 19.


He admitted that he decided to take leave before finishing high school because he was already deeply interested in machine learning. Although "AI" was not yet a buzzword, teams were already using algorithms to optimize feed ranking on social platforms, thinking about how to increase user stickiness and engagement.

His ability to complete machine learning engineering tasks independently at sixteen or seventeen earned him recognition from peers, but he maintains that his strength is not talent so much as the intense concentration training he began very early.

He emphasized that the real differentiator is not IQ but whether you can go deep in one field for a long time and keep investing with everything you have. From mathematics and physics to computing and machine learning, he has always maintained a high-intensity learning rhythm.

He believes that "pushing too hard beats not pushing hard enough": sustained, high-density effort is the underlying logic of rapid growth. At MIT he took a number of AI-related courses, including demanding machine learning classes, and quickly excelled. It was at this stage that he noticed a key industry gap: the evolution of AI models was severely limited by the lack of high-quality training data.

A failed "refrigerator monitoring project" became a turning point: even a properly designed model, without massive, well-labeled data behind it, cannot reach a practical level.

This epiphany led directly to the decision to start Scale AI. He saw clearly that AI is not just an academic question but a reconstruction of the entire industrial structure, and that high-quality data would become one of its scarcest pieces of infrastructure. So he dropped out again, this time from MIT, and flew to San Francisco at 19 to throw himself into the company.

He immediately joined the Y Combinator accelerator, known as the "Hunger Games for entrepreneurs." There he competed with hundreds of startups, building a product and preparing for financing within months. He ultimately secured investment through Demo Day, laying the foundation for Scale AI's official launch. He never saw dropping out as an escape, but as a clear judgment about a gap in reality: having seen the problem, he had to solve it immediately. As he put it bluntly, "When you realize the door of the times is opening, you have to rush in immediately."

Scale's initial projects were not ambitious: mainly helping e-commerce platforms identify prohibited T-shirt designs and optimizing furniture search. But these "low-key" projects became the starting point for the team to train its product and polish its execution.

Three months later the company turned to autonomous driving, and growth took off. It established partnerships with first-tier players such as General Motors, Toyota, and Waymo, providing high-precision data processing. Autonomous driving places extreme demands on data quality, especially in multi-sensor fusion scenarios where the system must accurately decide "is that a car or a shadow" or "one person or two," so the team focused all its attention on the annotation quality these complex scenarios require.

He admitted that early survival depended largely on winning the support of high-quality investors. During YC he quickly met co-founders, assembled a team, and started from a San Francisco apartment, living a sleepless cycle of meeting clients by day, writing code at night, and tuning models in the early hours. With no funding or resources, they relied on execution and cold emails, even waiting at customers' doors just to catch a word.

That is how the first key customer was won. After that customer was acquired by an autonomous driving company, Scale became the data annotation provider for the new parent company and thereby entered a broader market. As demand for autonomous driving expanded, customers snowballed, driving rapid growth.

In the earliest days there was no clear division of labor; the founders had to do everything to survive. They built their own infrastructure and handled customer service, sales, and product validation themselves. He quickly learned organizational management and product cadence in the process, fusing technology, business, and team culture into one holistic perspective.

He also developed a clear view of leadership: it is not innate, but an ability "forced" out of you in a continuously high-pressure environment. Every day brought decisions about strategy, team, and operations; he had to keep experimenting, setting goals, and adjusting the rhythm, or the company would stall. YC's mentorship and founder community gave him valuable early feedback.

As the company began to scale, he attracted investment from established institutions such as Accel. More importantly, he recruited the first core engineers through relationships of trust, mostly peers he had met at MIT. He had no fame or track record then and could only build the team on mutual understanding. Some dropped out to join; others came after graduating; together they became the backbone of Scale's technical architecture.

He had no formal business training, learning instead through repeated iteration with customers. He wrote code while learning to sell, sending fundraising emails and trying to shape a business model. "No one tells you what to do, but you have to move forward every day."

The first version of the product was developed entirely by Wang himself. He posted the project to a startup launch platform and, unexpectedly, it spread on a small scale in the Twitter startup community, attracting initial market attention. He relied on friends to share it and build exposure: "In the early days, I really started by begging on my knees."

Initially the project was called Scale API simply because the desired domain name was not yet available. A year and a half later it was officially renamed Scale AI. Despite the company's extremely rough early form, the autonomous driving window quickly brought it into the view of tech giants. Waymo became an early customer and has since deployed driverless taxis in San Francisco, Los Angeles, Phoenix, and elsewhere, continuing to scale.

The company grew slowly at first: one to three people in the first year, about 15 in the second, 100 in the third, then explosive growth to about 1,100. Three years after its founding, the company began engaging with defense projects and eventually became a data partner of the U.S. Department of Defense.

The inflection point of the Sino-US AI game


Wang pointed out that China launched a national AI strategy as early as 2018, and he summarized the current U.S.-China competitive landscape along three dimensions:

  1. Energy: China is far ahead;
  2. Chips: China lags behind but is catching up;
  3. Data: China is significantly ahead.

At present, the only area where the United States clearly leads is chips. Nvidia remains the world's most powerful AI chip maker, and its ecosystem has built a near-monopoly over global computing power. But the lead is eroding: Huawei's chip performance now approaches Nvidia's previous generation, the gap is narrowing fast, and the generational divide is being leveled. He warned that if the trend continues, U.S. dominance in high-end chips could be further eroded within a few years.

At the same time, China faces serious population aging. Persistently low fertility and a rapidly growing elderly population will keep weakening its industrial and strategic vitality, and once such a trend sets in, the correction cycle is extremely long. From the perspective of "peak capability," this stage may be a critical window.

Second, China's manufacturing system has accumulated enormous advantages. It deploys more industrial robots than any other country, and its automation rate, equipment self-development capabilities, and upstream-downstream integration far exceed other countries'. Its current manufacturing capability is at a peak that is hard to replicate, making any hardware-dependent technological move more viable.

The chip question sits at the heart of the entire structural game. Roughly 95% of the world's high-end chip capacity is concentrated in Taiwan; any geopolitical disruption there would fundamentally affect AI's basic capabilities.

If chip capacity is transferred, blocked, or disrupted in the next few years, it will be difficult for the United States to maintain its AI lead. To ensure stability, he believes the United States must build a multi-level strategic advantage:

  • Maintain global leadership in AI
  • Establish substitution capabilities in key technology and energy chains
  • Build clear, visible economic checks and balances so that any risky behavior faces enormous cost

The prerequisite for any strategic stability is reducing misjudgment and sharpening cost perception. Once an opponent misjudges the risk as controllable or the return as assured, the conflict boundary will be tested constantly. Wang also acknowledged that China has built systemic capabilities that cannot be ignored in traditional industrial dimensions such as manufacturing, shipbuilding, and human resources. If the United States cannot strengthen its supply-chain resilience, technological execution, and policy efficiency, its traditional advantages will gradually erode.

TSMC has invested hundreds of billions of dollars in its Taiwan fabs, including vast amounts of customized equipment and complex process systems. Even if the U.S. rebuilt the same capacity, the hardware alone would cost hundreds of billions. The talent system is harder still: chip manufacturing is not just equipment but a systems engineering of process, experience, and collaboration. If the original teams cannot be moved, the United States would have to retrain from scratch, a time-consuming process.

Even TSMC's Arizona fab faces many obstacles: unstable power, complex approval processes, and a shortage of skilled local workers have seriously hurt the project's efficiency.

From TSMC's own perspective, it has no urgent motivation to move capacity wholesale. If the migration were completed, its strategic importance in the global geopolitical game would decline. In other words, keeping capacity at home is itself a security strategy.

Even where policy explicitly bans direct export of advanced chips to specific regions, some firms still arrange indirect procurement through "white glove" companies: shell firms registered in third countries, posing as unaffiliated parties, complete the orders and logistics transshipment, ultimately letting advanced-process chips flow into restricted markets. This leaves obvious enforcement loopholes in the global chip control regime.

The global competition over chips, algorithms, data, and computing infrastructure has gradually evolved into a structural game. He warned that some future "AI-scale catastrophic event," such as a grid attack or pathogen abuse, will force the world to redefine its AI governance framework.

Until then, the world must face the governance gaps AI poses as a general-purpose capability. AI should not be an attack tool; it should serve economic development, social governance, and human collaboration. In this all-round race the United States faces serious constraints of its own: power bottlenecks, regulatory redundancy, slow approvals, and infrastructure shortages are systematically slowing its AI buildout.

He stressed that this is a systems project with no room for obvious weaknesses. The most dangerous scenario is an opponent marshaling national strength behind an AI plan, building a generational advantage across multiple links, and ultimately producing a decisive capability gap. Although attention currently centers on China and the United States, he reminded listeners that a multipolar global structure is also rapidly taking shape.

Russia, for example, has notable integration capabilities in network infrastructure and technology; the Middle East has huge capital and energy resources; India, with its engineering culture and technical labor reserve, is expected to become a key player; and Europe, though a little late to start, is accelerating the construction of a unified data and computing strategy.

As for whether AI will eventually escape human control, he believes it remains within a controllable range, on one premise: critical systems cannot rely entirely on AI for automated decisions and must operate within a framework of human supervision. AI should not be power itself, but a controllable capability that empowers society and individuals rather than seizing power. That, he said, will be the core proposition of the next stage of global governance.

Brain-Computer Interface, Consciousness Engineering, and Immortality

Alexandr Wang said he has long had a strong interest in brain-computer interfaces, even tying the timing of having children to the technology's maturity. He noted that the human brain's neuroplasticity is highest in the first seven years of life, far exceeding any later stage. A brain-computer interface implanted in that window might let children acquire new abilities almost instinctively, integrating them deep into neural structures in a way adults can scarcely achieve. He therefore prefers to wait for the technology to mature before considering parenthood.

His approach to brain-computer interfaces is pragmatic, with AI at its core. In his view, AI evolves exponentially while human biological evolution has nearly stalled: natural selection takes millions of years, while AI pushes boundaries every year. If humans want to remain competitive, he argues, they must eventually build a mechanism connecting the brain to AI, integrating deeply with AI systems and even the entire Internet; this will be a necessary means of preserving human relevance and viability.

But the risks are just as clear. He warned that once a company can reach into the brain, it may inject advertising unnoticed, manipulate consumer impulses, and eventually shape volition itself. If that capability fell to a terrorist organization, hostile state, or hacker group, the consequences would be graver still: memory theft, thought interference, the destruction of one's sense of reality. Such a "takeover" is in essence a deep manipulation of cognition and behavior, a systemic threat to the very ontology of consciousness.

He mentioned that companies such as Neuralink treat "manipulation of consciousness" as the next stage of brain-computer interface development. Whether memory extraction, intention prediction, or thought input, the relevant research has reached preliminary consensus in academia and industry. He stressed that such technologies must be used with extreme care, or they may have structural consequences for human society.

Experiments in sensory control have also shown technical feasibility. He mentioned that scientists have injected signals into a monkey's visual cortex, making it "see" images that do not exist and complete tasks accordingly. This suggests that the brain's response to stimuli can be reconstructed or even fabricated, and that future pathways to touch, smell, and even emotional experience are already being revealed.

Despite the severity of the risks, he believes the path is worth exploring. With the brain connected to AI, humans could gain "encyclopedic" cognition, with information retrieval collapsing into "to think is to retrieve." In a sense this is a form of superintelligence beyond the limits of human neurophysiology. But he warned that the closer one gets to this ability, the larger the attack surface humans expose.

He did not shy away from abuse scenarios. Once brain-computer interfaces arrive, he pointed out, the information input mechanism will determine the version of "reality" each user inhabits. If the system is seeded with corrupted data, say, an altered version of a religious text, the user's understanding of the world can be forged and manipulated. He stressed in particular that control of AI data systems must be kept far from totalitarian hands; that is the bottom line of free will in the information age.

He went further: today's media influence operates at the level of shaping opinions, but once brain-computer interfaces are realized, influence moves to the level of shaping beliefs. It is no longer about what is seen, but what is believed. Propaganda would jump from today's level 1 or 2 to level 9 or 10, an unprecedented cognitive restructuring.

That is why he repeatedly stressed the absence of regulatory mechanisms. Humanity has not yet figured out how to regulate smartphones, yet must now confront systems that can reconstruct consciousness. AI, he said, will be a thousand times more powerful than the mobile Internet; the existing governance structure is simply unprepared, and if the lessons of the last technology wave are not learned, the consequences will be irreversible.

He himself would be willing to get a Neuralink implant, but only with two key protections in place: first, strong network defenses to keep the brain from being remotely invaded; second, assurance that the structure of consciousness will not be deeply altered after connection. Both conditions, he emphasized, are non-negotiable.

On consciousness uploading, he believes it is the real answer to "eternal life." Extending life and fighting aging, he pointed out, ultimately lead to the question of how to decouple consciousness from the body and migrate it to a more durable carrier. In his view, Neuralink is the first stage of that evolutionary path.

The future path has two directions: uploading consciousness into a virtual world with highly realistic feedback to gain a permanently online existence, or downloading consciousness into humanoid robots to persist in humanoid form in the physical world.

He also admitted his acceptance of the "simulated universe" theory is shifting. He was once skeptical of the hypothesis Elon Musk advanced, but seeing videos generated by AI models like Sora approach "pseudo-reality," he began to wonder whether human reality is itself part of some simulation. His logic: if humans can generate arbitrarily realistic simulated worlds within a few decades, then the world we inhabit today may well be one of the copies created by a prior civilization.

He believes "consciousness" itself may be neither as mysterious nor as unique as humans imagine. If AI models with subjective experience can one day be built, "consciousness engineering" becomes real: consciousness becomes an engineered system that can be designed, transferred, backed up, and continued, not merely a philosophical proposition.

From the intelligence flywheel to the dark forest

Asked whether AI is the next oil, Wang gave a two-part answer. On one hand, AI will become key infrastructure for future national economies, scientific research, and military systems, much as oil was to the industrial age.

Whoever deploys powerful AI systems first can dramatically improve research efficiency, drive GDP growth, and accelerate automation. On the other hand, AI differs from a finite resource like oil in its self-reinforcing flywheel effect: the stronger the AI, the stronger the nation's capabilities, which in turn accelerate AI's evolution, yielding exponential growth.
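The flywheel Wang describes can be sketched numerically. This is only a toy model: the feedback coefficients `a` and `b` are invented for illustration, not taken from the interview. It shows the qualitative point that when capability and national capacity feed each other, the growth rate itself keeps rising, i.e. faster than fixed-rate exponential growth.

```python
def flywheel(ai=1.0, capacity=1.0, steps=10, a=0.3, b=0.2):
    """Toy model: each step, AI capability grows in proportion to
    national capacity, and capacity grows in proportion to AI."""
    history = []
    for _ in range(steps):
        ai, capacity = ai * (1 + a * capacity), capacity * (1 + b * ai)
        history.append((ai, capacity))
    return history

trace = flywheel()
# The step-to-step growth ratio of AI is 1 + a * capacity, and capacity
# only increases, so the growth rate itself keeps rising each step.
ratios = [trace[i + 1][0] / trace[i][0] for i in range(len(trace) - 1)]
```

The design point is the coupling: removing either feedback term (`a` or `b` set to 0) reduces the model to ordinary fixed-rate exponential growth, which is the contrast with a finite resource like oil.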

He pointed out that AI rests on three pillars: algorithms, computing power, and data. Algorithms are the core of intelligent logic, requiring continual research breakthroughs; computing power depends on electricity, chips, and cooling systems to build high-density capacity; data is the "nutrient" of model training, directly determining an AI's knowledge boundary and reasoning accuracy. Of the three, data most resembles oil, but the real key, he emphasized, is whether all three work together to drive the national intelligence flywheel.

The focus of competition has shifted from model capability to "computing density." The question, he said bluntly, is no longer whether you have AI but whether you can run thousands of powerful instances in parallel.

If one side can simultaneously deploy 10, 100, or even 100 million AI models with cyberattack capabilities, its combat effectiveness will far exceed that of traditional military systems. That depends on data center deployment, power delivery capacity, chip inventory, and data refresh frequency.

The battle over this base-layer infrastructure has begun. Over the past year, global technology giants such as xAI (Elon Musk's company), OpenAI, Google, Amazon, and Meta have mapped out computing plans for the next 5-6 years, competing for data center sites, power corridors, and chip supply chains. This competition is not only a business war but a global contest for data sovereignty and computing dominance.

Meanwhile, AI poses new threats to the justice system. "Deepfake" capabilities are rapidly crossing the threshold of legal cognition: as models improve, synthetic images and video become indistinguishable from real ones, and the existing legal system is almost entirely unprepared.

When evidence itself can be faked, "objective images" lose their evidentiary meaning. AI-generated content can still be detected for now, but the technology is evolving far faster than institutional responses.

He proposed two responses: first, the public must build strong abilities to identify falsehoods, with the younger generation treating information discernment as a survival skill; second, the law must quickly draw boundaries, making clear that submitting AI-synthesized evidence is serious perjury and should even constitute a criminal offense. Only institutional punishment can establish an effective framework of "negative incentives" to keep AI from being used to deliberately mislead the judiciary.

More dangerous than individual manipulation, though, is abuse at the institutional level. Wang emphasized that AI is not inherently decentralizing; in practice it amplifies the control of existing power structures. When actors with computing power can use AI to generate high-fidelity imitations, they can manipulate narratives at scale, fabricate credible audio and video, and turn "authenticity" itself into a monopolized resource. He warned that such AI-powered information weapons are already appearing in elections and political events.

The only countermeasure is to build strong verification mechanisms that rebuild the chain of trust between content and publisher. Governments and large institutions with AI capabilities, in particular, must have their content generation constrained by institutional transparency; otherwise, in a world where true and false are indistinguishable, information itself becomes a weapon of war.
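One hypothetical shape such a chain of trust could take is cryptographic signing of published content, so anyone can check that a piece of media really came from the claimed publisher and was not altered afterward. The sketch below uses Python's standard-library HMAC purely as a stand-in; a real provenance system would use public-key signatures (so verifiers never hold the secret), and all names here are illustrative, not from the interview.

```python
import hashlib
import hmac

def sign(content: bytes, key: bytes) -> str:
    """Produce a tag that binds the content to the key holder."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag; constant-time compare resists timing attacks."""
    return hmac.compare_digest(sign(content, key), tag)

# Hypothetical publisher key and content, for illustration only.
key = b"publisher-secret"
article = b"official statement"
tag = sign(article, key)

assert verify(article, tag, key)                    # authentic content passes
assert not verify(b"tampered statement", tag, key)  # any alteration fails
```

The essential property is that the tag depends on every byte of the content: a single edited word breaks verification, which is exactly the link between content and publisher that deepfakes sever.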

On extraterrestrial life, Wang said he is highly confident it exists. Citing the Fermi paradox, he argued that in so vast a universe, Earth is unlikely to be the only planet with intelligent life. Its existence is almost beyond question, he believes; the distances between civilizations are simply too great for communication or observation.

He then invoked the "dark forest hypothesis" to explain why we have not yet discovered extraterrestrial civilizations. On this theory, once a civilization reaches a certain technological level, its most rational choice is to stay hidden, avoiding discovery and destruction by more advanced civilizations. From this perspective the universe is a dark forest in which civilizations hunt one another, and intelligent beings refrain from sending signals precisely in order to survive.

This "stealth motive," he noted, may also explain our failure to observe any extraterrestrial presence. The more advanced a civilization's technology, the better it understands the existential risk of revealing its location. So it is not that extraterrestrial life does not exist, but that it has chosen silence.

Human understanding of the universe remains limited. The sun has a life cycle, and the habitability of the solar system is not eternal. A hypothesis more and more scientists are discussing is that life may attach itself to different planets at different times: five billion years ago it might have been Mars, today it is Earth, and in the future it could be a moon of Saturn. As stars evolve, planetary habitability shifts, and the "rotation" of life is not unfounded.
