Superintelligence startup Reflection AI raises $130 million at launch

Reflection AI Inc., a new startup founded by former Google DeepMind researchers, launched today with $130 million in initial funding.

The capital was raised across two rounds. The first was a $25 million seed round led by Sequoia Capital and CRV. CRV then co-led the subsequent $105 million Series A round alongside Lightspeed Venture Partners.

The rounds drew several other notable investors, including Nvidia Corp.’s venture capital arm, LinkedIn co-founder Reid Hoffman and Scale AI Inc. Chief Executive Officer Alexandr Wang. The company is valued at $555 million.

At the helm of Reflection AI are co-founders Misha Laskin, who serves as CEO, and Ioannis Antonoglou. Laskin was instrumental in developing the training workflow for Google LLC’s Gemini large language model series, while Antonoglou focused on enhancing the post-training systems, which optimize an LLM’s performance after its initial training.

Reflection AI aims to create what it terms “superintelligence,” an AI system capable of handling most computer-related tasks. As a first step toward this ambitious goal, the company is working on an autonomous programming tool, believing that the foundational technologies required for this tool can also be adapted to achieve superintelligence.

In a blog post, Reflection AI team members stated, “The innovations necessary for a fully autonomous coding system—such as advanced reasoning and iterative self-improvement—naturally extend to a wider range of computer tasks.”

Initially, the company will concentrate on developing AI agents designed to automate specific programming tasks. Some of these agents will be tasked with identifying vulnerabilities in developers’ code, while others will focus on optimizing memory usage and testing applications for reliability.

Reflection AI also plans to automate several related tasks. The company says its technology can generate documentation explaining what a given piece of code does, and that the software will help manage the infrastructure supporting customers’ applications.

A recent job listing on Reflection AI’s website indicates that the company intends to build its software using large language models (LLMs) and reinforcement learning. Traditionally, developers trained AI models on datasets in which each data point was paired with a label or explanation. Reinforcement learning removes the need for those annotations: the model instead learns from a reward signal that scores its outputs, which simplifies the task of assembling training datasets.
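To make the distinction concrete, here is a minimal Python sketch of the two regimes: the supervised path needs a labeled answer for every example, while the reinforcement-learning path needs only a reward function that scores the model's outputs. The toy task, reward function and hyperparameters below are illustrative assumptions, not Reflection AI's actual training setup.

```python
# Minimal sketch (not Reflection AI's code) contrasting the two data requirements.
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised setup: every example needs a human-provided target ----------
supervised_data = [("2+2", "4"), ("3+1", "4")]  # (input, labeled answer)

# --- Reinforcement learning setup: only a reward signal is needed -----------
def reward(action: int) -> float:
    """Score an output; here action 1 stands in for 'correct output'."""
    return 1.0 if action == 1 else 0.0

theta = np.zeros(2)  # logits of a toy two-action policy

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

learning_rate = 0.5
for step in range(200):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)           # sample an output from the policy
    r = reward(action)                        # no labeled dataset, just a score
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0                # gradient of log pi(action)
    theta += learning_rate * r * grad_log_pi  # REINFORCE-style update

print("P(correct output) after training:", softmax(theta)[1])
```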

The job listing further suggests that Reflection AI is keen to “explore novel architectures” for its AI systems, hinting at a potential shift away from the widely used Transformer neural network architecture that most LLMs rely on. An increasing number of LLMs are adopting a competing architecture known as Mamba, which offers greater efficiency in certain areas.
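For readers unfamiliar with the contrast, the sketch below illustrates the core idea: a state-space model of the kind Mamba builds on processes a sequence with a linear recurrence whose cost grows linearly with length, whereas self-attention computes a quadratic matrix of pairwise scores. The matrices and dimensions are toy values chosen for illustration; real Mamba blocks add input-dependent (selective) parameters and other machinery not shown here.

```python
# Toy comparison of a state-space recurrence and single-head self-attention.
import numpy as np

def ssm_scan(x, A, B, C):
    """Run h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t over a sequence: O(length)."""
    h = np.zeros(A.shape[0])
    outputs = []
    for x_t in x:                # one linear pass over the tokens
        h = A @ h + B @ x_t
        outputs.append(C @ h)
    return np.stack(outputs)

def attention(x):
    """Single-head self-attention for comparison: O(length^2) score matrix."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
seq = rng.normal(size=(16, 8))        # 16 tokens, 8 features each
A = np.eye(4) * 0.9                   # toy state-transition matrix
B = rng.normal(size=(4, 8)) * 0.1
C = rng.normal(size=(2, 4))
print(ssm_scan(seq, A, B, C).shape, attention(seq).shape)
```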

Another job posting, for an AI infrastructure specialist, suggests that Reflection AI plans to use up to tens of thousands of graphics cards to train its models. The company also said it intends to develop “vLLM-like platforms for non-LLM models.” vLLM is a popular open-source tool that developers use to reduce the memory consumption of their language models during inference.
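For context, the snippet below shows roughly how developers use vLLM's Python API to serve a model with a capped GPU memory budget. The model name, parameters and prompt are illustrative, and nothing here reflects Reflection AI's own infrastructure.

```python
# Hedged illustration of typical vLLM usage; requires `pip install vllm` and a GPU.
from vllm import LLM, SamplingParams

# vLLM's paged KV-cache management keeps memory use down;
# gpu_memory_utilization caps how much of the GPU the engine may claim.
llm = LLM(model="facebook/opt-125m", gpu_memory_utilization=0.8)

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(
    ["Write a docstring for a binary search function."],
    sampling_params,
)

for output in outputs:
    print(output.outputs[0].text)
```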

“As the team enhances model intelligence to broaden its capabilities, Reflection’s agents will take on more tasks,” wrote Sequoia Capital investors Stephanie Zhan and Charlie Curnin in a blog post. “Envision autonomous coding agents tirelessly working in the background, managing workloads that typically hinder team productivity.”