The string of engineers who spoke Thursday night during a deep dive into Cruise’s autonomous vehicle technology never mentioned Tesla’s name. They didn’t have to; the message was clear enough.

GM’s self-driving subsidiary Cruise presented a technical and deployment roadmap — at a granular level — that aimed to show how it has built autonomous vehicles that are safer and more scalable than any human-driven vehicle, including those equipped with advanced driver assistance systems.

While Cruise was clearly making a case for its own technology (not to mention trying to recruit fresh talent), the event was also an argument for autonomous vehicles in general. The engineers and product leads who spoke Thursday each presented a different piece of the system, from how Cruise uses simulations and develops its own chips and other hardware to the design of its app and of the vehicle itself.

The branded “Under the Hood” event built on comments CEO Dan Ammann made last month during GM’s investor day, in which he laid out the company’s plan to launch a commercial robotaxi and delivery service, starting with retrofitted Chevy Bolts and eventually scaling to an army of tens of thousands of purpose-built Origin AVs on the road over the next few years.

Cruise just gained approval in California to perform commercial delivery services, and it remains one permit away from being able to charge for driverless ride-hailing. Even so, Cruise thinks it’ll be able to drive down costs enough to scale up and out quickly.

Here’s how.

Using simulations to scale, not just verify the system

Cruise is relying on simulations not only to prove out its safety case, but also to scale to new cities without having to perform millions of miles of tests in them first.

The company will still have to map the cities it enters, but it won’t have to remap them to track the changes to the environment that inevitably happen, like lane changes or street closures. When Cruise goes to a new city, it starts with a technology it calls WorldGen, which it says generates entire cities accurately and at large scale, “from their quirky layouts to the smallest details,” allowing engineers to test out new operational design domains, according to Sid Gandhi, technical strategy lead of simulation at Cruise. In other words, WorldGen becomes the stage on which future simulations are set.

To ensure optimal world creation, Cruise accounts for factors like lighting at 24 unique times of day and varying weather conditions, even going so far as to systematically measure the light from a range of street lamps in San Francisco.

“When we combine a high-fidelity environment with a procedurally generated city, that’s when we unlock the capability to efficiently scale our business to new cities,” said Gandhi.
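Cruise didn’t share how WorldGen is implemented, but the combination it describes, a procedurally generated city swept across measured lighting and weather conditions, can be pictured with a rough sketch like the one below. Every name and number here (the EnvironmentConfig class, the weather list, the lamp measurements) is an assumption for illustration, not Cruise’s tooling.

```python
import itertools
import random
from dataclasses import dataclass

# Hypothetical sketch of enumerating environment conditions for a procedurally
# generated city tile, loosely following the WorldGen description. None of
# these names or values come from Cruise's actual tooling.

@dataclass
class EnvironmentConfig:
    hour: int              # one of 24 time-of-day buckets
    weather: str           # e.g. "clear", "fog", "rain"
    streetlamp_lux: float  # illuminance measured for a given lamp model

WEATHER = ["clear", "fog", "rain"]
# Stand-in for measurements like the San Francisco street lamp survey.
MEASURED_LAMP_LUX = {"sodium_vapor": 15.0, "led_cobra_head": 22.5}

def environment_sweep(samples_per_condition: int = 1):
    """Yield configs covering every hour/weather/lamp combination."""
    for hour, weather, lux in itertools.product(
            range(24), WEATHER, MEASURED_LAMP_LUX.values()):
        for _ in range(samples_per_condition):
            yield EnvironmentConfig(hour=hour, weather=weather,
                                    streetlamp_lux=lux)

if __name__ == "__main__":
    configs = list(environment_sweep())
    print(f"{len(configs)} environment configurations to render")
    print(random.choice(configs))
```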

He then laid out the technology behind “Road to Sim,” which transforms real events collected by AVs on the road into editable simulation scenarios. Testing against scenarios it has already encountered ensures the AV doesn’t regress.

“The Road to Sim combines information from perception with heuristics learned from our millions of real-world miles to recreate a full simulation environment from road data,” said Gandhi. “Once we have the simulation, we can actually create permutations of the event and change attributes like vehicle and pedestrian types. It’s a super easy and extremely powerful way to build test suites that accelerate AV development.”
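Cruise didn’t describe Road to Sim’s internals, but the “permutations of the event” idea, taking one logged scenario and swapping attributes like vehicle and pedestrian types, might look roughly like the sketch below. The Scenario and RecordedAgent types and the swap lists are illustrative assumptions, not Cruise’s data model.

```python
import copy
import itertools
from dataclasses import dataclass, field

# Illustrative sketch of permuting a logged road event, in the spirit of the
# Road to Sim description. All types, fields and swap lists are assumptions.

@dataclass
class RecordedAgent:
    kind: str                                   # e.g. "sedan", "adult"
    track: list = field(default_factory=list)   # (x, y, t) samples from road data

@dataclass
class Scenario:
    name: str
    agents: list

VEHICLE_SWAPS = ["sedan", "pickup", "bus"]
PEDESTRIAN_SWAPS = ["adult", "child", "wheelchair_user"]

def permute(base: Scenario):
    """Yield copies of a logged scenario with each agent's type swapped out."""
    swap_options = []
    for agent in base.agents:
        if agent.kind in VEHICLE_SWAPS:
            swap_options.append(VEHICLE_SWAPS)
        elif agent.kind in PEDESTRIAN_SWAPS:
            swap_options.append(PEDESTRIAN_SWAPS)
        else:
            swap_options.append([agent.kind])   # leave unknown agent types alone
    for combo in itertools.product(*swap_options):
        variant = copy.deepcopy(base)
        for agent, new_kind in zip(variant.agents, combo):
            agent.kind = new_kind
        yield variant

if __name__ == "__main__":
    logged = Scenario("unprotected_left_turn",
                      [RecordedAgent("sedan"), RecordedAgent("adult")])
    suite = list(permute(logged))
    print(f"{len(suite)} permutations generated from one logged event")
```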

For specific scenarios that Cruise hasn’t been able to collect under real-world road conditions, there’s Morpheus, a system that generates simulations based on specific locations on the map. It uses machine learning to automatically vary as many parameters as needed, generating thousands of interesting and rare scenarios against which the AV is tested.

“As we work on solving the longtail, we’ll rely less and less on real-world testing because when you have an event that happens rarely, it takes thousands of road miles to test it properly, and it’s just not scalable,” said Gandhi. “So we’re developing technology to scalably explore large-scale parameter spaces to generate test scenarios.”
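Gandhi described Morpheus only at a high level: machine learning that explores a large parameter space anchored to a specific map location. A crude stand-in for that idea is random search over scenario parameters ranked by a difficulty score, as in the sketch below; the parameter names and the scoring function are invented purely for illustration.

```python
import random

# Toy stand-in for exploring a large scenario parameter space, in the spirit
# of the Morpheus description. A real system would use learned models and a
# full simulator; here an invented difficulty score ranks random samples.

PARAMETER_SPACE = {
    "jaywalker_speed_mps": (0.5, 3.0),
    "occluding_truck_offset_m": (0.0, 15.0),
    "time_gap_s": (0.2, 4.0),
}

def sample_scenario(rng):
    """Draw one scenario uniformly from the parameter ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAMETER_SPACE.items()}

def difficulty(scenario):
    """Invented proxy: faster jaywalkers, closer occlusions and tighter time
    gaps make the scenario harder for the planner."""
    return (scenario["jaywalker_speed_mps"]
            + (15.0 - scenario["occluding_truck_offset_m"]) / 5.0
            + (4.0 - scenario["time_gap_s"]))

def hardest_scenarios(n_samples=10_000, keep=5, seed=0):
    """Randomly sample the space and keep the highest-difficulty scenarios."""
    rng = random.Random(seed)
    samples = [sample_scenario(rng) for _ in range(n_samples)]
    return sorted(samples, key=difficulty, reverse=True)[:keep]

if __name__ == "__main__":
    for scenario in hardest_scenarios():
        print({k: round(v, 2) for k, v in scenario.items()})
```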

Test scenarios also include simulating how other road users react to the AV. Cruise’s system for this is called non-player character (NPC) AI, a term borrowed from video games that, in this context, refers to all of the cars and pedestrians in a scene and the complex multi-agent behaviors they exhibit.
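The NPC AI concept, simulated cars and pedestrians reacting to the AV rather than blindly replaying logged trajectories, can be pictured as each agent running its own small policy every simulation tick. The pedestrian policy below is deliberately trivial and is not how Cruise models agent behavior.

```python
from dataclasses import dataclass

# Minimal illustration of "non-player character" agents reacting to the AV in
# simulation. The policy is intentionally trivial; real NPC AI would rely on
# much richer, learned multi-agent behavior models.

@dataclass
class AgentState:
    position: float   # position along a shared 1-D lane, metres
    speed: float      # metres per second

def pedestrian_policy(ped: AgentState, av: AgentState, dt: float) -> AgentState:
    """Pedestrian freezes when the AV closes within 5 m, otherwise walks at ~1.4 m/s."""
    gap = abs(av.position - ped.position)
    target_speed = 0.0 if gap < 5.0 else 1.4
    speed = min(target_speed, ped.speed + 0.5 * dt)   # ramp up toward the target
    return AgentState(position=ped.position + speed * dt, speed=speed)

if __name__ == "__main__":
    dt = 0.1
    av = AgentState(position=-30.0, speed=8.0)   # AV approaching from behind
    ped = AgentState(position=0.0, speed=0.0)    # pedestrian up ahead
    for _ in range(40):                          # simulate 4 seconds
        av = AgentState(av.position + av.speed * dt, av.speed)
        ped = pedestrian_policy(ped, av, dt)
    print(f"AV at {av.position:.1f} m, pedestrian at {ped.position:.1f} m")
```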

“So Morpheus, Road to Sim and NPC AI work together in this really thoughtful way to let us perform more robust testing around rare and difficult events,” said Gandhi. “And it really gives us the confidence that we can solve rare issues now and similar issues in the future, as well.”

Generating synthetic data helps the Cruise AV target specific use cases, said Gandhi, pointing specifically to identifying and interacting with emergency vehicles, presumably for no other reason than to take a dig at Tesla, whose Autopilot ADAS system has come under federal scrutiny for repeated crashes into emergency vehicles.

“Emergency vehicles are rare compared to other types of vehicles, but we need to detect them with extremely high accuracy, so we use our data generation pipeline to create millions of simulation images of ambulances, fire trucks and police cars,” said Gandhi. “In our experience targeted synthetic data is about 180 times faster than collecting road data, and millions of dollars cheaper. And with the right mix of synthetic and real data, we can increase relevant data in our data sets by an order of magnitude or more.”
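That “right mix of synthetic and real data” amounts, at its simplest, to rebalancing a training set so rare classes like ambulances are boosted with targeted synthetic renders, as in the sketch below. The dataset sizes and the 10:1 synthetic-to-real ratio are assumptions for illustration, not figures Cruise disclosed.

```python
import random

# Rough sketch of blending scarce real-world examples (e.g. emergency vehicles)
# with targeted synthetic renders to rebalance a training set. The ratio and
# counts are illustrative; Cruise has not published its actual recipe.

def build_training_mix(real_rare, synthetic_rare, real_common,
                       synthetic_ratio=10, seed=0):
    """Boost rare classes with up to synthetic_ratio synthetic samples per real
    sample, then shuffle everything together."""
    rng = random.Random(seed)
    n_synth = min(len(synthetic_rare), synthetic_ratio * len(real_rare))
    mix = (list(real_common)
           + list(real_rare)
           + rng.sample(list(synthetic_rare), n_synth))
    rng.shuffle(mix)
    return mix

if __name__ == "__main__":
    real_rare = [("real", "ambulance")] * 50               # scarce road data
    synthetic_rare = [("synthetic", "ambulance")] * 5000   # rendered images
    real_common = [("real", "sedan")] * 2000
    mix = build_training_mix(real_rare, synthetic_rare, real_common)
    n_ambulance = sum(1 for _, label in mix if label == "ambulance")
    print(f"{n_ambulance} ambulance examples out of {len(mix)} total")
```

With these made-up numbers, the rare class grows from 50 to 550 examples, the kind of order-of-magnitude boost Gandhi describes.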

Two custom silicon chips developed in-house

During GM’s investor day in October, Cruise CEO Dan Ammann outlined the company’s plan to invest heavily in the compute power of the Origin in order to decrease costs by 90% over the next four generations so it can scale profitably. At the time, Ammann mentioned Cruise’s intention to manufacture custom silicon in-house to cut costs, but stopped short of confirming that Cruise was designing its own chips, though TechCrunch had its theories. On Thursday, Rajat Basu, chief engineer for the Origin program, validated those theories.

“Our fourth-generation compute platform will be based on our in-house custom silicon development,” said Basu. “This is purpose-built for our application. It enables focus and improves processing capability, while significantly reducing piece costs and power consumption. Compute is a critical system from a safety perspective, and has redundancy built into it. Add to that an AV system that is processing up to 10 gigabits of data every second, we end up consuming a fair amount of power. Our MLA chip allows us to run our complex machine learning pipelines in a much more focused manner, which in turn helps us to be more energy efficient without compromising on performance.”

Cruise’s AI team developed two chips. The sensor processing chip handles edge processing for the range of sensors, like cameras, radar and acoustics. The second chip, a dedicated neural network processor, supports and accelerates machine learning applications such as the large, multitask models developed by the AI team. Basu says the machine learning accelerator (MLA) chip is sized to handle exactly a certain class of neural network and ML applications, and nothing more.
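Cruise didn’t go deeper than the division of labor itself, but the split the two chips imply, sensor-side edge processing feeding a dedicated neural-network accelerator, maps onto a simple two-stage pipeline like the sketch below. Both stage functions are pure stand-ins; no real driver or silicon interface is being modeled.

```python
from dataclasses import dataclass

# Illustrative two-stage pipeline mirroring the described split between a
# sensor processing chip (edge processing of camera/radar/acoustic data) and a
# machine learning accelerator (neural-network inference). Both stages are
# stand-ins; no real driver or silicon interface is being modeled here.

@dataclass
class SensorFrame:
    sensor: str
    raw: list          # raw samples from the sensor

def edge_process(frame: SensorFrame) -> list:
    """Sensor-chip stage: normalize raw samples close to the sensor."""
    peak = max(frame.raw)
    return [sample / peak for sample in frame.raw] if peak else frame.raw

def accelerator_inference(features: list) -> float:
    """Accelerator stage: stand-in for a neural-network forward pass."""
    return sum(features) / len(features)

if __name__ == "__main__":
    frames = [SensorFrame("camera_front", [12, 48, 96]),
              SensorFrame("radar_left", [3, 7, 5])]
    for frame in frames:
        score = accelerator_inference(edge_process(frame))
        print(f"{frame.sensor}: score={score:.2f}")
```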

“This keeps the performance at an extremely high level, and it ensures that we are not wasting energy on doing anything that is not value added for us,” said Basu. “It can be paired with multiple external hosts or operate standalone. It supports single Ethernet networks up to 25G with a total bandwidth of 400G. The MLA chip we’re putting into volume production is just the start. Over time we will continue to make this even higher-performing while reducing power consumption.”

The Cruise ecosystem

One thing Cruise made clear during its event is that it hasn’t just thought about the AV tech needed to scale up successfully, but about the entire ecosystem around it: remote assistance operators to validate the AV’s decisions when it comes across unknown scenarios, customer service, a vehicle that people actually want to ride around in, and an app that can efficiently and easily handle things like customer support and incident response.

“To truly cross the chasm from research and development to a beloved product requires more than just artificial intelligence and robotics,” said Oliver Cameron, Cruise’s VP of product, at the event. “A safe self-driving vehicle alone is insufficient and simply the first step on a long, long journey. To truly build and scale a competitive product that is adopted by millions into their daily lives, you need to build a host of differentiated features and tools atop a safe self-driving foundation. How these features need to be implemented is non-obvious, especially if your company’s still heads-down solving safety issues.”