AI systems and training models used by Tesla and Waymo

Jason Li
Sr. Software Development Engineer
Skilled Angular and .NET developer, team leader for a healthcare insurance company.
May 15, 2023

Since their introduction, autonomous vehicles have been a focus of media attention. Today's autonomous cars are technological marvels, both in the hardware installed on the vehicle and in the models it uses to drive itself. Because they must apply machine learning algorithms to real-world challenges, autonomous vehicles are also a hub of AI innovation.

Self-driving cars are one of the most prominent trends in AI and machine learning. In 2020 we saw stories from companies like Waymo, whose Waymo One service lets customers hail self-driving taxis.

How autonomous vehicles make decisions

Driverless cars rely on object detection and object classification algorithms to recognize objects, analyze situations, and make decisions: they identify objects, classify them, and decide how to respond. Teaching the machine learning algorithms to make the best choices on the road requires large volumes of carefully annotated training data.
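
To make the detect-classify-decide loop concrete, here is a minimal, purely illustrative sketch. The `Detection` record, the braking rule, and all the numbers are invented for this example and stand in for the output of a real perception stack.

```python
from dataclasses import dataclass

# Hypothetical detection record: in a real stack these fields would come
# from neural networks processing camera, lidar, and radar data.
@dataclass
class Detection:
    label: str         # classification result, e.g. "pedestrian", "car"
    distance_m: float  # estimated distance ahead of the vehicle
    in_path: bool      # whether the object lies in the planned path

def decide(detections, speed_mps):
    """Toy decision rule: brake if any detected object sits in the
    vehicle's path inside its stopping distance plus a safety margin."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 braking
    for d in detections:
        if d.in_path and d.distance_m < stopping_distance + 5.0:  # 5 m margin
            return "brake"
    return "proceed"

scene = [Detection("pedestrian", 20.0, True), Detection("car", 80.0, False)]
print(decide(scene, speed_mps=15.0))  # prints "brake"
```

Real systems, of course, replace this single threshold with learned policies and continuous trajectory planning; the point here is only the shape of the pipeline.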

Through its partnership with TSMC, Tesla has made significant advances in powerful on-vehicle computers. Tesla has also developed in-house models for its Autopilot feature using a supercomputing cluster built on NVIDIA GPUs.

Every day, the Waymo Driver safely navigates real city streets, using its sensors and software to perform the complete act of driving. Waymo describes its full safety program in its public Safety Report.

The only other major firm pursuing fully autonomous driving is Waymo, a subsidiary of Alphabet, Google's parent company. Drawing on Google's expertise in silicon, Waymo designed and built a complete set of chips and sensors for its vehicles, collectively known as the Waymo Driver. The company has also developed a simulation environment, dubbed "Carcraft," that lets it train its models virtually.

Tesla developed its own chip, the Tesla FSD chip, in close collaboration with ARM and TSMC. The chip is designed to run inference with minimal latency and high power efficiency: it receives sensor data and uses it to make driving decisions in real time.

In addition to its supercomputing cluster of NVIDIA GPUs, Tesla has built a second supercomputer, the Dojo system, designed from the ground up for machine learning training workloads. Both machines train Tesla's self-driving algorithms on the company's own datasets.

Waymo, on the other hand, created a comprehensive hardware suite for its autonomous vehicles. The current fifth-generation Waymo Driver uses an all-electric Jaguar I-PACE SUV as its base vehicle, which is then fitted with Waymo's sensor and compute platform: radar, lidar, and cameras connected to business-grade CPUs and GPUs. Waymo's algorithms are trained on Google's cloud computing platform using TPUs and the TensorFlow ecosystem.

Neural networks in autonomous vehicles

Both companies have invested heavily in developing AI and ML models for their cars. Tesla was one of the first companies to use neural networks for self-driving applications; its Autopilot team has trained over 48 networks using data gathered from the FSD beta test fleet.

Using images from the car's cameras, these networks reconstruct a complete, machine-readable model of the environment, which serves as the basis for inference on the vehicle's onboard FSD chip.

Waymo has built a highly detailed closed-course testing facility, known as Castle, that replicates a variety of urban environments. By testing its vehicles there, Waymo can train its algorithms to respond to the emergency scenarios people encounter every day.

Waymo has trained its algorithms through testing at Castle and by simulating over 20 billion miles of driving. In simulation, it can generate virtual scenarios to refine its algorithms and precisely target the most challenging conditions a Waymo Driver will face. Waymo also collaborates closely with Google Brain to bring cutting-edge AI and ML algorithms into its vehicles.


Lidar is one of the Waymo Driver's most powerful sensors: it builds a 3D picture of the surroundings, allowing the system to gauge the size and distance of objects up to 300 meters away, 360 degrees around the vehicle.
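
That 3D picture is built from millions of individual returns, each defined by a measured range and the beam's angles. A small sketch (illustrative geometry only, not Waymo's code) shows how one return becomes a 3D point in the vehicle frame:

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) into a 3D point.
    Stacking many such points yields the point cloud a lidar 'image' is
    made of. Standard spherical-to-Cartesian conversion."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return 300 m straight ahead at sensor height:
print(lidar_return_to_xyz(300.0, 0.0, 0.0))  # (300.0, 0.0, 0.0)
```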


The vision system is designed to deliver sharper images and capture more detail in the most challenging driving conditions. The fifth-generation Waymo Driver uses cameras to build a 360-degree vision system that can recognize crucial features, such as pedestrians and stop signs, from more than 500 meters away.

How does the Waymo Driver work?

1. A map of the area

Before operating in an unfamiliar area, the Waymo Driver maps the region thoroughly, noting every detail from stop signs to curbs. The platform then relies on these elaborate custom maps, combined with real-time sensor data, to pinpoint its exact position on the road at any moment, rather than depending solely on external signals such as GPS, which can always be lost.
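
The idea of combining a pre-built map with live sensor data can be sketched in a few lines. The map contents and landmark names below are hypothetical; real HD-map localization solves a far richer probabilistic matching problem.

```python
# A hypothetical pre-built map: surveyed landmark positions (x, y) in meters.
hd_map = {"stop_sign_1": (12.0, 3.5), "curb_corner_7": (40.2, -1.1)}

def localize(observations):
    """Toy localization: each observation pairs a recognized landmark with
    its position relative to the car. Subtracting the relative offset from
    the surveyed map position gives one estimate of the car's position;
    averaging over landmarks reduces sensor noise."""
    estimates = []
    for landmark, (rel_x, rel_y) in observations:
        map_x, map_y = hd_map[landmark]
        estimates.append((map_x - rel_x, map_y - rel_y))
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n, sum(e[1] for e in estimates) / n)

# The car sees stop_sign_1 ten meters ahead and slightly left:
print(localize([("stop_sign_1", (10.0, 1.5)), ("curb_corner_7", (38.2, -3.1))]))
```

Because the map is stored onboard, this kind of matching keeps working even when GPS drops out, which is the advantage the paragraph above describes.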

2. Monitoring everything at once

Using cutting-edge sensors and technologies like machine learning, the Waymo Driver's perception system understands everything around it, from other vehicles to pedestrians and cyclists. It also reads signals and signs, such as stop signs and changing traffic lights.

3. Seeing events before they happen

A driving scene contains many different actors, each with its own behaviors and intentions. To predict what other road users might do, the Waymo Driver analyzes real-time information and combines it with its accumulated driving experience. It distinguishes how a car behaves from a bicycle or a pedestrian, and within fractions of a second forecasts the paths the surrounding road users could take.
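
A crude stand-in for this kind of forecasting is constant-velocity extrapolation of a tracked agent. The sketch below is illustrative only; production systems use learned models conditioned on agent type, road geometry, and interactions between agents.

```python
def predict_path(position, velocity, horizon_s=3.0, dt=1.0):
    """Extrapolate an agent's future positions assuming it keeps its
    current velocity. Returns one predicted (x, y) per time step out
    to the given horizon."""
    x, y = position
    vx, vy = velocity
    path = []
    t = dt
    while t <= horizon_s:
        path.append((x + vx * t, y + vy * t))
        t += dt
    return path

# A car 20 m ahead, drifting toward our lane at 2 m/s laterally:
print(predict_path((20.0, 4.0), (10.0, -2.0)))
# [(30.0, 2.0), (40.0, 0.0), (50.0, -2.0)]
```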

4. Planning for the safest outcome

After gathering all of this information, including the precise maps, the objects in the immediate area, and where they might move, the Waymo Driver plans the safest path to follow.
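
One common way to frame this planning step is to score candidate trajectories with a cost function and pick the cheapest. The cost terms and the scenario below are invented for illustration and are far simpler than any production planner.

```python
def trajectory_cost(trajectory, obstacles, lane_center_y=0.0):
    """Toy cost: large penalty for passing within 2 m of a predicted
    obstacle position, small penalty for deviating from the lane center."""
    cost = 0.0
    for x, y in trajectory:
        for ox, oy in obstacles:
            if ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 < 2.0:
                cost += 1000.0  # near-collision penalty
        cost += abs(y - lane_center_y)  # comfort / lane-keeping term
    return cost

def best_trajectory(candidates, obstacles):
    # Choose the candidate with the lowest total cost.
    return min(candidates, key=lambda t: trajectory_cost(t, obstacles))

stay   = [(10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
swerve = [(10.0, 0.0), (20.0, 3.0), (30.0, 0.0)]
obstacles = [(20.0, 0.5)]  # predicted to block our lane at x = 20
print(best_trajectory([stay, swerve], obstacles))  # picks the swerve
```

The heavy collision penalty dominates the small lane-deviation penalty, so the planner prefers a brief swerve over staying in a blocked lane.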

5. Utilizing the TensorFlow ecosystem

Waymo trains its neural networks in Google data centers using the TensorFlow ecosystem, including TPUs. Tensor processing units (TPUs) let the platform train networks up to 15 times more efficiently. Waymo also tests its ML models in simulation. Thanks to these thorough training and testing cycles, the platform can quickly deploy its newest networks to its self-driving cars and keep improving its models.
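
The structure of such a training cycle can be illustrated with a minimal gradient-descent loop in plain Python. This shows only the shape of train-and-evaluate on a toy problem; Waymo's actual training runs in TensorFlow on TPUs, not in code like this.

```python
import random

# Toy dataset: y = 3x, standing in for real driving data.
data = [(x, 3.0 * x) for x in range(-5, 6)]

def train(epochs=200, lr=0.01):
    """Minimal gradient-descent loop fitting y = w * x by minimizing
    squared error. Each epoch sweeps the dataset and nudges the single
    parameter w along the negative gradient."""
    w = random.uniform(-1.0, 1.0)
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

w = train()
print(round(w, 3))  # converges to the true slope, 3.0
```

Frameworks like TensorFlow automate the gradient computation and distribute exactly this kind of loop across accelerators such as TPUs, which is where the claimed efficiency gains come from.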

6. Operating in a variety of weather conditions

Heavy rain and snow limit visibility, making driving difficult for humans and self-driving systems alike. Waymo has prepared its vehicles to operate in inclement weather.

Snowflakes and raindrops can introduce significant noise into a self-driving car's sensor data, so machine learning plays a key role in filtering out that noise and correctly detecting pedestrians, cars, and other objects.
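
One classical way to suppress this kind of noise exploits the fact that returns from solid objects cluster while returns from falling snowflakes tend to be isolated. A toy density filter (not Waymo's actual method, which is learned) illustrates the intuition:

```python
def filter_weather_noise(points, neighbor_radius=1.0, min_neighbors=2):
    """Keep only 2D points that have enough nearby neighbors. Isolated
    returns, like those from individual snowflakes or raindrops, are
    dropped; dense clusters from real objects survive."""
    kept = []
    for i, (x, y) in enumerate(points):
        neighbors = 0
        for j, (ox, oy) in enumerate(points):
            if i != j and (x - ox) ** 2 + (y - oy) ** 2 <= neighbor_radius ** 2:
                neighbors += 1
        if neighbors >= min_neighbors:
            kept.append((x, y))
    return kept

car = [(10.0, 0.0), (10.3, 0.1), (10.1, 0.4)]  # dense cluster: a real object
snow = [(4.0, 7.0), (15.0, -3.0)]              # isolated returns: noise
print(filter_weather_noise(car + snow))        # keeps only the car cluster
```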

7. Simulating camera data with SurfelGAN

Waymo recently revealed a plan to use AI to generate camera images that imitate the sensor data collected by its self-driving cars.

The SurfelGAN approach is described in a recent paper co-authored by Waymo researchers, including Head of Research Dragomir Anguelov. SurfelGAN reconstructs scenes from camera viewpoints using texture-mapped surface elements that encode position and orientation. The method preserves the fidelity of the sensor data while saving a significant amount of computation.

Waymo and comparable platforms train, test, and validate their systems in simulation before deploying them in real vehicles. Unlike conventional driving simulators, Waymo's Carcraft models materials accurately enough that sensors such as lidar and radar behave as they would in the real world.

Employing artificial intelligence in production

AI can generate original designs that reduce the material needed to make a part while preserving its structural integrity. A perfect example is Audi's new AI system, dubbed FelGAN, which uses a generative adversarial network to design rims for its vehicles. Thanks to self-supervised learning, the model can propose novel rim designs while keeping them light. When the designs are turned into prototypes and tested, Audi's engineers praise the model's capacity to "think outside the box."

While major automakers are working to integrate AI into manufacturing, one firm has used cutting-edge algorithms to create a car from scratch: the American automaker Czinger developed its 21C hypercar with AI.

By applying AI-driven Pareto optimization to every one of the car's components, Czinger ensured that not a single gram was wasted. Because the resulting parts are so complex, the company had to invest heavily in additive manufacturing technology to produce some of them in-house.
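
Pareto optimization keeps only designs that are not dominated on any trade-off, for example mass versus stiffness. A small sketch with invented numbers (not Czinger's data) shows the core filtering step:

```python
def pareto_front(designs):
    """Return the Pareto-optimal subset of candidate part designs, each
    scored as (mass_kg, stiffness). A design is dominated if another is
    at least as light AND at least as stiff, and strictly better in at
    least one of the two objectives."""
    front = []
    for i, (m, s) in enumerate(designs):
        dominated = any(
            (m2 <= m and s2 >= s) and (m2 < m or s2 > s)
            for j, (m2, s2) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((m, s))
    return front

candidates = [(2.0, 50.0), (2.5, 70.0), (3.0, 60.0), (1.8, 40.0)]
print(pareto_front(candidates))  # (3.0, 60.0) is dominated by (2.5, 70.0)
```

An optimizer then searches only along this front, trading mass against stiffness without ever accepting a design that wastes material for no structural benefit.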


AI will soon add another weapon to the automakers' arsenal, ushering in a new era of car design and production. Combining cutting-edge 3D printing, powerful generative AI, and breakthroughs in robotics, the cars of the future will leave today's vehicles in the dust.