Hardware4 aka AI4, 101

Camera resolution: roughly 4x improvement. Neural network compute: 3-5x improvement. Memory bandwidth: 3.3x improvement. RAM: 2x increase. Minus one zoomed-in front-facing camera. The Cybertruck later gained a front bumper camera.

AI4 is a new name for HW4

HW4 is not just the compute. It refers to the chips plus the sensors, i.e. the cameras. AI4 was released in early 2023 for Model S/X and later that year for Model 3/Y. Numerous videos on YouTube highlight the differences between the two. The quickest way to tell them apart is to check whether the side repeater camera lenses have a red tint; if they do, the car is AI4.

When it comes to the number of exterior cameras, AI4 lost one. HW3 had three front-facing cameras: the Main Pitch, the Fisheye Pitch, and the Narrow Pitch. AI4 dropped the Narrow Pitch camera, which was essentially the Main Pitch zoomed in. Otherwise, the two side repeater cameras, two B-pillar cameras, and one rearview camera remain. All of them got an upgrade. The front-facing cameras jumped from a 1280x960 resolution to 2896x1896 - from about 1.2 megapixels to about 5.5 megapixels. This roughly 4x jump improves detail retention, low-light performance, and color accuracy. It lets the neural network understand the surrounding conditions better, including reading signs. The Cybertruck, which is also part of the AI4 family, got a ninth exterior camera on the front bumper. For reference, the iPhone 17's main rear camera has a resolution of 48MP.
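The megapixel figures above are just pixel counts. A quick back-of-the-envelope check, using the resolutions as stated, shows the jump multiplies out to roughly 4.5x:

```python
# Illustrative arithmetic only: megapixel counts for the two camera
# generations, using the resolutions quoted in the text.

hw3_w, hw3_h = 1280, 960    # HW3 front-facing camera resolution
ai4_w, ai4_h = 2896, 1896   # AI4 front-facing camera resolution

hw3_mp = hw3_w * hw3_h / 1e6   # pixel count in megapixels (~1.2 MP)
ai4_mp = ai4_w * ai4_h / 1e6   # pixel count in megapixels (~5.5 MP)

print(f"HW3: {hw3_mp:.2f} MP, AI4: {ai4_mp:.2f} MP, "
      f"ratio: {ai4_mp / hw3_mp:.1f}x")
# → HW3: 1.23 MP, AI4: 5.49 MP, ratio: 4.5x
```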

When it comes to the interior camera located just above the rearview mirror, AI4 retains much more detail than the previous generation. This is quite evident in the video.

HW3 was introduced in 2019, and AI4 in 2023. The silicon chips were both manufactured by Samsung: a 14nm process for HW3, 7nm for AI4. Computational capacity increased from 36 TOPS to 50 TOPS. The previous Nvidia AI compute platform delivered 21 TOPS. In total, HW3 featured two redundant FSD chips, each housing dual neural network accelerators capable of 36 TOPS apiece. So that's 72 TOPS per chip and 144 TOPS across the system. It is reported that AI4 also features two FSD chips on one board, each with three neural network accelerators, bringing the system-wide total up to 240 TOPS. HW3 and AI4 both use GDDR6 memory, but bandwidth improved 3.3x: HW3 delivers 68 GB/s while AI4 supports 224 GB/s. AI4 also comes with double the RAM, 16GB compared to 8GB.
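A quick sanity check on the figures above, using only the numbers as quoted. The HW3 math multiplies out exactly; for AI4, the reported 240 TOPS system total implies roughly 40 TOPS per accelerator across its six accelerators:

```python
# Back-of-the-envelope check of the compute and bandwidth figures
# quoted above. The AI4 numbers are reported rather than official,
# so treat these as rough, derived values.

# HW3: 2 FSD chips x 2 accelerators per chip x 36 TOPS per accelerator
hw3_system_tops = 2 * 2 * 36        # = 144, matching the stated total

# AI4: 2 FSD chips x 3 accelerators per chip, 240 TOPS system-wide,
# which works out to 240 / 6 = 40 TOPS per accelerator
ai4_per_accel = 240 / (2 * 3)       # = 40.0

# Memory bandwidth improvement
bw_ratio = 224 / 68                 # ~3.3x

print(hw3_system_tops, ai4_per_accel, round(bw_ratio, 1))
# → 144 40.0 3.3
```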

A quick reminder that since the move to 3D FinFET transistors, around the 14-22nm nodes, the nanometer process name has been a generation label rather than a measure of actual transistor gate length. As a reference, the latest iPhone 17 Pro's A19 Pro chip uses a 3nm process manufactured by TSMC; the gate length for this process is in the low teens of nanometers. Further narrowing is technically infeasible. There isn't an official TOPS figure from Apple for the A19 Pro, but the A17 Pro in the iPhone 15 Pro models was rated at 35 TOPS. TOPS is less highlighted for general-purpose processors and more often mentioned in the context of neural network accelerators: specialized units that excel at matrix multiplication. TOPS fits well in Tesla's case because inference depends on how fast the on-device chip can run the trained models.
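To make the TOPS-and-matrix-multiplication connection concrete, here is a toy calculation. The layer size is hypothetical, picked only to show how operation counts and a TOPS rating translate into a best-case latency; real workloads never reach peak throughput:

```python
# Toy illustration of why TOPS is the headline inference metric:
# neural-network layers are dominated by matrix multiplication, whose
# cost is counted in multiply-accumulate operations.
# The matrix dimensions below are hypothetical, not from any real model.

M, K, N = 1024, 1024, 1024           # hypothetical layer: (M,K) x (K,N)
ops = 2 * M * K * N                  # one multiply + one add per MAC

tops = 144                           # HW3 system-wide figure from the text
peak_ops_per_s = tops * 1e12

# Best-case latency at peak throughput
latency_us = ops / peak_ops_per_s * 1e6
print(f"{ops / 1e9:.1f} GOPs, best case {latency_us:.1f} us at {tops} TOPS")
# → 2.1 GOPs, best case 14.9 us at 144 TOPS
```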