Camera-to-LiDAR calibration is a fundamental step in sensor fusion systems where data from cameras and LiDAR sensors must be aligned precisely. This calibration allows projecting 3D LiDAR points onto 2D camera images, enabling enhanced perception tasks such as object detection, lane detection, and environmental mapping. Proper calibration ensures accurate spatial and temporal alignment between the sensors, critical for autonomous vehicles, robotics, and advanced driver-assistance systems (ADAS).
In this tutorial, we will interpret and compare the calibration files of two popular Hesai LiDAR models — the OT128 and QT128. These calibration files define the intrinsic camera parameters and the extrinsic transformation between the camera and the LiDAR sensor.
By the end of this tutorial, you'll be able to:
✔ Confidently read and interpret camera-to-LiDAR calibration files from Hesai OT128 and QT128 sensors.
✔ Understand the meaning and role of intrinsic and extrinsic parameters in sensor calibration.
✔ Identify key differences between calibration setups of OT128 and QT128 based on their JSON files.
✔ Apply this understanding to assist in sensor fusion, 3D point projection, and autonomous sensing tasks.
Before we start, make sure you have the following prerequisites:
Basic understanding of camera models (especially the pinhole camera model) and intrinsic parameters.
Familiarity with 3D transformations and quaternions representing rotations.
Basic JSON format knowledge for reading calibration files.
Some experience with sensor fusion or autonomous sensing systems is helpful but not mandatory.
Let’s break down the file calib_250507_171326_ot.json step-by-step.
Metadata
"00_date" : "25/05/07-08:13:26",
"00_time_offset" : 0,
"00_date": The calibration was done on May 7, 2025, at 08:13:26.
"00_time_offset": No time offset applied; it’s set to 0.
This section outlines the camera configuration used for fusion with the OT128 LiDAR.
"0_name" : "01_camera",
"1_model" : "Pinhole",
"2_extrinsicName" : "Camera",
Name: "01_camera"
Model: Pinhole (standard camera projection model without fish-eye or wide-angle distortion).
extrinsicName: Indicates it's the main camera being calibrated with respect to the LiDAR.
The intrinsic parameters describe the camera's optical characteristics, which determine how 3D points are projected onto its 2D image plane.
"3_intrinsic" :
{
"cx" : 953.03153823077309,
"cy" : 609.87086143600197,
"fx" : 1061.5564571092602,
"fy" : 1048.898962753487,
"k1" : -0.13776637324332211,
"k2" : 0.060708678816690682,
"k3" : 0,
"k4" : 0,
"k5" : 0,
"k6" : 0,
"mel" : 0,
"p1" : 0.00046084902994353663,
"p2" : -0.0022276594504792796,
"skew" : 1.1544266947186441
}
fx, fy: Focal lengths in pixels (horizontal and vertical).
cx, cy: Principal point (optical center) coordinates in pixels.
k1–k6: Radial distortion coefficients that correct lens distortion (here k1 and k2 are non-zero, indicating slight lens distortion; k3–k6 are unused).
p1, p2: Tangential distortion coefficients.
skew: Skew coefficient between the x and y pixel axes (0 for perfectly orthogonal axes). The small non-zero value here indicates the camera's image axes are not perfectly orthogonal, which is unusual but possible in high-precision industrial cameras.
NOTE:
These parameters are used in the projection
u = fx * X/Z + cx, v = fy * Y/Z + cy,
with the distortion model applied to the normalized coordinates (X/Z, Y/Z) first.
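Here is a minimal NumPy sketch of this projection under the standard pinhole-plus-distortion model, using the intrinsic values from the file above. The skew term is folded into the final pixel mapping; the sample 3D point is purely illustrative:

```python
import numpy as np

# Intrinsics copied from the OT128 file (01_camera)
fx, fy = 1061.5564571092602, 1048.898962753487
cx, cy = 953.03153823077309, 609.87086143600197
k1, k2 = -0.13776637324332211, 0.060708678816690682
p1, p2 = 0.00046084902994353663, -0.0022276594504792796
skew = 1.1544266947186441

def project_point(P_cam):
    """Project a 3D point (already in the camera frame) to pixel coordinates."""
    X, Y, Z = P_cam
    x, y = X / Z, Y / Z                    # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2    # radial distortion (k3..k6 are 0)
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = fx * x_d + skew * y_d + cx         # skew couples y into u
    v = fy * y_d + cy
    return u, v

# Example: a point 10 m in front of the camera, slightly right and up
print(project_point(np.array([1.0, 0.5, 10.0])))
```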
Projection Error
"3_intrinsic_projErr" : 0.29286362741085292,
Average reprojection error (in pixels) during calibration.
Calibration is generally considered good if this error is under 1 pixel, which this is (≈ 0.29 px).
Extrinsic Parameters (LiDAR-to-Camera Transformation)
Extrinsics describe the rigid transformation from the LiDAR frame to the camera frame. These parameters are essential for mapping 3D points onto the 2D image plane accurately.
"4_extrinsic" :
{
"tx" : 0.0016453978293064025,
"ty" : -0.14533283086354848,
"tz" : -0.11673753620945168,
"w" : -0.0038467641521730436,
"x" : -0.0060783657014777809,
"y" : 0.70956867933272738,
"z" : -0.70459956371400467
}
The pose is stored as a quaternion (w, x, y, z) plus a translation (tx, ty, tz) in meters:
tx, ty, tz: translation vector from the LiDAR to the camera coordinate system.
w, x, y, z: quaternion components representing the rotation from LiDAR to camera.
Together, these tell us how to rotate and translate LiDAR points into the camera coordinate frame.
NOTE:
To apply the transformation and convert a LiDAR 3D point to the camera frame:
Camera_Point = Rotation(Quaternion) * LiDAR_Point + Translation
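Below is a small sketch of this step using SciPy (one library choice among many). One pitfall to watch: the file stores the quaternion as (w, x, y, z), while SciPy's Rotation.from_quat expects (x, y, z, w):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Extrinsics copied from the OT128 file: quaternion (w, x, y, z), translation in meters
q_wxyz = [-0.0038467641521730436, -0.0060783657014777809,
          0.70956867933272738, -0.70459956371400467]
t = np.array([0.0016453978293064025, -0.14533283086354848, -0.11673753620945168])

# Reorder to SciPy's (x, y, z, w) convention before building the rotation
rot = R.from_quat([q_wxyz[1], q_wxyz[2], q_wxyz[3], q_wxyz[0]])

lidar_point = np.array([5.0, 0.0, 1.0])    # illustrative point in the LiDAR frame (m)
camera_point = rot.apply(lidar_point) + t  # Camera_Point = R * LiDAR_Point + T
print(camera_point)
```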
02_camera (Unused / Placeholder)
This camera appears to be inactive and is not utilized in the current setup.
"02_camera" :
{
"0_name" : "02_camera",
"1_model" : "Pinhole",
"2_extrinsicName" : "Camera",
"3_intrinsic" :
{
"cx" : 0,
"cy" : 0,
"fx" : 0,
"fy" : 0,
"k1" : 0,
"k2" : 0,
"k3" : 0,
"k4" : 0,
"k5" : 0,
"k6" : 0,
"mel" : 0,
"p1" : 0,
"p2" : 0,
"skew" : 0
},
"3_intrinsic_projErr" : 0,
"4_extrinsic" :
{
"tx" : 0,
"ty" : 0,
"tz" : 0,
"w" : 1,
"x" : 0,
"y" : 0,
"z" : 0
}
}
Camera 02 is a non-functional placeholder with all intrinsic and extrinsic values set to zero or identity, indicating it’s not used in the current setup but reserved for possible dual-camera configurations or future expansion.
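If you want to read such a file programmatically, a sketch follows. It assumes the top-level JSON keys mirror the camera names shown in the excerpts (e.g. "01_camera", "02_camera") alongside the metadata fields; these key names come from the excerpts above, not from official Hesai documentation:

```python
import json

def load_calibrated_cameras(path):
    """Return only the camera entries that carry a real calibration."""
    with open(path) as f:
        calib = json.load(f)

    cameras = {}
    for key, entry in calib.items():
        # Skip scalar metadata fields such as "00_date" and "00_time_offset"
        if not isinstance(entry, dict) or "3_intrinsic" not in entry:
            continue
        intr = entry["3_intrinsic"]
        # Placeholder cameras have all-zero intrinsics (fx == fy == 0)
        if intr["fx"] == 0 and intr["fy"] == 0:
            continue
        cameras[key] = entry
    return cameras

cams = load_calibrated_cameras("calib_250507_171326_ot.json")
print(list(cams))  # expected: ['01_camera']
```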
Now, let’s analyze the QT128 calibration file calib_250508_102344_qt.json, which aligns the Hesai QT128 LiDAR with camera 01 using intrinsic and extrinsic calibration parameters.
Metadata
"00_date" : "25/05/08-01:23:44",
"00_time_offset" : 0,
Calibration done on May 8, 2025, at 01:23:44.
No time offset (0), suggesting timestamps of sensors are already aligned or not modified.
This section describes the camera used for fusion with the QT128 LiDAR.
"0_name" : "01_camera",
"1_model" : "Pinhole",
"2_extrinsicName" : "Camera",
Name: "01_camera"
Model: Pinhole (standard perspective projection model)
Extrinsic name: This tells the system the transformation is from LiDAR to "Camera" frame.
These values are identical to the OT128 calibration, meaning the same camera was used for both QT128 and OT128 calibration.
"3_intrinsic" :
{
"cx" : 953.03153823077309,
"cy" : 609.87086143600197,
"fx" : 1061.5564571092602,
"fy" : 1048.898962753487,
"k1" : -0.13776637324332211,
"k2" : 0.060708678816690682,
"k3" : 0,
"k4" : 0,
"k5" : 0,
"k6" : 0,
"mel" : 0,
"p1" : 0.00046084902994353663,
"p2" : -0.0022276594504792796,
"skew" : 1.1544266947186441
},
NOTE:
These can be used to undistort images and project 3D points into 2D image space using libraries such as OpenCV.
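For instance, here is a sketch of undistorting pixel coordinates with cv2.undistortPoints. OpenCV expects the distortion coefficients in the order (k1, k2, p1, p2, k3), and its standard model has no skew term; the skew here is small relative to fx, so this sketch ignores it:

```python
import cv2
import numpy as np

# Shared camera intrinsics (identical in the OT128 and QT128 files)
K = np.array([[1061.5564571092602, 0.0, 953.03153823077309],
              [0.0, 1048.898962753487, 609.87086143600197],
              [0.0, 0.0, 1.0]])
# OpenCV distortion order: (k1, k2, p1, p2, k3)
dist = np.array([-0.13776637324332211, 0.060708678816690682,
                 0.00046084902994353663, -0.0022276594504792796, 0.0])

# Undistort two sample pixels; P=K maps the result back to pixel units
pts = np.array([[[100.0, 200.0]], [[953.0, 609.9]]])  # shape (N, 1, 2)
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
print(undistorted.reshape(-1, 2))
```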
Reprojection Error
"3_intrinsic_projErr" : 0.29286362741085292,
A reprojection error of ~0.29 pixels, which indicates excellent calibration.
Extrinsic Parameters (LiDAR → Camera Transformation)
The extrinsic transformation is quite different:
"4_extrinsic" :
{
"tx" : -0.33221422046885812,
"ty" : 1.141891082775605,
"tz" : 2.4862191208213864,
"w" : -0.10364341234167554,
"x" : -0.16747438216138846,
"y" : -0.49372183775315082,
"z" : 0.84702368403928718
}
This transformation maps LiDAR 3D points into the camera coordinate system.
Stored as a quaternion (w, x, y, z) for rotation and translation (tx, ty, tz) in meters.
Compared to the OT128 calibration, these extrinsics are significantly different:
The QT128 is mounted much higher (ty ≈ +1.14 m) and farther from the camera (tz ≈ +2.49 m), with a slight lateral offset (tx ≈ -0.33 m), compared to the OT128.
This likely reflects a different mounting position of the QT128 sensor on the vehicle/platform.
NOTE:
To convert a LiDAR point to the camera frame:
Camera_Point = Quaternion_Rotation * LiDAR_Point + [tx, ty, tz]
02_camera (Unused / Placeholder)
"02_camera" :
{
"0_name" : "02_camera",
"1_model" : "Pinhole",
"2_extrinsicName" : "Camera",
"3_intrinsic" :
{
"cx" : 0,
"cy" : 0,
"fx" : 0,
"fy" : 0,
"k1" : 0,
"k2" : 0,
"k3" : 0,
"k4" : 0,
"k5" : 0,
"k6" : 0,
"mel" : 0,
"p1" : 0,
"p2" : 0,
"skew" : 0
},
"3_intrinsic_projErr" : 0,
"4_extrinsic" :
{
"tx" : 0,
"ty" : 0,
"tz" : 0,
"w" : 1,
"x" : 0,
"y" : 0,
"z" : 0
}
}
This second camera entry is a placeholder, like in the OT128 file.
It is not active or calibrated, and values are zero or identity.
Comparing OT128 and QT128 Calibration Files
| Component | Hesai OT128 | Hesai QT128 |
| --- | --- | --- |
| Use Case | Compact, close-proximity setup | Long-range, high-mount setup |
| Camera Intrinsics | Same for both: pinhole model with lens distortion | Same as OT128 (same camera used in both setups) |
| Extrinsic Translation (tx, ty, tz) | Small offsets near origin (≈ 0.0016, -0.1453, -0.1167) | Larger offsets (≈ -0.3322, 1.1419, 2.4862) |
| Offset (tx) | Almost centered with the camera | Slight left offset (≈ -0.33 m) |
| Elevation (ty) | Slightly lower than the camera (ty ≈ -0.14 m) | Elevated LiDAR (ty ≈ +1.14 m) |
| Mounting Distance (tz) | Very close to the camera (tz ≈ -0.11 m) | Much farther from the camera (tz ≈ +2.48 m) |
| Extrinsic Rotation (quaternion w, x, y, z) | (-0.0038, -0.0061, 0.7096, -0.7046) | (-0.1036, -0.1675, -0.4937, 0.8470) |
| Resulting FOV | Forward-facing | Elevated, downward-facing |
| Projection Error | 0.29 pixels (good accuracy) | 0.29 pixels (good accuracy) |
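To make the rotation row of this table easier to read, you can convert both quaternions to Euler angles. A short SciPy sketch follows (using the "xyz" convention as an assumption; other conventions yield different numbers):

```python
from scipy.spatial.transform import Rotation as R

# Quaternions from the two files, reordered to SciPy's (x, y, z, w)
q_ot = [-0.0060783657014777809, 0.70956867933272738,
        -0.70459956371400467, -0.0038467641521730436]
q_qt = [-0.16747438216138846, -0.49372183775315082,
        0.84702368403928718, -0.10364341234167554]

for name, q in [("OT128", q_ot), ("QT128", q_qt)]:
    roll, pitch, yaw = R.from_quat(q).as_euler("xyz", degrees=True)
    print(f"{name}: roll={roll:.1f} deg, pitch={pitch:.1f} deg, yaw={yaw:.1f} deg")
```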
Understanding the calibration files is crucial before any sensor fusion task. Knowing the intrinsic camera parameters helps with 3D-to-2D projection accuracy, while extrinsic parameters define the spatial relationship between sensors.
The Hesai OT128 and QT128 LiDARs use the same camera with identical intrinsics, but differ significantly in extrinsic calibration, reflecting their physical mounting setups:
OT128 has a more aligned setup: a compact, close-proximity configuration with the LiDAR mounted very close to and slightly below the camera, almost centered, resulting in a forward-facing field of view (FOV).
QT128 has a more rotated setup: a high-mounted, long-range configuration with the LiDAR placed farther from the camera and slightly offset to the left, leading to an elevated, downward-facing FOV.
Despite these positional differences, both setups achieve excellent projection accuracy of around 0.29 pixels, ensuring reliable sensor fusion.
Next Steps:
Use these calibration files to implement point cloud projection onto camera images (see the sketch after this list).
Explore LiDAR-to-LiDAR calibration if you want to align data between the OT128 and QT128 sensors themselves.
Experiment with sensor fusion pipelines combining these calibrated sensors for object detection, mapping, or autonomous navigation.
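As a starting point for the first item, here is an end-to-end sketch that loads a calibration file, builds OpenCV-style intrinsics and extrinsics, and projects LiDAR-frame points into the image. The JSON key names follow the excerpts in this tutorial, the skew term is again ignored, and points behind the camera are filtered out before projection:

```python
import json

import cv2
import numpy as np
from scipy.spatial.transform import Rotation as R

def project_lidar_to_image(calib_path, lidar_points, camera="01_camera"):
    """Project Nx3 LiDAR-frame points (meters) to pixel coordinates."""
    with open(calib_path) as f:
        cam = json.load(f)[camera]

    intr, extr = cam["3_intrinsic"], cam["4_extrinsic"]
    K = np.array([[intr["fx"], 0.0, intr["cx"]],
                  [0.0, intr["fy"], intr["cy"]],
                  [0.0, 0.0, 1.0]])
    dist = np.array([intr["k1"], intr["k2"], intr["p1"], intr["p2"], intr["k3"]])

    rot = R.from_quat([extr["x"], extr["y"], extr["z"], extr["w"]])
    t = np.array([extr["tx"], extr["ty"], extr["tz"]])

    # Keep only points that land in front of the camera (Z > 0)
    cam_pts = rot.apply(lidar_points) + t
    in_front = cam_pts[:, 2] > 0.1

    pixels, _ = cv2.projectPoints(lidar_points[in_front],
                                  rot.as_rotvec(), t, K, dist)
    return pixels.reshape(-1, 2)

# Illustrative random point cloud; replace with real LiDAR data
pts = np.random.uniform(-10.0, 10.0, (1000, 3))
print(project_lidar_to_image("calib_250508_102344_qt.json", pts)[:5])
```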
Happy Coding!
GitHub Link to download the complete code:
You can download the complete example code and calibration file parser on GitHub.
Feel free to reach out with questions or feedback!
Yongin-si, South Korea
sumairamanzoorpk@gmail.com