In robotics and autonomous vehicles, LiDAR sensors play a crucial role in environment perception. Often, more than one LiDAR sensor is mounted on a platform to increase coverage, expand the field of view, and improve accuracy through sensor fusion. However, to effectively utilize data from multiple LiDAR units, precise calibration between their coordinate frames is essential.
This tutorial provides a step-by-step guide for performing LiDAR-to-LiDAR extrinsic calibration between two distinct Hesai LiDAR sensors, the QT128 and OT128 models, which differ in their physical characteristics and mounting configurations. We will use their respective JSON calibration files (already discussed here), which contain each LiDAR's pose relative to a shared camera frame, and compute the rigid transformation that aligns the QT128 LiDAR's coordinate frame with that of the OT128.
You will learn to:
✔ Extract extrinsic parameters from calibration files.
✔ Build transformation matrices using quaternions and translations.
✔ Compute relative transforms between two LiDAR coordinate systems.
✔ Save the resulting transformation in JSON format.
3: By the end of this tutorial, you'll be able to:
✔ Compute a transformation matrix between two LiDARs.
✔ Decompose a 4x4 matrix into rotation (quaternion) and translation.
✔ Generate a JSON file storing the transformation in a ROS/robotics-friendly format.
4: What is LiDAR-to-LiDAR Calibration?
It is the process of determining the spatial relationship (rotation and translation) between two LiDAR sensors mounted on the same platform. Since each LiDAR is typically calibrated relative to a camera (a common reference frame), we can derive their relative pose using matrix transformations, bringing both sensors into a common coordinate system and enabling multi-sensor fusion.
Mathematical Foundation
In our case, each LiDAR sensor's calibration file contains its extrinsic pose (position and orientation) with respect to a reference camera. By reading both files and computing the transformation from QT128 to camera and OT128 to camera, we can derive the QT128-to-OT128 transformation using matrix algebra.
Given:
T_QT128_to_Cam = QT128’s pose relative to the camera
T_OT128_to_Cam = OT128’s pose relative to the camera
The QT128 → OT128 transformation is computed as:
T_QT128_to_OT128 = inv(T_OT128_to_Cam) × T_QT128_to_Cam
This gives us the transformation from QT128's frame to OT128's frame.
Where:
T_X_to_Cam is the 4×4 transformation matrix from LiDAR to camera
inv() denotes matrix inversion
× is matrix multiplication
All transformations are 4×4 homogeneous matrices
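To make the algebra concrete, here is a minimal NumPy sketch of the chaining, using made-up placeholder poses rather than real calibration values:

import numpy as np

# Placeholder 4x4 homogeneous transforms (illustrative values only);
# in this tutorial they are built from the calibration JSON files.
T_qt128_to_cam = np.eye(4)
T_qt128_to_cam[:3, 3] = [0.1, -0.2, 0.3]  # hypothetical QT128 position

T_ot128_to_cam = np.eye(4)
T_ot128_to_cam[:3, 3] = [0.5, 0.0, 0.3]   # hypothetical OT128 position

# Chain: QT128 -> camera -> OT128
T_qt128_to_ot128 = np.linalg.inv(T_ot128_to_cam) @ T_qt128_to_cam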
5: Prerequisites
To follow along, you will need:
Two JSON calibration files (QT128 and OT128)
Basic understanding of:
3D coordinate systems
Quaternion rotations
Homogeneous transformations
Before we start, make sure you have the following installed:
Python 3
SciPy (pip install scipy)
NumPy (pip install numpy)
6.0: Steps to Follow
Let's break the process down step by step.
6.1: Load the Calibration Files
In this step, we will load the LiDAR-to-camera extrinsic calibration data from two JSON files: one for the QT128 and one for the OT128. These files contain translation vectors and quaternion rotations that describe how each LiDAR is positioned relative to a common camera frame. This data is essential for computing the transformation between the two LiDARs.
6.1.1: Code Implementation
import json
import numpy as np
from datetime import datetime
from scipy.spatial.transform import Rotation

def load_calibration(filepath):
    """Load calibration JSON and extract the primary camera extrinsics"""
    with open(filepath, 'r') as f:
        data = json.load(f)
    return data['01_camera']['4_extrinsic']
6.1.2: Breaking It Down: Line-by-Line Explanation
The load_calibration function takes the path of a calibration JSON file as input. It opens the file in read mode and reads its contents using the json.load method, which converts the JSON content into a Python dictionary. From this dictionary, it extracts the value corresponding to the key '01_camera' → '4_extrinsic', which contains the translation and rotation data of the LiDAR with respect to the camera. This step ensures that we have access to the necessary extrinsic parameters of each LiDAR for further processing.
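For reference, the relevant portion of each calibration file looks roughly like the snippet below; the numeric values are placeholders, and your files may contain additional keys alongside '01_camera':

{
    "01_camera": {
        "4_extrinsic": {
            "tx": 0.123, "ty": -0.456, "tz": 0.789,
            "w": 0.998, "x": 0.012, "y": -0.034, "z": 0.056
        }
    }
}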
6.2: Build the Transformation Matrices
Once we have the extrinsic parameters, we need to convert them into 4×4 homogeneous transformation matrices. These matrices encode both rotation and translation and are commonly used in robotics to perform coordinate transformations between different reference frames.
6.2.1: Code Implementation
def create_transform_matrix(extrinsic):
    """Create 4x4 homogeneous transformation matrix from extrinsic parameters"""
    # Note: scipy uses xyzw quaternion format
    quat = [extrinsic['x'], extrinsic['y'], extrinsic['z'], extrinsic['w']]
    trans = [extrinsic['tx'], extrinsic['ty'], extrinsic['tz']]
    rot = Rotation.from_quat(quat).as_matrix()
    T = np.eye(4)
    T[:3, :3] = rot
    T[:3, 3] = trans
    return T
6.2.2: Breaking It Down: Line-by-Line Explanation
In this function, the quaternion values representing rotation (x, y, z, w) are first extracted from the extrinsic dictionary into a list. Similarly, the translation vector (tx, ty, tz) is extracted. The SciPy Rotation.from_quat() method is then used to convert the quaternion into a 3×3 rotation matrix. A 4×4 identity matrix is initialized, and the rotation matrix is placed in its upper-left corner, while the translation vector is assigned to the last column of the matrix. The result is a homogeneous transformation matrix that can be used for chaining or inverting spatial relationships between frames.
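As a quick sanity check, you can call create_transform_matrix with a hypothetical extrinsic dictionary (identity rotation, small translation) and inspect the result:

sample_ext = {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0,
              'tx': 1.0, 'ty': 2.0, 'tz': 0.5}
T = create_transform_matrix(sample_ext)
print(T)
# With an identity quaternion the rotation block is the 3x3 identity,
# and the last column carries the translation [1.0, 2.0, 0.5, 1.0].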
6.3: Compute the QT128 → OT128 Transform
Now that both LiDAR-to-camera transforms are available as matrices, we can compute the transformation from QT128 to OT128. Since both are referenced to the same camera frame, we can derive their relative transformation by chaining their individual transformations with respect to the camera. Specifically, we invert the OT128-to-camera matrix to get camera-to-OT128, then multiply it by the QT128-to-camera matrix to get QT128-to-OT128.
6.3.1: Code Implementation
def compute_lidar_to_lidar_transform(qt128_file, ot128_file):
    """Compute QT128 → OT128 transform from two calibration files"""
    # Load extrinsics
    qt128_ext = load_calibration(qt128_file)
    ot128_ext = load_calibration(ot128_file)

    # Create transformation matrices
    T_qt128_cam = create_transform_matrix(qt128_ext)
    T_ot128_cam = create_transform_matrix(ot128_ext)

    # Compute QT128 → OT128
    T_cam_ot128 = np.linalg.inv(T_ot128_cam)
    T_qt128_ot128 = T_cam_ot128 @ T_qt128_cam
    return T_qt128_ot128
6.3.2: Breaking It Down: Line-by-Line Explanation
This function first loads the calibration data for both QT128 and OT128 using the previously defined load_calibration function. Then, it converts the extrinsic parameters into transformation matrices for both LiDARs. The transformation from the camera to OT128 is obtained by inverting the OT128-to-camera matrix. This inverted matrix is then multiplied with the QT128-to-camera matrix to compute the final transformation from QT128 to OT128. The result is a single 4×4 matrix that represents the pose of QT128 in the coordinate system of OT128.
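To see the result in action, you can transform a single QT128 point into the OT128 frame by appending a homogeneous 1. A brief sketch, with hypothetical file paths and a made-up point:

T_qt128_ot128 = compute_lidar_to_lidar_transform("qt128_calib.json", "ot128_calib.json")  # hypothetical paths
p_qt128 = np.array([1.0, 2.0, 3.0, 1.0])  # point in the QT128 frame (homogeneous)
p_ot128 = T_qt128_ot128 @ p_qt128
print(p_ot128[:3])  # the same physical point, expressed in the OT128 frame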
6.4: Decompose the Transformation Matrix
To make the transformation result easier to store or transmit, we'll extract the rotation (as a quaternion) and the translation vector from the final transformation matrix computed in the previous step. This format is commonly used in robotics and computer vision applications.
6.4.1: Code Implementation
def decompose_transform(T):
    """Decompose 4x4 matrix into quaternion and translation"""
    rot = Rotation.from_matrix(T[:3, :3])
    quat = rot.as_quat()  # returns [x, y, z, w]
    trans = T[:3, 3]
    return quat, trans
6.4.2: Breaking It Down: Line-by-Line Explanation
The decompose_transform function takes a 4×4 transformation matrix as input. It extracts the upper-left 3×3 submatrix, which represents the rotation component. Using SciPy’s Rotation.from_matrix, it converts this into a quaternion. Then, it extracts the last column of the matrix, which represents the translation vector. The function returns both the quaternion and the translation, allowing us to represent the transformation compactly.
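A useful consistency check is to rebuild the matrix from the decomposed parts and confirm it matches the original; a minimal sketch, where T is the transform computed in the previous step:

quat, trans = decompose_transform(T)
T_rebuilt = np.eye(4)
T_rebuilt[:3, :3] = Rotation.from_quat(quat).as_matrix()
T_rebuilt[:3, 3] = trans
assert np.allclose(T_rebuilt, T)  # the round trip should be lossless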
6.5: Save the Transform to JSON
After computing and decomposing the final transform, we'll save it into a JSON file in a format similar to the original calibration files. This allows us to reuse the transform in other modules or systems that expect extrinsic calibration data.
6.5.1: Code Implementation
def save_transform_to_json(transform_matrix, output_path):
    """Save the transformation in JSON format similar to input calibration files"""
    quat, trans = decompose_transform(transform_matrix)
    transform_data = {
        "00_date": datetime.now().strftime("%y/%m/%d-%H:%M:%S"),
        "00_time_offset": 0,
        "01_lidar_transform": {
            "0_name": "QT128_to_OT128",
            "1_model": "RigidTransform",
            "2_extrinsicName": "LiDAR",
            "3_description": "Transformation from QT128 to OT128 coordinate systems",
            "4_extrinsic": {
                "tx": float(trans[0]),
                "ty": float(trans[1]),
                "tz": float(trans[2]),
                "w": float(quat[3]),  # scipy orders quaternions as [x, y, z, w]
                "x": float(quat[0]),
                "y": float(quat[1]),
                "z": float(quat[2])
            },
            "4_extrinsic_projErr": 0.0  # can be computed later if needed
        }
    }
    with open(output_path, 'w') as f:
        json.dump(transform_data, f, indent=4)
6.5.2: Breaking It Down: Line-by-Line Explanation
The function starts by decomposing the 4×4 transformation matrix into a quaternion and translation vector using the decompose_transform function. It then creates a dictionary structured similarly to the original calibration files, including metadata like the current timestamp and a description. The quaternion and translation values are stored under the "4_extrinsic" key. Finally, the dictionary is written to a JSON file at the specified path using json.dump. This file can now be used as a calibrated transform input for future data fusion tasks.
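To confirm the file was written correctly, you can read it back and rebuild the matrix with the helpers defined earlier; a brief sketch assuming the output file name used in this tutorial:

with open("calib_qt128_to_ot128.json", 'r') as f:
    saved = json.load(f)
T_loaded = create_transform_matrix(saved['01_lidar_transform']['4_extrinsic'])
print(T_loaded)  # should match the computed matrix up to floating-point precision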
6.6: Put It All Together
To wrap everything up, we create a main function that ties all the previous steps together. This allows us to rerun the calibration pipeline easily by simply specifying the input files and output destination.
6.6.1: Code Implementation
def main():
    # Input file paths
    qt128_calib_file = "/home/sum/Downloads/lane/code/QTtoOT/calib_250508_102344_qt.json"
    ot128_calib_file = "/home/sum/Downloads/lane/code/QTtoOT/calib_250507_171326_ot.json"
    output_json_path = "/home/sum/Downloads/lane/code/QTtoOT/calib_qt128_to_ot128.json"

    # Compute the transform
    T_qt128_ot128 = compute_lidar_to_lidar_transform(qt128_calib_file, ot128_calib_file)

    # Print results
    print("QT128 → OT128 Transformation Matrix:")
    print(T_qt128_ot128)
    quat, trans = decompose_transform(T_qt128_ot128)
    print("\nDecomposed Representation:")
    print(f"Translation (meters): [{trans[0]:.6f}, {trans[1]:.6f}, {trans[2]:.6f}]")
    print(f"Quaternion (xyzw): [{quat[0]:.6f}, {quat[1]:.6f}, {quat[2]:.6f}, {quat[3]:.6f}]")

    # Save to JSON
    save_transform_to_json(T_qt128_ot128, output_json_path)
    print(f"\nTransformation saved to: {output_json_path}")

if __name__ == "__main__":
    main()
6.6.2: Breaking It Down: Line-by-Line Explanation
This function defines file paths for both LiDAR calibration JSON files and the output file. It computes the transformation matrix between QT128 and OT128, prints the resulting matrix for inspection, and then prints the decomposed translation and quaternion values. Finally, it saves the transformation to a JSON file 'calib_qt128_to_ot128.json' using the previously defined save_transform_to_json function. This provides a convenient and repeatable workflow for calibrating and storing LiDAR-to-LiDAR transformations.
7: Result:
QT128 → OT128 Transformation Matrix:
[[ 0.91694374 -0.3505826 0.19054141 0.36620501]
[ 0.39888966 0.7933455 -0.45988037 -2.58892926]
[ 0.01006089 0.49768943 0.86729696 -1.30657193]
[ 0. 0. 0. 1. ]]
Decomposed Representation:
Translation (meters): [0.366205, -2.588929, -1.306572]
Quaternion (xyzw): [0.253131, 0.047710, 0.198121, 0.945725]
Transformation saved to: calib_qt128_to_ot128.json
In this tutorial, we successfully demonstrated how to perform LiDAR-to-LiDAR extrinsic calibration between two different Hesai LiDAR sensors—QT128 and OT128—using their camera-relative extrinsic JSON files. By extracting rotation and translation parameters from the calibration data, we built 4×4 homogeneous transformation matrices and computed the relative transform between the two LiDAR coordinate frames. We further decomposed this transformation into a quaternion and translation vector for compact representation and saved it in a reusable JSON format suitable for integration in downstream robotics or sensor fusion systems.
This approach is particularly powerful when individual LiDARs are pre-calibrated with respect to a shared reference (such as a camera), eliminating the need for direct LiDAR-to-LiDAR manual calibration in many practical scenarios. Understanding and applying these matrix operations equips you with essential tools for multi-sensor fusion and robotics system integration.
Now that you’ve completed LiDAR-to-LiDAR calibration from static extrinsic files, here are a few practical extensions you can explore to strengthen your understanding and expand functionality:
Visual Verification
Use a 3D visualization tool like Open3D or RViz (ROS 2) to verify that point clouds from the QT128 and OT128 align correctly when transformed into the same frame, as shown in the sketch below.
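As a starting point, here is a minimal Open3D sketch; the point cloud file names are placeholders, and it assumes both scans were captured at roughly the same moment:

import open3d as o3d

qt_cloud = o3d.io.read_point_cloud("qt128_scan.pcd")  # hypothetical file
ot_cloud = o3d.io.read_point_cloud("ot128_scan.pcd")  # hypothetical file
qt_cloud.transform(T_qt128_ot128)  # express QT128 points in the OT128 frame
qt_cloud.paint_uniform_color([1, 0, 0])  # red = QT128
ot_cloud.paint_uniform_color([0, 0, 1])  # blue = OT128
o3d.visualization.draw_geometries([qt_cloud, ot_cloud])

If the calibration is correct, overlapping structures (walls, ground, vehicles) should line up between the red and blue clouds.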