MCAP to point cloud, in your browser
MCAP is becoming the default recording format in modern robotics workflows because it is portable, indexed, and easier to work with than the old pile of bag formats. But the tooling gap is still real: inspect the recording, yes; turn it into a mapped point cloud, not always.
Better recording format, same mapping problem
Recording in MCAP solves capture and storage. Once you have the file, though, you still need to answer the harder question: do you need raw frames from one topic, or a mapped output built from the full run?
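The difference is concrete in code. A raw frame is a point cloud in the sensor frame at one timestamp; a mapped output transforms every frame by its estimated pose and accumulates the result. Here is a minimal sketch of that accumulation step using NumPy, with synthetic scans and poses (the data and function name are illustrative, not from any real recording):

```python
import numpy as np

def accumulate_map(scans, poses):
    """Transform each scan (N x 3, sensor frame) by its 4x4 pose
    and concatenate everything into one world-frame cloud."""
    mapped = []
    for scan, pose in zip(scans, poses):
        # Homogeneous coordinates: append a column of ones.
        homo = np.hstack([scan, np.ones((scan.shape[0], 1))])
        mapped.append((homo @ pose.T)[:, :3])
    return np.vstack(mapped)

# Two synthetic "frames": the same 3-point scan seen from two poses.
scan = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
pose_a = np.eye(4)                       # identity: sensor at origin
pose_b = np.eye(4); pose_b[0, 3] = 2.0   # sensor moved 2 m along x

cloud = accumulate_map([scan, scan], [pose_a, pose_b])
print(cloud.shape)  # (6, 3): two 3-point frames merged into one cloud
```

The hard part in practice is that the poses are not given: estimating them from the LiDAR data itself is exactly the SLAM step the inspection tools below do not perform.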
Good tools for inspection, not a complete map pipeline
MCAP CLI
Start by summarizing the file and confirming the topics you care about.
mcap info demo.mcap
mcap cat demo.mcap --topics /velodyne_points --json | head -n 10
Foxglove
Foxglove is excellent for opening the file and visually inspecting the data stream.
foxglove-studio /path/to/your/file.mcap
Great for playback and debugging. It is not a SLAM-to-map service.
Python
If you want to inspect or filter ROS1 messages inside MCAP, the upstream Python helpers get you there quickly.
from mcap_ros1.reader import read_ros1_messages
for msg in read_ros1_messages("input.mcap"):
    print(f"{msg.topic}: {msg.ros_msg}")
You can inspect the recording and still not have a map
The CLI, Foxglove, and Python libraries are exactly what you want when the job is "tell me what is inside this file." They are not the full answer when the job is "hand me back one mapped point cloud from the run."
Closing the last mile from recording to mapped artifact
Upload the MCAP, let the service validate the topics, run LiDAR SLAM, georeference the result when GNSS data is present, and download the mapped artifacts from the browser.