Sensor Tuning using AV elevate

The performance of autonomous vehicles relies heavily on the proper configuration and optimisation of sensor systems. To do this effectively has traditionally required a vast amount of real-world data. With the launch of AV elevate™, rFpro has introduced a comprehensive platform that enables developers to tune, train and test automated driving components and full systems in simulation. In the first of a three-part series, we sat down with Matt Daley, Technical Director at rFpro, to understand how AV elevate is transforming the sensor tuning process.

Q: To start with, what exactly do we mean by tuning when it comes to sensor systems?

Matt: There are two main types of tuning we’re looking at here. First, there’s system-level tuning, where you’re considering the types, positions and orientations of sensors that feed your perception system. This is high-level, fundamental work that determines how your sensors will work together as a complete package. Then there’s individual component tuning, where you’re optimising the setup of each sensor to capture the best possible data according to the specific driving conditions.

Q: What types of sensors can be tuned?

Matt: Every major AV perception sensor type can be tuned and developed in AV elevate, from camera and radar to LiDAR and ultrasonic. The synchronous architecture of AV elevate ensures that data from multiple sensors is perfectly aligned in time, enabling sensor fusion testing to ensure that multiple sensors work harmoniously together.
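The value of a synchronous architecture can be illustrated with a minimal sketch. The code below is not AV elevate’s API; it simply shows, under assumed names (`SensorSample`, `is_synchronised`) and an assumed 100 µs tolerance, what it means for one fusion frame’s camera, LiDAR and radar samples to be aligned in time:

```python
from dataclasses import dataclass

# Hypothetical sensor sample carrying a capture timestamp in seconds.
@dataclass
class SensorSample:
    sensor: str
    timestamp: float

def max_skew(samples):
    """Largest timestamp difference within one fusion frame."""
    times = [s.timestamp for s in samples]
    return max(times) - min(times)

def is_synchronised(samples, tolerance_s=1e-4):
    """True if every sample in the frame falls within the tolerance window."""
    return max_skew(samples) <= tolerance_s

# One fusion frame: all three samples land within 30 microseconds.
frame = [
    SensorSample("camera_front", 12.50000),
    SensorSample("lidar_roof", 12.50002),
    SensorSample("radar_front", 12.49999),
]
print(is_synchronised(frame))
```

In a synchronous simulation the check above passes by construction; with real hardware, clock drift and trigger jitter are exactly what a perception pipeline has to be robust against.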

Physically modelled exposures for camera sensors

Q: Specifically, what characteristics of a sensor need to be tuned?

Matt: Across the different sensor types there are hundreds of settings that developers need to optimise. For cameras, this might involve adjusting digital gain and white balance, or controlling exposure timing. For LiDAR and radar, you can tune parameters such as field-of-view, sensitivity and range, defining how far you want the processing chip to look for objects. With a rotating LiDAR, you can also adjust how many rotations per second it performs.

Ultimately, all the various aspects that make up a sensor model can be optimised in AV elevate. While cameras often get the most attention, AV elevate provides comprehensive tuning capabilities across sensor types.
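To make the idea concrete, here is a small sketch of what a tunable parameter set with range validation might look like. The parameter names and limits are illustrative assumptions, not AV elevate’s actual configuration schema:

```python
# Illustrative tunable-parameter ranges; not AV elevate's real schema.
CAMERA_LIMITS = {
    "digital_gain_db": (0.0, 24.0),
    "exposure_time_ms": (0.01, 33.0),
    "white_balance_k": (2500, 10000),
}

LIDAR_LIMITS = {
    "fov_deg": (1.0, 360.0),
    "max_range_m": (1.0, 300.0),
    "rotation_hz": (5, 20),
}

def validate(config, limits):
    """Check each tuned value against its permitted range."""
    errors = []
    for name, value in config.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            errors.append(f"{name}={value} outside [{lo}, {hi}]")
    return errors

camera = {"digital_gain_db": 6.0, "exposure_time_ms": 8.0, "white_balance_k": 5600}
print(validate(camera, CAMERA_LIMITS))  # [] – all values in range
```

Expressing tuning as data rather than code is what makes rapid, automated sweeps over hundreds of parameter combinations practical in simulation.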

Q: What makes tuning in simulation, specifically with AV elevate, more advantageous than traditional methods?

Matt: The speed of iteration is transformative. In simulation, you can make a change and instantly see its closed-loop effect on your test. In the real world, you might have to wait months for the right conditions to occur again, or even longer if you’re waiting for physical prototypes to be built. Plus, there’s the scalability factor – you can test across a vast range of scenarios and conditions that would be impractical or impossible to replicate in real-world testing.

The key advantage of AV elevate is the unprecedented fidelity of the simulation data. We’re the only solution on the market that offers accurate motion blur reproduction and individual line-by-line rolling shutter effects. This means when you’re tuning something like exposure timing or LED flicker mitigation routines, you’re working with truly representative data.
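The rolling-shutter effect mentioned above comes from rows of the image sensor being exposed one after another rather than all at once. A minimal sketch of that row timing, with an assumed line-readout interval, looks like this:

```python
def row_exposure_starts(n_rows, line_readout_us):
    """Start time (in µs) of each image row's exposure under a rolling shutter.

    Rows are read out sequentially, so row i begins i * line_readout_us
    after row 0. This stagger is why straight, fast-moving objects appear
    slanted in the captured frame, and why simulating it line by line
    matters when tuning exposure timing.
    """
    return [i * line_readout_us for i in range(n_rows)]

# A 1080-row sensor with a 10 µs line readout spreads capture over ~10.8 ms.
starts = row_exposure_starts(1080, 10.0)
print(starts[-1])  # 10790.0 µs between the first and last row
```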

Q: Do you need to have your own detailed sensor models to get value from AV elevate?

Matt: In many cases, no. AV elevate is supplied with a library of generic sensor models, and a range of digital twins of commercially available sensors can be added. These sensor models have been correlated with real-world data to ensure that their performance in simulation closely matches the physical hardware. This enables a startup or OEM to begin system-level design work immediately, such as determining optimal sensor positions, quantities and lens coverage, simply using our generic sensor set.

As development progresses, these can be replaced with specific sensor models or tuned to match particular specifications. It gives teams an instant ability to start system design without waiting for specific hardware decisions or availability.

The rolling shutter effect causes straight objects to appear slanted

Q: How does AV elevate help vehicle manufacturers manage sensor supplier changes?

Matt: This is an important consideration for OEMs. When switching suppliers or upgrading to newer sensor setups, AV elevate allows you to optimise your new system while continuing to benchmark its performance in a controlled way with your existing perception pipeline. You can evaluate how different sensors process data and look to maximise the benefit of supplier changes on your overall system performance. It is worth noting that all this development work can be done before a physical prototype even exists – accelerating programmes by nine months or more.

Q: How does tuning fit into the bigger picture of autonomous vehicle development?

Matt: What we’re doing with AV elevate is applying proven engineering principles to the autonomous vehicle challenge. Just as we’ve helped motorsport teams tune their suspension systems before races, we’re now helping AV developers optimise their sensor systems before deployment. The philosophy remains the same: understand your operating conditions, create high-fidelity physical models, and tune your systems to perform at their best when it matters most.

This approach significantly reduces reliance on physical prototypes and real-world data and testing, which has traditionally been one of the biggest barriers to autonomous vehicle development. With AV elevate, we’re making the tuning process more efficient, more thorough, and more cost-effective than ever before.

Couple this with AV elevate’s ability to produce the highest-fidelity synthetic training data and to support full system testing in edge-case scenarios, and you have a single simulation platform that supports our customers across the whole AV development programme – maximising their return on investment in simulation.
