Thermal Camera Front-End (LWIR) for ADAS
I use this page to keep all IC roles around an LWIR thermal camera front end in one place. My focus is the hardware chain from microbolometer FPA and ROIC biasing through references, power rails, readout, and any on-sensor ADC or ISP blocks, until the signal is ready for a bridge into an ADAS SoC or ISP. Vision algorithms, training, and perception stacks are handled in different hubs.
Where I Use LWIR Thermal Cameras in ADAS
I do not drop an LWIR camera into every vehicle program. I reserve it for cases where visible and NIR cameras simply cannot deliver enough contrast or cannot stay reliable across rain, fog, glare, and oncoming headlights. In my planning, the main buckets are night vision, animal and pedestrian detection on dark roads, and low-visibility support where thermal contrast matters more than color.
Night vision is where I use the thermal camera as a driver aid or as an extra input to automated braking. I care less about pretty images and more about stable detection of warm objects at distance. That means my front-end ICs must hold NETD and bias stability even when the vehicle sees cold starts, hot soaks, and cycling humidity.
For animal detection on rural roads, LWIR helps me see warm targets against a cooler background long before headlamps or visible sensors can. Here I treat the thermal camera almost like an early-warning sensor: the front-end readout chain must keep its dynamic range under control so that small animals are not lost in noise or clipping.
In low-visibility conditions such as fog, rain or smoke, thermal contrast can survive where visible contrast disappears. I still keep the system-level fusion and decision logic outside this page. Here I only track the hardware front end: which rails power the microbolometer, which references dominate image stability, and which interfaces carry the data into my ADAS compute domain.
Algorithms, object classification, and fusion logic are not discussed here. This page stops at the thermal camera front-end IC roles and assumes that perception, safety budgeting and logging are handled in their own dedicated hubs.
Microbolometer & ROIC Basics
A microbolometer focal plane array is essentially a grid of temperature-sensitive pixels. When I look at a sensor for an automotive program, I start with practical parameters: pixel pitch to understand spatial resolution, array size to gauge field-of-view and aspect ratio, NETD as a proxy for how much contrast I can really trust, and frame rate to see how fast the scene can change without smearing or stuttering in the ADAS stack.
Pixel pitch and array size together tell me what details I can resolve at a given distance. Smaller pitch and larger arrays sound attractive, but they push demands onto the ROIC bias accuracy, readout noise and power delivery. In a vehicle, I have to balance these choices against cost, thermal design and the fact that my power rails will be stressed by cranking, cold starts and hot-soak restarts.
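To make the pitch-versus-resolution trade concrete, a small sketch of the usual single-pixel IFOV arithmetic. The pitch, focal length, and target numbers below are placeholders I picked for illustration, not values from any specific sensor:

```python
def ifov_rad(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Single-pixel instantaneous field of view, in radians."""
    return (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def pixels_on_target(target_size_m: float, range_m: float,
                     pixel_pitch_um: float, focal_length_mm: float) -> float:
    """How many pixels a target of a given size subtends at a given range."""
    footprint_m = ifov_rad(pixel_pitch_um, focal_length_mm) * range_m
    return target_size_m / footprint_m

# Example: 12 um pitch, 19 mm lens, 0.5 m-wide warm target at 100 m.
px = pixels_on_target(0.5, 100.0, 12.0, 19.0)  # roughly 8 pixels across
```

Whether "roughly 8 pixels" is enough depends on the detection criterion the perception team applies; the point here is that halving the pitch or doubling the lens doubles this number, and both choices push cost, ROIC noise, and power-delivery demands.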
NETD is where the front-end ICs really show up. A good NETD on the datasheet assumes that bias currents, reference voltages and ROIC readout circuitry are behaving as the vendor intended. Once I put the sensor into a noisy, shared automotive power tree, any extra noise, drift or coupling from my own design can quietly erode that headline number.
The ROIC is the local workhorse. It generates and trims the detector bias, controls integration time, performs correlated double sampling, applies analog gain and then sequences the readout through column and row drivers. In practical terms, this is the stage where I decide how sensitive the camera will be to supply noise, how much margin I have on timing, and whether on-sensor ADC or simple analog output makes more sense for my ADAS architecture.
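As a toy illustration of the correlated double sampling step in that chain, the sketch below subtracts a reset-level sample from a signal-level sample so the shared offset cancels. All voltages and noise figures are invented for the example:

```python
import random

def cds_sample(pixel_signal: float, offset: float, noise_sigma: float) -> float:
    """Correlated double sampling: read the reset level, then the signal level,
    and subtract, so any offset common to both reads cancels out."""
    reset_read = offset + random.gauss(0.0, noise_sigma)
    signal_read = offset + pixel_signal + random.gauss(0.0, noise_sigma)
    return signal_read - reset_read

# With zero noise the large common offset cancels exactly:
clean = cds_sample(5.0, 100.0, 0.0)  # -> 5.0
```

The caveat worth remembering: CDS removes the correlated offset and slow drift, but the two reads each carry their own white noise, which adds in quadrature, so uncorrelated noise actually grows by roughly a factor of √2.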
Every time I choose a microbolometer and ROIC combination, I treat bias sources, reference rails and readout amplifiers as part of a single error budget. If I do not control them as a system, the NETD and stability I paid for in the sensor will not show up in real vehicles.
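One way I sketch that single error budget is to express each contributor as an NETD-equivalent and combine the uncorrelated ones root-sum-square. Every number below is an invented placeholder, not a datasheet value:

```python
import math

def rss(*contributions_mk: float) -> float:
    """Root-sum-square of uncorrelated noise contributions,
    each expressed in mK of NETD-equivalent."""
    return math.sqrt(sum(c * c for c in contributions_mk))

# Hypothetical budget: sensor-intrinsic NETD plus NETD-equivalent terms
# I might assign to bias-source noise, reference drift and supply ripple.
total_netd_mk = rss(50.0, 15.0, 10.0, 20.0)  # close to, but above, 50 mK
```

The useful takeaway from this kind of budget is that a 50 mK sensor fed from a noisy shared power tree can quietly become a 57 mK camera, and no single contributor looks guilty on its own.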
Stereo Sync & Depth Hardware for Dual Cameras
I use this page to turn “two cameras” into a depth-capable stereo front end. Simply mounting a left and right camera is not enough: I need hard requirements on trigger, clock, delay matching, timestamping and the FPGA bridge that carries depth-ready data into my ADAS or robotics compute.
The focus here stays on the timing chain only. Algorithms, ISP tuning, sensor fusion and network time sync live on sibling pages. My goal is to end up with a single number for maximum acceptable skew and a clear list of hardware building blocks that can meet it.
What Do I Actually Mean by Stereo Sync & Depth?
When I say “stereo sync and depth” on this page, I mean the timing layer only. My job is to make the left and right cameras behave like two perfectly timed sensors that share one timebase. Algorithms can change, but if the hardware timing is sloppy, every software team downstream will fight physics.
Minimal definition
Stereo depth relies on two levels of synchronization. Frame-level sync keeps the frame index aligned so both cameras see the same high-level moment in time. Sub-frame or line-level timing keeps exposure and readout aligned within that frame so disparity is computed from truly simultaneous samples.
In hardware terms the chain is simple but unforgiving: a clean trigger starts exposure, the sensors perform their readout, and a timestamp unit tags each frame or event against a shared clock. Everything on this page exists to make that trigger → exposure → readout → timestamp chain precise and repeatable across temperature, lifetime and units.
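The trigger → exposure → readout → timestamp chain can be modeled as a sum of per-camera delays, with skew measured at mid-exposure, the instant the depth math implicitly assumes is shared. All latencies below are made-up placeholders:

```python
from dataclasses import dataclass

@dataclass
class CameraTiming:
    """Delays from the trigger edge to mid-exposure, in microseconds.
    Numbers are illustrative placeholders, not datasheet values."""
    trigger_prop_us: float    # cable and level-shifter propagation
    exposure_start_us: float  # sensor-internal trigger-to-exposure latency
    exposure_us: float        # exposure time

    def mid_exposure_us(self) -> float:
        return self.trigger_prop_us + self.exposure_start_us + self.exposure_us / 2.0

def stereo_skew_us(left: "CameraTiming", right: "CameraTiming") -> float:
    """Left-versus-right skew at mid-exposure."""
    return abs(left.mid_exposure_us() - right.mid_exposure_us())

left = CameraTiming(0.02, 3.0, 5000.0)
right = CameraTiming(0.15, 3.4, 5000.0)  # longer cable, slightly slower sensor
skew = stereo_skew_us(left, right)       # just over half a microsecond
```

The model also makes one failure mode obvious: if the two cameras auto-expose independently, the exposure terms differ and mid-exposure shifts by half that difference, even with a perfect shared trigger.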
What is “good enough” sync for depth?
“Good enough” is not an abstract idea; it is a single maximum skew number that falls out of my use cases. I look at vehicle or platform speed, stereo baseline and the range of distances I care about, then translate those into the amount of motion that happens during a timing error.
A slow warehouse AGV with a short baseline can tolerate microsecond-level skew, while a fast passenger car with a longer baseline may need sub-microsecond alignment to keep depth errors within a few centimetres. The end result is a requirement like “end-to-end left-versus-right skew < 1 µs under all operating conditions”, which then drives every clock, trigger, delay and timestamp choice that follows.
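The back-solve can be sketched with a first-order lateral-motion model: a target at depth Z moving sideways at speed v shifts by v·Δt during the skew, which maps to a depth error of roughly Z·v·Δt/B for baseline B. This is only the loosest bound; rolling-shutter row offsets, trigger jitter accumulation, and vibration usually push the shipped requirement well below it. All scenario numbers here are assumptions of mine:

```python
def max_skew_us(depth_err_m: float, depth_m: float,
                lateral_speed_mps: float, baseline_m: float) -> float:
    """Back-solve the skew budget from the first-order model
    depth_error ~ depth * lateral_speed * skew / baseline."""
    return depth_err_m * baseline_m / (depth_m * lateral_speed_mps) * 1e6

# Warehouse AGV: 1.5 m/s crossing target, 0.10 m baseline, 5 m range, 2 cm budget.
agv_skew_us = max_skew_us(0.02, 5.0, 1.5, 0.10)    # hundreds of microseconds
# Passenger car: 20 m/s crossing speed, 0.30 m baseline, 50 m range, 5 cm budget.
car_skew_us = max_skew_us(0.05, 50.0, 20.0, 0.30)  # tens of microseconds
```

Dividing the first-order result by an engineering margin for the effects the model ignores is what typically lands the car case at the sub-microsecond requirement quoted above.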
Why Timing Skew Breaks My Depth Estimation
Depth is geometry on top of time. If the left camera captures an object at one moment and the right camera sees it a few milliseconds later, disparity is no longer a clean function of distance. The math assumes simultaneity; timing skew quietly violates that assumption and the errors show up as warped depth or duplicate edges.
Geometric view – how skew becomes depth error
I imagine a car driving towards a parked vehicle. At 50 km/h the ego car moves almost 14 m every second. A 10 ms skew means the left image may be captured from a position roughly 14 cm closer to (or farther from) the target than the right image, even though the algorithm is trying to treat the pair as one instant. That difference in physical position turns into a depth bias that grows with speed and range.
I do not need a full derivation on a whiteboard to see the risk. A small timing error translates into tens of centimetres of apparent motion at highway speed. For near-range robot navigation the numbers are smaller but still real. The safe way to design is to pick my worst-case scenario and back-solve a skew limit that keeps depth error inside a range I can tolerate.
System-level symptoms
In the lab everything looks fine: the rig passes static calibration on a checkerboard at room temperature. Problems appear only when I combine motion, temperature and exposure changes. Outdoor tests at speed show ghost edges, unstable depth on distant cars and seemingly random failures after power cycles.
- Indoor calibration works, but outdoor highway tests fail intermittently.
- Depth on static targets is stable, yet moving objects stretch or duplicate.
- Changing exposure or HDR modes in runtime suddenly breaks stereo consistency.
- Cold and hot soak tests show different depth behaviour with the same calibration file.
- Rebooting one camera or power rail sometimes fixes the issue temporarily.
When I see this pattern, I treat it as a timing problem until proven otherwise. A clean skew budget and a way to measure real skew on hardware usually explain more issues than another round of algorithm tweaks or re-calibration.
Where timing skew comes from
Stereo skew rarely comes from a single obvious bug. It is usually the sum of several small effects: two oscillators drifting apart, software-generated triggers with jitter, asymmetric cables and level shifters, and sensors that take different amounts of time to switch modes or power domains.
Independent clock sources and PLLs slowly walk away from each other over temperature and lifetime. MCU-driven triggers add microsecond-scale variation from interrupt latency and firmware load. Unequal cable lengths or different transceivers shift one edge by a few nanoseconds or microseconds more. Sensor power-up and HDR mode changes introduce hidden delays that only show up after certain sequences.
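Because the effects stack, I keep them as an explicit budget and compare the worst-case sum against the skew limit. Every number below is a hypothetical placeholder, not a measured value:

```python
# Hypothetical worst-case contributions to left-versus-right skew, in ns.
contributions_ns = {
    "oscillator drift between free-running clocks": 400.0,
    "MCU-driven trigger jitter (interrupt latency)": 2000.0,
    "cable and level-shifter asymmetry": 30.0,
    "sensor mode-change latency mismatch": 500.0,
}

# Deterministic offsets and worst-case jitter add linearly for a
# pessimistic bound; independent random jitter could instead be
# combined root-sum-square for a typical-case estimate.
worst_case_ns = sum(contributions_ns.values())  # 2930 ns

budget_ns = 1000.0  # example requirement: < 1 us end-to-end
meets_budget = worst_case_ns <= budget_ns       # False: budget is blown
```

In this made-up budget the MCU-driven trigger dominates, which is exactly the kind of result that pushes me toward a hardware trigger fan-out and a shared clock source rather than more firmware tuning.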