
Lock-in Amplifier Architecture & Phase-Sensitive Detection


A lock-in amplifier recovers a tiny signal by correlating it with a coherent reference, outputting X/Y (and R/θ) while rejecting broadband noise and out-of-reference interference. Real-world performance depends on keeping the front-end linear, choosing τ/roll-off to match response time, and logging coherence/overload/calibration evidence so artifacts are not mistaken for signal.

H2-1 · What a lock-in actually measures (in one sentence)

A lock-in amplifier reports the amplitude and phase of the input component that is correlated with a known reference (same frequency and defined phase), by performing phase-sensitive detection and then low-pass averaging.

Outputs (what they mean)
  • X (in-phase): the baseband component aligned with the reference phase; strongest when phase is correct.
  • Y (quadrature): the 90°-shifted component; useful to detect phase error or orthogonal physical response.
  • R: magnitude √(X² + Y²) (a robust amplitude readout when phase may drift).
  • θ: phase angle atan2(Y, X) (phase relative to the chosen reference).
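These outputs can be reproduced with a minimal digital lock-in sketch (Python/NumPy; the names and signal levels are illustrative, and a full-record average stands in for the low-pass filter at large τ):

```python
import numpy as np

def lockin_demod(v_in, f_ref, fs, phi=0.0):
    """Multiply the input by sin/cos references, then average to baseband.

    Averaging the whole record plays the role of the LPF with a large tau;
    the factor 2 restores the original amplitude after mixing."""
    t = np.arange(len(v_in)) / fs
    x = 2.0 * np.mean(v_in * np.cos(2 * np.pi * f_ref * t + phi))  # in-phase
    y = 2.0 * np.mean(v_in * np.sin(2 * np.pi * f_ref * t + phi))  # quadrature
    return x, y, np.hypot(x, y), np.arctan2(y, x)                  # X, Y, R, theta

# A 10 uV coherent tone buried in 1 mV RMS of broadband noise:
fs, f_ref, n = 100_000, 1_000.0, 2_000_000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
v = 10e-6 * np.cos(2 * np.pi * f_ref * t) + 1e-3 * rng.standard_normal(n)
x, y, r, theta = lockin_demod(v, f_ref, fs)   # r recovers ~10 uV
```

With 20 s of data the correlation pulls the 10 µV tone out of millivolt-level noise; Y stays near zero because the stimulus is in-phase with the reference.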
Bandwidth is set by averaging, not “magic filtering”
  • Time constant (τ) sets how much the demodulated baseband is averaged: larger τ lowers noise but slows response.
  • Roll-off (filter order) sets how aggressively out-of-band baseband noise is rejected: steeper roll-off reduces noise faster, but can increase settling time/overshoot.
  • Practical takeaway: choose τ and roll-off based on the slowest signal change that must be tracked and the noise floor required.
When a lock-in is the right tool
  • A reference exists (internal, external, or derived from a controlled modulation), so the wanted signal is coherent with it.
  • The wanted signal is narrowband around the reference compared with the broadband noise/interference.
  • The front-end can be kept out of overload; otherwise distortion products can create false baseband outputs.
When results become unreliable
  • Phase is not stable (reference drift or loss of coherence): X/Y rotate, θ becomes noisy, and amplitude can “wander.”
  • Large interferers saturate the input chain: the lock-in can output a clean-looking number that is actually a demodulated artifact.
Fast sanity check (30 seconds)
  1. Set a moderate τ (not too fast) and view X and Y simultaneously.
  2. Adjust phase (or auto-phase) until |Y| is minimized for a purely in-phase response.
  3. Switch to R for amplitude reporting if residual phase drift is expected.
Figure: Lock-in concept — reference-driven I/Q phase-sensitive detection. Signal plus noise enters a PSD driven by sin/cos references; the in-phase (I) and quadrature (Q) baseband paths are low-pass filtered (τ, roll-off) into X, Y, R, and θ. Only the component coherent with the reference survives averaging; τ and roll-off set the effective baseband bandwidth.

H2-2 · Reference & phase synthesis: coherence is everything

Lock-in performance is dominated by coherence: the reference must remain phase-related to the wanted signal. If coherence is lost, the demodulated vector (X/Y) rotates and the reported R and θ become unstable.

Reference sources (choose by controllability)
  • Internal reference (NCO): best when the system can apply controlled modulation to the DUT (cleanest coherence).
  • External reference input: best when the DUT already has a stable drive/clock; lock to it to keep phase alignment.
  • Derived reference (limited): possible only if the signal contains a stable pilot/marker; otherwise phase drift corrupts results.
Phase synthesis blocks (what they buy)
  • PLL lock: removes slow frequency/phase drift to maintain coherence with an external timebase; exposes a measurable “LOCK” state.
  • Digital NCO: generates precise sin/cos with programmable phase φ; enables multi-tone and harmonic references without analog drift knobs.
  • Phase adjust (φ): aligns the detector to put the dominant response in X and minimize Y (critical for stable readings).
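A phase-accumulator NCO can be sketched as follows (Python/NumPy; the 32-bit accumulator width and scaling are illustrative, not a specific device):

```python
import numpy as np

def nco_sincos(f_out, fs, n, phi=0.0, acc_bits=32):
    """Fixed-point phase accumulator driving sin/cos generation.

    The tuning word is the per-sample phase step; frequency resolution is
    fs / 2**acc_bits. The phase adjust phi rotates both outputs together,
    which is how the detector's I/Q axes are aligned."""
    tuning_word = int(round(f_out / fs * 2**acc_bits))
    acc = (np.arange(n, dtype=np.uint64) * tuning_word) % (1 << acc_bits)
    phase = acc / 2**acc_bits * 2 * np.pi + phi
    return np.cos(phase), np.sin(phase)

cos_ref, sin_ref = nco_sincos(1_000.0, 100_000.0, 100_000)
# A 2f reference for harmonic detection is just a doubled tuning word:
cos_2f, sin_2f = nco_sincos(2_000.0, 100_000.0, 100_000)
```

Deriving 2f/3f references by scaling the tuning word keeps them exactly coherent with the fundamental, with no analog phase-trim drift.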
How phase noise and drift degrade the reading
  • Fast phase jitter smears coherent energy across the demodulation axes and raises the baseband noise floor (worse sensitivity).
  • Slow phase drift rotates X/Y over time; amplitude may look steady in R, but θ becomes unreliable and X/Y can appear to “wander.”
  • Loss of lock often looks like sudden jumps in X/Y, increased Y leakage, or an R that changes with τ settings rather than physics.
Multi-tone and harmonic references (2f/3f)
  • Why use harmonics: isolate non-linear responses, shift detection away from strong low-frequency noise, or separate mechanisms by harmonic order.
  • Main risks: harmonic contamination from the drive path, intermodulation when multiple tones are present, and false baseband artifacts if the input chain overloads.
  • Practical guardrails: keep front-end out of saturation, validate with a reference injection check, and confirm that results are stable across τ choices.
Coherence checklist (fast)
  1. Confirm a stable reference path: lock indicator OK (if using PLL) and reference level within range.
  2. Run phase alignment: minimize Y for a known in-phase stimulus, then hold φ fixed.
  3. Change τ by 2–4×: a real coherent signal keeps consistent R; a coherence problem often changes apparent amplitude/phase with τ.
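Step 3 of the checklist can be rehearsed with a one-pole discrete LPF (Python/NumPy sketch; the filter topology and levels are illustrative):

```python
import numpy as np

def demod_with_tau(v_in, f_ref, fs, tau):
    """Demodulate to I/Q, low-pass each with a single-pole filter of time
    constant tau, and return the settled magnitude R."""
    t = np.arange(len(v_in)) / fs
    i = v_in * 2 * np.cos(2 * np.pi * f_ref * t)
    q = v_in * 2 * np.sin(2 * np.pi * f_ref * t)
    a = 1.0 - np.exp(-1.0 / (fs * tau))        # one-pole IIR coefficient
    x = y = 0.0
    for ii, qq in zip(i, q):                   # y[n] += a * (u[n] - y[n-1])
        x += a * (ii - x)
        y += a * (qq - y)
    return float(np.hypot(x, y))

fs, f_ref = 50_000, 1_000.0
t = np.arange(fs) / fs                          # 1 s of data
v = 1e-3 * np.cos(2 * np.pi * f_ref * t)        # coherent 1 mV tone
r_fast = demod_with_tau(v, f_ref, fs, tau=0.01)
r_slow = demod_with_tau(v, f_ref, fs, tau=0.04) # tau changed 4x
# For a truly coherent signal, r_fast and r_slow agree near 1 mV;
# a large difference would point at a coherence (or artifact) problem.
```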
Figure: Reference and phase synthesis chain — Ref In (external drive/sync) and a stable timebase feed a PLL lock stage with a LOCK flag, then a digital NCO (phase accumulator, multi-tone/2f/3f) produces sin/cos, followed by phase adjust φ into the I/Q PSD. Stable LOCK plus consistent R across τ settings indicates true coherence; drift appears as rotating X/Y and rising Y leakage.

H2-3 · Input front-end: noise, impedance, and overload survival

The front-end decides whether the lock-in extracts a real coherent component or a clean-looking artifact. A good design keeps the input path low-noise, linear, and out of overload, while presenting the right impedance to the source so amplitude and phase are not unintentionally distorted before demodulation.

Front-end mode selection (fast rules)
  • Voltage mode (diff amp / in-amp): best for low-to-moderate source impedance and voltage-output sensors; preserves phase when input impedance is high and CMRR is strong.
  • Current mode (TIA): best for current-output sources or very high impedance sources where voltage pickup and bias errors dominate; converts input current directly into voltage with a defined transimpedance gain.
  • If the source impedance is high, prioritize bias current and leakage control first; noise density alone will not predict drift and offset errors.
Key criteria that actually move the measurement
Spec → What it breaks → Quick validation cue
  • Noise density (nV/√Hz or fA/√Hz) → sets the achievable baseband noise floor for a given τ; if too high, X/Y remain noisy even with longer averaging.
  • 1/f corner → dominates low-frequency readings; moving the reference away from very low frequencies can reduce drift-like noise without changing τ.
  • Input impedance → interacts with source impedance and capacitance, creating amplitude loss and phase shift before PSD; look for R or θ steps when switching ranges.
  • CMRR → poor CMRR turns common-mode pickup into a coherent-looking residual at baseband after demodulation; Y often rises when wiring/ground changes.
  • Bias current & leakage → create temperature-dependent offsets for high-Z sources; if X drifts with temperature despite stable reference, bias/leakage is a prime suspect.
  • Protection/limiting network → nonlinearity and recovery can create false coherent components; artifacts often appear after a transient or overload event.
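The high-Z warning above can be made quantitative: total input-referred noise combines amplifier voltage noise, amplifier current noise flowing through the source impedance, and the source's own Johnson noise (Python sketch; the amplifier and source values are illustrative):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def input_noise_density(e_n, i_n, r_source, temp_k=300.0):
    """Total input-referred voltage noise density, V/sqrt(Hz).

    e_n: amplifier voltage noise density, V/sqrt(Hz)
    i_n: amplifier current noise density, A/sqrt(Hz)
    r_source: source resistance, ohms
    Uncorrelated contributions add in power (root-sum-square)."""
    johnson = 4 * K_B * temp_k * r_source          # 4kTR, V^2/Hz
    return math.sqrt(e_n**2 + (i_n * r_source)**2 + johnson)

# Same hypothetical amplifier (5 nV/rtHz, 100 fA/rtHz) with two sources:
lo_z = input_noise_density(5e-9, 100e-15, 50.0)    # e_n dominates
hi_z = input_noise_density(5e-9, 100e-15, 10e6)    # i_n*R and 4kTR dominate
```

At 10 MΩ the current-noise term (100 fA/√Hz × 10 MΩ = 1 µV/√Hz) swamps the 5 nV/√Hz voltage noise, which is why bias current, leakage, and i_n, not e_n alone, decide high-Z performance.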
Overload survival (the #1 artifact generator)

If the input chain saturates, clips, or recovers slowly, it can create distortion products that the PSD translates into a stable baseband value. This is dangerous because the output can look “clean” while being wrong.

  • Saturation/clipping → generates harmonics and intermodulation → PSD demodulates part of that distortion into X/Y.
  • Clamp recovery (ESD diodes / TVS / limiter) → asymmetric settling → appears as an offset or slow drift in baseband.
  • Range switching transients → repeatable steps or ringing → can masquerade as a coherent response at certain τ values.
Design guardrails
  • Ensure a linear operating region for expected interferers; do not rely on the PSD to “clean up” overload.
  • Place a gentle pre-limit and band-limit before any hard clamp to reduce recovery artifacts.
  • Expose an overload flag/counter so field data can separate physics from saturation-induced artifacts.
Front-end checklist (field-friendly)
  1. Verify no overload: input limiter not active and overload indicator stays clear under worst-case interference.
  2. Check range changes: R and θ should not jump in a way that depends on the selected input range.
  3. Short or terminate the input: X and Y should fall close to the expected noise floor for the chosen τ.
  4. Apply a known coherent stimulus: minimize Y by phase alignment and confirm repeatability after large transients.
Figure: Front-end selection tree — low-Z voltage, high-Z voltage, and current sources split into voltage mode (diff amp / in-amp, CMRR) or current mode (TIA), each followed by protection/limiting and anti-alias filtering; a side path shows overload distortion demodulating into false baseband. If results change after transients or overload, suspect limiter recovery or range switching before blaming the PSD.

H2-4 · Phase-sensitive detection (PSD): mixer, chopper, or digital DDC

PSD is the engine that converts a narrowband coherent signal into baseband I/Q components. Implementations differ (analog multiplier, synchronous switching, or digital downconversion), but the principle is the same: multiply the input by sin/cos references, then low-pass to keep only the correlated content.

PSD implementation options (practical trade-offs)
  • Analog multiplier: simple conceptually and continuous-time, but limited by multiplier noise, linearity, and drift.
  • Chopper / synchronous switching: often better low-frequency stability (less 1/f sensitivity), but requires careful band-limiting to control switching artifacts.
  • Digital DDC (ADC + digital sin/cos): flexible for multi-tone and calibration; performance depends on ADC headroom, anti-alias filtering, and reference timing quality.
Why sin/cos (I/Q) is not optional
  • With only one reference phase, amplitude and phase are entangled; a phase shift can look like an amplitude change.
  • I/Q makes the signal vector observable: R reports amplitude, while θ reports phase relative to the reference.
  • Y becomes a diagnostic channel: if a response should be in-phase, a large Y indicates phase misalignment or path mismatch.
Phase error φ causes X→Y leakage

Any phase error (reference delay, imperfect quadrature, gain mismatch) rotates the I/Q axes. A purely in-phase response that should sit in X will spill into Y, and the reported θ becomes biased.

Practical calibration steps
  1. Apply a known coherent stimulus expected to be mostly in-phase.
  2. Adjust φ (or run auto-phase) to minimize |Y| and maximize stable X.
  3. Validate by changing τ: R should remain consistent (only noise reduces with larger τ).
  4. If residual leakage remains, correct I/Q gain and orthogonality (digital rotation/scaling) and re-check Y floor.
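Step 4's digital rotation can be sketched in a few lines (Python/NumPy; the function name and values are this sketch's own):

```python
import numpy as np

def autophase(x_raw, y_raw):
    """Rotate the measured I/Q vector so the response lands entirely in X.

    Returns the rotation angle phi and the corrected (X, Y); after a good
    calibration an in-phase stimulus leaves |Y| near the noise floor."""
    phi = np.arctan2(y_raw, x_raw)
    x = x_raw * np.cos(phi) + y_raw * np.sin(phi)
    y = -x_raw * np.sin(phi) + y_raw * np.cos(phi)
    return phi, x, y

# In-phase stimulus measured with the raw axes rotated by ~20 degrees:
phi, x, y = autophase(0.94, 0.34)
# x now holds the full magnitude; y collapses toward zero (leakage removed).
```

In practice the rotation angle is estimated from a known in-phase stimulus and then held fixed, exactly as in the calibration steps above.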
PSD validation cues (to avoid being fooled)
  • If R changes dramatically when τ changes, suspect non-coherence or artifact energy entering baseband.
  • If Y rises after transients, suspect phase drift or front-end recovery asymmetry rather than real physics.
  • If a strong interferer is present, verify overload flags; PSD can demodulate distortion into a stable baseband number.
Figure: PSD core — Vin from the front-end is multiplied by cos and sin references (mixer, chopper, or DDC), low-pass filtered (τ) into Xraw/Yraw, then rotated by calibration phase φ into X and Y; a leakage arrow shows X spilling into Y when φ is wrong. Minimize |Y| with φ calibration for an in-phase stimulus; stable R across τ changes indicates true coherent demodulation.

H2-5 · Low-pass / time constant / roll-off: resolution vs response time

The low-pass filter after PSD defines what “counts as signal” at baseband. Increasing the time constant τ reduces output noise by narrowing the effective noise bandwidth, but it also slows settling and can hide fast changes. Roll-off (filter order) further trades noise suppression against settling behavior.

τ vs ENBW (engineering intuition)
  • Larger τ averages longer → smaller ENBW → lower output noise, but slower response.
  • For common LPF implementations, ENBW is inversely proportional to τ (e.g., 1/(4τ) for a single-pole stage), so τ ↑ → ENBW ↓.
  • Noise RMS at the output tends to scale with √ENBW: cutting ENBW by ~4× typically lowers noise by ~2× (rule of thumb).
  • Use this to estimate “how much noise improvement is worth how much time penalty” before tweaking roll-off.
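For the common topology of cascaded identical single-pole RC stages, these rules of thumb become concrete numbers (Python sketch; the ENBW factors are the standard values for that topology):

```python
def enbw(tau, order=1):
    """Equivalent noise bandwidth (Hz) of a cascade of identical single-pole
    RC stages with time constant tau: 1/(4*tau) for 1st order, 1/(8*tau)
    for 2nd, 3/(32*tau) for 3rd, 5/(64*tau) for 4th."""
    factor = {1: 1 / 4, 2: 1 / 8, 3: 3 / 32, 4: 5 / 64}[order]
    return factor / tau

bw_a = enbw(0.1)                       # tau = 100 ms -> 2.5 Hz
bw_b = enbw(0.4)                       # tau = 400 ms -> 0.625 Hz (4x smaller)
noise_ratio = (bw_b / bw_a) ** 0.5     # noise RMS scales with sqrt(ENBW) -> 0.5
```

Quadrupling τ buys a 2× noise reduction at a 4× response-time cost, which is exactly the trade to weigh before touching roll-off.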
Roll-off (order) changes the feel of the measurement
  • 1st order: simplest settling, least risk of surprising behavior; weaker suppression of out-of-band baseband noise.
  • 2nd order: practical balance for many measurements; cleaner readout at similar τ, with moderate settling cost.
  • 4th order: strongest rejection of baseband noise away from DC; can require more time to fully settle and may appear “slower” after steps.

Practical rule: meet the required response time first with a conservative order (1st/2nd), then increase τ or roll-off only if the noise floor is still above the target.

Executable setup flow (from speed to τ and roll-off)
  1. Define the fastest change that must be tracked: scan dwell per point, sweep speed, or smallest transient that matters.
  2. Pick a settling requirement: for example, “stable enough for display” vs “stable enough for logging and control.”
  3. Start with 1st or 2nd order and choose τ so the output stabilizes within the allowed time window.
  4. If noise is still too high, increase τ (primary knob) or increase roll-off (secondary knob) while re-checking settling.
  5. Lock the configuration only after step-response and noise-statistics checks agree with expectations.
Verification (to avoid “pretty but wrong” settings)
  • Step check: apply a repeatable amplitude step and measure time to reach a stable plateau (same test for each roll-off).
  • Noise check: with no signal (or shorted input), confirm X/Y RMS drops predictably when τ is increased.
  • Consistency check: R should not change materially with τ for a truly coherent steady signal—only noise should shrink.
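The step check can be rehearsed offline by driving cascaded single-pole stages with a unit step and timing settling to 1% (Python/NumPy sketch; the sample rate and tolerance are illustrative):

```python
import numpy as np

def settle_time(tau, order, fs=10_000, t_max=3.0, tol=0.01):
    """Time for a cascade of `order` identical single-pole stages (each with
    time constant tau) to stay within `tol` of the final value after a step."""
    a = 1.0 - np.exp(-1.0 / (fs * tau))
    stages = [0.0] * order
    out = np.empty(int(t_max * fs))
    for i in range(len(out)):
        u = 1.0                        # unit step input
        for s in range(order):         # pass through each one-pole stage
            stages[s] += a * (u - stages[s])
            u = stages[s]
        out[i] = u
    bad = np.nonzero(np.abs(out - 1.0) >= tol)[0]
    return (bad[-1] + 1) / fs if len(bad) else 0.0

t1 = settle_time(0.1, order=1)   # ~0.46 s (= -ln(0.01) * tau)
t4 = settle_time(0.1, order=4)   # ~1.0 s: same tau, much longer settling
```

For the same τ, the 4th-order cascade takes roughly twice as long to settle to 1% as the 1st-order filter, which is the "slower feel" described above.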
Figure: LPF roll-off comparison — the same coherent amplitude step through 1st, 2nd, and 4th order low-pass filters: 1st settles fastest, 2nd is cleaner with moderate settling, 4th has the lowest ripple but the longest settling. Use τ to hit the response-time requirement first; increase roll-off only if additional noise suppression is needed without violating settling.

H2-6 · Dynamic reserve & interference rejection: not getting fooled

Dynamic reserve describes how a lock-in can extract a weak coherent component even when a much larger interferer exists. This works only if the interferer is not coherent with the reference and the front-end remains linear. Once the input chain saturates, distortion products can become partially coherent and create a stable-looking false baseband output.

Dynamic reserve requires two conditions
  • Non-coherence: the interferer is not phase-locked to the reference, so correlation rejects it after demodulation.
  • Linear headroom: the interferer does not drive the front-end into compression/clipping; otherwise distortion creates demodulatable artifacts.
Defense layers (in priority order)
  1. Front-end headroom + limiter strategy: prevent saturation and ensure fast, symmetric recovery if limiting occurs.
  2. Pre-filtering: notch or band-limit large known interferers before PSD (especially 50/60 Hz and harmonics in many lab environments).
  3. Reference frequency planning: choose modulation/reference frequencies that avoid interference-rich regions and spur clusters.
  4. Consistency validation: confirm results remain stable across τ changes and under controlled addition/removal of interference.
Failure signatures (how being fooled looks)
  • τ-dependent amplitude: R shifts significantly when τ is changed (beyond noise reduction) → artifact energy is entering baseband.
  • Y/θ instability under interference: Y jumps or θ becomes erratic when a large interferer appears → phase rotation or nonlinear mixing.
  • Slow recovery after transients: readings remain biased after the interferer is removed → limiter/clamp recovery or range switching imprint.
Quick test (minimum effort, high value)
  1. Establish a baseline with a coherent stimulus and record X/Y/R/θ.
  2. Add a large non-coherent interferer gradually and monitor overload flags and Y/θ behavior.
  3. Change τ by 2–4×; true coherent amplitude stays consistent in R, while artifacts often shift with τ.
  4. Remove the interferer and confirm fast return to baseline; slow return indicates recovery-induced baseband bias.
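The quick test above can be rehearsed in simulation: a linear chain rejects a huge non-coherent interferer, while hard clipping makes the reading quietly wrong (Python/NumPy sketch with illustrative levels; clipping stands in for front-end saturation):

```python
import numpy as np

def demod_r(v, f_ref, fs):
    """Full-record average stands in for a long time constant; returns R."""
    t = np.arange(len(v)) / fs
    x = 2 * np.mean(v * np.cos(2 * np.pi * f_ref * t))
    y = 2 * np.mean(v * np.sin(2 * np.pi * f_ref * t))
    return float(np.hypot(x, y))

fs, f_ref, n = 100_000, 1_000.0, 500_000
t = np.arange(n) / fs
signal = 10e-6 * np.cos(2 * np.pi * f_ref * t)      # 10 uV coherent tone
mains = 0.1 * np.cos(2 * np.pi * 60.0 * t)          # 100 mV of 60 Hz pickup

r_clean = demod_r(signal, f_ref, fs)                # baseline: 10 uV
r_loaded = demod_r(signal + mains, f_ref, fs)       # linear chain: still 10 uV
# Hard clipping at +/-50 mV (saturated front-end) gates the signal and folds
# distortion into baseband: R now reads a stable-looking but wrong value.
r_clipped = demod_r(np.clip(signal + mains, -0.05, 0.05), f_ref, fs)
```

The interferer, 80 dB larger than the signal, is rejected as long as the chain stays linear; once clipped, R drops by roughly 3× even though nothing about the physical signal changed.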
Figure: Interference path — a large non-coherent interferer drives front-end saturation/compression; the resulting distortion products (harmonics/IMD) are demodulated by the PSD into a stable-looking but false X/Y output, with an overload flag and limiter-recovery notes alongside. Dynamic reserve is real only with headroom: prevent saturation, pre-filter big interferers, and verify stability across τ changes.

H2-7 · ΣΔ ADC & digitization: where the bits really matter

In a lock-in, “more bits” only helps if the digitization chain preserves a clean, linear baseband after correlation. ΣΔ ADCs are widely used because oversampling and digital decimation concentrate performance where lock-ins live: low-frequency noise, stable offsets, and repeatable averaging.

Why ΣΔ fits lock-in measurements
  • Oversampling + noise shaping push quantization noise away from the low-frequency region of interest.
  • Decimation filters set a predictable baseband bandwidth, matching the lock-in’s time-constant-driven averaging.
  • Low-frequency behavior (drift and 1/f-like effects) often dominates real lock-in accuracy more than headline sample rates.
ADC criteria that matter from a lock-in perspective
What to look at → What it impacts → Quick cue
  • Noise spectrum in the target baseband → sets the achievable X/Y noise floor → X/Y RMS should drop predictably as τ increases.
  • Low-frequency drift / offset stability → long-term bias in X/Y → temperature changes should not create directional X/Y shifts after self-cal.
  • Linearity (INL/DNL) and harmonic behavior → distortion can be demodulated into baseband → strong tones should not create unexpected DC-like components.
  • Overload recovery (front-end + modulator behavior) → “stable-looking but wrong” results after transients → recovery should be fast and symmetric, with an overload flag.
  • Digital latency and bandwidth after decimation → apparent measurement sluggishness and phase bias → stable R with τ changes, and predictable settling after steps.
Bandwidth, decimation, and anti-alias: where digitization gets fooled
  • Analog anti-aliasing still matters: large out-of-band content can saturate the front-end or modulator before any digital filter can help.
  • Decimation sets the effective baseband bandwidth: choose it to comfortably cover the intended measurement bandwidth; overly aggressive decimation can “slow” the measurement.
  • Keep headroom under interferers: if a strong interferer drives nonlinear behavior, distortion products can become partially correlated and appear as false baseband.
  • Timing quality (reference/clock jitter) affects baseband purity through the reference path; treat it as an error source and validate with τ-sweep consistency rather than relying on headline specs.
Range switching: the hidden source of “coherent-looking” artifacts

Range changes (gain/attenuation, relay or switch matrix, digital scaling) can inject transients and bias shifts that settle slowly. With long τ, these effects can masquerade as a stable coherent output.

  • Predictable settling time after a range change should be specified and testable.
  • No directional bias: X/Y should return to the same baseline regardless of switching direction.
  • Phase sanity: Y and θ should not jump in ways inconsistent with the physical signal path.
Digitization mini-checklist
  1. Under worst-case interferers, verify no overload and no slow recovery offsets in X/Y.
  2. Change τ by 2–4×: R should remain consistent for a steady coherent signal; only noise should shrink.
  3. Perform a controlled range switch: confirm settling time and absence of directional bias in X/Y.
  4. With input shorted/quiet, ensure X/Y noise follows the expected τ trend and does not show spur-driven “DC” offsets.
Figure: ΣΔ digitization chain — front-end (low-noise, linear), anti-alias filter, and range switch feed a ΣΔ modulator (noise shaping + headroom) and decimation filter (sets baseband bandwidth), then the I/Q baseband + LPF(τ) core producing X, Y, R, θ; strong interferers must not overload the modulator. Validate digitization by τ-sweep consistency and range-switch settling, not by headline "bit" claims.

H2-8 · Calibration & drift control: amplitude, phase, and offset closure

Calibration is a closure loop, not a one-time adjustment. Lock-in accuracy depends on keeping amplitude gain, phase reference, and offset/drift under control across temperature, time, and range changes. A field-ready design provides repeatable self-cal routines plus traceable calibration versions.

Three calibration closures
  • Amplitude (gain): correct the end-to-end gain per range so the same coherent stimulus maps to the same input-referred value.
  • Phase: define a zero-phase point and correct I/Q quadrature and gain mismatch so X/Y axes are stable.
  • Offset / drift: suppress long-term bias from front-end thermals, reference amplitude drift, and ADC reference drift.
Amplitude calibration (end-to-end, range-aware)
  • Reference injection: inject a known calibration tone/amplitude through an internal switch into the measurement path.
  • Known gain path: verify gain consistency across range steps (analog gain/attenuation and digital scaling).
  • Executable acceptance: input-referred amplitude should match across ranges after correction, and should not change materially with τ (only noise shrinks).
Phase calibration (zero-phase + I/Q orthogonality)

Phase alignment requires a defined reference condition for “in-phase,” plus correction of quadrature errors (non-ideal 90° separation) and I/Q gain mismatch. These errors rotate the measurement axes and create X→Y leakage.

  • Auto-phase: adjust φ to minimize |Y| for a stimulus expected to be in-phase.
  • Quadrature correction: apply digital rotation/scaling to restore orthogonality and equalize I/Q gain.
  • Executable acceptance: for an in-phase stimulus, Y should approach the noise floor and θ should stay stable across temperature and reboot.
Drift map (source → symptom → closure)
  • Front-end thermal drift → X/Y baseline wanders with temperature → periodic zero/self-cal + temperature-tagged constants.
  • Reference amplitude drift → R slowly scales over time → reference monitoring and gain recalibration interval.
  • ADC reference drift → global amplitude bias across ranges → reference monitor + calibration constants with versioning.
  • Range-switch bias → step-like offsets after switching → enforce settling time and store per-range offset trims.
Field-ready self-cal plan (traceable)
  • Triggers: power-up, periodic runtime timer, temperature delta beyond a threshold, and after large range changes.
  • What to run: zero/offset first, then amplitude gain, then phase alignment (in that priority order).
  • What to store: calibration version ID, timestamp, temperature, ranges involved, pass/fail status, and updated constants summary.
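A minimal traceability record for one self-cal run might look like this (Python sketch; the field names and version scheme are this sketch's own, not a specific instrument's schema):

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class CalRecord:
    """One self-cal run: what triggered it, what it touched, what it stored."""
    version_id: str
    trigger: str              # "power-up" | "timer" | "temp-delta" | "range-change"
    temperature_c: float
    ranges: list
    passed: bool
    constants: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

rec = CalRecord(
    version_id="cal-r3",
    trigger="temp-delta",
    temperature_c=31.5,
    ranges=["10mV", "100mV"],
    passed=True,
    constants={"gain_10mV": 1.0021, "phase_deg": -0.34, "offset_uV": 1.8},
)
log_entry = asdict(rec)        # flat dict, ready for NVM or a log stamp
```

Storing the temperature with each constant set is what makes drift provable later rather than assumed.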
Figure: Calibration closure loop — a known cal source (tone/amplitude) is injected through a switch into the measurement path (front-end + PSD + LPF); an estimator updates gain/phase/offset constants stored in NVM with a version ID and temperature tag, and periodic field self-cal records version, timestamp, temperature, and pass/fail. Calibration is proven by repeatability plus traceable versions, not by one-time factory adjustment.

H2-9 · Measurement recipes: how to set frequency, τ, and ranges fast

Reliable lock-in measurements come from repeatable setup recipes. The fastest path is to choose a frequency plan that avoids interference clusters, keep front-end headroom under worst-case conditions, set τ and roll-off to match the required response time, then validate X/Y/R/θ consistency with a minimal reference check.

Three common scenarios (pick the closest recipe)
A) Low-frequency, slow changes (1/f dominates)
  • Prefer modulation to move the measurement away from the worst 1/f region when possible.
  • If staying near DC, use longer τ and stronger roll-off, and prioritize drift/offset closure.
  • Keep conservative range/headroom to avoid slow recovery offsets that hide inside long averaging.
B) Mid-frequency narrowband (reject nearby interference)
  • Choose a reference frequency that avoids mains harmonics and known spur clusters in the setup.
  • Set τ by the required response time first; narrow ENBW only after stability is confirmed.
  • Pre-filter large interferers if headroom is tight; prevent front-end compression before PSD.
C) Modulated carrier (move signal away from 1/f)
  • Select a modulation/reference frequency compatible with the DUT bandwidth and free of strong interferers.
  • After demodulation, choose τ and roll-off based on the desired envelope/parameter tracking speed.
  • Use auto-phase to minimize Y for a known in-phase condition; confirm θ zero-point repeatability.
Six-step setup method (copy-paste workflow)
  1. Choose reference/modulation frequency: avoid 50/60 Hz harmonics and local spur clusters; pick a quiet window.
  2. Select input mode and range: voltage vs current mode to match the source; ensure headroom under worst-case interferers and transients.
  3. Set τ and roll-off: hit the required response time first; then narrow ENBW for noise reduction without violating settling.
  4. Enable I/Q and phase optimization: auto-phase for a known in-phase stimulus, then lock φ to keep Y near the noise floor.
  5. Check X/Y/R/θ behavior: look for τ-dependent amplitude, θ jumps, or large Y under interference—these point to artifacts or overload.
  6. Run a minimal validation: perform a reference injection (or self-cal) and confirm amplitude/phase consistency before logging real data.
Common pitfalls (symptom → cause → fix)
  • Noisy readout → τ too small / ENBW too wide → increase τ or roll-off after confirming settling.
  • Slow or “frozen” sweep response → τ too large / over-filtered → reduce τ or roll-off, then re-check stability.
  • Stable-looking but inconsistent R → overload or distortion folding into baseband → increase headroom, pre-filter, or move frequency window.
  • Bias after range changes → switching transients and recovery memory → enforce a settling delay and re-run a quick self-cal.
Figure: Quick setup flow — Frequency (avoid mains, quiet window) → Range (headroom, no overload) → Filter (τ + roll-off to match speed) → Phase (I/Q on, auto-phase) → Validate (τ sweep, reference injection) → Record (config, cal version, temperature), looping back to Range/Frequency/Filter if validation fails. A recipe is complete only when the validation step passes and the setup is recorded for repeatability.

H2-10 · Validation checklist: proving sensitivity and rejecting artifacts

Validation turns “a number on the screen” into evidence. A complete lock-in validation plan separates checks by lifecycle: R&D proves the sensitivity limits and failure modes, Production enforces repeatable calibration closure, and Field detects drift and abnormal conditions early.

R&D validation (prove sensitivity and failure modes)
  • Noise floor: shorted/quiet input; confirm X/Y RMS trends with τ and absence of spur-driven DC-like offsets.
  • Phase linearity: apply known phase/latency changes; verify θ response is monotonic and repeatable.
  • Dynamic reserve: add a large interferer; verify no overload in the intended operating range and recognizable artifacts when pushed into nonlinearity.
  • Range-switch settling: enforce predictable settling time and no directional bias after switching.
  • τ-sweep consistency: for a steady coherent signal, R should stay stable as τ changes (noise changes, not mean amplitude).
Production validation (repeatability and limits)
  • Reference injection PASS/FAIL: verify gain/phase/offset closure at key ranges and store a calibration version ID.
  • Auto-phase convergence: for an in-phase stimulus, minimize |Y| to the noise floor and lock the phase reference.
  • I/Q quadrature error limit: enforce limits on X→Y leakage and I/Q gain mismatch after correction.
  • Quick report: store a minimal record (version, temperature, ranges tested, pass/fail) for traceability.
Field validation (self-check and drift monitoring)
  • Self-check mode: quick zero/offset check and optional reference injection to confirm baseline health.
  • Drift thresholds: detect slow bias in X/Y or gain scaling in R using temperature-tagged limits.
  • Abnormal indicators: overload events, slow recovery after transients, and τ-dependent amplitude shifts.
  • Actionable prompts: recommend re-cal, warm-up wait, or range/frequency adjustments when thresholds are exceeded.
Validation matrix across R&D, Production, and Field Three-column matrix listing validation items for R&D, production test, and field self-check. Each column contains short blocks for core checks such as noise floor, reference injection, and drift thresholds. Validation matrix: prove sensitivity and reject artifacts Separate checks by lifecycle to keep results repeatable and explainable. R&D Production Field Noise floor Phase linearity Dynamic reserve Range settling τ-sweep sanity Ref inject PASS Auto-phase I/Q leak limit Version write Quick report Self-check Drift thresholds Overload detect Re-cal prompt Log stamp A good validation plan makes artifacts recognizable and results reproducible across the product lifecycle.

H2-9 · Measurement recipes: how to set frequency, τ, and ranges fast

Reliable lock-in measurements come from repeatable setup recipes. The fastest path is to choose a frequency plan that avoids interference clusters, keep front-end headroom under worst-case conditions, set τ and roll-off to match the required response time, then validate X/Y/R/θ consistency with a minimal reference check.

Three common scenarios (pick the closest recipe)
A) Low-frequency, slow changes (1/f dominates)
  • Prefer modulation to move the measurement away from the worst 1/f region when possible.
  • If staying near DC, use longer τ and stronger roll-off, and prioritize drift/offset closure.
  • Keep conservative range/headroom to avoid slow recovery offsets that hide inside long averaging.
B) Mid-frequency narrowband (reject nearby interference)
  • Choose a reference frequency that avoids mains harmonics and known spur clusters in the setup.
  • Set τ by the required response time first; narrow ENBW only after stability is confirmed.
  • Pre-filter large interferers if headroom is tight; prevent front-end compression before PSD.
C) Modulated carrier (move signal away from 1/f)
  • Select a modulation/reference frequency compatible with the DUT bandwidth and free of strong interferers.
  • After demodulation, choose τ and roll-off based on the desired envelope/parameter tracking speed.
  • Use auto-phase to minimize Y for a known in-phase condition; confirm θ zero-point repeatability.
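For quick reference, the three recipes above can be captured as a settings table. A minimal sketch with placeholder numbers: the frequencies are illustrative odd values chosen away from 50/60 Hz harmonics, not recommendations, and every field should be tuned to the actual DUT and noise environment.

```python
# Illustrative starting points for scenarios A/B/C above. All numbers are
# placeholders: pick the reference frequency from a measured spectrum of
# the actual setup, then tune tau and roll-off as in the six-step method.
RECIPES = {
    "A_low_freq_slow":     {"f_ref_hz": 17.0,    "tau_s": 1.0,  "rolloff_db_oct": 24},
    "B_mid_band_narrow":   {"f_ref_hz": 1370.0,  "tau_s": 0.1,  "rolloff_db_oct": 12},
    "C_modulated_carrier": {"f_ref_hz": 10000.0, "tau_s": 0.01, "rolloff_db_oct": 12},
}

def pick_recipe(scenario: str) -> dict:
    """Return a copy of the starting-point settings for one scenario key."""
    return dict(RECIPES[scenario])
```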
Six-step setup method (copy-paste workflow)
  1. Choose reference/modulation frequency: avoid 50/60 Hz harmonics and local spur clusters; pick a quiet window.
  2. Select input mode and range: voltage vs current mode to match the source; ensure headroom under worst-case interferers and transients.
  3. Set τ and roll-off: hit the required response time first; then narrow ENBW for noise reduction without violating settling.
  4. Enable I/Q and phase optimization: auto-phase for a known in-phase stimulus, then lock φ to keep Y near the noise floor.
  5. Check X/Y/R/θ behavior: look for τ-dependent amplitude, θ jumps, or large Y under interference—these point to artifacts or overload.
  6. Run a minimal validation: perform a reference injection (or self-cal) and confirm amplitude/phase consistency before logging real data.
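Step 3 couples τ and roll-off to noise bandwidth and settling time. The helper below assumes the output filter is a cascade of identical single-pole RC stages; the ENBW factors (1/4τ, 1/8τ, 3/32τ, 5/64τ for 6/12/18/24 dB/oct) are commonly quoted for that topology, but actual instruments may differ, so check the manual.

```python
import math

# ENBW of n cascaded identical single-pole RC stages, as a multiple of
# 1/tau. Commonly quoted values; treat as estimates for a real instrument.
ENBW_FACTOR = {1: 1 / 4, 2: 1 / 8, 3: 3 / 32, 4: 5 / 64}

def enbw_hz(tau_s: float, order: int) -> float:
    """Equivalent noise bandwidth of the output low-pass filter."""
    return ENBW_FACTOR[order] / tau_s

def settling_time_s(tau_s: float, order: int, pct: float = 99.0) -> float:
    """Approximate time to settle to `pct` of the final value after a step,
    using the Erlang step response of the n-stage RC cascade."""
    target = 1.0 - pct / 100.0
    t = tau_s
    for _ in range(400):
        # residual error of the step response at time t
        residual = math.exp(-t / tau_s) * sum(
            (t / tau_s) ** k / math.factorial(k) for k in range(order)
        )
        if residual <= target:
            return t
        t *= 1.05
    return t
```

For example, τ = 100 ms at 6 dB/oct gives ENBW = 2.5 Hz and roughly 0.46 s to settle to 99%; the same τ at 24 dB/oct narrows ENBW to about 0.78 Hz but more than doubles the settling time, which is exactly the trade-off step 3 asks you to balance.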
Common pitfalls (symptom → cause → fix)
  • Noisy readout → τ too small / ENBW too wide → increase τ or roll-off after confirming settling.
  • Slow or “frozen” sweep response → τ too large / over-filtered → reduce τ or roll-off, then re-check stability.
  • Stable-looking but inconsistent R → overload or distortion folding into baseband → increase headroom, pre-filter, or move frequency window.
  • Bias after range changes → switching transients and recovery memory → enforce a settling delay and re-run a quick self-cal.
Figure: Quick lock-in setup flow. Frequency → Range → Filter → Phase → Validate → Record, looping back to Frequency/Range/Filter if validation fails. Quick cues: R should not shift with τ for a steady coherent signal; Y should approach the noise floor after auto-phase; overload must be avoided. A recipe is complete only when the validation step passes and the configuration (including calibration version and temperature) is recorded for repeatability.

H2-10 · Validation checklist: proving sensitivity and rejecting artifacts

Validation turns “a number on the screen” into evidence. A complete lock-in validation plan separates checks by lifecycle: R&D proves the sensitivity limits and failure modes, Production enforces repeatable calibration closure, and Field detects drift and abnormal conditions early.

R&D validation (prove sensitivity and failure modes)
  • Noise floor: shorted/quiet input; confirm X/Y RMS trends with τ and absence of spur-driven DC-like offsets.
  • Phase linearity: apply known phase/latency changes; verify θ response is monotonic and repeatable.
  • Dynamic reserve: add a large interferer; verify no overload in the intended operating range and recognizable artifacts when pushed into nonlinearity.
  • Range-switch settling: enforce predictable settling time and no directional bias after switching.
  • τ-sweep consistency: for a steady coherent signal, R should stay stable as τ changes (noise changes, not mean amplitude).
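The τ-sweep consistency check above is easy to automate. A sketch, with an assumed 2% acceptance band (pick the real tolerance from the instrument's accuracy spec):

```python
from statistics import median

def tau_sweep_consistent(r_means, tol_pct=2.0):
    """Sanity check: for a steady coherent signal, mean R measured at
    several time constants should agree; only the noise level should
    change. `tol_pct` is an assumed acceptance band, not a standard."""
    m = median(r_means)
    worst_pct = max(abs(r - m) / m * 100.0 for r in r_means)
    return worst_pct <= tol_pct
```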
Production validation (repeatability and limits)
  • Reference injection PASS/FAIL: verify gain/phase/offset closure at key ranges and store a calibration version ID.
  • Auto-phase convergence: for an in-phase stimulus, minimize |Y| to the noise floor and lock the phase reference.
  • I/Q quadrature error limit: enforce limits on X→Y leakage and I/Q gain mismatch after correction.
  • Quick report: store a minimal record (version, temperature, ranges tested, pass/fail) for traceability.
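The reference-injection gate might look like the following sketch. The tolerances here are placeholders; real limits come from the instrument's accuracy specification, and the check assumes auto-phase was already run so θ should read near zero.

```python
def ref_injection_pass(r_meas, theta_meas_deg, r_expected,
                       gain_tol_pct=1.0, phase_tol_deg=0.5):
    """Production PASS/FAIL on an injected reference tone (illustrative
    tolerances). Assumes theta should be near zero after auto-phase."""
    gain_err_pct = abs(r_meas - r_expected) / r_expected * 100.0
    # wrap the phase reading into (-180, 180] before comparing to the limit
    phase_err_deg = abs((theta_meas_deg + 180.0) % 360.0 - 180.0)
    return gain_err_pct <= gain_tol_pct and phase_err_deg <= phase_tol_deg
```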
Field validation (self-check and drift monitoring)
  • Self-check mode: quick zero/offset check and optional reference injection to confirm baseline health.
  • Drift thresholds: detect slow bias in X/Y or gain scaling in R using temperature-tagged limits.
  • Abnormal indicators: overload events, slow recovery after transients, and τ-dependent amplitude shifts.
  • Actionable prompts: recommend re-cal, warm-up wait, or range/frequency adjustments when thresholds are exceeded.
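A field drift triage could be sketched as below. Every threshold (temperature window, calibration age, gain-drift limit) is a placeholder to be replaced by instrument-specific, temperature-tagged limits; the function returns an action hint so the UI can prompt the user rather than just fail.

```python
def drift_status(r_mean, r_baseline, temp_c, cal_age_hours,
                 gain_drift_pct_limit=0.5, temp_window_c=(15.0, 35.0),
                 cal_age_limit_h=720.0):
    """Field drift triage with placeholder thresholds. Returns an action
    hint string instead of a bare PASS/FAIL."""
    if not (temp_window_c[0] <= temp_c <= temp_window_c[1]):
        return "wait: outside calibrated temperature window"
    if cal_age_hours > cal_age_limit_h:
        return "re-cal: calibration age exceeded"
    drift_pct = abs(r_mean - r_baseline) / r_baseline * 100.0
    if drift_pct > gain_drift_pct_limit:
        return "re-cal: gain drift beyond limit"
    return "ok"
```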
Figure: Validation matrix across R&D, Production, and Field. R&D: noise floor, phase linearity, dynamic reserve, range settling, τ-sweep sanity. Production: reference-injection PASS, auto-phase, I/Q leakage limit, version write, quick report. Field: self-check, drift thresholds, overload detection, re-cal prompt, log stamp. A good validation plan makes artifacts recognizable and results reproducible across the product lifecycle.

H2-11 · Event logs & field evidence: catching latent “works in lab” failures

Field failures often happen at the edges: reference coherence degrades, the input briefly overloads, a range switch settles slowly, or calibration validity drifts with temperature. The logging goal is simple: capture enough evidence to explain drift, jumps, and “false signals” without storing every raw sample.

Minimum log schema (store as counters + events + window stats)
1) Reference coherence (prove the reference was “alive”)
  • pll_lock (0/1) + lock_lost_event (timestamped): shows reference lock continuity.
  • phase_drift_indicator (window RMS or rate): flags slow phase wandering without deep phase-noise analysis.
  • ref_source (INT/EXT/SYNC) + ref_freq_hz: enables field reproduction of conditions.
2) Input health (catch overload and recovery memory)
  • overload_count (optionally near-clip/clip): correlates with jumps and “stable fake values.”
  • limiter_active + limiter_time_ms: shows how often limiting masked a transient.
  • recovery_time_ms (max or P95): highlights slow return-to-linear behavior after large events.
3) Range switching evidence (make jumps explainable)
  • range_id + range_switch_count: ties measurement behavior to a specific gain path.
  • range_switch_fail_count (or settle-time timeout): detects silent switching faults.
  • range_settle_time_ms + range_direction (up/down): reveals direction-dependent bias or slow settling.
4) Calibration identity (answer “which calibration?” instantly)
  • cal_version_id (required): links every result to a known calibration dataset.
  • last_selfcal_timestamp + cal_age_hours: enables age-based re-cal prompts.
  • temp_c (mean/peak) + cal_status (PASS/FAIL): provides temperature context for drift claims.
5) Output statistics (carry “noise floor” into the field)
  • x_rms, y_rms (window RMS) + x_peak, y_peak: detects rising noise and transient spikes.
  • r_mean, r_std (window) + optional theta_std: shows stability vs drift without raw capture.
  • config snapshot (τ, roll-off, frequency, range, ref source): makes every log record reproducible.
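The schema above maps naturally onto a typed record. A minimal Python sketch: the field names follow the schema groups, but the exact set and types are an assumption, not a fixed format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class WindowStats:
    x_rms: float
    y_rms: float
    x_peak: float
    y_peak: float
    r_mean: float
    r_std: float
    theta_std: float

@dataclass
class LogRecord:
    timestamp_s: float
    ref_source: str          # "INT" / "EXT" / "SYNC"
    ref_freq_hz: float
    pll_lock: bool
    overload_count: int
    range_id: int
    cal_version_id: str
    temp_c: float
    tau_s: float             # config snapshot fields make the record reproducible
    rolloff_db_oct: int
    stats: WindowStats

def to_json(rec: LogRecord) -> str:
    """Serialize one record. Periodic window records plus timestamped
    events keep logs compact without raw sample capture."""
    return json.dumps(asdict(rec))
```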
Evidence packet (recommended export for support)
  • Last 10 events: lock lost/acquired, overload, limiter active, range switches, self-cal results.
  • Last 60 seconds stats: x/y RMS & peaks, r mean/std, theta std, recovery P95.
  • Current snapshot: frequency, τ, roll-off, range_id, ref source/freq, cal_version_id, temperature.
Field diagnosis: symptoms → log evidence → likely cause → next action
1) Drift (temperature drift, reference drift, or calibration aging)
Symptoms: R slowly shifts, θ baseline moves, X/Y offset trends in one direction.
Log evidence:
  • temp_c trend correlates with r_mean drift, while pll_lock stays 1.
  • cal_age_hours is high or last_selfcal_timestamp is stale.
  • phase_drift_indicator rises without an overload spike.
Likely cause: thermal drift in analog gain/offset, reference amplitude/phase drift, or expired calibration validity.
Next action: warm up to thermal steady state, re-run self-cal, verify ref source/frequency, and stamp results with updated cal_version_id.
2) Sudden jumps (overload recovery or range switching)
Symptoms: R or θ steps abruptly, then slowly returns; step appears near range changes.
Log evidence:
  • Jump window aligns with overload_count increment or limiter_active burst.
  • recovery_time_ms spikes or shows long-tail behavior.
  • A range switch event occurs within the same time window, with abnormal range_settle_time_ms.
Likely cause: brief compression/overload feeding the PSD, or switching transients leaving a “memory” offset.
Next action: increase headroom (attenuation / lower gain), enforce settle delays after switching, and run a quick reference injection to confirm closure.
3) “Signal present” but actually an artifact (mixing products in-band)
Symptoms: R looks stable, but changes with τ; Y remains unusually large; θ locks to a suspiciously constant angle.
Log evidence:
  • r_mean shifts when τ/roll-off changes (mean should not depend on τ for a steady coherent signal).
  • Frequent near-clip or elevated limiter_time_ms without hard overload flags.
  • Increased phase_drift_indicator while pll_lock remains 1.
Likely cause: nonlinearity or intermodulation products fall into the detection bandwidth and get “correlated” by the PSD.
Next action: move the frequency window, add pre-filter/notch, increase headroom, then validate with τ-sweep + reference injection before trusting the result.
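The three symptom patterns above can be encoded as simple if/then rules. A sketch, assuming the log fields from the schema in this section and placeholder thresholds; a real implementation would attach confidence and the supporting evidence window.

```python
def diagnose(ev: dict) -> str:
    """Map an evidence window onto the drift / jump / artifact patterns.
    Keys mirror the log schema above; all thresholds are placeholders."""
    # Artifact: mean R depends on tau, or limiting without hard overload
    if ev.get("r_mean_shifts_with_tau") or ev.get("limiter_time_ms", 0) > 50:
        return "artifact: suspect in-band mixing products; move frequency, pre-filter, re-validate"
    # Jump: overload or range switch in the same window as the step
    if ev.get("overload_count", 0) > 0 or ev.get("range_switch_count", 0) > 0:
        return "jump: overload/range transient; add headroom, enforce settle delay, inject reference"
    # Drift: thermal trend or calibration aging while the PLL stays locked
    if ev.get("pll_lock", True) and (ev.get("temp_delta_c", 0) > 2
                                     or ev.get("cal_age_hours", 0) > 720):
        return "drift: thermal or cal aging; warm up, self-cal, update cal_version_id"
    return "ok: no rule fired"
```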
Example BOM anchors (by function) — useful for logging & evidence features

These part numbers are practical examples to anchor procurement and implementation choices. Final selection should match noise, bandwidth, leakage, and temperature requirements of the specific instrument design.

  • Clock / coherence (lock + ref identity): SiLabs Si5341, TI LMK04828, ADI ADF4351; DDS for reference tones: ADI AD9833, AD9959.
  • Overload detect / limiter evidence: TI TLV3501, ADI ADCMP600 (comparators); analog switch/limiter building blocks: ADI ADG1419, TI TS5A23157.
  • Range switching & injection switching: low-leakage switch families ADI ADG1209, ADG1414.
  • Event log storage (high endurance): FRAM Infineon/Cypress FM24CL64B; SPI Flash Winbond W25Q64JV, W25Q128JV.
  • Timestamps & temperature tags: RTC ADI/Maxim DS3231; temperature TI TMP117, TMP102.
  • Reset / watchdog (log integrity): TI TPS3823, ADI/Maxim MAX6369, Microchip MCP1316.
  • Controller for counters/stats: ST STM32H743; FPGA options for high-rate event capture: AMD/Xilinx Artix-7, Lattice ECP5.
Figure F11: Event log closure loop (field evidence). The measurement engine (X/Y/R/θ plus τ/roll-off, frequency, range, and reference source) feeds counters and window statistics (overload, limiter time, settle time, X/Y RMS and peaks, R mean/std, θ std, temperature tag, cal age) and a timestamped event log (lock lost/acquired, overload, range switch, self-cal, config snapshot with cal_version_id). Diagnosis rules evaluate this evidence for drift, jumps, and artifacts, then raise user alerts (re-cal, change range, move frequency window, export evidence packet); the resulting actions feed back into the measurement. Store "events" when something changes and "window stats" periodically: this keeps logs compact while preserving field evidence.

Core takeaway

A lock-in amplifier extracts the in-phase (X) and quadrature (Y) components that are coherent with a reference, so sensitivity is set by coherence, headroom, and time-constant choices—not just by “more filtering.” Use the FAQs below to pick frequency/range/τ quickly and to rule out overload- or interference-driven artifacts.

H2-12 · FAQs ×12

1) What problem does a lock-in solve better than a simple bandpass filter?
A bandpass filter only selects a frequency region, while a lock-in performs coherent (reference-synchronous) detection, separating the in-phase (X) and quadrature (Y) components. This lets it recover signals buried in broadband noise and reject out-of-reference interference, as long as the reference is coherent and the front-end stays linear. If no reference (or controlled modulation) exists, filtering alone cannot recreate coherence.
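To make the coherence point concrete, here is a minimal digital phase-sensitive detector: multiply the input by in-phase and quadrature references and average. A plain average stands in for the low-pass stage (τ), so the clean result below assumes an integer number of reference cycles in the window.

```python
import math

def lock_in_xy(samples, fs_hz, f_ref_hz, phi_rad=0.0):
    """Minimal phase-sensitive detection: correlate with quadrature
    references, average, and scale so X = A*cos(theta), Y = A*sin(theta)
    for an input A*cos(w*t + theta)."""
    n = len(samples)
    x = y = 0.0
    for i, s in enumerate(samples):
        w = 2.0 * math.pi * f_ref_hz * i / fs_hz + phi_rad
        x += s * math.cos(w)
        y -= s * math.sin(w)
    return 2.0 * x / n, 2.0 * y / n

# A 0.5 V signal at 100 Hz with 30 degrees of phase, sampled at 10 kHz
fs, f = 10_000.0, 100.0
sig = [0.5 * math.cos(2 * math.pi * f * i / fs + math.radians(30))
       for i in range(10_000)]
x, y = lock_in_xy(sig, fs, f)
```

Any input component not coherent with `f_ref_hz` averages toward zero over the window, which is precisely what a bandpass filter cannot do for in-band but incoherent interference.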
2) How do I ensure the reference is truly coherent with the signal?
Coherence means the reference frequency matches the signal component of interest and the relative phase does not wander faster than the chosen time constant can tolerate. Use a synchronized source when possible, and track evidence such as PLL lock events and a phase-drift indicator (θ stability or phase-error RMS). If lock is lost or θ variance grows, reduce τ, re-establish synchronization, or move to an externally disciplined reference.
3) When should I use voltage input vs current input (TIA) front-end?
Choose voltage mode for low-to-moderate source impedance signals where CMRR and input protection dominate, and choose a TIA for true current sources or very high source impedance where current-to-voltage conversion must happen at the input. In lock-in use, the practical boundary is headroom and recovery: a TIA can be highly sensitive but may saturate on large transients or stray capacitance effects, so range strategy and overload evidence matter as much as noise density.
4) Why do I/Q channels leak into each other, and how do I fix it?
I/Q leakage mainly comes from phase error (φ) between the reference and the demodulator, plus I/Q gain mismatch and DC offsets. Fix it by enabling quadrature detection, running an auto-phase step on a known in-phase condition to minimize |Y|, and applying an I/Q orthogonality calibration (gain/phase correction). If leakage reappears with temperature or range changes, log the calibration version and re-run the correction after settling.
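The correction step can be sketched under a common simplified error model (Y-channel gain mismatch, quadrature phase error, DC offsets); real instruments typically calibrate a full 2×2 matrix, so treat this as an illustration of the idea rather than a production algorithm.

```python
import math

def iq_correct(x_m, y_m, gain_y, phase_err_rad, dc_x=0.0, dc_y=0.0):
    """Undo I/Q gain mismatch, quadrature phase error, and DC offsets,
    assuming the simplified measurement model
        x_m = x + dc_x
        y_m = gain_y * (y*cos(e) + x*sin(e)) + dc_y
    (real instruments calibrate the full 2x2 matrix)."""
    x = x_m - dc_x
    y = ((y_m - dc_y) / gain_y - x * math.sin(phase_err_rad)) / math.cos(phase_err_rad)
    return x, y
```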
5) How do I choose time constant (τ) and roll-off for a given response time?
Start from the response requirement: pick τ to meet the settling time you can tolerate, then increase roll-off to reduce noise only if the slower dynamics are acceptable. A quick sanity check is τ-sweep consistency: for a steady coherent signal, the mean R should not shift as τ changes (only the noise level shrinks). If τ is too small, the readout looks noisy; if τ is too large, sweeps and real changes can disappear behind excessive averaging.
6) What limits dynamic reserve in practice?
Dynamic reserve is usually limited by front-end linearity, not the PSD math. If a large interferer pushes the input chain into clipping or compression, distortion products can fold into the detection bandwidth and create believable artifacts. Practical improvements come from preserving headroom (range/attenuation), applying pre-filtering or notches for known interferers, and choosing a reference frequency window that avoids strong spur clusters and mains harmonics.
7) Why are ΣΔ ADCs common in lock-ins, and when are they not enough?
ΣΔ ADCs are common because they offer high resolution, strong low-frequency performance, and pair naturally with digital decimation to produce clean baseband X/Y. They may be insufficient when the required measurement bandwidth is wide, when overload recovery must be extremely fast, or when reference/analog drift dominates error more than quantization noise. In those cases, range strategy, reference stability, and recovery evidence matter more than “more bits.”
8) How can overload recovery create “ghost signals”?
After overload, the front-end can exhibit slow recovery or temporary nonlinearity, so the PSD correlates distortion or intermodulation products into baseband and the result looks like a stable signal. This is why “stable” is not the same as “true.” Use overload/limiter event counts plus recovery-time statistics, and apply τ-sweep sanity: if mean R changes with τ or Y becomes unusually large, treat the reading as suspect until headroom and linear recovery are restored.
9) What’s a quick way to validate sensitivity without special equipment?
Use a minimal three-step check: (1) measure the noise floor with a quiet/shorted input and confirm noise decreases as τ increases, (2) perform a simple reference injection (or built-in cal tone) to confirm gain/phase closure, and (3) run a τ-sweep consistency check where mean R should not shift for a steady coherent signal. If any step fails, adjust range/headroom first, then revisit frequency and filter settings.
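Step 1 of that check can be evaluated programmatically under a white-noise assumption (output RMS should fall roughly as 1/√τ as ENBW narrows); the factor-of-two slack below is an arbitrary choice, and 1/f-dominated setups will legitimately deviate from this scaling.

```python
import math

def noise_tracks_tau(taus, rms, slack=2.0):
    """Check that noise RMS falls roughly as 1/sqrt(tau) as tau grows
    (white-noise assumption). `slack` allows a factor-of-two deviation
    before flagging; 1/f-dominated noise will fail this on purpose."""
    for (t1, r1), (t2, r2) in zip(zip(taus, rms), zip(taus[1:], rms[1:])):
        expected = r1 * math.sqrt(t1 / t2)
        if not (expected / slack <= r2 <= expected * slack):
            return False
    return True
```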
10) How do I detect phase drift and prevent it from corrupting results?
Detect drift by tracking PLL lock transitions and a lightweight phase-drift indicator such as θ standard deviation or phase-error RMS over a fixed window. Prevent corruption by using a coherent reference source (synchronized when possible), keeping τ consistent with expected drift rates, and re-running auto-phase or self-cal when drift exceeds thresholds. Logging drift evidence and calibration/version context allows results to be flagged as “suspect” rather than silently trusted.
11) What should be logged for field troubleshooting and traceability?
Log evidence that explains failures: reference lock state (and drift indicator), overload/limiter events and recovery time, range switch counts and settle times, calibration version with last self-cal time, and window statistics for X/Y (RMS/peaks) plus R mean/std (and optional θ std). Export an “evidence packet” containing recent events, recent window stats, and a configuration snapshot (frequency, τ, roll-off, range, ref source). This makes field reports reproducible and actionable.
12) How do I separate real signal from interference-induced artifacts?
Use a short decision loop: first change τ/roll-off—noise should change but mean R should not shift for a real coherent signal. Next, check overload evidence (near-clip/limiter activity, recovery-time spikes) because artifacts often ride on brief nonlinearity. Finally, move the reference frequency window or add pre-filtering and re-validate with a reference injection. If the “signal” only appears under specific τ settings or near overload, treat it as an artifact.