In the world of DSP, the usual approach is to convert the ADC input into a normalized form like Q15, which is simply an int16_t interpreted as a real number between -1 and (almost) +1. This maximizes the resolution. Since you're usually working with a linear time-invariant system, you can defer the scaling to physical units until the very end, for presentation or whatever. For a 12-bit ADC, a simple (i.e., fast) subtraction and scale by 16 (a left shift by 4) does the conversion. E.g.:
Code:
q15_t sample = (q15_t)(((int16_t)ADC_code - 2048) * 16); /* multiply by 16 instead of << 4: left-shifting a negative signed value is undefined behavior in C; the compiler emits the same shift */
In this form, you can easily process the data with CMSIS-DSP.

Statistics: Posted by carlk3 — Tue Feb 27, 2024 6:30 pm
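To round this out, here's a minimal sketch of both directions: ADC code to Q15, and Q15 back to a physical unit at the very end. The helper names (`adc_to_q15`, `q15_to_volts`) and the 3.3 V full-scale reference are my own assumptions for illustration, not part of CMSIS-DSP; only the `q15_t` type matches the CMSIS-DSP convention (raw int16_t value / 32768).

```c
#include <stdint.h>

typedef int16_t q15_t; /* CMSIS-DSP fixed-point type: value = raw / 32768.0 */

/* Hypothetical helper: convert a 12-bit unsigned ADC code (0..4095,
   mid-scale = 2048) into Q15. Multiplying by 16 is equivalent to << 4
   but avoids the undefined behavior of left-shifting a negative value. */
static q15_t adc_to_q15(uint16_t adc_code)
{
    return (q15_t)(((int16_t)adc_code - 2048) * 16);
}

/* Hypothetical helper: only at the end, scale back to a physical unit.
   Assumes the ADC spans 0..vref volts, so mid-scale (Q15 zero) = vref/2. */
static float q15_to_volts(q15_t s, float vref)
{
    return ((float)s / 32768.0f) * (vref / 2.0f) + vref / 2.0f;
}
```

Mid-scale maps to Q15 zero, code 0 maps to -32768 (exactly -1.0), and code 4095 maps to 32752 (just under +1.0), so the full 12-bit range lands in the upper 12 bits of the Q15 word.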