
By btidey
#80604 It is well known that the internal ADC of the ESP8266 is not brilliant, but it is still usable in some less demanding applications. Theoretically it is a 10-bit converter, but normal use gives much more variable results than one would expect.

I have done some initial measurements to characterise it a bit more with a view to optimising its use.

The test set-up was an ESP-12F (i.e. raw analog input) programmed under Arduino. I set up a voltage source giving a reading of about 940 and tested with different resistors in series between the source and the ADC pin: 15K, 115K, 375K, 1M and 10M.

The sketch ran in a 30mSec loop, and each iteration took 17 ADC measurements in a fast inner loop. The loop calculated a simple average and a moving average. I captured the raw data and the averages of about 300 loops for each resistor value and analysed them in Excel.
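
For anyone wanting to reproduce this, a minimal sketch of the sort of loop used might look like the following. This is a reconstruction from the description above, not the actual test sketch, and the Serial output format is just for illustration:

float aveRoll = 0;   // rolling average; converges after a few loops

void setup() {
   Serial.begin(115200);
}

void loop() {
   int ave0 = analogRead(A0);          // first, isolated reading
   long sum = 0;
   for (int i = 0; i < 16; i++) {
      sum += analogRead(A0);           // 16 further back-to-back readings
   }
   float ave1_16 = sum / 16.0;         // simple average of the 16
   aveRoll = (aveRoll * 15 + ave1_16) / 16;   // combine 1:15 with previous value

   Serial.print(ave0); Serial.print(',');
   Serial.print(ave1_16); Serial.print(',');
   Serial.println(aveRoll);
   delay(30);                          // ~30mSec loop
}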

Stats.jpg


Here Ave0 is the average value of the first measurement in each loop, Ave1_16 is a simple average of the next 16 samples, and AveRoll is a moving average where each result is combined with the previous value in the ratio of 1:15. The STD values are the standard deviations of these three statistics. For a normal distribution one expects to see values within +/-2 STD of the average 95% of the time, so it is quite a good measure of the quality of the conversion.
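
The analysis itself was done in Excel, but purely for illustration, the same statistics could be computed on the ESP itself over a buffer of captured readings with something like:

#include <math.h>

// Mean and sample standard deviation of n captured readings.
// For roughly normal noise, ~95% of readings should fall within
// +/-2 standard deviations of the mean.
float mean(const int *s, int n) {
   long sum = 0;
   for (int i = 0; i < n; i++) sum += s[i];
   return (float)sum / n;
}

float stdDev(const int *s, int n) {
   float m = mean(s, n);
   float sq = 0;
   for (int i = 0; i < n; i++) {
      float d = s[i] - m;
      sq += d * d;
   }
   return sqrt(sq / (n - 1));
}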

The first comment is on the effect of source resistance. This starts to have an effect once it gets above 115K. The isolated Ave0 measurements show a distinct reduction and the STD grows. The effect is serious at 375K, bad at 1M and unusable at 10M.

The effect on Ave1_16 is much less severe, and even less on the rolling average, but the STD is still rising from 375K.

Part of the reason for the drop in Ave0 can be seen from a sample of the raw results.

adcSequence.jpg


This shows just one loop's raw results for three different source resistances, but the nature of the result is consistent across the loops. In particular, the first analogRead is lower than the rest, and this gets worse at higher source resistances. This is probably the effect of a sampling capacitor internal to the chip needing to charge up.

This means two things can be helpful in using the ADC. First, the source resistance should be kept as low as possible, ideally below 100K. This is already the case on NodeMCU units, where there is a medium-resistance divider in front of the pin.

Second, the reliability of the readings can be significantly improved by discarding an initial reading and then averaging the results. The 16-sample averaging used here makes the ADC behave more like a 9-bit converter.

Two example functions are:

int readADC() {
   int a = 0;
   int i;

   analogRead(A0);            // discard the first (low) reading
   for(i = 0; i < 16; i++) {
      a += analogRead(A0);    // accumulate 16 readings
   }
   return a >> 4;             // divide by 16 for the simple average
}

float filteredADC = 0;

float getFilteredADC() {
   analogRead(A0);            // discard the first (low) reading
   filteredADC = (filteredADC * 15 + analogRead(A0)) / 16;
   return filteredADC;
}


where readADC returns a simple average and getFilteredADC returns an even smoother value at the expense of lag in responding to changes.

Note the raw conversion time of analogRead in Arduino is about 125uSec, so readADC still only takes about 2mSec to execute.
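
A typical way to call them from a sketch (the 500mSec reporting interval here is arbitrary):

void loop() {
   int avg = readADC();               // ~2mSec: 1 discarded + 16 averaged reads
   float smooth = getFilteredADC();   // heavily smoothed, lags behind changes
   Serial.print(avg);
   Serial.print(" / ");
   Serial.println(smooth);
   delay(500);
}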

Other interesting characteristics would be supply voltage sensitivity, temperature stability and linearity. I may try to get some more data if I get some time.
By Bonzo
#80608 Some interesting information, btidey; on my battery check I am using an average of 10 results. I will get the NodeMCU back in and adjust the sketch to ignore the first result. I should really get OTA set up; perhaps on the next one!

I have ordered an external ADC as I want to attach more analog items, and I am interested to see what the results are like compared to the internal one.
By btidey
#80621 It is a good idea to use an external ADC for any serious use.

Although my experiments show that one can reduce the effects of the noise, some preliminary results from linearity measurements indicate it is unlikely to be useful in applications where accuracy is needed.

I knew that there were problems with measurements close to 0: one doesn't get meaningful readings until the ADC voltage is above 0.1V. More worrying is that there seems to be quite a variation in linearity between different devices. I think this might be largely determined by this offset at the bottom end. But it does seem that even if one had a simple calibration factor for the scale, it would still produce significantly different results across the whole range between devices.

It should still be fine for things like battery voltage indication, as in that case one is using a small part of the range and the linearity issues are less important.
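
For battery monitoring a single scale factor is usually enough. A sketch of the idea, using readADC from the earlier post (the scale value here is only a placeholder; it has to come from one measurement of the actual battery voltage and includes whatever divider sits in front of the ADC pin):

// One-point calibration: measure the real battery voltage once, then
// scale = measured volts / readADC() at that moment.
const float BATTERY_SCALE = 4.2 / 890.0;   // placeholder values only

float batteryVolts() {
   return readADC() * BATTERY_SCALE;
}
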
By btidey
#80628 Here are some linearity results. I measured at 50mV steps between 0 and 1V into the raw ADC input on 3 devices from different batches. The input voltage was monitored with a decent 4.5-digit voltmeter.

adclinearity.jpg


The first Dev1, Dev2, Dev3 columns show the raw results. As can be seen, the input below 0.1V is suspect, but linearity above that is not too bad, although there are significant differences in the absolute results between the devices.

I did a straight-line fit of the data between 0.2V and 0.9V for the 3 devices, giving the offset (c) and slope (m). Cal1, Cal2, Cal3 show calculated values based on these, and diff1, diff2, diff3 show the difference between these and the measured values. These show consistency at around the 5mV level for a particular device if it is calibrated using 2 points.

Previously I have used a single calibration factor equivalent to just a slope (m) value and assumed the offset (c) to be 0. This is fairly easy to do for, say, battery monitoring, as it needs just one measurement of the battery voltage. Doing a 2-point calibration is significantly more inconvenient. There may be some merit in just assuming an average offset value (c) of about -50 (=50mV) for all devices and factoring that into the one-point calibration of the slope. Although not tuned for a particular device, it should help improve the accuracy a bit.
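
As a sketch of that idea, assuming reading = m * volts + c with c fixed at the assumed average of about -50 counts, one calibration point then gives the slope. The CAL_ values here are purely illustrative:

// Assumed average bottom-end offset in ADC counts (about -50 = 50mV).
const float ADC_OFFSET = -50.0;
// One calibration point: a known input voltage and the reading it gave.
const float CAL_VOLTS  = 0.500;   // illustrative only
const float CAL_ADC    = 462.0;   // illustrative only
const float ADC_SLOPE  = (CAL_ADC - ADC_OFFSET) / CAL_VOLTS;   // counts per volt

float adcToVolts(float reading) {
   return (reading - ADC_OFFSET) / ADC_SLOPE;
}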