Advanced Users can post their questions and comments here for the not so Newbie crowd.

Moderator: eriksl

User avatar
By rudy
#81539
eriksl wrote:My question remains, why would you need microsecond (or even millisecond) precision on an ESP8266? I am quite sure none of the timers of the ESP8266 has the required accuracy to maintain this precision, even only for a few minutes (microseconds). I've found the RTC to be off several minutes within just 24 hours and that's supposed to be the accurate one (as opposed to the internal timers).


The RTC timing is not based on the crystal. I'm fairly sure it is just an internal RC oscillator. From what I remember (might be wrong) the RTC section can use a 32kHz crystal, but it isn't included in any of the designs. ESP32 boards have included a 32kHz crystal, at the cost of IO pins.

When running (not sleeping), the timing in the ESP8266 has to be accurate, and the specifications for the crystal used with the chip are very tight.

As for why anyone would need microsecond resolution: it really depends on what they are doing. If they were trying to make a mesh network and needed to measure the best route (least hops), then maybe it would be important.

I have not read the previous posts in this thread, so I am sure I am missing relevant points.
User avatar
By davydnorris
#81549 Hey @rudy - good to see you in the Advanced forum!

You're pretty much spot on there. The RTC is terrible and has quite bad temperature drift, which makes it unreliable as an actual precision clock, but it does the job as a wake-up alarm.

What tends to happen is that the ESP warms up while powered on, so the RTC calibration is done on a warm chip when you set deep sleep. The moment the ESP goes to sleep it starts cooling down and the RTC changes speed, with the effect that it wakes the chip up early. This can at least be compensated for in code.

The CPU clock, however, is always very accurate, and while the ESP is awake it's very reliable. That's the one I use to correct the time.

There are several projects I am doing where I need an accurate timestamp. For one I am syncing a mesh network of sleeping devices, so timing needs to be good, though not microsecond-level. But I need better than the +- 1 sec that SNTP offers. NTP at least uses multiple servers to try to correct to better resolution.

For most of my projects I would be happy with even half a second precision, which is what I can easily get to with SNTP oversampling and a little statistical compensation.

But there is one project in particular where I need really accurate and precise timestamps for an event, because I am:
- detecting multiple events from a moving object and using that to calculate its speed
- detecting the same event with multiple sensors placed across a large area and need to locate it in time and space
- sending the event info to a central location for post processing, and the send itself takes significant time

For that project I ideally want millisecond (or better) resolution timestamps, and the devices are not kept powered on all the time. That one's a fun project.
User avatar
By eriksl
#81567 I think SNTP will yield really not-so-bad timekeeping if constrained to a local area network, especially if repeated frequently. Probably on the order of tenths or even hundredths of a second.

I just had a quick glance at the SNTP spec and it's really easy to implement. I don't understand why Espressif needs so much code to implement it (and even then do a lousy job).

So this will be continued ;-)
User avatar
By eriksl
#81568 I've been busy with the following the last few days, which may be interesting. I noticed the free heap space going up and down all the time when using LWIP. That means we can never know if there is enough memory available for proper LWIP operation, and it's the obvious drawback of using a memory heap mechanism.

I noticed that LWIP also has a static memory mode, where a character array of size X is declared in bss and LWIP then does its own memory allocation. I tried it and it crashed. After much debugging it appears something very simple is going on: the function that's supposed to initialise this array is commented out by Espressif (...). I learn to love these guys more and more with every minute! So I enabled it again and it started to work. Now I am fiddling with the LWIP "tuning knobs" to make optimal use of the allocated memory. With heap based allocation that's all invisible and "everything" is possible. Until memory runs out.
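For reference, those tuning knobs live in lwipopts.h. The option names below are standard LWIP 1.4.x configuration macros; the values are only an illustrative starting point, not the settings used in this project:

```c
/* Illustrative lwipopts.h fragment for the static (pool) model.
 * Values need tuning per application. */
#define MEM_LIBC_MALLOC     0               /* LWIP's own allocator, not the heap  */
#define MEM_SIZE            (6 * 1024)      /* static byte array for mem_malloc()  */
#define MEMP_NUM_PBUF       8               /* pbufs referencing external payload  */
#define MEMP_NUM_TCP_PCB    4               /* simultaneous TCP connections        */
#define PBUF_POOL_SIZE      6               /* pbufs for incoming packets          */
#define TCP_MSS             1460
#define TCP_SND_BUF         (4 * TCP_MSS)   /* 4 segments of outgoing payload      */
#define TCP_SND_QUEUELEN    (2 * TCP_SND_BUF / TCP_MSS)
```

The nice property of the static model is exactly what's described above: every one of these numbers is a visible, fixed commitment, instead of a hidden peak heap usage.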

The amount of memory I need to have allocated in the static model frightens me a bit, in the sense that the same amount is also used in the heap model; we just don't see it. In other words, if you have, say, 5k of memory available, you can already run into problems sending large payloads. As LWIP debugging is turned off by default, you will never know why it failed.

So now this is my next step: get all of the LWIP tweaks right, reserve a certain amount of statically allocated memory for LWIP, and then all of the rest we know we can use without bothering LWIP.

To be able to send payloads of 4k (4 * TCP MSS), I'll probably need around 6k of memory dedicated to LWIP. For receiving payload, LWIP can probably use the same memory, but I haven't checked that yet. This memory is used for both TCP and UDP payloads, BTW.

And also BTW, for my code, I'll probably remove DNS and MDNS support as I'm not using it anyway and it just uses more memory.

And lastly, while debugging the above issue, I already updated some LWIP source files with the content of the stock LWIP 1.4.1 version. Differences are a few small bug fixes from the LWIP team and it made me see what changes Espressif had made (and I reverted them, wherever possible). Now I only need to do the other 90 files, I'll do that another time I guess ;-) End goal is to have a clean "stock" LWIP with the minimum of patching to have it run on ESP8266. From there it should be doable to move on to LWIP 2.