By marcelstoer
gschulz wrote:Where did you get your build from? I just did a fresh build at and ...

The build options are documented at and, yes, my cloud builder is one of them. All PRs get merged into the dev branch (#1838 was no exception), so you need to pick the dev branch when building the firmware.
By gschulz
#63988 marcelstoer, Thank you for the time and effort you put into solving this.

Short version: It works!

Long version: When you originally modified my test code, you removed the conn:close(), which caused it to hang (the Firefox timeout you explained). jankop hit the same issue, so he modified your version to close the port afterwards, and I continued to use that version. That version never worked correctly. After many builds and attempts, I finally decided to try my original version, which now works on the latest build. Curious as to the cause, I went line by line to identify the culprit. It all boils down to one line of code:

When you simplified my code, you took the following:
Code: Select all
conn:on("sent", function() WiFiclose() end)
function WiFiclose() conn:close() print(node.heap()) end

and converted it to:
Code: Select all
conn:on("sent", function() print(node.heap()) end)

jankop further added conn:close() back in:
Code: Select all
conn:on("sent", function() conn:close() print(node.heap()) end)

The smoking gun is the in-line callback function that executes conn:close(): that closure is never freed. My original code registered a separately defined function to execute conn:close(), and that gets freed properly. Bottom line: I believe this issue is now resolved, although it may point to a separate issue, namely that in-line callback functions are not being freed.
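For context, the two callback styles side by side, written in the same style as the snippets above. This is a hedged sketch of the pattern gschulz describes, not code taken from the thread; the surrounding server setup and the exact NodeMCU firmware behavior are assumptions:

```lua
-- Variant A (in-line closure): every request registers a fresh anonymous
-- closure that captures conn as an upvalue. Per the observation above,
-- this closure was not being freed on the affected builds.
conn:on("sent", function() conn:close() print(node.heap()) end)

-- Variant B (named function): the callback references one shared function
-- value; the separately defined function is reported to be freed properly.
function WiFiclose()
  conn:close()
  print(node.heap())  -- node.heap() reports the free heap in bytes
end
conn:on("sent", function() WiFiclose() end)
```

The difference matters because each in-line closure is a distinct Lua object allocated per request, so if it lingers, heap usage grows with every connection served.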
By gschulz
#64008 Is there any way to be more deterministic about freeing memory when a function call completes? To test this further, I opened multiple browsers to see how many simultaneous requests the ESP can service. I got up to 6, but it would crash after about 700 iterations. I can maintain 5 constant connections without crashing, but the heap drops to about 7K and is dangerously close to running out of memory. When I started the test, it would stabilize after 32 iterations, which implies there could be 32 concurrent requests at that time (and that was with just one request at a time). It seems to me that if we could free the memory immediately upon exiting the function, we could service far more than 5 requests at a time (more than 100). Any thoughts?
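One thing worth trying for more deterministic reclamation (a suggestion, not something confirmed in this thread): unregister the per-connection callbacks after closing and force a full garbage-collection cycle. `collectgarbage` is standard Lua; whether passing nil to `socket:on()` unregisters the callback on a given NodeMCU build, and whether a forced cycle recovers heap fast enough under load, are assumptions to verify:

```lua
-- Hypothetical cleanup helper (untested sketch for NodeMCU).
-- Dropping the callbacks means the closures no longer anchor the socket,
-- so both become collectable as soon as the GC runs.
local function cleanup(conn)
  conn:on("receive", nil)     -- assumption: nil unregisters the callback
  conn:on("sent", nil)
  conn:close()
  collectgarbage("collect")   -- force a full GC cycle (standard Lua)
  print(node.heap())          -- observe how much heap was reclaimed
end
```

A forced full collection costs CPU time on every request, so under heavy load it may trade throughput for headroom; measuring node.heap() before and after would show whether it actually helps here.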