Mirror of https://gitlab.com/niansa/libcrosscoro.git (synced 2025-03-06 20:53:32 +01:00)
* io_scheduler inline support
* add debug info for io_scheduler size issue
* move poll info into its own file
* cleanup for feature
* Fix valgrind-detected use-after-free introduced by inline processing

  Running coroutines inline with event processing caused a use-after-free bug, detected by valgrind in the inline tcp server/client benchmark code. Because inline processing resumes a coroutine _inline_ with the event or the timeout, if an event and its timeout occurred within the same epoll_wait() call, the second one's coroutine stack frame would already be destroyed by the time it was resumed, so the poll_info->processed check would read already-freed memory.

  The solution was to introduce a vector of coroutine handles that is appended to during each epoll_wait() iteration for both events and timeouts; only after the events and timeouts have been deduplicated are the coroutine handles resumed. This new vector has elided a malloc in the timeout function, but there is still a malloc to extract the poll infos from the timeout multimap data structure. The vector is a class member and is only ever cleared; with a monster set of timeouts it could grow extremely large, but I think that is worth the price of not re-allocating it.
Repository contents:

- concepts/
- detail/
- net/
- coro.hpp
- event.hpp
- fd.hpp
- generator.hpp
- io_scheduler.hpp
- latch.hpp
- mutex.hpp
- poll.hpp
- ring_buffer.hpp
- semaphore.hpp
- shared_mutex.hpp
- stop_signal.hpp
- sync_wait.hpp
- task.hpp
- task_container.hpp
- thread_pool.hpp
- when_all.hpp