mirror of https://gitlab.com/niansa/libcrosscoro.git synced 2025-03-06 20:53:32 +01:00
Commit graph

5 commits

Author SHA1 Message Date
Nils
40cb369aab Formatted 2021-07-28 12:09:16 +02:00
Josh Baldwin
78b6e19927
Update docs on io_scheduler for inline processing (#84)
* Update docs on io_scheduler for inline processing

Support gcc 10.3.1 (fedora 33 updated)
Update ci.yml to run fedora 32, 33, 34 and support both
gcc 10.2.1 and 10.3.1

* fedora 32 -> gcc-c++ drop version

* Update ci.yml and test_latch.cpp
2021-05-22 19:58:46 -06:00
Josh Baldwin
e9b225e42f
io_scheduler inline support (#79)
* io_scheduler inline support

* add debug info for io_scheduler size issue

* move poll info into its own file

* cleanup for feature

* Fix valgrind introduced use after free with inline processing

Running the coroutines inline with event processing caused
a use-after-free bug that valgrind detected in the inline
tcp server/client benchmark code.  If an event and a timeout
occurred in the same epoll_wait() call, the inline processing
would resume the coroutine _inline_ with whichever fired first,
and by the time the second one was processed the coroutine's
stack frame had already been destroyed, so the poll_info->processed
check was reading already-freed memory.

The solution was to introduce a vector of coroutine handles
that is appended to on each epoll_wait() iteration for both events
and timeouts; only after the events and timeouts are deduplicated
are the coroutine handles resumed (see the sketch after this message).

This new vector has elided a malloc in the timeout function, but
there is still a malloc to extract the poll infos from the timeout
multimap data structure.  The vector is also a class member and is
only ever cleared; with a monster set of timeouts it could grow
extremely large, but I think that is worth the price of not
re-allocating it.
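
A minimal sketch of the collect-deduplicate-then-resume idea described in this commit message, assuming illustrative names: the class, members, and helpers (inline_scheduler_sketch, m_handles_to_resume, gather_events, gather_timeouts) are not the io_scheduler's actual interface.

```cpp
#include <algorithm>
#include <coroutine>
#include <vector>

class inline_scheduler_sketch
{
public:
    void process_events_and_timeouts()
    {
        // Re-used member vector: it is only ever cleared, never shrunk, so
        // steady-state iterations do not re-allocate.
        m_handles_to_resume.clear();

        gather_events(m_handles_to_resume);   // handles made ready by epoll events
        gather_timeouts(m_handles_to_resume); // handles whose timeouts expired

        // The same operation can show up as both an event and a timeout within
        // one epoll_wait() pass; deduplicate so each coroutine is resumed once.
        std::sort(m_handles_to_resume.begin(), m_handles_to_resume.end(),
                  [](std::coroutine_handle<> a, std::coroutine_handle<> b)
                  { return a.address() < b.address(); });
        auto last = std::unique(m_handles_to_resume.begin(), m_handles_to_resume.end());
        m_handles_to_resume.erase(last, m_handles_to_resume.end());

        // Only now resume: no handle is resumed after its coroutine frame was
        // already destroyed by an earlier resume in this same iteration.
        for (auto handle : m_handles_to_resume)
        {
            if (handle && !handle.done())
            {
                handle.resume();
            }
        }
    }

private:
    // Placeholder collectors standing in for the real epoll event / timeout code.
    void gather_events(std::vector<std::coroutine_handle<>>&) {}
    void gather_timeouts(std::vector<std::coroutine_handle<>>&) {}

    std::vector<std::coroutine_handle<>> m_handles_to_resume;
};
```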
2021-04-11 15:07:01 -06:00
Josh Baldwin
8a64687510
coro::mutex (#35) 2021-01-16 20:27:11 -07:00
Josh Baldwin
bc3b956ed3
udp_peer! (#33)
* udp_peer!

I hope the udp peer now makes it clear how udp packets are
sent and received.  Time will tell!

* Fix broken benchmark tcp server listening race condition
2021-01-09 19:18:03 -07:00