Mirror of https://gitlab.com/niansa/libcrosscoro.git
Commit graph

14 commits

Author SHA1 Message Date
Josh Baldwin
475bcf6d8b
std::shared_ptr<executor_type> for coro::shared_mutex (#86)
* std::shared_ptr<executor_type> for coro::shared_mutex

* implement remaining types that leverage executor or io_scheduler
2021-05-22 22:36:57 -06:00
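The commit above changes coro::shared_mutex (and, per the second bullet, the remaining executor-aware types) from referencing its executor to owning it through a std::shared_ptr<executor_type>. A self-contained sketch of that ownership pattern, using a stand-in executor type rather than the library's real coro::thread_pool or coro::io_scheduler:

```cpp
#include <memory>

// Stand-in for coro::thread_pool / coro::io_scheduler; only here so the sketch compiles.
struct executor_stub
{
    void resume_waiter() {}
};

class shared_mutex_sketch
{
public:
    explicit shared_mutex_sketch(std::shared_ptr<executor_stub> executor)
        : m_executor(std::move(executor))
    {
    }

private:
    // Shared ownership: the executor stays alive for as long as any primitive
    // that may still need it to resume waiting coroutines.
    std::shared_ptr<executor_stub> m_executor;
};

int main()
{
    auto executor = std::make_shared<executor_stub>();
    shared_mutex_sketch mtx{executor};
    // The caller's `executor` handle can now go out of scope; `mtx` keeps the
    // executor alive, so destruction order no longer matters.
}
```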
Josh Baldwin
78b6e19927
Update docs on io_scheduler for inline processing (#84)
* Update docs on io_scheduler for inline processing

Support gcc 10.3.1 (fedora 33 updated)
Update ci.yml to run fedora 32,33,34 and support both
gcc 10.2.1 and 10.3.1

* fedora 32 -> gcc-c++ drop version

* Update ci.yml and test_latch.cpp
2021-05-22 19:58:46 -06:00
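For reference, a hedged configuration sketch of the inline processing mode these docs describe; the options and execution_strategy_t names are assumptions drawn from the library's later public API and may not match this revision exactly:

```cpp
#include <coro/coro.hpp>

int main()
{
    // Inline mode: completed polls resume their coroutines directly on the
    // event-loop thread instead of being handed off to a worker thread pool.
    auto scheduler = coro::io_scheduler{coro::io_scheduler::options{
        .execution_strategy = coro::io_scheduler::execution_strategy_t::process_tasks_inline}};
    (void)scheduler;
}
```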
Josh Baldwin
e9b225e42f
io_scheduler inline support (#79)
* io_scheduler inline support

* add debug info for io_scheduler size issue

* move poll info into its own file

* cleanup for feature

* Fix use-after-free (caught by valgrind) introduced by inline processing

Running the coroutines inline with event processing caused a
use-after-free bug, detected by valgrind in the inline tcp
server/client benchmark code.  If an event and a timeout for the
same poll occurred within the same epoll_wait() call, inline
processing would resume the coroutine for whichever fired first;
by the time the second one was handled, that coroutine's stack
frame had already been destroyed, so the poll_info->processed
check was reading already-freed memory.

The solution was to introduce a vector of coroutine handles which
is appended to on each epoll_wait() iteration for both events and
timeouts; only after the events and timeouts are deduplicated are
the coroutine handles resumed.

This new vector has elided a malloc in the timeout function, but
there is still a malloc to extract the poll infos from the timeout
multimap data structure.  The vector is also a class member and is
only ever cleared; with a monster set of timeouts it could grow
extremely large, but I think that is worth the price of not
re-allocating it.
2021-04-11 15:07:01 -06:00
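A simplified, self-contained sketch of the deferred-resume pattern described in the message above: handles are gathered from both events and timeouts during one epoll_wait() pass, deduplicated via a processed flag, and only then resumed. poll_info and the member names here are illustrative, not the library's actual code:

```cpp
#include <coroutine>
#include <vector>

struct poll_info
{
    std::coroutine_handle<> m_awaiting_coroutine{};
    bool                    m_processed{false}; // set once an event or a timeout claims this poll
};

class scheduler_sketch
{
public:
    // Called for every ready event and every expired timeout within one epoll_wait()
    // pass.  Nothing is resumed yet, so a second wake-up for the same poll can no
    // longer touch a coroutine frame that an earlier resume already destroyed.
    void gather(poll_info& pi)
    {
        if (!pi.m_processed)
        {
            pi.m_processed = true;
            m_handles_to_resume.push_back(pi.m_awaiting_coroutine);
        }
    }

    // After the pass is deduplicated, resume everything that was gathered.
    void resume_gathered()
    {
        for (auto handle : m_handles_to_resume)
        {
            if (handle != nullptr && !handle.done())
            {
                handle.resume();
            }
        }
        // The vector is a class member and is only cleared, so its capacity is
        // reused across iterations instead of being re-allocated each time.
        m_handles_to_resume.clear();
    }

private:
    std::vector<std::coroutine_handle<>> m_handles_to_resume{};
};
```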
Josh Baldwin
60a74af219
io_scheduler example (#57) 2021-02-15 18:52:45 -07:00
Josh Baldwin
80fea9c49a
io_scheduler uses thread pool to schedule work (#42)
* io_scheduler uses thread pool to schedule work

fixes #41

* use task_container in bench tcp server test

* adjust benchmark for github actions CI

* fix io_scheduler tests that cross thread memory boundaries

* more memory barriers

* sprinkle some shutdowns in there

* update readme
2021-01-24 19:34:39 -07:00
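The gist of the change above, as a self-contained sketch: the epoll thread only discovers ready coroutines and hands them to a pool of worker threads for resumption. thread_pool_stub is a stand-in, not the library's coro::thread_pool:

```cpp
#include <coroutine>
#include <functional>
#include <vector>

struct thread_pool_stub
{
    // In the real library this would enqueue work for worker threads; here it just
    // records the job so the example stays self-contained.
    void schedule(std::function<void()> job) { jobs.push_back(std::move(job)); }

    std::vector<std::function<void()>> jobs;
};

void on_io_ready(thread_pool_stub& pool, std::coroutine_handle<> awaiter)
{
    // The event-loop thread never runs user code itself; it only forwards the handle,
    // so a long-running coroutine cannot stall the next epoll_wait() call.
    pool.schedule([awaiter]() { awaiter.resume(); });
}
```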
Josh Baldwin
8a64687510
coro::mutex (#35) 2021-01-16 20:27:11 -07:00
Josh Baldwin
bc3b956ed3
udp_peer! (#33)
* udp_peer!

I hope the udp peer makes it clearer how udp packets are sent
and received now.  Time will tell!

* Fix broken benchmark tcp server listening race condition
2021-01-09 19:18:03 -07:00
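A generic sketch of the peer model hinted at above: UDP is connectionless, so a single peer object can both send to and receive from arbitrary endpoints, rather than splitting those roles into a client and a server. Every name below is an illustrative stand-in, not the library's API:

```cpp
#include <cstddef>
#include <cstdint>
#include <span>
#include <string>

struct endpoint
{
    std::string   address;
    std::uint16_t port;
};

class udp_peer_sketch
{
public:
    // Send a datagram to any endpoint; no prior connection is required.
    void send_to(const endpoint& to, std::span<const char> datagram);

    // Receive the next datagram and report which endpoint sent it.
    std::size_t recv_from(endpoint& from, std::span<char> buffer);
};
```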
Josh Baldwin
92a42699bc
udp client + server (#31) 2021-01-08 20:28:55 -07:00
Josh Baldwin
6faafa0688
Refactor net and into cpp files (#25) 2020-12-31 13:53:13 -07:00
Josh Baldwin
c02aefe26e
libc-ares dns client for hostname -> IP address lookups (#24)
* libc-ares dns client for hostname -> IP address lookups

* Add tcp_client dns lookup if hostname + dns available
2020-12-29 17:19:26 -07:00
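A self-contained sketch of the lookup path described above: when the connect target is a hostname rather than an IP literal, resolve it first (via c-ares in the real implementation) and connect to one of the returned addresses. The names below are illustrative stand-ins, not the library's API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct dns_result
{
    std::vector<std::string> ip_addresses; // A/AAAA records returned for the hostname
};

// Stand-in resolver; the real code performs an asynchronous c-ares lookup.
dns_result resolve_hostname(const std::string& hostname)
{
    (void)hostname;
    return dns_result{{"127.0.0.1"}};
}

// Stand-in connect; the real code opens a TCP connection to ip:port.
bool connect_to_ip(const std::string& ip, std::uint16_t port)
{
    (void)ip;
    (void)port;
    return true;
}

// If the target is already an IP literal, connect directly; otherwise resolve the
// hostname first and try the returned addresses in order.
bool connect_target(const std::string& target, std::uint16_t port, bool is_ip_literal)
{
    if (is_ip_literal)
    {
        return connect_to_ip(target, port);
    }

    for (const auto& ip : resolve_hostname(target).ip_addresses)
    {
        if (connect_to_ip(ip, port))
        {
            return true;
        }
    }
    return false;
}
```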
Josh Baldwin
e11058ef22
tcp_client (#22)
* tcp_client

fixes #21

* remove double ci build
2020-12-27 14:32:03 -07:00
Josh Baldwin
b15c7c1d16
io_scheduler support timeouts (#20)
* io_scheduler support timeouts

Closes #19

* io_scheduler resume_token<poll_status> for poll()

* io_scheduler read/write now use poll_status + size return
2020-11-11 23:06:42 -07:00
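A minimal sketch of the return shape the last bullet describes: callers learn both why the operation returned (an event or a timeout) and how many bytes were transferred. The enumerators and the read signature are assumptions, not the library's verified API:

```cpp
#include <cstddef>
#include <span>
#include <utility>

enum class poll_status
{
    event,   // the fd became ready and the operation ran
    timeout, // the deadline expired before the fd became ready
    error,   // epoll reported an error condition on the fd
    closed   // the peer closed the connection
};

// Hypothetical shape of a timed read: the status says why the call returned,
// and the size says how many bytes were actually read when status == event.
std::pair<poll_status, std::size_t> read_with_timeout(int fd, std::span<char> buffer);
```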
Josh Baldwin
1c7b340c72
add tcp_scheduler (#18)
Closes #17
2020-11-01 18:46:41 -07:00
Josh Baldwin
ddd3c76c53
Rename scheduler to io_scheduler (#16)
Closes #15
2020-11-01 12:08:09 -07:00
Renamed from inc/coro/scheduler.hpp