* Update README with section links
* add # to links
* try event instead of coro::event
* Update section names to remove "::" since markdown doesn't seem to link
properly with them
* Add coro::mutex example to readme
* explicit lock_operation ctor
* lock_operation await_ready() uses try_lock
This allows the lock operation to skip await_suspend() entirely
if the lock is currently unlocked.
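A minimal sketch of the idea (not libcoro's actual coro::mutex; member names here are illustrative, and a std::mutex guards the waiter list purely to keep the example short): await_ready() calls try_lock(), so an uncontended lock never suspends and await_suspend() only runs on the slow path.

```cpp
#include <coroutine>
#include <deque>
#include <mutex>

class async_mutex
{
public:
    struct lock_operation
    {
        explicit lock_operation(async_mutex& m) : m_mutex(m) {}

        // Fast path: if the mutex is free, acquire it here and skip
        // await_suspend() entirely -- the coroutine never suspends.
        bool await_ready() noexcept { return m_mutex.try_lock(); }

        // Slow path: park this coroutine until unlock() hands it the mutex.
        bool await_suspend(std::coroutine_handle<> awaiter)
        {
            std::scoped_lock guard{m_mutex.m_state};
            if (!m_mutex.m_locked)
            {
                // Released between await_ready() and here; take it and keep running.
                m_mutex.m_locked = true;
                return false; // do not suspend
            }
            m_mutex.m_waiters.push_back(awaiter);
            return true;
        }

        void await_resume() noexcept {}

        async_mutex& m_mutex;
    };

    lock_operation lock() { return lock_operation{*this}; }

    bool try_lock()
    {
        std::scoped_lock guard{m_state};
        if (m_locked) { return false; }
        m_locked = true;
        return true;
    }

    void unlock()
    {
        std::coroutine_handle<> next{nullptr};
        {
            std::scoped_lock guard{m_state};
            if (m_waiters.empty())
            {
                m_locked = false;
            }
            else
            {
                // Hand the lock directly to the next waiter; m_locked stays true.
                next = m_waiters.front();
                m_waiters.pop_front();
            }
        }
        if (next) { next.resume(); }
    }

private:
    std::mutex                            m_state;
    bool                                  m_locked{false};
    std::deque<std::coroutine_handle<>>   m_waiters;
};
```

Usage is `co_await mtx.lock();` followed later by `mtx.unlock();`; when nobody holds the mutex the co_await is effectively free.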
* io_scheduler uses thread pool to schedule work
Fixes #41
* use task_container in bench tcp server test
* adjust benchmark for github actions CI
* fix io_scheduler tests cross thread memory boundaries
* more memory barriers
* sprinkle some shutdowns in there
* update readme
* udp_peer!
I hope the udp peer makes it clear how udp packets are
sent and received now; a sketch of the underlying flow
follows below. Time will tell!
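For reference, this is the raw flow a UDP "peer" object wraps, written against plain POSIX sockets rather than coro::net's actual API (the port and message are made up, and error checking is omitted): a peer is just a socket plus the remote address used with sendto()/recvfrom(); there is no connect/accept step.

```cpp
#include <arpa/inet.h>
#include <iostream>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    // "Server" peer bound to an arbitrary loopback port.
    int server = ::socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(8123); // arbitrary port for the example
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    ::bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    // "Client" peer; no bind needed, the kernel picks an ephemeral port.
    int client = ::socket(AF_INET, SOCK_DGRAM, 0);
    std::string msg = "hello";
    ::sendto(client, msg.data(), msg.size(), 0,
             reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    // The receiver learns the sender's address from recvfrom(), which is
    // all it needs to reply -- that pairing is what a udp peer holds onto.
    char        buffer[64]{};
    sockaddr_in from{};
    socklen_t   from_len = sizeof(from);
    auto n = ::recvfrom(server, buffer, sizeof(buffer), 0,
                        reinterpret_cast<sockaddr*>(&from), &from_len);
    std::cout << "received " << n << " bytes: " << buffer << '\n';

    ::close(client);
    ::close(server);
}
```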
* Fix broken benchmark tcp server listening race condition
* io_scheduler support timeouts
Closes #19
* io_scheduler resume_token<poll_status> for poll()
* io_scheduler read/write now use poll_status + size return
See the issue for more details. In general, attempting to
implement a coro::thread_pool exposed that coro::sync_wait
and coro::when_all only worked if the coroutines executed on
that same thread. They should now be able to execute on
another thread, to be determined in a later issue.
Fixes #7
Lots of things were tried, including slabbing requests to
reduce allocations on schedule. It turns out that simply not
calling read/write again, by setting an atomic flag once it
has already been triggered, was a major win.
Tuned all the atomic operations with std::memory_order*
to release/acquire or relaxed appropriately.
When processing items in the accept queue, they are now grabbed
in 128-task chunks and processed inline. This had a monster
speedup effect since the lock is significantly less contended.
In all, throughput went from about 1.5 million ops/sec to
4 million ops/sec.
Good fun day.
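A rough sketch of the two queue-side wins described above, with illustrative names rather than the scheduler's real ones: an acquire/release flag so the wake-up write only happens once per batch, and draining the queue in 128-task chunks outside the lock so it is taken far less often.

```cpp
#include <algorithm>
#include <atomic>
#include <coroutine>
#include <cstddef>
#include <mutex>
#include <vector>

struct resume_queue
{
    static constexpr std::size_t task_chunk_size = 128;

    void push(std::coroutine_handle<> handle)
    {
        {
            std::scoped_lock guard{m_lock};
            m_queue.push_back(handle);
        }
        // Only the first producer after a drain pays for waking the epoll
        // loop; everyone else sees the flag already set and skips the write.
        if (!m_triggered.exchange(true, std::memory_order_acq_rel))
        {
            // wake the event loop, e.g. write() to an eventfd (not shown)
        }
    }

    // Runs on the event loop thread.
    void process_chunk()
    {
        // Clear the trigger first so pushes racing with this drain re-arm it.
        m_triggered.store(false, std::memory_order_release);

        std::vector<std::coroutine_handle<>> chunk;
        chunk.reserve(task_chunk_size);
        {
            std::scoped_lock guard{m_lock};
            auto count = std::min(task_chunk_size, m_queue.size());
            chunk.assign(m_queue.begin(), m_queue.begin() + count);
            m_queue.erase(m_queue.begin(), m_queue.begin() + count);
        }

        // Resume the whole chunk outside the lock; producers stay unblocked.
        for (auto handle : chunk)
        {
            handle.resume();
        }
    }

    std::mutex                           m_lock;
    std::vector<std::coroutine_handle<>> m_queue;
    std::atomic<bool>                    m_triggered{false};
};
```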
The scheduler had a 'nice' optimization where any newly
submitted or resumed task would check whether the current
thread it was executing on was the event processing thread,
and if so, directly start or resume the task rather than
pushing it into the FIFO queues. Well, this has a bad side
effect: a recursive task which generates sub-tasks will
eventually cause a stack overflow. To avoid this, submitted
and resumed tasks now go through the normal FIFO queue, which
is slower but removes the recursive function calls.
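A simplified, single-threaded sketch of why the FIFO route avoids the overflow (locking omitted, names illustrative): handles resumed from a drain loop never nest resume() calls, no matter how deeply tasks spawn sub-tasks.

```cpp
#include <coroutine>
#include <deque>

// If submit() resumed the coroutine directly when called from the event
// thread, a task submitting sub-tasks that submit further sub-tasks would
// resume each one inside the previous resume() call, growing the native
// stack with the recursion depth. Enqueueing everything keeps the drain
// loop's stack flat.
struct fifo_scheduler
{
    std::deque<std::coroutine_handle<>> m_fifo;

    void submit(std::coroutine_handle<> handle)
    {
        // Always enqueue, even when called from the event thread itself.
        m_fifo.push_back(handle);
    }

    void drain()
    {
        // Each handle is resumed from this loop's stack frame, so nested
        // submissions only grow the queue, never the call stack.
        while (!m_fifo.empty())
        {
            auto handle = m_fifo.front();
            m_fifo.pop_front();
            handle.resume();
        }
    }
};
```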
Attempted to test an accept task coroutine, but the
performance was lacking; it took a major hit, so that
idea is scrapped for now. Currently, processing events
inline on the background thread's epoll loop appears
to be the most efficient approach.
Prioritize resumed tasks over new tasks.
Fixed an issue where operator() being called immediately on a
lambda caused the lambda (and its captures) to go out of scope
while the coroutine still referenced them. Debug builds didn't
show a problem but Release builds did.
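A self-contained illustration of the pitfall, using a throwaway task type rather than coro::task: a coroutine lambda's captures live in the closure object, so invoking operator() on a temporary lambda leaves the suspended coroutine pointing at a destroyed closure once the full expression ends.

```cpp
#include <coroutine>
#include <iostream>
#include <string>
#include <utility>

// Minimal lazy task type, just enough to make the example compile.
struct task
{
    struct promise_type
    {
        task get_return_object()
        {
            return task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };

    explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
    task(task&& other) noexcept : handle(std::exchange(other.handle, nullptr)) {}
    ~task() { if (handle) { handle.destroy(); } }

    std::coroutine_handle<promise_type> handle;
};

int main()
{
    std::string name = "tcp_server";

    // BUG (the Release-only crash): the lambda is a temporary and its
    // captures live inside that temporary. The coroutine frame does not
    // copy the closure, so after this full expression the closure is gone
    // and `msg` dangles. (Never resumed here, so the demo itself is safe.)
    auto broken = [msg = name]() -> task {
        co_await std::suspend_always{};
        std::cout << msg << '\n'; // would read a destroyed capture
    }(); // operator() invoked immediately on the temporary lambda
    (void)broken; // intentionally never resumed

    // Fix: keep the lambda alive as long as the coroutine, or move the
    // captured state into a parameter so it is copied into the frame.
    auto ok = [](std::string msg) -> task {
        co_await std::suspend_always{};
        std::cout << msg << '\n'; // `msg` lives in the coroutine frame
    }(name);

    ok.handle.resume(); // run up to the co_await
    ok.handle.resume(); // print and reach final_suspend
}
```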