mirror of https://gitlab.com/niansa/libcrosscoro.git synced 2025-03-06 20:53:32 +01:00
Commit graph

26 commits

Author SHA1 Message Date
Josh Baldwin
303cc3384c
Issue 5/clang format (#6)
* clang-format all existing files

* Add detailed comments for event
2020-10-14 08:53:00 -06:00
Josh Baldwin
1a2ec073ca
Add tests for tasks that throw (#4)
* Add tests for tasks that throw

* Additional task types for throwing coverage
2020-10-12 17:29:47 -06:00
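A minimal sketch of the kind of coverage this commit adds, assuming the present-day libcoro API (coro::task, coro::sync_wait); the API at this point in history may have differed, and all names below are illustrative:

    #include <coro/coro.hpp>
    #include <cassert>
    #include <stdexcept>
    #include <string>

    int main()
    {
        // A task whose body throws: the promise stores the exception and it
        // is rethrown when the result is awaited.
        auto throwing_task = []() -> coro::task<int> {
            throw std::runtime_error{"expected"};
            co_return 42; // never reached
        };

        bool caught = false;
        try
        {
            coro::sync_wait(throwing_task());
        }
        catch (const std::runtime_error& e)
        {
            caught = (std::string{e.what()} == "expected");
        }
        assert(caught);
        return caught ? 0 : 1;
    }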
Josh Baldwin
31dded8611
Add CI & code coverage (#2)
* Add CI & code coverage

* Remove un-used -lzmq dependency

* Update readme with coverage/background/goals
2020-10-11 18:23:05 -06:00
jbaldwin
c820498f50 Add coro::generator<T> 2020-10-11 11:42:12 -07:00
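A hedged usage sketch of coro::generator<T>, assuming the interface of today's library; the fibonacci example is illustrative only:

    #include <coro/coro.hpp>
    #include <cstdint>
    #include <iostream>

    // Lazily yields values; the caller pulls them with a range-for loop.
    coro::generator<uint64_t> fibonacci(uint64_t limit)
    {
        uint64_t a = 0;
        uint64_t b = 1;
        while (a < limit)
        {
            co_yield a;
            auto next = a + b;
            a = b;
            b = next;
        }
    }

    int main()
    {
        for (auto value : fibonacci(100))
        {
            std::cout << value << ' ';
        }
        std::cout << '\n';
    }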
jbaldwin
771e52e985 Add latch, renamed "amre" to event
Remove the event return type; this should just be a task<T>
2020-10-03 16:29:30 -06:00
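A hedged sketch of the renamed primitive, assuming the present-day coro::event interface (the commit also adds coro::latch, which behaves similarly but counts down before releasing awaiters); coro::sync_wait and coro::when_all are used here only to drive the example and may postdate this commit:

    #include <coro/coro.hpp>
    #include <iostream>

    int main()
    {
        coro::event e{};

        auto waiter = [&e]() -> coro::task<void> {
            co_await e; // suspends until e.set() is called
            std::cout << "event was set\n";
        };

        auto setter = [&e]() -> coro::task<void> {
            e.set(); // resumes every coroutine awaiting the event
            co_return;
        };

        coro::sync_wait(coro::when_all(waiter(), setter()));
        return 0;
    }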
jbaldwin
6c593cafad Enable -Wall and -Wextra
Renamed some scheduler internals, made scheduler
bench tests use std::memory_order_relaxed for counters.
2020-09-30 22:57:54 -06:00
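The relaxed counters mentioned above follow the usual pattern for benchmark counters that are only totaled after the run; a minimal sketch with a hypothetical name:

    #include <atomic>
    #include <cstdint>

    std::atomic<uint64_t> g_ops{0};

    void on_operation_complete()
    {
        // No other data is published through this counter, so relaxed
        // ordering is sufficient and avoids unnecessary synchronization cost.
        g_ops.fetch_add(1, std::memory_order_relaxed);
    }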
jbaldwin
cbd1046161 Performance tuning
Lots of things were tried, including slabbing requests to reduce
allocations on schedule.  Turns out that simply not calling read/write,
by setting an atomic flag once it has already been triggered, was
a major win.

Tuned all the atomic operations with std::memory_order*
to release/acquire or relaxed appropriately.

When processing items in the accept queue, they are now grabbed
in 128-task chunks and processed inline.  This had a monster
speedup effect since the lock is contended significantly less.

In all, throughput went from about 1.5M ops/sec to 4M ops/sec.

Good fun day.
2020-09-30 22:01:27 -06:00
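A hedged sketch of the two ideas described above, using hypothetical names rather than the scheduler's real internals: an atomic flag so the wake-up read/write happens at most once per cycle, and draining the accept queue in chunks of up to 128 handles per lock acquisition:

    #include <algorithm>
    #include <atomic>
    #include <coroutine>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <vector>

    struct scheduler_sketch
    {
        std::atomic<bool>                   m_triggered{false};
        std::mutex                          m_mutex{};
        std::deque<std::coroutine_handle<>> m_accept_queue{};

        void schedule(std::coroutine_handle<> handle)
        {
            {
                std::scoped_lock lock{m_mutex};
                m_accept_queue.push_back(handle);
            }
            // Only the first caller per wake-up cycle pays for the wake-up
            // read/write; everyone else sees the flag already set.
            if (!m_triggered.exchange(true, std::memory_order_acq_rel))
            {
                wake_up();
            }
        }

        void process_accept_queue()
        {
            m_triggered.store(false, std::memory_order_release);

            while (true)
            {
                // Grab up to 128 handles per lock acquisition and resume them
                // inline, so the lock is contended far less often.
                std::vector<std::coroutine_handle<>> chunk{};
                {
                    std::scoped_lock lock{m_mutex};
                    auto n = static_cast<std::ptrdiff_t>(
                        std::min<std::size_t>(128, m_accept_queue.size()));
                    if (n == 0)
                    {
                        return;
                    }
                    chunk.assign(m_accept_queue.begin(), m_accept_queue.begin() + n);
                    m_accept_queue.erase(m_accept_queue.begin(), m_accept_queue.begin() + n);
                }
                for (auto handle : chunk)
                {
                    handle.resume();
                }
            }
        }

        void wake_up() {} // stand-in for e.g. a write to an eventfd
    };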
jbaldwin
e0b8eb3f27 Added thread_strategy types for scheduler, fixed stack overflow
The scheduler had a 'nice' optimization where any newly
submitted or resumed task would check whether the current
executing thread was the process event thread and, if so,
directly start or resume the task rather than pushing it into
the FIFO queues.  This has a bad side effect: a recursive
task which generates sub-tasks will eventually cause a
stack overflow.  To avoid this, submitted and resumed tasks
now go through the normal FIFO queue, which is slower but
removes the recursive function calls.
2020-09-28 23:20:56 -06:00
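A hedged illustration (hypothetical names) of why the change removes the stack overflow: resuming inline from the event thread nests a stack frame per sub-task, while pushing every resume through the FIFO queue flattens the work into a loop:

    #include <coroutine>
    #include <deque>

    struct scheduler_sketch
    {
        std::deque<std::coroutine_handle<>> m_fifo{};

        // Old behaviour (sketch): resume directly when already on the event
        // thread; recursive sub-task submission grows the call stack.
        void resume_inline(std::coroutine_handle<> handle) { handle.resume(); }

        // New behaviour (sketch): always enqueue; the event loop pops and
        // resumes in a flat loop, so recursion depth stays constant.
        void resume_queued(std::coroutine_handle<> handle) { m_fifo.push_back(handle); }

        void run_event_loop_iteration()
        {
            while (!m_fifo.empty())
            {
                auto handle = m_fifo.front();
                m_fifo.pop_front();
                handle.resume(); // at most one frame deep, regardless of task nesting
            }
        }
    };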
jbaldwin
40c84c1bf0 Move user resume token to scheduler func
This prevents the user from providing a resume token
to a yield function with another scheduler as its
internal pointer.
2020-09-28 19:39:48 -06:00
jbaldwin
7f3e4af71f Add scheduler_after
Attempted to test an accept task coroutine, but
the performance took a major hit, so that idea
is scrapped for now.  Currently, processing
events inline on the background thread's
epoll loop appears to be the most
efficient approach.
2020-09-28 18:52:06 -06:00
jbaldwin
fa374a4e95 adjust ops/sec in benchmark to be more realistic 2020-09-28 15:50:11 -06:00
jbaldwin
81d2ad3b3a Add benchmarks
Prioritize resumed tasks over new tasks.
Fixed an issue with operator() being called immediately
on lambdas, causing them to go out of scope;
Debug builds didn't show a problem but Release builds did.
2020-09-28 00:29:40 -06:00
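A hedged sketch of the lambda lifetime bug referenced above: a coroutine lambda's captures live in the closure object, not in the coroutine frame, so invoking a temporary capturing lambda and keeping only the returned task leaves the suspended coroutine reading a destroyed closure. Names below are illustrative and coro::sync_wait is today's API:

    #include <coro/coro.hpp>

    // BAD (sketch): `value` is a capture, so it lives in the temporary
    // closure object, which is destroyed at the end of this full expression;
    // the lazily-started task reads a destroyed object when resumed later.
    // Debug builds often appear to work; Release builds expose the bug.
    coro::task<int> make_task_bad(int value)
    {
        return [value]() -> coro::task<int> { co_return value; }();
    }

    // OK (sketch): coroutine parameters are copied into the coroutine frame,
    // so they remain valid for the task's entire lifetime.
    coro::task<int> make_task_ok(int value)
    {
        co_return value;
    }

    int main()
    {
        auto value = coro::sync_wait(make_task_ok(42));
        return value == 42 ? 0 : 1;
    }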
jbaldwin
fdfcb0fe62 Create dedicated task manager for scheduler task lifetime management 2020-09-27 18:51:15 -06:00
jbaldwin
cf9bce1d97 split scheduler + resume file descriptors 2020-09-27 14:33:38 -06:00
jbaldwin
7e4e37e1c2 scheduler now owns tasks that are submitted 2020-09-27 14:20:30 -06:00
jbaldwin
0093173c55 rename engine to scheduler
rename schedule_task to resume_token
2020-09-26 23:35:33 -06:00
jbaldwin
6d5c3be6c3 Force engine_event.set() to resume coroutines 2020-09-26 14:05:29 -06:00
jbaldwin
0aaf21e4a6 Have yield return engine_event instead of raw coroutine
This allows for an internal unsafe_yield() which will
call coroutine.resume() directly from internal engine
supported yield functions.

This allows for an external yield() which now co_awaits
the event; the event, upon being set, will correctly
resume the awaiting coroutine on the engine thread for
the user.
2020-09-26 13:10:37 -06:00
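A hypothetical, trimmed-down awaitable resembling the engine_event described above; in the real engine, set() would hand the handle back to the engine thread rather than resuming inline, and unsafe_yield() would bypass the event and call coroutine.resume() directly:

    #include <coroutine>

    struct engine_event_sketch
    {
        std::coroutine_handle<> m_awaiter{nullptr};
        bool                    m_set{false};

        // co_await: suspend unless already set, and remember who is waiting.
        bool await_ready() const noexcept { return m_set; }
        void await_suspend(std::coroutine_handle<> awaiter) noexcept { m_awaiter = awaiter; }
        void await_resume() const noexcept {}

        // set(): resume the awaiter (inline here; the real engine would
        // schedule the resume onto the engine thread instead).
        void set() noexcept
        {
            m_set = true;
            if (m_awaiter)
            {
                m_awaiter.resume();
            }
        }
    };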
jbaldwin
222d55de30 Various engine cleanup/rename
	suspend -> yield
	added wait
	cleaned up engine growth functions
2020-09-26 11:12:59 -06:00
jbaldwin
2f575861dc engine works with normal coro::task<void> 2020-09-22 12:12:30 -06:00
jbaldwin
8cb23230e1 Added engine_task to properly delete completed root tasks
Added engine functions with tests
	poll()
	read()
	write()
	suspend()
	suspend_point()
	resume()
	shutdown()
2020-09-21 00:43:03 -06:00
jbaldwin
9e14a7b4c3 Task continuation + engine epoll with resume
Task-in-task previously didn't work consistently
due to continuation issues.
2020-09-12 14:53:33 -06:00
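One standard way to implement the task continuation this commit refers to (and the way later versions of the library do it) is to store the awaiting coroutine's handle in the awaited task's promise and resume it from the final awaiter via symmetric transfer; a hedged sketch with hypothetical names:

    #include <coroutine>

    // Hypothetical promise showing only the continuation plumbing: the outer
    // task stores its handle here when it co_awaits the inner task.
    struct promise_with_continuation
    {
        std::coroutine_handle<> m_continuation{nullptr};
    };

    // Final awaiter sketch: when the inner task finishes, transfer control
    // straight to the stored continuation (the outer task), or to a no-op
    // coroutine if nobody is awaiting. This is what makes task-in-task
    // complete consistently.
    struct final_awaiter_sketch
    {
        bool await_ready() const noexcept { return false; }

        std::coroutine_handle<> await_suspend(
            std::coroutine_handle<promise_with_continuation> handle) const noexcept
        {
            if (auto continuation = handle.promise().m_continuation; continuation)
            {
                return continuation;
            }
            return std::noop_coroutine();
        }

        void await_resume() const noexcept {}
    };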
jbaldwin
4aa248cd17 task<void> working, task co_await task working
Turns out that the final_suspend() method is required
to return std::suspend_always, otherwise coroutine_handle<>.done()
will not report completion properly.  Refactored the task class
to allow the user to decide if they want to suspend at the beginning,
but it now forces a suspend at the end to guarantee that
task.is_ready() will work properly.
2020-09-08 22:44:38 -06:00
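A minimal, hypothetical task type illustrating the point above: because final_suspend() returns std::suspend_always, the frame survives completion, so done() / is_ready() can be queried and the owner destroys the handle itself:

    #include <coroutine>
    #include <exception>
    #include <utility>

    struct minimal_task
    {
        struct promise_type
        {
            minimal_task get_return_object()
            {
                return minimal_task{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            // Suspending at the end keeps the frame alive so done() is observable.
            std::suspend_always final_suspend() noexcept { return {}; }
            void return_void() noexcept {}
            void unhandled_exception() noexcept { std::terminate(); }
        };

        explicit minimal_task(std::coroutine_handle<promise_type> handle) : m_handle(handle) {}
        minimal_task(const minimal_task&) = delete;
        minimal_task(minimal_task&& other) noexcept
            : m_handle(std::exchange(other.m_handle, nullptr)) {}
        ~minimal_task()
        {
            if (m_handle)
            {
                m_handle.destroy(); // the owner, not the coroutine, frees the frame
            }
        }

        bool is_ready() const noexcept { return m_handle.done(); }
        void resume() { if (!m_handle.done()) { m_handle.resume(); } }

        std::coroutine_handle<promise_type> m_handle;
    };

    minimal_task example()
    {
        co_return;
    }

    int main()
    {
        auto task = example();          // suspended at initial_suspend
        task.resume();                  // runs to completion, parks at final_suspend
        return task.is_ready() ? 0 : 1; // done() is true because the frame still exists
    }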
jbaldwin
fb04c43370 Template task suspends, prototype out engine thoughts 2020-09-07 23:29:03 -06:00
jbaldwin
bfe97a12b4 task and async_manual_reset_event 2020-09-07 18:21:40 -06:00
Josh Baldwin
da140b9319
Initial commit 2020-09-07 12:56:57 -06:00