Mirror of https://gitlab.com/niansa/libcrosscoro.git (synced 2025-03-06 20:53:32 +01:00)
* io_scheduler inline support
* add debug info for io_scheduler size issue
* move poll info into its own file
* cleanup for feature
* Fix valgrind-detected use after free introduced by inline processing

Running the coroutines inline with event processing caused a use-after-free bug, detected by valgrind in the inline tcp server/client benchmark code. Because inline processing resumes a coroutine _inline_ with its event or its timeout, if both the event and the timeout occurred within the same epoll_wait() call, the first resume could destroy the coroutine's stack frame; handling the second would then read poll_info->processed from already-freed memory.

The solution was to introduce a vector of coroutine handles that is appended to on each epoll_wait() iteration of events and timeouts; only once the events and timeouts are deduplicated are the coroutine handles resumed. This new vector has elided a malloc in the timeout function, but there is still a malloc to extract the poll infos from the timeout multimap data structure. The vector is a class member and is only ever cleared, so with a monster set of timeouts it could grow extremely large, but that seems worth the price of not re-allocating it on every iteration.
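A minimal sketch of the deferred-resume pattern the commit message describes: gather handles from events and timeouts first, deduplicate via the processed flag, and only then resume. All names here (poll_info fields, scheduler_sketch, the gather_* helpers) are illustrative assumptions, not the actual libcoro code.

// Sketch only: names are assumptions, not the real libcoro implementation.
#include <coroutine>
#include <utility>
#include <vector>

struct poll_info
{
    std::coroutine_handle<> m_awaiting_coroutine{nullptr};
    bool                    m_processed{false};
};

class scheduler_sketch
{
public:
    void event_loop_iteration()
    {
        // Phase 1: gather handles. A poll_info lives on its coroutine's
        // frame, so it must not be touched after that coroutine resumes
        // (the resume may run the frame to completion and destroy it).
        for (poll_info* pi : gather_events())
        {
            if (!std::exchange(pi->m_processed, true))
            {
                m_handles_to_resume.push_back(pi->m_awaiting_coroutine);
            }
        }
        for (poll_info* pi : gather_timeouts())
        {
            // Deduplicate: the event for this poll_info may already have
            // fired in the same epoll_wait() batch.
            if (!std::exchange(pi->m_processed, true))
            {
                m_handles_to_resume.push_back(pi->m_awaiting_coroutine);
            }
        }

        // Phase 2: resume only after every poll_info has been inspected,
        // so no resume can free memory that phase 1 still needs to read.
        for (auto& handle : m_handles_to_resume)
        {
            handle.resume();
        }
        // clear() keeps the capacity, so the vector is re-used across
        // iterations instead of being re-allocated each time.
        m_handles_to_resume.clear();
    }

private:
    // Stand-ins for the real event sources: these would wrap the
    // epoll_wait() results and the expired entries of the timeout multimap.
    std::vector<poll_info*> gather_events() { return {}; }
    std::vector<poll_info*> gather_timeouts() { return {}; }

    std::vector<std::coroutine_handle<>> m_handles_to_resume;
};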
29 lines · 477 B · C++
#pragma once

#include "coro/concepts/awaitable.hpp"

#include <concepts>
#include <coroutine>
// #include <type_traits>
// #include <utility>

namespace coro::concepts
{
template<typename type>
concept executor = requires(type t, std::coroutine_handle<> c)
{
    { t.schedule() } -> coro::concepts::awaiter;
    { t.yield() } -> coro::concepts::awaiter;
    { t.resume(c) } -> std::same_as<void>;
};

} // namespace coro::concepts
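For illustration, a minimal type satisfying this concept might look like the sketch below. The include path and the assumption that std::suspend_never models coro::concepts::awaiter are hypothetical, not confirmed by this file.

// Hypothetical usage example (not part of this file). Assumes this header
// lives at coro/concepts/executor.hpp and that std::suspend_never (which
// provides await_ready/await_suspend/await_resume) models
// coro::concepts::awaiter.
#include <coroutine>

#include "coro/concepts/executor.hpp"

struct inline_executor
{
    // Scheduling runs the awaiting coroutine immediately, so the awaiter
    // never suspends.
    std::suspend_never schedule() { return {}; }
    // Yielding is likewise a no-op for an inline executor.
    std::suspend_never yield() { return {}; }
    // Resume the given coroutine directly on the calling thread.
    void resume(std::coroutine_handle<> c) { c.resume(); }
};

static_assert(coro::concepts::executor<inline_executor>);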