mirror of https://gitlab.com/niansa/libcrosscoro.git synced 2025-03-06 20:53:32 +01:00

coro::semaphore (#65)

* coro::semaphore

* coro::ring_buffer<E, N>
Josh Baldwin 2021-02-23 11:05:21 -07:00 committed by GitHub
parent 6a2f398f9a
commit c1acf8b80d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
16 changed files with 1144 additions and 4 deletions

@@ -66,4 +66,14 @@ template_contents=$(cat 'README.md')
example_contents=$(cat 'examples/coro_task_container.cpp')
echo "${template_contents/\$\{EXAMPLE_CORO_TASK_CONTAINER_CPP\}/$example_contents}" > README.md
template_contents=$(cat 'README.md')
example_contents=$(cat 'examples/coro_semaphore.cpp')
echo "${template_contents/\$\{EXAMPLE_CORO_SEMAPHORE_CPP\}/$example_contents}" > README.md
template_contents=$(cat 'README.md')
example_contents=$(cat 'examples/coro_ring_buffer.cpp')
echo "${template_contents/\$\{EXAMPLE_CORO_RING_BUFFER_CPP\}/$example_contents}" > README.md
git add README.md

@@ -131,6 +131,39 @@ $ ./examples/coro_mutex
1, 2, 3, 4, 5, 6, 7, 8, 10, 9, 12, 11, 13, 14, 15, 16, 17, 18, 19, 21, 22, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 47, 48, 49, 46, 50, 51, 52, 53, 54, 55, 57, 58, 59, 56, 60, 62, 61, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,
```
### coro::semaphore
The `coro::semaphore` is a thread-safe async tool that protects a limited number of resources by only allowing a fixed number of consumers to hold an acquisition at any given time. The maximum number of resources is set by its constructor. This means that if a resource is released when the semaphore is already at its maximum availability, the release operation will wait for space to become available. This is useful for a ring-buffer style situation where resources are produced and then consumed, but it has no effect when a known, fixed quantity of resources is acquired up front and then released back.
```C++
${EXAMPLE_CORO_SEMAPHORE_CPP}
```
Expected output; note that there is no lock around `std::cout`, so some of the output is interleaved.
```bash
$ ./examples/coro_semaphore
1, 23, 25, 24, 22, 27, 28, 29, 21, 20, 19, 18, 17, 14, 31, 30, 33, 32, 41, 40, 37, 39, 38, 36, 35, 34, 43, 46, 47, 48, 45, 42, 44, 26, 16, 15, 13, 52, 54, 55, 53, 49, 51, 57, 58, 50, 62, 63, 61, 60, 59, 56, 12, 11, 8, 10, 9, 7, 6, 5, 4, 3, 642, , 66, 67, 6568, , 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,
```
### coro::ring_buffer<element, num_elements>
The `coro::ring_buffer<element, num_elements>` is a thread-safe async multi-producer multi-consumer statically sized ring buffer. Producers that try to produce a value when the ring buffer is full will suspend until space becomes available. Consumers that try to consume a value when the ring buffer is empty will suspend until an element becomes available. All waiters on the ring buffer, whether producing or consuming, are resumed in LIFO order when their respective operation becomes possible.
The `coro::ring_buffer` also works with `coro::stop_signal`: when the ring buffer's `stop_signal_waiters()` function is called, all currently suspended producers and consumers are resumed and their `await_resume()` throws a `coro::stop_signal`.
```C++
${EXAMPLE_CORO_RING_BUFFER_CPP}
```
Expected output:
```bash
$ ./examples/coro_ring_buffer
(id=3, v=1), (id=2, v=2), (id=1, v=3), (id=0, v=4), (id=3, v=5), (id=2, v=6), (id=1, v=7), (id=0, v=8), (id=3, v=9), (id=2, v=10), (id=1, v=11), (id=0, v=12), (id=3, v=13), (id=2, v=14), (id=1, v=15), (id=0, v=16), (id=3, v=17), (id=2, v=18), (id=1, v=19), (id=0, v=20), (id=3, v=21), (id=2, v=22), (id=1, v=23), (id=0, v=24), (id=3, v=25), (id=2, v=26), (id=1, v=27), (id=0, v=28), (id=3, v=29), (id=2, v=30), (id=1, v=31), (id=0, v=32), (id=3, v=33), (id=2, v=34), (id=1, v=35), (id=0, v=36), (id=3, v=37), (id=2, v=38), (id=1, v=39), (id=0, v=40), (id=3, v=41), (id=2, v=42), (id=0, v=44), (id=1, v=43), (id=3, v=45), (id=2, v=46), (id=0, v=47), (id=3, v=48), (id=2, v=49), (id=0, v=50), (id=3, v=51), (id=2, v=52), (id=0, v=53), (id=3, v=54), (id=2, v=55), (id=0, v=56), (id=3, v=57), (id=2, v=58), (id=0, v=59), (id=3, v=60), (id=1, v=61), (id=2, v=62), (id=0, v=63), (id=3, v=64), (id=1, v=65), (id=2, v=66), (id=0, v=67), (id=3, v=68), (id=1, v=69), (id=2, v=70), (id=0, v=71), (id=3, v=72), (id=1, v=73), (id=2, v=74), (id=0, v=75), (id=3, v=76), (id=1, v=77), (id=2, v=78), (id=0, v=79), (id=3, v=80), (id=2, v=81), (id=1, v=82), (id=0, v=83), (id=3, v=84), (id=2, v=85), (id=1, v=86), (id=0, v=87), (id=3, v=88), (id=2, v=89), (id=1, v=90), (id=0, v=91), (id=3, v=92), (id=2, v=93), (id=1, v=94), (id=0, v=95), (id=3, v=96), (id=2, v=97), (id=1, v=98), (id=0, v=99), (id=3, v=100),
producer is sending stop signal
consumer 0 shutting down, stop signal received
consumer 1 shutting down, stop signal received
consumer 2 shutting down, stop signal received
consumer 3 shutting down, stop signal received
```
### coro::thread_pool
`coro::thread_pool` is a statically sized pool of worker threads to execute scheduled coroutines from a FIFO queue. To schedule a coroutine on a thread pool, the pool's `schedule()` function should be `co_await`ed to transfer execution from the current thread to a thread pool worker thread. It's important to note that scheduling first places the coroutine into the FIFO queue, where it will be picked up by the first available thread in the pool, i.e. there could be a delay if there is a lot of work queued up.

CMakeLists.txt

@@ -49,7 +49,10 @@ set(LIBCORO_SOURCE_FILES
inc/coro/latch.hpp
inc/coro/mutex.hpp src/mutex.cpp
inc/coro/poll.hpp
inc/coro/ring_buffer.hpp
inc/coro/semaphore.hpp src/semaphore.cpp
inc/coro/shutdown.hpp
inc/coro/stop_signal.hpp
inc/coro/sync_wait.hpp src/sync_wait.cpp
inc/coro/task_container.hpp src/task_container.cpp
inc/coro/task.hpp

README.md

@@ -351,6 +351,138 @@ $ ./examples/coro_mutex
1, 2, 3, 4, 5, 6, 7, 8, 10, 9, 12, 11, 13, 14, 15, 16, 17, 18, 19, 21, 22, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 47, 48, 49, 46, 50, 51, 52, 53, 54, 55, 57, 58, 59, 56, 60, 62, 61, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,
```
### coro::semaphore
The `coro::semaphore` is a thread-safe async tool that protects a limited number of resources by only allowing a fixed number of consumers to hold an acquisition at any given time. The maximum number of resources is set by its constructor. This means that if a resource is released when the semaphore is already at its maximum availability, the release operation will wait for space to become available. This is useful for a ring-buffer style situation where resources are produced and then consumed, but it has no effect when a known, fixed quantity of resources is acquired up front and then released back.
```C++
#include <coro/coro.hpp>
#include <iostream>
int main()
{
// Have more threads/tasks than the semaphore will allow for at any given point in time.
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 8}};
coro::semaphore semaphore{2};
auto make_rate_limited_task = [&](uint64_t task_num) -> coro::task<void> {
co_await tp.schedule();
// This will only allow 2 tasks through at any given point in time, all other tasks will
// await the resource to be available before proceeding.
co_await semaphore.acquire();
std::cout << task_num << ", ";
semaphore.release();
co_return;
};
const size_t num_tasks{100};
std::vector<coro::task<void>> tasks{};
for (size_t i = 1; i <= num_tasks; ++i)
{
tasks.emplace_back(make_rate_limited_task(i));
}
coro::sync_wait(coro::when_all(std::move(tasks)));
}
```
Expected output; note that there is no lock around `std::cout`, so some of the output is interleaved.
```bash
$ ./examples/coro_semaphore
1, 23, 25, 24, 22, 27, 28, 29, 21, 20, 19, 18, 17, 14, 31, 30, 33, 32, 41, 40, 37, 39, 38, 36, 35, 34, 43, 46, 47, 48, 45, 42, 44, 26, 16, 15, 13, 52, 54, 55, 53, 49, 51, 57, 58, 50, 62, 63, 61, 60, 59, 56, 12, 11, 8, 10, 9, 7, 6, 5, 4, 3, 642, , 66, 67, 6568, , 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,
```
### coro::ring_buffer<element, num_elements>
The `coro::ring_buffer<element, num_elements>` is a thread-safe async multi-producer multi-consumer statically sized ring buffer. Producers that try to produce a value when the ring buffer is full will suspend until space becomes available. Consumers that try to consume a value when the ring buffer is empty will suspend until an element becomes available. All waiters on the ring buffer, whether producing or consuming, are resumed in LIFO order when their respective operation becomes possible.
The `coro::ring_buffer` also works with `coro::stop_signal`: when the ring buffer's `stop_signal_waiters()` function is called, all currently suspended producers and consumers are resumed and their `await_resume()` throws a `coro::stop_signal`.
```C++
#include <coro/coro.hpp>
#include <iostream>
int main()
{
const size_t iterations = 100;
const size_t consumers = 4;
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 4}};
coro::ring_buffer<uint64_t, 16> rb{};
coro::mutex m{};
std::vector<coro::task<void>> tasks{};
auto make_producer_task = [&]() -> coro::task<void> {
co_await tp.schedule();
for (size_t i = 1; i <= iterations; ++i)
{
co_await rb.produce(i);
}
// Wait for the ring buffer to clear all items so it's a clean stop.
while (!rb.empty())
{
co_await tp.yield();
}
// Now that the ring buffer is empty, signal to all the consumers that it's time to stop. Note that
// the stop signal works on producers as well, but this example only uses 1 producer.
{
auto scoped_lock = co_await m.lock();
std::cerr << "\nproducer is sending stop signal";
}
rb.stop_signal_waiters();
co_return;
};
auto make_consumer_task = [&](size_t id) -> coro::task<void> {
co_await tp.schedule();
try
{
while (true)
{
auto value = co_await rb.consume();
auto scoped_lock = co_await m.lock();
std::cout << "(id=" << id << ", v=" << value << "), ";
// Mimic doing some work on the consumed value.
co_await tp.yield();
}
}
catch (const coro::stop_signal&)
{
auto scoped_lock = co_await m.lock();
std::cerr << "\nconsumer " << id << " shutting down, stop signal received";
}
co_return;
};
// Create N consumers
for (size_t i = 0; i < consumers; ++i)
{
tasks.emplace_back(make_consumer_task(i));
}
// Create 1 producer.
tasks.emplace_back(make_producer_task());
// Wait for all the values to be produced and consumed through the ring buffer.
coro::sync_wait(coro::when_all(std::move(tasks)));
}
```
Expected output:
```bash
$ ./examples/coro_ring_buffer
(id=3, v=1), (id=2, v=2), (id=1, v=3), (id=0, v=4), (id=3, v=5), (id=2, v=6), (id=1, v=7), (id=0, v=8), (id=3, v=9), (id=2, v=10), (id=1, v=11), (id=0, v=12), (id=3, v=13), (id=2, v=14), (id=1, v=15), (id=0, v=16), (id=3, v=17), (id=2, v=18), (id=1, v=19), (id=0, v=20), (id=3, v=21), (id=2, v=22), (id=1, v=23), (id=0, v=24), (id=3, v=25), (id=2, v=26), (id=1, v=27), (id=0, v=28), (id=3, v=29), (id=2, v=30), (id=1, v=31), (id=0, v=32), (id=3, v=33), (id=2, v=34), (id=1, v=35), (id=0, v=36), (id=3, v=37), (id=2, v=38), (id=1, v=39), (id=0, v=40), (id=3, v=41), (id=2, v=42), (id=0, v=44), (id=1, v=43), (id=3, v=45), (id=2, v=46), (id=0, v=47), (id=3, v=48), (id=2, v=49), (id=0, v=50), (id=3, v=51), (id=2, v=52), (id=0, v=53), (id=3, v=54), (id=2, v=55), (id=0, v=56), (id=3, v=57), (id=2, v=58), (id=0, v=59), (id=3, v=60), (id=1, v=61), (id=2, v=62), (id=0, v=63), (id=3, v=64), (id=1, v=65), (id=2, v=66), (id=0, v=67), (id=3, v=68), (id=1, v=69), (id=2, v=70), (id=0, v=71), (id=3, v=72), (id=1, v=73), (id=2, v=74), (id=0, v=75), (id=3, v=76), (id=1, v=77), (id=2, v=78), (id=0, v=79), (id=3, v=80), (id=2, v=81), (id=1, v=82), (id=0, v=83), (id=3, v=84), (id=2, v=85), (id=1, v=86), (id=0, v=87), (id=3, v=88), (id=2, v=89), (id=1, v=90), (id=0, v=91), (id=3, v=92), (id=2, v=93), (id=1, v=94), (id=0, v=95), (id=3, v=96), (id=2, v=97), (id=1, v=98), (id=0, v=99), (id=3, v=100),
producer is sending stop signal
consumer 0 shutting down, stop signal received
consumer 1 shutting down, stop signal received
consumer 2 shutting down, stop signal received
consumer 3 shutting down, stop signal received
```
### coro::thread_pool
`coro::thread_pool` is a statically sized pool of worker threads to execute scheduled coroutines from a FIFO queue. To schedule a coroutine on a thread pool, the pool's `schedule()` function should be `co_await`ed to transfer execution from the current thread to a thread pool worker thread. It's important to note that scheduling first places the coroutine into the FIFO queue, where it will be picked up by the first available thread in the pool, i.e. there could be a delay if there is a lot of work queued up.

examples/CMakeLists.txt

@@ -33,6 +33,14 @@ add_executable(coro_task_container coro_task_container.cpp)
target_compile_features(coro_task_container PUBLIC cxx_std_20)
target_link_libraries(coro_task_container PUBLIC libcoro)
add_executable(coro_semaphore coro_semaphore.cpp)
target_compile_features(coro_semaphore PUBLIC cxx_std_20)
target_link_libraries(coro_semaphore PUBLIC libcoro)
add_executable(coro_ring_buffer coro_ring_buffer.cpp)
target_compile_features(coro_ring_buffer PUBLIC cxx_std_20)
target_link_libraries(coro_ring_buffer PUBLIC libcoro)
if(${CMAKE_CXX_COMPILER_ID} MATCHES "GNU")
target_compile_options(coro_task PUBLIC -fcoroutines -Wall -Wextra -pipe)
target_compile_options(coro_generator PUBLIC -fcoroutines -Wall -Wextra -pipe)
@@ -42,6 +50,8 @@ if(${CMAKE_CXX_COMPILER_ID} MATCHES "GNU")
target_compile_options(coro_thread_pool PUBLIC -fcoroutines -Wall -Wextra -pipe)
target_compile_options(coro_io_scheduler PUBLIC -fcoroutines -Wall -Wextra -pipe)
target_compile_options(coro_task_container PUBLIC -fcoroutines -Wall -Wextra -pipe)
target_compile_options(coro_semaphore PUBLIC -fcoroutines -Wall -Wextra -pipe)
target_compile_options(coro_ring_buffer PUBLIC -fcoroutines -Wall -Wextra -pipe)
elseif(${CMAKE_CXX_COMPILER_ID} MATCHES "Clang")
message(FATAL_ERROR "Clang is currently not supported.")
else()

examples/coro_ring_buffer.cpp Normal file

@@ -0,0 +1,72 @@
#include <coro/coro.hpp>
#include <iostream>
int main()
{
const size_t iterations = 100;
const size_t consumers = 4;
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 4}};
coro::ring_buffer<uint64_t, 16> rb{};
coro::mutex m{};
std::vector<coro::task<void>> tasks{};
auto make_producer_task = [&]() -> coro::task<void> {
co_await tp.schedule();
for (size_t i = 1; i <= iterations; ++i)
{
co_await rb.produce(i);
}
// Wait for the ring buffer to clear all items so it's a clean stop.
while (!rb.empty())
{
co_await tp.yield();
}
// Now that the ring buffer is empty, signal to all the consumers that it's time to stop. Note that
// the stop signal works on producers as well, but this example only uses 1 producer.
{
auto scoped_lock = co_await m.lock();
std::cerr << "\nproducer is sending stop signal";
}
rb.stop_signal_waiters();
co_return;
};
auto make_consumer_task = [&](size_t id) -> coro::task<void> {
co_await tp.schedule();
try
{
while (true)
{
auto value = co_await rb.consume();
auto scoped_lock = co_await m.lock();
std::cout << "(id=" << id << ", v=" << value << "), ";
// Mimic doing some work on the consumed value.
co_await tp.yield();
}
}
catch (const coro::stop_signal&)
{
auto scoped_lock = co_await m.lock();
std::cerr << "\nconsumer " << id << " shutting down, stop signal received";
}
co_return;
};
// Create N consumers
for (size_t i = 0; i < consumers; ++i)
{
tasks.emplace_back(make_consumer_task(i));
}
// Create 1 producer.
tasks.emplace_back(make_producer_task());
// Wait for all the values to be produced and consumed through the ring buffer.
coro::sync_wait(coro::when_all(std::move(tasks)));
}

examples/coro_semaphore.cpp Normal file

@@ -0,0 +1,29 @@
#include <coro/coro.hpp>
#include <iostream>
int main()
{
// Have more threads/tasks than the semaphore will allow for at any given point in time.
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 8}};
coro::semaphore semaphore{2};
auto make_rate_limited_task = [&](uint64_t task_num) -> coro::task<void> {
co_await tp.schedule();
// This will only allow 2 tasks through at any given point in time, all other tasks will
// await the resource to be available before proceeding.
co_await semaphore.acquire();
std::cout << task_num << ", ";
semaphore.release();
co_return;
};
const size_t num_tasks{100};
std::vector<coro::task<void>> tasks{};
for (size_t i = 1; i <= num_tasks; ++i)
{
tasks.emplace_back(make_rate_limited_task(i));
}
coro::sync_wait(coro::when_all(std::move(tasks)));
}

inc/coro/coro.hpp

@@ -21,6 +21,11 @@
#include "coro/io_scheduler.hpp"
#include "coro/latch.hpp"
#include "coro/mutex.hpp"
#include "coro/poll.hpp"
#include "coro/ring_buffer.hpp"
#include "coro/semaphore.hpp"
#include "coro/shutdown.hpp"
#include "coro/stop_signal.hpp"
#include "coro/sync_wait.hpp"
#include "coro/task.hpp"
#include "coro/task_container.hpp"

inc/coro/ring_buffer.hpp Normal file

@@ -0,0 +1,294 @@
#pragma once
#include "coro/stop_signal.hpp"
#include <array>
#include <atomic>
#include <coroutine>
#include <mutex>
#include <optional>
namespace coro
{
/**
* @tparam element The type of element the ring buffer will store. Note that this type should be
* cheap to move if possible as it is moved into and out of the buffer upon produce and
* consume operations.
* @tparam num_elements The maximum number of elements the ring buffer can store, must be >= 1.
*/
template<typename element, size_t num_elements>
class ring_buffer
{
public:
/**
* @throws std::runtime_error If `num_elements` == 0.
*/
ring_buffer()
{
if (num_elements == 0)
{
throw std::runtime_error{"num_elements cannot be zero"};
}
}
~ring_buffer() = default;
ring_buffer(const ring_buffer<element, num_elements>&) = delete;
ring_buffer(ring_buffer<element, num_elements>&&) = delete;
auto operator=(const ring_buffer<element, num_elements>&) noexcept -> ring_buffer<element, num_elements>& = delete;
auto operator=(ring_buffer<element, num_elements>&&) noexcept -> ring_buffer<element, num_elements>& = delete;
struct produce_operation
{
produce_operation(ring_buffer<element, num_elements>& rb, element e) : m_rb(rb), m_e(std::move(e)) {}
auto await_ready() noexcept -> bool
{
std::unique_lock lk{m_rb.m_mutex};
return m_rb.try_produce_locked(lk, m_e);
}
auto await_suspend(std::coroutine_handle<> awaiting_coroutine) noexcept -> bool
{
std::unique_lock lk{m_rb.m_mutex};
// Don't suspend if the stop signal has been set.
if (m_rb.m_stopped.load(std::memory_order::acquire))
{
m_stopped = true;
return false;
}
m_awaiting_coroutine = awaiting_coroutine;
m_next = m_rb.m_produce_waiters;
m_rb.m_produce_waiters = this;
return true;
}
/**
* @throws coro::stop_signal if the operation was stopped.
*/
auto await_resume() -> void
{
if (m_stopped)
{
throw stop_signal{};
}
}
private:
template<typename element_subtype, size_t num_elements_subtype>
friend class ring_buffer;
/// The ring buffer the element is being produced into.
ring_buffer<element, num_elements>& m_rb;
/// If the operation needs to suspend, the coroutine to resume when the element can be produced.
std::coroutine_handle<> m_awaiting_coroutine;
/// Linked list of produce operations that are awaiting to produce their element.
produce_operation* m_next{nullptr};
/// The element this produce operation is producing into the ring buffer.
element m_e;
/// Was the operation stopped?
bool m_stopped{false};
};
struct consume_operation
{
explicit consume_operation(ring_buffer<element, num_elements>& rb) : m_rb(rb) {}
auto await_ready() noexcept -> bool
{
std::unique_lock lk{m_rb.m_mutex};
return m_rb.try_consume_locked(lk, this);
}
auto await_suspend(std::coroutine_handle<> awaiting_coroutine) noexcept -> bool
{
std::unique_lock lk{m_rb.m_mutex};
// Don't suspend if the stop signal has been set.
if (m_rb.m_stopped.load(std::memory_order::acquire))
{
m_stopped = true;
return false;
}
m_awaiting_coroutine = awaiting_coroutine;
m_next = m_rb.m_consume_waiters;
m_rb.m_consume_waiters = this;
return true;
}
/**
* @throws coro::stop_signal if the operation was stopped.
* @return The consumed element.
*/
auto await_resume() -> element
{
if (m_stopped)
{
throw stop_signal{};
}
return std::move(m_e);
}
private:
template<typename element_subtype, size_t num_elements_subtype>
friend class ring_buffer;
/// The ring buffer to consume an element from.
ring_buffer<element, num_elements>& m_rb;
/// If the operation needs to suspend, the coroutine to resume when the element can be consumed.
std::coroutine_handle<> m_awaiting_coroutine;
/// Linked list of consume operations that are awaiting to consume an element.
consume_operation* m_next{nullptr};
/// The element this consume operation will consume.
element m_e;
/// Was the operation stopped?
bool m_stopped{false};
};
/**
* Produces the given element into the ring buffer. This operation will suspend until a slot
* in the ring buffer becomes available.
* @param e The element to produce.
*/
[[nodiscard]] auto produce(element e) -> produce_operation { return produce_operation{*this, std::move(e)}; }
/**
* Consumes an element from the ring buffer. This operation will suspend until an element in
* the ring buffer becomes available.
*/
[[nodiscard]] auto consume() -> consume_operation { return consume_operation{*this}; }
/**
* @return The current number of elements contained in the ring buffer.
*/
auto size() const -> size_t
{
std::atomic_thread_fence(std::memory_order::acquire);
return m_used;
}
/**
* @return True if the ring buffer contains zero elements.
*/
auto empty() const -> bool { return size() == 0; }
/**
* Stops all currently awaiting producers and consumers. Their await_resume() function
* will throw a coro::stop_signal. Further produce()/consume() calls will always throw
* a coro::stop_signal after this is called for this ring buffer.
*/
auto stop_signal_waiters() -> void
{
// Only wake up waiters once.
if (m_stopped.load(std::memory_order::acquire))
{
return;
}
std::unique_lock lk{m_mutex};
m_stopped.exchange(true, std::memory_order::release);
while (m_produce_waiters != nullptr)
{
auto* to_resume = m_produce_waiters;
to_resume->m_stopped = true;
m_produce_waiters = m_produce_waiters->m_next;
lk.unlock();
to_resume->m_awaiting_coroutine.resume();
lk.lock();
}
while (m_consume_waiters != nullptr)
{
auto* to_resume = m_consume_waiters;
to_resume->m_stopped = true;
m_consume_waiters = m_consume_waiters->m_next;
lk.unlock();
to_resume->m_awaiting_coroutine.resume();
lk.lock();
}
}
private:
friend produce_operation;
friend consume_operation;
std::mutex m_mutex{};
std::array<element, num_elements> m_elements{};
/// The current front pointer to an open slot if not full.
size_t m_front{0};
/// The current back pointer to the oldest item in the buffer if not empty.
size_t m_back{0};
/// The number of items in the ring buffer.
size_t m_used{0};
/// The LIFO list of produce waiters.
produce_operation* m_produce_waiters{nullptr};
/// The LIFO list of consume waiters.
consume_operation* m_consume_waiters{nullptr};
std::atomic<bool> m_stopped{false};
auto try_produce_locked(std::unique_lock<std::mutex>& lk, element& e) -> bool
{
if (m_used == num_elements)
{
return false;
}
m_elements[m_front] = std::move(e);
m_front = (m_front + 1) % num_elements;
++m_used;
if (m_consume_waiters != nullptr)
{
consume_operation* to_resume = m_consume_waiters;
m_consume_waiters = m_consume_waiters->m_next;
// Since the consume operation suspended it needs to be provided an element to consume.
to_resume->m_e = std::move(m_elements[m_back]);
m_back = (m_back + 1) % num_elements;
--m_used; // And we just consumed another item.
lk.unlock();
to_resume->m_awaiting_coroutine.resume();
}
return true;
}
auto try_consume_locked(std::unique_lock<std::mutex>& lk, consume_operation* op) -> bool
{
if (m_used == 0)
{
return false;
}
op->m_e = std::move(m_elements[m_back]);
m_back = (m_back + 1) % num_elements;
--m_used;
if (m_produce_waiters != nullptr)
{
produce_operation* to_resume = m_produce_waiters;
m_produce_waiters = m_produce_waiters->m_next;
// Since the produce operation suspended it needs to be provided a slot to place its element.
m_elements[m_front] = std::move(to_resume->m_e);
m_front = (m_front + 1) % num_elements;
++m_used; // And we just produced another item.
lk.unlock();
to_resume->m_awaiting_coroutine.resume();
}
return true;
}
};
} // namespace coro

inc/coro/semaphore.hpp Normal file

@@ -0,0 +1,82 @@
#pragma once
#include <atomic>
#include <coroutine>
#include <mutex>
namespace coro
{
class semaphore
{
public:
explicit semaphore(std::ptrdiff_t least_max_value_and_starting_value);
explicit semaphore(std::ptrdiff_t least_max_value, std::ptrdiff_t starting_value);
~semaphore();
semaphore(const semaphore&) = delete;
semaphore(semaphore&&) = delete;
auto operator=(const semaphore&) noexcept -> semaphore& = delete;
auto operator=(semaphore&&) noexcept -> semaphore& = delete;
class acquire_operation
{
public:
explicit acquire_operation(semaphore& s);
auto await_ready() const noexcept -> bool;
auto await_suspend(std::coroutine_handle<> awaiting_coroutine) noexcept -> bool;
auto await_resume() const noexcept -> bool;
private:
friend semaphore;
semaphore& m_semaphore;
std::coroutine_handle<> m_awaiting_coroutine;
acquire_operation* m_next{nullptr};
};
auto release() -> void;
/**
* Acquires a resource from the semaphore, if the semaphore has no resources available then
* this will wait until a resource becomes available.
*/
[[nodiscard]] auto acquire() -> acquire_operation { return acquire_operation{*this}; }
/**
* Attempts to acquire a resource if any are available.
* @return True if the acquire operation was able to acquire a resource.
*/
auto try_acquire() -> bool;
/**
* @return The maximum number of resources the semaphore can contain.
*/
auto max() const noexcept -> std::ptrdiff_t { return m_least_max_value; }
/**
* @return The current number of resources available in this semaphore.
*/
auto value() const noexcept -> std::ptrdiff_t { return m_counter.load(std::memory_order::relaxed); }
/**
* Stops the semaphore and will notify all release/acquire waiters to wake up in a failed state.
* Once set this cannot be undone, and all future operations on the semaphore will fail.
*/
auto stop_notify_all() noexcept -> void;
private:
friend class release_operation;
friend class acquire_operation;
const std::ptrdiff_t m_least_max_value;
std::atomic<std::ptrdiff_t> m_counter;
std::mutex m_waiter_mutex{};
acquire_operation* m_acquire_waiters{nullptr};
bool m_notify_all_set{false};
};
} // namespace coro

inc/coro/stop_signal.hpp Normal file

@@ -0,0 +1,17 @@
#pragma once
#include <stdexcept>
namespace coro
{
/**
* Is thrown by various 'infinite' co_await operations if the parent object has
been requested to stop, and wakes up all the awaiters in a stopped state. The
* awaiter's await_resume() will throw this signal to let the user know the operation
* has been cancelled.
*/
struct stop_signal
{
};
} // namespace coro

src/mutex.cpp

@@ -4,10 +4,7 @@ namespace coro
{
scoped_lock::~scoped_lock()
{
-    if (m_mutex != nullptr)
-    {
-        m_mutex->unlock();
-    }
+    unlock();
}
auto scoped_lock::unlock() -> void

src/semaphore.cpp Normal file

@@ -0,0 +1,123 @@
#include "coro/semaphore.hpp"
namespace coro
{
semaphore::semaphore(std::ptrdiff_t least_max_value_and_starting_value)
: semaphore(least_max_value_and_starting_value, least_max_value_and_starting_value)
{
}
semaphore::semaphore(std::ptrdiff_t least_max_value, std::ptrdiff_t starting_value)
: m_least_max_value(least_max_value),
m_counter(starting_value <= least_max_value ? starting_value : least_max_value)
{
}
semaphore::~semaphore()
{
stop_notify_all();
}
semaphore::acquire_operation::acquire_operation(semaphore& s) : m_semaphore(s)
{
}
auto semaphore::acquire_operation::await_ready() const noexcept -> bool
{
return m_semaphore.try_acquire();
}
auto semaphore::acquire_operation::await_suspend(std::coroutine_handle<> awaiting_coroutine) noexcept -> bool
{
std::unique_lock lk{m_semaphore.m_waiter_mutex};
if (m_semaphore.m_notify_all_set)
{
return false;
}
if (m_semaphore.try_acquire())
{
return false;
}
if (m_semaphore.m_acquire_waiters == nullptr)
{
m_semaphore.m_acquire_waiters = this;
}
else
{
// This is LIFO, but semaphores are not meant to be fair.
// Set our next to the current head.
m_next = m_semaphore.m_acquire_waiters;
// Set the semaphore head to this.
m_semaphore.m_acquire_waiters = this;
}
m_awaiting_coroutine = awaiting_coroutine;
return true;
}
auto semaphore::acquire_operation::await_resume() const noexcept -> bool
{
return !m_semaphore.m_notify_all_set;
}
auto semaphore::release() -> void
{
// It seems like the atomic counter could simply be incremented, but resuming a waiter could
// then race with a new acquirer grabbing the just-incremented resource out from under it. So it's
// best to check for waiters first and transfer ownership of the released resource directly to a
// waiter to avoid this problem.
std::unique_lock lk{m_waiter_mutex};
if (m_acquire_waiters != nullptr)
{
acquire_operation* to_resume = m_acquire_waiters;
m_acquire_waiters = m_acquire_waiters->m_next;
lk.unlock();
// This will transfer ownership of the resource to the resumed waiter.
to_resume->m_awaiting_coroutine.resume();
}
else
{
        // Normally this would be memory_order::release, but within the lock relaxed suffices.
m_counter.fetch_add(1, std::memory_order::relaxed);
}
}
auto semaphore::try_acquire() -> bool
{
// Optimistically grab the resource.
auto previous = m_counter.fetch_sub(1, std::memory_order::acq_rel);
if (previous <= 0)
{
// If it wasn't available undo the acquisition.
m_counter.fetch_add(1, std::memory_order::release);
return false;
}
return true;
}
auto semaphore::stop_notify_all() noexcept -> void
{
while (true)
{
std::unique_lock lk{m_waiter_mutex};
if (m_acquire_waiters != nullptr)
{
acquire_operation* to_resume = m_acquire_waiters;
m_acquire_waiters = m_acquire_waiters->m_next;
lk.unlock();
to_resume->m_awaiting_coroutine.resume();
}
else
{
break;
}
}
}
} // namespace coro

@@ -13,6 +13,8 @@ set(LIBCORO_TEST_SOURCE_FILES
    test_io_scheduler.cpp
    test_latch.cpp
    test_mutex.cpp
+    test_ring_buffer.cpp
+    test_semaphore.cpp
    test_sync_wait.cpp
    test_task.cpp
    test_thread_pool.cpp

test/test_ring_buffer.cpp (new file)

@@ -0,0 +1,113 @@
#include "catch.hpp"
#include <coro/coro.hpp>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>
TEST_CASE("ring_buffer zero num_elements", "[ring_buffer]")
{
REQUIRE_THROWS(coro::ring_buffer<uint64_t, 0>{});
}
TEST_CASE("ring_buffer single element", "[ring_buffer]")
{
const size_t iterations = 10;
coro::ring_buffer<uint64_t, 1> rb{};
std::vector<uint64_t> output{};
auto make_producer_task = [&]() -> coro::task<void> {
for (size_t i = 1; i <= iterations; ++i)
{
std::cerr << "produce: " << i << "\n";
co_await rb.produce(i);
}
co_return;
};
auto make_consumer_task = [&]() -> coro::task<void> {
for (size_t i = 1; i <= iterations; ++i)
{
auto value = co_await rb.consume();
std::cerr << "consume: " << value << "\n";
output.emplace_back(value);
}
co_return;
};
coro::sync_wait(coro::when_all(make_producer_task(), make_consumer_task()));
for (size_t i = 1; i <= iterations; ++i)
{
REQUIRE(output[i - 1] == i);
}
REQUIRE(rb.empty());
}
TEST_CASE("ring_buffer many elements many producers many consumers", "[ring_buffer]")
{
const size_t iterations = 1'000'000;
const size_t consumers = 100;
const size_t producers = 100;
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 4}};
coro::ring_buffer<uint64_t, 64> rb{};
auto make_producer_task = [&]() -> coro::task<void> {
co_await tp.schedule();
auto to_produce = iterations / producers;
for (size_t i = 1; i <= to_produce; ++i)
{
co_await rb.produce(i);
}
// Wait for all the values to be consumed prior to sending the stop signal.
while (!rb.empty())
{
co_await tp.yield();
}
rb.stop_signal_waiters(); // signal to all consumers (or even producers) we are done/shutting down.
co_return;
};
auto make_consumer_task = [&]() -> coro::task<void> {
co_await tp.schedule();
try
{
while (true)
{
auto value = co_await rb.consume();
(void)value;
co_await tp.yield(); // mimic some work
}
}
catch (const coro::stop_signal&)
{
// requested to stop/shutdown.
}
co_return;
};
std::vector<coro::task<void>> tasks{};
tasks.reserve(consumers * producers);
for (size_t i = 0; i < consumers; ++i)
{
tasks.emplace_back(make_consumer_task());
}
for (size_t i = 0; i < producers; ++i)
{
tasks.emplace_back(make_producer_task());
}
coro::sync_wait(coro::when_all(std::move(tasks)));
REQUIRE(rb.empty());
}

test/test_semaphore.cpp (new file)

@@ -0,0 +1,218 @@
#include "catch.hpp"
#include <coro/coro.hpp>
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>
TEST_CASE("semaphore binary", "[semaphore]")
{
std::vector<uint64_t> output;
coro::semaphore s{1};
auto make_emplace_task = [&](coro::semaphore& s) -> coro::task<void> {
std::cerr << "Acquiring semaphore\n";
co_await s.acquire();
REQUIRE_FALSE(s.try_acquire());
std::cerr << "semaphore acquired, emplacing back 1\n";
output.emplace_back(1);
std::cerr << "coroutine done with resource, releasing\n";
REQUIRE(s.value() == 0);
s.release();
REQUIRE(s.value() == 1);
REQUIRE(s.try_acquire());
s.release();
co_return;
};
coro::sync_wait(make_emplace_task(s));
REQUIRE(s.value() == 1);
REQUIRE(s.try_acquire());
REQUIRE(s.value() == 0);
s.release();
REQUIRE(s.value() == 1);
REQUIRE(output.size() == 1);
REQUIRE(output[0] == 1);
}
TEST_CASE("semaphore binary many waiters until event", "[semaphore]")
{
std::atomic<uint64_t> value{0};
std::vector<coro::task<void>> tasks;
coro::semaphore s{1}; // acquires and holds the semaphore until the event is triggered
coro::event e; // triggers the blocking thread to release the semaphore
auto make_task = [&](uint64_t id) -> coro::task<void> {
std::cerr << "id = " << id << " waiting to acquire the semaphore\n";
co_await s.acquire();
// Should always be locked upon acquiring the semaphore.
REQUIRE_FALSE(s.try_acquire());
std::cerr << "id = " << id << " semaphore acquired\n";
value.fetch_add(1, std::memory_order::relaxed);
std::cerr << "id = " << id << " semaphore release\n";
s.release();
co_return;
};
auto make_block_task = [&]() -> coro::task<void> {
std::cerr << "block task acquiring lock\n";
co_await s.acquire();
REQUIRE_FALSE(s.try_acquire());
std::cerr << "block task acquired semaphore, waiting on event\n";
co_await e;
std::cerr << "block task releasing semaphore\n";
s.release();
co_return;
};
auto make_set_task = [&]() -> coro::task<void> {
std::cerr << "set task setting event\n";
e.set();
co_return;
};
tasks.emplace_back(make_block_task());
// Create N tasks that attempt to acquire the semaphore.
for (uint64_t i = 1; i <= 4; ++i)
{
tasks.emplace_back(make_task(i));
}
tasks.emplace_back(make_set_task());
coro::sync_wait(coro::when_all(std::move(tasks)));
REQUIRE(value == 4);
}
TEST_CASE("semaphore ringbuffer", "[semaphore]")
{
const std::size_t iterations = 10;
// This test is run in the context of a thread pool so the producer task can yield. Otherwise
// the producer will just run wild!
coro::thread_pool tp{coro::thread_pool::options{.thread_count = 1}};
std::atomic<uint64_t> value{0};
std::vector<coro::task<void>> tasks;
coro::semaphore s{2, 2};
auto make_consumer_task = [&](uint64_t id) -> coro::task<void> {
co_await tp.schedule();
while (value.load(std::memory_order::acquire) < iterations)
{
std::cerr << "id = " << id << " waiting to acquire the semaphore\n";
co_await s.acquire();
std::cerr << "id = " << id << " semaphore acquired, consuming value\n";
value.fetch_add(1, std::memory_order::release);
            // In the ring buffer acquire is 'consuming'; we never release back into the buffer.
}
std::cerr << "id = " << id << " exiting\n";
s.stop_notify_all();
co_return;
};
auto make_producer_task = [&]() -> coro::task<void> {
co_await tp.schedule();
while (value.load(std::memory_order::acquire) < iterations)
{
std::cerr << "producer: doing work\n";
// Do some work...
std::cerr << "producer: releasing\n";
s.release();
std::cerr << "producer: produced\n";
co_await tp.yield();
}
std::cerr << "producer exiting\n";
s.stop_notify_all();
co_return;
};
tasks.emplace_back(make_producer_task());
tasks.emplace_back(make_consumer_task(1));
coro::sync_wait(coro::when_all(std::move(tasks)));
REQUIRE(value == iterations);
}
TEST_CASE("semaphore ringbuffer many producers and consumers", "[semaphore]")
{
const std::size_t consumers = 16;
const std::size_t producers = 1;
const std::size_t iterations = 1'000'000;
std::atomic<uint64_t> value{0};
coro::semaphore s{50, 0};
coro::thread_pool tp{}; // let er rip
auto make_consumer_task = [&](uint64_t id) -> coro::task<void> {
co_await tp.schedule();
while (value.load(std::memory_order::acquire) < iterations)
{
auto success = co_await s.acquire();
if (!success)
{
break;
}
co_await tp.schedule();
value.fetch_add(1, std::memory_order::relaxed);
}
std::cerr << "consumer " << id << " exiting\n";
s.stop_notify_all();
co_return;
};
auto make_producer_task = [&](uint64_t id) -> coro::task<void> {
co_await tp.schedule();
while (value.load(std::memory_order::acquire) < iterations)
{
s.release();
}
std::cerr << "producer " << id << " exiting\n";
s.stop_notify_all();
co_return;
};
std::vector<coro::task<void>> tasks{};
for (size_t i = 0; i < consumers; ++i)
{
tasks.emplace_back(make_consumer_task(i));
}
for (size_t i = 0; i < producers; ++i)
{
tasks.emplace_back(make_producer_task(i));
}
coro::sync_wait(coro::when_all(std::move(tasks)));
REQUIRE(value >= iterations);
}