【C/C++】Modern C++ Thread Pools: From Basics to a Production-Grade Implementation


This article introduces thread pools in modern C++, starting from the core ideas and gradually moving toward a production-quality implementation. The goal is to help you deeply understand how thread pools work and how to write your own using C++17/20/23.


🧠 What Is a Thread Pool?

A thread pool is a collection of pre-spawned threads that wait for tasks to execute. Instead of creating a thread for every task (which is expensive), you reuse a fixed number of threads, each pulling tasks from a task queue.


🧩 Why Use a Thread Pool?

  • ✅ Avoid the overhead of frequent thread creation/destruction.
  • ✅ Reuse a fixed number of threads.
  • ✅ Efficient for high-throughput or I/O-bound systems.
  • ✅ Works well with producer-consumer or event-driven designs.

🔰 Part 1: Basic Thread Pool (Beginner)

We start with a very basic thread pool built from these standard components:

  • std::thread
  • std::mutex
  • std::condition_variable
  • std::function
  • std::queue

🔧 Minimal Working Code:

#include <iostream>
#include <thread>
#include <vector>
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <chrono>   // for std::chrono::seconds in the usage example below

class ThreadPool {
public:
    ThreadPool(size_t num_threads);
    ~ThreadPool();

    void enqueue(std::function<void()> task);

private:
    std::vector<std::thread> workers;              // worker threads
    std::queue<std::function<void()>> tasks;       // pending tasks

    std::mutex queue_mutex;                        // protects the task queue
    std::condition_variable condition;             // signals new work or shutdown
    std::atomic<bool> stop;                        // set by the destructor
};

ThreadPool::ThreadPool(size_t num_threads) : stop(false) {
    for (size_t i = 0; i < num_threads; ++i) {
        workers.emplace_back([this]() {
            while (true) {
                std::function<void()> task;

                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    this->condition.wait(lock, [this]() {
                        return this->stop || !this->tasks.empty();
                    });

                    if (this->stop && this->tasks.empty())
                        return;

                    task = std::move(this->tasks.front());
                    this->tasks.pop();
                }

                task(); // run the task
            }
        });
    }
}

void ThreadPool::enqueue(std::function<void()> task) {
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.push(std::move(task));
    }
    condition.notify_one();
}

ThreadPool::~ThreadPool() {
    {
        // Set stop while holding the mutex so a worker cannot miss the wakeup
        // between checking the wait predicate and going to sleep.
        std::lock_guard<std::mutex> lock(queue_mutex);
        stop = true;
    }
    condition.notify_all();
    for (std::thread &worker : workers)
        worker.join();
}

✅ Usage:

int main() {
    ThreadPool pool(4);

    for (int i = 0; i < 10; ++i) {
        pool.enqueue([i]() {
            std::cout << "Running task " << i << " on thread " 
                      << std::this_thread::get_id() << "\n";
        });
    }

    std::this_thread::sleep_for(std::chrono::seconds(1)); // optional: ~ThreadPool drains the queue and joins anyway
    return 0;
}

🧑‍🔬 Part 2: Improving It (Intermediate)

🧵 Add Return Values with std::future

Turn enqueue() into a template that returns a std::future for each task, so callers can retrieve results later.

// Requires <future>, <memory>, and <type_traits> in addition to the headers above.
template<class F, class... Args>
auto enqueue(F&& f, Args&&... args)
    -> std::future<std::invoke_result_t<F, Args...>> {

    using return_type = std::invoke_result_t<F, Args...>;

    // Wrap the callable in a packaged_task so the result (or any exception)
    // is delivered through the future.
    auto task = std::make_shared<std::packaged_task<return_type()>>(
        std::bind(std::forward<F>(f), std::forward<Args>(args)...)
    );

    std::future<return_type> res = task->get_future();
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.emplace([task]() { (*task)(); });
    }
    condition.notify_one();
    return res;
}

Now you can write:

auto future = pool.enqueue([]() {
    return 42;
});
std::cout << "Result: " << future.get() << "\n";
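
Because the callable is wrapped in a std::packaged_task, arguments are forwarded and any exception thrown by the task is delivered through the future as well. A short sketch, assuming the templated enqueue() above has been added to the Part 1 ThreadPool:

auto sum = pool.enqueue([](int a, int b) { return a + b; }, 2, 3);
std::cout << "Sum: " << sum.get() << "\n";   // prints 5

// needs <stdexcept> for std::runtime_error
auto failing = pool.enqueue([]() -> int { throw std::runtime_error("boom"); });
try {
    failing.get();                           // rethrows the exception stored by packaged_task
} catch (const std::exception& e) {
    std::cout << "Task failed: " << e.what() << "\n";
}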

⚙️ Part 3: Production-Grade Features (Expert)

✅ Features to Add:

  • Dynamic thread resizing: grow or shrink the worker thread count at runtime.
  • Task prioritization: use std::priority_queue instead of std::queue.
  • Shutdown options: graceful (drain remaining tasks) vs. immediate.
  • Exception handling: catch exceptions thrown by tasks (see the sketch after this list).
  • Thread affinity / naming: set thread names or pin workers to cores.
  • Work stealing: per-worker queues with stealing for maximum throughput.
  • Thread-local storage: use thread_local for per-worker caches.
  • Integration with coroutines (C++20): schedule coroutines on the pool.
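
As one example, exception handling can be added directly in the worker loop of the Part 1 pool so that a throwing task never takes down a worker thread. A minimal sketch (the logging policy here is only a placeholder):

// Inside the worker lambda of the Part 1 ThreadPool, replace `task();` with:
try {
    task();                                          // run the task
} catch (const std::exception& e) {
    std::cerr << "Task threw: " << e.what() << "\n"; // placeholder policy: log and continue
} catch (...) {
    std::cerr << "Task threw an unknown exception\n";
}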

🧵 Part 4: C++20/23 Style Thread Pool

For advanced users, consider using:

  • std::jthread (C++20)
  • std::stop_token
  • std::barrier or std::latch (a std::latch sketch follows below)
  • Coroutines (co_await, std::suspend_always); a scheduling awaitable is sketched at the end of this part
  • std::execution senders/receivers (P2300; proposed during the C++23 cycle, adopted for C++26)
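
std::latch pairs naturally with the Part 1 pool when you need to block until a batch of tasks has finished. A minimal sketch, assuming the Part 1 ThreadPool (run_batch is a hypothetical helper, not a standard facility):

#include <latch>

void run_batch(ThreadPool& pool, int n) {
    std::latch done(n);                  // counts down once per task
    for (int i = 0; i < n; ++i) {
        pool.enqueue([&done, i]() {
            // ... do the work for item i ...
            done.count_down();           // signal that this task is finished
        });
    }
    done.wait();                         // blocks until the count reaches zero
}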

Example for C++20 cooperative cancellation:

#include <chrono>
#include <stop_token>
#include <thread>

void worker(std::stop_token stop_token) {
    while (!stop_token.stop_requested()) {
        // do one unit of work, then re-check the token
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

std::jthread t(worker); // the jthread passes its stop_token to worker;
                        // its destructor requests stop and joins automatically
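
For coroutine integration, the usual building block is an awaitable that resumes the suspended coroutine on a pool thread. A minimal sketch against the Part 1 ThreadPool (the schedule_on type is illustrative; a complete design also needs a coroutine task/return type):

#include <coroutine>

// co_await schedule_on{pool} suspends the current coroutine and resumes it
// on one of the pool's worker threads.
struct schedule_on {
    ThreadPool& pool;

    bool await_ready() const noexcept { return false; }    // always suspend
    void await_suspend(std::coroutine_handle<> h) {
        pool.enqueue([h]() { h.resume(); });                // resume on a worker
    }
    void await_resume() const noexcept {}
};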

📚 Libraries You Should Know

If you prefer using proven libraries:

  • CTPL: easy-to-use thread pool
  • BS::thread_pool: header-only and fast
  • Boost.Asio: heavy but feature-rich (a short example follows below)
  • libunifex: advanced async patterns
  • folly: Facebook’s production async primitives
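
For instance, Boost.Asio (since 1.66) ships a ready-made boost::asio::thread_pool. A minimal sketch of posting work to it:

#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <iostream>

int main() {
    boost::asio::thread_pool pool(4);              // four worker threads
    for (int i = 0; i < 8; ++i) {
        boost::asio::post(pool, [i]() {
            std::cout << "asio task " << i << "\n";
        });
    }
    pool.join();                                   // wait for the posted work to finish
}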

🧭 Summary

  • Beginner: std::thread, mutex, condition variable, basic queue
  • Intermediate: futures, exception handling, RAII, std::function, shared task management
  • Expert: std::jthread, coroutines, scheduling policies, custom allocators, task stealing