Improving High-Concurrency Performance
In back-end server development we inevitably use multiple threads, and with multiple threads come locks. Locks come up now and then in everyday work, but only by really understanding them can we use them well. Below, starting from how a server actually behaves in practice, we analyze the root cause of the performance limits that locks impose and how to avoid them.
Here is a piece of code:
// Note: mutex and BUFFER_SIZE are assumed to be defined elsewhere in the
// original program, e.g.
//   static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
//   #define BUFFER_SIZE 128
static void* thread_main(void* args) {
    std::vector<int>* p_queue = (std::vector<int>*)args;
    pthread_t pid = pthread_self();
    size_t thread_cnt = 0;
    for (;;) {
        pthread_mutex_lock(&mutex);           // blocking lock
        if (!p_queue->empty()) {
            int i = p_queue->back();          // consume one element under the lock
            p_queue->pop_back();
            pthread_mutex_unlock(&mutex);
            (void)i;                          // the value itself is unused in this benchmark
            ++thread_cnt;
        } else {
            pthread_mutex_unlock(&mutex);     // queue drained: release and exit
            break;
        }
    }
    char buffer[BUFFER_SIZE];
    // pthread_t is opaque, so cast it for printing; %zu matches size_t
    if (std::snprintf(buffer, BUFFER_SIZE, "pid:%lu,thread_main:%zu\n",
                      (unsigned long)pid, thread_cnt) > 0) {
        std::cout << buffer << std::endl;
    }
    return NULL;
}
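The original post only shows the worker function. For context, a minimal driver that could populate the queue and launch the workers might look like the sketch below; the record count, thread count, and the definitions of mutex and BUFFER_SIZE are assumptions of mine, not part of the original code.

#include <pthread.h>
#include <cstdio>
#include <iostream>
#include <vector>

#define BUFFER_SIZE 128                         // assumed buffer size
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

// static void* thread_main(void* args) { ... } // worker function as shown above

int main() {
    const size_t kRecords = 100000000;          // 1e8 records, matching the scale described later
    const int kThreads = 4;                     // assumed thread count (the post tests 2~10)
    std::vector<int> queue;
    queue.reserve(kRecords);
    for (size_t i = 0; i < kRecords; ++i) {
        queue.push_back((int)i);                // fill the shared work queue
    }
    pthread_t tids[kThreads];
    for (int t = 0; t < kThreads; ++t) {
        pthread_create(&tids[t], NULL, thread_main, &queue);
    }
    for (int t = 0; t < kThreads; ++t) {
        pthread_join(tids[t], NULL);            // wait until the queue is drained
    }
    return 0;
}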
The main problem here is not the time spent in the lock and unlock calls themselves: as the profile below shows, and as local measurement confirms, a single pthread_mutex_lock / pthread_mutex_unlock pair costs only about 10 ns (the sketch after the profile shows one way such a figure could be reproduced). The bulk of the time goes into __lll_mutex_unlock_wake and __lll_mutex_lock_wait, glibc's low-level routines that wait on and wake the lock through system calls, and top shows this time as sys (kernel) time rather than user time. So in a compute-intensive workload that uses pthread_mutex_lock and pthread_mutex_unlock, CPU utilization does not drop: most of the time is spent in those two paths while the system sits in a busy-wait state. The gperftools (pprof) profile of the multi-threaded run is shown below (columns: samples in the function itself, its percentage, running total of that percentage, samples in the function plus its callees, their percentage, function name):
8361 54.4% 54.4% 8361 54.4% __lll_mutex_unlock_wake
6019 39.2% 93.6% 6019 39.2% __lll_mutex_lock_wait
288 1.9% 95.4% 288 1.9% pthread_mutex_lock
254 1.7% 97.1% 254 1.7% pthread_mutex_unlock
94 0.6% 97.7% 94 0.6% __gnu_cxx::__normal_iterator::__normal_iterator
83 0.5% 98.2% 111 0.7% std::vector::end
58 0.4% 98.6% 414 2.7% thread::thread_main
37 0.2% 98.9% 175 1.1% std::vector::back
34 0.2% 99.1% 54 0.4% __gnu_cxx::__normal_iterator::operator-
30 0.2% 99.3% 32 0.2% std::vector::pop_back
27 0.2% 99.4% 43 0.3% __gnu_cxx::operator==
22 0.1% 99.6% 159 1.0% std::vector::empty
20 0.1% 99.7% 20 0.1% __gnu_cxx::__normal_iterator::base
16 0.1% 99.8% 21 0.1% std::vector::begin
14 0.1% 99.9% 14 0.1% __gnu_cxx::__normal_iterator::operator*
7 0.0% 100.0% 7 0.0% std::_Destroy
6 0.0% 100.0% 6 0.0% _init
0 0.0% 100.0% 15370 100.0% start_thread
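As an aside, the roughly 10 ns figure quoted above for an uncontended lock/unlock pair could be reproduced with a micro-benchmark along the following lines; this is a sketch of mine, not the author's original test program.

#include <pthread.h>
#include <time.h>
#include <cstdio>

int main() {
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    const long kIters = 10000000;               // 10 million lock/unlock pairs
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < kIters; ++i) {
        pthread_mutex_lock(&m);                 // uncontended: no other thread competes
        pthread_mutex_unlock(&m);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    std::printf("avg lock+unlock: %.1f ns\n", ns / kIters);
    return 0;
}

With no contention the mutex is taken and released on the user-space fast path; it is only under contention that glibc falls into the wait/wake paths that dominate the profile above.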
In my test program with 100 million records, the single-threaded version runs in roughly 10 s, while the multi-threaded versions (2 to 10 threads) take roughly 30 s. To explain: as the thread count grows, CPU utilization rises, but the elapsed time does not fall. User-space CPU usage stays essentially flat while kernel (sys) usage climbs steadily, so more CPU is burned yet the run time stays around 30 s whether 2 or 10 threads are used (the user/sys split can also be checked from inside the program, as in the sketch after the profile below). The single-threaded run only pays for the plain lock and unlock calls and never enters the unlock-wake / lock-wait system call paths, so it actually takes less time than the multi-threaded runs. The single-threaded profile is shown below:
144 17.1% 17.1% 144 17.1% __gnu_cxx::__normal_iterator::__normal_iterator
137 16.2% 33.3% 137 16.2% pthread_mutex_unlock
85 10.1% 43.4% 85 10.1% pthread_mutex_lock
64 7.6% 50.9% 134 15.9% std::vector::end
60 7.1% 58.1% 65 7.7% std::vector::pop_back
58 6.9% 64.9% 593 70.3% thread::thread_main
54 6.4% 71.3% 78 9.2% __gnu_cxx::operator==
52 6.2% 77.5% 73 8.6% __gnu_cxx::__normal_iterator::operator-
50 5.9% 83.4% 258 30.6% std::vector::back
36 4.3% 87.7% 47 5.6% std::vector::begin
32 3.8% 91.5% 32 3.8% __gnu_cxx::__normal_iterator::base
28 3.3% 94.8% 195 23.1% std::vector::empty
26 3.1% 97.9% 26 3.1% __gnu_cxx::__normal_iterator::operator*
10 1.2% 99.1% 10 1.2% _init
8 0.9% 100.0% 8 0.9% std::_Destroy
0 0.0% 100.0% 844 100.0% start_thread
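The user/sys CPU split discussed above was observed with top; it could also be read from inside the test program itself, for example with a small helper like this sketch (getrusage is standard POSIX; the helper name is my own):

#include <sys/resource.h>
#include <cstdio>

// Print how much CPU time this process has spent in user space vs. in the kernel.
static void print_cpu_split(const char* label) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        std::printf("%s: user %ld.%06ld s, sys %ld.%06ld s\n", label,
                    (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
                    (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    }
}

Calling it after all threads are joined would show user time staying roughly constant while sys time grows with the thread count in the blocking-lock version.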
The idea for optimizing this multi-thread contention is simple: reduce contention so the CPU is not left busy-waiting. Use pthread_mutex_trylock, and when the lock cannot be acquired, call usleep(1) to voluntarily give up the CPU. The code is as follows:
static void* thread_main(void* args) {
    std::vector<int>* p_queue = (std::vector<int>*)args;
    pthread_t pid = pthread_self();
    size_t thread_cnt = 0;
    for (;;) {
        int f = pthread_mutex_trylock(&mutex);   // non-blocking attempt
        if (f == 0) {                            // 0 means the lock was acquired
            if (!p_queue->empty()) {
                int i = p_queue->back();         // consume one element under the lock
                p_queue->pop_back();
                pthread_mutex_unlock(&mutex);
                (void)i;
                ++thread_cnt;
            } else {
                pthread_mutex_unlock(&mutex);    // queue drained: release and exit
                break;
            }
        } else {
            usleep(1);                           // lock busy (EBUSY): yield the CPU briefly
        }
    }
    char buffer[BUFFER_SIZE];
    if (std::snprintf(buffer, BUFFER_SIZE, "pid:%lu,thread_main:%zu\n",
                      (unsigned long)pid, thread_cnt) > 0) {
        std::cout << buffer << std::endl;
    }
    return NULL;
}
The gperftools profile now looks essentially the same as the single-threaded run; the threads are barely affected by contention with one another:
146 16.9% 16.9% 146 16.9% __gnu_cxx::__normal_iterator::__normal_iterator
146 16.9% 33.8% 146 16.9% pthread_mutex_unlock
104 12.0% 45.8% 104 12.0% pthread_mutex_trylock
73 8.4% 54.2% 128 14.8% std::vector::end
65 7.5% 61.7% 275 31.8% std::vector::back
55 6.4% 68.1% 61 7.1% std::vector::pop_back
54 6.2% 74.3% 84 9.7% __gnu_cxx::operator==
52 6.0% 80.3% 85 9.8% __gnu_cxx::__normal_iterator::operator-
43 5.0% 85.3% 582 67.3% thread::thread_main
39 4.5% 89.8% 39 4.5% __gnu_cxx::__normal_iterator::base
27 3.1% 92.9% 182 21.0% std::vector::empty
22 2.5% 95.5% 22 2.5% __gnu_cxx::__normal_iterator::operator*
19 2.2% 97.7% 28 3.2% std::vector::begin
11 1.3% 99.0% 11 1.3% std::_Destroy
9 1.0% 100.0% 9 1.0% _init
0 0.0% 100.0% 865 100.0% start_thread
In one sentence: replacing a blocking lock with a non-blocking trylock can significantly improve performance under concurrency. Of course, besides trylock, another common way to reduce lock contention is to shrink the lock's scope, i.e. keep the critical section as small as possible; I may cover that in detail elsewhere, but a small illustrative sketch follows.
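To give a concrete picture of what shrinking the lock's scope means, here is a sketch of mine (not code from the original post): only the shared-queue access is held under the lock, while the expensive per-item work runs with no lock held.

// Uses the same global mutex as above; process() is a stand-in for real work.
static long process(int value) { return (long)value * value; }

static void* worker(void* args) {
    std::vector<int>* p_queue = (std::vector<int>*)args;
    long local_sum = 0;
    for (;;) {
        pthread_mutex_lock(&mutex);              // critical section starts
        if (p_queue->empty()) {
            pthread_mutex_unlock(&mutex);
            break;
        }
        int item = p_queue->back();
        p_queue->pop_back();
        pthread_mutex_unlock(&mutex);            // critical section ends: only the queue access was locked
        local_sum += process(item);              // heavy work happens outside the lock
    }
    std::printf("local_sum:%ld\n", local_sum);
    return NULL;
}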
Reposted from: https://blog.csdn.net/ltr6134439/article/details/53109288