Threads, Concurrency, and Synchronization (oh my!)
// This is a fold (reduce) from the left
int sequential_reduce(int (*function)(int, int), int* arr, size_t len){
    int accumulator = arr[0];
    for(size_t offset = 1; offset < len; ++offset){
        accumulator = function(accumulator, arr[offset]);
    }
    return accumulator;
}
int add(int a, int b){
    return a + b;
}
int main(){
    int arr[] = {1, 2, 3, 4, 5, 6};
    int sum = sequential_reduce(add, arr, 6);
    // Whatever you want
    return 0;
}
int pthread_create(pthread_t *thread,
const pthread_attr_t *attr,
void *(*start_routine) (void *),
void *arg);
thread
somewhere to write the id of the thread
attr
options that you set during thread creation; for the most part you don't need to worry about it
start_routine
the function where your pthread starts executing
arg
the argument to give to each pthread
int pthread_join(pthread_t thread, void **retval);
thread
the value of the thread (**not** a pointer to it)
retval
where to put the resulting value
Just like waitpid, you want to join all your terminated threads. There is no analog of waitpid(-1, …) because if you need that 'you probably need to rethink your application design.' - man page.
#include <pthread.h>
void* do_massive_work(void* payload){
    /* Doing massive work */
    return NULL;
}
int main(){
    pthread_t threads[10];
    for(int i = 0; i < 10; ++i){
        pthread_create(threads+i, NULL, do_massive_work, NULL);
    }
    for(int i = 0; i < 10; ++i){
        pthread_join(threads[i], NULL);
    }
    return 0;
}
You can guess what happens in pthread_exit. This may be a bit advanced, but the general gist is that threads let you leverage parallelism.
We want you to start a thread for each of the elements, do the computation, and alter the array. Dividing up the work, it should look something like the following.
You have been going through mutexes and other synchronization primitives in lecture, but the most efficient design uses no synchronization at all. So long as no thread touches the exact same piece of memory that another thread is touching, there is no race condition. We are then using threads to their full potential of parallelism.