It has been a number of years since I've used named semaphores. It could be that the last time I used a named semaphore was back in my OS/2 days. But I recently needed to coordinate some work between several applications running in a Linux environment, and named semaphores were the right solution.
Or so I thought...
Here is what I discovered, which hopefully explains why POSIX named semaphores as implemented in Linux are not working for me:
I was a bit confused when I started looking into inter-process communication on Linux systems. It took me a while to understand that there are two different IPC systems: the old, traditional SysV-based IPC, and the newer POSIX-based IPC.
The SysV semaphores are in #include <sys/sem.h>. This includes semget(), semop() and semctl().
Meanwhile, the new POSIX semaphores are in #include <semaphore.h>. A partial list of this API includes sem_open(), sem_close(), sem_unlink(), sem_wait(), sem_trywait(), sem_timedwait(), sem_post() and sem_getvalue().
Of particular importance to note is that the SysV IPC command-line tools such as ipcmk, ipcs and ipcrm will not work with POSIX semaphores, though that is of minimal importance when writing self-contained C/C++ applications.
Opening a POSIX named semaphore is very simple. Note the following source code extract:
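(A minimal sketch; the semaphore name "/mysem", the 0666 mode and the initial count of 1 are arbitrary placeholders.)

    #include <fcntl.h>      /* for O_CREAT */
    #include <sys/stat.h>   /* for the mode constants */
    #include <semaphore.h>
    #include <stdio.h>

    sem_t *open_semaphore( void )
    {
        /* Create the named semaphore if it does not exist yet,
           otherwise just attach to the existing one. */
        sem_t *sem = sem_open( "/mysem", O_CREAT, 0666, 1 );
        if ( sem == SEM_FAILED )
        {
            perror( "sem_open" );
            return NULL;
        }
        return sem;
    }

On Linux you link with -pthread.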
To wait on the semaphore, you'd call sem_wait( sem ), and to release it, sem_post( sem ).
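A typical client therefore ends up with a pattern along these lines (do_protected_work() is just a stand-in for whatever the semaphore is guarding):

    /* Acquire the semaphore, do the work it protects, then release it. */
    if ( sem_wait( sem ) == 0 )
    {
        do_protected_work();
        sem_post( sem );
    }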
In a simple scenario, all of this works very well. Client applications open/create the semaphore, call sem_wait() when it is needed, and then sem_post() when finished. But in non-trivial, embedded and/or commercial software, where you have to be prepared for and recover from external signals, there is a problem. There are at least two signals that cannot be caught: SIGKILL and SIGSTOP.
Unfortunately, if a client application receives one of these signals between the call to sem_wait() and sem_post(), the semaphore is now unusable. Or, at the very least, you're leaking one count every time this happens. If your initial semaphore count is 1, then a single instance of someone sending SIGKILL to a client application will starve all other clients waiting for that semaphore.
What I was expecting was either for the semaphore to be automatically posted back when the application gets cleaned up -- somewhat like open files get automatically closed and memory gets freed -- or for an additional sem_*() API that a watcher could query to determine when this situation has occurred. If there were a reliable way to tell that an application had been killed between the two calls, the watcher itself could decide to sem_post() and "recover" the lost count.
Note that the old SysV-style semaphores do have a way to clean up after themselves and prevent this type of problem: the SEM_UNDO flag can be specified, which causes a process's changes to the semaphore to be reverted if it terminates abnormally. Sadly, this is specific to SysV and does not apply to POSIX semaphores.
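For comparison, a sketch of how that looks with the SysV calls (semid would come from an earlier semget()/semctl() setup, and acquire()/release() are just illustrative names):

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* Decrement the semaphore.  With SEM_UNDO the kernel records the
       adjustment and reverses it if this process dies while holding
       the semaphore, even on SIGKILL. */
    int acquire( int semid )
    {
        struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = SEM_UNDO };
        return semop( semid, &op, 1 );
    }

    /* Increment it back when done. */
    int release( int semid )
    {
        struct sembuf op = { .sem_num = 0, .sem_op = +1, .sem_flg = SEM_UNDO };
        return semop( semid, &op, 1 );
    }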
Depending on how the semaphore is used, there may be alternatives to POSIX semaphores: for example, binding a well-known local socket (only one process can own it at a time), or taking an exclusive lock on an agreed-upon file.
Without adding much more complexity, the socket and file solutions only work when the semaphore is boolean, used like a "named mutex". The file lock solution proved to be adequate for what I was working on, largely because the kernel releases a file lock automatically when the process holding it terminates, however it dies. Switching to it meant replacing all of my calls to sem_wait() and sem_post() with calls to lockf(...).
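Roughly, the replacement looks like this (the lock file path and the helper names are placeholders):

    #include <fcntl.h>
    #include <unistd.h>

    #define LOCK_FILE "/var/lock/myapp.lock"   /* any agreed-upon path works */

    static int lock_fd = -1;   /* kept open while the lock is held */

    /* Replaces sem_wait(): blocks until the exclusive lock is granted.
       If this process dies, even from SIGKILL, the kernel releases the
       lock automatically. */
    int acquire_lock( void )
    {
        lock_fd = open( LOCK_FILE, O_CREAT | O_RDWR, 0666 );
        if ( lock_fd < 0 )
            return -1;
        return lockf( lock_fd, F_LOCK, 0 );
    }

    /* Replaces sem_post(). */
    void release_lock( void )
    {
        if ( lock_fd >= 0 )
        {
            lockf( lock_fd, F_ULOCK, 0 );
            close( lock_fd );
            lock_fd = -1;
        }
    }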
This is still not a perfect solution, since the lock can be circumvented by manually deleting the lock file. But in my case, an uncaught signal was both more likely and more damaging than someone removing the lock file.