This section describes several example applications of RCU.
One common application of RCU is a list structure protected by a global lock, whose individual elements are protected by per-element locks. Usually, searches of the list are more common than insertions or deletions.
For example, without call_rcu(), simple search and delete functions for a circular, doubly-linked list of elements would require many lock round-trips and complicated deadlock-avoidance code. The call_rcu()-based code below is much shorter and simpler:
struct element {
        struct element *next;
        struct element *prev;
        atomic_t refcnt;
        int address;
        int data;
        struct rcu_head rcu;
};
/*
 * Find the element with the specified address, take a
 * reference to it, and return a pointer to it. Return
 * NULL if there is no such address.
 */
struct element *search(int addr)
{
        struct element *p;
        struct element *p0;

        p0 = &head[hashit(addr)];
        p = p0->next;
        while (p != p0) {
                rmb();  /* read_barrier_depends(): order the
                         * fetch of p before the dereferences
                         * of p below. */
                if (p->address == addr) {
                        atomic_inc(&p->refcnt);
                        return p;
                }
                p = p->next;
        }
        return NULL;
}
/*
 * Insert an element into the hash table. The caller must
 * hold the global hash_lock.
 */
void insert(struct element *p)
{
        struct element *p0;

        p0 = &head[hashit(p->address)];
        p->prev = p0;
        p->next = p0->next;
        /*
         * Make sure that the next pointer of the inserted
         * element is visible to other CPUs before the
         * actual insertion. This allows lock-free lookups
         * to traverse the entire bucket list safely.
         */
        wmb();
        p0->next = p;
        p->next->prev = p;
}
void myfree(void *obj)
{
        struct element *p = obj;

        /*
         * It is now safe to check refcnt, since no CPU
         * can be holding a stale pointer to this element
         * obtained during a lock-free search().
         */
        if (atomic_read(&p->refcnt)) {
                /*
                 * Let some other garbage collection
                 * free it.
                 */
                return;
        }
        kfree(obj);
}
/*
 * Delete the specified element from the table. The element
 * must be locked.
 */
void delete(struct element *p)
{
        /*
         * Acquire the global lock. There is no possible
         * deadlock, since the global lock is always
         * acquired after the per-element lock.
         */
        spin_lock(&hash_lock);

        /*
         * Unlink the element from the list, release the
         * lock, and defer freeing the element until a
         * grace period has elapsed.
         */
        p->next->prev = p->prev;
        p->prev->next = p->next;
        spin_unlock(&hash_lock);
        call_rcu(&p->rcu, myfree, p);
}
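To show how these pieces fit together, here is a compilable userspace sketch of the same pattern. It is single-threaded and replaces the kernel primitives with plain C stand-ins (atomic_t becomes an int, the memory barriers are omitted, and the call_rcu() grace period is modeled by running the callback once no reader holds a reference), so it illustrates only the reference-count-then-defer-free logic, not the concurrency; delete_element() stands in for delete().

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace, single-threaded sketch of the search/insert/delete
 * pattern above.  Not the kernel API: barriers and atomics are
 * omitted, and deferred freeing is modeled synchronously. */
struct element {
        struct element *next;
        struct element *prev;
        int refcnt;                 /* stand-in for atomic_t */
        int address;
        int data;
};

/* Single bucket standing in for head[hashit(addr)]. */
static struct element head = { &head, &head, 0, 0, 0 };

struct element *search(int addr)
{
        struct element *p;

        for (p = head.next; p != &head; p = p->next)
                if (p->address == addr) {
                        p->refcnt++;        /* pin for the caller */
                        return p;
                }
        return NULL;
}

void insert(struct element *p)
{
        p->next = head.next;
        p->prev = &head;
        head.next->prev = p;
        head.next = p;                      /* publish (wmb() omitted) */
}

void myfree(struct element *p)
{
        if (p->refcnt)                      /* a reader still holds it */
                return;
        free(p);
}

void delete_element(struct element *p)
{
        p->next->prev = p->prev;
        p->prev->next = p->next;
        myfree(p);      /* models call_rcu(&p->rcu, myfree, p) */
}
```

A reader would call search(), use the returned element, and then decrement refcnt; an updater unlinks with delete_element(), and the element is freed only when no reference remains.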
Unlike [kv]free(), kmem_cache_free() takes an additional argument: a pointer to the cache from which the memory was allocated. This extra pointer is a problem for RCU, because the call_rcu() callback receives only a single pointer argument. You can avoid this problem by using a cache-specific destroyer function for each cache that requires deferred freeing. If the cache pointer is a global variable, the destroyer function can use it directly and invoke kmem_cache_free() from the callback. Otherwise, the cache pointer can be embedded in the data structure alongside the struct rcu_head, where the destroyer function can recover it.
For global cache pointers the function is as follows:
static kmem_cache_t *xxx_cache;

struct xxx_entry {
        struct xxx_entry *next;
        struct xxx_entry *prev;
        int flag;
        int info;
        struct rcu_head rch;
};

void xxx_destroy(void *ptr)
{
        struct xxx_entry *xp = ptr;

        kmem_cache_free(xxx_cache, xp);
}

void xxx_delete(struct xxx_entry *xp)
{
        call_rcu(&xp->rch, xxx_destroy, xp);
}
If the cache pointer is not available as a global variable, it can still be embedded in the structure and accessed during destruction as follows:
struct xxx_entry {
        struct xxx_entry *next;
        struct xxx_entry *prev;
        int flag;
        int info;
        kmem_cache_t *cachep;
        struct rcu_head rch;
};

void xxx_destroy(void *ptr)
{
        struct xxx_entry *xp = ptr;

        kmem_cache_free(xp->cachep, xp);
}

void xxx_delete(struct xxx_entry *xp)
{
        call_rcu(&xp->rch, xxx_destroy, xp);
}
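The embedded-cache-pointer idea can be exercised in a compilable userspace sketch. Here my_cache, cache_alloc(), and cache_free() are invented stand-ins for kmem_cache_t, kmem_cache_alloc(), and kmem_cache_free(), and the grace period is modeled as already elapsed, so xxx_delete() invokes the destroyer directly instead of via call_rcu(); only the recover-the-cache-from-the-object step mirrors the kernel code above.

```c
#include <assert.h>
#include <stdlib.h>

struct my_cache {                  /* stand-in for kmem_cache_t */
        const char *name;
        int outstanding;           /* objects allocated minus freed */
};

struct xxx_entry {
        struct my_cache *cachep;   /* embedded cache pointer */
        int info;
};

void *cache_alloc(struct my_cache *cp, size_t size)
{
        cp->outstanding++;
        return calloc(1, size);
}

void cache_free(struct my_cache *cp, void *obj)
{
        cp->outstanding--;
        free(obj);
}

/* Destroyer: recover the cache from the object itself. */
void xxx_destroy(void *ptr)
{
        struct xxx_entry *xp = ptr;

        cache_free(xp->cachep, xp);
}

void xxx_delete(struct xxx_entry *xp)
{
        /* In the kernel: call_rcu(&xp->rch, xxx_destroy, xp);
         * here the grace period is modeled as already over. */
        xxx_destroy(xp);
}
```

Because the destroyer reads the cache pointer out of the object, a single callback signature taking one pointer suffices, no matter how many caches use deferred freeing.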