amino Namespace Reference


Classes

class  CallStdAccumulate
class  CallStdAccumulate_Func
class  BlockingDeque
class  condition_variable
class  Counter
class  StackNode
 The node type, which stores the data and a next pointer. More...
class  EBStack
 This is an implementation of an EBStack (elimination backoff stack). More...
class  ExecutorService
 This class is intended to be inherited by executors. More...
class  CallStdForEach
class  FutureTask
class  Future
 This class represents a return value from a Runnable object. More...
class  AbstractFuture
 This is a base class for other Future implementations. More...
class  NodeType
 The node type, which stores the data and a next pointer. More...
class  FindStateHolder
 A state holder, which stores a pointer to the current node and pointers to that node's previous and next nodes. More...
struct  __list_iterator
 The iterator of the list. More...
struct  __list_const_iterator
 The const iterator of the list. More...
class  List
 This is an implementation of a lock-free linked list data structure. The implementation is according to the paper High Performance Dynamic Lock-Free Hash Tables and List-Based Sets by Maged M. Michael, 2002. To gain a complete understanding of this data structure, please first read this paper, available at: http://www.research.ibm.com/people/m/michael/spaa-2002.pdf. More...
class  unique_lock
struct  Value
struct  DictNode
class  LockFreeDictionary
struct  PQNode
class  LockFreePriorityQueue
class  SetNode
 The node type of the lock-free set, which stores the element and its hash key. More...
class  Set
 This is an implementation of a lock-free hash set data structure, based on the algorithms described in two papers: "Split-Ordered Lists - Lock-free Resizable Hash Tables" by Ori Shalev (Tel Aviv University) and Nir Shavit (Tel-Aviv University and Sun Microsystems Laboratories), and "High Performance Dynamic Lock-Free Hash Tables and List-Based Sets" by Maged M. Michael. More...
class  mutex
class  recursive_mutex
class  OrderedList
 This is a lock-free ordered list. More...
class  SortTask
 This class represents a sort task which can be executed by an executor. More...
class  MergerTask
 This class represents a merge task which can be executed by an executor. The two regions to be merged must be consecutive. More...
class  LockFreeQueue
 This queue implementation is based on the algorithm defined in the following paper: Simple, fast, and practical non-blocking and blocking concurrent queue algorithms by Michael, M. M. and Scott, M. L., 1996 (PODC96). ABA prevention follows Maged M. Michael's SMR (hazard pointer) approach. A later paper pointed out that Maged Michael's queue doesn't scale well: Moir, M., Nussbaum, D., Shalev, O., and Shavit, N. 2005. Using elimination to implement scalable and lock-free FIFO queues. In Proceedings of the Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures (Las Vegas, Nevada, USA, July 18 - 20, 2005). SPAA '05. ACM Press, New York, NY, 253-262. More...
class  ShavitQueue
 This is an implementation of a lock-free FIFO queue data structure. The implementation is according to the paper An Optimistic Approach to Lock-Free FIFO Queues by Edya Ladan-Mozes and Nir Shavit, 2004. To gain a complete understanding of this data structure, please first read this paper, available at: http://www.springerlink.com/content/h84dfexjftdal4p4/. More...
class  SingleExecutor
 An executor which executes tasks immediately in the current thread. More...
class  LockFreeStack
 This is a lock-free stack, based on: IBM, IBM System/370 Extended Architecture, Principles of Operation, 1983. The ABA prevention method is based on Maged M. Michael's Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects. More...
class  SyncList
 This is an implementation of a lock-based list. It simply adds a lock before every operation of an ordinary list (such as std::list<T>). More...
class  Runnable
class  Thread
class  ThreadPoolExecutor
 An executor which pools a fixed number of threads. Different tasks can reuse threads across executions. More...
class  CallStdTransformUnary
class  CallStdTransformBinary
class  ws_scheduler
 A work stealing scheduler which stores pointers to TaskType. More...

Typedefs

typedef ws_scheduler< Runnable >  Scheduler
typedef Scheduler *  pScheduler

Enumerations

enum  LOCK_TYPE { COMMON_LOCK, TRY_LOCK, NO_LOCK }

Functions

void aa_sort (int *array, int *end)
void incore_sort (int *array, int *end)
 Sort the array with an in-core algorithm.
void aa_merge (int *start, int *middle, int *end)
 This method merges two consecutive arrays which are both already sorted in ascending order.
template<typename iteratorT, typename T, typename Executor>
T accumulate (Executor exec, iteratorT first, iteratorT last)
template<typename iteratorT, typename FuncT, typename T, typename Executor>
T accumulate (Executor exec, iteratorT first, iteratorT last, FuncT func)
template<typename iteratorT, typename T, typename Executor>
T accumulate (Executor &exec, int threadNum, iteratorT first, iteratorT last)
template<typename iteratorT, typename T, typename FuncT, typename Executor>
T accumulate (Executor &exec, int threadNum, iteratorT first, iteratorT last, FuncT func)
template<typename iteratorT, typename FuncT, typename Executor>
void for_each (Executor &exec, iteratorT begin, iteratorT end, FuncT func)
template<typename iteratorT, typename FuncT, typename Executor>
void for_each (Executor &exec, int divNum, iteratorT begin, iteratorT end, FuncT func)
template<typename T>
bool operator== (const __list_iterator< T > &lhs, const __list_const_iterator< T > &rhs)
template<typename T>
bool operator!= (const __list_iterator< T > &lhs, const __list_const_iterator< T > &rhs)
template<typename value_type, typename local_sort, typename merge_sort, typename rand_access_iter, typename ExecutorType>
void parallel_sort (rand_access_iter start, rand_access_iter end, local_sort _sort, merge_sort _merge, int threadNum, ExecutorType *executor)
 The most common version of parallel sort, which utilizes the STL sort and merge in a parallel fashion.
template<typename ExecutorType, typename value_type>
void parallel_sort (value_type *start, value_type *end, int threadNum, ExecutorType *executor)
template<typename InputIterator, typename OutputIterator, typename UnaryFunction, typename Executor>
OutputIterator transform (Executor &exec, InputIterator first, InputIterator last, OutputIterator result, UnaryFunction op)
template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename BinaryFunction, typename Executor>
OutputIterator transform (Executor &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2, OutputIterator result, BinaryFunction binary_op)
template<typename InputIterator, typename OutputIterator, typename UnaryFunction, typename Executor>
OutputIterator transform (Executor &exec, int threadNum, InputIterator first, InputIterator last, OutputIterator result, UnaryFunction op)
template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename BinaryFunction, typename Executor>
OutputIterator transform (Executor &exec, int threadNum, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2, OutputIterator result, BinaryFunction op)
void printVector (int *start)
void printAll (int *start, int length)

Variables

const int SHRINK_FACTOR_UP = 13
const int SHRINK_FACTOR_DOWN = 10


Typedef Documentation


Enumeration Type Documentation

Enumerator:
COMMON_LOCK 
TRY_LOCK 
NO_LOCK 


Function Documentation

void amino::aa_merge ( int *  start,
int *  middle,
int *  end 
)

This method merges two consecutive arrays which are both already sorted in ascending order.

Since this method uses SIMD instructions to merge, it requires that the two arrays are 4-byte aligned and that each array's length is a multiple of 4.

Parameters:
start The start of the 1st array to merge
middle The start of the 2nd array to merge
end The end of the 2nd array to merge
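
A minimal usage sketch follows. The buffer allocation via posix_memalign, the example values, and the need to include the appropriate Amino header are assumptions for illustration, not taken from this reference; only the aa_merge signature above is documented.

#include <cstdio>
#include <cstdlib>
// Include the Amino header that declares aa_merge (exact name not given here).

int main() {
    int *buf;
    // 16-byte alignment also satisfies the 4-byte requirement stated above.
    if (posix_memalign(reinterpret_cast<void **>(&buf), 16, 8 * sizeof(int)) != 0)
        return 1;

    int init[8] = {1, 4, 6, 9,   // first sorted run
                   2, 3, 5, 8};  // second sorted run
    for (int i = 0; i < 8; ++i) buf[i] = init[i];

    // Merge [buf, buf+4) and [buf+4, buf+8) into one ascending run.
    amino::aa_merge(buf, buf + 4, buf + 8);

    for (int i = 0; i < 8; ++i) printf("%d ", buf[i]);
    printf("\n");
    free(buf);
    return 0;
}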

void amino::aa_sort ( int *  array,
int *  end 
)

template<typename iteratorT, typename T, typename FuncT, typename Executor>
T amino::accumulate ( Executor &  exec,
int  threadNum,
iteratorT  first,
iteratorT  last,
FuncT  func 
) [inline]

template<typename iteratorT, typename T, typename Executor>
T amino::accumulate ( Executor &  exec,
int  threadNum,
iteratorT  first,
iteratorT  last 
) [inline]

template<typename iteratorT, typename FuncT, typename T, typename Executor>
T amino::accumulate ( Executor  exec,
iteratorT  first,
iteratorT  last,
FuncT  func 
) [inline]

template<typename iteratorT, typename T, typename Executor>
T amino::accumulate ( Executor  exec,
iteratorT  first,
iteratorT  last 
) [inline]
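
A hedged sketch of the threaded overload follows. The ThreadPoolExecutor constructor arguments, the headers to include, and any executor shutdown calls are assumptions, not taken from this page. iteratorT and T are supplied explicitly because T appears only in the return type and cannot be deduced from the arguments.

#include <iostream>
#include <vector>
// Include the Amino headers for accumulate and ThreadPoolExecutor (names not given here).

int main() {
    std::vector<int> v(1000);
    for (std::size_t i = 0; i < v.size(); ++i) v[i] = static_cast<int>(i);

    amino::ThreadPoolExecutor exec(4);  // assumption: pool size passed to the constructor

    // Sum the range with 4 sub-tasks; iteratorT and T are explicit, Executor is deduced.
    int sum = amino::accumulate<std::vector<int>::iterator, int>(
        exec, 4, v.begin(), v.end());

    std::cout << "sum = " << sum << std::endl;  // expected: 499500
    return 0;
}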

template<typename iteratorT, typename FuncT, typename Executor>
void amino::for_each ( Executor &  exec,
int  divNum,
iteratorT  begin,
iteratorT  end,
FuncT  func 
) [inline]

template<typename iteratorT, typename FuncT, typename Executor>
void amino::for_each ( Executor &  exec,
iteratorT  begin,
iteratorT  end,
FuncT  func 
) [inline]
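
A sketch of the divNum overload follows, under the same assumptions about the executor and headers as the accumulate sketch above; the functor mutates each element in place.

#include <vector>
// Include the Amino headers for for_each and ThreadPoolExecutor (names not given here).

struct Doubler {
    void operator()(int &x) const { x *= 2; }
};

int main() {
    std::vector<int> v;
    for (int i = 0; i < 100; ++i) v.push_back(i);

    amino::ThreadPoolExecutor exec(4);  // assumption: pool size passed to the constructor
    // Apply Doubler to every element, splitting the range into 4 sub-ranges.
    amino::for_each(exec, 4, v.begin(), v.end(), Doubler());
    return 0;
}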

void amino::incore_sort ( int *  array,
int *  end 
)

Sort the array with an in-core algorithm.

Parameters:
array Start address of the array, which must be 16-byte aligned
end End of the array; the array length must satisfy length % 16 == 0
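
A usage sketch under the constraints above (16-byte alignment via posix_memalign and a length that is a multiple of 16); the values and the header to include are assumptions for illustration.

#include <cstdio>
#include <cstdlib>
// Include the Amino header that declares incore_sort (exact name not given here).

int main() {
    const int n = 32;  // multiple of 16, per the constraint above
    int *a;
    if (posix_memalign(reinterpret_cast<void **>(&a), 16, n * sizeof(int)) != 0)
        return 1;
    for (int i = 0; i < n; ++i) a[i] = n - i;  // descending input

    amino::incore_sort(a, a + n);  // sorts [a, a+n) in place

    for (int i = 0; i < n; ++i) printf("%d ", a[i]);
    printf("\n");
    free(a);
    return 0;
}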

template<typename T>
bool amino::operator!= ( const __list_iterator< T > &  lhs,
const __list_const_iterator< T > &  rhs 
) [inline]

template<typename T>
bool amino::operator== ( const __list_iterator< T > &  lhs,
const __list_const_iterator< T > &  rhs 
) [inline]

template<typename ExecutorType, typename value_type>
void amino::parallel_sort ( value_type *  start,
value_type *  end,
int  threadNum,
ExecutorType *  executor 
) [inline]

template<typename value_type, typename local_sort, typename merge_sort, typename rand_access_iter, typename ExecutorType>
void amino::parallel_sort ( rand_access_iter  start,
rand_access_iter  end,
local_sort  _sort,
merge_sort  _merge,
int  threadNum,
ExecutorType *  executor 
) [inline]

The most common version of parallel sort, which utilizes the STL sort and merge in a parallel fashion.
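
A sketch of the simpler pointer-based overload above follows; the ThreadPoolExecutor construction and the headers to include are assumptions, not taken from this page. All template arguments are deduced from the call.

#include <vector>
// Include the Amino headers for parallel_sort and ThreadPoolExecutor (names not given here).

int main() {
    const int n = 1 << 16;
    std::vector<int> data(n);
    for (int i = 0; i < n; ++i) data[i] = n - i;  // descending input

    amino::ThreadPoolExecutor exec(4);  // assumption: pool size passed to the constructor
    // Sort the raw range [start, end) using 4 threads and the given executor.
    amino::parallel_sort(&data[0], &data[0] + n, 4, &exec);
    return 0;
}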

void amino::printAll ( int *  start,
int  length 
)

void amino::printVector ( int *  start  ) 

template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename BinaryFunction, typename Executor>
OutputIterator amino::transform ( Executor &  exec,
int  threadNum,
InputIterator1  first1,
InputIterator1  last1,
InputIterator2  first2,
OutputIterator  result,
BinaryFunction  op 
) [inline]

template<typename InputIterator, typename OutputIterator, typename UnaryFunction, typename Executor>
OutputIterator amino::transform ( Executor &  exec,
int  threadNum,
InputIterator  first,
InputIterator  last,
OutputIterator  result,
UnaryFunction  op 
) [inline]

template<typename InputIterator1, typename InputIterator2, typename OutputIterator, typename BinaryFunction, typename Executor>
OutputIterator amino::transform ( Executor &  exec,
InputIterator1  first1,
InputIterator1  last1,
InputIterator2  first2,
OutputIterator  result,
BinaryFunction  binary_op 
) [inline]

template<typename InputIterator, typename OutputIterator, typename UnaryFunction, typename Executor>
OutputIterator amino::transform ( Executor &  exec,
InputIterator  first,
InputIterator  last,
OutputIterator  result,
UnaryFunction  op 
) [inline]
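
A sketch of the unary overload with an explicit thread count; as in the earlier sketches, the executor construction and the headers to include are assumptions.

#include <vector>
// Include the Amino headers for transform and ThreadPoolExecutor (names not given here).

struct Square {
    int operator()(int x) const { return x * x; }
};

int main() {
    std::vector<int> in(256), out(256);
    for (std::size_t i = 0; i < in.size(); ++i) in[i] = static_cast<int>(i);

    amino::ThreadPoolExecutor exec(4);  // assumption: pool size passed to the constructor
    // Write op(in[i]) to out[i] for the whole range, split across 4 sub-tasks.
    amino::transform(exec, 4, in.begin(), in.end(), out.begin(), Square());
    return 0;
}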


Variable Documentation

const int amino::SHRINK_FACTOR_DOWN = 10

const int amino::SHRINK_FACTOR_UP = 13


Generated on Tue Dec 9 13:39:39 2008 for Amino by  doxygen 1.5.6