/*! \page dabc_introduction Programmer manual

# Introduction

The **Programmer Manual** describes the aspects of the Data Acquisition Backbone Core framework that are necessary for programming user extensions. To begin with, this overview chapter explains the software objects and their collaboration, the intended mechanisms for controls and configuration, and the dependencies of packages and libraries, and gives a short reference of the most important classes. The following chapters contain full explanations of the [dabc] interface and service classes, describe the set-up with parameters, and give a reference of the Java GUI plug-in possibilities. Finally, some implementation examples are treated in detail to illustrate these issues: the adaptation of the GSI legacy DAQ system [mbs] within [dabc]; the application of a distributed event builder network [bnet]; the data import via UDP from a readout controller board [roc]; and the use of a PCI express board [abb].

# Role and functionality of the objects

## Modules

All processing code runs in module objects. There are two general types of modules: the dabc::ModuleSync and the dabc::ModuleAsync.

### Class dabc::ModuleSync

Each synchronous module is executed by a dedicated thread. The thread executes the method [dabc::ModuleSync::MainLoop()](\ref dabc::ModuleSync::MainLoop) with arbitrary code, which **may block** the thread. In blocking calls of the framework (resource or port wait), command callbacks may optionally be executed implicitly ("non-strictly blocking mode"). In the "strictly blocking mode", the blocking calls do nothing but wait. A *timeout* may be set for all blocking calls; this can optionally throw an exception when the time is up. On a timeout with exception, either the [MainLoop()](\ref dabc::ModuleSync::MainLoop) is left and the exception is then handled in the framework thread, or the [MainLoop()](\ref dabc::ModuleSync::MainLoop) itself catches and handles the exception. On state machine commands (e.g.
`Halt` or `Suspend`, see section @ref prog_fsm), the blocking calls are also left by exception, thus putting the mainloop thread into a stopped state.

### Class dabc::ModuleAsync

Several asynchronous modules may be run by the *same thread*. The thread processes an *event queue* and executes the appropriate *callback functions* of the module that is the receiver of the event. Events are fired for data input or output, command execution, and when a requested resource (e.g. a memory buffer) becomes available. **The callback functions must never block the working thread**. Instead, a callback must *return* if further processing requires waiting for a requested resource. Thus each callback function must check the available resources explicitly whenever it is entered.

## Commands

A module may process dabc::Command objects in its dabc::Module::ExecuteCommand() method. If necessary, a module can publish command definitions, so that the control system knows which kinds of commands can be submitted to the module.

## Parameters

A module may register dabc::Parameter objects. Parameters are accessible by name; their values can be monitored and optionally changed by the control system. Initial parameter values can be set from XML configuration files.

## Manager

The modules are organized and controlled by one manager object of class dabc::Manager; this singleton instance is persistent, independent of the application's state. One can always access the manager via the dabc::mgr variable. The manager is an *object manager* that owns and keeps all registered basic objects in a folder structure. The manager dispatches the different events of these objects and delivers them to the control system (if one is configured). This covers registering, sending, and receiving of commands; registering, updating, and unregistering of parameters; error logging and global error handling. The manager receives and *dispatches commands* to the destination modules, where they are queued and eventually executed by the module threads.
The manager has an independent manager thread, used for the execution of manager commands, parameter timeout processing, and so on.

## Memory and buffers

Data in memory is referred to by dabc::Buffer objects. Allocated memory areas are kept in dabc::MemoryPool objects. In the general case, a dabc::Buffer contains a list of references to scattered memory fragments from a memory pool. Typically a buffer references exactly one segment; a buffer may also have an empty list of references. The auxiliary class dabc::Pointer offers methods to treat the scattered fragments transparently from the user's point of view (the concept of a "virtual contiguous buffer"). Moreover, the user may also get direct access to each of the fragments.

The buffers are provided by one or several memory pools, which preallocate a reasonable amount of memory from the operating system. A memory pool may keep several sets of buffers, each set for a different configurable memory size. A module communicates with a memory pool via a pool handle. A new buffer may be requested from a memory pool by size. Depending on the module type and mode, this request may either block until an appropriate buffer is available, or it may return an error value if it cannot be fulfilled. The delivered buffer has at least the requested size, but may be larger. A buffer as delivered by the memory pool is contiguous.

Several buffers may refer to the same fragment of memory. Therefore, the memory owned by the memory pool has a reference counter which is incremented for each buffer that refers to any of the contained fragments. When a user frees a buffer object, the reference counters of the referred memory blocks are decremented. If a reference counter becomes zero, the memory is marked as "free" in the memory pool.

## Ports

Buffers enter and leave a module through ports. There are input and output ports. Each port has a buffer queue of configurable length. A module may have several input and/or output ports. The ports are owned by the module.
Depending on the module type, there are different possibilities to work with the ports in the processing functions of the module. These are described in the respective sections of dabc::ModuleSync and dabc::ModuleAsync.

## Transport

Outside the modules, the ports are connected to dabc::Transport objects. On each node, a transport may either transfer buffers between the ports of different modules (local data transport), or it may connect the module port to a data source or sink (e.g. file i/o, network connection, hardware readout). In the latter case, it is also possible to connect ports of two modules on different nodes by means of a transport instance of the same kind on each node (e.g. an *InfiniBand verbs* transport connecting a sender module on node A with a receiver module on node B via a *verbs* device connection).

## Device

In some cases a device object (for instance, verbs::Device) creates the transport objects for the modules and manages them. Such a device may have one or several transports. The threads that run the transport functionality are created by the device. If the dabc::Transport implementation shall be able to block (e.g. on socket receive), there can be only one transport for this thread. A dabc::Device instance usually represents an I/O component (e.g. a network card); there may be more than one dabc::Device instance of the same type in an application scope. The device objects are owned by the manager. A device is persistent, independent of the connection state of its transports. In contrast, a transport is created when a connection is established and deleted when the connection is closed. A device may register parameters and define commands; this is the same functionality as available for modules.

## Application

The dabc::Application class is a singleton object that represents the running application of the DAQ node (i.e. one per system process).
It provides the main configuration parameters and defines the runtime actions in the different control system states (see section ...). In contrast to the dabc::Manager, the subclass of dabc::Application defines the experiment-specific behaviour of the DAQ.

\section prog_app_creation Creation of objects

\subsection prog_use_app Using dabc::Application class

\subsection prog_init_func Initialization function

\subsection prog_auto_creation Automatic modules creation

\section prog_control_and_config Controls and configuration

[need to be revised]

\subsection prog_fsm Finite state machine

The running state of the DAQ system is ruled by a *Finite State Machine* on each node of the cluster. The manager provides an interface to switch the application state by the external control system. This may be done by calling state change methods of the manager, or by submitting state change commands to the manager. The finite state machine itself is not necessarily part of the manager, but may be provided by an external control system. In this case, the manager defines the states, but does not check whether a state transition is allowed. However, the [dabc] core system offers a native state machine to be used in the controls implementation.

Some of the application states may be propagated to the active components (modules, device objects), e.g. the *Running* or *Ready* state, which correspond to the activity of the thread. Other states like *Halted* or *Failure* do not match a component state; e.g. in the *Halted* state, all modules are deleted and thus do not have an internal state. The granularity of the control system state machine is not finer than the node application. There are 5 generic states to treat all set-ups:

- Halted : The application is not configured and not running. No modules, transports, or devices exist.
- Configured : The application is mostly configured, but not running. Modules and devices are created. Local port connections are done.
  Remote transport connections may not yet be fully established, since some connections require active negotiations between different nodes. Thus, the final connecting is done between *Configured* and *Ready*.
- Ready : The application is fully configured, but not running (modules are stopped).
- Running : The application is fully configured and running.
- Failure : This state is reached when there is an error in a state transition function. Note that a run error during the *Running* state does not lead to *Failure*, but rather stops the run in the usual way, ending in *Ready*.

The state transitions between the 5 generic states correspond to commands of the control system for each node application:

- DoConfigure : between *Halted* and *Configured*. The application plug-in creates application-specific devices, modules and memory pools. The application typically establishes all local port connections.
- DoEnable : between *Configured* and *Ready*. The application plug-in may establish the necessary connections between remote ports. The framework checks if all required connections are ready.
- DoStart : between *Ready* and *Running*. The framework automatically starts all modules, transport and device actions.
- DoStop : between *Running* and *Ready*. The framework automatically stops all modules, transport and device actions, i.e. the code is suspended to *wait* at the next appropriate *waiting point* (e.g. begin of MainLoop(), wait for a requested resource). Note: queued buffers are not flushed or discarded on *Stop*!
- DoHalt : switches the states *Ready*, *Running*, *Configured*, or *Failure* to *Halted*. The framework automatically deletes all registered objects (transport, device, module) in the correct order. However, the user may explicitly specify at creation time that an object shall be persistent (e.g. a device may be kept until the end of the process once it has been created).
## Commands

The control system may send (user-defined) commands to each component (module, device, application). Execution of these commands is independent of the state machine transitions.

## Parameters for configuration and monitoring

The *configuration* is done using parameter objects. The manager provides an interface to register parameters with the configuration/control system. At application startup time, the configuration system may set the parameters from a configuration file (e.g. XML configuration files). During the application lifetime, the control system may change values of the parameters by command. However, since the set-up is changed at *DoConfigure* time only, it may be forbidden to change true configuration parameters except when the application is *Halted*. Otherwise, there would be the possibility of a mismatch between the monitored parameter values and the actually running set-up. However, the control system may change local parameter objects by command in any state to modify minor system properties independent of the configuration set-up (e.g. switching on debug output, changing details of processing parameters). The current parameters may be stored back to the XML file.

Apart from the configuration, the control system may use local parameter objects for *monitoring* the components. When monitoring parameters change, the control system is updated via interface methods of the manager and may refresh the GUI representation. Chapter [prog_setup] will explain the usage of parameters for configuration in detail.

# Package and library organisation

## Core system

The **Core system** package uses namespace [dabc](dabc::). It defines all base classes and interfaces, and implements basic functionality for object organization, memory management, thread control, and event communication. Section [@ref dabc_core_classes] gives a brief overview of the main classes.

## Control and configuration system

Depends on the **Core system**.
Defines the functionality of the state machine, command transport, parameter monitoring and modification. Implements the connection of configuration parameters with a database (i.e. a file in the trivial case). Note that default implementations of the state machine and a configuration file parser are already provided by the **Core system**.

## Plugin packages

Plugin packages may provide special implementations of the core interface classes: dabc::Device, dabc::Transport, dabc::Module, or dabc::Application. Usually, these classes are made available to the system by means of a corresponding dabc::Factory that is automatically registered in the dabc::Manager when the plugin library is loaded. When installed centrally, the *Plugin packages* are kept in subfolders of the `$DABCSYS/plugins` directory. Alternatively, the *Plugin packages* may be installed in a user directory and linked against the *Core system* installation.

### Bnet package

This package uses namespace [bnet](bnet::). It depends on the *Core system* and implements modules to cover a generic event builder network. It defines interfaces (virtual methods) of the special Bnet modules to implement user-specific code in subclasses. The *Bnet package* provides a factory to create specific Bnet modules by class name. It also provides application classes to define generic functionality for worker nodes (class bnet::WorkerApplication) and controller nodes (class bnet::ClusterApplication). These may be used as base classes in further Bnet application packages.

### Transport packages

Depend on the **Core system**, and may depend on external libraries or hardware drivers. They implement dabc::Device and dabc::Transport classes for specific data transfer mechanisms, e.g. **verbs** or **tcp/ip socket**. They may also implement dabc::DataInput and dabc::DataOutput classes for special data input or output (for instance, files or generators). Each transport package provides a factory to create a specific device by class name.
However, the most common transport implementations are put directly into the **Core system**, e.g. local memory or socket transport.

## Application packages

They depend on the **Core system**, and may depend on several transport packages and on other plugin packages. They may also depend on other application packages. **Application packages** provide the actual implementation of the core interface dabc::Application class that defines the set-up and behaviour of the DAQ application in the different execution states. This may be a subclass of a specific existing application (e.g. a subclass of bnet::WorkerApplication). Additionally, they may provide experiment-specific dabc::Module classes. When installed centrally, the **Application packages** are kept in subfolders of the `$DABCSYS/applications` directory. Alternatively, an **Application package** may be installed in a user directory and linked against the **Core system** installation and the required **Plugin packages**.

## Content of actual DABC distribution

The complete system consists of different packages. Each package is represented by a subproject of the source code with its own namespace. There may be one or more shared libraries for each package. The main packages are as follows:

- libDabcCore
- libDabcMbs
- libDabcVerbs
- libDabcHadaq
- libDabcEzca
- libDabcDim
- libDabcRfio

# History - first requirements

There was a [first requirements draft](@ref dabc_first_concept), formulated at the beginning of 2007, when DABC development was started. Not everything was implemented according to these ideas, but the main concept of the data-flow engine was realized.

*/