\documentclass[11pt,a4paper,final]{article}
\usepackage[latin1]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{siunitx}
\usepackage{bbm}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\title{ALICE GEM IROC prototype}
\author{Philipp Gadow}
\begin{document}
\maketitle
\begin{center}
\begin{verbatim} philipp.gadow@mytum.de \end{verbatim}
\end{center}
\section{Introduction}
This document describes the software used for the bachelor's thesis ``dE/dx studies with $\pi$ and e tracks of the ALICE GEM IROC''. First, the reconstruction chain that creates tracks from the ADC data is described. Then a short description of the individual macros follows, which are meant to be executed with ROOT: start ROOT, compile a macro with \texttt{.L macroname.C+} and execute it by calling its function. Tab completion is very useful to see the parameters of a macro. If you have questions regarding the macros or the reconstruction chain, don't hesitate to contact me by mail.
\section{The TPC Reconstruction Chain}
In this section the steps of the track feature extraction are presented, from the signals of the ADCs connected to the single pads to the tracks of particles traversing the TPC. The reconstruction of particle tracks is done in the context of the software framework fopiroot, which is written in the C++ programming language. fopiroot is based on the FairRoot package developed at GSI, Germany, which is an extension of the ROOT framework developed at CERN. The reconstruction is done by tasks that can conveniently be run from macros. The reconstruction chain is shown in figure \ref{figRecochain}.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.5]{Abbildungen/recoschemepgadow.pdf}
\caption{Structure of the reconstruction chain}
\label{figRecochain}
\end{figure}
\subsection{Pulse Shape Analysis}
The Pulse Shape Analysis (PSA) is the first algorithm that processes data from the single readout pads. It combines consecutive data samples into a pad hit and assigns it a time and an amplitude. The PSA searches the samples for local minima, local valleys or samples below a certain threshold, at which it divides the samples into pulses. Such a pulse is processed and assigned an amplitude $A$, defined as the sum of the amplitudes of all samples in the pulse, and a time; both are stored as a pad hit:
\begin{align}
A_{\text{pad hit}} = \sum_i A_{\text{sample }i} \qquad \quad t_{\text{pad hit}} = t_{\text{max}} - t_{\text{shaping}}.
\end{align}
The identification number of the pad (pad ID) is also stored with the pad hit to determine the position of the pad in the detector and to obtain spatial information for the cluster finding. The time determination can be accomplished in different ways. The simplest way is to define the time of the pad hit $t$ by the time of the sample with the maximum amplitude, shifted backwards in time by the peaking time of the signal shaper $t_{\text{shaping}} = \SI{120}{\nano\second}$. Another method, a constant fraction discriminator (CFD) as used in the electromagnetic calorimeter trigger of the COMPASS experiment, was also implemented.
\label{chapCFD}
\begin{figure}[h!]
\centering
\includegraphics[width = 10 cm]{Abbildungen/cfdillu2.pdf}
\caption{Constant Fraction Discriminator (CFD): The original signal is shown in blue crosses, the delayed signal in pink crosses and the difference between both signals in yellow.
The time of the pad hit assigned by the CFD is shown by a red line. It is obtained by subtracting a calibration constant from the time at which the difference between the delayed signal and the signal crosses the abscissa.}
\end{figure}
The algorithm calculates the difference between a pulse and a delayed and amplified version of the pulse. To obtain the time of the pulse, the point where the linear interpolation of this difference crosses the time axis is calculated. From this point a calibration constant is subtracted and the resulting value is assigned as the time of the pulse.
\subsection{Cluster Finding}
\label{secClustering}
The Cluster Finder takes the pad hits and combines hits that are likely to correspond to a common primary ionization into a cluster. Locally adjacent hits are grouped into a cluster, based on the information provided by a pad plane which represents the spatial distribution of the pads in the $x$-$y$ plane. The time signal of a pad hit corresponds to the $z$-coordinate of the signal. First the algorithm sorts all pad hits in a certain time slice by decreasing amplitude. Starting with the pad hit with the highest amplitude, it loops over the pad hits and checks each pad hit against all clusters created so far. The cluster finder adds a pad hit to a cluster only if the pad hit comes from the same pad as, or from a pad neighbouring, any unsplit pad hit already in the cluster and if, in addition, the processed pad hit lies within a certain time window around the cluster's centre of gravity. If no matching cluster is found, a new cluster is created from the pad hit. If a pad hit matches more than one cluster, its amplitude is split between the matching clusters. The amplitude of a cluster is calculated as the sum of the amplitudes of the pad hits assigned to it. The cluster is assigned a position by a weighted mean method:
\begin{align}
\vec{x}_{\text{Cl}} = \frac{\sum_{\text{pad hits}} \vec{x}_{\text{pad hit}} \cdot A_{\text{pad hit}}}{\sum_{\text{pad hits}} A_{\text{pad hit}}}.
\end{align}
The quantity $\vec{\sigma} = (\sigma_x,\sigma_y,\sigma_z)$ describes the cluster's spatial error. It is estimated from the standard deviation of the signal positions weighted with the cluster's amplitude $A_{\text{Cl}}$. By dividing the variance by the amplitude, which is proportional to the number of ionization electrons, the reduction of the statistical fluctuations from electron drift is taken into account\footnote{The error should then be calculated as $\sigma_i = C \cdot \sqrt{\frac{\text{Var}_i}{A_{\text{Cl}}}}$. However, in the code it was implemented as written in the text. Since the arbitrary factor $C$ introduces a large systematic uncertainty, this problem is estimated to have a minor effect. The algorithm should be revised.}. An arbitrary factor $C$ was introduced to scale the errors to an appropriate size. The variance and $\sigma_i$ are calculated as
\begin{align}
\text{Var}_i &= \frac{1}{A_{\text{Cl}}} \sum_{\text{pad hit}} A_{\text{pad hit}}(\vec{x}_{\text{pad hit}, i} - \vec{x}_{\text{Cl}, i})^2\\
\sigma_i &= \frac{C}{A_{\text{Cl}}} \cdot \sqrt{\text{Var}_i}.
\end{align}
For clusters consisting of one hit, $\vec{\sigma}$ is calculated as
\begin{align}
\sigma_i = \frac{C}{A_{\text{Cl}}} \cdot \sqrt{\frac{d_i}{12}},
\end{align}
where $d_x$ and $d_y$ denote the length and width of a pad and $d_z$ the jitter in $z$.
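
The following minimal, self-contained C++ sketch illustrates the weighted-mean position and error calculation defined above. It is not the fopiroot implementation; the pad-hit structure, the numerical values and the scale factor $C$ are assumptions made only for the purpose of the example.
\begin{verbatim}
// Minimal sketch of the weighted-mean cluster position and error,
// following the formulas above (not the fopiroot implementation).
#include <cmath>
#include <cstdio>
#include <vector>

struct PadHit { double x[3]; double amp; };   // assumed simple pad-hit structure

int main() {
  std::vector<PadHit> hits = { {{1.0, 2.0, 10.0}, 50.0},
                               {{1.5, 2.0, 10.5}, 30.0} };  // illustrative values
  const double C = 1.0;                        // arbitrary scale factor (assumption)

  double ampCl = 0.0, xCl[3] = {0.0, 0.0, 0.0};
  for (const PadHit& h : hits) {               // amplitude-weighted mean position
    ampCl += h.amp;
    for (int i = 0; i < 3; ++i) xCl[i] += h.amp * h.x[i];
  }
  for (int i = 0; i < 3; ++i) xCl[i] /= ampCl;

  double sigma[3];
  for (int i = 0; i < 3; ++i) {                // variance and error as in the text
    double var = 0.0;
    for (const PadHit& h : hits) var += h.amp * std::pow(h.x[i] - xCl[i], 2);
    var /= ampCl;
    sigma[i] = C / ampCl * std::sqrt(var);
  }
  std::printf("x = (%.2f, %.2f, %.2f), sigma_x = %.3f\n",
              xCl[0], xCl[1], xCl[2], sigma[0]);
  return 0;
}
\end{verbatim}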
A two-dimensional clustering in padrows and time has been implemented by restricting the existing clustering algorithm to search for adjacent pad hits only within the same padrow and in time.
\begin{figure}[h!]
\centering
\includegraphics[width = 1.0\textwidth]{Abbildungen/clustering.pdf}
\caption{Illustration of the 3D cluster finding algorithm working on event 15 in run 681, pad hits (shown as cylinders) are grouped into clusters (shown as spheres). A cylinder's radius is proportional to the digi amplitude in a logarithmic scaling.}
\label{figClustering}
\end{figure}
\subsection{Pattern Recognition}
In order to reconstruct the trajectories of particles passing through the detector, a Pattern Recognition algorithm has to group clusters which originate from a single physical track. Usually in a particle physics experiment the patterns to be recognized are helices, assuming helical trajectories of charged particles in a homogeneous magnetic field. For this analysis however, due to the absence of a magnetic field, the patterns to be recognized are straight lines. The framework provides two different algorithms: a Riemann track follower, written by Johannes Rauch, which fits helices to a set of hits by an extended Riemann fit, and a Hough Transform pattern recognition algorithm written by Felix B\"{o}hmer.
The Riemann Pattern Recognition algorithm associates the three-dimensional space points provided by the clusters to tracks for which a helical shape is assumed. A helix is fitted to a set of hits by an extended Riemann fit. This fit makes use of the Riemann Transformation, a stereographic projection mapping a plane onto the so-called Riemann sphere with a diameter of 1, placed on top of the origin of the complex plane. The Riemann Transformation maps circles and straight lines on the plane to circles on the sphere, which correspond to planes in space, thus reducing the challenging task of circle fitting on the plane to the task of plane fitting on the Riemann sphere. The Riemann sphere is scaled to an appropriate size with respect to the detector's geometry. After this fast fitting algorithm, the hits along the track are sorted properly and estimates of the track parameters are provided to the track fitting algorithm to reconstruct the particle trajectories.
\begin{figure}[h!]
\centering
\includegraphics[width = 1.0\textwidth]{Abbildungen/riemanntrackbuilding.pdf}
\caption{Scheme of the track building process with the Riemann track follower}
\label{figRiemann1}
\end{figure}
A schematic of the algorithm is shown in figure \ref{figRiemann1}. The algorithm starts with a first tracklet which contains only one hit; then the algorithm loops over the presorted clusters and checks each hit against each existing tracklet. A matched correlator delivers a matching quality; if both hit-track correlators are matched, the hit is assigned to the best matching tracklet. Two correlators are applied:
\begin{itemize}
\item A proximity correlator checks the proximity of a cluster to a track by finding the nearest cluster in the tracklet and delivering the distance between those clusters as a matching quality.
\item A helix correlator calculates the distance between the cluster and the helix defining the tracklet. By Newton's method, the minimum distance is found and delivered as a matching quality.
\end{itemize}
If the matching quality of a correlator is smaller than a definable cut, the tracklet survives the correlator. If no tracklet matches a hit, a new tracklet is created from it, as illustrated in the sketch below.
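
The following simplified C++ sketch illustrates this hit-to-tracklet assignment logic using the proximity correlator only. It is a minimal illustration, not the Riemann track follower itself; the data structures, the cut value and the hit coordinates are assumptions.
\begin{verbatim}
// Simplified sketch of hit-to-tracklet assignment with a proximity
// correlator only (the real algorithm also applies a helix correlator).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

struct Hit { double x, y, z; };
struct Tracklet { std::vector<Hit> hits; };

// Proximity correlator: distance of the hit to the nearest hit in the tracklet.
double proximity(const Hit& h, const Tracklet& t) {
  double best = std::numeric_limits<double>::max();
  for (const Hit& o : t.hits) {
    double d = std::sqrt(std::pow(h.x - o.x, 2) + std::pow(h.y - o.y, 2) +
                         std::pow(h.z - o.z, 2));
    best = std::min(best, d);
  }
  return best;
}

int main() {
  const double proximityCut = 2.0;                 // definable cut, illustrative value
  std::vector<Hit> hits = { {0, 0, 0}, {0.5, 0, 1},
                            {1.0, 0, 2}, {10, 10, 0} };  // presorted hits (illustrative)
  std::vector<Tracklet> tracklets;

  for (const Hit& h : hits) {
    int best = -1; double bestQ = proximityCut;
    for (size_t i = 0; i < tracklets.size(); ++i) {  // check hit against all tracklets
      double q = proximity(h, tracklets[i]);
      if (q < bestQ) { bestQ = q; best = (int)i; }
    }
    if (best >= 0) tracklets[best].hits.push_back(h); // assign to best matching tracklet
    else tracklets.push_back({{h}});                  // otherwise start a new tracklet
  }
  std::printf("%zu tracklets built\n", tracklets.size());
  return 0;
}
\end{verbatim}
The actual track follower additionally evaluates the helix correlator and refits the tracklet after each assignment, as described above.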
If the root mean square of the distances of the tracklet's hits to the fitted helix is smaller than a definable cut, the tracklet is stored and can be further processed.
\begin{figure}[h!]
\centering
\includegraphics[width = 1.0\textwidth]{Abbildungen/trackmergingRAUCH.pdf}
\caption{Scheme of the track merging process}
\label{figRiemann2}
\end{figure}
Due to outliers, noise hits, incorrectly assigned hits, fragmentary tracks of particles with low energy loss and even defective readout channels, the actual tracks might not be found as a whole in the process of track building. Because of that, a second-level tracklet merging is performed, where the tracklets are presorted and compared to each other. The track merging algorithm is shown schematically in figure \ref{figRiemann2}. As in the track finding process, certain track-track correlators have to be matched in order for two tracks to be merged:
\begin{itemize}
\item A proximity correlator compares the positions of the first and last hits of the two tracklets. If the distance between the positions of the hits is smaller than a definable proximity cut, the proximity correlator is survived.
\item A dip correlator compares the dip angles of the two tracklets, which are defined as the angle between a line parallel to the $z$-axis and a tangent to the helix of a track. If the relative $z$-positions of the tracks match and the absolute difference of the dip angles is smaller than a definable angle cut, the dip correlator is survived.
\item A helix correlator creates a new track temporarily containing the hits of both tracks. A helix fit is performed and a cut on the root mean square of the distances of the hits to the helix is applied. If the two tracks together do not have enough hits to be fitted, this correlator is not applicable.
\end{itemize}
The Hough Transform Pattern Recognition relies on the Hough Transformation, which is a global method for the detection of parametrizable patterns in data sets. For a data set in the space $\mathcal{V}$ consisting of the points $\vec{x}_1, \vec{x}_2, \dots, \vec{x}_n \in \mathcal{V}$ distributed closely around a certain pattern, a pattern parametrization $p: \mathcal{V}_1 \rightarrow \mathcal{V}_2$ can be chosen. The parameter space or \textit{Hough Space} $\mathcal{H}$ is the space spanned by the parameters of $p$. The Hough Transform maps each data point $\vec{x}_i,\, i = (1,\dots,n)$ to a hypersurface in the Hough Space $\mathcal{H}$ with the dimensionality of the pattern parametrization. Assuming data points $\vec{x}_i,\, i = (1,\dots,n)$ which can be described perfectly by a chosen parametrization $p$, the hypersurfaces of all data points making up the parametrized pattern cross in a single point in the Hough Space $\mathcal{H}$. The coordinates of this point are the parameters describing the pattern. If there are more patterns of the same type, there will be more than one maximum in the parameter space. The working principle for straight lines is shown in figure \ref{figHoughPrinciple}.
\begin{figure}[h!]
\centering
\includegraphics[width = 10 cm]{Abbildungen/houghtransformPLACEHOLDER.pdf}
\caption{Principle of the Hough Transform: Data points $\vec{x_i}$ (cyan) distributed following a pattern are transformed to the parameter space of the chosen pattern parametrization (red).
If the parametrization matches the pattern of the data points, the lines in Hough Space cross in one point.}
\label{figHoughPrinciple}
\end{figure}
For an imperfect description of the data points by a chosen parametrization, as is the case with real data points measured by a particle detector, the hypersurfaces do not cross in single points in the Hough Space $\mathcal{H}$; instead, areas of elevated hypersurface density are observed due to the imprecision of the measurement. The task of the actual Pattern Recognition is to find the regions of elevated hypersurface density in the Hough Space $\mathcal{H}$. This is done by the Fast Hough Transform, which is basically an $N$-dimensional tree search ($N = 5$ for helices and $N = 4$ for straight lines). $\mathcal{H}$ is scaled and interpreted as an $N$-dimensional unit cube, which is divided into subvolumes of the same dimensionality in each iteration of the algorithm. A vote is defined for each of these subvolumes, depending on how many hypersurfaces in $\mathcal{H}$ intersect with it. If the vote of a subvolume does not exceed a definable threshold, the node is discarded and not processed in the next step of the algorithm. This is repeated until the desired granularity is reached. Further information and a working example can be found in the diploma thesis of Felix V. B\"{o}hmer.
\subsection{Track Representation}
The unfitted tracks found by the pattern recognition are parametrized in a five-dimensional track parametrization in virtual detector plane coordinates with the state vector $p = (q/p,u',v',u,v)^T$. Here $q$ denotes the particle's charge, $p$ its momentum, $u,v$ the position of its track on the (virtual) detector plane and $u', v'$ the direction cosines of the track in the plane. Figure \ref{figVirtualplane} shows a virtual detector plane as used in the GENFIT track parametrization. Further information on the treatment of tracks can be found in the manual for GENFIT and the diploma thesis of Johannes Rauch.
\begin{figure}[!h]
\begin{minipage}{0.5\textwidth}
\includegraphics[width= 1.0\textwidth]{Abbildungen/GENFITvirtualplane.pdf}
\caption{Virtual detector plane for a space point hit}
\label{figVirtualplane}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\vspace*{2.5 cm}
\begin{align*}
u &= (\vec{x} - \vec{o}) \cdot \vec{u}\\
v &= (\vec{x} - \vec{o}) \cdot \vec{v}\\
u' &= \frac{\vec{a}\cdot \vec{u}}{\vec{a}\cdot \vec{n}}\\
v' &= \frac{\vec{a}\cdot \vec{v}}{\vec{a}\cdot \vec{n}}
\end{align*}
with $\vec{x}:$ position in space in a master coordinate system, $\vec{a}:$ direction of the track and $\vec{o}:$ origin of the plane in a master coordinate system
\end{minipage}
\end{figure}
The master coordinate system is defined by the ALICE local coordinate system of the IROC as introduced in figure \ref{figIROCOROC}.
\subsection{Kalman Track Fitting}
The Track Fitting is achieved with a Kalman filter, an algorithm that produces an optimal estimate of a system state from a series of noisy measurements. It is implemented in GENFIT, a generic toolkit for track reconstruction for experiments in particle and nuclear physics. Further information on the working principle of the Kalman filter can be found in the GENFIT documentation. The algorithm consists of two steps that are iterated: the prediction step and the update step.
The prediction step extrapolates the system state $p$ to the measurement $m_k$ and makes a prediction for the system state vector $p_{k|k-1}$ and for the covariance matrix $C_{k|k-1}$:
\begin{align}
p_{k|k-1} &= F_k p_{k-1|k-1} + c_k\\
C_{k|k-1} &= F_k C_{k-1|k-1} F_k^T + N_k.
\end{align}
The extrapolation of the system state from plane $k-1$ to plane $k$ is done by the Runge-Kutta extrapolator implemented in GENFIT. From the state vector $p_{k-1}$ and the covariance matrix $C_{k-1}$ the state vector $p_k$, the Jacobian $F_k$ as linearization of the transport, the noise matrix $N_k$ and the covariance matrix $C_k$ are calculated. In the update step a ``weighted average'' between prediction and measurement is calculated. For measurements $m_k$ with fewer than all 5 track parameters measured, the matrix $H_k$ is used to project the track parameters down to the dimensionality of $m_k$. For instance, to project the prediction to the dimensionality of the measurement of a pixel detector measuring only 2 parameters, $H$ takes the form $H = \begin{pmatrix} 0 & 0 & 0 & 1 &0\\ 0 & 0 & 0 & 0 &1 \end{pmatrix}$. The system state and covariance matrix after the update step are calculated as follows:
\begin{align}
p_{k|k} &= p_{k|k-1} + K_k \left(m_k - H_k p_{k|k-1}\right)\\
K_k &= C_{k|k-1} H_k^T \left( V_k + H_k C_{k|k-1} H_{k}^T \right)^{-1}\\
C_{k|k} &= \left(\mathbbm{1} - K_k H_k\right) C_{k|k-1}
\end{align}
\begin{figure}[h!]
\centering
\includegraphics[width = 1.0\textwidth]{Abbildungen/kalmann.pdf}
\caption{Working principle of the Kalman filter. Images by Johannes Rauch}
\end{figure}
\section{Extraction of Specific Energy Loss Data}
To extract specific energy loss data ($dE/dx$) from a track, a $dE/dx$ task optimized for the detector geometry has been implemented in the analysis framework. A reconstructed track is extrapolated to the first padrow of the detector. From there it is iteratively extrapolated to the next padrow. The length of the track between two rows, calculated as the magnitude of the difference of the track's positions at two neighbouring rows, is assigned as $\Delta x$. The sum of the amplitudes of all pad hits in the respective padrow corresponding to the track is assigned as $\Delta E$. The energy loss per padrow $\Delta E/\Delta x$ is stored as a $dE/dx$ sample, which is used in the calculation of the truncated mean. A second version of the $dE/dx$ task has also been implemented, where the summation is carried out over the amplitudes of the pad hits in a region around the track instead of over the amplitudes of the pad hits assigned to the track by the pattern recognition. In this way possible deficiencies in the track finding algorithms can be overcome. The region around the track is defined by a rectangular box around the track with definable width and height.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.8\textwidth]{Abbildungen/dEdxTask.png}
\caption{Illustration of the dE/dx task at work: the track (cyan) is extrapolated to the first padrow, from where the track is extrapolated iteratively to the next padrow. The pad hits (colored spheres) in the respective padrow in a region around the track (grey box) are summed up and, together with the length of the track between two padrows, they contribute to the specific energy loss.}
\end{figure}
\newpage
\section{The Reconstruction Macro}
\subsection{Event display}
To find good parameters for the Clustering and Riemann Pattern Recognition tasks, a very useful tool has been written by Johannes Rauch.
It is a 3D event display that shows the clustering process and the pattern recognition step by step with adjustable parameters. The event display can be found at
\begin{verbatim} $VMCWORKDIR$/macro/tpc/runClusterVisualisation.C \end{verbatim}
It is extremely useful for determining the parameters of the Pattern Recognition by experimenting with different settings. One has to give the reconstruction file and the parameter file as arguments and the setting ``2'' when starting the display.
\subsection{runRecoAlice}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Reco/runRecoAlice.C \end{verbatim}
The macro loads the necessary geometry files and the parameter file and sets up the tasks. The parameter file is located at
\begin{verbatim} $VMCWORKDIR$/tpc/ALICE/par/tpc.iroc.par \end{verbatim}
and the file referenced in the parameter file, in which all the paths are specified, is located at
\begin{verbatim} $VMCWORKDIR$/tpc/parfiles/tpc.files.par \end{verbatim}
If you ever add something to this file, make sure that you do this at the end of the file! One can uncomment the last line of a task section in the macro to exclude this task from the reconstruction, e.g. the gain correction task or one of the pattern recognition tasks. One has to make sure to use only one pattern recognition task and to uncomment the correct lines in the track init task.
\section{The Analysis Macros}
\subsection{trackanalysis}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/trackanalysis.C \end{verbatim}
This is the main analysis macro. It shows histograms of the track parameters and of the track reconstruction to check whether the reconstruction tasks work properly. It shows the number of reconstructed tracks, the number of hits (= clusters) per track, the position and angles of the initial state of the track and the $\chi^2/$NDF and p-value distributions of the track fit.
\subsection{eventanalysis}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/eventanalysis.C \end{verbatim}
This macro gives the number of clusters per event and the number of clusters found in events where the pattern recognition found exactly one track.
\subsection{amptimedistribution}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/amptimedistribution.C \end{verbatim}
This macro shows the amplitudes and times of samples in a 2D histogram.
\subsection{findbadpads}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/findbadpads.C \end{verbatim}
This macro shows the pad IDs of pads with signals outside the time interval corresponding to the physical drift length. You can then analyse these pads with the singlepadanalysis\_amp macro.
\subsection{findParameterspaceLimitsHough}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/findParameterspaceLimitsHough.C \end{verbatim}
With this macro one can investigate the parameter space limits of the Hough transform pattern recognition.
\subsection{plotOccupancy}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/plotOccupancy.C \end{verbatim}
With this macro one can plot the occupancy of pads in an occupancy map. This shows the pads with the most signals.
\subsection{shapeofdigi}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/shapeofdigi.C \end{verbatim}
This macro displays pad hits in an amplitude vs time diagram. It displays ALL pad hits for the given number of events, so don't specify too many events, or kill the macro with Ctrl+Z and \texttt{kill \%\%}.
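
All of the macros described in this and the following sections are compiled and executed from the ROOT prompt, as mentioned in the introduction. A typical session, started in the directory of the macro, might look like the following sketch; the argument is purely hypothetical, since each macro defines its own parameters (use tab completion to inspect them):
\begin{verbatim}
root -l
root [0] .L trackanalysis.C+    // compile the macro with ACLiC
root [1] trackanalysis(681)     // call the macro's function; the argument
                                // shown here is only an illustrative example
\end{verbatim}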
\subsection{trackdisplay}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/trackdisplay.C \end{verbatim}
A very primitive track display, which shows the event and the tracks found by the pattern recognition. Also, a primitive GUI has been written for this track display.
\subsection{singlepadanalysis\_amp}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/Analysis/singlepadanalysis_amp.C \end{verbatim}
This macro gives the amplitude distribution of a single pad. You can use it to identify pads that were not properly connected and have a lot of noise on them.
\section{The d$E$/d$x$ Macros}
\subsection{dEdxPlot}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxPlot.C \end{verbatim}
This macro does the dE/dx analysis. It creates the energy loss spectra histograms that are needed for further analysis. Here you set the truncations used in the calculation of the truncated mean. In this macro the truncations are set as lowtrunc = 0.05 and hightrunc = 0.25, corresponding to the truncation region [5,75], i.e. the lowest 5\% and the highest 25\% of the $dE/dx$ samples of a track are discarded. Make sure to set the correct datasetnumber (indicating the reconstruction you want to process).
\subsection{dEdxFit}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxFit.C \end{verbatim}
This macro fits the dE/dx spectra obtained by the plot macro. It saves the fitted canvases and stores the fit results in a file. Here the truncations are set as lowtrunc = 5 and hightrunc = 25 for the truncation region [5,75].
\subsection{dEdxMakePictures}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxMakePictures.C \end{verbatim}
This macro takes the results saved in the file by the dEdxFit macro and plots them for different runs and settings. Certainly there is a better way to create the plots, but this way works. If you want, you can of course improve this.
\subsection{dEdxTruncationStudies}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxTruncationStudies.C \end{verbatim}
This macro takes its input from the file created by the dEdxFit macro. Make sure to have created and fitted the histograms for the truncations [0,75], [5,75], [10,75], [0,70], [5,70] and [10,70]. You could do the truncation studies for different beam momenta and HV settings.
\subsection{dEdxACPMakePictures}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxACPMakePictures.C \end{verbatim}
This macro plots the results of the dEdxAlisonCobbTest macro, which are stored in the data file. First run the dEdxAlisonCobbTest macro and then this macro to get the pictures. Certainly there is a better way to plot the pictures than the way I did it with this macro. The prediction was extrapolated as written in my Bachelor's thesis.
\subsection{dEdxAlisonCobbTest}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxAlisonCobbTest.C \end{verbatim}
This macro tests the Allison-Cobb prediction for the relative resolution depending on the number of $dE/dx$ samples entering the analysis.
\subsection{dEdxLandau}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxLandau.C \end{verbatim}
This macro was an experiment to create the straggling function distributions for electrons and pions. It loops over all tracks, assuming that all electron (respectively pion) tracks are similar.
\subsection{dEdxSingleTrack}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxSingleTrack.C \end{verbatim}
This macro plots the dE/dx distribution for a single track (a single event). With much imagination one can see a straggling function. Test many events and play around with the binning!
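
As a reference for the truncation convention used by the macros above, the following self-contained C++ sketch computes a truncated mean from a set of $dE/dx$ samples. It only illustrates the definition; the sample values are made up, and it is not the code used in the macros.
\begin{verbatim}
// Illustration of the truncated-mean definition: for the truncation
// region [5,75], the lowest 5% and the highest 25% of the dE/dx samples
// of a track are discarded before averaging. Not the macro code.
#include <algorithm>
#include <cstdio>
#include <vector>

double truncatedMean(std::vector<double> samples, double lowtrunc, double hightrunc) {
  std::sort(samples.begin(), samples.end());
  const size_t n  = samples.size();
  const size_t lo = static_cast<size_t>(lowtrunc  * n);  // samples dropped at the low end
  const size_t hi = static_cast<size_t>(hightrunc * n);  // samples dropped at the high end
  double sum = 0.0; size_t kept = 0;
  for (size_t i = lo; i + hi < n; ++i) { sum += samples[i]; ++kept; }
  return kept > 0 ? sum / kept : 0.0;
}

int main() {
  // illustrative dE/dx samples (arbitrary units), one per padrow
  std::vector<double> dedx = {1.1, 0.9, 1.0, 1.3, 5.2, 1.2, 0.8, 1.0, 1.1, 4.0};
  std::printf("truncated mean [5,75]: %.3f\n", truncatedMean(dedx, 0.05, 0.25));
  return 0;
}
\end{verbatim}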
\subsection{dEdxNumberofdEdxSamples}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/dEdxNumberofdEdxSamples.C \end{verbatim}
This macro plots the distribution of the number of dE/dx samples per track. It should peak at the number of rows that were included in the analysis (usually 56 of the 63 rows of the padplane).
\subsection{rundEdxALICE 1-4 and rundEdxALICEFOPI}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/runclustvis.C \end{verbatim}
\begin{verbatim} $VMCWORKDIR$/macro/ALICE/dEdx/runclustvis.C \end{verbatim}
This was an attempt to separate the dE/dx analysis from the reconstruction chain in order to test different dE/dx tasks with the same set of reconstructed tracks. However, it seems that there is a memory leak in these macros. You could look at the dE/dx macros and try to solve the problem, so that you can use different dE/dx tasks for the same analysis. If this does not work out, execute the rundEdx macro on small portions of the whole run (4 times 1/4 of all events), so that you won't run into memory problems, and use the \texttt{hadd} helper program of ROOT to merge the files for one run.
\end{document}