3D human face description: landmarks measures and geometrical features
Enrico Vezzetti, Federica Marcolin
Dipartimento di Sistemi di Produzione ed Economia dell’Azienda
Politecnico di Torino
Abstract
Morphometric measures and geometrical features are widely used to describe faces. Generally, they are extracted punctually from landmarks, namely anthropometric reference points. The aims are various, such as face recognition, facial expression recognition, face detection, and the study of changes in facial morphology due to growth or dysmorphologies. Most of the time, landmarks are extracted with the help of an algorithm or manually located on the faces. Then, measures are computed or geometrical features are extracted to fulfil the aim of the study. This paper is intended as a survey collecting and explaining all these features, in order to provide a structured user database of the potential parameters and their characteristics. Firstly, facial soft-tissue landmarks are defined and contextualized; then the various morphometric measures are introduced and some results are given; lastly, the most important measures are compared to identify the best one for face recognition applications.
1. Introduction
Face study has been carried out in recent decades for many applications: maxillofacial surgery, criminal investigation, authentication, historical research, telecommunications or even games. Recognition is surely the largest branch of this diversified field, embracing subfields such as citizen identification, recognition of suspects, and corporate usage in access control and online banking. Before the recent trend towards measuring and evaluating 3D facial models emerged, three-dimensional facial data were obtained mostly by direct anthropometric measurements. Anatomical landmarks have been used for over a century by anthropometrists interested in quantifying cranial variations. A great body of work in craniofacial anthropometry is that of Leslie Farkas (Farkas, 1994; Farkas, 1996), who established a database of anthropometric norms by measuring and comparing more than 100 dimensions (linear, angular and surface contours) and proportions in hundreds of people over a period of many years. These measurements include 47 landmark points to describe the face (Čarnický et al., 2006).
Nowadays, the information in which researchers are interested is more complete and dynamic. The aim is to use facial landmarks as reference points of the subjects and extract geometrical features from them, in order to retain information about the shape of the examined face. Their uses are various and may depend on the research area.
The attention to facial landmarks is due to the fact that they are points shared by all faces and that have a particular biological meaning. Hard-tissue landmarks lie on the skeleton and may be identified only through lateral cephalometric radiographs; soft-tissue landmarks lie on the skin and can be identified on the 3D point clouds generated by scanning or on images. This study only deals with soft-tissue landmarks; the most famous ones are shown in Figure 1. Actually, the set of facial landmarks is much larger than this. In fact, there are approximately 60 identifiable soft-tissue points on the human face, but they may change depending on the application they are used for.
One of the most important applications dealing with facial landmarks is face recognition, whose main applications are: citizenship identification at borders, passports, ID documents, visas; criminal identification in database screening, surveillance, alerts, mob control and anti-terrorism; corporate usage in access control and time attendance in luxurious buildings, sensitive offices, airports, pharmaceutical factories; utility laptop, desktop, web, airport/sensitive console log-on and file encryption; online banking; gaming in casinos and watch-lists; hospitality industries such as hotel and resort CRMs; important sites like power plants and military installations. The purposes are various, but belong to two big branches: face verification, or authentication, to guarantee secure access, and face identification, or recognition of suspects, dangerous individuals and public enemies by police, the FBI and other safety organizations (Jain et al., 2005).
Much research has been carried out on this topic. In their various publications, Rohr et al. proposed multi-step differential procedures for subvoxel localization of 3D point landmarks, addressing the problem of choosing an optimal size for a region-of-interest (ROI) around point landmarks (Frantz et al., 1998; Frantz et al., 1999). They introduced an approach for the localization of 3D anatomical point landmarks based on deformable models. To model the surface at a landmark, they used quadric surfaces combined with global deformations (Frantz et al., 2000; Alker et al., 2001). They then proposed a method based on 3D parametric intensity models which are directly fitted to 3D images, introducing an analytic intensity model based on the Gaussian error function in conjunction with 3D rigid transformations as well as deformations to efficiently model anatomical structures (Wörz et al., 2006). Finally, they introduced a novel multi-step approach to improve the detection of 3D anatomical point landmarks in tomographic images (Frantz et al., 2005). Romero et al. presented a comparison of several approaches that use graph matching and cascade filtering for landmark localization in 3D face data. For the first method, they apply the structural graph matching algorithm relaxation-by-elimination using a simple distance-to-local-plane node property and a Euclidean-distance arc property. After the graph matching process has eliminated unlikely candidates, the most likely triplet is selected, by exhaustive search, as the minimum Mahalanobis distance over a six-dimensional space, corresponding to three node variables and three arc variables. A second method uses state-of-the-art pose-invariant feature descriptors embedded into a cascade filter to localize the nose tip. After that, local graph matching is applied to localize the inner eye corners (Romero et al., 2009). They then described and evaluated their pose-invariant point-pair descriptors, which encode the 3D shape between a pair of 3D points.
Two variants of the descriptor are introduced: the first is the point-pair spin image, which is related to the classical spin image of Johnson and Hebert, and the second is derived from an implicit radial basis function (RBF) model of the facial surface. These descriptors can effectively encode edges in graph-based representations of 3D shapes. They show how the descriptors are able to identify the nose tip and the eye corners of a human face simultaneously in six promising landmark localisation systems (Romero et al., 2009). Ruiz et al. (Ruiz et al., 2008) presented an algorithm for automatic localization of landmarks on 3D faces. An Active Shape Model (ASM) is used as a statistical joint location model for configurations of facial features. The ASM is adapted to individual faces via a guided search whereby landmark-specific Shape Index models are matched to local surface patches. Similarly, Sang-Jun et al. (Sang-Jun et al., 2008) applied Active Shape Models to extract the positions of the eyes, the nose and the mouth. Salah et al. (Salah et al., 2006) proposed a coarse-to-fine method for facial landmark localization that relies on unsupervised modeling of landmark features obtained through different Gabor filter channels. D’Hose et al. (D’Hose et al., 2007) presented a method for the localization of landmarks on 3D faces using Gabor wavelets to extract the curvature of the 3D faces, which is then used for performing a coarse detection of landmarks.
A connected but quite different field is face detection, which consists in identifying one or more faces in an image, where many other objects can be present. Most of the literature concerning face detection investigates face detection in two-dimensional (2D) images. Colombo et al. (Colombo et al., 2006) presented an innovative method that combines a feature-based approach with a holistic one for 3D face detection. Salient face features, such as the eyes and nose, are detected through an analysis of the curvature of the surface. Each triplet consisting of a candidate nose and two candidate eyes is processed by a PCA-based classifier trained to discriminate between faces and non-faces. Nair et al. (Nair et al., 2009) presented an accurate and robust framework for detecting faces, localizing landmarks and achieving fine registration of face meshes based on the fitting of a facial model. Face detection is performed by classifying the transformations between model points and candidate vertices based on the upper-bound of the deviation of the parameters from the mean model. Landmark localization is performed on the segmented face by finding the transformation that minimizes the deviation of the model from the mean shape. Jesorsky et al. (Jesorsky et al., 2001) presented a shape comparison approach to achieve fast and accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on greyscale still images. Takács et al. (Takács et al., 1997) described a general approach for the detection of faces and landmarks based on biologically motivated image representation and classification schemes. The optimal set of face, eye pair, nose and mouth feature models, respectively, is found by an enhanced SOFM approach using cross-validation and corrective training. Yow et al. (Yow et al., 1997) identified that a feature-based approach was able to detect faces efficiently over large viewpoint and illumination variations. 
They enhanced the approach by proposing the use of active contour models to detect the face boundary, and subsequently use it to verify face candidates. Rodrigues et al. (Rodrigues et al., 2005) studied the importance of multi-scale keypoint representation, i.e. retinotopic keypoint maps which are turned to different spatial frequencies. They showed that this representation provided important information for Focus-of-Attention (FoA) and object detection. In particular, they showed that hierarchically-structured saliency maps for FoA can be obtained, and that combinations over scales in conjunction with spatial symmetries can lead to face detection through grouping operators that deal with keypoints at the eyes, nose and mouth, especially when non-classical receptive field inhibition is employed.
A similar application is facial expression recognition, a branch of recognition which deals with identifying different facial expressions. Unlike face recognition, little work has been done to study the usefulness of facial data for recognizing and understanding facial expressions. Some researchers have worked on this topic. In their various papers, Tang et al. (Tang et al., 2008; Tang et al., 2008) performed person- and gender-independent facial expression recognition based on properties of the line segments connecting certain 3D facial feature points. They proposed an automatic feature selection method based on maximizing the average relative entropy of marginalized class-conditional feature distributions and applied it to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in 3D space. Soyel et al. (Soyel et al., 2007; Soyel et al., 2008; Soyel et al., 2009) described a pose-invariant three-dimensional
facial expression recognition method using distance vectors retrieved from 3D distributions of facial feature points to classify universal facial expressions. Their works are based on the theories of Paul Ekman, a psychologist who has been a pioneer in the study of emotions and their relation to facial expressions. His theory is that the expressions associated with some emotions are basic, or biologically universal to all humans. He devised a list of 6 basic emotions from cross-cultural research: anger, disgust, fear, happiness, sadness and surprise (Ekman, 1992; Ekman, 1999). For this unique work, Ekman has been considered one of the 100 most eminent psychologists of the twentieth century. Nowadays, many authors involved in studies of facial expressions use his theory to concentrate their research on expressions referred to the emotions considered basic.
Another field in which facial landmarks are applied is the study of facial morphology. The purposes are various, such as the analysis of facial abnormalities, dysmorphologies or growth changes, whether aesthetic or purely theoretical. The discipline that deals with these studies is anthropometry, which is directly connected to maxillofacial surgery, namely aesthetic, plastic and corrective. Facial landmarks do not only appear in the applications of this discipline, but even belong to it. In fact, they were defined by surgeons in order to have a common name for every specific part of the face. A pioneer in anthropometry is surely Leslie G. Farkas, who used anatomical landmarks to provide an essential update on the best methods for the measurement of the surfaces of the head and neck (Farkas, 1994). He gathered a set of anthropometric measurements of the face in different ethnic groups (Farkas et al., 2005). He then examined the effects on faces of some syndromes, such as Treacher Collins’ (Kolar et al., 1985), Apert’s (Kolar et al., 1985), cleft lips, nasal deformity (Kohout et al., 1998) and children’s cleft palate (Farkas et al., 1972). He studied the changes of the head and face during growth (Farkas et al., 1992) and also researched facial beauty and neoclassical canons in face proportions (Dawei et al., 1997; Le et al., 2002; Farkas et al., 2000; Farkas, 1995).
There are two further, quite different applications for which facial landmarks are used. The first one is face correction. It consists in detecting and correcting imperfections in group photos, such as closed eyes and inappropriate, unflattering or goofy faces. Dufresne (Dufresne) presented a method for diagnosing and correcting these issues. Faces and facial landmarks are detected by an implementation of the Bayesian Tangent Shape Model search. An SVM classifier is then trained to identify unflattering faces. Bad faces are then warped to match nearest-neighbour faces from the good face set.
The other application is the performance evaluation of technical equipment. If the examined equipment is able to identify facial landmarks correctly, its performance is considered effective. Enciso et al. (Enciso et al., 2004) investigated methods for generating 3D facial images, such as laser scans, stereo-photogrammetry, infrared imaging and CT, and focused on the validation of indirect three-dimensional landmark location and measurement of facial soft tissue with light-based techniques. They also evaluated the precision, repeatability and validation of a light-based imaging system. Aung et al. (Aung et al., 1995) analysed the development of laser scanning techniques enabling the capture of 3D images, especially for surface measurements of the face. They used a laser optical surface scanner to take 83 facial anthropometric measurements, using 41 identifiable landmarks on the scanned image. They then demonstrated that the laser scanner can be a useful tool for rapid facial measurements in selected anatomical parts of the face. In fact, accurate location of landmarks and operator skill are important factors in achieving reliable results.
Once landmarks are extracted from faces, manually or automatically, they become useful if it is possible to extrapolate the information that their particular positions give them. Gupta et al. (Gupta et al., 2007; Gupta et al., 2010) indeed investigated the effect of the choice of facial fiducial points on the performance of their proposed 3D face recognition algorithm. They repeated the same steps with distances between arbitrary face points, instead of the anthropometric fiducial points. These points were located in the form of a $5 \times 5$ rectangular grid positioned over the primary facial features of each face. They chose these particular facial points as they measure distances between the significant facial landmarks, including the eyes, nose and mouth regions, without requiring localization of specific fiducial points. They showed that in their algorithms, when anthropometric distances are replaced by distances between arbitrary, regularly spaced facial points, performance decreases substantially. As a matter of fact, landmarks have both a geometrical and a biological meaning on the human face, and for this reason the extraction of measures and features from their links becomes necessary for providing a complete face description. The next section addresses this task.
2. Features types: classification
Facial landmarks lie in zones of the face which have peculiar geometric and anthropometric features. These features were extrapolated from the faces by various authors in many different ways, depending on the usages they were assigned to. The scope is to extract accurate geometric information from the examined face and allow comparison with other faces from which the same corresponding information was previously extracted. For face recognition applications, the computation of Euclidean or geodesic distances between landmarks is a widely used method. These are considered measures, rather than real features. Particularly, in anthropometry applications, these measures are called morphometric. They are generally distances or angles, and their property is that one measure involves more than one landmark. As a matter of fact, both Euclidean and geodesic distances refer to two points, while angles involve three landmarks. But the information obtained from these reference points may be more geometric in nature, retaining for instance specific data on curvature or shape.
2.1 Euclidean distance
The Euclidean distance or Euclidean metric is the “ordinary” distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. It is shown in Figure 2.
By using this formula as distance, Euclidean space becomes a metric space. The Euclidean distance between points \( P \) and \( Q \) is the length of the line segment connecting them (\( PQ \)). In Cartesian coordinates, if \( P = (p_1, p_2, ..., p_n) \) and \( Q = (q_1, q_2, ..., q_n) \) are two points in Euclidean \( n \)-space, then the distance from \( P \) to \( Q \), or from \( Q \) to \( P \) is given by:
\[
d(P,Q) = d(Q,P) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_n - p_n)^2} = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}.
\]
In three-dimensional Euclidean space, the distance is:
\[ d(P, Q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + (q_3 - p_3)^2}. \]
The Euclidean distance between landmarks is used by most authors as a morphometric measure. Once landmarks are obtained from a facial image or a three-dimensional face, they select some significant distances between them and compute the corresponding Euclidean distances. Then these distances are used to compare faces for face recognition purposes or to perform studies on face morphometry, as said above. The Euclidean-distance-based morphometric measures are chosen depending on the application.
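As a concrete illustration, the formula above can be computed directly. The sketch below uses hypothetical 3D landmark coordinates (in millimetres), not values taken from any of the cited datasets:

```python
import math

def euclidean_distance(p, q):
    """Euclidean distance between two points in n-space (Pythagorean formula)."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

# Hypothetical landmark positions, e.g. pronasale and a right exocanthion
pronasale = (0.0, 0.0, 0.0)
exocanthion = (45.0, 30.0, -20.0)

d = euclidean_distance(pronasale, exocanthion)  # sqrt(3325) ≈ 57.66 mm
```

The same function works unchanged for 2D image landmarks, since it iterates over however many coordinates the points have.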
There is wide previous work on this topic. Gupta et al. (Gupta et al., 2007; Gupta et al., 2010) presented three-dimensional face recognition algorithms, which employ Euclidean distances between these anthropometric fiducial points as features along with linear discriminant analysis classifiers. Prabhu et al. (Prabhu et al.) addressed the problem of automatically locating the facial landmarks of a single person across frames of a video sequence. By calculating the mean of the Euclidean distances between the coordinates of each of the 79 landmarks fitted by the tracking method and those that were manually annotated, they obtained the fitting error for a particular frame. Similarly, Zhao et al. (Zhao et al., 2010) formed a vector of 11 Euclidean distances between facial-expression-sensitive landmarks. Moreno et al. (Moreno et al.) performed an HK segmentation, i.e. based on the signs of the mean \((H)\) and Gaussian \((K)\) curvatures, to isolate regions of pronounced curvature on 420 3D facial meshes. After the segmentation task, feature extraction is performed. Among the features, Euclidean distances between some fiducial points were computed. Gordon (Gordon, 1992) presented a face recognition system which uses features extracted from range and curvature data to represent the face. She extracted high-level features which mark salient events on the face surface in terms of points, lines and regions. Since the most basic set of scalar features describing the face corresponds to measurements of the face, she firstly computed the Euclidean distances of: left and right eye width, eye separation, total width (span) of the eyes, nose height, nose width, nose depth and head width. Likewise, Lee et al. (Lee et al., 2005) calculated with the Euclidean distance the relative lengths of extracted facial feature points. Efraty et al.
(Efraty et al., 2009; Efraty et al., 2010) studied the silhouette of the face profile and introduced a new method for face recognition that improves robustness to rotation. They achieved this by exploring the feature space of profiles under various rotations with the aid of a 3D face model. Based on the fiducial points on the profile silhouette, they extracted a set of rotation-, translation- and scale-invariant features which are used to design and train a hierarchical pose-identity classifier. Euclidean distance was chosen by them as one type of measurement between landmarks. Daniyal et al. (Daniyal et al., 2009) represented the face geometry with inter-landmark distances within selected regions of interest to achieve robustness to expression variations. The proposed recognition algorithm first represents the geometry of the face by a set of Euclidean Inter-Landmark Distances (ILDs) between the selected landmarks. These distances are then compressed using Principal Component Analysis (PCA) and projected onto the classification space using Linear Discriminant Analysis (LDA). Soyel et al. (Soyel et al., 2007; Soyel et al., 2008) used six different Euclidean distances between feature points to form a distance vector for facial expression recognition. They are: openness of the eyes, height of the eyebrows, openness of the mouth, width of the mouth, stretching of the lips and openness of the jaw. During the recognition experiments, a distance vector is derived for every 3D model and the whole procedure is repeated numerous times. Ras et al. (Ras et al., 1996) introduced stereophotogrammetry as a three-dimensional registration method for quantifying facial morphology and detecting changes in facial morphology during growth and development. They used six sets of automatically extracted 3D landmark coordinates to calculate the Euclidean distances between exocanthion and chelion, chelion and pronasale, and exocanthion and pronasale for both sides of the face.
Changes in facial morphology due to growth and development were analysed with an analysis of variance of these distances. The last field in which Euclidean distances between landmarks were applied is the performance evaluation of technical equipment. Enciso et al. (Enciso et al., 2004) used a digitizer to obtain landmarks and then directly measured the Euclidean distances between them. These distances were compared with the indirect homologous distances measured on the scans with their computer tools. Aung et al. (Aung et al., 1995) firstly carried out direct Euclidean-distance-based anthropometric measurements of the face using standard anthropometric landmarks as defined by Farkas. The subject was then laser scanned with the optical surface scanner and the laser scan measurements were done using selected landmarks identifiable on the laser scan image. The same number of corresponding sets of measurements from the direct and indirect methods were then compared, in order to evaluate the laser scanner's performance.
2.2 Geodesic distance and arc-length distance
A geodesic is a generalization of the notion of a “straight line” to curved spaces. In the presence of a metric, geodesics are defined to be (locally) the shortest path between points in the space. The term “geodesic” comes from Geodesy, the science of measuring the size and shape of Earth; in the original sense, a geodesic was the shortest route between two points on the Earth's surface, namely, a segment of a great circle. More generally, on a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points (like the North pole and the South pole), then there are “infinitely many” shortest paths between them. It is shown in Figure 3.
Figure 3. Geodesic distance between pronasale and right exocanthion.
Formally, geodesics can then be defined as curves whose osculating planes contain the normals to the surface. The parametrized curves $\gamma : I \rightarrow \mathbb{R}^2$ of a plane along which the field of their tangent vectors $\gamma'(t)$ is parallel are precisely the straight lines of that plane. The parametrized curves that satisfy an analogous condition for a surface are called geodesics. More precisely, a nonconstant, parametrized curve $\gamma : I \rightarrow S$ is said to be geodesic at $t \in I$ if the field of its tangent vectors $\gamma'(t)$ is parallel along $\gamma$ at $t$; that is,
$$\frac{D\gamma'(t)}{dt} = 0;$$
$\gamma$ is a parametrized geodesic if it is geodesic for all $t \in I$. It follows immediately that $|\gamma'(t)| = \mathrm{const.} = c \neq 0$. Therefore, the arc length $s = ct$ may be introduced as a parameter, and it is possible to conclude that the parameter $t$ of a parametrized geodesic $\gamma$ is proportional to the arc length of $\gamma$. A parametrized geodesic may admit self-intersections. However, its tangent vector is never zero, and thus the parametrization is regular.
The notion of geodesic is clearly local. The previous considerations allow the definition of geodesic to be extended to subsets of $S$ that are regular curves. A regular connected curve $C$ in $S$ is said to be a geodesic if, for every $p \in C$, the parametrization $\alpha(s)$ of a coordinate neighborhood of $p$ by the arc length $s$ is a parametrized geodesic; that is, $\alpha'(s)$ is a parallel vector field along $\alpha(s)$. Every straight line contained in a surface satisfies this definition. From a point of view exterior to the surface $S$, the definition is equivalent to saying that $\alpha''(s) = kn$ is normal to the tangent plane, that is, parallel to the normal to the surface. In other words, a regular curve $C \subset S$ ($k \neq 0$) is a geodesic if and only if its principal normal at each point $p \in C$ is parallel to the normal to $S$ at $p$. This property can be used to identify some geodesics geometrically.
The great circles of a sphere $S^2$ are geodesics. Indeed, the great circles $C$ are obtained by intersecting the sphere with a plane that passes through the centre $O$ of the sphere. The principal normal at a point $p \in C$ lies in the direction of the line that connects $p$ to $O$ because $C$ is a circle of centre $O$. Since $S^2$ is a sphere, the normal lies in the same direction, which verifies our assertion. For the case of the sphere, through each point and tangent to each direction there passes exactly one great circle, which, as we proved before, is a geodesic. Therefore, by uniqueness, the great circles are the only geodesics of a sphere.
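The sphere case can be checked numerically. The sketch below, with arbitrarily chosen points, compares the geodesic (great-circle) distance on the unit sphere with the Euclidean chord through its interior:

```python
import math

def great_circle_distance(a, b, radius=1.0):
    """Geodesic distance on a sphere: radius times the central angle
    between the position vectors of a and b."""
    dot = sum(x * y for x, y in zip(a, b)) / radius**2
    dot = max(-1.0, min(1.0, dot))  # guard against rounding error
    return radius * math.acos(dot)

def chord_distance(a, b):
    """Straight-line (Euclidean) distance through the sphere's interior."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two points on the unit sphere: the "north pole" and a point on the equator
p = (0.0, 0.0, 1.0)
q = (1.0, 0.0, 0.0)

arc = great_circle_distance(p, q)   # quarter great circle: pi/2 ≈ 1.5708
chord = chord_distance(p, q)        # sqrt(2) ≈ 1.4142
```

The geodesic distance always exceeds the chord for distinct points, since the great-circle arc stays on the surface while the chord cuts through the interior.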
For the right circular cylinder over the circle $x^2 + y^2 = 1$, it is clear that the circles obtained by the intersection of the cylinder with planes that are normal to the axis of the cylinder are geodesics. That is so because the principal normal to any of its points is parallel to the normal to the surface at this point. On the other hand, the straight lines of the cylinder (generators) are also geodesics. To verify the existence of other geodesics on the cylinder $C$ we shall consider a parametrization
$$x(u, v) = (\cos u, \sin u, v)$$
of the cylinder in a point $p \in C$, with $x(0, 0) = p$. In this parametrization, a neighborhood of $p$ in $C$ is expressed by $x(u(s), v(s))$, where $s$ is the arc length of $C$. Then, $x$ is a local isometry which maps a neighborhood $U$ of $(0, 0)$ of the $uv$ plane into the cylinder. Since the condition of being a geodesic is local and invariant by isometries, the curve $(u(s), v(s))$ must be a geodesic in $U$ passing through $(0, 0)$. But the geodesics of the plane are the straight lines. Therefore, excluding the cases already obtained,
$$u(s) = as, \quad v(s) = bs, \quad a^2 + b^2 = 1.$$
It follows that when a regular curve $C$ (which is neither a circle nor a line) is a geodesic of the cylinder, it is locally of the form
$$(\cos as, \sin as, bs),$$
and thus it is a helix. In this way, all the geodesics of a right circular cylinder are determined.
Observe that given two points on a cylinder which are not in a circle parallel to the $xy$ plane, it is possible to connect them through an infinite number of helices. This fact means that two points of a cylinder may in general be connected through an infinite number of geodesics, in contrast to the situation in the plane. Observe that such a situation may occur only with geodesics that make a “complete turn”, since the cylinder minus a generator is isometric to a plane (Do Carmo, 1976).
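The helix family above can be made concrete by unrolling the cylinder: each geodesic between two points corresponds to a straight segment in the plane, one per winding number $k$. The following sketch, with arbitrarily chosen points, lists the lengths of these geodesics for a few values of $k$:

```python
import math

def helix_geodesic_lengths(u1, v1, u2, v2, windings=range(-2, 3)):
    """On the unit cylinder (cos u, sin u, v), unrolling to the plane maps
    each geodesic to a straight segment; adding 2*pi*k to the angular gap
    accounts for geodesics making k extra full turns around the axis."""
    du, dv = u2 - u1, v2 - v1
    return [math.hypot(du + 2 * math.pi * k, dv) for k in windings]

# Two points not lying on a common circle parallel to the xy plane
lengths = helix_geodesic_lengths(0.0, 0.0, math.pi / 2, 1.0)
# One geodesic length per winding number; here the k = 0 helix is the shortest
```

Extending `windings` produces as many helical geodesics as desired, illustrating that infinitely many geodesics connect the two points.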
In metric geometry, a Geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve $\gamma : I \rightarrow M$ from an interval $I$ of the reals to the metric space $M$ is a geodesic if there is a constant $v \geq 0$ such that for any $t \in I$ there is a neighborhood $J$ of $t$ in $I$ such that for any $t_1, t_2 \in J$ we have
\[ d(\gamma(t_1), \gamma(t_2)) = v|t_1 - t_2|. \]
This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parametrization, i.e. in the above identity $v = 1$ and
\[ d(\gamma(t_1), \gamma(t_2)) = |t_1 - t_2|. \]
If the last equality is satisfied for all $t_1, t_2 \in I$, the geodesic is called a minimizing geodesic or shortest path. In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic.
Some authors used geodesic distance between facial landmarks. First of all, Bronstein et al. (Bronstein et al., 2003; Bronstein et al., 2004; Bronstein et al., 2005; Bronstein et al., 2005; Bronstein et al., 2006) proposed to model facial expressions as isometries of the facial surface. The facial surface is described as a smooth compact connected two-dimensional Riemannian manifold (surface), denoted by $S$. The minimal geodesics between $s_1, s_2 \in S$ are curves of minimum length on $S$ connecting $s_1$ and $s_2$. The geodesics are denoted by $C^*_S(s_1, s_2)$. The geodesic distances refer to the lengths of the minimum geodesics and are denoted by
\[ d_S(s_1, s_2) = \text{length}(C^*_S(s_1, s_2)). \]
A transformation $\psi : S \rightarrow Q$ is called an isometry if
\[ d_S(s_1, s_2) = d_Q(\psi(s_1), \psi(s_2)) \]
for all $s_1, s_2 \in S$. In other words, an isometry preserves the intrinsic metric structure of the surface. The isometric model, assuming facial expressions to be isometries of some neutral facial expression, is based on the intuitive observation that the facial skin stretches only slightly. All expressions of a face are assumed to be “intrinsically” equivalent (i.e. have the same metric structure), and “extrinsically” different. Broadly speaking, the intrinsic geometry of the facial surface can be attributed to the subject’s identity, while the extrinsic geometry is attributed to the facial expression. The isometric model tacitly assumes that the expressions preserve the topology of the surface. This assumption is valid for most regions of the face except the mouth. Opening the mouth changes the topology of the surface by virtually creating a hole. Based on this model, expression-invariant signatures of the face were constructed by means of approximate isometric embedding into flat spaces. They applied a new method for measuring isometry-invariant similarity between faces by embedding one facial surface into another. Promising face recognition results are obtained in numerical experiments even when the facial surfaces are severely occluded. Gupta et al. (Gupta et al., 2007; Gupta et al., 2007; Gupta et al., 2010) worked on the same assumption, namely that different facial expressions could be regarded as isometric deformations of the face surface. These deformations preserve intrinsic properties of the surface, one of which is the geodesic distance between a pair of points on the surface. Based on these ideas they presented a preliminary
study aimed at investigating the effectiveness of using geodesic distances between all pairs of 25 fiducial points on the surface as features for face recognition. Instead of choosing a random set of points on the face surface, they considered facial landmarks relevant to measuring anthropometric facial proportions employed widely in facial plastic surgery and art. They calculated geodesics using Dijkstra’s shortest-path algorithm, defining 8-connected nearest neighbours about each point. Twenty-five fiducial points, as depicted in, were manually located on each face. Three face recognition algorithms were implemented. The first employed 300 geodesic distances (between all pairs of fiducial points) as features for recognition; the fast marching algorithm for front propagation was employed to calculate the geodesic distance between pairs of points. The second algorithm employed 300 Euclidean distances between all pairs of fiducial points as features. The normalized $L_1$ norm, where each dimension was divided by its variance, was used as the metric for matching faces with both the Euclidean-distance and geodesic-distance features.
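As an illustrative sketch of this kind of computation (not the authors' implementation), the geodesic distance between two points on a range image can be approximated with Dijkstra's algorithm over 8-connected neighbours, taking each grid edge's length as its 3D chord. The height field and points below are toy assumptions:

```python
import heapq
import math

def geodesic_distance(z, src, dst):
    """Approximate geodesic distance on a height field z[row][col] using
    Dijkstra's algorithm over 8-connected neighbours (unit grid spacing)."""
    rows, cols = len(z), len(z[0])
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), math.inf):
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # 3D edge length: planar step plus height change
                    step = math.sqrt(dr * dr + dc * dc + (z[nr][nc] - z[r][c]) ** 2)
                    nd = d + step
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return math.inf

# On a flat surface the geodesic reduces to the Euclidean distance:
flat = [[0.0] * 5 for _ in range(5)]
print(geodesic_distance(flat, (0, 0), (0, 4)))  # 4.0
```

On curved patches the same routine follows the surface, so the returned length exceeds the straight-line distance between the endpoints.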
The arc length is the length of an irregular arc segment; computing it is also called “rectification” of the curve. By definition it is closely connected to geodesics. Efraty et al. (Efraty et al., 2009; Efraty et al., 2010) were interested in profile-based face recognition. They defined five types of measurements based on the properties of the profile between two landmarks, one of which was precisely the arc length between landmarks. Aung et al. (Aung et al., 1995), however, who used facial landmarks to evaluate the performance of a laser scanner, argued that tangential or arc measurements were slightly more complex and required careful positioning of the image before accurate measurements could be made.
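A minimal sketch of the rectification idea: the arc length of a profile sampled at discrete points can be approximated by summing chord lengths. The quarter-circle below is just a test curve, not facial data:

```python
import math

def arc_length(points):
    """Arc length of a polyline through 3D sample points, i.e. the
    'rectified' length of the sampled curve."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Quarter circle of radius 1, densely sampled: the length tends to pi/2.
n = 1000
quarter = [(math.cos(t), math.sin(t), 0.0)
           for t in (i * math.pi / 2 / n for i in range(n + 1))]
print(round(arc_length(quarter), 4))  # 1.5708
```

The chord-sum converges to the true arc length as the sampling density increases, which is how profile curves extracted from meshes are usually measured in practice.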
2.3 Ratios of distances
The ratios of geometric features are common in nature, the golden ratio \( \Phi = \frac{1 + \sqrt{5}}{2} \) being the most familiar. Many artists have utilized the golden ratio to make their paintings and sculptures more appealing. Scientists believe that some human faces are more attractive because their features are related by the golden ratio. It has been demonstrated that the perception of facial beauty is not based entirely on cultural influences and that the lengths of the internal features can cause different perceptions of beauty. The ratios of face features play a crucial role in the classification of faces. In the past 20 years, researchers and practitioners in anthropology and aesthetic surgery have analyzed faces from a different perspective. They use a set of canonical points on the human face that are critical for face reconstruction. These points, and the distances between them, are used to represent a face. In fact, artists developed a set of neoclassical canons (ratios of distances) to represent faces as far back as the Renaissance period. All these observations motivate researchers to explore the role of ratios of the distances between face landmarks in face recognition (Shi et al., 2006). Generally, in face study, ratios are defined on the Euclidean or geodesic distances among landmarks. These ratios are often normalized distances, obtained by dividing a distance between points by the face width. Shi et al. (Shi et al., 2006) investigated how well normalized Euclidean distances (special ratios) can be exploited for face recognition. Exploiting symmetry and using principal component analysis, they reduced the number of ratios to 20. These ratios are invariant to translation, scaling and 2D rotation of face images. The normalized distances for a face are then defined as
\[
r(l_i, l_j) = \frac{d(l_i, l_j)}{d(l_a, l_b)}, \quad \forall l_i, l_j \in \{l_1, ..., l_N\},
\]
where \( \{l_1, ..., l_N\} \) are the landmarks, \( N \) is their cardinality, and \( l_a \) and \( l_b \) are two landmarks whose Euclidean distance is taken as a benchmark distance. Together with Euclidean and geodesic distances, Gupta et al. (Gupta et al., 2007; Gupta et al., 2007) used ratios. They presented an anthropometric three-dimensional (Anthroface 3D) face recognition algorithm, based on a systematically selected set of discriminatory structural characteristics of the human face derived
from the existing scientific literature on facial anthropometry. Anthropometric cranio-facial proportions are ratios of pairs of straight-line and/or along-the-surface distances between specific cranial and facial fiducial points. For example, the most commonly used nasal index $N_1$ is the ratio of the horizontal nose width to the vertical nose height. Lee et al. (Lee et al., 2005) used relative ratios between feature points to perform face recognition. Tang et al. (Tang et al., 2008; Tang et al., 2008) performed facial expression recognition. They devised a set of features based on properties of the line segments connecting certain facial feature points on a 3D face model; among them, normalized distances were extracted. Mao et al. (Mao et al.) studied facial change due to growth. They formulated a new inverse flatness metric, the ratio of the geodesic distance to the Euclidean distance between landmarks, to study 3D facial surface shape. With this ratio, they were able to analyze curvature asymmetry, which cannot be detected by studying the Euclidean distance alone. They also attempted to combine it with the conventional symmetry method based on Euclidean inter-landmark distances, expressing facial symmetry in terms of both surface flatness and the geometric symmetry of landmark positions (captured by the Euclidean distances), to give a better overall description of three-dimensional facial symmetry. If $GD_{i,j}$ is the geodesic distance between points $i$ and $j$, and $ED_{i,j}$ is the Euclidean distance, then the ratio of the geodesic to the Euclidean distance
$$R = \frac{GD_{i,j}}{ED_{i,j}}$$
is employed in their work to analyze surface flatness, since it can reflect the inverse flatness of the geodesic curve that samples the surface on which the two end points $(i,j)$ lie. Therefore, this ratio is capable of capturing obvious differences in facial curvature.
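Both kinds of ratios can be sketched directly from their definitions. The landmark names, coordinates and distance values below are hypothetical, chosen only to exercise the formulas:

```python
import math

def normalized_distance(li, lj, la, lb):
    """Normalized distance in the style of Shi et al.: d(l_i, l_j)
    divided by a benchmark distance d(l_a, l_b), e.g. the face width."""
    return math.dist(li, lj) / math.dist(la, lb)

def inverse_flatness(gd, ed):
    """Mao et al.-style inverse flatness: geodesic over Euclidean distance.
    Equals 1 on a flat patch and grows with surface curvature."""
    return gd / ed

# Hypothetical landmark coordinates (illustration only):
en_l, en_r = (-1.5, 0.0, 0.0), (1.5, 0.0, 0.0)   # inner eye corners
ex_l, ex_r = (-4.5, 0.0, 0.0), (4.5, 0.0, 0.0)   # outer eye corners
print(round(normalized_distance(en_l, en_r, ex_l, ex_r), 4))  # 0.3333
print(inverse_flatness(5.0, 4.0))                              # 1.25
```

Because both quantities are ratios of lengths, they are dimensionless and unaffected by uniform scaling of the face model.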
2.4 Curvature and shape
Pointwise values of curvature and shape provide precious information about the behaviour of the facial surface. Despite their valuable contribution, they are not used as often as distances, because they are not as easily tractable and extractable from faces. The need to condense and formalize their values becomes fundamental in this field, where surfaces are generally not analytic but described by point clouds or meshes.
Several techniques have been developed over the last two decades to estimate curvature information. From the mathematical viewpoint, curvature information can be retrieved from the first and second partial derivatives of the local surface, from the local surface normal, or through tensor voting (Worthington et al., 2000). An interesting curvature representation was proposed by Koenderink et al. (Koenderink et al., 1992), based on the parametrization of the local structure in two feature maps, namely the Shape Index $S$ and the Curvedness Index $C$. The Shape Index is formally defined as follows:
$$S = -\frac{2}{\pi} \arctan \left( \frac{k_1 + k_2}{k_1 - k_2} \right), \quad S \in [-1,1], \quad k_1 \geq k_2,$$
where $k_1$ and $k_2$ are the principal curvatures. It describes the shape of the surface. Koenderink et al. proposed a partition of the range $[-1,1]$ into nine categories, corresponding to nine different surface types. Dorai et al. (Dorai et al., 1995; Dorai et al., 1996; Dorai et al., 1997), however, employed a modified definition to identify the shape category to which each surface point of an object belongs. With their definition, all shapes are mapped onto the interval $S \in [0,1]$, conveniently allowing aggregation of surface patches based on their shapes:
\[ S = \frac{1}{2} - \frac{1}{\pi} \arctan \frac{k_1 + k_2}{k_1 - k_2}. \]
Dorai et al. addressed the problem of representing and recognizing arbitrarily curved 3D rigid objects when the objects may vary in shape and complexity, and no restrictive assumptions are made about the types of surfaces on the object. They proposed a new and general surface representation scheme for recognizing objects with free-form (sculpted) surfaces from range data.
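As noted above, curvature information can be retrieved from the first and second partial derivatives of the local surface. A minimal self-contained sketch for a Monge patch $z = f(x, y)$, using the standard fundamental-form formulas (not any cited author's implementation):

```python
import math

def principal_curvatures(fx, fy, fxx, fxy, fyy):
    """Principal curvatures of a Monge patch z = f(x, y) from its first
    and second partial derivatives at a point. Returns (k1, k2), k1 >= k2."""
    w = math.sqrt(1 + fx * fx + fy * fy)
    E, F, G = 1 + fx * fx, fx * fy, 1 + fy * fy       # first fundamental form
    L, M, N = fxx / w, fxy / w, fyy / w               # second fundamental form
    K = (L * N - M * M) / (E * G - F * F)             # Gaussian curvature
    H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F * F))  # mean curvature
    root = math.sqrt(max(H * H - K, 0.0))
    return H + root, H - root

# Paraboloid z = (x^2 + y^2) / 2 at the origin: fx = fy = fxy = 0,
# fxx = fyy = 1, so both principal curvatures are 1:
print(principal_curvatures(0, 0, 1, 0, 1))  # (1.0, 1.0)
```

On a mesh or point cloud, the derivatives themselves would first have to be estimated, e.g. by fitting a local quadric patch around each vertex.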
\( S \) does not give an indication of the scale of curvature present in the shapes. For this reason, an additional feature is introduced, the Curvedness Index of a surface:
\[ C = \sqrt{\frac{k_1^2 + k_2^2}{2}}. \]
It is a measure of how highly or gently curved a point is and is defined as the distance from the origin in the \((k_1, k_2)\)-plane. Whereas the Shape Index scale is quite independent of the choice of a unit of length, the curvedness scale is not. Curvedness has the dimension of reciprocal length. In practice one has to point out some fiducial sphere as the unit sphere to fix the curvedness scale.
Since the principal curvatures can be computed pointwise, both \( S \) and \( C \) can be too. This makes it possible to extract shape and curvedness information at landmarks or fiducial points, guaranteeing a formalization for these features.
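The pointwise evaluation can be sketched directly from the formulas as given in the text; the sample curvature values below are illustrative only:

```python
import math

def shape_index(k1, k2):
    """Shape Index as defined above (k1 >= k2), S in [-1, 1].
    Undefined at umbilic points, where k1 == k2."""
    return -(2 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def shape_index_dorai(k1, k2):
    """Dorai et al.'s variant, mapping all shapes onto [0, 1]."""
    return 0.5 - (1 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Curvedness Index: distance from the origin in the (k1, k2)-plane;
    it has the dimension of reciprocal length."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2)

# A cylindrical 'rut'-like point, k1 = 0, k2 = -1:
print(round(shape_index(0.0, -1.0), 6))        # 0.5
print(round(shape_index_dorai(0.0, -1.0), 6))  # 0.75
print(round(curvedness(0.0, -1.0), 4))         # 0.7071
```

Note that the two Shape Index variants differ only by an affine rescaling, so both order shapes identically; the Curvedness Index supplies the scale information that the Shape Index deliberately discards.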
Few authors used the Shape and Curvedness Indexes for recognition. Worthington et al. (Worthington et al., 2000) investigated whether regions of uniform surface topography can be extracted from intensity images using shape-from-shading and subsequently used for the purposes of 3D object recognition. They drew on the constant-Shape-Index maximal patch representation of Dorai et al. Song et al. (Song et al., 2005) described a 3D face recognition method using facial Shape Indexes. Given an unknown range image, they extracted invariant facial features based on the facial geometry. For the recognition step, they defined and extracted facial Shape Indexes based on facial curvature characteristics and performed dynamic programming. Shin et al. (Shin et al., 2006) described a pose-invariant three-dimensional face recognition method using distinctive facial features. They extracted invariant facial feature points on the facial components using the facial geometry from normalized face data and calculated relative features using these feature points. They also calculated a Shape Index on each area of a facial feature point to represent the curvature characteristics of facial components. Calignano (Calignano, 2009) used the Shape and Curvedness Indexes in a morphological analysis methodology for the automatic extraction of soft-tissue landmarks. Nair et al. (Daniyal et al., 2009; Nair et al., 2009) dealt with face recognition, face detection and landmark localization. In the candidate-vertex isolation phase, in order to properly characterize the curvature of each vertex on the face mesh, they computed two feature maps, namely the Shape Index and the Curvedness Index. The low-level feature maps were computed after Laplacian smoothing, which reduced outliers arising from the scaling process. The smoothed and decimated mesh is only used for the isolation of the candidate vertices. Zhao et al. (Zhao et al., 2010) analysed facial expressions.
To describe local surface properties, they computed the Shape Index of all points on the local grids and concatenated the values into a vector \( SI \). They chose the Shape Index because it has been proven to be an efficient feature for describing local curvature information and is independent of the coordinate system.
Other parameters and methodologies were used to extract shape and curvature information from facial landmarks or fiducial points. Moreno et al. (Moreno et al.) performed face recognition using 3D surface-extracted descriptors. Averages and variances of the mean and Gaussian curvatures, evaluated at points belonging to the various regions into which the face surface was divided, were extracted. Gordon (Gordon, 1992) defined a set of features which describe the nose ridge and
are based on measurements of curvature. They are: the maximum Gaussian curvature on the ridge line, the average minimum curvature on the ridge above the tip of the nose, the Gaussian curvature at the bridge, and the Gaussian curvature at the base. The maximum Gaussian curvature occurs approximately at the tip of the nose and provides some description of the local shape at that point. The average minimum curvature between the bridge and the tip of the nose is meant to provide a simple measure of the curvature along the ridge. Xu et al. (Xu et al., 2004) developed an automatic face recognition method combining global geometric features with local shape variation information. The scattered 3D point cloud is first represented with a regular mesh; then the local shape information is extracted to characterize the individual, together with the global geometric features. They first defined a metric to describe the 3D shape of the principal areas with a 1D vector and then used Gaussian-Hermite moments to analyze the shape variation. Efraty et al. (Efraty et al., 2009; Efraty et al., 2010) computed, for each pair of landmarks, the mean curvature of the region between the landmarks and the $L_2$-norm of the curvature along the contour between them (proportional to the bending energy). Wang et al. (Wang J. et al., 2006) dealt with facial expression recognition. They proposed an approach to extract primitive 3D facial expression features from the triangle meshes of faces. They performed principal curvature analysis, which produced a set of attributes describing the surface property at each vertex, including the principal curvatures, which represent the maximum and minimum degrees of bending of the surface, and the steepness. Using these geometric attributes, they were able to classify every vertex into a category.
2.5 Other features
Other geometrical features have been extracted from face landmarks. Ras et al. (Ras et al., 1996) studied facial morphology and computed angles between fiducial points. In particular, the angles exocanthion-chelion-pronasal, exocanthion-pronasal-exocanthion and pronasal-exocanthion-chelion, and the angle between the two planes formed by the exocanthion, chelion and pronasal of both sides, were calculated. Changes in facial morphology due to growth and development were analyzed with an analysis of variance on the angles. Lee et al. (Lee et al., 2005) performed face recognition calculating relative angles among facial feature points. Moreno et al. (Moreno et al.) computed angles, regions, areas of regions and centroids of regions. Zhao et al. (Zhao et al., 2010), for face recognition purposes, used the multi-scale LBP operator, a powerful texture measure widely used in 2D face analysis. It extracts information which is invariant to local gray-scale variations of the image with low computational complexity. They also computed a landmark displacement vector. The displacement of a landmark is meant to capture the change of the landmark location when an expression appears on a neutral face. It is informative because it represents the difference between the face with an expression and the neutral one. Similarly, Sun et al. (Sun et al., 2008) derived the displacement vector between each individual frame and the initial frame, namely the neutral-expression one. Dufresne (Dufresne) utilized the vectors between selected facial points as features for 2D face correction. He showed that simply measuring the width and height of the mouth does not indicate what pose the mouth is in, i.e. smiling, scowling or smirking. Vectors were selected for being particularly expressive: that is, a human could understand the expression if only these vectors were presented. Tang et al.
(Tang et al., 2008) extracted slopes of the line segments connecting a subset of the 83 facial feature points for facial expression recognition purposes. Daniyal et al. (Daniyal et al., 2009) analyzed the performance of different landmark combinations (signatures) to determine a signature robust to expressions for the purpose of face recognition. The selected signature was then used to train a Point Distribution Model for the automatic localization of the landmarks. For validation, Jesorsky et al. (Jesorsky et al., 2001) used a relative error measure for face detection. This relative error is based on the distances between the expected and the estimated eye positions, so it should not be considered a normalized distance. Other authors extracted depth and texture features from landmarks or face zones. A texture coding provides information about facial regions with little geometric structure, such as hair, forehead and eyebrows, while a depth coding provides information about regions with little texture, such
as the chin, jawline and cheeks. Particularly, Wang et al. (Wang Y. et al., 2002) extracted shape and texture features from defined feature points for face recognition purposes. BenAbdelkader et al. (BenAbdelkader et al., 2005) worked on face coding for recognition and identification. They designed a pattern classifier for three different inputs: depth map, texture map, and both depth and texture maps. Hüskens et al. (Hüskens et al., 2005) included both texture and shape as typical 2D and 3D representations of faces.
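Several of the simpler landmark features in this section (e.g. the angles computed by Ras et al. and Lee et al., or the displacement vectors) reduce to small vector computations. A minimal sketch with hypothetical landmark coordinates, chosen purely for illustration:

```python
import math

def angle_at(b, a, c):
    """Angle (degrees) at landmark b formed by the segments b->a and b->c."""
    u = tuple(ai - bi for ai, bi in zip(a, b))
    v = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def displacement(neutral, expressive):
    """Displacement vector of a landmark between a neutral and an
    expressive face scan."""
    return tuple(e - n for n, e in zip(neutral, expressive))

# Sanity check on orthogonal unit vectors:
print(round(angle_at((0, 0, 0), (1, 0, 0), (0, 1, 0)), 1))  # 90.0
# Hypothetical mouth-corner landmark moving upward in a smile:
print(displacement((2.5, -4.0, 0.0), (2.8, -3.4, 0.1)))
```

Such angles are invariant to rigid motion of the whole face, which is why they are attractive as pose-independent descriptors.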
3. Results and conclusions
Depending on the application field, these measures were judged by the researchers as valid, effective and suitable for describing a face. Since the fields in which all these geometrical features are used are so varied, it is beyond the scope of this paper to report the results these measures give in each application. However, it is possible to give an overview of how effective the most important features, namely Euclidean and geodesic distances, are in recognition, i.e. the main application field. These evaluations are given by the authors who employed both measures and compared the obtained results. Bronstein et al. (Bronstein et al., 2003; Bronstein et al., 2004; Bronstein et al., 2005; Bronstein et al., 2005; Bronstein et al., 2005), who used geodesic distances, obtained promising face recognition results on a small database of 30 subjects, even when the facial surfaces were severely occluded. They also demonstrated that the approach has several significant advantages, one of which is the ability to handle partially missing data. This is exactly the contrary of what was found by Gupta et al. in their first study (Gupta et al., 2007), who tested both Euclidean and geodesic distances. Their two algorithms based on Euclidean or geodesic distances between anthropometric facial landmarks performed substantially better than the baseline PCA algorithm. The algorithm based on geodesic distance features performed on a par with the algorithm based on Euclidean distance features; both were effective, to a degree, at recognizing 3D faces. In this study, the performance of the proposed algorithm based on geodesic distances between anthropometric facial landmarks decreased when probes with arbitrary facial expressions were matched against a gallery of neutral-expression 3D faces. This suggests that geodesic distances between pairs of landmarks on a face may not be preserved when the facial expression changes.
This contradicted Bronstein et al.’s assumption that facial expressions are isometric deformations of facial surfaces. In conclusion, geodesic distances between anthropometric landmarks were observed to be effective features for recognizing 3D faces; however, they were not more effective than Euclidean distances between the same landmarks, and the 3D face recognition algorithm based on geodesic distance features was affected by changes in facial expression. Later, Gupta et al. (Gupta et al., 2010) obtained different results: for expressive faces, the recognition rates of the algorithm based on both Euclidean and geodesic facial anthropometric distances were generally higher than those of the algorithm based on Euclidean distances only. This suggests that facial geodesic distances may be useful for expression-invariant 3D face recognition, and it further strengthens Bronstein et al.’s proposition that different facial expressions may be modeled as isometric deformations of the facial surface.
An exhaustive set of morphometric measures and geometrical features extractable from facial landmarks has been presented and explained here. The most popular ones are certainly the Euclidean and geodesic distances, which were used by many authors, also as benchmark elements of comparison. The application that involves them most is recognition, with its various subfields, such as face recognition, facial expression recognition and face detection. Landmarks are the starting point for this study, being precisely the reference points from which the information is extracted. Various evaluations have shown the use of fiducial points to be necessary; as a matter of fact, most of the work concerning 3D facial morphometry refers exactly to landmarks.
References
Tomographic Particle Image Velocimetry Using Colored Shadow Imaging
Thesis by
Meshal K Alarfaj
In Partial Fulfillment of the Requirements
For the Degree of
Master of Science
King Abdullah University of Science and Technology
Thuwal, Kingdom of Saudi Arabia
Insert Approval Date
EXAMINATION COMMITTEE APPROVALS FORM
The dissertation/thesis of [Student Name] is approved by the examination committee.
Committee Chairperson [insert name]
Committee Co-Chair (if appropriate) [insert name]
Committee Member [insert name]
ABSTRACT
Tomographic Particle Image Velocimetry Using Colored Shadow Imaging
by
Meshal K Alarfaj, Master of Science
King Abdullah University of Science & Technology, 2015
Tomographic particle image velocimetry (PIV) is a recent PIV method capable of reconstructing the full 3D velocity field of complex flows within a 3D volume. Over nearly the last decade, it has become the most powerful tool for the study of turbulent velocity fields and promises great advancements in the study of fluid mechanics. Among the early published studies, a good number have suggested enhancements and optimizations of different aspects of this technique to improve its effectiveness. One major aspect, which is the core of the present work, is reducing the cost of the Tomographic PIV setup. In this thesis, we attempt to reduce this cost with an experimental setup exploiting 4 commercial digital still cameras in combination with low-cost light-emitting diodes (LEDs). We use two different colors to distinguish the two light pulses. By using colored shadows with red and green LEDs, we can identify the particle locations within the measurement volume at the two different times, thereby allowing calculation of the velocities. The present work tests this technique on the flow patterns of a jet ejected from a tube in a water tank. Results from the image processing are presented and challenges are discussed.
# TABLE OF CONTENTS
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Abstract</td>
<td>iv</td>
</tr>
<tr>
<td>List of Abbreviations</td>
<td>vi</td>
</tr>
<tr>
<td>List of Figures</td>
<td>vii</td>
</tr>
<tr>
<td>1 Introduction</td>
<td>1</td>
</tr>
<tr>
<td>1.1 PIV basic principle</td>
<td>2</td>
</tr>
<tr>
<td>1.2 Seeding particles</td>
<td>3</td>
</tr>
<tr>
<td>1.3 Tomographic PIV standard layout</td>
<td>4</td>
</tr>
<tr>
<td>1.4 Tomographic PIV applications and limitations</td>
<td>7</td>
</tr>
<tr>
<td>1.5 Commonly used illumination sources</td>
<td>8</td>
</tr>
<tr>
<td>2 Objectives</td>
<td>11</td>
</tr>
<tr>
<td>3 Experimental setup</td>
<td>14</td>
</tr>
<tr>
<td>3.1 LED illumination source</td>
<td>14</td>
</tr>
<tr>
<td>3.2 Cameras</td>
<td>20</td>
</tr>
<tr>
<td>3.2.1 Cameras calibration</td>
<td>22</td>
</tr>
<tr>
<td>3.2.2 Self-calibration</td>
<td>26</td>
</tr>
<tr>
<td>3.3 Water tank</td>
<td>26</td>
</tr>
<tr>
<td>3.4 Flow-field and seeding system</td>
<td>30</td>
</tr>
<tr>
<td>3.5 Signal and air generators</td>
<td>31</td>
</tr>
<tr>
<td>4 Data processing and results</td>
<td>33</td>
</tr>
<tr>
<td>5 Discussion and conclusions</td>
<td>69</td>
</tr>
<tr>
<td>References</td>
<td>71</td>
</tr>
</tbody>
</table>
**LIST OF ABBREVIATIONS**
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>BMP</td>
<td>Bitmap image file</td>
</tr>
<tr>
<td>CCD</td>
<td>Charge coupled device</td>
</tr>
<tr>
<td>CMOS</td>
<td>Complementary metal-oxide-semiconductor</td>
</tr>
<tr>
<td>CW</td>
<td>Continuous wave</td>
</tr>
<tr>
<td>DSLR</td>
<td>Digital Single-Lens Reflex</td>
</tr>
<tr>
<td>fps</td>
<td>Frames per second</td>
</tr>
<tr>
<td>JPEG</td>
<td>Joint Photographic Experts Group</td>
</tr>
<tr>
<td>LASER</td>
<td>Light Amplification by Stimulated Emission of Radiation</td>
</tr>
<tr>
<td>LED</td>
<td>Light emitting diodes</td>
</tr>
<tr>
<td>MART</td>
<td>Multiplicative Algebraic Reconstruction Technique</td>
</tr>
<tr>
<td>NEF</td>
<td>Nikon electronic format</td>
</tr>
<tr>
<td>PIV</td>
<td>Particle image velocimetry</td>
</tr>
<tr>
<td>PSV</td>
<td>Particle shadow velocimetry</td>
</tr>
<tr>
<td>SRS</td>
<td>Stanford Research Systems</td>
</tr>
</tbody>
</table>
LIST OF FIGURES
Figure 1.1: A schematic drawing showing the displacement of a tracer particle at two consecutive times ..... 2
Figure 1.2: Illustration of the Tomo-PIV experimental setup and working principle, with 4 cameras and laser volume-illumination [3] ..... 5
Figure 2.1: Subsection of a Nikon photo demonstrating the basic idea of using two colors to embed time-information into a single image ..... 12
Figure 2.2: Sketch showing how the shadows appear to reverse the order of the light illumination ..... 13
Figure 3.1: A schematic drawing of the LED system layout and octagonal water tank model ..... 15
Figure 3.2: Photos of the red & green LED chips by Luminus [6] ..... 16
Figure 3.3: A photo of the aspheric condenser lens used with the LED system ..... 16
Figure 3.4: Timing diagram for the LED and camera systems used in the Tomo-PIV setup ..... 17
Figure 3.5: Schematic drawing and photo of the LED mounting on the heat sink [6] ..... 20
Figure 3.6: A photo of the commercial cameras used in the Tomo-PIV ..... 22
Figure 3.7: Schematic drawing & photo of the camera connections ..... 24
Figure 3.8: Photos of the calibration setup: (a) calibration plate by LaVision, (b) calibration stepper motor by Pollux ..... 25
Figure 3.9: Drawing and photo of the Tank-A model used with the 4-LED system ..... 28
Figure 3.10: Drawing and photo of the Tank-B model used with the 8-LED system ..... 29
Figure 3.11: Photos of the seeding mechanisms used with Tomo-PIV: (a) with the Tank-A model, (b) with the Tank-B model ..... 30
Figure 3.12: A photo of the function and delay generators ..... 31
Figure 3.13: A photo of the air dispenser ..... 32
Figure 4.1: Top panel: original image using red/green shadows ..... 34
Figure 4.2: RGB image and plot of the vertically averaged intensities of the colors ..... 35
Figure 4.3: Plots of the actual intensities with the particles (top image) and the average intensities of the RGB image (bottom plot) ..... 36
Figure 4.4: Particles before & after color separation of the RGB image: (a) enhanced RGB image, (b) red channel, (c) green channel, (d) blue channel ..... 37
Figure 4.5: The details of an image subsection containing the shadows of one particle ..... 38
Figure 4.6: The separation of the two particle images, using the difference between the red and the green channels ..... 40
Figure 4.7: The probability distribution of the pixel intensity values averaged over the entire area of interest, where the particles in the vortex ring are most visible ..... 42
Figure 4.8: The average color fields for the four different cameras ..... 43
Figure 4.9: A particle image (green component) after unifying or subtracting the background variation ..... 45
Figure 4.10: Inverted green channel of a particle image ..... 45
Figure 4.11: The particle images using the shadow in the green color-channel, from all four cameras ..... 47
Figure 4.12: Close-up particle images from the previous figure ..... 48
Figure 4.13: The Green-Red fields for all 4 cameras, which correspond to the second time-flash ..... 49
Figure 4.14: Close-up sections of the images in Figure 4.13 ..... 50
Figure 4.15: Direct comparison of the particle images in the same area from the Green (left panel) and Green-Red (right panel) at the two different times ..... 51
Figure 4.16: Direct PIV calculated from the Tomo-images ..... 52
Figure 4.17: The intensity distribution of the particles in the reconstructed volume, showing clearly where the particles are concentrated ..... 54
Figure 4.18: Plots showing the intensity profiles in the z-direction through the reconstructed volume ..... 55
Figure 4.19: One reconstruction plane out of a total of 1035 adjacent planes ..... 56
Figure 4.20: Velocity vectors in a plane near the centerline of the vortex ring ..... 57
Figure 4.21: Close-up of the right-side cut through the vortex ring in Figure 4.20 ..... 58
Figure 4.22: The color indicates the magnitude of the horizontal component of the velocity vector ..... 59
Figure 4.23: The color indicates the magnitude of the vertical component of the velocity vector ..... 60
Figure 4.24: The out-of-plane velocity in a plane near the edge of the vortex ..... 61
Figure 4.25: Subsection from a typical Red-Blue LED image ..... 63
Figure 4.26: Another example of the colour separation using Red-Blue LEDs ..... 64
Figure 4.27: The pdf of the RED and BLUE pixel intensities, as well as the intensities of the difference RED-BLUE (black curve) ..... 65
Figure 4.28: Subsection of the previous figure, showing the individual color channels ..... 66
Figure 4.29: Image using Red/Blue LEDs, showing an image subsection with a vortex ring with numerous large bubbles around the core of the ring ..... 67
Figure 4.30: Photos and plots for the Red and Blue LED experiment ..... 68
CHAPTER 1
Introduction
Tomographic Particle Image Velocimetry (Tomo-PIV) is one of several systems used to measure velocity fields in fluid mechanics. Like all PIV techniques, it tracks particle motions in time. However, it is the only technique that can successfully measure the full 3-D velocity field in a volume, and it is increasingly being used for the analysis of complex or turbulent flows. It has sufficient accuracy to allow calculation of all the velocity gradients and thereby to obtain the 3-D vorticity field, which is fundamental to the study of turbulence dynamics through the study of coherent structures. In the nearly ten years since the seminal work of Elsinga et al. in 2006 [8], it has become a powerful tool and has enabled great advances in fluid mechanics research [1]. Among these studies, a good number have suggested enhancements and optimizations of different aspects of the technique, to improve its effectiveness and reduce its computational cost. One major aspect, which is the core of the present work, is the cost reduction of the Tomographic PIV setup by using consumer cameras.
The remainder of this chapter provides an introduction to the general PIV technique and discusses the standard setup of tomographic PIV, along with its main applications and limitations. It also highlights the commonly used illumination systems. The subsequent chapters detail our modifications of the Tomo-PIV technique.
1.1 PIV Basic Principle:
PIV is a non-intrusive technique able to provide quantitative measurements of instantaneous velocity fields over a relatively large area, with measurements documented at a large number of points simultaneously. It first appeared in the literature in the mid-1980s for studying turbulence, and it has continued to be the most practical method and has played a major role in flow visualization [2].
The basic concept of PIV is to obtain the velocity from the short-term displacement of solid particles embedded in the flow field (Figure 1.1). In other words,
Fig 1.1: A schematic drawing showing the displacement of a tracer particle at two consecutive times
the velocity vector “\( \vec{U} \)” is calculated using the basic definition of a derivative, considering the tracer displacement “\( \Delta \vec{X} \)” between two successive observations. When the first observation is made at time “\( t \)” and the second at “\( t + \Delta t \)”, then:
\[
\vec{U} = s \frac{\Delta \vec{X}}{\Delta t}
\]
(1.1)
where “s” is the scale factor (magnification) that converts the image displacement into physical units. For the high seeding densities used in PIV, the displacement of individual particles cannot be identified; instead, the most likely shift of a collection of particles is obtained using cross-correlation.
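For illustration, the displacement-by-cross-correlation step can be sketched in a few lines of Python. The particle field, the scale factor s, and the pulse separation Δt below are all invented for the example; real Tomo-PIV performs this correlation in 3-D on reconstructed volumes, as discussed later.

```python
import numpy as np

def displacement_by_cross_correlation(win_a, win_b):
    """Most likely shift of the particle ensemble between two interrogation
    windows, found as the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices beyond half the window size correspond to negative shifts
    return np.array([p - n if p > n // 2 else p for p, n in zip(peak, corr.shape)])

# synthetic frames: 20 random "particles", second frame shifted by (3, 5) px
rng = np.random.default_rng(0)
frame1 = np.zeros((64, 64))
frame1[rng.integers(8, 48, 20), rng.integers(8, 48, 20)] = 1.0
frame2 = np.roll(frame1, (3, 5), axis=(0, 1))

shift_px = displacement_by_cross_correlation(frame1, frame2)  # -> [3 5]
dt = 0.01  # s, assumed separation between the two exposures
s = 1e-4   # m per pixel, assumed magnification scale factor
velocity = s * shift_px / dt  # eq. (1.1)
```

Note that the peak of the correlation plane recovers the shift of the whole particle ensemble, which is exactly why individual particles need not be identified.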
1.2 Seeding particles:
For PIV to give accurate information about the underlying velocity field, the tracers must be small enough to follow the flow in the presence of large local and randomly fluctuating accelerations. How well the particles follow the flow is characterized by the so-called Stokes number,
\[
St = \frac{\Delta \rho \ast \Delta u \ast d}{\mu_{liq}}
\]
(1.2)
where “\( \Delta \rho \)” is the difference between the densities of the liquid and the particles, “\( \Delta u \)” is the velocity difference between the particle and the surrounding liquid, “\( \mu_{liq} \)” is the liquid dynamic viscosity, and “d” is the particle diameter. Conceptually, the Stokes number compares the inertial mismatch between the particle and the surrounding fluid with the viscous force the fluid applies to reduce the difference in the two velocities. For the particles to follow the flow faithfully, the value of the Stokes number needs to be small. The most straightforward way of accomplishing this is to match the density of the particles as closely as possible to that of the liquid [8].
Using small particles also helps in this respect. The particle must also be smaller than the finest velocity structures of the flow field, so that it is subjected to an approximately constant velocity over its surface. However, the seeding particles cannot be too small, as they need to be visible to the camera sensor: small particles reflect less light than larger ones. Furthermore, the images of the particles on the sensor must be larger than the pixel size to capture them accurately.
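As a quick numerical illustration of eq. (1.2), the sketch below evaluates St for the soda-lime glass spheres used later in this work (section 3.4); the slip velocity Δu is an invented value for the example.

```python
def stokes_number(rho_particle, rho_liquid, delta_u, diameter, mu_liquid):
    """Stokes number as defined in eq. (1.2): St = drho * du * d / mu_liq."""
    return abs(rho_particle - rho_liquid) * delta_u * diameter / mu_liquid

st = stokes_number(
    rho_particle=1360.0,  # kg/m^3, soda-lime glass (1.36 g/cc, section 3.4)
    rho_liquid=1000.0,    # kg/m^3, water
    delta_u=0.01,         # m/s, assumed particle-liquid slip velocity
    diameter=100e-6,      # m, ~100 um particles
    mu_liquid=1.0e-3,     # Pa*s, water at room temperature
)
print(f"St = {st:.2f}")  # small value -> particles follow the flow faithfully
```

The small density mismatch with water is what keeps this number low, in line with the density-matching argument above.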
For standard PIV, a light sheet is formed by a pulsed light source to illuminate the particles. The duration of the light pulse must be short enough that the particles are almost still during each pulse. However, in our setup we use shadows, which have quite different optical properties from light scattering but are constrained by the same intensity requirements. The depth of field of the imaging also enters the optical design, as will be discussed in a later section.
1.3 Tomographic PIV standard layout:
A standard Tomographic PIV system consists of a pulsed laser with volume optics that creates an illuminated volume slice in the flow, which is seeded with tracers.
Fig 1.2 Illustration of Tomographic PIV experimental setup and working principle, with 4 cameras and a laser volume-illumination [3].
The particle images are recorded with four digital cameras, which view the particle field from different directions. The cameras and illumination are synchronized using a computer to control the system, store the data and perform the required analysis.
In Tomographic PIV, the 3-D velocity fields are measured using a particle-based interrogation process, as illustrated in Figure 1.2. The process starts with several camera views of the tracer particles illuminated by the laser light. These different viewing directions are captured simultaneously, and the region of study corresponds to the overlap of the cameras' fields of view. This is followed by reconstructing the three-dimensional particle field. In essence, this is done by triangulating the images from the different cameras and finding the locations in 3-D where they overlap. In practice, this is done by an iterative reconstruction method called the Multiplicative Algebraic Reconstruction Technique (MART).
Finally, when two particle-field volumes, taken at different times $t_1$ and $t_2$, have been reconstructed, we can use three-dimensional cross-correlation, on small sub-volumes within the larger volume, to obtain the local 3-D velocity vector, representing the particle motions. This provides velocity values over a regular 3-D grid spanning the entire measurement volume [3].
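The MART reconstruction mentioned above starts from a uniform intensity field and repeatedly rescales the voxels along each pixel's line of sight so that their summed intensity matches the recorded pixel value. Below is a toy Python sketch of this update rule; the geometry (a 3 × 3 "volume" viewed by two orthogonal sets of one-pixel-wide rays) is invented for illustration and is not the Davis implementation.

```python
import numpy as np

def mart(projections, rays, shape, n_iter=20, mu=1.0):
    """Toy MART: multiplicatively correct voxel intensities until the
    summed intensity along each ray matches its measured projection."""
    field = np.ones(shape)  # uniform positive initial guess
    for _ in range(n_iter):
        for p, ray in zip(projections, rays):
            s = field[ray].sum()
            if s > 0:
                field[ray] *= (p / s) ** mu
    return field

# assumed toy geometry: two orthogonal "cameras" whose pixels record
# row sums and column sums of the particle field
true = np.zeros((3, 3))
true[0, 1] = 1.0  # first "particle"
true[2, 2] = 1.0  # second "particle"

rays = [np.index_exp[i, :] for i in range(3)] + \
       [np.index_exp[:, j] for j in range(3)]
projections = [true[r].sum() for r in rays]

recon = mart(projections, rays, true.shape)
# recon reproduces every measured projection, but with only two viewing
# directions it also contains "ghost particles" at (0, 2) and (2, 1)
```

The ghost particles in this two-view example illustrate the ambiguity that real Tomo-PIV reduces by imaging from four directions.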
The illumination volume can be formed from different lighting sources, as will be discussed in section 1.4. An important factor that affects the results is the exposure time, i.e. the duration of the illumination pulse. For best quality, one needs to ensure that the exposure time is short enough that the particle motion is “frozen” without “streaks”. However, it should not be too short, in order to guarantee sufficient illumination of the particles for the intensity needed by the camera sensor. With Q-switched lasers this is usually not a serious constraint, as their pulses are typically around 5 ns long. However, for LEDs this needs to be optimized, as will be discussed later.
For recording, high-speed charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras are now commonly used, since they can capture multiple frames at very high speed. The distance between the cameras and the orientation of their viewing planes are important parameters for the imaging process and for the sensitivity to out-of-plane motions. The magnification determines the physical size imaged by each pixel on the sensors. The four cameras and the laser are connected through a synchronizer, which is controlled by a computer and dictates the timing of the camera sequence in conjunction with the firing of the laser [9].
1.4 Tomographic PIV Applications & Limitations:
Since its early development, Tomographic PIV has been widely used in many applications, such as the following areas [4]:
- Time resolved cylinder wake [11]
- Flow around a cylinder [14]
- Boundary layers [12]
- Round jet [4, 15, 16, 17]
- Shock waves [13]
Moreover, this technique proved to be a promising candidate in the ongoing research and development in different industries including the aeronautics and automotive and also in the medical and biological fields [3].
As with any other experimental system, however, tomographic PIV has its limitations. These were listed in the review by Scarano [1] and are summarized below:
- High lighting power is required for illumination of a volume.
- The size of recorded images needs to be much larger than for regular planar or stereo PIV.
- The hardware used is of high cost and presents some safety concerns.
- Large computational effort is required to analyze the recorded images as compared with the conventional planar PIV. This is particularly true of the 3-D cross-correlation.
- Complicated and sensitive 3-D calibration procedure.
1.5 Commonly used illumination sources:
The most common source of PIV illumination is the laser (Light Amplification by Stimulated Emission of Radiation). This is because of its ability to emit monochromatic light with high energy density, which can easily be formed into thin light sheets. Light pulses can be obtained with pulsed lasers, or with continuous-wave (CW) lasers combined with a chopping system for producing light pulses and/or simply by shuttered recording of the video camera.
An optical system must be added to shape the illumination for the specific purpose, and each element has its own role. First, lenses and mirrors are used to form the laser beam into a light sheet or light volume at the desired position within the test section. Two types of lenses are used: a cylindrical lens to expand the beam into a plane, and a spherical lens to compress the plane into a thin sheet. Lastly, mirrors can be used to deflect the beam to the desired position, or to scan it through the test volume [4]. Scanning beams have higher intensity when they hit the individual particles and can in that way improve the signal-to-noise ratio. The requirements on the optics for Tomo-PIV volume illumination are in a sense less demanding than for regular PIV, as the volume does not have to be as well defined as the very thin sheets used in planar PIV. The Tomo-PIV processing extracts the 3-D location of the particles within the volume, whereas in regular PIV this location is determined by the laser sheet itself.
An alternative and effective way to obtain the needed illumination is to use high-power light-emitting diodes (LEDs). They can be used with a specialized PIV technique called particle shadow velocimetry (PSV), which has been validated in many PIV applications [5]. It works by focusing the LED light into a collimated beam directed toward the camera through the seeded flow. The shadows of the seeding particles then produce a negative image: dark regions where the particles are present against a bright background.
This illumination method offers a significant cost reduction compared with lasers. The output power can reach up to 1 mJ per pulse when operated at high frequency. Handling and installation are simple, with little maintenance required. With the recent development of the blue LED, which earned Isamu Akasaki, Hiroshi Amano and Shuji Nakamura the Nobel Prize in Physics in 2014 [18], the visible spectrum of wavelengths from 460 nm in the blue up to 645 nm in the red is accessible to inexpensive illumination. LEDs also work with any type of camera, from high-speed video cameras to normal low-speed digital consumer cameras.
CHAPTER 2
Objectives
The main objective of this work was to test a new technique for performing Tomographic PIV in a more economical way than the expensive specialized setups used in research laboratories today. A different setup is proposed that offers a great cost reduction by exploiting the rapid technological advances currently occurring in lighting sources and image-recording devices. Consumer electronics technology is advancing by leaps and bounds, essentially following Moore's law of chip development by doubling in capability every 18 months [19]. Consumer cameras are in this way increasing the number of pixels per sensor every year, with the most recent cameras having up to 50 Mpx on a single sensor. Video cameras with 4k sensors are similarly becoming commonplace, with frame rates up to 60 fps, and 8k video has recently been announced. The idea is therefore to use consumer cameras for Tomo-PIV and ride this rapid advance to realize inexpensive experimental techniques for general use in research and industry. However, using single-frame cameras introduces the complication of separating the particle locations at the two times. We propose to solve this by encoding the two images on the same frame using the color information, green for one time and red for the other. Two inexpensive color LEDs are used to illuminate the measurement volume rather than the laser usually used in standard setups. Likewise, normal low-speed digital consumer cameras replaced the very
Figure 2.1: Subsection of a Nikon photo demonstrating the basic idea of using two colors to embed time-information into a single image. Here a vortex ring is generated by rupturing a membrane, which spans the opening of a cylinder. This cylinder, which holds a suspension of particles, extends just out of the water surface, to give hydrostatic pressure to force the vortex ring from the bottom. The syringe needle is visible at the center, as two dark shadows, with the first shadow in the green color and the second in the red. Here the amount of particles is too high to perform Tomo-PIV, but they show clearly the overall motions. Note the fully dark shadows in the overlap regions, where both the green and red lights have been blocked, by a different set of particles.
expensive high-speed video cameras, or the specialized dual-frame cameras, often used in PIV. Figure 2.1 shows a typical image from the Nikon camera after the two pulses. The red pulse comes first and then the green, but the image seems to indicate the opposite. The explanation is given in the sketch in Figure 2.2.
Figure 2.2: Sketch showing how the shadows appear to reverse the order of the light illumination. In other words, the original location of the particle is marked by the green flash, which is the second flash. Similarly, the second location of the particle is red from the first flash, whereas the surrounding areas are yellow, a combination of the two colors.
Figure 2.2 shows the technique in a nutshell: the two shadows from one particle, which has shifted during the time between the background pulses. The first pulse is red and the particle is located at position 1, marked in the figure. Then the particle moves to position 2 and the green pulse is flashed. The final sketch shows the intensities left on the camera sensor after the two pulses. Where both red and green have flashed on the same pixel, the resulting color is yellow.
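The color bookkeeping in Figure 2.2 amounts to a small truth table, which the following Python sketch reproduces (the pixel positions are invented for the example): the first, red flash leaves a green shadow at the particle's original position, and the second, green flash leaves a red shadow at its new position.

```python
import numpy as np

# one row of 10 pixels; columns hold the accumulated [red, green] exposure
line = np.zeros((10, 2))
line[:, 0] = 1.0   # first flash (red); the particle at pixel 3 blocks it
line[3, 0] = 0.0
line[:, 1] += 1.0  # second flash (green); the particle has moved to pixel 6
line[6, 1] = 0.0

def apparent_color(red, green):
    if red > 0 and green > 0:
        return "yellow"  # both flashes reached the sensor
    if green > 0:
        return "green"   # red was blocked: particle's FIRST position
    if red > 0:
        return "red"     # green was blocked: particle's SECOND position
    return "dark"        # both blocked (overlapping shadows)

colors = [apparent_color(r, g) for r, g in line]
print(colors[3], colors[6], colors[0])  # -> green red yellow
```

This is why the image appears to reverse the pulse order: each shadow takes the color of the flash the particle did not block.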
CHAPTER 3
Experimental Setup
The experimental setup for the novel Tomographic PIV technique, using the normal low-speed digital cameras and multi-color LEDs is illustrated in Figure 3.1 and consisted of the following components:
1) LED Illumination sources
2) Four DSLR Cameras
3) Specially designed water tank
4) Cylindrical chamber for seeding the system
5) Function and delay generators
6) The Davis Tomo-PIV software from LaVision.
3.1 LED Illumination Sources:
In this work we use colored-shadow imaging: the images of the particles are formed as shadows against a background diffuser screen. This screen is a thin sheet of drafting paper illuminated by ultra-bright LEDs. To produce a sufficiently bright background, we ended up using 4 separate LEDs for each color, i.e. a separate LED to backlight the diffuser screen facing each of the four cameras. The same applies for the green and red colors, for a total of 8 colored LED chips (Figure 3.2). The two colors are generated by consecutive electrical current pulses, so that each color corresponds to a specific illumination time. The light produced by each

**Fig 3.1:** A schematic drawing for the LED system layout and octagonal water tank model.
LED unit is focused through an aspheric condenser lens (Figure 3.3) onto the diffuser film to distribute it evenly. The emitted light has wavelengths of 623 nm and 525 nm in the red and green spectra, respectively. The drive current for the red and green LEDs was set to 30 A, giving output illumination fluxes of 1400 lm
Fig 3.2: Photos of the Red & Green LED Chips by Luminus [6].
for the red and 3100 lm for the green. The minimum pulse exposure time needed was 10 µs, and the time interval separating the two pulses ranged between 10 and 20 ms. However, for stronger particle contrast we often needed to use a longer exposure time of about 20 µs. Exposures longer than that would lead to significant smearing of the fastest particles.
Fig 3.3: A photo of the Aspheric condenser lens used with the LED system.
Using a function generator, the LED and camera systems were operated at 1 Hz through a delay generator that synchronized both systems and controlled the timing. One reason for using such a low repetition rate between subsequent pulse sequences is to minimize vibrations of the cameras caused by the opening of the mirror, which must move out of the way before the sensor is exposed. This issue can be avoided with mirrorless cameras, which are becoming more common (see for example the new Sony 7R), or by locking the mirror in the up position. Figure 3.4
Fig 3.4: Timing diagram for the LEDs and cameras in the Tomographic PIV setup.
shows a diagram for the timing of the LED and Camera systems. The timing sequence is the following:
i. The camera shutters are opened up and stay open for 1 sec.
ii. The green LEDs are turned on for about 100 µsec.
iii. After a delay of about $\text{dt}=10 \text{ ms}$, the red LEDs are turned on for about 100 µsec.
The red and green LED chips were mounted next to each other on a common heat sink for cooling purposes, as illustrated in Figure 3.5. The LED sources, including the LED evaluation driver cards and the heat sink, are manufactured by Luminus, PhlatLight® [6]. LEDs can often be driven at higher currents for short durations than for continuous running, which would burn them out. In this way, one can obtain much larger illumination intensity for the pulses. This is quite crucial for Tomo-PIV, as imaging a volume demands small apertures on the camera lenses to get a sufficiently large depth of field. A smaller aperture, in turn, demands larger illumination strength than would be required for planar measurements.
Fig 3.5: photo of the LED setup with lenses.
3.2 Cameras:
The Tomo-PIV images were recorded in a typical setup of four viewing directions using four DSLR cameras. The cameras were arranged symmetrically with viewing angle "θ", the angle between an outer camera's viewing direction and the z-axis, as illustrated in Figure 3.7. Since the best results are obtained at \( \theta = 30^\circ \), the cameras were placed accordingly [8]. The cameras used were the Nikon D3X model, a single-lens reflex type with a 24.5-megapixel sensor, capable of taking images at up to 5 frames per second (fps). In our PIV application, the cameras were set to record images at a frame rate of 1 Hz with an exposure of 1 second, which gives essentially single-shot imaging of the flow field, unless the flow evolves very slowly.
The aperture was set to an f-number of 11 (the f-number is inversely proportional to the aperture diameter). 50 mm Nikkor lenses were fitted on all cameras to give the same magnification. To minimize vibrations of the system, the cameras were mounted on Manfrotto heavy-duty tripod heads, which were in turn mounted on X95 optical rails, as shown in **Figure 3.6**.
The triggering of the start of the camera exposure was initiated by the same delay generator used to control the LED pulses. Between the cameras and the delay generator, the signal is converted and the cable is split into four cables terminated with 10-pin connectors. **A schematic of the camera connections is shown in Figure 3.7**. For each experiment, two images are obtained with a single camera exposure and two successive pulses of the different-color LEDs. The recorded images are saved in two formats: raw 14-bit (NEF) and JPEG. The images are then uploaded to a specialized computer to be further processed and analyzed, as explained in Chapter 4.
3.2.1 Camera calibration
To be able to triangulate the location of minute particles in the 3-D volume, it is crucial to have an accurate calibration between every pixel on a sensor and the corresponding line it views through the illuminated volume. The calibration error therefore has to be smaller than the particle size, or ideally smaller than the pixel size on the sensor. This requires an elaborate 3-D calibration procedure, which must be carried out in situ, under exactly the same conditions as the experiment itself. In other words, the test section must be full of
Fig 3.7: Schematic drawing & photo for the cameras connection.
water, both to have the same refractive-index field and the same shape of the outer wall, which can bend slightly due to the hydrostatic pressure of the water. The calibration of the four cameras was done simultaneously using a LaVision 3-D calibration plate, Type #11, as the calibration target (see photo in Figure 3.8(a)). The plate size is 100 x 100 mm. The plate is made of black anodized aluminum, with numerous precision dots, each 2.2 mm in diameter with a spacing of 10 mm. The plate surface has regular grooves so that the dots effectively lie on two parallel planes. The plate is traversed in the direction perpendicular to its surface, to calibrate through the volume.
A Pollux motorized stepper motor, driven by a Micos controller (Figure 3.8(b)), was controlled by a personal computer to translate the plate in the z-direction over a total of 50 mm, in steps of 5 mm. Eleven separate views of the calibration plate were recorded in this way, covering the whole measurement volume. The recorded images are saved in the same formats as the PIV images for further processing. The Davis program performs the calibration, relating each pixel to the line it cuts through the measurement volume; it automatically finds the center of each dot in each image, separately for each camera. In practice, it was found that even this careful calibration procedure is not sufficient, and a follow-on correction procedure is required for the best results, as will be described in the following section.
Fig 3.8: Photos of the calibration setup: (a) calibration plate by Lavision, (b) calibration stepper motor by Pollux.
3.2.2 Self calibration
This is done using the actual particle images from an experimental run, rather than the calibration plate. In essence, we search the particle images for especially bright particles that are clear in all four cameras. If we know the identity of a particular particle, we can check whether the pixel lines from the different cameras intersect at a point in the reconstructed volume, as they should for a perfect calibration. Deviations from a perfect intersection can then be corrected. This only works if the distortions are consistent between adjacent particle images; if they differed for each particle in the field, that would indicate random noise. It may be surprising that this works better than the highly controlled initial calibration with the translating grid, but in practice it has been noticed that conditions can change slightly between the calibration and the experimental runs. To name only one possibility, a change in the temperature of the working fluid during an experiment will alter the refractive index, or expand the plexiglass walls, thereby shifting the lines that are extrapolated from each pixel on the cameras into the volume of the experiment.
3.3 Water tank:
Two water tanks were built for this work. The first (Tank-A) had a perfect octagonal shape (Figure 3.9) and the second (Tank-B) a more irregular one, as shown in Figure 3.10. Both tanks were made from PVC and used as the medium for the PIV measurements. The experiment with Tank-A used an illumination system consisting of 2 LEDs per color, arranged so that the four cameras face two adjacent rectangular sides from the front and the two LED systems face two adjacent sides from the back.
The shape of Tank-B was designed to minimize optical distortions caused by the plexiglass walls. In the first design, the viewing through the walls was at a slight angle, whereas in the new tank all the cameras look exactly perpendicular to the walls. This tank required 4 LEDs per color to get sufficient illumination intensity. Its customized shape makes the four camera views identical: the cameras look through four adjacent rectangular sides, and the four LED systems face two adjacent rectangular sides.
Fig 3.9: Drawing and photo of Tank-A model used with 4 LEDs system.
Fig 3.10: Drawing and photo of Tank-B model used with 8 LEDs system.
3.4 Flow-field and Seeding system:
Seeding particles:
The particles selected for this application were glass microspheres (soda-lime glass) in two different size ranges. For the Tank-A model, 100-212 μm clear particles were used, while 90-100 μm silver-coated particles were used for the Tank-B model. The particles were supplied by Cospheric and have a density of 1.36 g/cc.
Seeding mechanism:
The seeding is done using two release mechanisms. For the Tank-A model, a mass of seeding particles is dropped manually into the experiment medium through an 8-inch U.S.A.-standard test sieve made by Fisherbrand, with a mesh size of 355 µm (see Figure 3.11(a)). For the Tank-B model, the seeding was done by ejecting water premixed with seeding particles into the experiment medium, as shown in Figure 3.11(b). Here, a plastic bottle connected to an air tube was used. The bottle is topped by a plastic tube holding the premixed suspension of water and seeds, sealed from the air by a thin membrane. The seeding is obtained by applying a sudden pressure, generated by an air dispenser, to the bottom of the membrane.
3.5 Signal & Air Generators:
The signals sent to the LEDs and cameras were generated using a 3.1 MHz synthesized function generator (Model DS 335), while the timing control and synchronization were handled by a digital delay generator (Model DG645). Both the function and delay generators were manufactured by Stanford Research Systems (SRS). Photos of both generators are displayed in Figure 3.12.
The air was supplied through an air dispenser connected to an air source. The air dispenser is equipped with a "SHOT" switch for checking the shot volume and time. Using a regulator, the air pressure was set to 204 kPa to release the right amount of particles.
CHAPTER 4
Data Processing and results
Prior to starting the Tomo-PIV data reduction, several steps were necessary to convert the raw RGB images from the Nikon cameras into a format that can be processed with the Davis 8.0 software from LaVision [7]. The Nikon NEF images, obtained during both the calibration and the experimental runs, were first converted to uncompressed TIFF image files. At this stage, uploading these images from the camera memory to the LaVision software was done manually and is therefore somewhat tedious; in future applications, this could be automated through a MATLAB program. The image resolution remains the original 5056 × 4032 pixels with 14-bit color depth. The following sections explain the additional processing needed to split the colors and perform the 3-D reconstruction of each color channel separately. The two reconstructions are subsequently combined to allow the direct 3-D cross-correlation that finds the 3-D velocity vectors.
4.1 RGB Image Color Splitting:
The process of color splitting posed a challenge in determining the positions of the red and green particle shadows. As detailed below, analysis of the particle images revealed that a clean separation of the red and green shadows is hard to obtain, especially where they overlap at high intensities.
Figure 4.1: Top panel: Original image using red/green shadows. This is a 2958x1968 pixel subarea of the full 24 Mpx image, where the particles in the vortex ring are visible. Bottom panel: Subsection of 500 x 300 pixels from the center of the above image, which has been brightened and its contrast enhanced. It highlights the very faint green particle images, as compared to the red images. This makes the separation of the two time-images difficult, as explained in the text.
Figure 4.1 shows a typical raw RGB image, while Figure 4.2 shows one of the better enhanced images, which contains the two colors from the LED pulses. It is clear from the image that the green pulse is stronger on the left side and the red on the right. The figure also shows a plot of the vertically averaged intensities for
**Fig 4.2:** Photo of RGB image and plot of the vertically averaged intensities of the colors.
Fig 4.3: Plots of the actual intensities with the particles (top image) and the average intensities of RGB image (bottom plot), where the spikes from the particles have been removed by averaging in the vertical direction.
Fig 4.4: Photos of particles before & after color separation of RGB image: (a) Enhanced RGB image, (b) Red channel, (c) Green channel, (d) Blue channel.
the three color components, obtained with a MATLAB code. Such strong spatial variation in the background color makes it difficult to write general programs to separate the two color shadows.

Similar plots of average intensities are presented in Figure 4.3, where the green color is stronger over the entire image.

Figure 4.4 shows the three color channels (RGB) which constitute the combined color image shown on the left panel. Even though in the color image it seems easy to tell which is which, looking at the red channel we can see shadows from both the red and the green pulses (see also Fig. 4.5 below). The green channel is cleaner. From this it is clear that some tricks must be used to automatically separate the two images, as will be explained below.
Figure 4.5: The details of an image subsection containing the shadows of one particle. When the particle is at location 1 the Red LED flash is illuminated and when it is at 2 the Green LED illuminates. The mid panel shows the three color channels along a horizontal cut through the center of the above image. The bottom curve shows the difference between the two LED colors, i.e. (Green – Red). The intensities are here in 16-bit format.
A MATLAB program was also written to split the color image into its red and green channels. This produced two images representing the two positions of the particles, separated by the time difference between the two LED pulses. An example image after color splitting is shown in Figure 4.4. However, even though the two colors appear well separated in Fig. 4.4(a), an overlap exists in the red component, making it difficult to distinguish the particle positions in this channel. This is shown more clearly in Figure 4.5, where we plot the pixel intensities across these particle images: the curves show the intensities of the three color channels. The first pulse is red and the particle is located at 1, marked in the figure. Then the particle moves to 2 and the green pulse is flashed. The second, green pulse fills in the original shadow from the red light. However, the figure shows that in practice this is not the idealized picture, as the green light has a large red component and therefore leaves a hole in the red channel where the particle sits during the green flash. This very strong cross-talk between the two colors makes the subsequent separation of the particle images from the two different times difficult. We have attempted it in the following way.
The second, green pulse is easier to separate, as it shows a clear lack of signal in the green channel. The first, red pulse can be obtained by looking at the difference in intensities between the red and green colors. In Fig. 4.5(c) we calculate this difference (Green – Red) from Figure 4.5(b). This difference signal is positive and much sharper at the location of the first pulse than at the second pulse. Keep in mind, however, that the specific particle image shown in this figure was selected for clarity; many others are less clearly defined. Using this difference between red and green can work well over limited areas of the images, as is shown in figure 4.6.
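As an illustration, the channel split and the (Green – Red) difference just described can be sketched as follows (in Python with NumPy rather than the MATLAB actually used; the array layout and example values are assumptions):

```python
import numpy as np

def split_and_difference(rgb):
    """Split an RGB image into Red and Green channels and form the
    (Green - Red) difference used to isolate the first (red) pulse.

    rgb: (H, W, 3) unsigned-integer array.
    Returns (red, green, diff), where diff is signed so that negative
    values survive the subtraction.
    """
    red = rgb[..., 0].astype(np.int32)
    green = rgb[..., 1].astype(np.int32)
    diff = green - red   # positive and sharp at the red-pulse shadow
    return red, green, diff

# tiny synthetic example: a red shadow lowers Red, a green shadow lowers Green
img = np.full((4, 4, 3), 200, dtype=np.uint16)
img[1, 1, 0] = 50    # red-pulse shadow: Red channel dips
img[2, 2, 1] = 50    # green-pulse shadow: Green channel dips
_, _, d = split_and_difference(img)
print(d[1, 1], d[2, 2])  # 150 -150
```

The signed difference is positive at the red-pulse shadow and negative at the green-pulse shadow, which is exactly the property exploited above.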
Figure 4.6: The separation of the two particle images, by using the difference between the Red and the Green channels. This image is from a region with very sparse particle density. The dark particle images are much larger than the bright ones. However, keep in mind that this depends somewhat on the shifting of the zero intensity value, relative to the background.
However, the method shown in Figures 4.5 and 4.6 will not work over the entire domain, due to the strongly varying background intensities of the different LED lights. These are uneven because of the large size of the LEDs, which do not fit at the center of focus of the lenses shown in Figure 3.5. The two LED chips facing each of the four cameras use the same hemispheric lens to collimate the light into a large spot on the diffuser, as was shown in Fig. 3.5. The background variability of the two colors is shown along an average line in Figures 4.2 and 4.3. The intensities are functions of both x and y, and over the entire image area the variation is larger than the particle signal. We must therefore first estimate and then subtract the local background intensities of the different color channels. To find this intensity, it is not enough to simply average the image locally, as the particles are ever-present and skew this estimate. We therefore find the average in a two-stage process. First, we use a 61x61 pixel moving average. Following this we do a smaller average over a 25x25 pixel area, but condition the average on the pixel intensity. In other words, if an intensity value is too far from the first average, we discard it, as it is most likely associated with a particle and not the background. In our preliminary MATLAB program this conditioned averaging takes a lot of computational power, so we perform it only at every 5th pixel, filling the gaps with the same value. This is reasonable, as the average background intensity changes slowly and not significantly over tens of pixels. This background subtraction can of course be optimized using matrix calculations and packaged software, but for our proof of concept we are not very concerned about the required computation time, which is incurred on a laptop computer.
If the particles are sparse it might be faster to sort the intensity values over all the pixels in the local pixel area and then simply select the median value.
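The two-stage conditioned averaging described above (with the gap-filling at every 5th pixel) can be sketched as follows. This is a Python approximation of the MATLAB routine; the rejection threshold `tol` is an assumed value:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def background_estimate(chan, first=61, second=25, step=5, tol=30.0):
    """Two-stage background estimate for one color channel.

    1) A first 61x61 moving average.
    2) A 25x25 conditioned average, computed only at every `step` pixels:
       pixels deviating more than `tol` from the first average are
       discarded as likely particle shadows. The gaps between computed
       points are filled with the same value, since the background
       varies slowly. A median over the window would be an alternative
       for sparse particles, as noted in the text.
    """
    chan = chan.astype(np.float64)
    stage1 = uniform_filter(chan, size=first, mode="nearest")
    bg = np.empty_like(chan)
    h, w = chan.shape
    r = second // 2
    for i in range(0, h, step):
        for j in range(0, w, step):
            win = chan[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            ref = stage1[i, j]
            keep = win[np.abs(win - ref) < tol]   # discard particle pixels
            val = keep.mean() if keep.size else ref
            bg[i:i + step, j:j + step] = val      # fill the gap with same value
    return bg
```

On a synthetic flat background with a dark particle, the conditioning rejects the particle pixels, so the estimate stays close to the true background even at the particle location.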
This two-stage process provides the background intensity fields for each color channel, as shown in Figure 4.8 for all 4 cameras. Here we have only shown the areas of the images which contain the vortex ring and where the three-dimensional reconstruction should take place.
Figure 4.7: The probability distribution of the pixel intensity values over the entire area of interest, where the particles in the vortex ring are most visible. For this case the area covers a total of 6.847 megapixels. The green curve indicates the Green channel and the red curve the (Green – Red) channel. The local mean values of the intensities have been subtracted. The black curve corresponds to the red curve flipped about the zero value, to assess the symmetry of the distribution.
Ideally, the effective area of the flow should occupy the entire image, which should be possible with an appropriate optical setup (not reached at this stage of this experiment) that takes three main factors into account. First, the particle size (i.e. how many pixels span each particle image) should be of the order of 10 px, so the Bayer filter on the camera sensor does not distort the particle location in the different colors. Secondly, the depth of focus of the lens should be sufficient to retain good focus over the whole relevant flow depth. This is primarily determined by the aperture of the lens: the smaller the aperture, the larger the depth of focus. The trade-off, however, is the amount of light which reaches the camera sensor.
Figure 4.8: The average color fields for the four different cameras. The average eliminates the individual particle images, trying to approximate the background intensities. Where the particles are particularly dense, there can be areas of slightly darker patches.
In practice this trade-off usually leads to an aperture of either f/16 or f/22. Thirdly, the number of particles visible through the depth of the imaging region should not give an image density of more than 0.05 ppp (particles per pixel).
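The seeding-density criterion is a simple count; the particle number and image size below are purely hypothetical, chosen only to illustrate the check:

```python
def particles_per_pixel(n_particles, width_px, height_px):
    """Image seeding density in particles per pixel (ppp)."""
    return n_particles / (width_px * height_px)

# hypothetical example: 200,000 particle images on a 2500 x 1700 px region
ppp = particles_per_pixel(200_000, 2500, 1700)
print(ppp < 0.05)  # True: within the recommended Tomo-PIV limit
```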
**Characterizing the pixel intensity fields for different colours**
One method we used to estimate the success of the colour separation was to look directly at the image intensity of the different colours. Figure 4.7 shows the resulting probability density functions (PDFs) of the intensities of the Green field and the (Green – Red) field. These pdfs have very different shapes. While both are peaked around the background value, where the intensity is zero, the distribution of the green intensities is very skewed, with few positive
values. The skewness is -3.7 and the flatness an astounding 24. The (Green – Red) intensities, on the other hand, appear slightly more symmetric, with almost as many positive and negative values, a skewness of -3 and a flatness of 28. The dark line is the distribution flipped about the zero value, which reveals differences between the two tails of the pdf, showing how far they are from symmetric. The skewness of the (Green – Red) field is also towards negative values. This is of concern, as we are trying to find the positive values; this opposite sign of the skewness is probably due to the large amount of cross-talk from the green flash. The tail of the actual intensity distribution is also much wider for the green color. The negative intensity values in the tails of the distribution (away from the local background intensity) are exactly what indicates the presence of a particle, which is what we are trying to determine. This further highlights that it should be easier to extract the particle locations indicated by the green shadow than by the red shadow. The pdfs for the images from the other three cameras show similar characteristics. Figure 4.8 shows how uneven the average background red and green fields are for all four cameras.
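The skewness and flatness figures quoted above can be computed as follows; this Python sketch uses synthetic data, not the actual intensity fields, to show the shape statistics of a shadow-dominated distribution:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def shape_stats(channel):
    """Skewness and flatness (non-excess kurtosis) of the mean-subtracted
    pixel intensities, as used to compare the Green and (Green - Red)
    fields above."""
    x = channel.astype(np.float64).ravel()
    x -= x.mean()
    return skew(x), kurtosis(x, fisher=False)

# strongly negatively skewed synthetic field, mimicking dark shadow pixels
rng = np.random.default_rng(0)
field = rng.normal(0.0, 1.0, 100_000)
field[:2_000] -= 15.0          # a small fraction of deep (dark) outliers
s, f = shape_stats(field)
print(s < 0 and f > 3)  # True: negative skew, flatness well above Gaussian (3)
```

A Gaussian background alone would give skewness 0 and flatness 3; the particle shadows push both far away from these values, which is what makes the tails useful for detection.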
**Image Pre-Processing:**
In the pre-processing step, image enhancements are applied using filters, background subtraction and mask overlays. The functions used included inverting to compute
Fig 4.9: A particle image (green component) after subtracting the background variation.
the negative of the particle shadow images, smoothing to remove noise, and masking to exclude areas with low illumination. Examples of these enhancements are shown in figures 4.9 and 4.10.
Fig 4.10: The inverted green channel of a particle image.
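A compact Python sketch of these enhancement steps (background subtraction, inversion, smoothing, and masking); the threshold and smoothing width are assumed parameters, not the ones actually used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(shadow, background, low_light_thresh=20.0, sigma=1.0):
    """Pre-processing of one color channel before reconstruction:
    subtract the local background and invert, so particle shadows
    become bright peaks; smooth away pixel noise; mask out poorly
    illuminated regions; clip to keep only the particle signal."""
    img = background - shadow.astype(np.float64)   # subtract & invert in one step
    img = gaussian_filter(img, sigma)              # light smoothing
    img[background < low_light_thresh] = 0.0       # mask low-illumination areas
    np.clip(img, 0.0, None, out=img)               # keep positive particle signal
    return img
```

Applied to a synthetic frame, a shadow dimmer than the background becomes the brightest feature in the output, while dark corners of the illumination are zeroed out.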
After merging the resulting data sets, the timing information was added and camera numbers were assigned for all frames. With these steps done, the images were ready for 3D volume reconstruction. The results of each of these steps are presented below.
Figures 4.11-4.14 show the resulting separation of the two fields, Green and (Green-Red). In Figure 4.11 we show the separated particles for the Green field over the entire vortex ring, with Figure 4.12 focusing on the left side of the ring, as viewed in each camera. The distribution of particles looks similar in all four views, indicating that the vortex ring is fairly axisymmetric. Figures 4.13 and 4.14 show similar images for the Green-Red shadows. Clearly, this color separation is not perfect, and the particle images are much more pronounced in the Green channel. The particles in the Green-Red field are also more “blotchy”, with noisy outer sections, which will affect the accuracy of the velocity fields. Figure 4.15 makes a direct comparison between the particle images from the two times, demonstrating that the Green channel has much sharper particle images.
Figure 4.16 shows direct correlation performed on the images, before they are used to reconstruct the 3-D particle locations. This is not expected to give reliable velocity vectors.
Figure 4.11: The particle images using the shadow in the green color channel, from all four cameras. The average intensity has been subtracted first, and the intensity magnitudes of the particle images have been adjusted to bring the brightest images close to the maximum 8-bit intensity of about 250. Finally, the shadows have been inverted to make the background dark and the particles bright. The width of each panel is about 2500 px. The close-up marked by the red outline is shown in the following figure.
Figure 4.12: Close-up particle images from the previous figure. The width of each panel is about 700 px.
Figure 4.13: The Green-Red fields for all 4 cameras, which correspond to the second time-flash. The following Figure shows more close-up images.
Figure 4.14: Close-up sections of the images in Figure 4.13.
Figure 4.15: Direct comparison of the particle images in the same area from the Green (left panel) and Green-Red (right panel) at the two different times.
Figure 4.16: Direct PIV calculated from the Tomo-images. Only the velocities at the outer edge of the vortex ring are close enough to two-dimensional to give good velocity results. The magnitudes of the velocity vector, indicated by the length of the arrows, look reasonable, with a strong decay away from the vortex core.
The vectors near the center of the vortex are obviously erroneous. This is easily understood due to the out-of-plane motion of particles due to the overall axisymmetric structure. This further highlights why tomo-PIV is needed for this type of study.
In section 4.2 we will show that using Red and Blue flashes works much better to separate the two time-flashes. However, this was demonstrated using only one of the Nikon cameras, and the availability and intensity of the LEDs in the lab did not allow us to pursue this color combination in this thesis; it remains for future work. We therefore use the Red/Green combination in this thesis, knowing that future improvements are possible with Red/Blue lighting.
**Volume self-calibration:**
As explained in section 3.2.2, it is necessary to correct the original calibration by performing the so-called self-calibration, which corrects the calibration coefficients. By averaging the remaining inaccuracies, 3D disparity maps are generated and the calibration function is corrected accordingly. In our proof-of-concept experiments we skipped this step. However, by showing below that we are able to get reasonable velocity fields without self-calibration, one can expect an improved result if self-calibration is included. Indeed, being able to produce any reasonable velocity field without self-calibration indicates that our initial calibration is of quite high quality.
Figure 4.17: The intensity distribution of the particles in the reconstructed volume, showing clearly where the particles are concentrated. In our measurements we confined the particles to seeding the vortex ring fluid inside the piston.
**Volume Reconstruction:**
After applying the self-calibration method (skipped herein), the images are ready for the volume reconstruction, which calculates the volumetric particle distribution in the 3D voxel space. The multiplicative processing operation “Fast MART” has been used to reconstruct the 3D particle distribution, with 5 iterations. Figures 4.17 and 4.18 show the 3D distribution of particle intensities and the z-profile of the reconstructed volume. Figure 4.18 shows that the average brightness of the reconstructed
Fig 4.18: Plots showing the intensity profile in the z-direction through the reconstructed volume.
particles is quite uniform across the reconstructed volume. This is expected, as we are looking at shadows which are uniformly dark across the volume and have subsequently been inverted. This contrasts favorably with conventional laser-illuminated volume slices, where the intensity of a particle depends greatly on where it lies within the laser sheet.
Figure 4.19 shows typical particle reconstructions, where each plane corresponds to a thickness of one voxel. The reconstruction we performed is 1035 planes deep in the z-direction. Each particle is observed in about 10 adjacent planes.
Figure 4.19: One reconstruction plane out of a total of 1035 adjacent planes. Keep in mind that the width into the board of this plane is only 1 voxel, whereas the total horizontal width of the upper image is about 6000 voxels.
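For illustration, the basic multiplicative update underlying MART can be sketched as follows. This is a textbook version on a toy system, not the accelerated “Fast MART” implementation in the DaVis software:

```python
import numpy as np

def mart(weights, pixels, n_iter=5, mu=1.0):
    """Minimal MART (Multiplicative Algebraic Reconstruction Technique).

    weights: (n_pixels, n_voxels) geometric weighting matrix W_ij,
             describing how much each voxel contributes to each pixel.
    pixels:  (n_pixels,) recorded pixel intensities I_i.
    Each voxel intensity E_j is corrected multiplicatively so that the
    re-projections of the voxel field approach the recorded pixels.
    """
    E = np.ones(weights.shape[1])            # uniform initial voxel guess
    for _ in range(n_iter):
        for w, I in zip(weights, pixels):
            proj = w @ E                     # re-projection of current field
            if proj > 0:
                E *= (I / proj) ** (mu * w)  # multiplicative correction
    return E

# toy 2-pixel / 2-voxel system whose exact solution is E = [2, 1]
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])
I_rec = W @ np.array([2.0, 1.0])
E_rec = mart(W, I_rec, n_iter=20)
print(np.round(E_rec, 2))  # [2. 1.]
```

Because the update is multiplicative, voxels starting at zero stay zero; this sparsity-preserving property is why MART is the standard reconstruction choice in Tomo-PIV.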
**3D Cross-Correlation:**
At this step the 3D velocity vector field is calculated from the reconstructed volume using the processing operation “Direct Correlation”. An initial pass with a 256 x 256 px interrogation region and 1:1 elliptical weighting is used, followed by two 128 x 128 passes with 75% interrogation-region overlap. This

**Figure 4.20**: Velocity vectors in a plane near the centerline of the vortex ring. This is one of 27 reconstructed Tomo-PIV planes. The absolute value of the vorticity out of the plane is plotted by the color field. The right side of the vortex has a clear rotation and higher vorticity near the center of the vortex. The vorticity is broken up around the core on the left side.
results in 27 adjacent velocity planes with about 90 x 80 velocity vectors in each plane, giving a total of 90 x 80 x 27 = 194,400 fully 3-D velocity vectors.
Figure 4.20 shows a typical plane of velocities. The region on the right side outside the vortex, is so devoid of particles due to low illumination intensity that no reliable vectors were found there. It is instructive to compare the right side of the vortex in Figure 4.20 to that from the projected image in Figure 4.16, where no information can be extracted in that region.
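A single interrogation step of such a direct correlation can be sketched with an FFT-based 3D cross-correlation; the toy volumes below are assumptions for illustration:

```python
import numpy as np

def cross_correlate_3d(vol_a, vol_b):
    """FFT-based circular 3D cross-correlation of two interrogation
    volumes. Returns the integer voxel displacement (dz, dy, dx) of the
    correlation peak, i.e. the shift of vol_b relative to vol_a."""
    A = np.fft.fftn(vol_a - vol_a.mean())
    B = np.fft.fftn(vol_b - vol_b.mean())
    corr = np.fft.ifftn(A.conj() * B).real        # correlation theorem
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT bin indices to signed displacements
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# toy volume: one bright particle displaced by (1, 2, 3) voxels
a = np.zeros((16, 16, 16)); a[5, 5, 5] = 1.0
b = np.roll(a, (1, 2, 3), axis=(0, 1, 2))
print(cross_correlate_3d(a, b))  # (1, 2, 3)
```

In practice the peak is refined to sub-voxel accuracy (e.g. by a three-point Gaussian fit), and the displacement divided by the pulse separation gives the local velocity vector.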
Figure 4.22: Velocity information in the center-plane cut through the vortex. The color indicates the magnitude of the horizontal component of the velocity vector, showing the outflow on top of the vortex, whereas on the bottom the flow is towards the centerline.
Figure 4.23: The color indicates the magnitude of the vertical component of the velocity vector. It shows the up-flow near the center and on top of the vortex, whereas on the outer edges of the vortex the flow is downwards.
Figures 4.20 - 4.24 show some more results from this velocity volume.
Figure 4.24: The out of plane velocity in a plane near the edge of the vortex. The red color shows the top of the vortex coming towards us, with the bottom going into the board. This is consistent with predominantly out of plane motions near the edge of the vortex.
The three-dimensional nature of the velocity field is clearly demonstrated by the out-of-plane component shown in Figure 4.24. Here a plane cuts through the edge of the vortex, so the top flow is coming towards us and the bottom is going into the board.
**4.2 Using Red and Blue LEDs:**
Near the end of the work on this thesis, the difficulty of separating the Red and Green shadows became very apparent, as there was a lot of cross-talk between the colors. To address this we attempted some experiments with Red and Blue flashes, which are better separated in wavelength space and should therefore suffer less cross-talk between the two channels. This was only possible using one camera, due to time constraints and LED availability. It worked considerably better and should be pursued in future work. Figure 4.25 shows a typical example. When the green LEDs were replaced by blue ones, the splitting was enhanced and the identification of particle positions proved more feasible. Figure 4.26 illustrates these enhancements, with the main advantages as follows:
- Less overlap in spectral space
- Much clearer separation of color-layers
Figure 4.25: Subsection from a typical Red-Blue LED image. The width of this panel is 1544 px.
Figure 4.26: Another example of the colour separation using RED-BLUE LEDs. The large “particles” are bubbles which are attracted to the cores of the vortex ring. The Green channel shown in the middle is almost entirely dark.
Figure 4.27: The pdf of the RED and BLUE pixel intensities, as well as the intensities of the difference RED-BLUE (black curve). The arrows point at two prominent peaks which indicate the pixels of most of the particles, which would make them significantly easier to separate than the RED-GREEN images used earlier in this thesis.
Figure 4.27 shows the pdfs of the intensities of the Red, Blue and their difference, calculated from the image in Figure 4.25. There are clear peaks on both sides of the zero value, representing the particles. There is an additional peak at positive difference, which is due to uneven background intensities, which have not been subtracted in this case.
Figure 4.28: Subsection of the previous figure, showing the individual color channels. The width of each panel is 784 px.
Figures 4.26 and 4.28 show another example of the Red/Blue LED illumination. In this case the vortex ring has trapped a number of medium-size bubbles along its core. These bubbles are each close to spherical, as they are of the order of a millimeter, which is smaller than the capillary length in water of 2.7 mm. The capillary length characterizes the size at which buoyancy and surface tension are balanced.
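The quoted capillary length follows directly from the balance of surface tension and buoyancy, l_c = sqrt(sigma / (rho g)):

```python
import math

# Capillary length of an air-water interface, l_c = sqrt(sigma / (rho * g))
sigma = 0.072   # surface tension of water [N/m]
rho = 998.0     # density of water [kg/m^3]
g = 9.81        # gravitational acceleration [m/s^2]
l_c = math.sqrt(sigma / (rho * g))
print(round(l_c * 1000, 1))  # 2.7 (mm), matching the value quoted above
```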
The colors of the bubbles are quite distinct, showing their motion between the flashes. It is nice to see the fully dark region where the two images of the bubble overlap. Figure 4.29 shows another realization, where the bubbles are larger and aligned along the vortex core.
Figure 4.29: Image using Red/Blue LEDs. Shows an image subsection with a vortex ring with numerous large bubbles around the core of the ring. The bubbles are attracted to the core, by the low Bernoulli pressure there.
These bubbles are moving up with the translation of the vortex ring, while the outermost bubbles, at the left of the image, are seen to move downwards due to the vortical motions. This is of course not what such an image should be used for, as it represents a projection of many particles at different depths into the board, but in this case we only had an image from this one direction.
Figure 4.30 shows the intensity cut through two particle shadows, demonstrating a clear separation of the two pulses. This should be compared to the cut in Figure 4.5, for the red/green LEDs, now showing a much better result, which should be used in further experiments with this Tomo-PIV method.
Fig 4.30: Photos and intensity plots from the Red/Blue LED experiment.
CHAPTER 5
Discussion and Conclusions
The work presented in this thesis provides an evaluation of a low-cost colored-shadow imaging method for conducting tomographic PIV measurements. This new method is based on using LEDs of two different colors for the illumination and four commercial DSLR CMOS cameras for the imaging. The two different pulse times were encoded in the images using two different colors (red and green) of LED pulses. Two different experimental setups were used to try to reduce optical distortions. As anticipated, the highest-quality images were obtained with 8 LEDs (Tank-B model) illuminating the volume, i.e. 4 pairs of LEDs, one of each color, where the pulses from each pair were directed toward a diffuser screen opposite each camera. Different particle-seeding mechanisms were implemented, while a large pulse-driven vortex ring formed a flow pattern that allowed for the successful tomographic PIV measurements presented herein.
The processing was carried out by transferring the images manually to the DaVis commercial software from LaVision (Germany). The color images were converted and split into red and green images using localized background subtraction and the differences between the Red and Green channels. Following this we could successfully reconstruct two separate particle fields, corresponding to the
two different illumination times, thereby allowing for cross-correlations to get the three-dimensional velocity field.
In conclusion, we were able to perform a proof-of-concept realization using red/green LED illumination. However, this demonstrated excessive cross-talk between the red and green channels on the sensor, due to the overlapping sensitivity of the red pixels to green wavelengths. This result compelled us to try red/blue LED illumination, where the color channels are much better separated, and which gave much better results for a single camera. This suggests that using red and blue LED shadows could give higher-quality Tomo-PIV results. It shows great promise, but will have to wait for future work.
Generic Registry-Registrar Protocol Requirements
Status of this Memo
This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2002). All Rights Reserved.
Abstract
This document describes high-level functional and interface requirements for a client-server protocol for the registration and management of Internet domain names in shared registries. Specific technical requirements detailed for protocol design are not presented here. Instead, this document focuses on the basic functions and interfaces required of a protocol to support multiple registry and registrar operational models.
Conventions Used In This Document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
Table of Contents
1. Introduction
   1.1 Definitions, Acronyms, and Abbreviations
2. General Description
   2.1 System Perspective
   2.2 System Functions
   2.3 User Characteristics
   2.4 Assumptions
3. Functional Requirements
   3.1 Session Management
   3.2 Identification and Authentication
   3.3 Transaction Identification
   3.4 Object Management
   3.5 Domain Status Indicators
1. Introduction
The advent of shared domain name registration systems illustrates the utility of a common, generic protocol for registry-registrar interaction. A standard generic protocol will allow registrars to communicate with multiple registries through a common interface, reducing operational complexity. This document describes high level functional and interface requirements for a generic provisioning protocol suitable for registry-registrar operations. Detailed technical requirements are not addressed in this document.
1.1 Definitions, Acronyms, and Abbreviations
ccTLD: Country Code Top Level Domain. "us" is an example of a ccTLD.
DNS: Domain Name System
gTLD: Generic Top Level Domain. "com" is an example of a gTLD.
IANA: Internet Assigned Numbers Authority
IETF: Internet Engineering Task Force
IP Address: An IPv4 address, an IPv6 address, or both.
IPv4: Internet Protocol version 4
IPv6: Internet Protocol version 6
RRP: Registry-Registrar Protocol
TLD: Top Level Domain. A generic term used to describe both gTLDs and ccTLDs that exist under the top-level root of the domain name hierarchy.
Exclusive Registration System: A domain name registration system in which registry services are limited to a single registrar. Exclusive Registration Systems are either loosely coupled (in which case the separation between registry and registrar systems is readily evident), or tightly coupled (in which case the separation between registry and registrar systems is obscure).
Name Space: The range of values that can be assigned within a particular node of the domain name hierarchy.
Object: A generic term used to describe entities that are created, updated, deleted, and otherwise managed by a generic registry-registrar protocol.
Registrant: An entity that registers domain names in a registry through the services provided by a registrar. Registrants include individuals, organizations, and corporations.
Registrar: An entity that provides front-end domain name registration services to registrants, providing a public interface to registry services.
Registry: An entity that provides back-end domain name registration services to registrars, managing a central repository of information associated with domain name delegations. A registry is typically responsible for publication and distribution of zone files used by the Domain Name System.
Shared Registration System: A domain name registration system in which registry services are shared among multiple independent registrars. Shared Registration Systems require a loose coupling between registrars and a registry.
Thick Registry: A registry in which all of the information associated with registered entities, including both technical information (information needed to produce zone files) and social information (information needed to implement operational, business, or legal practices), is stored within the registry repository.
Thin Registry: A registry in which all elements of the social information associated with registered entities are distributed between a shared registry and the registrars served by the registry.
Zone: The complete set of information for a particular "pruned" subtree of the domain space. The zone concept is described fully in [RFC1035].
2. General Description
A basic understanding of domain name registration systems provides focus for the enumeration of functional and interface requirements of a protocol to serve those systems. This section provides a high-level description of domain name registration systems to provide context for the requirements identified later in this document.
2.1 System Perspective
A domain name registration system consists of a protocol and associated software and hardware that permits registrars to provide Internet domain name registration services within the name spaces administered by a registry. A registration system can be shared among multiple competing registrars, or it can be served by a single registrar that is either tightly or loosely coupled with back-end registry services. The system providing registration services for the .com, .net, and .org gTLDs is an example of a shared registration system serving multiple competing registrars. The systems providing registration services for some ccTLDs and the .gov and .mil gTLDs are examples of registration systems served by a single registrar.
2.2 System Functions
Registrars access a registry through a protocol to register objects and perform object management functions. Required functions include session management; object creation, update, renewal, and deletion; object query; and object transfer.
A registry generates DNS zone files for the name spaces it serves. Zone files are created and distributed to a series of name servers that provide the foundation for the domain name system.
2.3 User Characteristics
Protocol users fall into two broad categories: entities that use protocol client implementations and entities that use protocol server implementations, though an entity can provide both client and server services if it provides intermediate services. A protocol provides a loose coupling between these communicating entities.
2.4 Assumptions
There is one and only one registry that is authoritative for a given name space and zone.
A registry can be authoritative for more than one name space and zone. Some registry operations can be billable. The impact of a billable operation can be mitigated through the specification of non-billable operations that allow a registrar to make informed decisions before executing billable operations.
A registry can choose to implement a subset of the features provided by a generic registry-registrar protocol. A thin registry, for example, might not provide services to register social information. Specification of minimal implementation compliance requirements is thus an exercise left for a formal protocol definition document that addresses the functional requirements specified here.
A protocol that meets the requirements described here can be called something other than "Generic Registry Registrar Protocol".
The requirements described in this document are not intended to limit the set of objects that can be managed by a generic registry-registrar protocol.
3. Functional Requirements
This section describes functional requirements for a registry-registrar protocol. Technical requirements that describe how these requirements are to be met are out of scope for this document.
3.1 Session Management
[1] The protocol MUST provide services to explicitly establish a client session with a registry server.
[2] In a connection-oriented environment, a server MUST respond to connection attempts with information that identifies the server and the default server protocol version.
[3] The protocol MUST provide services that allow a client to request use of a specific protocol version as part of negotiating a session.
[4] The protocol MUST provide services that allow a server to decline use of a specific protocol version as part of negotiating a session.
[5] A session MUST NOT be established if the client and server are unable to reach agreement on the protocol version to be used for the requested session.
[6] The protocol MUST provide services to explicitly end an established session.
[7] The protocol MUST provide services that provide transactional atomicity, consistency, isolation, and durability in the event of session management failures.
[8] The protocol MUST provide services to confirm that a transaction has been completed if a session is aborted prematurely.
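As a non-normative illustration only, the version negotiation implied by requirements [2]-[5] might look as follows; the function name and the highest-common-version policy are assumptions, not part of this specification:

```python
def negotiate_version(client_versions, server_versions):
    """Sketch of session version negotiation: the server advertises its
    supported versions, the client offers its own, and the session is
    refused (None) when no mutually supported version exists, per [5].
    Real implementations would parse and compare structured version
    numbers rather than plain strings."""
    common = set(client_versions) & set(server_versions)
    if not common:
        return None            # [5]: no session without agreement
    return max(common)         # pick the highest mutually supported version

print(negotiate_version(["1.0", "2.0"], ["2.0", "3.0"]))  # 2.0
print(negotiate_version(["1.0"], ["3.0"]))                # None
```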
3.2 Identification and Authentication
[1] The protocol or another layered protocol MUST provide services to identify registrar clients and registry servers before granting access to other protocol services.
[2] The protocol or another layered protocol MUST provide services to authenticate registrar clients and registry servers before granting access to other protocol services.
[3] The protocol or another layered protocol MUST provide services to negotiate an authentication mechanism acceptable to both client and server.
3.3 Transaction Identification
[1] Registry operations that create, modify, or delete objects MUST be associated with a registry-unique identifier. The protocol MUST allow each transaction to be identified in a permanent and globally unique manner to facilitate temporal ordering and state management services.
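As a non-normative illustration, one way a registry might satisfy this requirement is to combine a registry label with a monotonic sequence number; the identifier format shown is an assumption, not part of this specification:

```python
import itertools
import time

class TransactionLog:
    """Sketch of registry-unique, temporally ordered transaction IDs:
    a registry label plus a monotonic counter keeps each identifier
    permanent and globally unique, while a timestamp recorded alongside
    supports temporal ordering and state management."""
    def __init__(self, registry_id):
        self.registry_id = registry_id
        self._counter = itertools.count(1)

    def new_id(self):
        seq = next(self._counter)
        return f"{self.registry_id}-{seq:012d}", time.time()

log = TransactionLog("EXAMPLE-REG")
tid1, _ = log.new_id()
tid2, _ = log.new_id()
print(tid1, tid2)  # EXAMPLE-REG-000000000001 EXAMPLE-REG-000000000002
```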
3.4 Object Management
This section describes requirements for object management, including identification, registration, association, update, transfer, renewal, deletion, and query.
3.4.1 Object Identification
Some objects, such as name servers and contacts, have utility in multiple registries. However, maintaining disjoint copies of object information in multiple registries can lead to inconsistencies that have adverse consequences for the Internet. For example, changing a name server name in one registry, but not in a second registry that refers to the server for domain name delegation, can produce unexpected DNS query results.
[1] The protocol MUST provide services to associate an object identifier with every object.
[3] An object’s identifier MUST NOT change during the lifetime of the object in a particular repository, even if administrative control of the object changes over time.
[4] An object identifier MUST contain information that unambiguously identifies the object.
[5] Object identifier format specified by the protocol SHOULD be easily parsed and understood by humans.
[6] An object’s identifier MUST be generated and stored when an object is created.
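Requirements [4] and [5] suggest an identifier with an unambiguous, human-readable shape. The `<local>-<repository>` format sketched below is purely hypothetical — it merely shows that a fixed grammar makes parsing trivial for both machines and people.

```python
import re

# Hypothetical identifier shape: an opaque local part, a hyphen, and a
# short repository tag, e.g. "EXAMPLE1-REP".  This grammar illustrates the
# "easily parsed and understood by humans" requirement; it is not a standard.
OBJECT_ID = re.compile(r"^(?P<local>[A-Z0-9]{1,80})-(?P<repository>[A-Z0-9]{1,8})$")

def parse_object_id(identifier: str) -> dict:
    """Split an identifier into its named parts, rejecting malformed input."""
    match = OBJECT_ID.fullmatch(identifier)
    if match is None:
        raise ValueError(f"malformed object identifier: {identifier!r}")
    return match.groupdict()

parse_object_id("EXAMPLE1-REP")
```

Since neither part may contain a hyphen, every well-formed identifier splits in exactly one way, satisfying the unambiguity requirement in [4].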
3.4.2 Object Registration
[1] The protocol MUST provide services to register Internet domain names.
[2] The protocol MUST permit a starting and ending time for a domain name registration to be negotiated, thereby allowing a registry to implement policies allowing a range of registration validity periods (the start and end points in time during which one normally assumes that an object will be active), and enabling registrars to select a period for each registration they submit from within the valid range based on out-of-band negotiation between the registrar and the registrant. Registries SHOULD be allowed to accept indefinitely valid registrations if the policy that they are implementing permits, and to specify a default validity period if one is not selected by a registrar. Registries MUST be allowed to specify minimal validity periods consistent with prevailing or preferred practices for fee-for-service recovery. The protocol MUST provide features to ensure that both registry and registrar have a mutual understanding of the validity period at the conclusion of a successful registration event.
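The negotiation in [2] can be sketched as a simple policy check that yields a single expiration both parties agree on. The `MIN_PERIOD`, `MAX_PERIOD`, and `DEFAULT_PERIOD` values are illustrative assumptions; actual limits are a matter of registry policy.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative policy knobs; real values are set by each registry's policy.
MIN_PERIOD = timedelta(days=365)       # shortest registration accepted
MAX_PERIOD = timedelta(days=365 * 10)  # longest accepted
DEFAULT_PERIOD = timedelta(days=365)   # used when the registrar selects none

def negotiate_expiry(start: datetime, requested: Optional[timedelta]) -> datetime:
    """Return the expiration the registry commits to, so that registry and
    registrar share the same understanding of the validity period."""
    period = requested if requested is not None else DEFAULT_PERIOD
    if not MIN_PERIOD <= period <= MAX_PERIOD:
        raise ValueError("requested validity period outside registry policy")
    return start + period

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
expiry = negotiate_expiry(start, timedelta(days=730))
```

Returning the computed expiration to the registrar at the conclusion of the registration event is what gives both sides the mutual understanding the requirement calls for.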
[3] The protocol MUST provide services to register name servers. Name server registration MUST NOT be limited to a specific period of time. Name servers MUST be registered with a valid IPv4 or IPv6 address when a "glue record" is required for delegation. A name server MAY be registered with multiple IP addresses. Multiple name servers using distinct server names MAY share an IP address.
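The "glue record" condition in [3] amounts to testing whether a name server's name lies inside the zone being delegated to it. A minimal sketch, assuming plain case-insensitive comparison of dot-separated labels:

```python
def glue_required(server_name: str, zone: str) -> bool:
    """A glue (address) record is needed when the server's name falls inside
    the zone whose delegation refers to it; otherwise resolvers can obtain
    the address through an ordinary lookup outside the zone."""
    server_labels = server_name.lower().rstrip(".").split(".")
    zone_labels = zone.lower().rstrip(".").split(".")
    # Subordinate means: same trailing labels as the zone, plus at least one more.
    return (server_labels[-len(zone_labels):] == zone_labels
            and len(server_labels) > len(zone_labels))

glue_required("ns1.example.tld", "example.tld")   # in-bailiwick: glue needed
glue_required("ns1.provider.net", "example.tld")  # out-of-zone: no glue needed
```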
[4] The protocol MUST provide services to manage delegation of zone authority. Names of name servers MUST NOT be required to be tied to the name of the zone(s) for which the server is authoritative.
[5] The protocol MUST provide services to register social information describing human and organizational entities. Registration of social information MUST NOT be limited to a specific period of time. Social information MAY include a name (individual name, organization name, or both), address (including street address, city, state or province (if applicable), postal code, and country), voice telephone number, email address, and facsimile telephone number.
[6] Protocol services to register an object MUST be available to all authorized registrars.
3.4.3 Object Association
[1] The protocol MUST provide services to associate name servers with domain names to delegate authority for zones. A domain name MAY have multiple authoritative name servers. Name servers MAY be authoritative for multiple zones.
[2] The protocol MUST provide services to associate IP addresses with name servers. A name server MAY have multiple IP addresses. An IP address MAY be associated with multiple name server registrations.
[3] The protocol MUST provide services to associate social information with other objects. Social information associations MUST be identified by type. "Registrant" is an example social information type that might be associated with an object such as a domain name.
[4] The protocol MUST provide services to associate object management capabilities on a per-registrar basis.
[5] Some managed objects represent shared resources that might be referenced by multiple registrars. The protocol MUST provide services that allow a registrar to associate an existing shared resource object with other registered objects sponsored by a second registrar. For example, authority for the example.tld zone (example.tld domain object managed by registrar X) and authority for the test.tld zone (test.tld domain object managed by registrar Y) might be delegated to server ns1.example.tld (managed by registrar X). Registrar X maintains administrative control over domain object example.tld and server object ns1.example.tld, and registrar Y maintains administrative control over domain object test.tld. Registrar Y does not have administrative control over server object ns1.example.tld.
3.4.4 Object Update
[1] The protocol MUST provide services to update information associated with registered Internet domain names.
[2] The protocol MUST provide services to update information associated with registered name servers.
[3] The protocol MUST provide services to update social information associated with registered human and organizational entities.
[4] The protocol MUST provide services to limit requests to update a registered object to the registrar that currently sponsors the registered object.
[5] The protocol MUST provide services to explicitly reject unauthorized attempts to update a registered object.
3.4.5 Object Transfer
[1] The protocol MUST provide services to transfer domain names among authorized registrars. Name servers registered in a domain being transferred MUST be transferred along with the domain itself. For example, name servers "ns1.example.tld" and "ns2.example.tld" MUST be implicitly transferred when domain "example.tld" is transferred.
[2] The protocol MUST provide services to describe all objects, including associated objects, that are transferred as a result of an object transfer.
[3] The protocol MUST provide services to transfer social information objects among authorized registrars.
[4] Protocol transfer requests MUST be initiated by the registrar who wishes to become the new administrator of an object.
[5] The protocol MUST provide services to confirm registrar authorization to transfer an object.
[6] The protocol MUST provide services that allow the requesting registrar to cancel a requested object transfer before the request has been approved or rejected by the original sponsoring registrar. Requests to cancel the transfer of registered objects MUST be limited to the registrar that requested transfer of the registered object. Unauthorized attempts to cancel the transfer of a registered object MUST be explicitly rejected.
[7] The protocol MUST provide services that allow the original sponsoring registrar to approve or reject a requested object transfer. Requests to approve or reject the transfer of registered objects MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to approve or reject the transfer of a registered object MUST be explicitly rejected.
[8] The protocol MUST provide services that allow both the original sponsoring registrar and the potential new registrar to monitor the status of both pending and completed transfer requests.
[9] Transfer of an object MAY extend the object’s registration period. If an object’s registration period will be extended as the result of a transfer, the new expiration date and time MUST be returned after successful completion of a transfer request.
[10] Requests to initiate the transfer of a registered object MUST be available to all authorized registrars.
[11] Registrars might become non-functional and unable to respond to transfer requests. It might be necessary for one registrar to assume management responsibility for the objects associated with another registrar in the event of registrar failure. The protocol MUST NOT restrict the ability to transfer objects in the event of registrar failure.
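The transfer workflow described in [4] through [8] — gaining registrar initiates, only the requester may cancel, only the current sponsor may approve or reject — can be sketched as a small state machine. The status names and registrar identifiers below are illustrative assumptions, not protocol values.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    obj: str
    sponsor: str      # current sponsoring registrar
    requester: str    # registrar asking to become the new sponsor
    status: str = "pending"

    def cancel(self, registrar: str):
        # Only the requesting registrar may cancel, and only while pending.
        self._require(registrar == self.requester and self.status == "pending")
        self.status = "cancelled"

    def approve(self, registrar: str):
        # Only the current sponsor may approve a pending request.
        self._require(registrar == self.sponsor and self.status == "pending")
        self.status = "approved"
        self.sponsor = self.requester   # administrative control changes hands

    def reject(self, registrar: str):
        self._require(registrar == self.sponsor and self.status == "pending")
        self.status = "rejected"

    def _require(self, ok: bool):
        if not ok:
            raise PermissionError("unauthorized or untimely transfer action")

req = TransferRequest(obj="example.tld", sponsor="registrarX", requester="registrarY")
req.approve("registrarX")
```

Keeping the `status` field readable by both registrars mirrors the monitoring requirement in [8].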
3.4.6 Object Renewal/Extension
[1] The protocol MUST provide services to renew or extend the validity period of registered domain names. If applicable, the new expiration date and time MUST be returned after successful completion of a request to renew or extend the validity period.
[2] Requests to renew or extend the validity period of a registered object MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to renew or extend the validity period of a registered object MUST be explicitly rejected.
3.4.7 Object Deletion
[1] The protocol MUST provide services to remove a domain name from the registry.
[2] The protocol MUST provide services to remove a name server from the registry.
[3] The protocol MUST provide services to remove a social information object from the registry.
[4] Requests to remove a registered object MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to remove a registered object MUST be explicitly rejected.
3.4.8 Object Existence Query
This section describes requirements for a lightweight query mechanism whose sole purpose is to determine if an object exists in a registry.
[1] The protocol MUST provide services to determine if a domain name exists in the registry. Domain names MUST be searchable by fully qualified name.
[2] The protocol MUST provide services to determine if a name server exists in the registry. Name servers MUST be searchable by fully qualified name.
[3] The protocol MUST provide services to determine if a social information object exists in the registry. Social information MUST be searchable by a registry-unique identifier.
[4] A query to determine if an object exists in the registry MUST return only a positive or negative response so that server software that responds to this query can be optimized for speed.
[5] Requests to determine the existence of a registered object MUST be available to all authorized registrars.
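In keeping with [4]'s demand that the response carry nothing but a positive or negative answer, the existence check can be sketched as a bare boolean lookup. The in-memory set below stands in for whatever fast index a real registry would maintain.

```python
# Illustrative store of registered names; a production registry would
# consult an index optimized for this single-purpose query.
REGISTERED_DOMAINS = {"example.tld", "test.tld"}

def domain_exists(fqdn: str) -> bool:
    """Answer only True or False for a fully qualified name, returning no
    other object data, so the server path can be optimized for speed."""
    return fqdn.lower().rstrip(".") in REGISTERED_DOMAINS

domain_exists("Example.TLD.")  # case and trailing dot are normalized away
domain_exists("missing.tld")
```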
3.4.9 Object Information Query
This section describes requirements for a query mechanism whose purpose is to provide detailed information describing objects that exist in a registry.
[1] The protocol MUST provide services to retrieve information describing a domain name from the registry. Returned information MUST include the identifier of the current sponsoring registrar, the identifier of the registrar that originally registered the domain, the creation date and time, the expiration date and time (if any), the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), the current status of the domain, authorization information, identifiers describing social information associated with the domain, and the subordinate name servers registered in the domain. Authorization information MUST only be returned to the current sponsoring registrar.
[2] The protocol MUST provide services to retrieve information describing a name server from the registry. Returned information MUST include the identifier of the current sponsoring registrar, the identifier of the registrar that originally registered the name server, the creation date and time, the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), and the IP addresses currently associated with the name server.
[3] The protocol MUST provide services to retrieve social information from the registry. Returned information MUST include identification attributes (which MAY include name, address, telephone numbers, and email address), the identifier of the registrar that originally registered the information, the creation date and time, the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), and authorization information. Authorization information MUST only be returned to the current sponsoring registrar.
[4] The protocol MUST provide services to identify all associated object references, such as name servers associated with domains (including delegations and hierarchical relationships) and contacts associated with domains. This information MUST be visible if the object associations have an impact on the success or failure of protocol operations.
[5] Requests to retrieve information describing a registered object MAY be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to retrieve information describing a registered object MUST be explicitly rejected.
3.5 Domain Status Indicators
[1] The protocol MUST provide status indicators that identify the operational state of a domain name. Indicators MAY be provided to identify a newly created state (the domain has been registered but has not yet appeared in a zone), a normal active state (the domain can be modified and is published in a zone), an inactive state (the domain can be modified but is not published in a zone because it has no authoritative name servers), a hold state (the domain can not be modified and is not published in a zone), a lock state (the domain can not be modified and is published in a zone), a pending transfer state, and a pending removal state.
[2] If provided, protocol indicators for hold and lock status MUST allow independent setting by both registry and registrar.
[3] A domain MAY have multiple statuses at any given time. Some statuses MAY be mutually exclusive.
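The status model in [1] through [3] can be sketched as a domain carrying a *set* of statuses, where adding one first checks the mutual-exclusion rules. The status names and the exclusivity pairs below are illustrative assumptions, not protocol-mandated values.

```python
# Assumed exclusivity rules, purely for illustration: "active" and "inactive"
# disagree about zone publication, as do "hold" and "lock".
EXCLUSIVE = [
    {"active", "inactive"},
    {"hold", "lock"},
]

def add_status(statuses: set, new: str) -> set:
    """Return the status set with `new` added, rejecting combinations that
    an exclusivity rule forbids."""
    for pair in EXCLUSIVE:
        conflicting = statuses & (pair - {new})
        if new in pair and conflicting:
            raise ValueError(f"{new!r} conflicts with {conflicting}")
    return statuses | {new}

s = add_status({"active"}, "pendingTransfer")  # multiple statuses coexist
```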
3.6 Transaction Completion Status
[1] The protocol MUST provide services that unambiguously note the success or failure of every transaction. Individual success and error conditions MUST be noted distinctly.
4. External Interface Requirements
External interfaces define the interaction points between a system and entities that communicate with the system. Specific areas of interest include user interfaces, hardware interfaces, software interfaces, and communications interfaces.
4.1 User, Hardware, and Software Interfaces
[1] The protocol MUST define a wire format for data exchange, not an application design for user, hardware, or software interfaces so that any application able to create the same bits on the wire, and to maintain the image of the same integrity constraints, is a valid implementation of the protocol.
4.2 Communications Interfaces
[1] Registries, registrars, and registrants interact using a wide spectrum of communications interfaces built upon multiple protocols, including transport layer protocols such as TCP and application layer protocols such as SMTP. The protocol MUST only be run over IETF approved protocols that feature congestion control, such as TCP and SCTP.
5. Performance Requirements
[1] Run-time performance is an absolutely critical aspect of protocol usability. While performance is very heavily dependent on the hardware and software architecture that implements a protocol, protocol features can have a direct impact on the ability of the underlying architecture to provide optimal performance. The protocol MUST be usable in both high volume and low volume operating environments.
6. Design Constraints
Protocol designers need to be aware of issues beyond functional and interface requirements when balancing protocol design decisions. This section describes additional factors that might have an impact on protocol design, including standards compliance and hardware limitations.
6.1 Standards Compliance
[1] The protocol MUST conform to current IETF standards. Standards for domain and host name syntax, IP address syntax, security, and transport are particularly relevant. Emerging standards for the Domain Name System MUST be considered as they approach maturity.
[2] The protocol MUST NOT reinvent services offered by lower layer protocol standards. For example, the use of a transport that provides reliability is to be chosen over use of a non-reliable transport with the protocol itself using retransmission to achieve reliability.
6.2 Hardware Limitations
[1] The protocol MUST NOT define any features that preclude hardware independence.
7. Service Attributes
Elements of service beyond functional and interface requirements are essential factors to consider as part of a protocol design effort. This section describes several important service elements to be addressed by protocol designers, including reliability, availability, scalability, maintainability, extensibility, and security.
7.1 Reliability
[1] Reliability is a measure of the extent to which a protocol provides a consistent, dependable level of service. Reliability is an important attribute for a domain name management protocol. An unreliable protocol increases the risk of data exchange errors, which at one extreme can have a direct impact on protocol usability and at the other extreme can introduce discontinuity between registry and registrar data stores. The protocol MUST include features that maximize reliability at the application protocol layer. Services provided by underlying transport, session, and presentation protocols SHOULD also be considered when addressing application protocol reliability.
[2] The protocol MUST be run over the most reliable transport option available in a given environment. The protocol MUST NOT implement a service that is otherwise available in an applicable standard transport.
[3] Default protocol actions for when a request or event times out MUST be well defined.
7.2 Availability
[1] Availability is a measure of the extent to which the services provided by a protocol are accessible for an intended use. Availability of an application layer protocol is primarily dependent on the software and hardware systems that implement the protocol.
[2] The protocol MUST NOT include any features that impinge on the underlying availability of the software and hardware systems needed to service the protocol.
7.3 Scalability
[1] Scalability is a measure of the extent to which a protocol can accommodate use growth while preserving acceptable operational characteristics. The protocol MUST be capable of operating at an acceptable level as the load on registry and registrar systems increases.
7.4 Maintainability
[1] Maintainability is a measure of the extent to which a protocol can be adapted or modified to address unforeseen operational needs or defects. The protocol SHOULD be developed under the nominal working group processes of the IETF to provide a well-known mechanism for ongoing maintenance.
7.5 Extensibility
[1] Extensibility is a measure of the extent to which a protocol can be adapted for future uses that were not readily evident when the protocol was originally designed. The protocol SHOULD provide features that at a minimum allow for the management of new object types without requiring revisions to the protocol itself.
[2] The requirements described in this document are not intended to limit the set of objects that might be managed by the protocol. The protocol MUST include features that allow extension to object types that are not described in this document.
[3] The protocol MUST provide an optional field within all commands whose format and use will be controlled by individual registry policy.
7.6 Security
[1] Transactional privacy and integrity services MUST be available at some protocol layer.
[2] This document describes requirements for basic user identification and authentication services. A generic protocol MAY include additional security services to protect against the attacks described here. A generic protocol MUST depend on other layered protocols to provide security services that are not provided in the generic protocol itself. A generic protocol that relies on security services from other layered protocols MUST specify the protocol layers needed to provide security services.
8. Other Requirements
Certain aspects of anticipated operational environments have to be considered when designing a generic registry-registrar protocol. Areas of concern include database operations, general operations, site adaptation, and data collection.
8.1 Database Requirements
[1] The protocol MUST NOT have any database dependencies. However, efficient use of database operations and resources has to be considered as part of the protocol design effort. The protocol SHOULD provide atomic features that can be efficiently implemented to minimize database load.
8.2 Operational Requirements
[1] Registry-registrar interactions at the protocol level SHOULD operate without human intervention. However, intermediate services that preserve the integrity of the protocol MAY be provided. For example, an intermediate service that determines if a registrant is authorized to register a name in a name space can be provided.
[2] The protocol MUST provide services that allow clients and servers to maintain a consistent understanding of the current date and time to effectively manage objects with temporal properties.
8.3 Site Adaptation Requirements
[1] Registries and registrars have varying business and operational requirements. Several factors, including governance standards, local laws, customs, and business practices all play roles in determining how registries and registrars are operated. The protocol MUST be flexible enough to operate in diverse registry-registrar environments.
8.4 Data Collection Requirements
[1] Some of the data exchanged between a registrar and registry might be considered personal, private, or otherwise sensitive. Disclosure of such information might be restricted by laws and/or business practices. The protocol MUST provide services to identify data collection policies.
[2] Some of the social information exchanged between a registrar and registry might be required to create, manage, or operate Internet or DNS infrastructure facilities, such as zone files. Such information is subject to public disclosure per relevant IETF standards.
9. Internationalization Requirements
[1] [RFC1035] describes Internet host and domain names using characters traditionally found in a subset of the 7-bit US-ASCII character set. More recent standards, such as [RFC2130] and [RFC2277], describe the need to develop protocols for an international Internet. These and other standards MUST be considered during the protocol design process to ensure world-wide usability of a generic registry-registrar protocol.
[2] The protocol MUST allow exchange of data in formats consistent with current international agreements for the representation of such objects. In particular, this means that addresses MUST include country, that telephone numbers MUST start with the international prefix "+", and that appropriate thought be given to the usability of information in both local and international contexts. This means that some elements (like names and addresses) might need to be represented multiple times, or formatted for different contexts (for instance English/French in Canada, or Latin/ideographic in Japan).
[3] All date and time values specified in a generic registry-registrar protocol MUST be expressed in Universal Coordinated Time. Dates and times MUST include information to represent a four-digit calendar year, a calendar month, a calendar day, hours, minutes, seconds, fractional seconds, and the time zone for Universal Coordinated Time. Calendars apart from the Gregorian calendar MUST NOT be used.
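The date-time requirement in [3] can be met with an RFC 3339-style rendering. The exact textual format below is an assumption — the requirement fixes only the fields (four-digit year through fractional seconds) and the UTC time scale, not a concrete syntax.

```python
from datetime import datetime, timezone

def protocol_timestamp(moment: datetime) -> str:
    """Render a date-time in UTC with a four-digit year, month, day, time
    with fractional seconds, and an explicit UTC zone designator.  The
    RFC 3339-style layout with a trailing 'Z' is an illustrative choice."""
    utc = moment.astimezone(timezone.utc)  # normalize any zone to UTC first
    return utc.strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z"

protocol_timestamp(datetime(2002, 9, 5, 14, 30, 0, 250000, tzinfo=timezone.utc))
# → '2002-09-05T14:30:00.250000Z'
```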
10. IANA Considerations
This document does not require any action on the part of IANA. Protocol specifications that require IANA action MUST follow the guidelines described in [RFC2434].
11. Security Considerations
Security services, including confidentiality, authentication, access control, integrity, and non-repudiation SHOULD be applied to protect interactions between registries and registrars as appropriate. Confidentiality services protect sensitive exchanged information from inadvertent disclosure. Authentication services confirm the claimed identity of registries and registrars before engaging in online transactions. Access control services control access to data and services based on identity. Integrity services guarantee that exchanged data has not been altered between the registry and the registrar. Non-repudiation services provide assurance that the sender of a transaction cannot deny being the source of the transaction, and that the recipient cannot deny being the receiver of the transaction.
12. Acknowledgements
This document was originally written as an individual submission Internet-Draft. The provreg working group later adopted it as a working group document and provided many invaluable comments and suggested improvements. The author wishes to acknowledge the efforts of WG chairs Edward Lewis and Jaap Akkerhuis for their process and editorial contributions.
Specific comments that helped guide development of this document were provided by Harald Tveit Alvestrand, Christopher Ambler, Karl Auerbach, Jorg Bauer, George Belotsky, Eric Brunner-Williams, Jordyn Buchanan, Randy Bush, Bruce Campbell, Dan Cohen, Andre Cormier, Kent Crispin, Dave Crocker, Ayesha Damaraaju, Lucio De Re, Mats Dufberg, Peter Eisenhauer, Sheer El-Showk, Urs Eppenberger, Patrik Falstrom, Paul George, Patrick Greenwell, Jarle Greipsland, Olivier Guillard, Alf Hansen, Paul Hoffman, Paul Kane, Shane Kerr, Elmar Knipp, Mike Lampson, Matt Larson, Ping Lu, Klaus Malorny, Bill Manning, Michael Mealling, Patrick Mevzek, Peter Mott, Catherine Murphy, Martin Oldfield, Geva Patz, Elisabeth Porteneuve, Ross Wm. Rader, Budi Rahardjo, Annie Renard, Scott Rose, Takeshi Saigoh, Marcos Sanz, Marcel Schneider, J. William Semich, James Seng, Richard Shockey, Brian Spolarich, William Tan, Stig Venaas, Herbert Vitzthum, and Rick Wesson.
13. References
Normative References:
[RFC2434] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 2434, October 1998.
Informative References:
[RFC1035] Mockapetris, P., "Domain Names - Implementation and Specification", STD 13, RFC 1035, November 1987.
[RFC2130] Weider, C., Preston, C., Simonsen, K., Alvestrand, H., Atkinson, R., Crispin, M. and P. Svanberg, "The Report of the IAB Character Set Workshop held 29 February - 1 March, 1996", RFC 2130, April 1997.
[RFC2277] Alvestrand, H., "IETF Policy on Character Sets and Languages", BCP 18, RFC 2277, January 1998.
14. Editor’s Address
Scott Hollenbeck
VeriSign Global Registry Services
21345 Ridgetop Circle
Dulles, VA 20166-6503
USA
EMail: shollenbeck@verisign.com
15. Full Copyright Statement
Copyright (C) The Internet Society (2002). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.
The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.
The place created by the building complex has a distinct character where specific yet integrated functions occur. The buildings provide a place for the activities to occur. The functions must be accommodated optimally, with high environmental regard. A strong place presupposes that there is meaningful correspondence between site, settlement and architectural detail. The buildings become a concretisation of the concept of healing.
Experiences and meaning come subconsciously while moving around a place. The movement of the body in space provides a measure for things, allowing people to appreciate the splendour and explore that which is hidden; to organise what is there to see, hear, feel, smell and touch in a given environment (Meiss 1990:15). The layout of the buildings affects the orientation and wayfinding of the user. This in turn will either make their experience exciting and helpful, or disorientating and frightening. Wayfinding and spatial orientation are important aspects of an efficient environment. Simplistic environments must be avoided; spatial complexity can be provided without making environments confusing and disorganised.
To provide the correct environment in which the functions of the building complex can be fulfilled, certain baseline criteria are needed. These criteria ensure that the building accommodates all users, invites participation, safeguards safety and health, reduces short-term and long-term economic costs, considers context and site, selects materials responsibly, and treats the environment as an important stakeholder in the project through climatic response and environmental concern.
4.1 Sustainability
Sustainability is evocative of optimistic and protective ideas, recalling sustenance and therefore nurture, or at least good common sense (Steele 1997:ix). Linked as it has been to development, sustainability’s connotations are those of building a solid future and achieving prolonged, lasting, worthwhile progress.
What is sustainable architecture? A basic definition is an architecture that meets the needs of the present without compromising the ability of future generations to meet their own needs (Steele 1997:234).
More energy is used in running buildings than in their construction and material manufacturing (Day 1990:31). Buildings themselves, their materials, location, services and design have local effects, as well as affecting the health of the people who use these places. Sustainable architecture, variously called ecological, biological, green or Gaia architecture, differs from its predecessors in that it is a critical response to a perceived global imperative (Steele 1997:234).
In either active or passive mode, sustainable architecture tries to make connections: to other buildings, to take maximum advantage of mass; to local typologies that have proved climatically and culturally effective over time; to regional microclimates and materials; or, where necessary, to global suppliers, mindful of the implications that some material choices have for non-renewable resource depletion and for the possibility of technology transfer (Steele 1997:237).
“Humanity stands at a defining moment in history. We are confronted with a perpetuation of disparities between and within nations, a worsening of poverty, hunger, ill health and illiteracy and the continuing deterioration of the ecosystems on which we depend for our well-being. However, integration of environment and development concerns and greater attention to them will lead to the fulfilment of basic needs, improved living standards for all, better protected and managed ecosystems and a safer, more prosperous future. No nation can achieve this on its own, but together we can in global partnership for sustainable development” (Steele 1997:9), from Agenda 21.
Agenda 21 addresses the built environment and the construction industry, which it identifies as “a major source of environmental damage through the degradation of fragile ecological zones, damage to natural resources, chemical pollution, and the use of building materials, which are harmful to human health.”
Specifically, as a corrective the report recommends:
1. The use of local materials and indigenous building sources.
2. Incentives to promote continuation of traditional techniques, with regional resources and self-help strategies.
3. Regulation of energy-efficient design principles.
4. Standards that would discourage construction in ecologically inappropriate areas.
5. The use of labour-intensive rather than energy-intensive construction techniques.
6. The restructuring of credit institutions to allow the poor to buy building materials and services.
7. International information exchange on all aspects of construction related to the environment, among architects and contractors, particularly about non-renewable resources.
8. Exploration of methods to encourage and facilitate the recycling and reuse of building materials, especially those requiring intensive energy consumption in their manufacture.
9. Financial penalties to discourage the use of materials that damage the environment.
4.2. Social issues
4.2.1 Indoor Environment and Occupant Comfort
“The quality of the environment in and around the building has been shown to have a direct impact on health, happiness and productivity of people. Healthier, happier, more effective people contribute to sustainability by being more efficient and therefore reducing resource consumption and waste. The quality of this environment needs to be achieved with minimal cost to the environment” (Gibberd 2004:SBAT). Shelter is the main instrument for fulfilling the requirements of comfort. It modifies the natural environment to approach optimum conditions of liveability. It should filter, absorb or repel environmental elements according to their beneficial or adverse contributions to man’s comfort. Man strives for the point at which minimum energy expenditure is needed to adjust to the environment (Olgyay 1963:15).
Lighting and daylighting
All facilities must be well lit, and daylighting should be used as much as possible. Daylight must be controllable so that glare is kept to a minimum. Used properly, daylighting can reduce electrical consumption, reduce cooling requirements and increase occupant comfort. Facilities should be designed so that electrical lighting is kept to a minimum.
Ventilation and indoor air quality
Fresh air is necessary to replenish oxygen and remove stale air. Required ventilation should be provided by natural means so that mechanical ventilation can be minimised, or even excluded from the building. Building orientation and space linkage must enhance natural ventilation.
The materials used within the building must not contaminate the indoor air quality. Paints, particle board, adhesives and furnishings can contribute to contaminants found inside new buildings. The least toxic materials should be chosen, along with the design of systems that circulate and distribute fresh air passively.
Noise
Due to the nature of the facilities, noise levels in many areas of the facility must be kept as low as possible. Functions must be zoned so that noisy and quiet areas are separated, limiting unwanted excessive noise, and preventing interference between groups. The limited vehicular circulation on site keeps traffic noise down to a minimum.
Views and visual quality
All work and recreational areas have views outside. These views are important, and have influenced the placement of walls and shape of rooms, so that the eye is drawn to outdoor elements.
Access to outside
Users of the buildings must have easy access to outside green spaces. These spaces provide places for outdoor activities, as well as mental rejuvenation between tasks.
4.2.2 Inclusive Environments
“An essential criterion for sustainable buildings is that the building is designed to accommodate everyone, or specially designed buildings need to be provided. Ensuring that buildings are inclusive supports sustainability as replication is avoided and change of use supported” (Gibberd 2004:SBAT).
Transport
Due to the nature of the facilities located on the site, a major part of the site is limited to pedestrian movement, with controlled vehicular movement. All transport on site accommodates wheelchair users. Larger parking bays are provided near entrances and pathways for disabled users.
Routes, signage, level change
All routes and circulation spaces must have an even surface that is easily navigable by wheelchair. Increased aisle and path widths are needed to accommodate all users. Outdoor surfaces must take the various users, including wheelchair users, into consideration.
Level changes within a building, as well as between buildings, must be facilitated using ramps with a gradient of 1:12. The surface of the ramp must not be slippery. Handrails and rest platforms must be provided on stairs and ramps. Kerbs must be provided on ramps.
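The 1:12 gradient specified above translates directly into the horizontal run a ramp requires for a given level change. A minimal sketch in Python; the 500mm maximum rise between rest platforms is an illustrative assumption, not a figure from this text:

```python
import math

def ramp_run_mm(rise_mm, gradient=12):
    """Horizontal run needed for a ramp at a 1:gradient slope (1:12 here)."""
    return rise_mm * gradient

def rest_platforms(rise_mm, max_rise_per_flight_mm=500):
    """Intermediate rest platforms, assuming one flight per 500mm of rise
    (an illustrative value, not taken from the text)."""
    return max(0, math.ceil(rise_mm / max_rise_per_flight_mm) - 1)

# A 900mm level change between buildings:
print(ramp_run_mm(900))     # 10800mm (10.8m) of horizontal run
print(rest_platforms(900))  # 1 intermediate platform
```

Working in millimetres keeps the arithmetic exact; the long runs produced by even modest level changes explain why ramp placement shapes circulation routes early in the design.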
Visual signs and displays must be clear, simple, and be translated into at least three languages. Visual signals must be used to reinforce audible warning signs, such as a flashing red light used with an audible fire alarm. Certain areas of the buildings are restricted to staff. This must be clearly demarcated and signed.
Toilets and bathrooms
The correct dimensions for toilet cubicles must be provided to aid wheelchair users. Doors must open outwards, with sufficient room to manoeuvre into the cubicle.
Showers must be of the correct dimension to accommodate disabled users. Handrails and a folding seat must be provided. Water controls, in the shower as well as on basins, must be such that they can be operated by all users.
4.2.3 Access to Facilities
“Conventional living and working patterns require regular access to a range of services. Ensuring that these services can be accessed easily and in environmentally friendly ways supports sustainability by increasing efficiency and reducing environmental impact” (Gibberd 2004:SBAT).
Childcare
Childcare facilities are provided for users of the Healing Centre. These facilities are provided off-site in Mamelodi, near the pick-up point for transport to the facility. They are not located at the Healing Centre itself as this may cause distraction to the users.
Residential
Residential areas of both users and staff are located more than 12km from the facility. For this reason, transport to and from the Healing Centre is available for its users. A similar transportation system can be arranged for the staff, with a central parking area close to their homes. This parking area should be located close to retail and banking facilities so that banking, post and groceries can be handled daily if necessary.
4.2.4 Participation & Control
“Ensuring that users participate in decisions about their environment helps ensure that they care for and manage this properly. Control over aspects of their local environment enables personal satisfaction and comfort. Both of these support sustainability by promoting proper management of buildings and increasing productivity” (Gibberd 2004:SBAT).
Environmental control and user adaptation
Users of the building have reasonable control over the building; this is in terms of opening windows and adjusting blinds and curtains. Furniture and fittings allow arrangement or rearrangement by the user. Personalisation of spaces may take place in the office facilities, and on a limited level in the accommodation facility. Provision should be made for places to put up pictures and notes.
Social space
Design for easy informal as well as formal interaction between people has been provided. This is accommodated in terms of various indoor and outdoor seating areas, meeting and counselling rooms and studios. This aids interaction between users themselves, as well as staff.
Community involvement
The community is an important part of this project. The aim of the Healing Centre and the rest of the building complex is to uplift the community by improving the psychological state, and so the quality of life, of its members. Skills training and workshops will benefit the community from the construction phase through to the operation of the buildings. The greater Pretoria community is involved to a certain extent by supporting the Healing facility. Through the income of the Spa and Herbal Centre, awareness and support are generated to facilitate the functioning and operation of the Healing facility.
4.2.5 Education, Health and Safety
“Buildings need to cater for the well being, development and safety of the people that use them. Awareness and environments that promote health can help reduce the incidence of diseases such as AIDS. Safe environments help to limit the incidence of accidents and where these occur, reduce their effect. Learning and access to information is increasingly seen as a requirement of a competitive work force. All of these factors contribute to sustainability by helping ensure that people remain healthy and economically active, thus reducing the ‘costs’ (to society, the environment and the economy) of unemployment and ill health” (Gibberd 2004:SBAT).
Lifelong learning / education
The nature of the Healing Centre, Herbal Centre and Spa is conducive to education and learning, especially by their users. The staff of all these facilities should periodically be sent on courses, and have access to materials that will further their knowledge and help them educate users better.
Security, health and safety regulations
Security of the building complex in general will be aided by checkpoints at the entrances. The property must be securely fenced, especially given the accommodation facilities located at the Healing Centre. At night, security should be increased through the employment of security services.
The buildings must comply with health and safety regulations. Policy and checks must be in place to ensure that these are complied with.
First-aid kits must be located in central locations. Staff must be trained in first-aid to be able to assist the injured properly. A protocol on dealing with injuries and emergencies must be established and made known to all staff.
4.3 Economic issues
4.3.1 Local Economy
“The construction and management of buildings can have a major impact on the economy of an area. The economy can be stimulated and sustained by buildings that make use of and develop local skills and resources” (Gibberd 2004:SBAT).
Contractors
80% of the construction should be carried out by contractors based within 100km of the building project. Skilled and unskilled labour must be included, with training programmes and educational tasks.
Materials and manufacture
80% of the construction materials (cement, sand and bricks) and building components (windows, doors and furniture) must be produced within 200km of the site.
Outsource opportunities
Opportunities should be created for emerging small businesses. This includes outsourcing catering, cleaning and security services, making space and equipment available for these businesses to use. All repairs and maintenance required by the building can be carried out by contractors within 100km of the site. Standardised quality fixtures last longer, and when damaged their components are easier to replace.
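The local-content targets above (80% of construction by contractors within 100km, 80% of materials and components produced within 200km) can be checked mechanically against a procurement schedule. A minimal sketch; the schedule entries and the cost weighting are illustrative assumptions, not project figures:

```python
def local_fraction(items, max_km):
    """Cost-weighted fraction of items sourced within max_km of the site.
    Each item is a (cost, distance_km) pair."""
    total = sum(cost for cost, _ in items)
    local = sum(cost for cost, km in items if km <= max_km)
    return local / total if total else 0.0

# Hypothetical materials schedule: (cost, distance from site in km)
materials = [(50000, 15), (30000, 180), (20000, 450)]
print(local_fraction(materials, max_km=200))  # 0.8 -> meets the 80% target
```

The same function applies to the contractor target by substituting contract values and a 100km radius; weighting by cost rather than item count reflects where the economic benefit actually lands.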
4.3.2 Efficiency of Use
“Buildings cost money and make use of resources whether they are used or not. Effective and efficient use of buildings supports sustainability by reducing waste and the need for additional buildings” (Gibberd 2004:SBAT).
Usable space
All buildings must be managed so that they are used productively and generally occupied to ensure efficiency. Programmes and events must be monitored to ascertain which spaces are being used effectively, and which could be used better or more frequently. The use of space must be intensified by space management approaches such as shared work spaces and areas. Some spaces can be adapted and used for more than one function. Non-useable space such as WC’s, plant rooms and circulation must be kept to a minimum.
4.3.3 Adaptability and Flexibility
“Most buildings can have a life-span of at least 50 years. It is likely that within this time the use of buildings will change, or that the feasibility of this will be investigated. Buildings which can accommodate change easily support sustainability by reducing the requirement for change and the need for new buildings” (Gibberd 2004:SBAT).
Partitions
Internal partitions between space are non-load bearing, made from brick, block or plaster board and can be removed or changed relatively easily.
Services
There is easy access to electrical and communication services in usable space. Provision should be made for easy modification of these systems.
Vertical Dimensions
Structural dimensions from the underside of the roof or slab to the floor should be a minimum of 3m. This ensures ease of change, good depth for future services, and a comfortable environment for occupants in terms of visual, acoustic and thermal quality.
4.3.4 Ongoing Costs
Maintenance
Specifications for low-maintenance or low-cost-maintenance materials should be drawn up at the initial design stages. All plant and fabric should have a maintenance cycle of at least two years. Low- or no-maintenance components (windows, doors, paint and ironmongery) should be selected. Maintenance should be carried out effectively and efficiently, with access to hard-to-reach areas provided for cleaning and repairs.
Security
Measures must be taken to limit the requirement and costs of security. Alarms and other monitoring devices can be installed to minimise the number of security people necessary.
Insurance/ water/ energy/ sewage
Costs of insurance, water, energy and sewage should be monitored. Consumption and costs must be regularly reported to management and users. Policy and management measures to reduce consumption should be implemented, and passive systems can be used for energy saving, such as photovoltaic cells that control ventilators or supply night-lighting through energy-efficient controls.
Disruption and down time
Electrical, communication, plant and other services should be located where they can easily be accessed with a minimum of disruption to occupants of the building. Access to these should be from circulation areas and not living and working areas.
4.4 Environmental issues
4.4.1 Environmental Architecture
What is here referred to as Environmental Architecture, has many other names: Construction ecology, Green Architecture, Selective Design etc. In general Environmental Architecture is a reaction to environmental degradation.
Protection of the globe through re-evaluation of the way in which buildings are designed and constructed, reflects the concerns of the green movement generally. The major impact that building design, construction and maintenance have on national energy consumption began to be widely recognised in the early seventies (Jones 1998:12). The design of any building derives from a considered response to climate, technology, culture and site. Considerations of global sustainability and energy conservation bear directly on these four issues and therefore go right to the heart of architectural design.
Under the impact of technological change, there is a growing consensus that architectural objectives and procedures should be realigned to reflect our improved climatic awareness (Hawkes, McDonald, Steemers 2002:17). Global climate change is an issue of widespread social and political concern, as witnessed by international accords and protocols.
The environmental impact of buildings is widely acknowledged, and in the past quarter-century much progress has been made in developing the means to reduce it through technological development and scientific analysis. However, there is a need to locate this within comprehensive architectural paradigms that connect it to the wider historical, cultural and social discourse without which technology remains of purely instrumental value. The challenge is to reach a point where Environmental Architecture is indistinguishable from good architecture.
Selective design, as opposed to exclusive design, aims to exploit the climatic conditions to maintain comfort, minimising the need for artificial control reliant on the consumption of energy (Hawkes, McDonald, Steemers 2002:123). This manipulation of climate, to filter selectively positive characteristics of the environment, is achieved through architecture. The form of a building is the most significant consideration with respect to the selective potential of a design.
The approach has the following principles:
- To maximise the use of ambient, renewable sources of energy in place of generated energy and fossil fuels.
- To minimise the use of energy-consuming mechanical plant in processes of environmental control.
- To provide the users of buildings with the maximum opportunity to exercise control over their environment and adapt it to their needs.
- To use non-toxic materials that do not harm the health of construction workers or users.
- To reuse, recycle and adapt old structures for future construction.
4.4.2 Water
“Water is required for many activities. However the large-scale provision of conventional water supply has many environmental implications. Water needs to be stored (sometimes taking up large areas of valuable land and disturbing natural drainage patterns with associated problems from erosion etc.), it also needs to be pumped (using energy) though a large network of pipes (that need to be maintained and repaired). Having delivered the water, a parallel effort is then required to dispose of this after its use (sewage systems). Reducing water consumption supports sustainability by reducing the environmental impact required to deliver water, and dispose of this after use in a conventional system” (Gibberd 2004:SBAT).
Water consumption and efficiency of use
All water devices should minimise water consumption and encourage efficiency of use. Recycling and reuse of greywater to flush toilets and water plants is encouraged. Onsite treatment of black water must be accommodated in the design of such services. A borehole should be included if a site is located far from municipal services, ground water levels permitting.
Runoff
Runoff can be reduced by using pervious and absorbent surfaces. Hard landscapes should be minimised, with pervious surfaces specified for parking and paths.
Planting and landscaping
Planting must be indigenous with low water requirements. Planting can help to prevent excessive water evaporation, modify the ambient temperature around a building, act as a wind break, help to filter pollution and provide privacy. The character and contours of the site should be retained as far as possible, to assist with water absorption, reducing runoff.
Figure 4_20: Water usage (diagrammatic representation)
4.4.3 Energy
“Buildings consume about 50% of all energy produced. Conventional energy production is responsible for making a large contribution to environmental damage and non-renewable resource depletion. Using less energy or using renewable energy in buildings therefore can make a substantial contribution to sustainability” (Gibberd 2003:SBAT).
Natural lighting
Natural lighting is used as much as possible throughout the building complex. There has to be sufficient light for visual focus and to perform the desired task. Glare must be avoided. Artificial lighting should be limited to nighttime. Energy efficient lighting fixtures must be used.
Ventilation
Natural ventilation is maximised. The interiors are cooled by openable windows, most located near ceiling level to allow stale and warm air out. In areas with high moisture levels and excessive heat, such as bathrooms and kitchen, extractor fans are used to aid ventilation.
Heating and cooling
Energy efficient systems are used within the building to passively control temperatures. Passive methods for heating include direct gain, trombe walls/floors and fireplaces. Passive cooling uses the building’s thermal mass, as well as ventilation to keep the structure, and so the rooms cool. Openings are shaded to prevent uncontrolled solar gain.
Renewable energy
Solar hot water systems are used to heat water in summer. Back-up electrical systems are used in very cold weather, or conditions with little sunlight.
4.4.4 Site
Buildings have a footprint and a size that take up space which could otherwise be occupied by natural ecosystems, which contribute to sustainability by helping create and maintain an environment that supports life. Buildings can support sustainability by limiting development to sites that have already been disturbed, and by working with nature, including aspects of natural ecosystems within the development.
Energy
A building consumes energy in a number of ways: in the manufacture of building materials, components and systems (embodied energy); in the distribution and transportation of building materials and components to the site (‘grey’ energy); in the construction of the building (induced energy); and in running the building and its occupants’ equipment and appliances (operational energy). A building also consumes energy in its maintenance, alteration and final disposal. An energy-efficient building looks to reduce consumption in all of these areas (Jones 1998:36).
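The four consumption categories Jones lists lend themselves to a simple whole-life tally, which makes clear how operational energy comes to dominate over a long service life. A hedged sketch; all figures are placeholders, not measured values:

```python
def whole_life_energy(embodied, grey, induced, operational_per_year, years):
    """Total life-cycle energy (e.g. in GJ) over a given service life,
    summing the four categories described in Jones (1998:36)."""
    return embodied + grey + induced + operational_per_year * years

# Placeholder figures for a small building over a 50-year life:
total = whole_life_energy(embodied=3000, grey=200, induced=400,
                          operational_per_year=250, years=50)
print(total)  # 16100 GJ, with the operational share dominating
```

Even with generous placeholder values for embodied and grey energy, the operational term grows linearly with service life, which is why passive heating, cooling and lighting measures repay their up-front material cost.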
‘Brownfields’
The site as a whole is largely a brownfields site. The building complex is situated in areas that have already been disturbed by human intervention. The proposed buildings must not cause further environmental degradation.
Landscape inputs
All new planting must be of indigenous species. Exotic species must be cleared from the site. However, the clumps of exotic Silver Birch are to be retained due to the quality of place they create. The planting and vegetation chosen to be planted on site must take into consideration the natural climatic and soil conditions.
4.4.5 Recycling and Re-use
"Raw materials and new components used in buildings consume resources and energy in their manufacture and processes. Buildings accommodate activities that consume large amounts of resources and products and produce large amounts of waste. Reducing the use of new materials and components in buildings and in the activities accommodated and reducing waste by recycling and reuse supports sustainability by reducing the energy consumption and resource consumption" (Gibberd 2004:SBAT).
Inorganic waste
This waste should be sorted into what can be recycled or re-used, and either stored or arrangements made for the recyclable waste to be taken to an appropriate plant.
Organic waste
This must be recycled and disposed of on site; greywater can be filtered and re-used, blackwater treated and used for irrigation, and other organic waste can be used for compost.
Construction waste
Construction waste must be minimised through design management and construction practices. Design allowances should be made for material recovery with disassembly, and adaptive reuse of salvaged building materials.
4.4.6 Materials and Components
"The construction of buildings usually requires large quantities of materials and components. These may require large amounts of energy to produce. Their development may also require processes that are harmful to the environment and consume non-renewable resources" (Gibberd 2004:SBAT).
Embodied energy studies have assessed the energy taken to bring materials and components to their final position. This includes extraction of the raw material, processing it into a workable material, making components and products, installation and use, removal and demolition, as well as the transport and storage of the product at each stage. Industry and its products can have damaging effects on the environment. If a suitable alternative material can be found which is less damaging to the environment, then it should be used.
Materials should be chosen for their local manufacture, low embodied energy and limited environmental damage, their properties for recycling and re-use at a later stage and lastly their aesthetic appeal.
Earth and stone found on site make up a major part of the construction materials used in the buildings. Other materials used are found within close proximity of the site.
Rammed earth
Rammed earth is a method of simple wall construction that utilises formwork, of wood or steel, into which a damp gravelly earth mixture is rammed in layers until fully compacted. When the forms are removed the wall is complete, except for curing, and requires no further treatment other than plaster finishes or cosmetic treatments as desired (McHenry 1984:48). The final product is solid and durable.
There are many benefits to using earth construction in South Africa. Earth has good thermal properties; it stores energy in the form of heat due to its mass, is warm in winter and cool in summer. Soil is a readily available resource that is relatively cheap, or even free if it is excavated on site. Due to a long tradition of earth construction in this country, many people have the skills to build with earth. Earth construction is labour intensive, and provides jobs. Due to availability of the material, cost and available skills, earth building is a highly affordable alternative to some conventional technologies. Local communities become directly involved in the process and production of the building and generate income from its construction.
Ideally, soil used in earth construction should contain four elements: coarse sand or aggregate, fine sand, silt and clay (McHenry 1984:48). Earth construction has good compressive strength but poor tensile strength, which appropriate structural design and construction must address.
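The four-element soil requirement can be expressed as a simple mix check. A sketch with illustrative acceptance bounds; McHenry names the constituents, but the percentage band used here for clay is an assumption for demonstration only:

```python
def check_soil_mix(coarse_sand, fine_sand, silt, clay):
    """Check that the four constituents (per McHenry 1984:48) sum to 100%
    and that the clay fraction falls within an assumed workable band."""
    total = coarse_sand + fine_sand + silt + clay
    if total != 100:
        return False, "constituents must sum to 100%"
    if not 10 <= clay <= 30:  # illustrative band, not from the text
        return False, "clay fraction outside assumed workable band"
    return True, "mix acceptable"

print(check_soil_mix(40, 30, 15, 15))  # (True, 'mix acceptable')
```

In practice the acceptable proportions would come from site-specific soil tests rather than fixed bounds; the point of the check is that both composition and clay content must be verified before soil excavated on site is committed to wall construction.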
(A more detailed report on rammed earth and the other building materials used is included in the Technical Documentation chapter.)
The Accommodation Schedule for the building complex is contained in Appendix E.
The Sustainable Building Assessment Tool (SBAT), tables and graph are contained in Appendix F. |