Received: 20 October 2019 Revised: 11 November 2019 Accepted: 23 November 2019
DOI: 10.1002/ett.3843
SPECIAL ISSUE ARTICLE
LAMSTAR: For IoT-based face recognition system to
manage the safety factor in smart cities
Prema Kumar Medapati1 P.H.S. Tejo Murthy2 K.P. Sridhar3
1Department of Electronics and
Communication Engineering, Shri Vishnu
Engineering College for Women
(Autonomous), Bhimavaram, India
2Sir CRR College of Engineering, Eluru,
India
3Department of Electronics and
Communication Engineering, Karpagam
Academy of Higher Education,
Coimbatore, India
Correspondence
Prema Kumar Medapati, Department of
Electronics and Communication
Engineering, Shri Vishnu Engineering
College for Women (Autonomous),
Bhimavaram-534 202, India.
Email: premekumarece@gmail.com
Abstract
Digital transformation plays a vital role in smart cities because of its ability to process diverse data and to provide sustainability, connectivity, and mobility effectively. To support diverse operational productivity and preserve public safety, government law enforcement increasingly integrates face recognition, which is highly important in smart cities. Traditional face recognition systems fail to predict the exact facial features, which reduces the facial recognition accuracy, and false facial point detection increases the computational complexity. Therefore, in this work, an effective artificial intelligence and internet of things–based facial recognition system is implemented to predict and match faces against a database. Initially, the facial images are captured from the internet of things sensor device and processed by applying the Perona-Malik diffusion algorithm. Then, the face location is cropped from the image, and a geometric face shape model is created for predicting the exact face from the template face image. From the face shape, different facial features are extracted from various regions using Fisher linear discriminant analysis. The derived features are trained with the help of a convolution network, and the face recognition process is performed by the adaboost large memory storage and retrieval neural network. The network successfully recognizes the face from the template, which is used to eliminate safety-related risks in smart cities.
1 INTRODUCTION
A smart city [1] is a familiar sector that uses various technologies and efforts, including different cameras, sensor devices, the internet of things (IoT), and intelligent techniques. These techniques help to develop effective and safer smart cities in an efficient way. The smart city includes several factors [2] such as information analysis, energy management, traffic analysis, and waste management, which collect data from humans and devices. Among the several smart city initiatives, face recognition is one of the main factors integrated with the city initiatives to eliminate threatening activities [3]. In addition, the face recognition process helps to maintain city safety, maximize the investigation process, and enhance the smart city initiatives. This face identification process forms the top layer of the security examination process in smart cities by matching template faces against the captured faces of criminals. This matching process predicts terrorists, suspects, and other individual threats, after which a matching alert is created to stop crimes at an early stage. Moreover, the face recognition process [4] helps to identify missing children in the city because around 800 000 children go missing every year in traffic and other locations. Therefore, the IoT-based face identification process matches the faces of missing children in the database with the identified face to find the missing child successfully. Along with this, the face identification process helps to protect public events, expedite investigations, and control access to secured areas.
FIGURE 1 General structure of face recognition system
Due to the importance of the face recognition process for these different purposes, this work focuses on the face identification process [5] to maintain security in smart cities.
In smart cities, CCTV cameras [6] are placed in different locations to maintain and manage security, but most of the time the suspected threatening people mingle with the day-to-day crowd. In that situation, the suspect is difficult to identify, which weakens the security factor; ideally, crime should be prevented before it happens in smart cities. Therefore, face images are collected with the help of IoT camera devices [7] and processed by applying several artificial intelligence techniques, while machine learning approaches are used to identify the face in the general verification process. According to this discussion, the general structure of the face recognition process is illustrated in Figure 1.
Figure 1 illustrates the general processing structure of the face recognition process, which includes face collection, face cropping, detection, feature extraction, and building the classification model. These models successfully predict the face from the set of face images in the data set [8], which was trained during the learning stage. The traditional techniques consume more time and struggle to obtain the exact facial points from the face region. These complexities maximize the security-related issues in smart city applications. Therefore, several researchers aim to build face recognition systems that predict the face for eliminating security threats in smart cities. Lu et al [9] presented an IoT-related face identification process using a spatial correlation approach. The authors capture the face images with the help of an IoT smart device, which consists of a Raspberry Pi 3 board. The captured face images are collected on laptops and mobile phones, and then the exact face is detected with cascade classifiers. The detected face is classified using spatial correlation features, which helps to recognize the face image with up to 85.7% accuracy. The created system successfully recognizes the face images from complex background details, but the recognition accuracy must be improved.
Muhammad et al [10] created a facial expression detection system for improving health care applications in smart cities. First, the face images are captured from a particular smart city, and different subbands are derived using the bandlet transform. From the extracted subbands, various features are extracted with the help of center symmetric local binary patterns. The patterns effectively describe the face images but are huge in dimension. Then, the dimensionality of the feature set is reduced by applying a Gaussian mixture model, which is processed with the help of a support vector machine. The support vector machine processes the facial features using the weight factor and detects the facial expressions successfully. The created facial recognition system ensures 99.95% accuracy.
Abiyev et al [11] detected and recognized face features for improving the biometric identification process. During this process, facial images are captured via the biometric system, the face regions are detected, and different features are extracted using different methods. In that work, Fisher linear discriminant analysis, principal component analysis, and a fast pixel-based matching approach are applied to derive the facial features. The extracted features are analyzed with the help of classifiers to recognize the face, which improves the biometric identification process. The efficiency of the system is determined using experimental results and analysis.
Duan et al [12] discussed effective deep learning techniques to examine the video surveillance system by applying artificial intelligence techniques. The introduced approach collects large-scale surveillance video, which is processed with the help of deep feature coding that successfully derives the features from the video. The extracted features are matched with the template features to recognize the face while overcoming the standardization problem effectively [13].
This detailed review of the traditional face recognition process using different machine learning techniques gives a basic idea of how to create the face recognition process [14,15]. Even though these systems predict faces successfully, they fail to maximize the recognition rate for images with complex backgrounds. In addition, the exact facial features are difficult to identify.
To overcome the aforementioned issues in the traditional face recognition process, in this work we use effective and intelligent techniques to create the face identification process for smart city applications. Face images are gathered from the surveillance cameras (SCs) face database and processed by applying different image processing and intelligent techniques. These techniques examine each pixel in the face image and identify the exact face location along with the geometric features. Finally, the face recognition is done with the help of the adaboost large memory storage and retrieval neural network (LAMSTAR). The introduced LAMSTAR face recognition process effectively utilizes its memory, which registers the face-related information that is used to predict the face if an unauthorized person enters the smart city. The careful examination of the facial features, the face shape model, and the relationship between the face pixels reduces the difficulties involved in the face identification process. The efficiency of the LAMSTAR system is then evaluated using MATLAB tool-based results and discussion with different performance metrics such as loss rate and efficiency metrics.
The rest of this manuscript is organized as follows. Section 2 describes the LAMSTAR-based face recognition process in smart cities, Section 3 evaluates the excellence of the LAMSTAR face recognition process, and Section 4 concludes the paper.
2 ADABOOST LARGE MEMORY STORAGE AND RETRIEVAL NEURAL
NETWORK–BASED FACE RECOGNITION PROCESS IN SMART CITIES
This section discusses the LAMSTAR-based face recognition process in smart cities for improving the security factor and avoiding unwanted crimes. Initially, the face images need to be collected from the city for examining humans related to criminal activities. For this purpose, IoT sensor camera devices are placed in the city, which continuously capture images and transmit them to the controlling room that is created in the smart city. The materials and methods are discussed in detail as follows.
2.1 Materials and methods
In this work, the SC face surveillance image data set is used to evaluate the introduced method. The images are collected from six different SCs with various qualities. The data set consists of 4160 static images collected from 130 subjects. During the image collection process, a law enforcement and surveillance case study is used to collect the images for predicting the face at the time of the subjects' activities. The images [15] were collected in the video communications laboratory of the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. The surveillance setup includes the SCs, a laptop, a digital video surveillance recorder, and a high-quality photo camera. With the help of this equipment, different modes of surveillance images are captured, which helps to predict the exact person's face. Among the different cameras, Cam 1 and Cam 5 are used to capture images in night mode, in which Cam 1 captures the image in infrared mode. The remaining cameras are fixed in static mode to capture the incoming person's face. After placing the cameras, the participants walk in front of the cameras at a distance of 1 to 7 m to capture the images. Through this process, around 21 images are captured per participant under indoor lighting. Furthermore, the system examines each participant, and nine images are collected in a −90° to +90° photo shoot. Finally, participants are examined using Cam 1 to capture infrared photos in a dark room. Therefore, overall, each participant contributes around 32 images to the database [16]. Following these capturing procedures, around 115 males and 15 females aged 20 to 75 are examined. Each subject is examined in different poses and from different directions because the capturing process covers the entire set of facial features. Based on this discussion, sample participant face images in different poses are depicted in Figure 2.
Figure 2 illustrates the captured SC-based sample subject images with different poses and directions. As discussed in the problem definition, the different directions of the subject's face make it possible to predict people related to criminal activities effectively. Furthermore, most existing works do not cover the facial features, which creates complexity. However, the chosen data set itself covers a lot of textural features in the captured face images. The covered facial image–related textural features are listed in Figure 3.
Figure 3 depicts the sample facial feature–related images [17], which are marked with 1, 2, 3, and 4. The marked points represent the facial features, namely the left eye, right eye, nose tip, and mouth. These marked features are the most important ones when recognizing face images. According to the marked features, the respective feature values of different participants are shown in Table 1.
Table 1 clearly shows the extracted facial features in the x and y directions. The extracted features help to recognize the face by comparing the testing and training face images.
FIGURE 2 Sample face image with different pose and direction
FIGURE 3 Sample facial feature image
TABLE 1 Different facial feature information about the surveillance camera face database (x, y pixel coordinates)

Participant image | Left eye (x, y) | Right eye (x, y) | Nose tip (x, y) | Mouth (x, y)
001_frontal | 466, 680 | 765, 678 | 633, 837 | 631, 994
002_frontal | 493, 690 | 764, 682 | 633, 844 | 634, 971
003_frontal | 470, 756 | 750, 748 | 630, 900 | 621, 1033
004_frontal | 464, 723 | 743, 728 | 611, 891 | 609, 1036
005_frontal | 519, 764 | 810, 749 | 705, 973 | 712, 1110
006_frontal | 495, 698 | 811, 695 | 665, 918 | 653, 1053
007_frontal | 471, 712 | 776, 700 | 660, 900 | 656, 1021
008_frontal | 451, 717 | 766, 719 | 624, 945 | 617, 1068
009_frontal | 436, 742 | 741, 730 | 559, 942 | 576, 1101
010_frontal | 443, 736 | 768, 722 | 618, 910 | 637, 1092
Owing to this effective coverage of the SC face images and facial features, this database was chosen for the present research. The detailed working process is discussed as follows, and the respective processing structure is shown in Figure 4.
Face image noise removal
The first step of the work is the noise removal [18] process, which is done with the help of the Perona-Malik diffusion technique. Before removing noise from the image, the face image must be analyzed by cropping the face region from the captured image. The detected face region is then converted into a grayscale image [19] to make the face image easier to process. From the color image, the red, green, and blue components are extracted. The extracted components are processed and combined into a new matrix with the same row and column dimensions as the RGB image.
FIGURE 4 Face recognition system architecture
After that, the grayscale value of each pixel location (i, j) is computed as a weighted sum of the red, green, and blue components of the face image. The RGB to grayscale conversion is done as follows:

G_s(i, j) = 0.2989 · R(i, j) + 0.5870 · G(i, j) + 0.1140 · B(i, j). (1)
Based on Equation (1), the computed value is assigned to the respective pixel in the face image. After converting the color image into a grayscale image, the noise present in the face image must be eliminated by applying the preprocessing technique defined above. In this work, the Perona-Malik diffusion technique [20] is chosen because this method eliminates the noise from the image without affecting the edges, lines, and other significant information. During the noise removal process, the collected face image is analyzed pixel by pixel in a diffusion manner. The pixel examination generates different blurred images that are convolved with the image. After convolving the image, the filtering process is applied to remove the noise by effectively extending the width of the filter. Then, the captured face image is transformed using a space-invariant process to generalize the diffusion. During this process, each noisy pixel is replaced with the help of the image content, and the filter is chosen based on the pixel content. Consider the image plane subset Ω ⊂ R² and the family of grayscale face images denoted as I(·, t) : Ω → R. After defining the grayscale face image, the respective anisotropic diffusion [21] process is performed as follows:
∂I/∂t = div(c(x, y, t) ∇I) = ∇c · ∇I + c(x, y, t) ΔI, (2)
where Δ is the Laplacian, ∇ denotes the gradient of the image, div(·) is the divergence operator, and c(x, y, t) is the diffusion coefficient, which controls the diffusion rate. The diffusion coefficient is generally chosen as a function of the image gradient so that the image edge information is maintained. Furthermore, the diffusion coefficient values are estimated as follows:
c(‖∇I‖) = e^{−(‖∇I‖/K)²} (3)

c(‖∇I‖) = 1 / (1 + (‖∇I‖/K)²). (4)
In the aforementioned equations, the constant K controls the sensitivity to edges and is selected so that noise is removed from the image successfully; the process achieves 96% accuracy in noise removal. After removing the noise from the image, the face shape model is created to recognize threat-related face images.
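As an illustration of this preprocessing stage, the following is a minimal sketch assuming NumPy arrays as input; the function names, the kappa/gamma/iteration values, and the wrap-around border handling are illustrative assumptions rather than the authors' implementation. It combines the weighted-sum conversion of Equation (1) with an explicit update of the diffusion in Equation (2), using the exponential conductance of Equation (3).

```python
import numpy as np

def rgb_to_gray(img):
    """Weighted-sum grayscale conversion of Equation (1); img is an H x W x 3 float array."""
    return 0.2989 * img[..., 0] + 0.5870 * img[..., 1] + 0.1140 * img[..., 2]

def perona_malik(gray, iterations=20, kappa=30.0, gamma=0.2):
    """Edge-preserving smoothing following Equation (2) with the conductance of
    Equation (3); gamma <= 0.25 keeps the explicit scheme stable."""
    img = gray.astype(np.float64).copy()
    for _ in range(iterations):
        # finite differences toward the four neighbours (borders wrap for brevity)
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # conductance c(||grad I||) = exp(-(||grad I|| / K)^2) per direction
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        # explicit update: I <- I + gamma * div(c * grad I)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```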
2.2 Geometric face shape model creation
The second step of the work is generating the geometric face shape model [22] because it helps to identify the exact pose and shape of the face effectively. In addition, the face shape model helps to determine the exact facial feature points from different parts of the face such as the nose, eyes, mouth, and face outline. Consider a captured face X described by n landmark points, so that X is a 2n vector, ie, X = (x_1, y_1, … , x_n, y_n)^T. The set of training face shape vectors is then denoted as {X_i}. According to this definition, the shape vectors of the captured images are modeled as follows.
Figure 5 illustrates the face shape model, which is used to generate the vectors of the face image with various expressions such as neutral (NEU), mild, full, and excited (EX1, EX2, EX3). After generating the shape model, the parameters are applied to obtain new vectors from X. During this process, the dimensionality of the vector points must be reduced with the help of the principal component analysis approach [23]. The mean of the incoming facial image data set needs to be computed first, which is done as follows:
X̄ = (1/s) Σ_{i=1}^{s} X_i. (5)
Afterward, the covariance matrix of the facial image data set is computed as follows:
S = (1/(s − 1)) Σ_{i=1}^{s} (X_i − X̄)(X_i − X̄)^T. (6)
To generate an effective training model for the facial images, the eigenvectors P_j and eigenvalues λ_j of the covariance matrix S must be computed. Using these computed values, the training shape of a specific image is estimated as follows:
X ≈ X̄ + P_s b_s, (7)
FIGURE 5 Sample representation of face image shape model
where P_s is the matrix of eigenvectors of the image data and the face shape model parameters are denoted as b_s:

b_s = P_s^T (X − X̄) (8)
b_i ∈ [−3√λ_i, +3√λ_i]. (9)
After reducing the dimensionality of the facial points, the shape of the face is generated by applying a transformation to the face model points X, which is done as follows:
X = T(X̄ + P_s b_s; x_c, y_c, s_x, s_y, θ). (10)
In Equation (10), (x_c, y_c) represents the translation of the input face image, (s_x, s_y) the scaling values, and θ the rotation of the facial points in the face image. Based on the computed values, the face shape models and the facial points are generated successfully. The created shape model contains several facial features that are effectively used for further processing. The respective facial features are then derived in order to identify and recognize the face.
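A minimal sketch of this statistical shape model is given below, assuming each training shape is a row of aligned (x, y) landmarks in a NumPy array; the variance threshold and the helper names are illustrative assumptions, not the authors' implementation. It follows Equations (5) to (10): mean shape, covariance, eigen decomposition, clipping of the parameters to ±3√λ, and the pose transform.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """shapes: s x 2n array of aligned landmark vectors (x1, y1, ..., xn, yn)."""
    x_mean = shapes.mean(axis=0)                     # Equation (5)
    cov = np.cov(shapes, rowvar=False, ddof=1)       # Equation (6)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    return x_mean, eigvecs[:, :keep], eigvals[:keep]

def shape_parameters(shape, x_mean, P_s, eigvals):
    """b_s = P_s^T (X - X_mean), clipped to +/- 3 sqrt(lambda) (Equations (8)-(9))."""
    b_s = P_s.T @ (shape - x_mean)
    limit = 3.0 * np.sqrt(eigvals)
    return np.clip(b_s, -limit, limit)

def synthesize_shape(x_mean, P_s, b_s, xc=0.0, yc=0.0, sx=1.0, sy=1.0, theta=0.0):
    """X = T(X_mean + P_s b_s; xc, yc, sx, sy, theta) (Equations (7) and (10))."""
    pts = (x_mean + P_s @ b_s).reshape(-1, 2)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = (pts * np.array([sx, sy])) @ rot.T + np.array([xc, yc])
    return pts.ravel()
```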
2.3 Facial features extraction
The third step of the work is to extract the facial features, which is done with the help of the Fisher linear discriminant analysis approach [24]. The algorithm computes the between-class scatter matrix and the within-class scatter matrix of the features. During the feature extraction process, the features are extracted by using the covariance structure, and the algorithm maximizes the following objective function:

J(W) = (W^T S_B W) / (W^T S_W W). (11)
In Equation (11), the between-class scatter matrix is denoted as S_B and the within-class scatter matrix as S_W, which are computed as follows:
S_W = Σ_C Σ_{i∈C} (X_i − μ_C)(X_i − μ_C)^T. (12)
The class mean μ_C is estimated as follows:
μ_C = (1/N_C) Σ_{i∈C} X_i (13)

S_B = Σ_C N_C (μ_C − X̄)(μ_C − X̄)^T. (14)
In Equation (14), the overall mean X̄ is computed using Equation (15):
X̄ = (1/N) Σ_i X_i = (1/N) Σ_C N_C μ_C. (15)
In the aforementioned equations, N_C denotes the number of samples in class C and N = Σ_C N_C is the total number of samples. The total scatter is then estimated as follows:
S_T = Σ_i (X_i − X̄)(X_i − X̄)^T. (16)
In general, the scatter matrices are related as follows:

S_T = S_W + S_B. (17)
From the computed scatter matrices, the eigenvalues are estimated, and the nonzero eigenvalues are arranged in descending order to obtain the corresponding eigenvectors. After computing the eigenvalues and eigenvectors, the features are extracted by computing the Euclidean distance for each candidate feature in the face image. Finally, a minimization function is applied to the computed distance values, and the features with the minimum distances are treated as the facial features extracted from the different regions of the face. The extracted features are stored in the database and are later used to match the incoming facial points against the stored features.
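The following compact sketch illustrates this step, assuming feature vectors are stacked as rows of a NumPy array with integer class labels; solving the objective of Equation (11) through a pseudo-inverse and the helper names are illustrative assumptions rather than the authors' exact procedure. It builds the scatter matrices of Equations (12) to (16) and performs the minimum-distance matching described above.

```python
import numpy as np

def fisher_lda(X, labels, n_components):
    """Fisher linear discriminant analysis over feature vectors X (samples x dims)
    with class labels; implements the scatter matrices of Equations (12)-(16)."""
    overall_mean = X.mean(axis=0)                       # Equation (15)
    d = X.shape[1]
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu_c = Xc.mean(axis=0)                          # Equation (13)
        S_w += (Xc - mu_c).T @ (Xc - mu_c)              # Equation (12)
        diff = (mu_c - overall_mean).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)                # Equation (14)
    # maximize J(W) = (W^T S_b W) / (W^T S_w W) via a generalized eigenproblem
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
    order = np.argsort(eigvals.real)[::-1]              # nonzero eigenvalues, descending
    return eigvecs[:, order[:n_components]].real

def nearest_template(feature, templates):
    """Euclidean matching of an extracted feature against stored template features."""
    dists = np.linalg.norm(templates - feature, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```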
2.4 Facial features training
The next step of this work is to train the facial features, which is done by applying the convolution neural network [25]. It is an effective deep learning neural network that takes the face images as input and allocates bias and weight values to distinguish one face image from another. During the feature learning process, the network uses ConvNets to suppress noisy pixels in the image because of their ability to learn filter characteristics. The respective structure of the convolution neural network–based facial feature learning process is depicted in Figure 6.
Figure 6 illustrates the convolution neural network structure, which includes a maxpool layer (depth 7, height 128, width 128), a convolution layer (depth 7, height 64, width 64), a max pooling layer (depth 23, height 48, width 48), a dense layer (depth 23, height 16, width 16), and, finally, a fully connected neural network with the respective output layer. The main intention of this algorithm is to train the features in an easier way by utilizing a small number of parameters [26]. As illustrated in Figure 6, the convolution network is a fully connected neural network that considers an input image of size m × m × r, ie, m is the height and width of the image and r is the number of channels in the image. During the learning process, the ConvNet removes noise using kernels of size n × n × q, where n is smaller than the image dimension and q is the same as the number of channels r and may vary for each kernel. After deciding the kernel size, the mapping process is performed over the corresponding range, and the sigmoidal nonlinearity is applied. This process is repeated for every image and for the respective pixels in the image. These mapping and connection processes create the fully connected neural network, and the densely connected network is similar to a multilayer network. During the training process, the network has an error value δ^(l+1) in the (l + 1)th layer with respect to the cost function J(W, b; x, y). In the cost function, b and W denote the network parameters, namely the bias and weight values, and (x, y) represents the training data and the respective label. As shown in the figure, the dense layer (l) is connected to the fully connected layer (l + 1), and then the error value [27] is computed as follows:
δ^(l) = ((W^(l))^T δ^(l+1)) ⊙ f′(z^(l)), (18)
where ⊙ denotes the elementwise product. After computing the lth layer error value, the respective gradient values are estimated as follows:
∇_{W^(l)} J(W, b; x, y) = δ^(l+1) (a^(l))^T (19)

∇_{b^(l)} J(W, b; x, y) = δ^(l+1). (20)
Suppose layer l is a convolutional layer followed by a subsampling (pooling) operation; then the error is computed as follows:
δ_k^(l) = upsample((W_k^(l))^T δ_k^(l+1)) ⊙ f′(z_k^(l)). (21)
In Equation (21), k denotes the filter index number and f′(z_k^(l)) represents the derivative of the activation function. The upsample operation propagates the error back through the pooling layer.
FIGURE 6 Convolution neural network structure
Finally, the gradient values of the network are computed to provide the proper learning process. The gradient values of the convolution layer are estimated as follows:
∇_{W_k^(l)} J(W, b; x, y) = Σ_{i=1}^{m} (a^(l)) ∗ rot90(δ_k^(l+1), 2) (22)

∇_{b_k^(l)} J(W, b; x, y) = Σ_{a,b} (δ_k^(l+1))_{a,b}. (23)
In Equation (23), a^(l) is the input to the lth layer and a^(1) is the input face image; a^(l) ∗ δ_k^(l+1) denotes the valid convolution between the lth layer input and the error of the (l + 1)th layer. This process is repeated continuously until the facial features are trained; the trained features are stored in the database and used to match the captured IoT face image against the template. Based on this effective feature learning process, the safety measure of the smart city is increased gradually.
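As a concrete illustration of the training rules in Equations (18) to (20), the following is a minimal NumPy sketch restricted to fully connected layers; the sigmoid activation, squared-error cost, learning rate, and layer sizes are illustrative assumptions and do not reproduce the authors' full convolution network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(features, target, W, b, lr=0.05):
    """One backpropagation step over a small fully connected network, following
    the error and gradient rules of Equations (18)-(20); W and b are lists of
    per-layer weight matrices and bias vectors."""
    # forward pass, caching activations a[l] (a[0] is the input feature vector)
    a = [features]
    for Wl, bl in zip(W, b):
        a.append(sigmoid(Wl @ a[-1] + bl))
    # output error for the squared-error cost J = 0.5 * ||a_L - target||^2
    delta = (a[-1] - target) * a[-1] * (1.0 - a[-1])
    for l in reversed(range(len(W))):
        grad_W = np.outer(delta, a[l])          # Equation (19)
        grad_b = delta                          # Equation (20)
        if l > 0:
            # Equation (18): delta^(l) = (W^(l)^T delta^(l+1)) * f'(z^(l))
            delta = (W[l].T @ delta) * a[l] * (1.0 - a[l])
        W[l] -= lr * grad_W                     # gradient descent update
        b[l] -= lr * grad_b
    return W, b
```

For instance, the 16-12-10-1 node layout listed later in Table 2 would correspond, under these assumptions, to weight matrices of shapes (12, 16), (10, 12), and (1, 10).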
2.5 Face recognition
The final step of the work is face recognition [28], which is done with the help of LAMSTAR [29]. As discussed in the previous sections, in the smart city the IoT sensor cameras continuously capture images of each person, which are transmitted to the nearby control room. There, the people's face images are continuously examined and matched against the database images to identify threat-related persons clearly. The effective prediction process improves the security and safety of the smart city. According to the discussion, the set of extracted image features is trained, learned, and stored in the database. New or incoming facial images are processed by the previously defined noise removal, face shape model generation, and feature extraction steps. The extracted facial features are matched with the database features to recognize the input features. The matching process is done by LAMSTAR [30], which is an effective deep learning approach. The network consists of many layers, and different filters are used to remove noise while the matching process is performed. In addition, the network is self-learning and self-motivating, which helps to improve the overall matching process compared to other classifiers [31]. Moreover, its fast learning algorithm and optimized link weights help to maximize the face image recognition performance; for these reasons, the LAMSTAR approach is used in this work. The LAMSTAR network checks the faces in the trained data set using the large memory of neurons in the network and the respective weight values. During the identification process [32], the similarity of the input face features to the trained data set is evaluated by minimizing the distance norm. The matching is done as follows:
‖x_i − w_{i,m}‖ = min ‖x_i − w_{i,k}‖, ∀k ∈ ⟨l, l + p⟩. (24)
In Equation (24), x_i is the input face feature, w_{i,k} is the weight vector of the stored facial feature, and p is the number of neurons in the network. The index m indicates the winning (matched) facial feature, and l is the first feature in the list during the matching process. During the matching process, the network weights must be updated to reduce the deviation of the matching process, which is performed as
w_{i,m}(t + 1) = w_{i,m}(t) + α_i (x_i(t) − w_{i,m}(t)), m : ε_m < ε_i, (25)
where w_{i,m}(t + 1) is the updated weight value of neuron m in module i, α_i is the learning coefficient, ε_m is the minimum error value in the vector, and t is the iteration number of the matching process. Before analyzing the extracted test facial image features, they need to be trained with the help of the adaboost approach, which effectively eliminates the weak features in the list. The boosting process is done as follows:
f(x) = Σ_{j=1}^{J} α_j h_j(x), (26)
where α_j denotes the nonnegative weight of the jth weak facial feature in the layer and h_j represents the better (boosted) facial feature. After boosting the facial features, the face matching or recognition process takes place. The list of facial features is already identified and saved as threat activity–related information. The testing features are matched with the training data set to detect crime-related people effectively in IoT smart cities. The efficiency of the introduced system is evaluated in Section 3.
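A minimal sketch of this matching stage is given below, assuming the features and stored templates are NumPy vectors; the learning-rate value and the form of the weak learners are illustrative assumptions rather than the authors' trained LAMSTAR network. It covers the winner selection of Equation (24), the weight update of Equation (25), and the boosted combination of Equation (26).

```python
import numpy as np

def lamstar_match(x, stored_weights):
    """Winner selection of Equation (24): stored_weights is a (neurons x dims) array
    of memorized feature vectors; returns the winning index and its distance."""
    dists = np.linalg.norm(stored_weights - x, axis=1)
    m = int(np.argmin(dists))
    return m, float(dists[m])

def update_winner(x, stored_weights, m, alpha=0.1):
    """Kohonen-style update of Equation (25): pull the winning weight toward the input."""
    stored_weights[m] += alpha * (x - stored_weights[m])
    return stored_weights

def boosted_score(x, weak_learners, alphas):
    """Weighted vote f(x) = sum_j alpha_j h_j(x) of Equation (26); each weak learner
    is a callable returning a score (e.g. +1 or -1) for a feature vector."""
    return sum(a * h(x) for a, h in zip(alphas, weak_learners))
```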
3 EXPERIMENTAL ANALYSIS
To recognize face images in IoT smart cities, the data set must be created before using the system. The system already maintains a list that includes the criminal details and the respective facial features. In this work, the SC face image data set is used, which was collected in the communications laboratory of the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. The automatic face recognition system therefore detects face images in both static and real-time environments. Here, the efficiency of the system is evaluated using the SC face image data set; as discussed in Section 2.1, enough information about the database has already been provided. The captured surveillance images are 680 × 556 pixels, which are further reduced during face detection and cropping. After performing the face detection process, the image size is reduced to about 100 × 75 pixels, which varies according to the cropping distance. Based on this discussion, sample SC face images are depicted in Figure 7.
As discussed above, the data set consists of 4160 images, which are divided into two sets of training and testing images. Around 3134 images are taken for training purposes, and the remaining 1026 images are used as testing images. In this work, the convolution neural network is used for the learning and training process. According to the discussion, the convolution neural network training structure is depicted in Table 2.
The convolution neural network consists of several layers, which were discussed in the feature learning process. During the training process, the sigmoid activation function and backpropagation-based error propagation are used to obtain an effective learning process. Even though the system uses a huge volume of facial features, it consumes minimal training time because the set of facial features has already been prepared in the previous steps. The time consumed for different numbers of facial features is depicted in Table 3.
Table 3 illustrates that the convolution neural network effectively trains the face images in different poses with minimal time. The successful utilization of each layer function along with the processing steps improves the overall feature training process. The respective graphical representation of the training process is depicted in Figure 8.
Figure 8 illustrates the training time of the convolution neural network. According to the discussion, the network effectively utilizes each layer and maps the respective layer input features. Among these, the successful computation of δ^(l+1) in the (l + 1)th layer reduces the complexity of the facial feature training process. In addition, the gradient value subsampling process also helps to train the facial features with minimal training time.
FIGURE 7 Sample SC face database images
TABLE 2 Facial feature learning network structure

Training network | Input nodes | Hidden nodes (layer 1) | Hidden nodes (layer 2) | Output nodes | Learning rate
Convolution neural network | 16 | 12 | 10 | 1 | 0.0005
TABLE 3 Convolution network training time (seconds) versus number of face images

Pose | 150 | 300 | 450 | 600 | 750 | 900 | 1150 | 1300 | 1500
Frontal pose | 0.23 | 0.234 | 0.284 | 0.312 | 0.289 | 0.302 | 0.328 | 0.324 | 0.342
L1-pose | 0.23 | 0.254 | 0.214 | 0.256 | 0.248 | 0.298 | 0.267 | 0.287 | 0.296
L2-pose | 0.267 | 0.241 | 0.246 | 0.237 | 0.278 | 0.312 | 0.302 | 0.282 | 0.278
L3-pose | 0.25 | 0.262 | 0.253 | 0.248 | 0.285 | 0.297 | 0.314 | 0.316 | 0.352
L4-pose | 0.278 | 0.255 | 0.268 | 0.256 | 0.287 | 0.269 | 0.318 | 0.298 | 0.326
R1-pose | 0.312 | 0.298 | 0.268 | 0.308 | 0.289 | 0.286 | 0.291 | 0.302 | 0.313
R2-pose | 0.298 | 0.2914 | 0.312 | 0.298 | 0.314 | 0.275 | 0.289 | 0.275 | 0.274
R3-pose | 0.302 | 0.316 | 0.317 | 0.315 | 0.289 | 0.294 | 0.312 | 0.297 | 0.298
R4-pose | 0.314 | 0.289 | 0.314 | 0.322 | 0.325 | 0.317 | 0.297 | 0.327 | 0.319
FIGURE 8 CNN training time
TABLE 4 Convolution network training accuracy (%) versus number of face images

Pose | 150 | 300 | 450 | 600 | 750 | 900 | 1150 | 1300 | 1500
Frontal pose | 98.39 | 98.24 | 98.68 | 98.78 | 98.60 | 99.023 | 98.90 | 99.23 | 99.78
L1-pose | 98.67 | 98.79 | 99.23 | 98.39 | 98.92 | 99.23 | 98.92 | 98.924 | 99.023
L2-pose | 98.93 | 98.72 | 98.64 | 98.72 | 98.99 | 99.034 | 98.38 | 98.95 | 99.13
L3-pose | 98.99 | 98.78 | 99.02 | 98.38 | 99.013 | 98.93 | 98.72 | 99.23 | 98.92
L4-pose | 98.39 | 98.84 | 98.93 | 98.76 | 99.02 | 98.59 | 98.92 | 99.1 | 99.03
R1-pose | 98.93 | 99.34 | 98.91 | 98.84 | 98.83 | 99.03 | 99.19 | 98.67 | 99.24
R2-pose | 99.023 | 99.132 | 98.67 | 98.47 | 98.76 | 98.42 | 98.72 | 99.03 | 98.88
R3-pose | 98.78 | 98.62 | 98.92 | 98.89 | 99.03 | 98.87 | 99.03 | 99.13 | 98.38
R4-pose | 98.93 | 98.94 | 99.04 | 99.13 | 98.98 | 99.23 | 98.93 | 99.18 | 98.67
The effective learning process improves the overall facial recognition accuracy. Therefore, the accuracy of the training process must be determined first in order to know the efficiency of the entire matching process. Table 4 shows the convolution network training accuracy.
The successful computation of δ^(l), ∇_{W^(l)} J(W, b; x, y), and ∇_{b^(l)} J(W, b; x, y) for the lth layer output helps to improve the training efficiency. In addition, the ability of the convolution network to compute δ_k^(l) in the subsampling process also helps to improve the training accuracy on unknown or dynamic face images. Furthermore, the convolution layer gradients ∇_{W_k^(l)} J(W, b; x, y) and ∇_{b_k^(l)} J(W, b; x, y) also help to improve the overall feature learning process in the face recognition system. The respective graphical analysis is depicted in Figure 9.
Furthermore, the excellence of the introduced facial recognition system is analyzed on the testing side. The testing image is processed by face detection, cropping, noise removal, shape model generation, and feature extraction. During the analysis, only when the system generates the right facial shape is it able to produce the right facial features. Therefore, the effectiveness of the generated face shape model is evaluated using the F1-score, and the obtained result is depicted in Table 5.
The effective consideration of the shape vectors X_i of the face image and the respective vector generation help to create the exact face shape model.
FIGURE 9 Training accuracy
TABLE 5 Face shape model efficiency, F1-score (%), versus number of face images

Pose | 150 | 300 | 450 | 600 | 750 | 900 | 1150 | 1300 | 1500
Frontal pose | 98.53 | 98.515 | 98.955 | 98.585 | 98.76 | 99.127 | 98.91 | 99.077 | 99.402
L1-pose | 98.96 | 98.75 | 98.83 | 98.55 | 99.002 | 98.982 | 98.55 | 99.09 | 99.025
L2-pose | 98.66 | 99.09 | 98.92 | 98.8 | 98.925 | 98.81 | 99.055 | 98.885 | 99.135
L3-pose | 98.902 | 98.876 | 98.795 | 98.68 | 98.895 | 98.645 | 98.875 | 99.08 | 98.63
L4-pose | 98.855 | 98.78 | 98.98 | 99.01 | 99.005 | 99.05 | 98.98 | 99.155 | 98.525
R1-pose | 98.745 | 98.633 | 98.893 | 98.568 | 98.881 | 99.055 | 98.73 | 99.084 | 99.214
R2-pose | 98.81 | 98.92 | 98.875 | 98.675 | 98.964 | 98.896 | 98.803 | 98.988 | 99.08
R3-pose | 98.781 | 98.983 | 98.858 | 98.74 | 98.91 | 98.728 | 98.965 | 98.983 | 98.883
R4-pose | 98.8 | 98.707 | 98.937 | 98.789 | 98.943 | 99.053 | 98.855 | 99.12 | 98.87
FIGURE 10 Face shape model efficiency
The construction of the shape parameters P_s b_s and the mean shape X̄ is also used to create the absolute face shape model by taking the transformation T(X̄ + P_s b_s; x_c, y_c, s_x, s_y, θ). According to the discussion, the efficiency of the generated face shape model is depicted in Figure 10.
From the derived face shape model, effective facial features are extracted and processed by the adaboost LAMSTAR network. The network effectively eliminates the weak features and computes the exact matching between the training and testing features. During the matching process, the system must maintain a minimal deviation value. The obtained error values are depicted in Figure 11.
The successful elimination of weak features from the feature list and the effective computation of ‖x_i − w_{i,m}‖ help to recognize the facial features from the list of features. During the feature matching process, the successful updating of w_{i,m}(t + 1) helps to minimize the deviation between the estimated and computed facial features. Moreover, the right selection of the learning coefficient α_i reduces ε_m while classifying the face image. The minimal deviation maximizes the overall face recognition rate, which is shown in Figure 12.
FIGURE 11 (A) MSE and (B) MAE error value of Adaboost LAMSTAR classifier face recognition
FIGURE 12 Face recognition accuracy
Figure 12 illustrates the accuracy of the introduced adaboost LAMSTAR classifier. The method successfully recognizes the incoming IoT sensor device–based face images with 99.63% accuracy across the different poses of the face. The effective recognition of faces helps to predict criminal activities in the smart city more effectively than the traditional face recognition systems.
4 CONCLUSION
This paper has analyzed the LAMSTAR-based facial recognition process in the smart city environment. Initially, the face images are captured with the help of the IoT sensor camera; after that, the face region is detected and cropped. The cropped face images are processed, and the noise in the images is eliminated by applying the effective diffusion process. Then, the face shape model is generated by determining the shape vectors, and the facial point–related features are derived effectively using statistical operators. The extracted features are processed by the convolution network to improve the learning process, which trains the features and stores them in the database. During the testing process, the incoming features are matched with the database images using the LAMSTAR distance computation process. The efficiency of the system is evaluated using the MATLAB tool, and the system ensures 99.63% accuracy while recognizing the images. The successful prediction of face images helps to maintain security in smart cities and to eliminate criminal activities at an early stage. In the future, the face shape generation model and facial point detection process will be further improved with the help of optimized techniques that enhance security in the smart city environment.
ORCID
Prema Kumar Medapati https://orcid.org/0000-0003-3858-0939
REFERENCES
1. Sharma R. Evolution in smart city infrastructure with IoT potential applications. In: Balas V, Solanki V, Kumar R, Khari M, eds. Inter-
net of Things and Big Data Analytics for Smart Generation. Cham, Switzerland: Springer. 2019. Intelligent Systems Reference Library;
vol. 154.
2. Jha S, Kumar R, Chatterjee JM, Khari M. Collaborative handshaking approachesbetween internet of computing and internet of things
towards a smart world: a review from 2009–2017. Telecommun Syst. 2019;70:617-634.
3. Bonatsos A, Middleton L, Melas P, Sabeur Z. Crime open data aggregation and management for the design of safer spaces in urban
environments. In: Hřebíček J, Schimak G, Kubásek M, Rizzoli AE, eds. Environmental Software Systems. Fostering Information Sharing.
Berlin, Heidelberg. Springer; 2013. IFIP Advances in Information and Communication Technology; vol 413.
4. Sajid M, Ali N, Dar SH, et al. Data augmentation-assisted makeup-invariant face recognition. Math Probl Eng. 2018;2018:2850632. https://doi.org/10.1155/2018/2850632.
5. Masi I, Chang FJ, Choi J, et al. Learning pose-aware models for pose-invariant face recognition in the wild. IEEE Trans Pattern Anal Mach Intell. 2018;41:379-393.
6. Taha AEM. An IoT architecture for assessing road safety in smart cities. Wirel Commun Mob Comput. 2018;2018:8214989. https://doi.org/10.1155/2018/8214989.
7. Preeth SKSL, Dhanalakshmi R, Kumar R, Shakeel PM. An adaptive fuzzy rule based energy efficient clustering and immune-inspired
routing protocol for WSN-assisted IoT system. J Ambient Intell Humaniz Comput. 2018;1-13. https://doi.org/10.1007/s12652-018-1154-z
8. Han H, Shan S, Chen X, Gao W. A comparative study on illumination preprocessing in face recognition. Pattern Recognition.
2013;46(6):1691-1699.
9. Lu J, Fu X, Zhang T. A smart system for face detection with spatial correlation improvement in IoT environment. Paper presented at:
2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications,
Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI);
2017; San Francisco, CA.
10. Muhammad G, Alsulaiman M, Amin SU, Ghoneim A, Alham MF. A facial-expression monitoring system for improved healthcare in smart cities. IEEE Access. 2017;5:10871-10881.
11. Abiyev RH. Facial feature extraction techniques for face recognition. J Comput Sci. 2014;10(12):2360-2365.
12. Duan L, Lou Y, Wang S, Gao W, Rui Y. AI Oriented Large-Scale Video Management for Smart City: Technologies, Standards and Beyond. 2017. https://arxiv.org/pdf/1712.01432.pdf.
13. McGuire M. Beyond flatland: when smart cities make stupid citizens. 2018;5:22. https://doi.org/10.1186/s40410-018-0098-0
14. http://www.scface.org/
15. Wallace R, McLaren M, McCool C, Marcel S. Inter-session variability modelling and joint factor analysis for face authentication. In:
Proceedings of International Joint Conference on Biometrics 2011 (IJCB 2011); 2011; Arlington, VA.
16. Tome P, Fierrez J, Vera-Rodriguez R, Ramos D. Identification using face regions: application and assessment in forensic scenarios. Forensic
Sci Int. 2013;233(1):75-83.
17. Baskar S, Periyanayagi S, Shakeel PM, Dhulipala VS. An energy persistent range-dependent regulated transmission communication model
for vehicular network applications. Computer Networks. 2019;152:144-153.
18. Kumar R, Sakthidasan K, Sampath R, Shakeel PM. Analysis of regional atrophy and prolong adaptive exclusive atlas to detect the
Alzheimers neuro disorder using medical images. Multimed Tools Appl. 2019. https://doi.org/10.1007/s11042-019-7213-4
19. Harder J. Color choices: CMYK, RGB, and grayscale. In: Graphics and Multimedia for the Web with Adobe Creative Cloud. Berkeley, CA: A
press; 2018:239-251.
20. Baskar S, Dhulipala VS. Collaboration of trusted node and QoS based energy multi path routing protocol for vehicular ad hoc networks.
Wirel Pers Commun. 2018;103(4):2833-2842.
21. van Houten W, Geradts Z. Using anisotropic diffusion for efficient extraction of sensor noise in camera identification. J Forensic Sci.
2012;57(2):521-527.
22. Cheng D, Zhang Y, Tian F, Liu C, Liu X. Sign correlation subspace for face alignment. Soft Comput. 2019;23:241-249. https://doi.org/10.1007/s00500-018-3389-1
23. Lin T-K. Adaptive principal component analysis combined with feature extraction-based method for feature identification in manufac-
turing. J Sens. 2019;2019:5736104. https://doi.org/10.1155/2019/5736104.
24. Iatan IF. The Fisher's linear discriminant. In: Borgelt C, ed. Combining Soft Computing and Statistical Methods in Data Analysis. Berlin,
Germany: Springer; 2010:345-352. Advances in Intelligent and Soft Computing; vol 77.
25. Shakeel PM, Burhanuddin MA, Desa MI. Lung cancer detection from CT image using improved profuse clustering and deep learning
instantaneously trained neural networks. Measurement. 2019;145:702-712. https://doi.org/10.1016/j.measurement.2019.05.027
26. Fenil E, Manogaran G, Vivekananda GN, Thanjaivadivel T, Jeeva S, Ahilan A. Real time violence detection framework for football stadium comprising of big data analysis and deep learning through bidirectional LSTM. Computer Networks. 2019;151:191-200. https://doi.org/10.1016/j.comnet.2019.01.028.
27. Yu Z, Wang W. Learning DALTS for cross-modal retrieval. CAAI Trans Intell Technol. 2019;4(1):9-16.
28. Vellido A, Lisboa PJG. Neural networks and other machine learning methods in cancer research. In: Sandoval F, Prieto A, Cabestany J,
Graña M, eds. Computational and Ambient Intelligence. Berlin, Germany: Springer; 2007:964-971. Lecture Notes in Computer Science;
vol 4507.
29. Sivaramakrishnan A, Graupe D. Brain tumor demarcation by applying a LAMSTAR neural network to spectroscopy data. Neurol Res.
2004;26:613-621.
30. Hu X, Duan S, Wang L. A novel chaotic neural network using Memristive synapse with applications in associative memory. Abstr Appl
Anal. 2012;2012:405739. https://doi.org/10.1155/2012/405739.
31. Tian C, Xu Y, Fei L, Wang J, Wen J, Luo N. Enhanced CNN for image denoising. CAAI Trans Intell Technol. 2019;4(1):17-23.
32. Samuel RDJ, Kanna BR. Tuberculosis detection system using deep neural networks. Neural Comput Appl. 2019;31:1533. https://doi.org/10.1007/s00521-018-3564-4
How to cite this article: Medapati PK, Murthy PHST, Sridhar KP. LAMSTAR: For IoT-based face
recognition system to manage the safety factor in smart cities. Trans Emerging Tel Tech. 2019;e3843.
https://doi.org/10.1002/ett.3843
 /PTB <FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200063006100700061007a0065007300200064006500200073006500720065006d0020007600650072006900660069006300610064006f00730020006f0075002000710075006500200064006500760065006d00200065007300740061007200200065006d00200063006f006e0066006f0072006d0069006400610064006500200063006f006d0020006f0020005000440046002f0058002d00310061003a0032003000300031002c00200075006d0020007000610064007200e3006f002000640061002000490053004f002000700061007200610020006f00200069006e007400650072006300e2006d00620069006f00200064006500200063006f006e0074006500fa0064006f00200067007200e1006600690063006f002e002000500061007200610020006f00620074006500720020006d00610069007300200069006e0066006f0072006d006100e700f50065007300200073006f00620072006500200063006f006d006f00200063007200690061007200200064006f00630075006d0065006e0074006f0073002000500044004600200063006f006d00700061007400ed007600650069007300200063006f006d0020006f0020005000440046002f0058002d00310061002c00200063006f006e00730075006c007400650020006f0020004700750069006100200064006f002000750073007500e100720069006f00200064006f0020004100630072006f006200610074002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200034002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e>
 /SUO <FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b00610020007400610072006b0069007300740065007400610061006e00200074006100690020006a006f006900640065006e0020007400e400790074007900790020006e006f00750064006100740074006100610020005000440046002f0058002d00310061003a0032003000300031003a007400e400200065006c0069002000490053004f002d007300740061006e006400610072006400690061002000670072006100610066006900730065006e002000730069007300e4006c006c00f6006e00200073006900690072007400e4006d00690073007400e4002000760061007200740065006e002e0020004c0069007300e40074006900650074006f006a00610020005000440046002f0058002d00310061002d00790068007400650065006e0073006f00700069007600690065006e0020005000440046002d0064006f006b0075006d0065006e0074007400690065006e0020006c0075006f006d0069007300650073007400610020006f006e0020004100630072006f0062006100740069006e0020006b00e400790074007400f6006f0070007000610061007300730061002e00200020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200034002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e>/SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200073006b00610020006b006f006e00740072006f006c006c006500720061007300200065006c006c0065007200200073006f006d0020006d00e50073007400650020006d006f0074007300760061007200610020005000440046002f0058002d00310061003a0032003000300031002c00200065006e002000490053004f002d007300740061006e00640061007200640020006600f6007200200075007400620079007400650020006100760020006700720061006600690073006b007400200069006e006e0065006800e5006c006c002e00200020004d0065007200200069006e0066006f0072006d006100740069006f006e0020006f006d00200068007500720020006d0061006e00200073006b00610070006100720020005000440046002f0058002d00310061002d006b006f006d00700061007400690062006c00610020005000440046002d0064006f006b0075006d0065006e0074002000660069006e006e00730020006900200061006e007600e4006e00640061007200680061006e00640062006f006b0065006e002000740069006c006c0020004100630072006f006200610074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200034002e00300020006f00630068002000730065006e006100720065002e>
 /ENG (Modified PDFX1a settings for Blackwell publications)
 /ENU (Use these settings to create Adobe PDF documents that are to be checked or must conform to PDF/X-1a:2001, an ISO standard for graphic content exchange. For more information on creating PDF/X-1a compliant PDF documents, please refer to the Acrobat User Guide. Created PDF documents can be opened with Acrobat and Adobe Reader 4.0 and later.)
 >>
 /Namespace [
 (Adobe)
 (Common)
 (1.0)
 ]
 /OtherNamespaces [
 <<
 /AsReaderSpreads false
 /CropImagesToFrames true
 /ErrorControl /WarnAndContinue
 /FlattenerIgnoreSpreadOverrides false
 /IncludeGuidesGrids false
 /IncludeNonPrinting false
 /IncludeSlug false
 /Namespace [
 (Adobe)
 (InDesign)
 (4.0)
 ]
 /OmitPlacedBitmaps false
 /OmitPlacedEPS false
 /OmitPlacedPDF false
 /SimulateOverprint /Legacy
 >>
 <<
 /AddBleedMarks false
 /AddColorBars false
 /AddCropMarks false
 /AddPageInfo false
 /AddRegMarks false
 /ConvertColors /ConvertToCMYK
 /DestinationProfileName ()
 /DestinationProfileSelector /DocumentCMYK
 /Downsample16BitImages true
 /FlattenerPreset <<
 /PresetSelector /HighResolution
 >>
 /FormElements false
 /GenerateStructure false
 /IncludeBookmarks false
 /IncludeHyperlinks false
 /IncludeInteractive false
 /IncludeLayers false
 /IncludeProfiles false
 /MultimediaHandling /UseObjectSettings
 /Namespace [
 (Adobe)
 (CreativeSuite)
 (2.0)
 ]
 /PDFXOutputIntentProfileSelector /DocumentCMYK
 /PreserveEditing true
 /UntaggedCMYKHandling /LeaveUntagged
 /UntaggedRGBHandling /UseDocumentProfile
 /UseDocumentBleed false
 >>
 ]
>> setdistillerparams
<<
 /HWResolution [2400 2400]
 /PageSize [612.000 792.000]
>> setpagedevice

Continue navegando