Face recognition is a difficult problem in computer vision. Early artificial vision experiments focused on small, constrained problems: the world being observed had to be carefully controlled and constructed. Blocks in the shape of regular polygons could be identified, or simple objects such as a pair of scissors were used. The background of most images was controlled so that there was good contrast between the objects under examination and the rest of the scene. The human face is nothing like these simple objects. Face recognition is hard because the face is a complex, natural object that does not have easily (automatically) identifiable edges or features. It is therefore difficult to build a mathematical model of the face that can serve as prior knowledge when analyzing an image.

Face recognition has widespread applications. The most common is human-computer interaction: using a computer could be as simple as sitting down at a terminal, where the machine automatically identifies the user and loads their personal preferences. The same idea can improve other technologies such as speech recognition, where the computer recognizes who is speaking and loads that person's profile. Security could also benefit from face recognition technology. Recognizing a person's face is one of many possible ways to identify an individual, and it is a quick and easy security measure: unlike retinal scans, the process does not inconvenience the subject. The downside is that it does not guarantee authenticity, since the appearance of a human face changes over time and is also subject to rapid changes. Face recognition could also be useful in other areas, such as search engine technology. Face detection systems make it possible to search for specific people in images; for well-known people, this could be done simply by giving the person's name or a picture of them. The technology can also be applied to criminal mugshot databases, where automated recognition is comparatively easy because all poses are standard and the scale and lighting are kept constant. In this way, face recognition can extend online search beyond the text normally used for indexing information.

The History of Face Recognition Development

Face recognition is one of the most valuable applications of image analysis. Building an automated system that matches the human ability to recognize faces is not easy: although we recognize familiar faces effortlessly, our skills are limited when dealing with unknown faces. Computers, with their almost limitless memory and computational speed, have the potential to overcome this limitation. A simple Google search for the phrase “face recognition” returns 9422 matches; in 2009 alone, 1332 articles were published.

Face recognition could benefit many industries: video surveillance, human-machine interaction, photo cameras, and virtual reality are just a few examples. As a multidisciplinary problem, it attracts interest from many fields; it is not only a computer vision problem, since pattern recognition, neural networks, and image processing are all relevant subjects in face recognition. The topic was first studied in psychology in the 1950s [21], alongside related subjects such as facial expression, emotion interpretation, and perception of gestures. Engineering began to take an interest in face recognition in the 1960s.

Woodrow W. Bledsoe was one of the pioneers in this field. Together with other researchers, Bledsoe established Panoramic Research, Inc. in Palo Alto, California, in 1960. The company did most of its work under AI-related contracts from the U.S. Department of Defense and various intelligence agencies [4]. In 1964 and 1965, Bledsoe worked on using computers to recognize human faces. Because the funding came from an unnamed intelligence agency, little of the work was published. He later continued his research at the Stanford Research Institute. Bledsoe designed and implemented a semi-automated system: a human operator selected some face coordinates, and the computer used that information to recognize faces. He also described the main problems that face recognition would still face 50 years later: variations in lighting, head rotation, facial expression, and aging. Researchers continued to study the subject, trying to measure subjective features such as ear size or the distance between the eyes. A. Jay Goldstein, Leon D. Harmon, and Ann B. Lesk used this approach at Bell Laboratories. They described a vector of 21 subjective features, such as nose length, eyebrow width, and ear protrusion, which could be used to identify faces with standard pattern classification techniques. Fischler and Elschlager tried to measure similar features automatically in 1973 [34]. Their algorithm used local template matching to find and measure facial features.

Other approaches were explored in the 1970s. Some researchers tried to define the face as a set of geometric parameters and then perform pattern recognition based on those parameters. Kanade, however, was the first to build a fully automated face recognition system. He designed and implemented a face recognition program that ran on a computer system built for that purpose. The algorithm extracted 16 facial parameters automatically, and Kanade's work showed that automatic extraction differed little from manual extraction, achieving a correct identification rate of 45-75%. He also demonstrated that better results were obtained when irrelevant features were discarded. Later efforts focused on improving the measurement of subjective features; Mark Nixon, for instance, presented a geometric measurement for eye spacing [5]. Template matching approaches were improved with strategies such as deformable templates. New approaches were also introduced during this period: some researchers built face recognition algorithms using artificial neural networks [1].

The first mention of eigenfaces in image processing was made by Sirovich and Kirby [10]. This technique would become the most popular in the following years. Their method was based on Principal Component Analysis (PCA): the goal was to represent an image in a lower-dimensional space without losing much information, and then reconstruct it from that representation [6]. Their work laid the foundation for many later face recognition algorithms. In 1992, Matthew Turk and Alex Pentland of MIT presented a system that used eigenfaces for recognition [11]; their algorithm was able to locate, track, and classify a subject's head. The face recognition field has attracted a great deal of interest since the 1990s, with a significant increase in publications and with different algorithms emerging from different approaches. Among the most relevant are PCA, ICA, LDA, and their derivatives. This work will discuss these different algorithms and approaches.
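To make the PCA idea behind eigenfaces concrete, here is a minimal sketch in Python using NumPy. Random data stands in for a real training set, and the image size and number of components are arbitrary; this is an illustration of the projection-and-reconstruction step, not a full recognition system.

```python
import numpy as np

# Synthetic stand-in for a training set: 100 grayscale "face images" of
# 32x32 pixels, each flattened into a 1024-dimensional row vector.
rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))

# Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenfaces are the principal components of the training set, i.e.
# the right singular vectors of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 20                       # keep the 20 strongest components
eigenfaces = vt[:k]          # shape (k, 1024)

# Project a face into the k-dimensional "face space" ...
weights = (faces[0] - mean_face) @ eigenfaces.T   # shape (k,)

# ... and reconstruct an approximation from those k weights alone.
reconstruction = mean_face + weights @ eigenfaces

error = np.linalg.norm(faces[0] - reconstruction)
print(f"reconstruction error with {k} components: {error:.3f}")
```

With real face images, a few dozen components typically preserve most of the visually relevant information, which is exactly the dimensionality reduction Sirovich and Kirby exploited.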

Points of View in Recognition Algorithm Design

Early face recognition focused on the most prominent facial features, an intuitive approach that tried to imitate the human ability to recognize faces. Efforts were made [2] to assess the importance of certain intuitive features (mouth, eyes, cheeks) and geometric measures (distance between the eyes [8], width-to-length ratio). This is still a relevant issue, partly because discarding certain facial features or parts of a face can improve performance [4]. It is important to determine which facial features are essential for good recognition; a small sketch of a geometric feature vector follows.
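The sketch below shows how such a geometric feature vector might be built from landmark coordinates. The landmark positions are hypothetical values a detector might return, and the choice of ratios is illustrative, not a specific published method.

```python
import math

# Hypothetical landmark coordinates (x, y) in pixels; the values below
# are made up for illustration.
landmarks = {
    "left_eye":  (112, 140),
    "right_eye": (178, 142),
    "nose_tip":  (145, 190),
    "mouth":     (146, 230),
}

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(lm):
    """Build a small vector of geometric measurements.

    Ratios of distances are used instead of raw distances so that the
    features are insensitive to the overall scale of the face.
    """
    eye_dist = distance(lm["left_eye"], lm["right_eye"])
    eye_center = ((lm["left_eye"][0] + lm["right_eye"][0]) / 2,
                  (lm["left_eye"][1] + lm["right_eye"][1]) / 2)
    eye_to_nose = distance(eye_center, lm["nose_tip"])
    eye_to_mouth = distance(eye_center, lm["mouth"])
    return [eye_to_nose / eye_dist, eye_to_mouth / eye_dist]

print(geometric_features(landmarks))
```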

The introduction of abstract mathematical tools such as eigenfaces opened another way to recognize faces: the similarity between faces could be computed without considering features meaningful to humans. This new approach allowed a higher level of abstraction. Human-relevant cues still matter, however; skin color, for example, is an important feature for detecting faces [9, 3], and a normalization step is usually performed prior to feature extraction [12]. Nevertheless, these abstractions are essential, as they allow the problem to be approached with purely mathematical and computational methods.
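As an illustration of the skin-color cue, one common approach thresholds the chrominance channels of the YCrCb color space. The sketch below assumes the opencv-python package; the threshold bounds are commonly cited illustrative values, not tuned constants.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Return a binary mask of likely skin pixels.

    Chrominance (Cr, Cb) is thresholded rather than RGB so the rule is
    less sensitive to overall illumination. The bounds below are rough,
    commonly used values and would need tuning for a real system.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

# Usage (hypothetical file name):
# mask = skin_mask(cv2.imread("photo.jpg"))
```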

The Structure of Face Recognition Systems

Face recognition encompasses many sub-problems, which are classified in different ways in the bibliography. This section discusses some of those classifications and then offers a general, unified view.

A Generic Face Recognition System

A face recognition system takes an image or a video stream as input and outputs an identification or verification of the subject(s) appearing in it. Some approaches [15] define a face recognition system as a three-step process (see Figure 1.1); from this point of view, the face detection and feature extraction phases could run simultaneously.

Figure 1.1: A generic face recognition system.

Face detection is defined as the process of extracting faces from scenes, so the system positively identifies a certain image region as a face. This procedure has many applications, such as face tracking, pose estimation, and compression. Feature extraction then retrieves the relevant facial features from the data. These features could be specific face regions, variations, angles, or measures, which may or may not be relevant to humans. This phase has other applications as well, such as facial feature tracking or emotion recognition. Finally, the system recognizes the face: in an identification task, it reports an identity from a database. This phase involves a comparison method and a classification algorithm, and it uses methods common to many other areas that also perform classification, such as data mining and sound engineering. There are different engineering solutions to the overall problem: face detection and recognition can be performed jointly, or recognition can be preceded by a normalization of the facial expression [10].
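The skeleton below sketches this three-step structure in Python. All class and method names here are invented for illustration; the point is the flow of data from detection through feature extraction to classification.

```python
from dataclasses import dataclass

@dataclass
class Match:
    subject_id: str
    score: float

class FaceRecognitionSystem:
    """Three-step pipeline: detect -> extract features -> classify."""

    def __init__(self, detector, extractor, classifier):
        self.detector = detector      # finds face regions in an image
        self.extractor = extractor    # maps a face region to a feature vector
        self.classifier = classifier  # compares the vector against a database

    def identify(self, image):
        matches = []
        for region in self.detector.detect(image):
            features = self.extractor.extract(region)
            # classify() is assumed to return a Match for the closest
            # subject in the database.
            matches.append(self.classifier.classify(features))
        return matches
```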

Face Detection Problem Structure

Face detection itself includes several sub-problems. Some systems detect and locate faces in a single step; others first perform a detection routine and then, if a face is present, try to locate it. Figure 1.2 illustrates the steps involved in face detection. Some data dimension reduction is usually performed first, in order to achieve an acceptable response time. Preprocessing may also be applied to adapt the input image to the algorithm's requirements. Some algorithms then analyze the image as it is, while others first extract the relevant facial areas. The next phase extracts facial features or measurements, which are weighed, evaluated, and compared to decide whether a face is present and where it is. The algorithms may also use a learning routine to include new data in their models. Face detection is thus a two-class problem, deciding whether there is a face in the image or not. It can be seen as a simplified case of face recognition, which classifies a face among as many classes as there are candidate subjects. Consequently, many face detection techniques resemble face recognition algorithms, and face recognition algorithms are often based on techniques used for face detection.
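One widely available example of such a two-class detector is the Haar cascade shipped with OpenCV. The sketch below assumes the opencv-python package and a hypothetical input file name; it is an illustration of the detect-then-locate idea, not the specific systems cited above.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")            # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detector works on grayscale

# Each detection is effectively a two-class decision (face vs. non-face)
# evaluated over a sliding window at several scales.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", image)
```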

Feature Extraction Methods

There are many feature extraction algorithms; they will be discussed later in this article. Many of them are also used in face recognition, since researchers have adapted numerous algorithms and methods to their purposes. For example, PCA, first proposed by Karl Pearson in 1901 [8], was being applied to face representation and recognition by the early 1990s. A list of feature extraction algorithms used in face detection can be found in Table 1.2.

Feature Selection Methods

The goal of a feature selection algorithm is to pick the subset of extracted features that produces the smallest classification error. This dependence on classification error makes feature selection inherently tied to the classification method used. The straightforward approach is to examine every possible subset and choose the one that optimizes the criterion function, but this exhaustive search quickly becomes prohibitively expensive. Algorithms such as branch and bound can solve the problem more efficiently, as can greedy approximations like the one sketched below. For more information on the selection methods suggested in [4], see Table 1.3.
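The sketch below shows sequential forward selection, a common greedy compromise between exhaustive search and speed. The scorer used here (leave-one-out nearest-neighbour accuracy) and the random data are illustrative choices, not the methods of [4].

```python
import numpy as np

def forward_selection(X, y, score_fn, k):
    """Greedy stand-in for exhaustive subset search.

    Instead of scoring all 2^n subsets, repeatedly add the single
    feature that most improves the classification score until k
    features have been chosen.
    """
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_feat, best_score = None, -np.inf
        for f in remaining:
            score = score_fn(X[:, selected + [f]], y)
            if score > best_score:
                best_feat, best_score = f, score
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

def loo_nn_accuracy(X, y):
    """Toy criterion: leave-one-out nearest-neighbour accuracy."""
    hits = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        hits += y[np.argmin(d)] == y[i]
    return hits / len(X)

rng = np.random.default_rng(1)
X = rng.random((60, 10))                   # 60 samples, 10 candidate features
y = rng.integers(0, 2, 60)                 # two classes
print(forward_selection(X, y, loo_nn_accuracy, k=3))
```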

Face Classification

After the features have been extracted and selected, the next step is to classify the image. Appearance-based face recognition algorithms use a wide variety of classification methods, and sometimes several classifiers are combined to obtain better results. Model-based algorithms, on the other hand, match the sample against a template or model of a face; an improvement method can then be applied to the algorithm. Classifiers thus play a central role in face recognition, as they do in areas such as finance, data mining, and signal decoding, and there is a large body of literature on the topic. From the point of view of recognition, classification usually involves some form of learning: supervised, semi-supervised, or unsupervised. Unsupervised learning is the most challenging setting, because there is no set of tagged examples. Face recognition applications, however, usually include a tagged collection of subjects, so face recognition systems typically use supervised learning methods. That said, data sets may be small, and it can sometimes prove impossible to acquire newly tagged samples; in such cases, semi-supervised learning is a good option.
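As a minimal example of the supervised setting, the sketch below implements a nearest-neighbour classifier over face feature vectors. The subject names and three-dimensional vectors are made up; in practice the vectors would come from a feature extraction step such as the eigenface projection shown earlier.

```python
import numpy as np

class NearestNeighborClassifier:
    """Minimal supervised classifier over face feature vectors.

    The "training" step just stores tagged examples; identification
    returns the subject whose stored vector is closest to the query.
    """

    def __init__(self):
        self.vectors, self.labels = [], []

    def fit(self, vectors, labels):
        self.vectors = np.asarray(vectors, dtype=float)
        self.labels = list(labels)

    def predict(self, query):
        distances = np.linalg.norm(self.vectors - np.asarray(query), axis=1)
        return self.labels[int(np.argmin(distances))]

# Usage with made-up 3-dimensional feature vectors:
clf = NearestNeighborClassifier()
clf.fit([[0.1, 0.9, 0.3], [0.8, 0.2, 0.5]], ["alice", "bob"])
print(clf.predict([0.15, 0.85, 0.35]))     # -> "alice"
```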

Face Recognition: The Problem

This article has covered the face recognition field and the various methods, tools, algorithms, and approaches developed since the 1960s. Some algorithms perform better than others, but despite the variety available, face recognition remains a hard problem. The difficulties stem from environmental conditions, hardware limitations, and the definition of the problem itself. Some of them are specific to face detection, as explained in the previous sections, while others are common to all subjects related to face recognition. This section discusses these and other issues in detail.

Illumination is one of the main challenges. Many algorithms use color information in recognizing faces, and even gray-scale images depend on the lighting under which features are extracted. The perceived color of a surface depends both on its natural characteristics and on the light it is exposed to; color is, in effect, the result of how our eyes interpret light. Images taken in uncontrolled environments can show significant illumination variation, which matters for face recognition because chromaticity is an important cue. The intensity of a pixel can change with the lighting conditions, as can the relationships between pixels, and most feature extraction methods are sensitive to such changes. Note, too, that light sources vary and that new light sources can change the distribution of intensities across the face; strong illumination can cast shadows or saturate regions, hiding parts of the face and making feature extraction difficult. The core problem is that two images of the same subject under different illumination can differ more from each other than images of two distinct subjects.
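A common first line of defense against such variation is to normalize illumination during preprocessing. The sketch below shows one simple option, histogram equalization, using OpenCV; it is one of many possible normalization steps, not the method of any specific system discussed above.

```python
import cv2

def normalize_illumination(gray_face):
    """Reduce illumination differences before feature extraction.

    Histogram equalization spreads the pixel intensities of an 8-bit
    grayscale image over the full range, so two images of the same face
    taken under different lighting end up with more comparable intensity
    distributions. It compensates for global brightness and contrast
    differences, but cannot remove strong directional shadows.
    """
    return cv2.equalizeHist(gray_face)

# Usage (hypothetical file name):
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# normalized = normalize_illumination(face)
```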

In conclusion, lighting poses serious challenges for automated face recognition systems, and there is a large body of literature on the subject. It has been shown that humans can recognize faces under widely varying lighting conditions; the same cannot yet be said of machines.
