Speech synthesis: Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database.
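The concatenative approach mentioned above can be sketched in a few lines: a database maps linguistic units (here, phonemes) to stored recordings, and synthesis is just ordered lookup and joining. The phoneme names and sample values below are hypothetical placeholders, not real recordings.

```python
# Toy "database": phoneme label -> pre-recorded waveform samples
# (floats in [-1, 1]; values are made up for illustration).
PHONEME_DB = {
    "HH": [0.1, 0.2, 0.1],
    "AY": [0.3, 0.4, 0.3],
}

def synthesize(phonemes):
    """Concatenate the stored waveform piece for each phoneme, in order."""
    waveform = []
    for p in phonemes:
        waveform.extend(PHONEME_DB[p])  # look up the recorded unit and append it
    return waveform

# "hi" -> phonemes HH AY
print(synthesize(["HH", "AY"]))  # prints [0.1, 0.2, 0.1, 0.3, 0.4, 0.3]
```

Real unit-selection systems store many variants of each unit and choose among them to minimize audible joins; this sketch only shows the basic lookup-and-concatenate idea.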
Handwriting recognition: Handwriting recognition (HWR), also known as handwritten text recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning (optical character recognition) or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface, a generally easier task as there are more clues available.
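The two input forms described above differ in what a recognizer can see. A minimal sketch, with made-up data: off-line input is just a grid of ink pixels, while on-line input is a time-ordered sequence of pen samples, from which extra clues such as stroke order and direction can be computed.

```python
# Off-line: a scanned page reduces to a pixel grid (1 = ink).
offline_image = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]

# On-line: the pen tip is sampled over time as (x, y, t) points,
# so stroke order, direction, and speed are available as extra clues.
online_stroke = [(0.0, 0.0, 0.00), (0.0, 1.0, 0.05), (0.0, 2.0, 0.10)]

def stroke_direction(stroke):
    """Net pen movement from first to last sample, as (dx, dy)."""
    (x0, y0, _), (x1, y1, _) = stroke[0], stroke[-1]
    return (x1 - x0, y1 - y0)

print(stroke_direction(online_stroke))  # prints (0.0, 2.0): a straight downward stroke
```

The same drawn shape can yield identical off-line images from very different pen trajectories, which is one reason the on-line task is generally easier.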
Speech: Speech is human vocal communication using language. Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words (that is, all English words sound different from all French words, even if they are the same word, e.g., "role" or "hotel"), and uses those words in their semantic character as words in the lexicon of a language, according to the syntactic constraints that govern lexical words' function in a sentence. In speaking, speakers perform many different intentional speech acts, e.g., informing, declaring, asking, persuading, and directing.
Facial nerve: The facial nerve, also known as the seventh cranial nerve, cranial nerve VII, or simply CN VII, is a cranial nerve that emerges from the pons of the brainstem, controls the muscles of facial expression, and functions in the conveyance of taste sensations from the anterior two-thirds of the tongue. The nerve typically travels from the pons through the facial canal in the temporal bone and exits the skull at the stylomastoid foramen.
Facial muscles: The facial muscles are a group of striated skeletal muscles supplied by the facial nerve (cranial nerve VII) that, among other things, control facial expression. These muscles are also called mimetic muscles. They are found only in mammals, although they derive from neural crest cells, which are present in all vertebrates. They are the only muscles that attach to the dermis, lying just under the skin (subcutaneous) as they control facial expression.
Intercultural communication: Intercultural communication is a discipline that studies communication across different cultures and social groups, or how culture affects communication. It describes the wide range of communication processes and problems that naturally appear within an organization or social context made up of individuals from different religious, social, ethnic, and educational backgrounds. In this sense, it seeks to understand how people from different countries and cultures act, communicate, and perceive the world around them.
Pattern recognition: Pattern recognition is the automated recognition of patterns and regularities in data. Although similar, pattern recognition (PR) should not be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, information retrieval, bioinformatics, data compression, computer graphics, and machine learning.
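One of the simplest statistical pattern-recognition methods is nearest-neighbour classification: a new observation is assigned the label of the most similar stored pattern. A minimal sketch with toy, illustrative data:

```python
import math

# Labelled training patterns: (feature vector, class label).
# The vectors and labels here are made up for illustration.
TRAINING = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((5.0, 5.0), "B")]

def classify(point):
    """1-nearest-neighbour: return the label of the closest stored pattern."""
    _, label = min(TRAINING, key=lambda item: math.dist(item[0], point))
    return label

print(classify((0.2, 0.1)))  # prints "A" (nearest to the cluster at the origin)
print(classify((4.5, 5.5)))  # prints "B" (nearest to the pattern at (5, 5))
```

Despite its simplicity, the method illustrates the core PR loop: represent inputs as feature vectors, compare them under a similarity measure, and map them to categories.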
Corpus callosum: The corpus callosum (Latin for "tough body"), also callosal commissure, is a wide, thick nerve tract, consisting of a flat bundle of commissural fibers, beneath the cerebral cortex in the brain. The corpus callosum is only found in placental mammals. It spans part of the longitudinal fissure, connecting the left and right cerebral hemispheres, enabling communication between them. It is the largest white matter structure in the human brain, about 10 cm in length and consisting of 200–300 million axonal projections.
Real-time computer graphics: Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
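The "real time" constraint above is concrete: at a target frame rate, every frame (simulation, rendering, and display) must finish within a fixed time budget. A small sketch of that arithmetic:

```python
def frame_budget_ms(fps):
    """Milliseconds available to produce one frame at the given frame rate."""
    return 1000.0 / fps

print(round(frame_budget_ms(60), 2))  # prints 16.67 (ms per frame at 60 FPS)
print(round(frame_budget_ms(30), 2))  # prints 33.33 (ms per frame at 30 FPS)
```

If a frame exceeds its budget, the displayed frame rate drops and the illusion of smooth motion degrades, which is why real-time renderers trade image quality for speed.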
Emotional intelligence: Emotional intelligence (EI) is most often defined as the ability to perceive, use, understand, manage, and handle emotions. People with high emotional intelligence can recognize their own emotions and those of others, use emotional information to guide thinking and behavior, discern between different feelings and label them appropriately, and adjust emotions to adapt to environments. Although the term first appeared in 1964, it gained popularity in the 1995 bestselling book Emotional Intelligence by science journalist Daniel Goleman.
Emotional literacy: The term emotional literacy has often been used in parallel to, and sometimes interchangeably with, the term emotional intelligence. However, there are important differences between the two. Emotional literacy was noted as part of a project advocating humanistic education in the early 1970s. The term was used extensively by Claude Steiner (1997), who wrote: "Emotional literacy is made up of 'the ability to understand your emotions, the ability to listen to others and empathise with their emotions, and the ability to express emotions productively'."
Emotion recognition: Emotion recognition is the process of identifying human emotion. People vary widely in their accuracy at recognizing the emotions of others. Use of technology to help people with emotion recognition is a relatively nascent research area. Generally, the technology works best if it uses multiple modalities in context. To date, most work has been conducted on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from text, and physiology as measured by wearables.
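A common way to combine the multiple modalities mentioned above is late fusion: each modality-specific recognizer outputs a score per emotion, and the scores are merged, for example by a weighted average. The scores, weights, and emotion labels below are illustrative, not from a real system.

```python
def fuse(scores_by_modality, weights):
    """Weighted average of per-modality emotion score dictionaries."""
    emotions = next(iter(scores_by_modality.values())).keys()
    return {
        e: sum(weights[m] * scores_by_modality[m][e] for m in scores_by_modality)
        for e in emotions
    }

# Hypothetical per-modality outputs for one observation.
scores = {
    "face":  {"happy": 0.7, "sad": 0.3},
    "voice": {"happy": 0.6, "sad": 0.4},
    "text":  {"happy": 0.2, "sad": 0.8},
}
weights = {"face": 0.5, "voice": 0.3, "text": 0.2}

result = fuse(scores, weights)
print(max(result, key=result.get))  # prints "happy" (0.57 vs 0.43 for "sad")
```

Weighting lets a system lean on the more reliable modality in context, e.g. trusting audio more when the face is occluded.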