What's The Fuss About Natural Language Processing? ('NLP')

Natural Language Processing ("NLP") is one of the areas of machine learning that companies and academics alike love to talk about. In fact, we use it at PLATO; it's part of what makes our technology work. So what actually is NLP? (Hint: it's not to be confused with the often-criticised 'Neuro-Linguistic Programming'.)

Put simply, NLP is a way for computers to understand language. In some ways, this was one of the first functions of computing machines, but developments in machine learning have begun to make this better. Some chatbots and smart home devices (think 'Alexa') almost feel human in their speech (almost).


Why we need NLP:

Machine language instructions, or machine code, form a language since, at a fundamental level, a language is any description, question or instruction represented through a distinct sound or symbol. The issue with machine code, and with higher-level programming languages (such as C), is that they often have very small or incomplete vocabularies and, more importantly, follow strict conventions of syntax. Of course, every human language has syntax too, but it's rarely strict; it is full of inconsistencies and is painfully complicated. Look at English and think about how frustrating its rules are: "I before E except after C" works in many cases, but is full of caveats and exceptions, and then exceptions to its caveats. In fact, this supposedly steadfast rule has been shown to be widely inaccurate. Not to mention that there is no universal agreement on grammatical rules, or even on how words should be said.

England (a relatively small country) has towns just 10 miles apart that pronounce the same words differently. Northamptonshire (where I grew up) is a perfect example: each town had its own pronunciation of certain words. Human language is complicated. Despite this, humans can easily understand each other, inconsistencies and all.

Machines need perfectly correct inputs to generate outcomes. Human languages, or natural languages, thus do not translate well to computers. This is why we need Natural Language Processing, or NLP, for computers to understand and process our human communication.

Understanding language might seem simple; "plug a dictionary into a computer, and away we go". But, as described above, language is so complex that doing so would offer very little in the way of computer understanding. If language worked as simply as this, then Google Translate would be a doddle: just put in a French dictionary and it can speak French. However, as anyone who has ever relied on Google Translate for accuracy knows, it's rarely that simple.

It’s incredible how much flexibility and variety language has; this A Bit of Fry and Laurie sketch demonstrates just how much. In it, Stephen Fry (a language "expert") says: "I can say the following sentence and be utterly sure that nobody has ever said it before in the history of human communication: 'Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers.'" Language gives us so many possibilities; we get Shakespeare and Twitter storms and infinite (literally infinite) other possibilities. It is crucial to note here that language isn't just about getting the syntax right; more importantly, it's about getting the meaning, or semantics, right as well. Language is so complex that even humans misunderstand it all the time. The analytic philosopher Wittgenstein saw the "language games" played by people as the reason communication is so tricky. There isn't merely an infinite combination of words and sentences within language; there is also an infinite number of games. In short, language is extraordinarily complex.

So, how did machines start to understand what humans were saying?

History of NLP:

Computers were first created to replace 'computers' (once an actual job title), and even in the 1950s, and not just in sci-fi, people were thinking about computers that understood language. Alan Turing (recently fictionalised by Benedict Cumberbatch in The Imitation Game) wrote his paper 'Computing Machinery and Intelligence', in which he describes the Turing Test as a way to determine machine intelligence. Turing proposed that a computer can be said to possess artificial intelligence if it can accurately mimic human responses under specific conditions. In fact, we've imagined machines communicating with humans at least since the 1600s, when the mathematician and father of modern philosophy (I think, therefore I am; Cogito, ergo sum) René Descartes discussed a machine conversing with a human in his epistemological treatise 'Discourse on the Method'.

One of the first problems for computing was deconstructing language to understand humans better. Language lessons at school split words into 9 fundamental types: nouns, articles, verbs, adjectives, pronouns, adverbs, prepositions, conjunctions and interjections, known as parts of speech. Knowing which part of speech each word fits is essential, but it is impossible to tell just by looking at a word in isolation: "bark" is both a verb and a noun, with completely different meanings depending on context.
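The "bark" ambiguity above can be sketched in a few lines of code. This is a toy illustration, not a real tagger: the lexicon and the single disambiguation rule (an article before an ambiguous word suggests a noun) are invented for the example.

```python
# Toy part-of-speech tagger showing why context matters for ambiguous words.
# The lexicon and the disambiguation rule are simplified for illustration.

LEXICON = {
    "the": {"Article"},
    "dogs": {"Noun"},
    "bark": {"Noun", "Verb"},  # ambiguous: tree covering vs. dog sound
}

def tag(words):
    """Assign one part of speech per word, using the previous word as context."""
    tags = []
    for i, word in enumerate(words):
        options = LEXICON.get(word.lower(), {"Noun"})
        if len(options) == 1:
            tags.append(next(iter(options)))
        elif i > 0 and tags[i - 1] == "Article":
            # An article before an ambiguous word suggests a noun ("the bark").
            tags.append("Noun")
        else:
            # Otherwise default to a verb ("dogs bark").
            tags.append("Verb")
    return tags

print(tag(["the", "bark"]))   # ['Article', 'Noun']
print(tag(["dogs", "bark"]))  # ['Noun', 'Verb']
```

The same word gets a different tag depending on what precedes it, which is exactly the context problem described above.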

Computers thus need grammar and sentence structure. However, sentence structure is complicated. Take the diagram below and look at the sentence structure rules surrounding "the cat sat on the mat" (I do not envy linguists). The diagram is an example of a parse tree, where the rules of the language (its grammar) are applied to the parts of speech. Parsing uses logic, probability and grammar to deconstruct sentences. This allows computers to process natural language more easily and is thus a basic form of NLP. It works on simple, coherent sentences. In fact, Siri and other voice-activated assistants work similarly to this: processing the sounds and using the parts of speech like Lego to construct a computable command. "Set an alarm for 8:30" works because there is very little room for confusion there. Change the phrase to "at 8:30, bark me awake" and it doesn't fit the pattern at all. A human would get this instantly, but I just tried it with Siri, and it took nine attempts to get it to understand.

An example of a 'Parse Tree'
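The parse tree idea can be sketched as a tiny recursive-descent parser. The grammar and lexicon below are deliberately minimal, just enough to cover "the cat sat on the mat"; a real parser handles far more rules and uses probabilities to choose between them.

```python
# A minimal recursive-descent parser for a toy grammar:
#   S -> NP VP,  NP -> Det N,  VP -> V PP,  PP -> P NP
# Just enough to parse "the cat sat on the mat" into a parse tree.

LEXICON = {"the": "Det", "cat": "N", "mat": "N", "sat": "V", "on": "P"}

def parse(tokens):
    pos = 0

    def expect(tag):
        # Consume the next token if its part of speech matches.
        nonlocal pos
        if pos < len(tokens) and LEXICON.get(tokens[pos]) == tag:
            word = tokens[pos]
            pos += 1
            return (tag, word)
        raise ValueError(f"expected {tag} at position {pos}")

    def np():   # noun phrase: "the cat"
        return ("NP", expect("Det"), expect("N"))

    def pp():   # prepositional phrase: "on the mat"
        return ("PP", expect("P"), np())

    def vp():   # verb phrase: "sat on the mat"
        return ("VP", expect("V"), pp())

    tree = ("S", np(), vp())
    if pos != len(tokens):
        raise ValueError("trailing words left unparsed")
    return tree

tree = parse("the cat sat on the mat".split())
print(tree)
```

Feed it "at 8:30 bark me awake" and it fails immediately, which mirrors the Siri example above: sentences that don't fit the grammar's pattern can't be deconstructed by rules alone.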

How Does NLP Go Further?

Machine learning takes NLP to the next level. Millions upon millions of lines of human text are fed to algorithms so they can learn to understand human speech. This works really well when the machine learning is used for the same purpose the text comes from, such as a customer service chatbot trained on transcripts of real human customer service conversations.

The benefit of machine learning techniques is that hand-crafted rules hardcoded by engineers can be replaced by the algorithm itself, which extracts and recognises the patterns found in real human speech. Identifying, codifying and programming rules is time-consuming, and rules are rigid - machine learning recognises these patterns automatically from existing data sets of actual human language.
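The shift from hand-written rules to learned patterns can be sketched with a toy word-count model. The training sentences and intent labels below are invented for illustration; real systems use far larger data sets and far more sophisticated models, but the principle is the same: the patterns come from examples, not from an engineer's rulebook.

```python
# Toy illustration of learning intent patterns from example sentences
# instead of hand-coding rules. Training data and labels are invented.
from collections import Counter

TRAINING = [
    ("set an alarm for eight thirty", "alarm"),
    ("wake me up at seven", "alarm"),
    ("what is the weather today", "weather"),
    ("will it rain tomorrow", "weather"),
]

def train(examples):
    """Count how often each word appears under each intent label."""
    counts = {}
    for sentence, label in examples:
        counts.setdefault(label, Counter()).update(sentence.split())
    return counts

def classify(model, sentence):
    """Score each label by how often its training words appear in the input."""
    words = sentence.split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

model = train(TRAINING)
print(classify(model, "alarm at eight please"))    # alarm
print(classify(model, "weather tomorrow please"))  # weather
```

No one told the model that "rain" relates to weather; it picked that association up from the examples, which is exactly the advantage over hand-crafted rules described above.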


The level of accuracy that comes from machine learning NLP allows computers to identify meaning and gain a level of understanding from text, even when its structure strays far from the 'norm'. Systems built using these techniques are not perfect, but for many tasks they can get remarkably close to the understanding a human reader would have.

Understanding language is essential for much of the artificial intelligence being developed at the moment. Computers understanding language allows us to automate and replicate many tasks in life and business. Many of the most mundane things we do can be taken over by machines, giving us the freedom to do the things we actually want to do.



About the Author: Xavier Parkhouse-Parker is PLATO Intelligence's CEO. Xavier loves applying Philosophy and History to business scenarios. He enjoys using AI to solve problems in the business and HR world and helping HR teams adopt new technologies.

Contact on Twitter or LinkedIn for more.