Folks, this is going to be a series of posts (more than one blog post about the same topic) about text processing. In this series, I intend to discuss some of my experiences and take this moment to organize the discussion on my blog. In the past, I touched upon some text processing recipes purely from an application point of view; however, I have spent much of my career working with text content in automation and analytics, and I owe it to myself to write more on text processing. Without further ado, let's jump into this series.
Text processing is extracting information that can be used for decision making from a text, say a book, paper, or article, WITHOUT HAVING TO READ IT. I used to read the newspaper to find out the weather forecast (five years ago), but these days if we need information about something, we type it into a search engine to get results (and ads). But imagine someone who needs information on a day-to-day basis for decision making, where he or she needs to read many articles, news stories, or books in a short period of time. Information is power, but timing is important too. Hence the need for text processing data products.
Some basic things that we can do with text data:
Crawling: extracting data/content from a website using a crawler, a program that automatically fetches pages and pulls out their content.
Tokenization: the process of splitting a string into smaller pieces (tokens), typically based on the whitespace between words.
Stemming: reducing a word to a shorter root form, so that a search for the root matches all of its variations.
Stop word removal: straightforward, removing frequently occurring words like a, an, the that carry little meaning on their own.
Parsing: the process of breaking down a sentence into a set of phrases and further breaking each phrase into nouns, verbs, adjectives, etc. The word-level version of this is called part-of-speech tagging. Cool topic.
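To make a few of these steps concrete, here is a minimal sketch in plain Python of a tokenize, stop-word-removal, and stemming pipeline. The stop word set and the suffix-stripping stemmer are deliberately tiny, illustrative stand-ins; a real system would use a proper stemmer (such as the Porter algorithm) and a fuller stop word list.

```python
import re

# Tiny illustrative stop word set; real lists are much longer.
STOP_WORDS = {"a", "an", "the", "is", "to", "of"}

def tokenize(text):
    # Lowercase the text and split it into word tokens,
    # treating anything that is not a letter as a separator.
    return re.findall(r"[a-z']+", text.lower())

def stem(word):
    # Naive suffix-stripping stemmer, for illustration only:
    # strip a common ending if enough of the word remains.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def process(text):
    # Tokenize, drop stop words, then stem what remains.
    tokens = tokenize(text)
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(process("The forecasts predicted a rainy morning"))
# → ['forecast', 'predict', 'rainy', 'morn']
```

Note how "forecasts" and "predicted" collapse to their roots, which is exactly what lets a search for one variation of a word match the others.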
These are just my experiences and my knowledge, and as always, I try to write in a way that anyone can read and understand. More later.