
Probabilistic Postal Address Elementalization

I haven’t posted in a few months because most of my time has been consumed with work and school. With respect to school, I have been taking Statistical Natural Language Processing at New York University. As my final project for this class, I have been working on something which I have been curious about for quite some time – probabilistic postal address elementalization.

What exactly does “postal address elementalization” mean? This is the process of breaking a postal address into tokens and classifying the function of each token, such as house number, street suffix or zip code. For my experiment, I created twelve distinct classes: five for administrative areas (country, state, county, city and neighborhood), one for postal codes and six for street address components (house number, prefix, pre-directional, base street name, post-directional and suffix). An example of an elementalized address is below:

[Image: an example address broken into tokens, with each token labeled by its class]
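To make this concrete, here is a rough sketch of what an elementalized full-format American address might look like, using the twelve classes described above (the label names are illustrative, not necessarily the exact ones used in the code):

```python
# Illustrative example: one possible elementalization of a US address.
# Each token is paired with one of the twelve classes described above.
elementalized = [
    ("1600",        "HOUSE_NUMBER"),
    ("N",           "PRE_DIRECTIONAL"),
    ("Main",        "BASE_STREET_NAME"),
    ("St",          "SUFFIX"),
    ("Springfield", "CITY"),
    ("IL",          "STATE"),
    ("62704",       "POSTAL_CODE"),
    ("USA",         "COUNTRY"),
]
```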

Although this seems like a straightforward problem, it is complicated by the fact that different countries have different languages and address formats. For example, Ghana and Cameroon have no standard postal code system. The Netherlands and Ireland do not include a province or state in the address. Some countries, such as France and Mexico, place the postal code before the city, whereas others place it after. Furthermore, terms can legitimately belong to more than one class, which makes disambiguation tricky: in Avenue N in Brooklyn, the “N” could be a post-directional or part of the street name, and in N Broadway in St. Louis, the “St.” could be a street suffix or part of the city name.

Traditionally, address elementalization has been implemented by building rule-based systems to handle each individual address format. This makes building international address parsers very time-consuming, as one would need to implement a new rule-based parser for every single country. With a statistical model, supporting a new international format is as simple as training the model on a new data set.

Some interesting characteristics of postal addresses are that they have a grammar (defined by the address format) and that their elements are contextually dependent. These traits make the problem well-suited for natural language processing. In fact, there are existing techniques used for a similar purpose, namely part-of-speech taggers, which classify each word in a sentence by its part of speech.

For my final project, I looked at four different techniques of statistical part-of-speech tagging and applied them to the problem of postal address elementalization. The code is here.

The first strategy was a Hidden Markov model (HMM) tagger. HMMs are statistical models that can be used to find the most likely sequence of states for an input. This is done by using transition probabilities (the probability of a specific state given the previous state) and emission probabilities (the probability of the current input token given the proposed state). These probabilities are learned by observing a training set. The model then classifies the input left to right, calculating the probability of a proposed state as the product of the transition probability, the emission probability, and the maximum probability from the previous state. The model tries different state combinations over the input and returns the sequence of states with the highest overall probability. A variation of this is the trigram HMM tagger, which conditions the transition probability on the two previous states.
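As a rough illustration of the decoding step, here is a minimal bigram Viterbi sketch in Python. It assumes the transition, emission, and start probabilities have already been estimated from training counts; the function and variable names are my own, not taken from the project code:

```python
def viterbi(tokens, states, trans, emit, start):
    """Bigram HMM decoding. trans[prev][cur], emit[cur][token], and start[cur]
    are probabilities estimated from counts observed in the training set."""
    # best[i][s] = (probability of the best path ending in state s at position i, backpointer)
    best = [{s: (start[s] * emit[s].get(tokens[0], 1e-6), None) for s in states}]
    for i, tok in enumerate(tokens[1:], start=1):
        col = {}
        for s in states:
            # Best previous state: maximize previous path probability * transition probability.
            prev, p = max(((q, best[i - 1][q][0] * trans[q].get(s, 1e-6)) for q in states),
                          key=lambda x: x[1])
            col[s] = (p * emit[s].get(tok, 1e-6), prev)
        best.append(col)
    # Follow backpointers from the most probable final state.
    tags = [max(states, key=lambda s: best[-1][s][0])]
    for i in range(len(tokens) - 1, 0, -1):
        tags.append(best[i][tags[-1]][1])
    return list(reversed(tags))
```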

The second strategy was a Maximum-Entropy Markov model (MEMM) tagger. MEMMs are similar to HMMs in that they try to find the sequence of states that has the maximum total probability for an input. However, instead of just using observed counts for the transition and emission probabilities, we train a maximum-entropy distribution to calculate the probabilities. The main advantage of doing this is that custom features can factor into the probability, such as the current token's length or whether it contains a number. This allows for greater flexibility in tuning the tagger, but it also makes classification slower.
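To illustrate what those custom features might look like, here is a hypothetical feature extractor; the specific feature names are illustrative, not the exact ones used in the project:

```python
def memm_features(tokens, i, prev_tag):
    """Features for estimating P(tag_i | token_i, previous tag).
    Any observable property of the input can be added here."""
    tok = tokens[i]
    return {
        "token": tok.lower(),
        "prev_tag": prev_tag,                        # Markov dependence on the previous state
        "length": len(tok),                          # e.g. postal codes and house numbers tend to be short
        "has_digit": any(c.isdigit() for c in tok),
        "all_digits": tok.isdigit(),
        "is_first": i == 0,
        "is_last": i == len(tokens) - 1,
    }

# These feature dictionaries can then be fed to a maximum-entropy classifier
# (for example, scikit-learn's DictVectorizer plus LogisticRegression) to
# estimate the per-token state probabilities used during decoding.
```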

The third strategy was a Transformation-Based Learning (TBL) tagger, also known as a Brill tagger. The TBL tagger generates rules based on observations in training. Each rule specifies a condition under which one tag should be swapped for another. During classification, initial tags are assigned to the tokens based on their observed probabilities. The tagger then iterates through the rules learned in training, swapping tags as necessary, until there are no more rules to apply or a given score threshold is met.
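A minimal sketch of the rule-application step might look like the following; the rule shown is a hypothetical one for the “St. Louis” ambiguity mentioned earlier, not an actual rule learned by the tagger:

```python
def apply_tbl_rules(tokens, tags, rules):
    """Apply the transformation rules in the order they were learned.
    Each rule says: change from_tag to to_tag when the context condition holds."""
    tags = list(tags)
    for from_tag, to_tag, condition in rules:
        for i in range(len(tags)):
            if tags[i] == from_tag and condition(tokens, tags, i):
                tags[i] = to_tag
    return tags

# Hypothetical rule: a token initially tagged as a street SUFFIX ("St.")
# is retagged as CITY when the following token is tagged CITY ("Louis").
example_rule = (
    "SUFFIX",
    "CITY",
    lambda tokens, tags, i: i + 1 < len(tags) and tags[i + 1] == "CITY",
)
```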

The fourth strategy was a Conditional Random Field (CRF) tagger. CRFs use a number of feature functions which take the proposed states and the input observation and return a value (often just 0 or 1). These features, like the features in MEMMs, can be just about anything, such as whether the current token is capitalized or whether it ends in -ed. Each feature function has a weight learned from observations in training. The score for a sequence of states is calculated by summing the weighted feature functions over every word in the input and then normalizing. Just like HMMs and MEMMs, we find the sequence that maximizes the overall probability.
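Here is a brute-force sketch of how a linear-chain CRF scores and normalizes a tag sequence; a real implementation computes the normalizer with dynamic programming, and the feature functions shown are hypothetical:

```python
import math
from itertools import product

def sequence_score(tokens, tags, feature_functions, weights):
    """Unnormalized linear-chain CRF score: the weighted sum of all
    feature function values at every position in the sequence."""
    total = 0.0
    for i in range(len(tokens)):
        prev_tag = tags[i - 1] if i > 0 else "<START>"
        for f, w in zip(feature_functions, weights):
            total += w * f(prev_tag, tags[i], tokens, i)
    return total

def sequence_probability(tokens, tags, states, feature_functions, weights):
    """P(tags | tokens): exponentiate the score and normalize over every
    candidate tag sequence (brute force; real CRFs use dynamic programming)."""
    z = sum(math.exp(sequence_score(tokens, list(cand), feature_functions, weights))
            for cand in product(states, repeat=len(tokens)))
    return math.exp(sequence_score(tokens, tags, feature_functions, weights)) / z

# Two hypothetical binary feature functions:
def digit_house_number(prev_tag, tag, tokens, i):
    return 1.0 if tag == "HOUSE_NUMBER" and tokens[i].isdigit() else 0.0

def city_then_state(prev_tag, tag, tokens, i):
    return 1.0 if prev_tag == "CITY" and tag == "STATE" else 0.0
```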

From my initial work, I found that the Maximum-Entropy Markov model and the Conditional Random Field taggers consistently had the highest overall accuracy of the group. Both consistently had accuracies over 98%, even on partial addresses. For full format American addresses, these taggers had an accuracy around 99.7%. If I had to choose between them, I would go with the Conditional Random Field tagger as it was considerably faster in tagging the addresses.

I didn’t have enough time to finish everything that I wanted to implement, so this is still a work in progress. I still want to implement some smoothing techniques for unknown states, such as Katz backoff and Kneser-Ney interpolation. There are also a few more part-of-speech tagging techniques I would like to experiment with. I guess that old adage is true in this case – time flies when you are having fun…  🙂