07/18/2016 – Sentiments!

For someone like me who’s always been interested in practical machine learning, it was a genuine delight to discover Word2vec, a neural network that takes in text and outputs numerical vectors. It transforms each word in a sentence into a series of numbers that can be used to predict which words are likely to appear near it, and words with similar meanings end up with mathematically similar vectors. This means the method never needs to know the exact definition of a word, and with enough data it could interpret relationships between words better than the average human being.

Of course, that’s not all Word2vec can do. Its applicability (there are actually two distinct model architectures, CBOW and skip-gram) seems to go well beyond predicting syntactic relationships.
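
As a toy illustration of the “similar words get similar vectors” idea, here’s a quick SAS sketch that computes cosine similarity between two made-up 4-dimensional word vectors. Real Word2vec vectors have hundreds of dimensions and come out of a trained model, but the arithmetic is the same.

```sas
/* Toy sketch of the "similar words get similar vectors" idea.        */
/* The 4-dimensional vectors below are made up; real Word2vec vectors */
/* have hundreds of dimensions, but the arithmetic is the same.       */
data _null_;
   array v_king  {4} _temporary_ (0.61 0.12 -0.30 0.85);
   array v_queen {4} _temporary_ (0.58 0.10 -0.27 0.88);
   dot = 0; len_k = 0; len_q = 0;
   do i = 1 to 4;
      dot   = dot   + v_king{i} * v_queen{i};
      len_k = len_k + v_king{i} ** 2;
      len_q = len_q + v_queen{i} ** 2;
   end;
   cosine_similarity = dot / (sqrt(len_k) * sqrt(len_q));
   put "Cosine similarity between the 'king' and 'queen' vectors: " cosine_similarity;
run;
```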

If you’re interested in learning more about Word2vec and its intricacies, check out:

  1. This document published by an Israeli computer science PhD (with a link to the PDF).
  2. A publication by the folks at GOOG on sentence and phrase composition.

12/29/2015 – Wait Time

Running an advanced algorithm in SAS takes a long time; I just came back from lunch and the query is still running. Life is a game of patience, and frankly, by the time this finishes running I’ll be fossilized. To be fair, my lunch lasted less than half an hour, so it probably isn’t a giant query like the one I wrote a few months ago (which took 50 minutes).

Most of the time issues come from joining giant tables. I’ve tried adjusting the code to make it more efficient: reducing the size of the data sets being joined, packing as many actions into each query as I could so there’d be less computation in the next one, and writing macros so there’d be less repeated code. However, it seems that code efficiency does not mean computation efficiency.
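
As a rough sketch of what “packing actions into one query” looks like in PROC SQL (the table and column names here are made up), the join and the aggregation happen in a single step instead of building an intermediate table and summarizing it afterward:

```sas
/* Hypothetical example: join and summarize in one PROC SQL step   */
/* instead of creating an intermediate joined table and running a  */
/* separate aggregation query against it afterward.                */
proc sql;
   create table work.acct_summary as
   select a.account_id,
          a.risk_level,
          count(t.txn_id)  as txn_count,
          sum(t.balance)   as total_balance
   from   work.accounts     as a
          inner join
          work.transactions as t
          on a.account_id = t.account_id
   group by a.account_id, a.risk_level;
quit;
```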

There may be a few things causing this problem:

  1. Macros are actually slowing things down: since I’m joining huge data sets, the macros run through them again and again. There’s no way around this, because I need to pull the same information for different risk levels from the same tables. A possible cure is to run the programs against a subset of the original database (see the sketch after this list).
  2. Too many actions in one query: after a bit of Googling I found that this isn’t a problem; it’s actually beneficial to combine different actions into one query. That’s one myth busted.
  3. Crummy coding: this is a little unlikely, since I’m writing SQL and the queries are quite similar across the board; at this point I’m merely extracting data for exploratory analyses. Perhaps I could split them into separate queries and see how long each one takes individually.
  4. Competing computation demands: it seems I’m not the only one skipping vacation between Christmas and New Year’s. On top of that, another team is dealing with a minor issue that’s currently on the hot seat, so they’re requesting a lot of support from the data analysts who work in and out of the SAS systems. Again, nothing to do here but be patient.
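
Here’s the rough shape of the cure suggested in #1, with made-up table, column, and macro names: build the account subset once, then have the per-risk-level macro join against that much smaller table instead of the full database.

```sas
/* Hypothetical sketch: pull the accounts of interest once... */
proc sql;
   create table work.acct_subset as
   select distinct account_id, risk_level
   from   prod.full_accounts          /* made-up source table */
   where  open_date >= '01JAN2015'd;
quit;

/* ...then reuse the small subset for each risk level. */
%macro pull_by_risk(risk);
   proc sql;
      create table work.risk_&risk. as
      select s.account_id,
             t.balance,
             t.txn_date
      from   work.acct_subset  as s
             inner join
             prod.transactions as t
             on s.account_id = t.account_id
      where  s.risk_level = "&risk.";
   quit;
%mend pull_by_risk;

%pull_by_risk(HIGH)
%pull_by_risk(MEDIUM)
%pull_by_risk(LOW)
```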

Of the four issues above, the only one I could realistically act on is #3, so I wouldn’t expect much improvement in processing speed at all. In the meantime, I might spend some time reviewing the loess method I implemented last week; this regression technique has already yielded some interesting insights about a few of our products. So far I’ve only been interpreting the fits visually, so there could be additional information to uncover once I (re)learn how to interpret the coefficients.
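
For reference, a bare-bones loess fit in SAS looks something like this (the data set and variable names are placeholders); writing the predictions out to a data set gives me something to dig into beyond the plots:

```sas
/* Hypothetical loess fit of a product metric against account age,  */
/* with the smoothed predictions saved so they can be inspected     */
/* directly instead of only read off a plot.                        */
proc loess data=work.product_metrics;
   model utilization = account_age / smooth=0.3;
   output out=work.loess_fit predicted=pred_util;
run;
```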

 

Update: I was able to cut the run time almost in half by using the compress=yes option and joining tables on unique accounts to reduce the size of every table created. It turns out that one of the tables had 251 million rows; when I tried to join it with another table, the rows multiplied and overwhelmed the system. The new program took about 50 minutes in total for all the risk levels I tested, which is a drastic improvement.
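
Roughly, the two changes looked like this (table and column names are placeholders): compression turned on for the session, and a distinct-account table built first so the join against the 251-million-row table can’t multiply rows.

```sas
/* Compress every data set written from this point on. */
options compress=yes;

proc sql;
   /* Collapse to one row per account and risk level so the join */
   /* below cannot multiply rows.                                */
   create table work.uniq_accts as
   select distinct account_id, risk_level
   from   work.account_master;

   /* Join the huge table against the slim account list only. */
   create table work.joined as
   select u.account_id,
          u.risk_level,
          b.balance,
          b.txn_date
   from   work.uniq_accts as u
          inner join
          work.big_table  as b
          on u.account_id = b.account_id;
quit;
```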