
Modeling with Naive Bayes

As the progenitor and leader of SAIL ON, the Stanford AI Lab's year-round effort to attract and keep underrepresented minorities in the field of Artificial Intelligence, I engage high schoolers with artificial intelligence, machine learning, and the positive social impacts of our field. SAIL ON meets once a month in the Computer Science building at Stanford. Its threefold focus allows past participants in the SAILORS two-week summer camp to continue learning about AI, to nurture strong relationships with each other, and to lead outreach projects that bring the technical, humanistic, and diversity missions of the AI outreach program to the wider community.


[Screenshot: title slide of the slide deck]

As the educational component of the October meeting of SAIL ON, we discussed and applied Naive Bayes modeling. Like other machine learning methods, Naive Bayes is a generic approach to learning from specific data so that we can make predictions in the future. Whether the application is predicting cancer, deciding whether you'll care about an email, or forecasting who will win an election, we can use the mathematics of Naive Bayes to connect a set of input variables to a target output variable. (Of course, some problems are harder than others!)
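The heart of the method fits in one line. Writing y for the target and x_1, ..., x_n for the inputs, Bayes' rule combined with the "naive" assumption that the inputs are conditionally independent given y yields the standard decision rule (in LaTeX notation):

    \hat{y} = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)

Every factor on the right-hand side can be estimated by counting in the training data, which is what makes both the hand-calculation below and the larger-scale sklearn version tractable.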

We focused on the derivation of Naive Bayes with a chalk-and-talk discussion (slides), identifying why Naive Bayes is mathematically justified and posing some deeper thought questions. We checked understanding with a hand-calculation of a Naive Bayes problem (handout): did a shy student receive an A in a class, given some observations about her and some observations about more forthcoming students? We then turned to a Jupyter Notebook that applies the same methods on a larger scale, working on the Titanic challenge from Kaggle with an applied introduction to pandas and sklearn: given passenger manifest records, can we predict who survived?
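For readers following along without the notebook, here is a minimal sketch of that pipeline, assuming the train.csv file and column names from the Kaggle Titanic challenge; the notebook's actual feature choices and preprocessing may differ.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Load the Kaggle Titanic training data (train.csv from the challenge page).
    df = pd.read_csv("train.csv")

    # Keep a few simple features: encode sex as 0/1 and fill in missing ages.
    df["Sex"] = (df["Sex"] == "female").astype(int)
    df["Age"] = df["Age"].fillna(df["Age"].median())
    X = df[["Pclass", "Sex", "Age", "Fare"]]
    y = df["Survived"]

    # Hold out some passengers to check generalization.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Gaussian Naive Bayes models each feature as a per-class normal distribution.
    model = GaussianNB()
    model.fit(X_train, y_train)
    print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

A handful of lines of pandas and sklearn is enough to get a working baseline.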

By providing this basis, I hope to increase appreciation for applications of what students are seeing in their math classes, and to help students go further on their own with applied machine learning before November's meeting.

[Screenshot: worksheet]
[Screenshot: Jupyter notebook on GitHub]

Automatically assessing integrative complexity

My final project for Stanford CS 224U was on automatically assessing integrative complexity. I drew on earlier work of mine that demonstrated the ongoing value of this political psychology construct, but I had not previously tried to code for it automatically. The code is available on GitHub.

Integrative complexity is a construct from political psychology that measures semantic complexity in discourse. Although this metric has been shown to be useful in predicting violence and understanding elections, it is very time-consuming for analysts to assess. We describe a theory-driven automated system that improves the state of the art for this task from Pearson's r = 0.57 to r = 0.73 by framing the task as ordinal regression, leveraging dense vector representations of words, and developing syntactic and semantic features that go beyond lexical phrase matching. Our approach is less labor-intensive and more transferable than the previous state of the art for this task.
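The ordinal-regression framing translates most directly into code. Below is a minimal sketch of one standard reduction (Frank and Hall's cumulative-threshold approach) using scikit-learn; the random vectors are stand-ins for the dense word representations, and the real system's syntactic and semantic features go well beyond this.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy data: 200 "documents" as 50-dimensional dense vectors (stand-ins for
    # averaged word vectors) with ordinal scores on the standard 1-7 IC scale.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = rng.integers(1, 8, size=200)

    # Frank & Hall reduction: one binary classifier per threshold k, each
    # estimating P(score > k). The thresholds are cumulative, which is what
    # preserves the ordering that plain multiclass classification throws away.
    thresholds = range(1, 7)
    clfs = {k: LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
            for k in thresholds}

    def predict_score(x):
        # P(score = 1) = 1 - P(> 1); P(score = k) = P(> k-1) - P(> k); and so on.
        # The classifiers are fit independently, so differences can dip below
        # zero in practice; argmax over them is still a workable decision rule.
        p_gt = {k: clfs[k].predict_proba(x.reshape(1, -1))[0, 1]
                for k in thresholds}
        probs = [1 - p_gt[1]]
        probs += [p_gt[k - 1] - p_gt[k] for k in range(2, 7)]
        probs.append(p_gt[6])
        return int(np.argmax(probs)) + 1

    print(predict_score(X[0]))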

A brief introduction to convolutional neural networks for computer vision

Convolutional neural networks transformed computer vision from “extremely hard” to “trivially achievable after a few weeks of coursework” between 2012 and 2016.

I prepared a talk for technical professional audiences that describes how neural networks extend linear classification, the intuitions behind why convolutional neural networks work well for vision, and the circumstances in which they're worth considering. I used the "Intro to CNNs for Computer Vision" materials at two different employers in 2016, and also with the high schoolers who participated in SAIL ON in 2017. (SAIL ON extended a Stanford summer program in AI for underrepresented minorities; I led the summer program and extended it into two years of monthly follow-up outreach.)
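To make the "local patterns with shared weights" intuition concrete, here is a minimal numpy sketch of a single convolutional filter sliding over an image; the hand-written edge filter is illustrative, whereas a CNN learns its filter weights from data.

    import numpy as np

    # A tiny 6x6 grayscale "image" with a vertical edge down the middle.
    image = np.zeros((6, 6))
    image[:, 3:] = 1.0

    # A 3x3 vertical-edge filter. In a real CNN these nine weights are learned,
    # and the same filter slides over every location: that weight sharing is
    # why convolutions need far fewer parameters than fully connected layers.
    kernel = np.array([[1.0, 0.0, -1.0],
                       [1.0, 0.0, -1.0],
                       [1.0, 0.0, -1.0]])

    # "Valid" convolution (strictly, cross-correlation, as in most DL libraries).
    out = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

    print(out)  # large-magnitude columns mark where the edge sits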

Automatic sign language identification

My final project for Stanford CS 231N was on automatically identifying sign languages from publicly licensed YouTube clips. For this project I learned from scratch about working with neural networks, computer vision, and video data.

Only recently has automatic processing of sign languages had the potential to advance beyond the toy problem of fingerspelling recognition. In just the last few years, we have leaped forward in our understanding of sign language theory, effective computer vision practices, and the large-scale availability of data. This project achieves better-than-human performance on sign language identification, and it releases a dataset and benchmark for future work on the topic. It is intended as a precursor to sign language machine translation.

Identifying sign languages from video: SLANG-3k

As I haven't yet created a permanent home for the dataset I collected for my most recent class project, I'm hosting it here for now. SLANG-3k is an uncurated corpus of 3000 clips, 15 seconds each, of people signing in American Sign Language, British Sign Language, and German Sign Language, intended as a public benchmark dataset for sign language identification in the wild. Using five frames per clip, I was able to achieve accuracies of around 0.66 to 0.67. More details can be found in the paper and poster created for CS 231N, Convolutional Neural Networks for Visual Recognition.
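For anyone who wants to reproduce the frame-sampling step, here is a minimal OpenCV sketch; the filename is hypothetical, and the exact preprocessing behind the reported accuracies is described in the paper.

    import cv2
    import numpy as np

    def sample_frames(path, n_frames=5, size=(224, 224)):
        """Grab n_frames evenly spaced frames from a video clip."""
        cap = cv2.VideoCapture(path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for idx in np.linspace(0, total - 1, n_frames).astype(int):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
            ok, frame = cap.read()
            if ok:
                frames.append(cv2.resize(frame, size))
        cap.release()
        return np.stack(frames)  # shape: (n_frames, height, width, 3)

    # Hypothetical filename; SLANG-3k clips are 15 seconds each.
    frames = sample_frames("clip_0001.mp4")
    print(frames.shape)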

Many thanks to everyone who helped with this project — and most especially to the anonymous survey respondents who received only warm fuzzies as compensation for taking the time to help with this early-stage research.