Sorry I'm Rambling Again

A few years ago I read “Surely You’re Joking, Mr. Feynman!” and while that book instantly made Feynman one of my top role models, it also got me thinking about many things, including how dreams work. I’m not a lucid dreamer, and my short-lived attempts to become one haven’t had much success, so it’s not so much thinking about dreams as just observing what happens as I’m falling asleep, and immediately after I wake up and remember a dream. The most interesting observation so far? As I’m on the brink of sleep, my internal narrative begins to ramble.

By default, it sounds like something from Aesop’s Fables. “And then the donkey said to the man, ‘Why don’t we go to the city tomorrow?’ The cow saw that this was not a very smart idea. Rama thought, ‘Certainly it is so.’ …” If I’ve been reading a novel (particularly a well-written one), there’s often a clear influence from it on the prose style. I recall once rambling a series of texts exchanged between a friend and me. A few days ago I ‘rambled’ a Wikipedia page: internal narrative combined with a visual hallucination of sorts, with a subsection heading, paragraphs of text, an image with a caption, and so on. What’s constant in all of these is that the flow of the words sounds incredibly natural and smooth, but there is no significant meaning whatsoever.

(Disclaimer: these are the ones I remember, which may not be representative of the actual distribution, and the true ‘default’ in particular might be something else entirely that I have no clue about. It isn’t relevant to this post though.)

Let me clarify that this isn’t the same as daydreaming. When I’m daydreaming, the events make sense, form a story, and are plausible in the real world: “She’ll walk up to me and smile and say, ‘Wanna dance?’ And I’ll take her hand and we’ll waltz perfectly, and when the music ends she’ll wrap her arms around my neck and” okay anyway the events make sense and form a story. Also, I’m aware that I’m daydreaming, and I imagine the things I want to (as a self-aware sentient being); when I’m rambling, it stops a few seconds after I become aware of the fact, and whatever meaning I remember from the narrative seems incoherent and random to me (the self-aware sentient being that actually worries about meaning).

So, what’s going on? Scott Alexander does a very good job of connecting this to a recent breakthrough in computational language models: OpenAI’s GPT-2, whose samples are uncannily similar to my dream-rambling (actually, they sometimes make more sense than I do). Please read Scott’s blog post; it is vastly superior to my summary/paraphrase in the rest of this paragraph. Essentially, a language model like GPT-2 is a system that takes in some history, such as the beginning of a sentence, and predicts the future, such as the next word in the sentence. To be precise, rather than predicting the next word per se, it produces a probability distribution over possibilities for the next word based on what it has seen in its training data, which means it can also be used to generate language by sampling from its own predictions. This is a pretty good approximation of how humans do language most of the time, too. Robin Hanson has written about this (including speculation about the consequences of GPT-2-like machines two years before they existed), and Scott explains it in his review of a book presenting a beautiful neuroscience theory: the brain is really a large prediction machine whose models of the world are updated by the senses, but whose sensory inputs are also greatly distorted to fit those models. So if you take a human brain detached from sensory input (such as a sleeping human brain) and run its language model generatively, …
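To make the predict-then-sample loop concrete, here’s a minimal sketch in Python. This is a toy bigram model, nowhere near GPT-2’s scale or architecture, and the tiny corpus is just my Aesop-style ramble from earlier; but the generation step — predict a distribution over next words, then draw from it — is the same in spirit.

```python
import random
from collections import defaultdict, Counter

# Toy corpus: the Aesop-flavored ramble from earlier in this post.
corpus = ("and then the donkey said to the man why don't we go to the "
          "city tomorrow the cow saw that this was not a very smart idea").split()

# "Training": count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Predict a distribution over next words, then sample from it."""
    options = counts[word]
    if not options:                   # dead end (the corpus's last word)
        return random.choice(corpus)  # restart somewhere arbitrary
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

word = "the"
ramble = [word]
for _ in range(15):
    word = sample_next(word)
    ramble.append(word)
print(" ".join(ramble))
```

The output flows locally (each word plausibly follows the previous one) while meaning nothing globally, which is exactly the dream-rambling quality; GPT-2 does the same thing with a vastly better predictor and far more context than one previous word.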

This is slightly scary. A machine that can produce natural language of the same or better quality than an almost-asleep human? What next, Alexa whispering into my ear as I fall asleep to give me an inexplicable urge to shop on Amazon the next morning? And there’s also the fact that while GPT-2’s musings don’t actually make sense, it takes careful reading to discover this, because if you just skim, it flows like normal natural text. To quote from a blog post by Sarah Constantin on this very topic, “OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot.” I agree wholeheartedly when she mentions how hard it is to stay focused while reading a GPT-2 sample without slipping into skimming mode. I find it literally impossible. I cannot say with any confidence that I have entirely read any of the GPT-2 samples. Both Sarah and Robin (from the previous paragraph) write about how social norms might change given that this level of machine intelligence is possible: humans might adopt cues to signal that they are in fact not bots, other humans might catch the presence or absence of these cues in ways the bots can’t, and so on. But I think this is mostly optimistic. In my unqualified opinion, more likely scenarios include spammy Wikipedia edits that are extremely hard to catch, fake news that seems very much like real news unless you make the effort to fact-check it online yourself, and copypasta that’s somehow, uh, superior to the existing kind?

But this is greatly exciting too. I think it’s really interesting how advances in machine learning and results from neural networks of increasing complexity are starting to tell us more about the biological, neuron-based brains they were originally modeled after. For example, researchers recently found that they could construct visual stimuli that activate specific visual neurons far beyond what natural stimuli can, by working an image-recognition neural network ‘in reverse’/generatively (sketched below). Or the study (which I can’t find a link to, but which I heard about in my human-brain class last semester) that found number neurons (neurons in animal brains that fire when there are exactly/approximately X objects in an image) develop in an image-classifier network even without any number tasks during training. Or a machine learning paradigm called a Boltzmann machine that explains dreams (Scott links to this page too, and the page says, “Unfortunately the Boltzmann Machine can only be understood by using Math. So you’ll like it if you know math too.”). I used to doubt that machine learning buzzwords would ever be significant in the broader goal of understanding and creating intelligence, because they seem like mere applications of a few statistical principles. But hey, if intelligence itself is the generalized application of a few statistical principles, then I’m starting to change my mind.
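For the curious, here’s roughly what ‘working a network in reverse’ looks like, as a minimal sketch assuming PyTorch and torchvision (the layer and channel indices here are arbitrary illustrations, not the ones from any particular study): freeze a pretrained classifier and gradient-ascend the input image itself until a chosen unit fires as strongly as possible.

```python
import torch
from torchvision import models

# Load a pretrained image classifier and freeze it; only the image will change.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Record the activations of one (arbitrarily chosen) conv layer via a hook.
acts = {}
model.features[10].register_forward_hook(
    lambda module, inp, out: acts.update(out=out))

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    model(img)
    # Maximize the mean activation of channel 42 (an arbitrary pick)
    # by minimizing its negative.
    loss = -acts["out"][0, 42].mean()
    loss.backward()
    opt.step()
# img is now an input that this unit responds to far more strongly
# than typical natural images do.
```

Published feature-visualization work adds regularizers (jitter, blurring, and so on) so the resulting images look more natural, but the core idea is just this gradient ascent on the input.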

Please can I machine-learn machine learning before the machines machine-learn me thanks. Coursera here I come.

