{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Anna KaRNNa\n",
"\n",
"In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\n",
"\n",
"This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.\n",
"\n",
"<img src=\"assets/charseq.jpeg\" width=\"500\">"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"import time\n",
"from collections import namedtuple\n",
"\n",
"import numpy as np\n",
"import tensorflow as tf"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"with open('anna.txt', 'r') as f:\n",
" text=f.read()\n",
"vocab = set(text)\n",
"vocab_to_int = {c: i for i, c in enumerate(vocab)}\n",
"int_to_vocab = dict(enumerate(vocab))\n",
"chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"'Chapter 1\\n\\n\\nHappy families are all alike; every unhappy family is unhappy in its own\\nway.\\n\\nEverythin'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"text[:100]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we can see the characters encoded as integers."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"array([47, 21, 34, 65, 42, 26, 54, 15, 1, 32, 32, 32, 19, 34, 65, 65, 57,\n",
" 15, 74, 34, 5, 80, 27, 80, 26, 68, 15, 34, 54, 26, 15, 34, 27, 27,\n",
" 15, 34, 27, 80, 7, 26, 16, 15, 26, 78, 26, 54, 57, 15, 66, 24, 21,\n",
" 34, 65, 65, 57, 15, 74, 34, 5, 80, 27, 57, 15, 80, 68, 15, 66, 24,\n",
" 21, 34, 65, 65, 57, 15, 80, 24, 15, 80, 42, 68, 15, 40, 2, 24, 32,\n",
" 2, 34, 57, 12, 32, 32, 17, 78, 26, 54, 57, 42, 21, 80, 24], dtype=int32)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chars[:100]"
]
},
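{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can decode those integers back into characters with the `int_to_vocab` dictionary and confirm we recover the original text. This is just a sketch using the mappings defined above; it should print `True`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Decode the first 100 integers back to characters; this should match text[:100]\n",
"''.join(int_to_vocab[i] for i in chars[:100]) == text[:100]"
]
},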
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"82"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.max(chars)+1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Making training and validation batches\n",
"\n",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\n",
"\n",
"Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\n",
"\n",
"The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n",
" \"\"\" \n",
" Split character data into training and validation sets, inputs and targets for each set.\n",
" \n",
" Arguments\n",
" ---------\n",
" chars: character array\n",
" batch_size: Size of examples in each of batch\n",
" num_steps: Number of sequence steps to keep in the input and pass to the network\n",
" split_frac: Fraction of batches to keep in the training set\n",
" \n",
" \n",
" Returns train_x, train_y, val_x, val_y\n",
" \"\"\"\n",
" \n",
" slice_size = batch_size * num_steps\n",
" n_batches = int(len(chars) / slice_size)\n",
" \n",
" # Drop the last few characters to make only full batches\n",
" x = chars[: n_batches*slice_size]\n",
" y = chars[1: n_batches*slice_size + 1]\n",
" \n",
" # Split the data into batch_size slices, then stack them into a 2D matrix \n",
" x = np.stack(np.split(x, batch_size))\n",
" y = np.stack(np.split(y, batch_size))\n",
" \n",
" # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n",
" \n",
" # Split into training and validation sets, keep the first split_frac batches for training\n",
" split_idx = int(n_batches*split_frac)\n",
" train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n",
" val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n",
" \n",
" return train_x, train_y, val_x, val_y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"train_x, train_y, val_x, val_y = split_data(chars, 10, 50)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"(10, 178650)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_x.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"array([[47, 21, 34, 65, 42, 26, 54, 15, 1, 32, 32, 32, 19, 34, 65, 65, 57,\n",
" 15, 74, 34, 5, 80, 27, 80, 26, 68, 15, 34, 54, 26, 15, 34, 27, 27,\n",
" 15, 34, 27, 80, 7, 26, 16, 15, 26, 78, 26, 54, 57, 15, 66, 24],\n",
" [15, 34, 5, 15, 24, 40, 42, 15, 37, 40, 80, 24, 37, 15, 42, 40, 15,\n",
" 68, 42, 34, 57, 36, 72, 15, 34, 24, 68, 2, 26, 54, 26, 38, 15, 35,\n",
" 24, 24, 34, 36, 15, 68, 5, 80, 27, 80, 24, 37, 36, 15, 11, 66],\n",
" [78, 80, 24, 12, 32, 32, 72, 67, 26, 68, 36, 15, 80, 42, 0, 68, 15,\n",
" 68, 26, 42, 42, 27, 26, 38, 12, 15, 10, 21, 26, 15, 65, 54, 80, 49,\n",
" 26, 15, 80, 68, 15, 5, 34, 37, 24, 80, 74, 80, 49, 26, 24, 42],\n",
" [24, 15, 38, 66, 54, 80, 24, 37, 15, 21, 80, 68, 15, 49, 40, 24, 78,\n",
" 26, 54, 68, 34, 42, 80, 40, 24, 15, 2, 80, 42, 21, 15, 21, 80, 68,\n",
" 32, 11, 54, 40, 42, 21, 26, 54, 15, 2, 34, 68, 15, 42, 21, 80],\n",
" [15, 80, 42, 15, 80, 68, 36, 15, 68, 80, 54, 81, 72, 15, 68, 34, 80,\n",
" 38, 15, 42, 21, 26, 15, 40, 27, 38, 15, 5, 34, 24, 36, 15, 37, 26,\n",
" 42, 42, 80, 24, 37, 15, 66, 65, 36, 15, 34, 24, 38, 32, 49, 54],\n",
" [15, 79, 42, 15, 2, 34, 68, 32, 40, 24, 27, 57, 15, 2, 21, 26, 24,\n",
" 15, 42, 21, 26, 15, 68, 34, 5, 26, 15, 26, 78, 26, 24, 80, 24, 37,\n",
" 15, 21, 26, 15, 49, 34, 5, 26, 15, 42, 40, 15, 42, 21, 26, 80],\n",
" [21, 26, 24, 15, 49, 40, 5, 26, 15, 74, 40, 54, 15, 5, 26, 36, 72,\n",
" 15, 68, 21, 26, 15, 68, 34, 80, 38, 36, 15, 34, 24, 38, 15, 2, 26,\n",
" 24, 42, 15, 11, 34, 49, 7, 15, 80, 24, 42, 40, 15, 42, 21, 26],\n",
" [16, 15, 11, 66, 42, 15, 24, 40, 2, 15, 68, 21, 26, 15, 2, 40, 66,\n",
" 27, 38, 15, 54, 26, 34, 38, 80, 27, 57, 15, 21, 34, 78, 26, 15, 68,\n",
" 34, 49, 54, 80, 74, 80, 49, 26, 38, 36, 15, 24, 40, 42, 15, 5],\n",
" [42, 15, 80, 68, 24, 0, 42, 12, 15, 10, 21, 26, 57, 0, 54, 26, 15,\n",
" 65, 54, 40, 65, 54, 80, 26, 42, 40, 54, 68, 15, 40, 74, 15, 34, 15,\n",
" 68, 40, 54, 42, 36, 32, 11, 66, 42, 15, 2, 26, 0, 54, 26, 15],\n",
" [15, 68, 34, 80, 38, 15, 42, 40, 15, 21, 26, 54, 68, 26, 27, 74, 36,\n",
" 15, 34, 24, 38, 15, 11, 26, 37, 34, 24, 15, 34, 37, 34, 80, 24, 15,\n",
" 74, 54, 40, 5, 15, 42, 21, 26, 15, 11, 26, 37, 80, 24, 24, 80]], dtype=int32)"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_x[:,:50]"
]
},
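{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also double-check that the targets really are the inputs shifted one character over. This is just a quick sanity check on the arrays we built above; it should print `True`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Each target column should equal the next input column\n",
"(train_x[:, 1:50] == train_y[:, :49]).all()"
]
},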
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def get_batch(arrs, num_steps):\n",
" batch_size, slice_size = arrs[0].shape\n",
" \n",
" n_batches = int(slice_size/num_steps)\n",
" for b in range(n_batches):\n",
" yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]"
]
},
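{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a quick look at what the generator yields: each batch is a `batch_size x num_steps` window into the arrays above. This is a minimal sketch using the 10 x 178650 arrays we already built, so each window should come out as 10 x 50."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Grab the first window from the training arrays and check its shape\n",
"batch_x, batch_y = next(get_batch([train_x, train_y], 50))\n",
"batch_x.shape, batch_y.shape"
]
},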
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the model\n",
"\n",
"Below is a function where I build the graph for the network."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n",
" learning_rate=0.001, grad_clip=5, sampling=False):\n",
" \n",
" # When we're using this network for sampling later, we'll be passing in\n",
" # one character at a time, so providing an option for that\n",
" if sampling == True:\n",
" batch_size, num_steps = 1, 1\n",
"\n",
" tf.reset_default_graph()\n",
" \n",
" # Declare placeholders we'll feed into the graph\n",
" inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n",
" targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n",
" \n",
" # Keep probability placeholder for drop out layers\n",
" keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n",
" \n",
" # One-hot encoding the input and target characters\n",
" x_one_hot = tf.one_hot(inputs, num_classes)\n",
" y_one_hot = tf.one_hot(targets, num_classes)\n",
"\n",
" ### Build the RNN layers\n",
" # Use a basic LSTM cell\n",
" lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n",
" \n",
" # Add dropout to the cell\n",
" drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n",
" \n",
" # Stack up multiple LSTM layers, for deep learning\n",
" cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n",
" initial_state = cell.zero_state(batch_size, tf.float32)\n",
"\n",
" ### Run the data through the RNN layers\n",
" # This makes a list where each element is one step in the sequence\n",
" rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n",
" \n",
" # Run each sequence step through the RNN and collect the outputs\n",
" outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n",
" final_state = state\n",
" \n",
" # Reshape output so it's a bunch of rows, one output row for each step for each batch\n",
" seq_output = tf.concat(outputs, axis=1)\n",
" output = tf.reshape(seq_output, [-1, lstm_size])\n",
" \n",
" # Now connect the RNN outputs to a softmax layer\n",
" with tf.variable_scope('softmax'):\n",
" softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))\n",
" softmax_b = tf.Variable(tf.zeros(num_classes))\n",
" \n",
" # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n",
" # of rows of logit outputs, one for each step and batch\n",
" logits = tf.matmul(output, softmax_w) + softmax_b\n",
" \n",
" # Use softmax to get the probabilities for predicted characters\n",
" preds = tf.nn.softmax(logits, name='predictions')\n",
" \n",
" # Reshape the targets to match the logits\n",
" y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n",
" loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n",
" cost = tf.reduce_mean(loss)\n",
"\n",
" # Optimizer for training, using gradient clipping to control exploding gradients\n",
" tvars = tf.trainable_variables()\n",
" grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n",
" train_op = tf.train.AdamOptimizer(learning_rate)\n",
" optimizer = train_op.apply_gradients(zip(grads, tvars))\n",
" \n",
" # Export the nodes\n",
" # NOTE: I'm using a namedtuple here because I think they are cool\n",
" export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n",
" 'keep_prob', 'cost', 'preds', 'optimizer']\n",
" Graph = namedtuple('Graph', export_nodes)\n",
" local_dict = locals()\n",
" graph = Graph(*[local_dict[each] for each in export_nodes])\n",
" \n",
" return graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hyperparameters\n",
"\n",
"Here I'm defining the hyperparameters for the network. \n",
"\n",
"* `batch_size` - Number of sequences running through the network in one pass.\n",
"* `num_steps` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\n",
"* `lstm_size` - The number of units in the hidden layers.\n",
"* `num_layers` - Number of hidden LSTM layers to use\n",
"* `learning_rate` - Learning rate for training\n",
"* `keep_prob` - The dropout keep probability when training. If you're network is overfitting, try decreasing this.\n",
"\n",
"Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).\n",
"\n",
"> ## Tips and Tricks\n",
"\n",
">### Monitoring Validation Loss vs. Training Loss\n",
">If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n",
"\n",
"> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\n",
"> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n",
"\n",
"> ### Approximate number of parameters\n",
"\n",
"> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n",
"\n",
"> - The number of parameters in your model. This is printed when you start training.\n",
"> - The size of your dataset. 1MB file is approximately 1 million characters.\n",
"\n",
">These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n",
"\n",
"> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.\n",
"> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n",
"\n",
"> ### Best models strategy\n",
"\n",
">The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\n",
"\n",
">It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\n",
"\n",
">By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.\n"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"batch_size = 100\n",
"num_steps = 100 \n",
"lstm_size = 512\n",
"num_layers = 2\n",
"learning_rate = 0.001\n",
"keep_prob = 0.5"
]
},
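{
"cell_type": "markdown",
"metadata": {},
"source": [
"Karpathy's advice above suggests comparing the number of parameters in the model to the size of the dataset. Below is a rough, back-of-the-envelope estimate for the settings we just picked. This is only a sketch, and `approx_n_params` is a throwaway helper: it counts each `BasicLSTMCell`'s single `[input_size + lstm_size, 4*lstm_size]` kernel and `4*lstm_size` bias, plus the softmax weights and biases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def approx_n_params(num_classes, lstm_size, num_layers):\n",
"    # First LSTM layer sees one-hot inputs of size num_classes\n",
"    n = 4 * (num_classes + lstm_size + 1) * lstm_size\n",
"    # Deeper layers see the previous layer's lstm_size outputs\n",
"    n += (num_layers - 1) * 4 * (2*lstm_size + 1) * lstm_size\n",
"    # Softmax weights and biases\n",
"    n += lstm_size * num_classes + num_classes\n",
"    return n\n",
"\n",
"approx_n_params(len(vocab), lstm_size, num_layers)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This works out to a few million parameters, the same order of magnitude as the number of characters in the book, which is in line with the advice above."
]
},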
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training\n",
"\n",
"Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.\n",
"\n",
"Here I'm saving checkpoints with the format\n",
"\n",
"`i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true,
"scrolled": true
},
"outputs": [],
"source": [
"epochs = 20\n",
"# Save every N iterations\n",
"save_every_n = 200\n",
"train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n",
"\n",
"model = build_rnn(len(vocab), \n",
" batch_size=batch_size,\n",
" num_steps=num_steps,\n",
" learning_rate=learning_rate,\n",
" lstm_size=lstm_size,\n",
" num_layers=num_layers)\n",
"\n",
"saver = tf.train.Saver(max_to_keep=100)\n",
"with tf.Session() as sess:\n",
" sess.run(tf.global_variables_initializer())\n",
" \n",
" # Use the line below to load a checkpoint and resume training\n",
" #saver.restore(sess, 'checkpoints/______.ckpt')\n",
" \n",
" n_batches = int(train_x.shape[1]/num_steps)\n",
" iterations = n_batches * epochs\n",
" for e in range(epochs):\n",
" \n",
" # Train network\n",
" new_state = sess.run(model.initial_state)\n",
" loss = 0\n",
" for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n",
" iteration = e*n_batches + b\n",
" start = time.time()\n",
" feed = {model.inputs: x,\n",
" model.targets: y,\n",
" model.keep_prob: keep_prob,\n",
" model.initial_state: new_state}\n",
" batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n",
" feed_dict=feed)\n",
" loss += batch_loss\n",
" end = time.time()\n",
" print('Epoch {}/{} '.format(e+1, epochs),\n",
" 'Iteration {}/{}'.format(iteration, iterations),\n",
" 'Training loss: {:.4f}'.format(loss/b),\n",
" '{:.4f} sec/batch'.format((end-start)))\n",
" \n",
" \n",
" if (iteration%save_every_n == 0) or (iteration == iterations):\n",
" # Check performance, notice dropout has been set to 1\n",
" val_loss = []\n",
" new_state = sess.run(model.initial_state)\n",
" for x, y in get_batch([val_x, val_y], num_steps):\n",
" feed = {model.inputs: x,\n",
" model.targets: y,\n",
" model.keep_prob: 1.,\n",
" model.initial_state: new_state}\n",
" batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n",
" val_loss.append(batch_loss)\n",
"\n",
" print('Validation loss:', np.mean(val_loss),\n",
" 'Saving checkpoint!')\n",
" saver.save(sess, \"checkpoints/i{}_l{}_v{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Saved checkpoints\n",
"\n",
"Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"tf.train.get_checkpoint_state('checkpoints')"
]
},
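{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you just want the most recent checkpoint (for example, to fill in the sampling cell below), `tf.train.latest_checkpoint` returns its path. This is a small sketch and assumes checkpoints have already been written to the `checkpoints` directory; it returns `None` otherwise."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Path of the most recently saved checkpoint, or None if there isn't one yet\n",
"tf.train.latest_checkpoint('checkpoints')"
]
},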
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sampling\n",
"\n",
"Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\n",
"\n",
"The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def pick_top_n(preds, vocab_size, top_n=5):\n",
" p = np.squeeze(preds)\n",
" p[np.argsort(p)[:-top_n]] = 0\n",
" p = p / np.sum(p)\n",
" c = np.random.choice(vocab_size, 1, p=p)[0]\n",
" return c"
]
},
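{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what `pick_top_n` does, here's a toy example with a made-up probability distribution over a six-character vocabulary (the numbers are purely illustrative). Only the three most likely indices (1, 3, and 5) can ever be sampled, so repeated runs will bounce between those."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Zero out all but the top 3 probabilities, renormalize, and sample an index\n",
"fake_preds = np.array([[0.02, 0.40, 0.05, 0.30, 0.03, 0.20]])\n",
"pick_top_n(fake_preds, 6, top_n=3)"
]
},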
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n",
" samples = [c for c in prime]\n",
" model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n",
" saver = tf.train.Saver()\n",
" with tf.Session() as sess:\n",
" saver.restore(sess, checkpoint)\n",
" new_state = sess.run(model.initial_state)\n",
" for c in prime:\n",
" x = np.zeros((1, 1))\n",
" x[0,0] = vocab_to_int[c]\n",
" feed = {model.inputs: x,\n",
" model.keep_prob: 1.,\n",
" model.initial_state: new_state}\n",
" preds, new_state = sess.run([model.preds, model.final_state], \n",
" feed_dict=feed)\n",
"\n",
" c = pick_top_n(preds, len(vocab))\n",
" samples.append(int_to_vocab[c])\n",
"\n",
" for i in range(n_samples):\n",
" x[0,0] = c\n",
" feed = {model.inputs: x,\n",
" model.keep_prob: 1.,\n",
" model.initial_state: new_state}\n",
" preds, new_state = sess.run([model.preds, model.final_state], \n",
" feed_dict=feed)\n",
"\n",
" c = pick_top_n(preds, len(vocab))\n",
" samples.append(int_to_vocab[c])\n",
" \n",
" return ''.join(samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, pass in the path to a checkpoint and sample from the network."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"checkpoint = \"checkpoints/____.ckpt\"\n",
"samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\n",
"print(samp)"
]
}
],
"metadata": {
"hide_input": false,
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}