{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Language Translation\n",
"In this project, youre going to take a peek into the realm of neural network machine translation. Youll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\n",
"## Get the Data\n",
"Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"import helper\n",
"import problem_unittests as tests\n",
"\n",
"source_path = '/data/small_vocab_en'\n",
"target_path = '/data/small_vocab_fr'\n",
"source_text = helper.load_data(source_path)\n",
"target_text = helper.load_data(target_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Data\n",
"Play around with view_sentence_range to view different parts of the data."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset Stats\n",
"Roughly the number of unique words: 227\n",
"Number of sentences: 137861\n",
"Average number of words in a sentence: 13.225277634719028\n",
"\n",
"English sentences 0 to 10:\n",
"new jersey is sometimes quiet during autumn , and it is snowy in april .\n",
"the united states is usually chilly during july , and it is usually freezing in november .\n",
"california is usually quiet during march , and it is usually hot in june .\n",
"the united states is sometimes mild during june , and it is cold in september .\n",
"your least liked fruit is the grape , but my least liked is the apple .\n",
"his favorite fruit is the orange , but my favorite is the grape .\n",
"paris is relaxing during december , but it is usually chilly in july .\n",
"new jersey is busy during spring , and it is never hot in march .\n",
"our least liked fruit is the lemon , but my least liked is the grape .\n",
"the united states is sometimes busy during january , and it is sometimes warm in november .\n",
"\n",
"French sentences 0 to 10:\n",
"new jersey est parfois calme pendant l' automne , et il est neigeux en avril .\n",
"les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .\n",
"california est généralement calme en mars , et il est généralement chaud en juin .\n",
"les états-unis est parfois légère en juin , et il fait froid en septembre .\n",
"votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .\n",
"son fruit préféré est l'orange , mais mon préféré est le raisin .\n",
"paris est relaxant en décembre , mais il est généralement froid en juillet .\n",
"new jersey est occupé au printemps , et il est jamais chaude en mars .\n",
"notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .\n",
"les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .\n"
]
}
],
"source": [
"view_sentence_range = (0, 10)\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"import numpy as np\n",
"\n",
"print('Dataset Stats')\n",
"print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n",
"\n",
"sentences = source_text.split('\\n')\n",
"word_counts = [len(sentence.split()) for sentence in sentences]\n",
"print('Number of sentences: {}'.format(len(sentences)))\n",
"print('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n",
"\n",
"print()\n",
"print('English sentences {} to {}:'.format(*view_sentence_range))\n",
"print('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\n",
"print()\n",
"print('French sentences {} to {}:'.format(*view_sentence_range))\n",
"print('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implement Preprocessing Function\n",
"### Text to Word Ids\n",
"As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.\n",
"\n",
"You can get the `<EOS>` word id by doing:\n",
"```python\n",
"target_vocab_to_int['<EOS>']\n",
"```\n",
"You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`."
]
},
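{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration only, here is what the conversion should do to a single target sentence, using a small hypothetical vocabulary (the real `target_vocab_to_int` is built by `helper` during preprocessing):\n",
"```python\n",
"# Hypothetical toy vocabulary; the real mapping comes from the preprocessed data\n",
"target_vocab_to_int = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'il': 4, 'fait': 5, 'froid': 6}\n",
"\n",
"sentence = 'il fait froid'\n",
"ids = [target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in sentence.split()]\n",
"ids.append(target_vocab_to_int['<EOS>'])\n",
"print(ids)  # [4, 5, 6, 1], with the <EOS> id appended at the end\n",
"```"
]
},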
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n",
" \"\"\"\n",
" Convert source and target text to proper word ids\n",
" :param source_text: String that contains all the source text.\n",
" :param target_text: String that contains all the target text.\n",
" :param source_vocab_to_int: Dictionary to go from the source words to an id\n",
" :param target_vocab_to_int: Dictionary to go from the target words to an id\n",
" :return: A tuple of lists (source_id_text, target_id_text)\n",
" \"\"\"\n",
" \n",
" # Process source text\n",
" words = [[word for word in line.split()] for line in source_text.split('\\n')]\n",
" source_word_ids = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()] for line in source_text.split('\\n')] # use get to replace ignored/unknown characters by <UNK>\n",
" \n",
" \n",
" # Process target text\n",
" target_word_ids = [[target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\\n')]\n",
" \n",
" \n",
" return source_word_ids, target_word_ids\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_text_to_ids(text_to_ids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Preprocess all the data and save it\n",
"Running the code cell below will preprocess all the data and save it to file."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"helper.preprocess_and_save_data(source_path, target_path, text_to_ids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Check Point\n",
"This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"import numpy as np\n",
"import helper\n",
"import problem_unittests as tests\n",
"\n",
"(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Version of TensorFlow and Access to GPU\n",
"This will check to make sure you have the correct version of TensorFlow and access to a GPU"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TensorFlow Version: 1.2.1\n",
"Default GPU Device: /gpu:0\n"
]
}
],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"from distutils.version import LooseVersion\n",
"import warnings\n",
"import tensorflow as tf\n",
"from tensorflow.python.layers.core import Dense\n",
"\n",
"# Check TensorFlow Version\n",
"assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\n",
"print('TensorFlow Version: {}'.format(tf.__version__))\n",
"\n",
"# Check for a GPU\n",
"if not tf.test.gpu_device_name():\n",
" warnings.warn('No GPU found. Please use a GPU to train your neural network.')\n",
"else:\n",
" print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build the Neural Network\n",
"You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n",
"- `model_inputs`\n",
"- `process_decoder_input`\n",
"- `encoding_layer`\n",
"- `decoding_layer_train`\n",
"- `decoding_layer_infer`\n",
"- `decoding_layer`\n",
"- `seq2seq_model`\n",
"\n",
"### Input\n",
"Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n",
"\n",
"- Input text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\n",
"- Targets placeholder with rank 2.\n",
"- Learning rate placeholder with rank 0.\n",
"- Keep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n",
"- Target sequence length placeholder named \"target_sequence_length\" with rank 1\n",
"- Max target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.\n",
"- Source sequence length placeholder named \"source_sequence_length\" with rank 1\n",
"\n",
"Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ERROR:tensorflow:==================================\n",
"Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>):\n",
"<tf.Operation 'assert_rank_2/Assert/Assert' type=Assert>\n",
"If you want to mark it as used call its \"mark_used()\" method.\n",
"It was originally created here:\n",
"['File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\\n \"__main__\", mod_spec)', 'File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\\n exec(code, run_globals)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel_launcher.py\", line 16, in <module>\\n app.launch_new_instance()', 'File \"/usr/local/lib/python3.5/site-packages/traitlets/config/application.py\", line 658, in launch_instance\\n app.start()', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelapp.py\", line 477, in start\\n ioloop.IOLoop.instance().start()', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/ioloop.py\", line 177, in start\\n super(ZMQIOLoop, self).start()', 'File \"/usr/local/lib/python3.5/site-packages/tornado/ioloop.py\", line 888, in start\\n handler_func(fd_obj, events)', 'File \"/usr/local/lib/python3.5/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\\n return fn(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 440, in _handle_events\\n self._handle_recv()', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 472, in _handle_recv\\n self._run_callback(callback, msg)', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 414, in _run_callback\\n callback(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\\n return fn(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 283, in dispatcher\\n return self.dispatch_shell(stream, msg)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 235, in dispatch_shell\\n handler(stream, idents, msg)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 399, in execute_request\\n user_expressions, allow_stdin)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/ipkernel.py\", line 196, in do_execute\\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/zmqshell.py\", line 533, in run_cell\\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2698, in run_cell\\n interactivity=interactivity, compiler=compiler, result=result)', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2808, in run_ast_nodes\\n if self.run_code(code, result):', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2862, in run_code\\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File \"<ipython-input-7-6c45c8fd5ae4>\", line 22, in <module>\\n tests.test_model_inputs(model_inputs)', 'File \"/output/problem_unittests.py\", line 106, in test_model_inputs\\n assert tf.assert_rank(lr, 0, message=\\'Learning Rate has wrong rank\\')', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py\", line 617, in assert_rank\\n dynamic_condition, data, summarize)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py\", line 571, in _assert_rank_condition\\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 170, in wrapped\\n return _add_should_use_warning(fn(*args, **kwargs))', 'File 
\"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 139, in _add_should_use_warning\\n wrapped = TFShouldUseWarningWrapper(x)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 96, in __init__\\n stack = [s.strip() for s in traceback.format_stack()]']\n",
"==================================\n",
"ERROR:tensorflow:==================================\n",
"Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>):\n",
"<tf.Operation 'assert_rank_3/Assert/Assert' type=Assert>\n",
"If you want to mark it as used call its \"mark_used()\" method.\n",
"It was originally created here:\n",
"['File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\\n \"__main__\", mod_spec)', 'File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\\n exec(code, run_globals)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel_launcher.py\", line 16, in <module>\\n app.launch_new_instance()', 'File \"/usr/local/lib/python3.5/site-packages/traitlets/config/application.py\", line 658, in launch_instance\\n app.start()', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelapp.py\", line 477, in start\\n ioloop.IOLoop.instance().start()', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/ioloop.py\", line 177, in start\\n super(ZMQIOLoop, self).start()', 'File \"/usr/local/lib/python3.5/site-packages/tornado/ioloop.py\", line 888, in start\\n handler_func(fd_obj, events)', 'File \"/usr/local/lib/python3.5/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\\n return fn(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 440, in _handle_events\\n self._handle_recv()', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 472, in _handle_recv\\n self._run_callback(callback, msg)', 'File \"/usr/local/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py\", line 414, in _run_callback\\n callback(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\\n return fn(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 283, in dispatcher\\n return self.dispatch_shell(stream, msg)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 235, in dispatch_shell\\n handler(stream, idents, msg)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/kernelbase.py\", line 399, in execute_request\\n user_expressions, allow_stdin)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/ipkernel.py\", line 196, in do_execute\\n res = shell.run_cell(code, store_history=store_history, silent=silent)', 'File \"/usr/local/lib/python3.5/site-packages/ipykernel/zmqshell.py\", line 533, in run_cell\\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2698, in run_cell\\n interactivity=interactivity, compiler=compiler, result=result)', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2808, in run_ast_nodes\\n if self.run_code(code, result):', 'File \"/usr/local/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2862, in run_code\\n exec(code_obj, self.user_global_ns, self.user_ns)', 'File \"<ipython-input-7-6c45c8fd5ae4>\", line 22, in <module>\\n tests.test_model_inputs(model_inputs)', 'File \"/output/problem_unittests.py\", line 107, in test_model_inputs\\n assert tf.assert_rank(keep_prob, 0, message=\\'Keep Probability has wrong rank\\')', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py\", line 617, in assert_rank\\n dynamic_condition, data, summarize)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/check_ops.py\", line 571, in _assert_rank_condition\\n return control_flow_ops.Assert(condition, data, summarize=summarize)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 170, in wrapped\\n return _add_should_use_warning(fn(*args, **kwargs))', 'File 
\"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 139, in _add_should_use_warning\\n wrapped = TFShouldUseWarningWrapper(x)', 'File \"/usr/local/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py\", line 96, in __init__\\n stack = [s.strip() for s in traceback.format_stack()]']\n",
"==================================\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def model_inputs():\n",
" \"\"\"\n",
" Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n",
" :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n",
" max target sequence length, source sequence length)\n",
" \"\"\"\n",
" \n",
" input_text = tf.placeholder(tf.int32, [None, None], name='input')\n",
" targets = tf.placeholder(tf.int32, [None, None], name='targets')\n",
" lr = tf.placeholder(tf.float32, name='learning_rate' )\n",
" keep = tf.placeholder(tf.float32, name='keep_prob')\n",
" target_seq_len = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n",
" max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')\n",
" source_seq_len = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n",
"\n",
" return input_text, targets, lr, keep, target_seq_len, max_target_seq_len, source_seq_len \n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_model_inputs(model_inputs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Process Decoder Input\n",
"Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concat the GO ID to the begining of each batch."
]
},
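{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to do this, illustrated on a toy batch of two target sequences with a hypothetical `<GO>` id of 3 (a minimal sketch, not project data):\n",
"```python\n",
"import tensorflow as tf\n",
"\n",
"# Toy batch of 2 target sequences; 3 stands in for the <GO> id\n",
"target_data = tf.constant([[10, 11, 12, 1],\n",
"                           [20, 21, 22, 1]])\n",
"ending = tf.strided_slice(target_data, [0, 0], [2, -1], [1, 1])  # drop the last word id of each sequence\n",
"dec_input = tf.concat([tf.fill([2, 1], 3), ending], 1)           # prepend the <GO> id\n",
"\n",
"with tf.Session() as sess:\n",
"    print(sess.run(dec_input))\n",
"    # [[ 3 10 11 12]\n",
"    #  [ 3 20 21 22]]\n",
"```"
]
},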
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n",
" \"\"\"\n",
" Preprocess target data for encoding\n",
" :param target_data: Target Placehoder\n",
" :param target_vocab_to_int: Dictionary to go from the target words to an id\n",
" :param batch_size: Batch Size\n",
" :return: Preprocessed target data\n",
" \"\"\"\n",
" # TODO: Implement Function\n",
" ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])\n",
" dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n",
" \n",
" return dec_input\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_process_encoding_input(process_decoder_input)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Encoding\n",
"Implement `encoding_layer()` to create a Encoder RNN layer:\n",
" * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)\n",
" * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)\n",
" * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"from imp import reload\n",
"reload(tests)\n",
"\n",
"def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n",
" source_sequence_length, source_vocab_size, \n",
" encoding_embedding_size):\n",
" \"\"\"\n",
" Create encoding layer\n",
" :param rnn_inputs: Inputs for the RNN\n",
" :param rnn_size: RNN Size\n",
" :param num_layers: Number of layers\n",
" :param keep_prob: Dropout keep probability\n",
" :param source_sequence_length: a list of the lengths of each sequence in the batch\n",
" :param source_vocab_size: vocabulary size of source data\n",
" :param encoding_embedding_size: embedding size of source data\n",
" :return: tuple (RNN output, RNN state)\n",
" \"\"\"\n",
" \n",
" enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n",
" \n",
" #Rnn cell\n",
" def make_cell(rnn_size):\n",
" cell = tf.contrib.rnn.LSTMCell(rnn_size,\n",
" initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n",
" # add dropout layer\n",
" enc_cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n",
" return enc_cell\n",
" \n",
" enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n",
" \n",
" enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n",
"\n",
" \n",
" return enc_output, enc_state\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_encoding_layer(encoding_layer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Decoding - Training\n",
"Create a training decoding layer:\n",
"* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) \n",
"* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)\n",
"* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"from IPython.core.debugger import Tracer\n",
"\n",
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n",
" target_sequence_length, max_summary_length, \n",
" output_layer, keep_prob):\n",
" \"\"\"\n",
" Create a decoding layer for training\n",
" :param encoder_state: Encoder State\n",
" :param dec_cell: Decoder RNN Cell\n",
" :param dec_embed_input: Decoder embedded input\n",
" :param target_sequence_length: The lengths of each sequence in the target batch\n",
" :param max_summary_length: The length of the longest sequence in the batch\n",
" :param output_layer: Function to apply the output layer\n",
" :param keep_prob: Dropout keep probability\n",
" :return: BasicDecoderOutput containing training logits and sample_id\n",
" \"\"\"\n",
" # Question: Why are we receiving keep_prob here\n",
" # Where would we add dropout layer here\n",
" \n",
" # Helper for the training process. Used by BasicDecoder to read inputs.\n",
" training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n",
" sequence_length=target_sequence_length,\n",
" time_major=False)\n",
"\n",
" \n",
" # Basic decoder\n",
" training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n",
" training_helper,\n",
" encoder_state,\n",
" output_layer) \n",
"\n",
" # Perform dynamic decoding using the decoder\n",
" training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n",
" impute_finished=True,\n",
" maximum_iterations=max_summary_length)[0]\n",
" \n",
" return training_decoder_output\n",
"\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_decoding_layer_train(decoding_layer_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Decoding - Inference\n",
"Create inference decoder:\n",
"* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)\n",
"* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)\n",
"* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n",
" end_of_sequence_id, max_target_sequence_length,\n",
" vocab_size, output_layer, batch_size, keep_prob):\n",
" \"\"\"\n",
" Create a decoding layer for inference\n",
" :param encoder_state: Encoder state\n",
" :param dec_cell: Decoder RNN Cell\n",
" :param dec_embeddings: Decoder embeddings\n",
" :param start_of_sequence_id: GO ID\n",
" :param end_of_sequence_id: EOS Id\n",
" :param max_target_sequence_length: Maximum length of target sequences\n",
" :param vocab_size: Size of decoder/target vocabulary\n",
" :param decoding_scope: TenorFlow Variable Scope for decoding\n",
" :param output_layer: Function to apply the output layer\n",
" :param batch_size: Batch size\n",
" :param keep_prob: Dropout keep probability\n",
" :return: BasicDecoderOutput containing inference logits and sample_id\n",
" \"\"\"\n",
" # Start from GO\n",
" start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')\n",
"\n",
" \n",
" # Helper for the inference process.\n",
" inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n",
" start_tokens,\n",
" end_of_sequence_id)\n",
"\n",
" # Basic decoder\n",
" inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n",
" inference_helper,\n",
" encoder_state,\n",
" output_layer)\n",
"\n",
" # Perform dynamic decoding using the decoder\n",
" inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n",
" impute_finished=True,\n",
" maximum_iterations=max_target_sequence_length)[0]\n",
"\n",
" return inference_decoder_output\n",
"\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_decoding_layer_infer(decoding_layer_infer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the Decoding Layer\n",
"Implement `decoding_layer()` to create a Decoder RNN layer.\n",
"\n",
"* Embed the target sequences\n",
"* Construct the decoder LSTM cell (just like you constructed the encoder cell above)\n",
"* Create an output layer to map the outputs of the decoder to the elements of our vocabulary\n",
"* Use the your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.\n",
"* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.\n",
"\n",
"Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference."
]
},
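{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the variable-sharing pattern the note above refers to (`linear` is a hypothetical helper, not part of this project): the first `decode` scope creates the variables, the second reuses them. The implementation below achieves the same thing with `scope.reuse_variables()` inside a second `decode` scope.\n",
"```python\n",
"import tensorflow as tf\n",
"\n",
"def linear(x):\n",
"    # get_variable returns the same 'decode/w' variable in both scopes below\n",
"    w = tf.get_variable('w', shape=[2, 2], initializer=tf.ones_initializer())\n",
"    return tf.matmul(x, w)\n",
"\n",
"x = tf.placeholder(tf.float32, [None, 2])\n",
"\n",
"with tf.variable_scope('decode'):\n",
"    train_out = linear(x)   # creates decode/w\n",
"\n",
"with tf.variable_scope('decode', reuse=True):\n",
"    infer_out = linear(x)   # reuses decode/w instead of creating a new variable\n",
"```"
]
},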
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def decoding_layer(dec_input, encoder_state,\n",
" target_sequence_length, max_target_sequence_length,\n",
" rnn_size,\n",
" num_layers, target_vocab_to_int, target_vocab_size,\n",
" batch_size, keep_prob, decoding_embedding_size):\n",
" \"\"\"\n",
" Create decoding layer\n",
" :param dec_input: Decoder input\n",
" :param encoder_state: Encoder state\n",
" :param target_sequence_length: The lengths of each sequence in the target batch\n",
" :param max_target_sequence_length: Maximum length of target sequences\n",
" :param rnn_size: RNN Size\n",
" :param num_layers: Number of layers\n",
" :param target_vocab_to_int: Dictionary to go from the target words to an id\n",
" :param target_vocab_size: Size of target vocabulary\n",
" :param batch_size: The size of the batch\n",
" :param keep_prob: Dropout keep probability\n",
" :param decoding_embedding_size: Decoding embedding size\n",
" :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n",
" \"\"\"\n",
" \n",
" # 1. Decoder Embedding\n",
" dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n",
" dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n",
" \n",
" # 2. Construct the decoder cell\n",
" def make_cell(rnn_size):\n",
" dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n",
" initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n",
" \n",
" # Add dropout layer\n",
" dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n",
" \n",
" return dec_cell\n",
" \n",
" dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n",
" \n",
" # 3. Dense layer to translate the decoder's output at each time \n",
" # step into a choice from the target vocabulary\n",
" output_layer = Dense(target_vocab_size,\n",
" kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n",
" \n",
" \n",
" # 4. Get training and inference outputs\n",
" \n",
" ## In training mode\n",
" with tf.variable_scope('decode'):\n",
" \n",
" training_decoder_output = decoding_layer_train(encoder_state, \n",
" dec_cell, \n",
" dec_embed_input, \n",
" target_sequence_length, \n",
" max_target_sequence_length, \n",
" output_layer, \n",
" keep_prob)\n",
" \n",
" ## In inference mode we reuse variables\n",
" with tf.variable_scope('decode') as scope:\n",
" scope.reuse_variables()\n",
" \n",
" inference_decoder_output = decoding_layer_infer(encoder_state, \n",
" dec_cell, \n",
" dec_embeddings, \n",
" target_vocab_to_int['<GO>'], #start of seq ID\n",
" target_vocab_to_int['<EOS>'], # end of seq ID\n",
" max_target_sequence_length, \n",
" target_vocab_size,\n",
" output_layer,\n",
" batch_size,\n",
" keep_prob)\n",
" \n",
" \n",
" \n",
" return training_decoder_output, inference_decoder_output\n",
"\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_decoding_layer(decoding_layer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the Neural Network\n",
"Apply the functions you implemented above to:\n",
"\n",
"- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.\n",
"- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.\n",
"- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n",
" source_sequence_length, target_sequence_length,\n",
" max_target_sentence_length,\n",
" source_vocab_size, target_vocab_size,\n",
" enc_embedding_size, dec_embedding_size,\n",
" rnn_size, num_layers, target_vocab_to_int):\n",
" \"\"\"\n",
" Build the Sequence-to-Sequence part of the neural network\n",
" :param input_data: Input placeholder\n",
" :param target_data: Target placeholder\n",
" :param keep_prob: Dropout keep probability placeholder\n",
" :param batch_size: Batch Size\n",
" :param source_sequence_length: Sequence Lengths of source sequences in the batch\n",
" :param target_sequence_length: Sequence Lengths of target sequences in the batch\n",
" :param source_vocab_size: Source vocabulary size\n",
" :param target_vocab_size: Target vocabulary size\n",
" :param enc_embedding_size: Decoder embedding size\n",
" :param dec_embedding_size: Encoder embedding size\n",
" :param rnn_size: RNN Size\n",
" :param num_layers: Number of layers\n",
" :param target_vocab_to_int: Dictionary to go from the target words to an id\n",
" :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n",
" \"\"\"\n",
" # TODO: Implement Function\n",
" \n",
" \n",
" # Pass the input data through the encoder. We'll ignore the encoder output, but use the state\n",
" _, enc_state = encoding_layer(input_data, \n",
" rnn_size, \n",
" num_layers,\n",
" keep_prob,\n",
" source_sequence_length,\n",
" source_vocab_size, \n",
" enc_embedding_size)\n",
" \n",
" # Prepare the target sequences we'll feed to the decoder in training mode\n",
" dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n",
" \n",
" \n",
" # Pass encoder state and decoder inputs to the decoders\n",
" training_decoder_output, inference_decoder_output = decoding_layer(dec_input, \n",
" enc_state, \n",
" target_sequence_length, \n",
" max_target_sentence_length,\n",
" rnn_size,\n",
" num_layers,\n",
" target_vocab_to_int, \n",
" target_vocab_size,\n",
" batch_size,\n",
" keep_prob,\n",
" dec_embedding_size) \n",
" \n",
" \n",
" return training_decoder_output, inference_decoder_output\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_seq2seq_model(seq2seq_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Neural Network Training\n",
"### Hyperparameters\n",
"Tune the following parameters:\n",
"\n",
"- Set `epochs` to the number of epochs.\n",
"- Set `batch_size` to the batch size.\n",
"- Set `rnn_size` to the size of the RNNs.\n",
"- Set `num_layers` to the number of layers.\n",
"- Set `encoding_embedding_size` to the size of the embedding for the encoder.\n",
"- Set `decoding_embedding_size` to the size of the embedding for the decoder.\n",
"- Set `learning_rate` to the learning rate.\n",
"- Set `keep_probability` to the Dropout keep probability\n",
"- Set `display_step` to state how many steps between each debug output statement"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Number of Epochs\n",
"epochs = 5\n",
"# Batch Size\n",
"batch_size = 256\n",
"# RNN Size\n",
"rnn_size = 256\n",
"# Number of Layers\n",
"num_layers = 2\n",
"# Embedding Size\n",
"encoding_embedding_size = 260\n",
"decoding_embedding_size = 260\n",
"# Learning Rate\n",
"learning_rate = 0.001\n",
"# Dropout Keep Probability\n",
"keep_probability = 0.5\n",
"display_step = 10"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"RUN_NUMBER = 1\n",
"LOG_DIR = '/output/run_{}/logs/'\n",
"CHECKPOINT_DIR = '/output/run_{}/checkpoints/'\n",
"CHECKPOINT_PATH = CHECKPOINT_DIR.format(RUN_NUMBER)\n",
"LOG_PATH = LOG_DIR.format(RUN_NUMBER)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the Graph\n",
"Build the graph using the neural network you implemented."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"save_path = CHECKPOINT_PATH\n",
"(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\n",
"max_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n",
"\n",
"train_graph = tf.Graph()\n",
"with train_graph.as_default():\n",
" input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n",
"\n",
" #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n",
" input_shape = tf.shape(input_data)\n",
"\n",
" train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n",
" targets,\n",
" keep_prob,\n",
" batch_size,\n",
" source_sequence_length,\n",
" target_sequence_length,\n",
" max_target_sequence_length,\n",
" len(source_vocab_to_int),\n",
" len(target_vocab_to_int),\n",
" encoding_embedding_size,\n",
" decoding_embedding_size,\n",
" rnn_size,\n",
" num_layers,\n",
" target_vocab_to_int)\n",
"\n",
"\n",
" training_logits = tf.identity(train_logits.rnn_output, name='logits')\n",
" inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n",
"\n",
" masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n",
"\n",
" with tf.name_scope(\"optimization\"):\n",
" # Loss function\n",
" cost = tf.contrib.seq2seq.sequence_loss(\n",
" training_logits,\n",
" targets,\n",
" masks)\n",
" tf.summary.scalar('cost', cost)\n",
"\n",
" # Optimizer\n",
" optimizer = tf.train.AdamOptimizer(lr)\n",
"\n",
" # Gradient Clipping\n",
" gradients = optimizer.compute_gradients(cost)\n",
" capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n",
" train_op = optimizer.apply_gradients(capped_gradients)\n",
"\n",
" merged = tf.summary.merge_all()\n",
" "
]
},
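{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the masking above, here is what `tf.sequence_mask` produces for a toy pair of target lengths (hypothetical values, not project data); `sequence_loss` uses the mask to zero out the loss at `<PAD>` positions:\n",
"```python\n",
"import tensorflow as tf\n",
"\n",
"# Two target sequences of lengths 2 and 4, padded to a max length of 4\n",
"masks = tf.sequence_mask([2, 4], 4, dtype=tf.float32)\n",
"\n",
"with tf.Session() as sess:\n",
"    print(sess.run(masks))\n",
"    # [[ 1.  1.  0.  0.]\n",
"    #  [ 1.  1.  1.  1.]]\n",
"```"
]
},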
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Batch and pad the source and target sequences"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"def pad_sentence_batch(sentence_batch, pad_int):\n",
" \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n",
" max_sentence = max([len(sentence) for sentence in sentence_batch])\n",
" return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n",
"\n",
"\n",
"def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n",
" \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n",
" for batch_i in range(0, len(sources)//batch_size):\n",
" start_i = batch_i * batch_size\n",
"\n",
" # Slice the right amount for the batch\n",
" sources_batch = sources[start_i:start_i + batch_size]\n",
" targets_batch = targets[start_i:start_i + batch_size]\n",
"\n",
" # Pad\n",
" pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n",
" pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n",
"\n",
" # Need the lengths for the _lengths parameters\n",
" pad_targets_lengths = []\n",
" for target in pad_targets_batch:\n",
" pad_targets_lengths.append(len(target))\n",
"\n",
" pad_source_lengths = []\n",
" for source in pad_sources_batch:\n",
" pad_source_lengths.append(len(source))\n",
"\n",
" yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n"
]
},
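{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, `pad_sentence_batch` pads every sentence in a batch out to the length of the longest one (toy word ids, assuming a pad id of 0):\n",
"```python\n",
"pad_sentence_batch([[4, 5, 6, 1], [7, 8, 1]], 0)\n",
"# [[4, 5, 6, 1], [7, 8, 1, 0]]\n",
"```"
]
},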
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# write out the graph for tensorboard\n",
"\n",
"with tf.Session(graph=train_graph) as sess:\n",
" train_writer = tf.summary.FileWriter(LOG_PATH + '/train', sess.graph)\n",
" test_writer = tf.summary.FileWriter(LOG_PATH + '/test')\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train\n",
"Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 0 Batch 10/538 - Train Accuracy: 0.3002, Validation Accuracy: 0.3885, Loss: 3.6955\n",
"Epoch 0 Batch 20/538 - Train Accuracy: 0.3960, Validation Accuracy: 0.4355, Loss: 2.9661\n",
"Epoch 0 Batch 30/538 - Train Accuracy: 0.4240, Validation Accuracy: 0.4792, Loss: 2.8062\n",
"Epoch 0 Batch 40/538 - Train Accuracy: 0.4979, Validation Accuracy: 0.4789, Loss: 2.3624\n",
"Epoch 0 Batch 50/538 - Train Accuracy: 0.4619, Validation Accuracy: 0.4980, Loss: 2.3190\n",
"Epoch 0 Batch 60/538 - Train Accuracy: 0.4660, Validation Accuracy: 0.5165, Loss: 2.1668\n",
"Epoch 0 Batch 70/538 - Train Accuracy: 0.4650, Validation Accuracy: 0.4949, Loss: 1.9192\n",
"Epoch 0 Batch 80/538 - Train Accuracy: 0.4391, Validation Accuracy: 0.4949, Loss: 1.8879\n",
"Epoch 0 Batch 90/538 - Train Accuracy: 0.4801, Validation Accuracy: 0.5039, Loss: 1.6966\n",
"Epoch 0 Batch 100/538 - Train Accuracy: 0.4752, Validation Accuracy: 0.5099, Loss: 1.6296\n",
"Epoch 0 Batch 110/538 - Train Accuracy: 0.4352, Validation Accuracy: 0.4862, Loss: 1.5980\n",
"Epoch 0 Batch 120/538 - Train Accuracy: 0.4332, Validation Accuracy: 0.4838, Loss: 1.4634\n",
"Epoch 0 Batch 130/538 - Train Accuracy: 0.4494, Validation Accuracy: 0.4920, Loss: 1.3480\n",
"Epoch 0 Batch 140/538 - Train Accuracy: 0.4623, Validation Accuracy: 0.5130, Loss: 1.3912\n",
"Epoch 0 Batch 150/538 - Train Accuracy: 0.4697, Validation Accuracy: 0.5064, Loss: 1.2529\n",
"Epoch 0 Batch 160/538 - Train Accuracy: 0.5378, Validation Accuracy: 0.5508, Loss: 1.1481\n",
"Epoch 0 Batch 170/538 - Train Accuracy: 0.5491, Validation Accuracy: 0.5581, Loss: 1.1118\n",
"Epoch 0 Batch 180/538 - Train Accuracy: 0.5534, Validation Accuracy: 0.5476, Loss: 1.0687\n",
"Epoch 0 Batch 190/538 - Train Accuracy: 0.5234, Validation Accuracy: 0.5506, Loss: 1.0583\n",
"Epoch 0 Batch 200/538 - Train Accuracy: 0.5455, Validation Accuracy: 0.5676, Loss: 0.9990\n",
"Epoch 0 Batch 210/538 - Train Accuracy: 0.5406, Validation Accuracy: 0.5680, Loss: 0.9413\n",
"Epoch 0 Batch 220/538 - Train Accuracy: 0.5461, Validation Accuracy: 0.5843, Loss: 0.9197\n",
"Epoch 0 Batch 230/538 - Train Accuracy: 0.5428, Validation Accuracy: 0.5879, Loss: 0.9156\n",
"Epoch 0 Batch 240/538 - Train Accuracy: 0.5592, Validation Accuracy: 0.5795, Loss: 0.9008\n",
"Epoch 0 Batch 250/538 - Train Accuracy: 0.5785, Validation Accuracy: 0.5959, Loss: 0.8396\n",
"Epoch 0 Batch 260/538 - Train Accuracy: 0.5858, Validation Accuracy: 0.5989, Loss: 0.8275\n",
"Epoch 0 Batch 270/538 - Train Accuracy: 0.5781, Validation Accuracy: 0.5953, Loss: 0.8222\n",
"Epoch 0 Batch 280/538 - Train Accuracy: 0.6267, Validation Accuracy: 0.6154, Loss: 0.7569\n",
"Epoch 0 Batch 290/538 - Train Accuracy: 0.5830, Validation Accuracy: 0.6152, Loss: 0.7818\n",
"Epoch 0 Batch 300/538 - Train Accuracy: 0.6114, Validation Accuracy: 0.6170, Loss: 0.7352\n",
"Epoch 0 Batch 310/538 - Train Accuracy: 0.6029, Validation Accuracy: 0.6183, Loss: 0.7385\n",
"Epoch 0 Batch 320/538 - Train Accuracy: 0.6151, Validation Accuracy: 0.6255, Loss: 0.7263\n",
"Epoch 0 Batch 330/538 - Train Accuracy: 0.6170, Validation Accuracy: 0.6175, Loss: 0.6908\n",
"Epoch 0 Batch 340/538 - Train Accuracy: 0.5717, Validation Accuracy: 0.6213, Loss: 0.7202\n",
"Epoch 0 Batch 350/538 - Train Accuracy: 0.6075, Validation Accuracy: 0.6262, Loss: 0.6923\n",
"Epoch 0 Batch 360/538 - Train Accuracy: 0.6197, Validation Accuracy: 0.6333, Loss: 0.6942\n",
"Epoch 0 Batch 370/538 - Train Accuracy: 0.5906, Validation Accuracy: 0.6261, Loss: 0.6843\n",
"Epoch 0 Batch 380/538 - Train Accuracy: 0.6012, Validation Accuracy: 0.6374, Loss: 0.6539\n",
"Epoch 0 Batch 390/538 - Train Accuracy: 0.6603, Validation Accuracy: 0.6362, Loss: 0.6228\n",
"Epoch 0 Batch 400/538 - Train Accuracy: 0.6224, Validation Accuracy: 0.6440, Loss: 0.6205\n",
"Epoch 0 Batch 410/538 - Train Accuracy: 0.6279, Validation Accuracy: 0.6499, Loss: 0.6378\n",
"Epoch 0 Batch 420/538 - Train Accuracy: 0.6564, Validation Accuracy: 0.6273, Loss: 0.5990\n",
"Epoch 0 Batch 430/538 - Train Accuracy: 0.6490, Validation Accuracy: 0.6410, Loss: 0.5944\n",
"Epoch 0 Batch 440/538 - Train Accuracy: 0.6680, Validation Accuracy: 0.6566, Loss: 0.6338\n",
"Epoch 0 Batch 450/538 - Train Accuracy: 0.6522, Validation Accuracy: 0.6367, Loss: 0.5990\n",
"Epoch 0 Batch 460/538 - Train Accuracy: 0.6263, Validation Accuracy: 0.6557, Loss: 0.5675\n",
"Epoch 0 Batch 470/538 - Train Accuracy: 0.6847, Validation Accuracy: 0.6632, Loss: 0.5484\n",
"Epoch 0 Batch 480/538 - Train Accuracy: 0.6702, Validation Accuracy: 0.6550, Loss: 0.5310\n",
"Epoch 0 Batch 490/538 - Train Accuracy: 0.6851, Validation Accuracy: 0.6616, Loss: 0.5299\n",
"Epoch 0 Batch 500/538 - Train Accuracy: 0.7125, Validation Accuracy: 0.6777, Loss: 0.4948\n",
"Epoch 0 Batch 510/538 - Train Accuracy: 0.7161, Validation Accuracy: 0.6928, Loss: 0.5111\n",
"Epoch 0 Batch 520/538 - Train Accuracy: 0.6756, Validation Accuracy: 0.6717, Loss: 0.5343\n",
"Epoch 0 Batch 530/538 - Train Accuracy: 0.6719, Validation Accuracy: 0.6916, Loss: 0.5309\n",
"Epoch 1 Batch 10/538 - Train Accuracy: 0.6598, Validation Accuracy: 0.6651, Loss: 0.5121\n",
"Epoch 1 Batch 20/538 - Train Accuracy: 0.7059, Validation Accuracy: 0.6916, Loss: 0.5024\n",
"Epoch 1 Batch 30/538 - Train Accuracy: 0.6945, Validation Accuracy: 0.7005, Loss: 0.4938\n",
"Epoch 1 Batch 40/538 - Train Accuracy: 0.7244, Validation Accuracy: 0.7205, Loss: 0.4183\n",
"Epoch 1 Batch 50/538 - Train Accuracy: 0.7258, Validation Accuracy: 0.7227, Loss: 0.4692\n",
"Epoch 1 Batch 60/538 - Train Accuracy: 0.7250, Validation Accuracy: 0.7232, Loss: 0.4479\n",
"Epoch 1 Batch 70/538 - Train Accuracy: 0.7342, Validation Accuracy: 0.7106, Loss: 0.4234\n",
"Epoch 1 Batch 80/538 - Train Accuracy: 0.6930, Validation Accuracy: 0.7134, Loss: 0.4511\n",
"Epoch 1 Batch 90/538 - Train Accuracy: 0.7254, Validation Accuracy: 0.7244, Loss: 0.4251\n",
"Epoch 1 Batch 100/538 - Train Accuracy: 0.7746, Validation Accuracy: 0.7441, Loss: 0.3935\n",
"Epoch 1 Batch 110/538 - Train Accuracy: 0.7248, Validation Accuracy: 0.7354, Loss: 0.4242\n",
"Epoch 1 Batch 120/538 - Train Accuracy: 0.7604, Validation Accuracy: 0.7431, Loss: 0.3804\n",
"Epoch 1 Batch 130/538 - Train Accuracy: 0.7779, Validation Accuracy: 0.7353, Loss: 0.3716\n",
"Epoch 1 Batch 140/538 - Train Accuracy: 0.7469, Validation Accuracy: 0.7411, Loss: 0.4122\n",
"Epoch 1 Batch 150/538 - Train Accuracy: 0.7646, Validation Accuracy: 0.7475, Loss: 0.3716\n",
"Epoch 1 Batch 160/538 - Train Accuracy: 0.7413, Validation Accuracy: 0.7488, Loss: 0.3484\n",
"Epoch 1 Batch 170/538 - Train Accuracy: 0.7809, Validation Accuracy: 0.7473, Loss: 0.3566\n",
"Epoch 1 Batch 180/538 - Train Accuracy: 0.7853, Validation Accuracy: 0.7710, Loss: 0.3506\n",
"Epoch 1 Batch 190/538 - Train Accuracy: 0.7781, Validation Accuracy: 0.7884, Loss: 0.3540\n",
"Epoch 1 Batch 200/538 - Train Accuracy: 0.7994, Validation Accuracy: 0.7729, Loss: 0.3315\n",
"Epoch 1 Batch 210/538 - Train Accuracy: 0.7920, Validation Accuracy: 0.7990, Loss: 0.3154\n",
"Epoch 1 Batch 220/538 - Train Accuracy: 0.7798, Validation Accuracy: 0.7930, Loss: 0.3094\n",
"Epoch 1 Batch 230/538 - Train Accuracy: 0.8049, Validation Accuracy: 0.7884, Loss: 0.3151\n",
"Epoch 1 Batch 240/538 - Train Accuracy: 0.8084, Validation Accuracy: 0.7898, Loss: 0.3224\n",
"Epoch 1 Batch 250/538 - Train Accuracy: 0.8141, Validation Accuracy: 0.7791, Loss: 0.3067\n",
"Epoch 1 Batch 260/538 - Train Accuracy: 0.8039, Validation Accuracy: 0.7994, Loss: 0.3074\n",
"Epoch 1 Batch 270/538 - Train Accuracy: 0.8023, Validation Accuracy: 0.8104, Loss: 0.2979\n",
"Epoch 1 Batch 280/538 - Train Accuracy: 0.8402, Validation Accuracy: 0.8226, Loss: 0.2656\n",
"Epoch 1 Batch 290/538 - Train Accuracy: 0.8338, Validation Accuracy: 0.8349, Loss: 0.2634\n",
"Epoch 1 Batch 300/538 - Train Accuracy: 0.8153, Validation Accuracy: 0.8221, Loss: 0.2657\n",
"Epoch 1 Batch 310/538 - Train Accuracy: 0.8773, Validation Accuracy: 0.8292, Loss: 0.2728\n",
"Epoch 1 Batch 320/538 - Train Accuracy: 0.8304, Validation Accuracy: 0.8379, Loss: 0.2527\n",
"Epoch 1 Batch 330/538 - Train Accuracy: 0.8346, Validation Accuracy: 0.8303, Loss: 0.2402\n",
"Epoch 1 Batch 340/538 - Train Accuracy: 0.8539, Validation Accuracy: 0.8530, Loss: 0.2550\n",
"Epoch 1 Batch 350/538 - Train Accuracy: 0.8564, Validation Accuracy: 0.8331, Loss: 0.2609\n",
"Epoch 1 Batch 360/538 - Train Accuracy: 0.8438, Validation Accuracy: 0.8459, Loss: 0.2485\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1 Batch 370/538 - Train Accuracy: 0.8422, Validation Accuracy: 0.8427, Loss: 0.2579\n",
"Epoch 1 Batch 380/538 - Train Accuracy: 0.8684, Validation Accuracy: 0.8526, Loss: 0.2178\n",
"Epoch 1 Batch 390/538 - Train Accuracy: 0.8994, Validation Accuracy: 0.8661, Loss: 0.2011\n",
"Epoch 1 Batch 400/538 - Train Accuracy: 0.8687, Validation Accuracy: 0.8601, Loss: 0.2281\n",
"Epoch 1 Batch 410/538 - Train Accuracy: 0.8727, Validation Accuracy: 0.8610, Loss: 0.2327\n",
"Epoch 1 Batch 420/538 - Train Accuracy: 0.8832, Validation Accuracy: 0.8642, Loss: 0.2038\n",
"Epoch 1 Batch 430/538 - Train Accuracy: 0.8641, Validation Accuracy: 0.8608, Loss: 0.2079\n",
"Epoch 1 Batch 440/538 - Train Accuracy: 0.8586, Validation Accuracy: 0.8743, Loss: 0.2274\n",
"Epoch 1 Batch 450/538 - Train Accuracy: 0.8555, Validation Accuracy: 0.8489, Loss: 0.2110\n",
"Epoch 1 Batch 460/538 - Train Accuracy: 0.8590, Validation Accuracy: 0.8661, Loss: 0.2009\n",
"Epoch 1 Batch 470/538 - Train Accuracy: 0.8910, Validation Accuracy: 0.8786, Loss: 0.1802\n",
"Epoch 1 Batch 480/538 - Train Accuracy: 0.9007, Validation Accuracy: 0.8564, Loss: 0.1726\n",
"Epoch 1 Batch 490/538 - Train Accuracy: 0.8858, Validation Accuracy: 0.8729, Loss: 0.1721\n",
"Epoch 1 Batch 500/538 - Train Accuracy: 0.9032, Validation Accuracy: 0.8848, Loss: 0.1630\n",
"Epoch 1 Batch 510/538 - Train Accuracy: 0.8923, Validation Accuracy: 0.8812, Loss: 0.1686\n",
"Epoch 1 Batch 520/538 - Train Accuracy: 0.8877, Validation Accuracy: 0.8807, Loss: 0.1807\n",
"Epoch 1 Batch 530/538 - Train Accuracy: 0.8719, Validation Accuracy: 0.8919, Loss: 0.1775\n",
"Epoch 2 Batch 10/538 - Train Accuracy: 0.8943, Validation Accuracy: 0.8833, Loss: 0.1768\n",
"Epoch 2 Batch 20/538 - Train Accuracy: 0.9003, Validation Accuracy: 0.8975, Loss: 0.1649\n",
"Epoch 2 Batch 30/538 - Train Accuracy: 0.8777, Validation Accuracy: 0.8794, Loss: 0.1737\n",
"Epoch 2 Batch 40/538 - Train Accuracy: 0.9002, Validation Accuracy: 0.8903, Loss: 0.1366\n",
"Epoch 2 Batch 50/538 - Train Accuracy: 0.8979, Validation Accuracy: 0.9062, Loss: 0.1507\n",
"Epoch 2 Batch 60/538 - Train Accuracy: 0.9189, Validation Accuracy: 0.8782, Loss: 0.1457\n",
"Epoch 2 Batch 70/538 - Train Accuracy: 0.9040, Validation Accuracy: 0.8890, Loss: 0.1353\n",
"Epoch 2 Batch 80/538 - Train Accuracy: 0.8957, Validation Accuracy: 0.8880, Loss: 0.1490\n",
"Epoch 2 Batch 90/538 - Train Accuracy: 0.8865, Validation Accuracy: 0.8857, Loss: 0.1564\n",
"Epoch 2 Batch 100/538 - Train Accuracy: 0.9148, Validation Accuracy: 0.8975, Loss: 0.1301\n",
"Epoch 2 Batch 110/538 - Train Accuracy: 0.9000, Validation Accuracy: 0.8961, Loss: 0.1463\n",
"Epoch 2 Batch 120/538 - Train Accuracy: 0.9225, Validation Accuracy: 0.8929, Loss: 0.1330\n",
"Epoch 2 Batch 130/538 - Train Accuracy: 0.9167, Validation Accuracy: 0.8938, Loss: 0.1224\n",
"Epoch 2 Batch 140/538 - Train Accuracy: 0.8850, Validation Accuracy: 0.9027, Loss: 0.1490\n",
"Epoch 2 Batch 150/538 - Train Accuracy: 0.9227, Validation Accuracy: 0.9036, Loss: 0.1205\n",
"Epoch 2 Batch 160/538 - Train Accuracy: 0.8951, Validation Accuracy: 0.9013, Loss: 0.1173\n",
"Epoch 2 Batch 170/538 - Train Accuracy: 0.9022, Validation Accuracy: 0.8952, Loss: 0.1259\n",
"Epoch 2 Batch 180/538 - Train Accuracy: 0.9142, Validation Accuracy: 0.9048, Loss: 0.1225\n",
"Epoch 2 Batch 190/538 - Train Accuracy: 0.8973, Validation Accuracy: 0.8922, Loss: 0.1413\n",
"Epoch 2 Batch 200/538 - Train Accuracy: 0.9115, Validation Accuracy: 0.8938, Loss: 0.1011\n",
"Epoch 2 Batch 210/538 - Train Accuracy: 0.8906, Validation Accuracy: 0.9109, Loss: 0.1198\n",
"Epoch 2 Batch 220/538 - Train Accuracy: 0.9049, Validation Accuracy: 0.8967, Loss: 0.1083\n",
"Epoch 2 Batch 230/538 - Train Accuracy: 0.9115, Validation Accuracy: 0.9043, Loss: 0.1163\n",
"Epoch 2 Batch 240/538 - Train Accuracy: 0.8994, Validation Accuracy: 0.9043, Loss: 0.1244\n",
"Epoch 2 Batch 250/538 - Train Accuracy: 0.9187, Validation Accuracy: 0.9041, Loss: 0.1043\n",
"Epoch 2 Batch 260/538 - Train Accuracy: 0.8854, Validation Accuracy: 0.9167, Loss: 0.1173\n",
"Epoch 2 Batch 270/538 - Train Accuracy: 0.9098, Validation Accuracy: 0.9023, Loss: 0.1026\n",
"Epoch 2 Batch 280/538 - Train Accuracy: 0.9276, Validation Accuracy: 0.9128, Loss: 0.1005\n",
"Epoch 2 Batch 290/538 - Train Accuracy: 0.9283, Validation Accuracy: 0.9109, Loss: 0.0956\n",
"Epoch 2 Batch 300/538 - Train Accuracy: 0.9031, Validation Accuracy: 0.9087, Loss: 0.1090\n",
"Epoch 2 Batch 310/538 - Train Accuracy: 0.9424, Validation Accuracy: 0.9032, Loss: 0.1099\n",
"Epoch 2 Batch 320/538 - Train Accuracy: 0.9044, Validation Accuracy: 0.9102, Loss: 0.1009\n",
"Epoch 2 Batch 330/538 - Train Accuracy: 0.9115, Validation Accuracy: 0.9155, Loss: 0.0948\n",
"Epoch 2 Batch 340/538 - Train Accuracy: 0.9164, Validation Accuracy: 0.9162, Loss: 0.0972\n",
"Epoch 2 Batch 350/538 - Train Accuracy: 0.9336, Validation Accuracy: 0.9197, Loss: 0.1178\n",
"Epoch 2 Batch 360/538 - Train Accuracy: 0.9156, Validation Accuracy: 0.9112, Loss: 0.1010\n",
"Epoch 2 Batch 370/538 - Train Accuracy: 0.9375, Validation Accuracy: 0.9240, Loss: 0.0992\n",
"Epoch 2 Batch 380/538 - Train Accuracy: 0.9258, Validation Accuracy: 0.9103, Loss: 0.0927\n",
"Epoch 2 Batch 390/538 - Train Accuracy: 0.9306, Validation Accuracy: 0.9194, Loss: 0.0812\n",
"Epoch 2 Batch 400/538 - Train Accuracy: 0.9280, Validation Accuracy: 0.9132, Loss: 0.0971\n",
"Epoch 2 Batch 410/538 - Train Accuracy: 0.9201, Validation Accuracy: 0.9215, Loss: 0.1057\n",
"Epoch 2 Batch 420/538 - Train Accuracy: 0.9252, Validation Accuracy: 0.9132, Loss: 0.0920\n",
"Epoch 2 Batch 430/538 - Train Accuracy: 0.9156, Validation Accuracy: 0.9329, Loss: 0.0862\n",
"Epoch 2 Batch 440/538 - Train Accuracy: 0.9209, Validation Accuracy: 0.9165, Loss: 0.0949\n",
"Epoch 2 Batch 450/538 - Train Accuracy: 0.9089, Validation Accuracy: 0.9048, Loss: 0.1095\n",
"Epoch 2 Batch 460/538 - Train Accuracy: 0.9042, Validation Accuracy: 0.9219, Loss: 0.0971\n",
"Epoch 2 Batch 470/538 - Train Accuracy: 0.9312, Validation Accuracy: 0.9066, Loss: 0.0836\n",
"Epoch 2 Batch 480/538 - Train Accuracy: 0.9399, Validation Accuracy: 0.9205, Loss: 0.0793\n",
"Epoch 2 Batch 490/538 - Train Accuracy: 0.9373, Validation Accuracy: 0.9297, Loss: 0.0801\n",
"Epoch 2 Batch 500/538 - Train Accuracy: 0.9368, Validation Accuracy: 0.9114, Loss: 0.0709\n",
"Epoch 2 Batch 510/538 - Train Accuracy: 0.9474, Validation Accuracy: 0.9283, Loss: 0.0799\n",
"Epoch 2 Batch 520/538 - Train Accuracy: 0.9313, Validation Accuracy: 0.9176, Loss: 0.0847\n",
"Epoch 2 Batch 530/538 - Train Accuracy: 0.9039, Validation Accuracy: 0.9155, Loss: 0.0907\n",
"Epoch 3 Batch 10/538 - Train Accuracy: 0.9393, Validation Accuracy: 0.9096, Loss: 0.0886\n",
"Epoch 3 Batch 20/538 - Train Accuracy: 0.9271, Validation Accuracy: 0.9194, Loss: 0.0780\n",
"Epoch 3 Batch 30/538 - Train Accuracy: 0.9258, Validation Accuracy: 0.9027, Loss: 0.0883\n",
"Epoch 3 Batch 40/538 - Train Accuracy: 0.9295, Validation Accuracy: 0.9228, Loss: 0.0671\n",
"Epoch 3 Batch 50/538 - Train Accuracy: 0.9236, Validation Accuracy: 0.9137, Loss: 0.0772\n",
"Epoch 3 Batch 60/538 - Train Accuracy: 0.9363, Validation Accuracy: 0.9238, Loss: 0.0739\n",
"Epoch 3 Batch 70/538 - Train Accuracy: 0.9237, Validation Accuracy: 0.9215, Loss: 0.0715\n",
"Epoch 3 Batch 80/538 - Train Accuracy: 0.9256, Validation Accuracy: 0.9196, Loss: 0.0774\n",
"Epoch 3 Batch 90/538 - Train Accuracy: 0.9362, Validation Accuracy: 0.9132, Loss: 0.0828\n",
"Epoch 3 Batch 100/538 - Train Accuracy: 0.9453, Validation Accuracy: 0.9189, Loss: 0.0651\n",
"Epoch 3 Batch 110/538 - Train Accuracy: 0.9301, Validation Accuracy: 0.9206, Loss: 0.0758\n",
"Epoch 3 Batch 120/538 - Train Accuracy: 0.9428, Validation Accuracy: 0.9244, Loss: 0.0613\n",
"Epoch 3 Batch 130/538 - Train Accuracy: 0.9384, Validation Accuracy: 0.9256, Loss: 0.0720\n",
"Epoch 3 Batch 140/538 - Train Accuracy: 0.9145, Validation Accuracy: 0.9267, Loss: 0.0938\n",
"Epoch 3 Batch 150/538 - Train Accuracy: 0.9373, Validation Accuracy: 0.9366, Loss: 0.0668\n",
"Epoch 3 Batch 160/538 - Train Accuracy: 0.9202, Validation Accuracy: 0.9210, Loss: 0.0660\n",
"Epoch 3 Batch 170/538 - Train Accuracy: 0.9213, Validation Accuracy: 0.9288, Loss: 0.0789\n",
"Epoch 3 Batch 180/538 - Train Accuracy: 0.9343, Validation Accuracy: 0.9292, Loss: 0.0717\n",
"Epoch 3 Batch 190/538 - Train Accuracy: 0.9081, Validation Accuracy: 0.9174, Loss: 0.0896\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 3 Batch 200/538 - Train Accuracy: 0.9447, Validation Accuracy: 0.9258, Loss: 0.0563\n",
"Epoch 3 Batch 210/538 - Train Accuracy: 0.9355, Validation Accuracy: 0.9412, Loss: 0.0726\n",
"Epoch 3 Batch 220/538 - Train Accuracy: 0.9167, Validation Accuracy: 0.9334, Loss: 0.0654\n",
"Epoch 3 Batch 230/538 - Train Accuracy: 0.9230, Validation Accuracy: 0.9192, Loss: 0.0697\n",
"Epoch 3 Batch 240/538 - Train Accuracy: 0.9172, Validation Accuracy: 0.9272, Loss: 0.0713\n",
"Epoch 3 Batch 250/538 - Train Accuracy: 0.9494, Validation Accuracy: 0.9274, Loss: 0.0620\n",
"Epoch 3 Batch 260/538 - Train Accuracy: 0.9172, Validation Accuracy: 0.9219, Loss: 0.0720\n",
"Epoch 3 Batch 270/538 - Train Accuracy: 0.9465, Validation Accuracy: 0.9347, Loss: 0.0710\n",
"Epoch 3 Batch 280/538 - Train Accuracy: 0.9488, Validation Accuracy: 0.9086, Loss: 0.0637\n",
"Epoch 3 Batch 290/538 - Train Accuracy: 0.9414, Validation Accuracy: 0.9258, Loss: 0.0536\n",
"Epoch 3 Batch 300/538 - Train Accuracy: 0.9362, Validation Accuracy: 0.9196, Loss: 0.0745\n",
"Epoch 3 Batch 310/538 - Train Accuracy: 0.9510, Validation Accuracy: 0.9286, Loss: 0.0677\n",
"Epoch 3 Batch 320/538 - Train Accuracy: 0.9353, Validation Accuracy: 0.9386, Loss: 0.0688\n",
"Epoch 3 Batch 330/538 - Train Accuracy: 0.9334, Validation Accuracy: 0.9338, Loss: 0.0605\n",
"Epoch 3 Batch 340/538 - Train Accuracy: 0.9221, Validation Accuracy: 0.9276, Loss: 0.0685\n",
"Epoch 3 Batch 350/538 - Train Accuracy: 0.9399, Validation Accuracy: 0.9332, Loss: 0.0599\n",
"Epoch 3 Batch 360/538 - Train Accuracy: 0.9377, Validation Accuracy: 0.9458, Loss: 0.0652\n",
"Epoch 3 Batch 370/538 - Train Accuracy: 0.9437, Validation Accuracy: 0.9284, Loss: 0.0641\n",
"Epoch 3 Batch 380/538 - Train Accuracy: 0.9459, Validation Accuracy: 0.9398, Loss: 0.0615\n",
"Epoch 3 Batch 390/538 - Train Accuracy: 0.9468, Validation Accuracy: 0.9490, Loss: 0.0536\n",
"Epoch 3 Batch 400/538 - Train Accuracy: 0.9464, Validation Accuracy: 0.9304, Loss: 0.0583\n",
"Epoch 3 Batch 410/538 - Train Accuracy: 0.9502, Validation Accuracy: 0.9426, Loss: 0.0672\n",
"Epoch 3 Batch 420/538 - Train Accuracy: 0.9510, Validation Accuracy: 0.9416, Loss: 0.0593\n",
"Epoch 3 Batch 430/538 - Train Accuracy: 0.9355, Validation Accuracy: 0.9370, Loss: 0.0615\n",
"Epoch 3 Batch 440/538 - Train Accuracy: 0.9391, Validation Accuracy: 0.9329, Loss: 0.0732\n",
"Epoch 3 Batch 450/538 - Train Accuracy: 0.9258, Validation Accuracy: 0.9483, Loss: 0.0722\n",
"Epoch 3 Batch 460/538 - Train Accuracy: 0.9278, Validation Accuracy: 0.9336, Loss: 0.0666\n",
"Epoch 3 Batch 470/538 - Train Accuracy: 0.9524, Validation Accuracy: 0.9382, Loss: 0.0591\n",
"Epoch 3 Batch 480/538 - Train Accuracy: 0.9464, Validation Accuracy: 0.9288, Loss: 0.0622\n",
"Epoch 3 Batch 490/538 - Train Accuracy: 0.9420, Validation Accuracy: 0.9208, Loss: 0.0564\n",
"Epoch 3 Batch 500/538 - Train Accuracy: 0.9673, Validation Accuracy: 0.9219, Loss: 0.0608\n",
"Epoch 3 Batch 510/538 - Train Accuracy: 0.9487, Validation Accuracy: 0.9396, Loss: 0.0625\n",
"Epoch 3 Batch 520/538 - Train Accuracy: 0.9352, Validation Accuracy: 0.9231, Loss: 0.0610\n",
"Epoch 3 Batch 530/538 - Train Accuracy: 0.9221, Validation Accuracy: 0.9405, Loss: 0.0664\n",
"Epoch 4 Batch 10/538 - Train Accuracy: 0.9418, Validation Accuracy: 0.9480, Loss: 0.0639\n",
"Epoch 4 Batch 20/538 - Train Accuracy: 0.9501, Validation Accuracy: 0.9364, Loss: 0.0593\n",
"Epoch 4 Batch 30/538 - Train Accuracy: 0.9477, Validation Accuracy: 0.9409, Loss: 0.0579\n",
"Epoch 4 Batch 40/538 - Train Accuracy: 0.9411, Validation Accuracy: 0.9490, Loss: 0.0446\n",
"Epoch 4 Batch 50/538 - Train Accuracy: 0.9486, Validation Accuracy: 0.9398, Loss: 0.0528\n",
"Epoch 4 Batch 60/538 - Train Accuracy: 0.9490, Validation Accuracy: 0.9338, Loss: 0.0595\n",
"Epoch 4 Batch 70/538 - Train Accuracy: 0.9611, Validation Accuracy: 0.9345, Loss: 0.0431\n",
"Epoch 4 Batch 80/538 - Train Accuracy: 0.9496, Validation Accuracy: 0.9467, Loss: 0.0569\n",
"Epoch 4 Batch 90/538 - Train Accuracy: 0.9548, Validation Accuracy: 0.9545, Loss: 0.0565\n",
"Epoch 4 Batch 100/538 - Train Accuracy: 0.9693, Validation Accuracy: 0.9416, Loss: 0.0468\n",
"Epoch 4 Batch 110/538 - Train Accuracy: 0.9477, Validation Accuracy: 0.9325, Loss: 0.0519\n",
"Epoch 4 Batch 120/538 - Train Accuracy: 0.9596, Validation Accuracy: 0.9471, Loss: 0.0380\n",
"Epoch 4 Batch 130/538 - Train Accuracy: 0.9542, Validation Accuracy: 0.9373, Loss: 0.0409\n",
"Epoch 4 Batch 140/538 - Train Accuracy: 0.9229, Validation Accuracy: 0.9272, Loss: 0.0687\n",
"Epoch 4 Batch 150/538 - Train Accuracy: 0.9510, Validation Accuracy: 0.9471, Loss: 0.0444\n",
"Epoch 4 Batch 160/538 - Train Accuracy: 0.9221, Validation Accuracy: 0.9252, Loss: 0.0451\n",
"Epoch 4 Batch 170/538 - Train Accuracy: 0.9345, Validation Accuracy: 0.9460, Loss: 0.0549\n",
"Epoch 4 Batch 180/538 - Train Accuracy: 0.9420, Validation Accuracy: 0.9416, Loss: 0.0512\n",
"Epoch 4 Batch 190/538 - Train Accuracy: 0.9241, Validation Accuracy: 0.9267, Loss: 0.0738\n",
"Epoch 4 Batch 200/538 - Train Accuracy: 0.9672, Validation Accuracy: 0.9430, Loss: 0.0420\n",
"Epoch 4 Batch 210/538 - Train Accuracy: 0.9472, Validation Accuracy: 0.9467, Loss: 0.0544\n",
"Epoch 4 Batch 220/538 - Train Accuracy: 0.9384, Validation Accuracy: 0.9377, Loss: 0.0508\n",
"Epoch 4 Batch 230/538 - Train Accuracy: 0.9449, Validation Accuracy: 0.9403, Loss: 0.0480\n",
"Epoch 4 Batch 240/538 - Train Accuracy: 0.9383, Validation Accuracy: 0.9325, Loss: 0.0475\n",
"Epoch 4 Batch 250/538 - Train Accuracy: 0.9588, Validation Accuracy: 0.9434, Loss: 0.0499\n",
"Epoch 4 Batch 260/538 - Train Accuracy: 0.9340, Validation Accuracy: 0.9384, Loss: 0.0525\n",
"Epoch 4 Batch 270/538 - Train Accuracy: 0.9498, Validation Accuracy: 0.9400, Loss: 0.0432\n",
"Epoch 4 Batch 280/538 - Train Accuracy: 0.9511, Validation Accuracy: 0.9373, Loss: 0.0405\n",
"Epoch 4 Batch 290/538 - Train Accuracy: 0.9777, Validation Accuracy: 0.9345, Loss: 0.0382\n",
"Epoch 4 Batch 300/538 - Train Accuracy: 0.9520, Validation Accuracy: 0.9569, Loss: 0.0495\n",
"Epoch 4 Batch 310/538 - Train Accuracy: 0.9652, Validation Accuracy: 0.9570, Loss: 0.0517\n",
"Epoch 4 Batch 320/538 - Train Accuracy: 0.9528, Validation Accuracy: 0.9485, Loss: 0.0477\n",
"Epoch 4 Batch 330/538 - Train Accuracy: 0.9621, Validation Accuracy: 0.9474, Loss: 0.0446\n",
"Epoch 4 Batch 340/538 - Train Accuracy: 0.9430, Validation Accuracy: 0.9425, Loss: 0.0472\n",
"Epoch 4 Batch 350/538 - Train Accuracy: 0.9555, Validation Accuracy: 0.9501, Loss: 0.0529\n",
"Epoch 4 Batch 360/538 - Train Accuracy: 0.9471, Validation Accuracy: 0.9567, Loss: 0.0441\n",
"Epoch 4 Batch 370/538 - Train Accuracy: 0.9637, Validation Accuracy: 0.9501, Loss: 0.0498\n",
"Epoch 4 Batch 380/538 - Train Accuracy: 0.9516, Validation Accuracy: 0.9565, Loss: 0.0451\n",
"Epoch 4 Batch 390/538 - Train Accuracy: 0.9461, Validation Accuracy: 0.9522, Loss: 0.0348\n",
"Epoch 4 Batch 400/538 - Train Accuracy: 0.9680, Validation Accuracy: 0.9600, Loss: 0.0426\n",
"Epoch 4 Batch 410/538 - Train Accuracy: 0.9588, Validation Accuracy: 0.9494, Loss: 0.0504\n",
"Epoch 4 Batch 420/538 - Train Accuracy: 0.9525, Validation Accuracy: 0.9542, Loss: 0.0439\n",
"Epoch 4 Batch 430/538 - Train Accuracy: 0.9389, Validation Accuracy: 0.9517, Loss: 0.0446\n",
"Epoch 4 Batch 440/538 - Train Accuracy: 0.9537, Validation Accuracy: 0.9510, Loss: 0.0538\n",
"Epoch 4 Batch 450/538 - Train Accuracy: 0.9299, Validation Accuracy: 0.9513, Loss: 0.0570\n",
"Epoch 4 Batch 460/538 - Train Accuracy: 0.9472, Validation Accuracy: 0.9576, Loss: 0.0453\n",
"Epoch 4 Batch 470/538 - Train Accuracy: 0.9550, Validation Accuracy: 0.9547, Loss: 0.0474\n",
"Epoch 4 Batch 480/538 - Train Accuracy: 0.9678, Validation Accuracy: 0.9487, Loss: 0.0453\n",
"Epoch 4 Batch 490/538 - Train Accuracy: 0.9528, Validation Accuracy: 0.9556, Loss: 0.0409\n",
"Epoch 4 Batch 500/538 - Train Accuracy: 0.9686, Validation Accuracy: 0.9627, Loss: 0.0355\n",
"Epoch 4 Batch 510/538 - Train Accuracy: 0.9513, Validation Accuracy: 0.9480, Loss: 0.0488\n",
"Epoch 4 Batch 520/538 - Train Accuracy: 0.9535, Validation Accuracy: 0.9565, Loss: 0.0444\n",
"Epoch 4 Batch 530/538 - Train Accuracy: 0.9363, Validation Accuracy: 0.9498, Loss: 0.0512\n",
"Model Trained and Saved\n"
]
}
],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"def get_accuracy(target, logits, _type):\n",
" \"\"\"\n",
" Calculate accuracy\n",
" \"\"\"\n",
" max_seq = max(target.shape[1], logits.shape[1])\n",
" if max_seq - target.shape[1]:\n",
" target = np.pad(\n",
" target,\n",
" [(0,0),(0,max_seq - target.shape[1])],\n",
" 'constant')\n",
" if max_seq - logits.shape[1]:\n",
" logits = np.pad(\n",
" logits,\n",
" [(0,0),(0,max_seq - logits.shape[1])],\n",
" 'constant')\n",
" \n",
" acc = np.mean(np.equal(target, logits))\n",
" if _type is 'train':\n",
" with tf.name_scope('optimization'):\n",
" summary = tf.Summary(value=[tf.Summary.Value(tag=\"accuracy\", simple_value=acc)])\n",
" else:\n",
" with tf.name_scope('validation'):\n",
" summary = tf.Summary(value=[tf.Summary.Value(tag=\"accuracy\", simple_value=acc)])\n",
" return summary, acc\n",
"\n",
"# Split data to training and validation sets\n",
"train_source = source_int_text[batch_size:]\n",
"train_target = target_int_text[batch_size:]\n",
"valid_source = source_int_text[:batch_size]\n",
"valid_target = target_int_text[:batch_size]\n",
"(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n",
" valid_target,\n",
" batch_size,\n",
" source_vocab_to_int['<PAD>'],\n",
" target_vocab_to_int['<PAD>'])) \n",
"with tf.Session(graph=train_graph) as sess:\n",
" sess.run(tf.global_variables_initializer())\n",
" \n",
" saver = tf.train.Saver(keep_checkpoint_every_n_hours=0.5)\n",
" \n",
" \n",
" for epoch_i in range(epochs):\n",
" \n",
" \n",
" n_batches = len(train_source)//batch_size \n",
" for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n",
" get_batches(train_source, train_target, batch_size,\n",
" source_vocab_to_int['<PAD>'],\n",
" target_vocab_to_int['<PAD>'])):\n",
" \n",
" iteration = epoch_i*n_batches + batch_i\n",
" \n",
" summary, _, loss = sess.run(\n",
" [merged, train_op, cost],\n",
" {input_data: source_batch,\n",
" targets: target_batch,\n",
" lr: learning_rate,\n",
" target_sequence_length: targets_lengths,\n",
" source_sequence_length: sources_lengths,\n",
" keep_prob: keep_probability})\n",
" \n",
" train_writer.add_summary(summary, iteration)\n",
" \n",
" if epoch_i % 5 == 0:\n",
" saver.save(sess, save_path + 'ckpt', global_step=epoch_i)\n",
"\n",
" if batch_i % display_step == 0 and batch_i > 0:\n",
"\n",
"\n",
" batch_train_logits = sess.run(\n",
" inference_logits,\n",
" {input_data: source_batch,\n",
" source_sequence_length: sources_lengths,\n",
" target_sequence_length: targets_lengths,\n",
" keep_prob: 1.0})\n",
"\n",
"\n",
" batch_valid_logits = sess.run(\n",
" inference_logits,\n",
" {input_data: valid_sources_batch,\n",
" source_sequence_length: valid_sources_lengths,\n",
" target_sequence_length: valid_targets_lengths,\n",
" keep_prob: 1.0})\n",
"\n",
" train_acc_sum, train_acc = get_accuracy(target_batch, batch_train_logits, _type='train')\n",
" \n",
" valid_acc_sum, valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits, _type='test')\n",
" \n",
" \n",
" train_writer.add_summary(train_acc_sum, iteration)\n",
" test_writer.add_summary(valid_acc_sum, iteration)\n",
" \n",
" \n",
"\n",
" print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n",
" .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n",
" \n",
"\n",
" # Save Model\n",
" saver.save(sess, save_path + 'last-ckpt')\n",
" print('Model Trained and Saved')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save Parameters\n",
"Save the `batch_size` and `save_path` parameters for inference."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"# Save parameters for checkpoint\n",
"helper.save_params(save_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Checkpoint"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import helper\n",
"import problem_unittests as tests\n",
"\n",
"_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\n",
"load_path = helper.load_params()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sentence to Sequence\n",
"To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.\n",
"\n",
"- Convert the sentence to lowercase\n",
"- Convert words into ids using `vocab_to_int`\n",
" - Convert words not in the vocabulary, to the `<UNK>` word id."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tests Passed\n"
]
}
],
"source": [
"def sentence_to_seq(sentence, vocab_to_int):\n",
" \"\"\"\n",
" Convert a sentence to a sequence of ids\n",
" :param sentence: String\n",
" :param vocab_to_int: Dictionary to go from the words to an id\n",
" :return: List of word ids\n",
" \"\"\"\n",
" \n",
" \n",
" return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split(' ')]\n",
"\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n",
"\"\"\"\n",
"tests.test_sentence_to_seq(sentence_to_seq)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Translate\n",
"This will translate `translate_sentence` from English to French."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from /output/run_1/checkpoints/last-ckpt\n",
"Input\n",
" Word Ids: [161, 156, 158, 103, 42, 112, 169]\n",
" English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']\n",
"\n",
"Prediction\n",
" Word Ids: [224, 237, 230, 340, 282, 78, 64, 1]\n",
" French Words: il a un vieux camion jaune . <EOS>\n"
]
}
],
"source": [
"translate_sentence = 'he saw a old yellow truck .'\n",
"\n",
"\n",
"\"\"\"\n",
"DON'T MODIFY ANYTHING IN THIS CELL\n",
"\"\"\"\n",
"translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n",
"\n",
"loaded_graph = tf.Graph()\n",
"with tf.Session(graph=loaded_graph) as sess:\n",
" # Load saved model\n",
" loader = tf.train.import_meta_graph(load_path + 'last-ckpt.meta')\n",
" loader.restore(sess, load_path + 'last-ckpt')\n",
"\n",
" input_data = loaded_graph.get_tensor_by_name('input:0')\n",
" logits = loaded_graph.get_tensor_by_name('predictions:0')\n",
" target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n",
" source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n",
" keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n",
"\n",
" translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n",
" target_sequence_length: [len(translate_sentence)*2]*batch_size,\n",
" source_sequence_length: [len(translate_sentence)]*batch_size,\n",
" keep_prob: 1.0})[0]\n",
"\n",
"print('Input')\n",
"print(' Word Ids: {}'.format([i for i in translate_sentence]))\n",
"print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n",
"\n",
"print('\\nPrediction')\n",
"print(' Word Ids: {}'.format([i for i in translate_logits]))\n",
"print(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imperfect Translation\n",
"You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\n",
"\n",
"You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\n",
"## Submitting This Project\n",
"When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 1
}