Implementation of Learning End-to-End Goal-Oriented Dialog with an sklearn-like interface, using TensorFlow. Tasks are from the bAbI dialog dataset.
```bash
git clone git@github.com:rndabzooba/chatbot_memory_network.git
mkdir ./chatbot_memory_network/data/
cd ./chatbot_memory_network/data/
wget https://scontent.xx.fbcdn.net/t39.2365-6/13437784_1766606076905967_221214138_n.tgz
tar xzvf ./13437784_1766606076905967_221214138_n.tgz
cd ../
python single_dialog.py
```
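The extracted archive contains the per-task dialog files. As a rough illustration of the data format only (this is not the repository's own loader), here is a minimal parser sketch, assuming the standard bAbI dialog layout of numbered lines with tab-separated user/bot utterances and blank lines between dialogs:

```python
# Minimal sketch of a bAbI dialog parser (assumption: each line is
# "<turn-id> <user utterance>\t<bot utterance>", dialogs separated by blank lines).
# Lines without a tab (e.g., knowledge-base facts in later tasks) are skipped here.
# This is NOT the repository's data-loading code, just an illustration.

def parse_dialogs(path):
    dialogs, current = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:                 # blank line ends the current dialog
                if current:
                    dialogs.append(current)
                    current = []
                continue
            _, _, text = line.partition(' ')   # drop the leading turn number
            if '\t' in text:                   # user utterance \t bot utterance
                user, bot = text.split('\t', 1)
                current.append((user, bot))
    if current:
        dialogs.append(current)
    return dialogs

# Hypothetical file name; substitute a task file from the extracted archive.
# dialogs = parse_dialogs('data/dialog-babi-task1-API-calls-trn.txt')
```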
Train the model:

```bash
python single_dialog.py --train True --task_id 1 --interactive False
```
Run a single bAbI task demo:

```bash
python single_dialog.py --train False --task_id 1 --interactive True
```
The single_dialog.py script above is also a good example of how to use the interface.
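For orientation, here is a minimal sketch of the kind of sklearn-like training loop this interface is aimed at. The method names (batch_fit, predict), the data layout, and the shuffling logic are assumptions made for illustration, not the repository's exact API; see single_dialog.py for the real interface.

```python
# Hedged sketch of an sklearn-like training/evaluation loop for a memory-network
# dialog model. Method names and array shapes are assumptions for illustration.
import numpy as np

def train_and_evaluate(model, train_data, val_data, epochs=200, batch_size=32):
    """train_data/val_data are (stories, queries, answers) triples of numpy arrays."""
    stories, queries, answers = train_data
    val_stories, val_queries, val_answers = val_data
    n_train = len(stories)
    for epoch in range(1, epochs + 1):
        # shuffle once per epoch, then step through mini-batches
        order = np.random.permutation(n_train)
        for start in range(0, n_train, batch_size):
            idx = order[start:start + batch_size]
            model.batch_fit(stories[idx], queries[idx], answers[idx])
        preds = model.predict(val_stories, val_queries)
        val_acc = np.mean(preds == val_answers)
        print('epoch %d  validation accuracy %.3f' % (epoch, val_acc))
```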
Requirements:

- tensorflow 0.8
- scikit-learn 0.17.1
- six 1.10.0
- scipy
Unless otherwise noted, the Adam optimizer was used with the following parameters (see the optimizer sketch after the list):
- epochs: 200
- learning_rate: 0.01
- epsilon: 1e-8
- embedding_size: 20
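For reference, this is roughly how those hyperparameters map onto TensorFlow's Adam optimizer; the loss tensor name below is a placeholder, not taken from the repository's graph.

```python
# Rough mapping of the hyperparameters above onto tf.train.AdamOptimizer
# (TensorFlow 0.8-era API). `cross_entropy_loss` is a placeholder for the
# model's actual loss tensor, not a name from this repository.
import tensorflow as tf

learning_rate = 0.01
epsilon = 1e-8          # see the note below the results table: 1.0 or 0.1 may work better

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=epsilon)
# train_op = optimizer.minimize(cross_entropy_loss)
```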
Task | Training Accuracy (%) | Validation Accuracy (%) | Testing Accuracy (%) | Testing Accuracy, OOV (%) |
---|---|---|---|---|
1 | 99.9 | 99.1 | 99.3 | 76.3 |
2 | 100 | 100 | 99.9 | 78.9 |
3 | 96.1 | 71.0 | 71.1 | 64.8 |
4 | 99.9 | 56.7 | 57.2 | 57.0 |
5 | 99.9 | 98.4 | 98.5 | 64.9 |
6 | 73.1 | 49.3 | 40.6 | --- |
I didn't experiment with Adam's epsilon parameter until after my initial results, but values of 1.0 and 0.1 seem to help convergence and reduce overfitting.