【TensorFlow】Predicting Exchange Rates with a Recurrent Neural Network【FX】
- 2017.02.19
- Deep Learning Neural Network TensorFlow
A while back I wrote an article about predicting exchange rates with a neural network;
this time I tried making the prediction with TensorFlow.
Unlike last time, I used something called a recurrent neural network. I barely understand the theory,
so I adapted the code found here.
The input is 25 values from the 4-hour chart. The output is one of three patterns: [1,0,0] if the rate is higher four hours later, [0,1,0] if it is lower, and [0,0,1] if it is unchanged.
The input values are normalized to the range 0–1.
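As a sketch of this encoding (NumPy only; `encode_window` and the exact window layout are my assumptions, since the article's input.csv is prepared beforehand):

```python
import numpy as np

def encode_window(rates):
    """Build one training example from 26 consecutive 4-hour closes.

    rates: sequence of 26 values; the first 25 are the input window,
    the last one is the close 4 hours after the window ends.
    (Hypothetical helper -- the article's input.csv is prepared offline.)
    """
    window = np.asarray(rates[:25], dtype=float)
    last = window[-1]
    future = rates[25]
    # Min-max normalize the window into [0, 1], as the article describes
    # (assumes the window is not perfectly flat).
    window = window - window.min()
    window = window / window.max()
    # One-hot label: up -> [1,0,0], down -> [0,1,0], unchanged -> [0,0,1].
    if future > last:
        label = [1, 0, 0]
    elif future < last:
        label = [0, 1, 0]
    else:
        label = [0, 0, 1]
    return window, np.array(label)
```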
Training program
import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np

# Parameters
learning_rate = 0.01
training_iters = 100000
batch_size = 100
display_step = 10

# Network Parameters
n_input = 1
n_steps = 25     # timesteps
n_hidden = 1024  # hidden layer num of features
n_classes = 3

# exchange rates
data = np.loadtxt("input.csv", delimiter=",")

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {
    # Hidden layer weights => 2*n_hidden because of forward + backward cells
    'out': tf.Variable(tf.random_normal([2 * n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}

def BiRNN(x, weights, biases):
    # Prepare data shape to match `bidirectional_rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Permuting batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshape to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(x, n_steps, 0)

    # Define lstm cells with tensorflow
    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)

    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                     dtype=tf.float32)
    except Exception:  # Old TensorFlow version only returns outputs not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                               dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = BiRNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        np.random.shuffle(data)
        batch_x = data[0:batch_size, 0:n_steps]
        # normalize each row into [0, 1]
        batch_x = batch_x - np.min(batch_x, axis=1).reshape(batch_size, 1)
        batch_x = batch_x / np.max(batch_x, axis=1).reshape(batch_size, 1)
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        batch_y = data[0:batch_size, n_steps:n_steps + n_classes]
        batch_y = batch_y.reshape((batch_size, n_classes))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")
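The transpose/reshape/split sequence at the top of `BiRNN` turns a `(batch_size, n_steps, n_input)` tensor into the list of `n_steps` tensors of shape `(batch_size, n_input)` that `static_bidirectional_rnn` expects. The same shape gymnastics in plain NumPy (illustrative shapes only):

```python
import numpy as np

batch_size, n_steps, n_input = 4, 25, 1
x = np.zeros((batch_size, n_steps, n_input))

# Put the time axis first: (n_steps, batch_size, n_input)
x = np.transpose(x, (1, 0, 2))
# Flatten to (n_steps * batch_size, n_input)
x = np.reshape(x, (-1, n_input))
# Split into a list of n_steps arrays of shape (batch_size, n_input)
steps = np.split(x, n_steps, axis=0)

print(len(steps), steps[0].shape)  # 25 (4, 1)
```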
Results
Iter 90000, Minibatch Loss= 0.746534, Training Accuracy= 0.50000
Iter 91000, Minibatch Loss= 0.750871, Training Accuracy= 0.47000
Iter 92000, Minibatch Loss= 0.738956, Training Accuracy= 0.54000
Iter 93000, Minibatch Loss= 0.803958, Training Accuracy= 0.55000
Iter 94000, Minibatch Loss= 0.784448, Training Accuracy= 0.58000
Iter 95000, Minibatch Loss= 0.764126, Training Accuracy= 0.49000
Iter 96000, Minibatch Loss= 0.750926, Training Accuracy= 0.54000
Iter 97000, Minibatch Loss= 0.836504, Training Accuracy= 0.47000
Iter 98000, Minibatch Loss= 0.828720, Training Accuracy= 0.51000
Iter 99000, Minibatch Loss= 1.095886, Training Accuracy= 0.54000
Optimization Finished!
After 100,000 iterations, the accuracy even on the training data was only around fifty-odd percent, hardly better than random (exact ties are rare, so the task is effectively binary and about 50% is chance level).
I don't know whether the model is too simple, the input features are poor, or the rates simply can't be predicted this way, but I'd like to tinker with it some more.
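One easy refinement: the loop above measures accuracy on the very rows it just trained on, so holding out part of input.csv would show whether the fifty-odd percent generalizes at all. A minimal sketch, assuming `data` has the article's layout of 25 input columns plus 3 label columns per row (`train_test_split` here is a hypothetical helper, not from the article):

```python
import numpy as np

def train_test_split(data, test_ratio=0.2, seed=0):
    """Shuffle the rows once up front, then reserve the tail as a held-out test set."""
    rng = np.random.default_rng(seed)
    data = data.copy()
    rng.shuffle(data)  # shuffles along axis 0, i.e. whole rows
    n_test = max(1, int(len(data) * test_ratio))
    return data[:-n_test], data[-n_test:]
```

The training loop would then draw batches only from the first array, and the accuracy op would be fed the second one after the same per-row normalization.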