How can you create your own AI?

We define artificial intelligence as a program that learns with the help of data and optimizes its own behavior based on it. In principle, we only need two things to create one:

  • Data
  • An algorithm that uses it

Let’s take text generation as an example. We want to teach a program to write text. The first thing we need is a document, such as a Harry Potter book from the Internet.

Now we need a model that can learn from this data. Basically, text generation is about capturing statistical probabilities in letter sequences. This sounds complex at first, but it is actually quite simple: if our program sees the letter “H” in the word “Harry” followed by an “a”, it should learn that the next letters are most likely two “r”s and then a “y”.
“Hermi” is almost always followed by “one”, and so on.
This is a highly simplified picture of the mathematical models working in the background, but it is sufficient for a basic understanding.
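To make this idea of letter statistics concrete, here is a tiny, hypothetical sketch (it is not part of the model we build below) that simply counts which character tends to follow which in a piece of text:

from collections import Counter, defaultdict

text = "Harry asked Hermione. Harry and Hermione."  # tiny stand-in for the book
follows = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    follows[current][nxt] += 1   # count how often nxt appears right after current

print(follows["H"].most_common(2))  # the most frequent followers of "H" in this snippet

A real model does something far more sophisticated than counting pairs of letters, but the underlying question is the same: given what came before, which character is most likely next?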

For this use case, one often opts for a recurrent neural network [1], since it is able to generate an output sequence based on another sequence. This means that our program can take into account (as just described) which letters/words it has seen before when generating text. With other models, this is much harder to capture.
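The key ingredient is a hidden state that is updated with every character and therefore carries a memory of everything the network has seen so far. As a rough, hypothetical sketch of a single update step (the variable names mirror the script below):

import numpy as np

def rnn_step(x, h_prev, Wxh, Whh, bh):
    # the new hidden state depends on the current input character (one-hot vector x)
    # and on the previous hidden state h_prev - this is the network's "memory"
    return np.tanh(np.dot(Wxh, x) + np.dot(Whh, h_prev) + bh)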

A short script might look like this:

import numpy as np

input_data = "harry.txt"
data = open(input_data, 'r').read()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print(input_data, 'imported:', data_size, 'characters,', vocab_size, 'unique')

Loading the text document and displaying metadata such as character count.

char_to_ix = { ch:i for i,ch in enumerate(chars) }  # character -> index
ix_to_char = { i:ch for i,ch in enumerate(chars) }  # index -> character

To work with vectors, we assign a corresponding number to each letter.
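As a small, hypothetical illustration of what this mapping gives us (the exact number for each character depends on the text):

ix = char_to_ix['H']           # some index, e.g. 17 - the value depends on the text
print(ix_to_char[ix])          # prints 'H' again

x = np.zeros((vocab_size, 1))  # a one-hot vector: all zeros ...
x[ix] = 1                      # ... except a single 1 at the position of 'H'

This one-hot vector is exactly the form in which a character is fed into the network below.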

# hyperparameters
hidden_size = 100    # number of neurons in the hidden layer
seq_length = 25      # number of characters the network sees at once
learning_rate = 1e-1

Hyperparameters are settings of the mathematical model that you can experiment with to achieve better results.

# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01   # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01  # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01   # hidden to output
bh = np.zeros((hidden_size, 1))  # hidden bias
by = np.zeros((vocab_size, 1))   # output bias

The weight matrices of our RNN, which we initialize with random values at first.
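If it is unclear what these matrices do, a quick (hypothetical) look at their shapes helps:

print(Wxh.shape)  # (hidden_size, vocab_size): maps a one-hot character into the hidden state
print(Whh.shape)  # (hidden_size, hidden_size): carries the hidden state from one step to the next
print(Why.shape)  # (vocab_size, hidden_size): maps the hidden state to a score for every character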

def lossFun(inputs, targets, hprev):
  # inputs and targets are lists of character indices, hprev is the previous hidden state
  xs, hs, ys, ps = {}, {}, {}, {}
  hs[-1] = np.copy(hprev)
  loss = 0
  # forward pass
  for t in range(len(inputs)):
    xs[t] = np.zeros((vocab_size, 1))  # encode the input character as a one-hot vector
    xs[t][inputs[t]] = 1
    hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh)  # hidden state
    ys[t] = np.dot(Why, hs[t]) + by                  # unnormalized scores for the next character
    ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t]))    # probabilities for the next character
    loss += -np.log(ps[t][targets[t], 0])            # cross-entropy loss
  # backward pass: compute gradients going backwards through time
  dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
  dbh, dby = np.zeros_like(bh), np.zeros_like(by)
  dhnext = np.zeros_like(hs[0])
  for t in reversed(range(len(inputs))):
    dy = np.copy(ps[t])
    dy[targets[t]] -= 1                 # backprop into the output
    dWhy += np.dot(dy, hs[t].T)
    dby += dy
    dh = np.dot(Why.T, dy) + dhnext     # backprop into the hidden state
    dhraw = (1 - hs[t] * hs[t]) * dh    # backprop through the tanh nonlinearity
    dbh += dhraw
    dWxh += np.dot(dhraw, xs[t].T)
    dWhh += np.dot(dhraw, hs[t-1].T)
    dhnext = np.dot(Whh.T, dhraw)
  for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
    np.clip(dparam, -5, 5, out=dparam)  # clip to mitigate exploding gradients
  return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]

def sample(h, seed_ix, n):
  # sample a sequence of n character indices from the model, starting from seed_ix
  x = np.zeros((vocab_size, 1))
  x[seed_ix] = 1
  ixes = []
  for t in range(n):
    h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
    y = np.dot(Why, h) + by
    p = np.exp(y) / np.sum(np.exp(y))
    ix = np.random.choice(range(vocab_size), p=p.ravel())
    x = np.zeros((vocab_size, 1))
    x[ix] = 1
    ixes.append(ix)
  return ixes

A sample() function that generates a string, and a so-called loss function, which measures how good a generated text is.
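To get a feeling for this loss: it is simply the negative log of the probability the model assigned to the correct next character, so a confident correct prediction is cheap and a confident wrong one is expensive. A small, hypothetical illustration:

print(-np.log(0.9))  # ~0.11: the model gave the correct character a probability of 0.9
print(-np.log(0.1))  # ~2.30: the model gave the correct character only a probability of 0.1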

n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by)  # memory variables for Adagrad

while True:
  # sweep through the data from left to right in steps of seq_length
  if p+seq_length+1 >= len(data) or n == 0:
    hprev = np.zeros((hidden_size, 1))  # reset RNN memory
    p = 0                               # go from start of data
  inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
  targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]

  # forward seq_length characters through the net and fetch the gradients
  loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)

  # sample from the model now and then and append the text to a file
  if n % 10000 == 0:
    sample_ix = sample(hprev, inputs[0], 200)
    txt = ''.join(ix_to_char[ix] for ix in sample_ix)
    with open("out.txt", "a") as out:
      print('----\n %s \n----' % (txt, ), file=out)

  # parameter update with Adagrad
  for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
                                [dWxh, dWhh, dWhy, dbh, dby],
                                [mWxh, mWhh, mWhy, mbh, mby]):
    mem += dparam * dparam
    param += -learning_rate * dparam / np.sqrt(mem + 1e-8)

  p += seq_length  # move data pointer
  n += 1           # iteration counter

Our main loop, in which the network learns from the data, occasionally (every 10,000 iterations) generates a sample, and writes it to a text file.

Footnotes

[1] Recurrent neural network – Wikipedia
