# My First Beginner AI class

# Quick intro

This was originally meant to be a live class. The more recent versions have improved, but they are still behind a paywall :)

The notebook below was also designed to be interactive, walked through on a call with me for extra clarity.

```python
!pip install jupyter_contrib_nbextensions
```

# First DL Class: Fundamentals

```python
# Download all the libraries (if you don't have them installed already)
!pip install numpy matplotlib
```

```python
# Importing all libraries we'll need
import numpy as np
import matplotlib.pyplot as plt
```

## The TensorFlow Library

```python
# If you have CUDA (might be rather infuriating to install)
!pip install tensorflow-gpu
```

```python
# If you don't
!pip install tensorflow
```

```python
# The only correct way of importing tensorflow :)
import tensorflow as tf
```

### Basic Optimization exercise

We have a small, easy dataset: random input numbers and the corresponding outputs of the function:

f(x) = 3 + 2x

```python
# The inputs to any task will often be called "x"
xs = np.random.randint(0, 100, size=50).astype(np.float32)
ys = 3 + 2 * xs
```

```python
plt.scatter(xs, ys)
```


We know we can model this problem with the line formula:

f(x) = mx + b
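As a quick sanity check (a throwaway snippet, just for illustration): with the "right" values m = 2 and b = 3, the line formula gives back exactly our generating function.

```python
# m = 2 and b = 3 turn f(x) = mx + b into f(x) = 3 + 2x
def f(x, m=2.0, b=3.0):
    return m * x + b

print(f(10))  # 2*10 + 3 = 23.0
```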

Let's create these variables then:

```python
class Model():
    def __init__(self):
        # Deliberately wrong starting values, so there is something to optimize
        self.m = tf.Variable(-2.0)
        self.b = tf.Variable(3.0)

    def predict(self, x):
        return self.m * x + self.b

    @property
    def trainable_variables(self):
        return [self.m, self.b]

model = Model()
model.m.numpy(), model.b.numpy()
```

```
(-2.0, 3.0)
```

```python
plt.scatter(xs, model.predict(xs))
```


As you can see, the predictions still form a line, but the values are completely wrong, because m and b are off.

We can compute how far off our predictions are; the function that measures this is called the loss function.
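For intuition, here is that idea on a tiny made-up example in plain NumPy (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical predictions vs. true values
preds = np.array([5.0, 9.0, 13.0])
truth = np.array([7.0, 11.0, 15.0])

# Absolute error per point, and its mean: a simple loss
errors = np.abs(preds - truth)
print(errors)         # [2. 2. 2.]
print(errors.mean())  # 2.0
```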

```python
loss = tf.abs(model.predict(xs) - ys)
loss
```

```
<tf.Tensor: id=41, shape=(50,), dtype=float32, numpy=
array([ 16., 280., 132., 160.,  40., 396., 388.,  88., 268.,  24., 304.,
       260., 224., 292., 284., 384., 232.,  20., 356.,  72.,  12., 340.,
       320., 388., 260., 336., 272., 272., 296., 240., 332., 344.,  16.,
       172., 284.,  12., 288., 360., 140., 132.,  76., 300., 152., 136.,
       228.,  12.,  48., 192., 356., 284.], dtype=float32)>
```

Since this is an optimization problem, we need to minimize the error (the loss) to make our model better.

There's an entire class of optimizers in TensorFlow just for this, so let's use it.
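For reference, here are a few of the optimizers that class offers; they all share the same interface, only the update rule differs (the learning rates below are arbitrary):

```python
import tensorflow as tf

# Three common optimizers, all from tf.keras.optimizers
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
```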

```python
# There's a variety of optimizers to use; we'll use Adam,
# a variant of SGD (more on that later as well)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
```

We can now use it to minimize the loss, and hence improve our model's accuracy.

```python
# The simple way
optimizer.minimize(loss, model.trainable_variables)
```

```
TypeError                                 Traceback (most recent call last)
BLABLA
TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
```

Sadly, this raises an error in the most recent versions of TensorFlow, where `minimize` expects the loss to be a callable rather than an already-computed tensor. So let's break down what it does and recreate it manually.
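The key piece we'll need is `tf.GradientTape`, which records computations so we can ask for gradients afterwards. A minimal, standalone example:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x              # y = x^2, so dy/dx = 2x
grad = tape.gradient(y, x)
print(grad.numpy())        # 2 * 3.0 = 6.0
```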

```python
# Let's wrap all this code in a function to call it multiple times
def train_step(model, xs, ys):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.abs(model.predict(xs) - ys))
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```

```python
model.trainable_variables
```

```
[<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-2.0>,
 <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.0>]
```

```python
train_step(model, xs, ys)
```

```
<tf.Tensor: id=531, shape=(), dtype=float32, numpy=216.4>
```

```python
model.trainable_variables
```

```
[<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-1.95>,
 <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.05>]
```

The variables got closer to the goal. Maybe if we do it repeatedly?

```python
# Let's do it 100 times
[train_step(model, xs, ys) for _ in range(100)]
```

```
[<tf.Tensor: id=680, shape=(), dtype=float32, numpy=213.64502>,
 <tf.Tensor: id=681, shape=(), dtype=float32, numpy=210.89>,
 ...
 <tf.Tensor: id=778, shape=(), dtype=float32, numpy=4.0977607>,
 <tf.Tensor: id=779, shape=(), dtype=float32, numpy=3.9657702>]
```

As you can see, the loss goes down dramatically with each epoch

```python
plt.scatter(xs, ys)
plt.plot(xs, model.predict(xs))
```


In real life, our dataset will be noisy and full of imperfections.

```python
# Add some noise to the outputs
ys = [y + np.random.normal(-y/15, y/15) for y in ys]
plt.scatter(xs, ys)
```


Can we still fit it?

```python
model = Model()  # Resetting the model
```

```python
EPOCHS = 100
[train_step(model, xs, ys) for _ in range(EPOCHS)]
```

```
[<tf.Tensor: id=981, shape=(), dtype=float32, numpy=152.36555>,
 <tf.Tensor: id=1034, shape=(), dtype=float32, numpy=148.70193>,
 <tf.Tensor: id=1087, shape=(), dtype=float32, numpy=144.41466>,
 ...
 <tf.Tensor: id=6175, shape=(), dtype=float32, numpy=5.790793>,
 <tf.Tensor: id=6228, shape=(), dtype=float32, numpy=5.7920794>]
```

```python
plt.scatter(xs, ys)
plt.plot(xs, model.predict(xs))
```


### A bit of vocab: What we learned

This first class was rather short and easy; its goal was to check how well you understand the basics.

#### Data Structures

```python
tf.Tensor # Can hold multiple values; basically a multi-dimensional array
```

#### Tools and classes

```python
tf.keras.optimizers.XXXX # Has a learning_rate, and other properties such as momentum
```
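For instance, plain SGD takes a `momentum` argument alongside `learning_rate` (the values below are arbitrary):

```python
import tensorflow as tf

# Momentum smooths updates by keeping a running average of past gradients
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
```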