# Quick intro

This was originally meant to be a live class. The more recent iterations have improved, but they are still behind a paywall :)
Moreover, the notebook below was meant to be interactive, delivered alongside a call with me for extra clarity.

# First DL Class: Fundamentals

## The TensorFlow Library

### Basic Optimization exercise

We have a basic, easy dataset, composed of random numbers and the corresponding outputs of the function:
f(x) = 3 + 2x
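
The data-generating cell is not shown here; below is a hypothetical sketch of what it likely did (the integer range is a guess based on the per-point losses printed further down).

```python
import numpy as np

# Hypothetical reconstruction of the hidden cell: 50 random inputs
# (the per-point losses further down suggest integers under 100)
# and their outputs under the target function f(x) = 3 + 2x.
rng = np.random.default_rng()
xs = rng.integers(1, 100, size=50).astype(np.float32)
ys = 3.0 + 2.0 * xs
```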

We know we can model this problem with the line formula:
f(x) = mx + b

Let's create these variables then:
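
The cell itself is hidden; here is a sketch, assuming the variables were named `m` and `b` and initialised to the values printed below (the true slope is 2, so the line starts out badly wrong).

```python
import tensorflow as tf

# Sketch (names assumed): trainable slope and intercept, started
# far from the true line f(x) = 3 + 2x.
m = tf.Variable(-2.0)
b = tf.Variable(3.0)

def predict(x):
    # Line formula f(x) = m*x + b with the current guesses.
    return m * x + b
```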

```
(-2.0, 3.0)
```

As you can see, the shape of the scattered points is a line, but the values are absolutely wrong, because m and b are off.

We can then compute how far off our predictions are; we'll call that measure the loss function.
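
The original loss cell is hidden; judging by the shape-(50,) tensor printed below, it computed a per-point error before any averaging. A sketch, assuming an absolute-error loss:

```python
import tensorflow as tf

# Assumed loss: the per-point absolute error between predictions and
# targets. Averaging it (tf.reduce_mean) yields one scalar to minimize.
def loss_fn(predicted_y, target_y):
    return tf.abs(predicted_y - target_y)

per_point = loss_fn(tf.constant([1.0, -2.0]), tf.constant([3.0, 2.0]))
mean_loss = tf.reduce_mean(per_point)
```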

```
<tf.Tensor: id=41, shape=(50,), dtype=float32, numpy=
array([ 16., 280., 132., 160.,  40., 396., 388.,  88., 268.,  24., 304.,
       260., 224., 292., 284., 384., 232.,  20., 356.,  72.,  12., 340.,
       320., 388., 260., 336., 272., 272., 296., 240., 332., 344.,  16.,
       172., 284.,  12., 288., 360., 140., 132.,  76., 300., 152., 136.,
       228.,  12.,  48., 192., 356., 284.], dtype=float32)>
```

Since this is an optimization problem, we need to minimize the error, or loss, to make our model better.

There's an entire class in TensorFlow just for that, so let's use it.
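
A sketch of what that might look like (SGD and the 0.01 learning rate are assumptions; older TF 1.x tutorials used `tf.train.GradientDescentOptimizer` instead):

```python
import tensorflow as tf

# Assumption: plain stochastic gradient descent with a small,
# fixed learning rate.
opt = tf.keras.optimizers.SGD(learning_rate=0.01)
```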

We can now use it to minimize the loss and improve our fit.

```
TypeError                                 Traceback (most recent call last)
...
TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
```

Sadly, that call raises an error in the most recent versions of TensorFlow, so let's break down what it does and recreate it manually.
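
A minimal manual recreation with `tf.GradientTape` (a sketch: the absolute-error loss, the 0.01 learning rate, and the toy data are all assumptions; the original cell is hidden):

```python
import tensorflow as tf

# Starting values and toy data (assumptions).
m = tf.Variable(-2.0)
b = tf.Variable(3.0)
xs = tf.constant([1.0, 2.0, 3.0])
ys = 3.0 + 2.0 * xs

# Record the loss computation so TensorFlow can differentiate it.
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.abs(m * xs + b - ys))

# One gradient-descent step by hand: nudge each variable against
# its gradient, scaled by a small learning rate.
dm, db = tape.gradient(loss, [m, b])
lr = 0.01
m.assign_sub(lr * dm)
b.assign_sub(lr * db)
```

Running this step repeatedly shifts `m` and `b` a little further toward the goal each time, mirroring the before/after snapshots below.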

```
[<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-2.0>,
 <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.0>]

<tf.Tensor: id=531, shape=(), dtype=float32, numpy=216.4>

[<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=-1.95>,
 <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.05>]
```

It got closer to the goal. Maybe if we do it repeatedly?
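
One way to sketch that loop (the epoch count, learning rate, squared-error loss, and toy data are all assumptions, chosen for smooth convergence):

```python
import tensorflow as tf

# Same manual step as before, repeated for many epochs while
# recording the loss at each one.
m = tf.Variable(-2.0)
b = tf.Variable(3.0)
xs = tf.constant([1.0, 2.0, 3.0, 4.0])
ys = 3.0 + 2.0 * xs

losses = []
lr = 0.05
for epoch in range(1000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(m * xs + b - ys))
    dm, db = tape.gradient(loss, [m, b])
    m.assign_sub(lr * dm)
    b.assign_sub(lr * db)
    losses.append(float(loss.numpy()))
```

Plotting `losses` gives the dramatic drop described just after.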

```
[<tf.Tensor: id=680, shape=(), dtype=float32, numpy=213.64502>,
 <tf.Tensor: id=681, shape=(), dtype=float32, numpy=210.89>,
 ...
 <tf.Tensor: id=778, shape=(), dtype=float32, numpy=4.0977607>,
 <tf.Tensor: id=779, shape=(), dtype=float32, numpy=3.9657702>]
```

As you can see, the loss goes down dramatically with each epoch

In real life, our dataset will be noisy, full of imperfections.
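
A sketch of what that noisy dataset could look like (the Gaussian noise and its scale are assumptions):

```python
import numpy as np

# Same target function as before, but each output is perturbed
# by random Gaussian noise.
rng = np.random.default_rng(0)
xs = rng.integers(1, 100, size=50).astype(np.float32)
noise = rng.normal(0.0, 5.0, 50).astype(np.float32)
ys = 3.0 + 2.0 * xs + noise
```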

Can we still do it?

```
[<tf.Tensor: id=981, shape=(), dtype=float32, numpy=152.36555>,
 <tf.Tensor: id=1034, shape=(), dtype=float32, numpy=148.70193>,
 <tf.Tensor: id=1087, shape=(), dtype=float32, numpy=144.41466>,
 ...
 <tf.Tensor: id=6175, shape=(), dtype=float32, numpy=5.790793>,
 <tf.Tensor: id=6228, shape=(), dtype=float32, numpy=5.7920794>]
```

Yes: the loss still drops sharply, but it plateaus around 5.79, because the noise puts a floor on how closely a straight line can fit the points.

### A bit of vocab: What we learned

- **Loss**: a measure of how far our predictions are from the true outputs.
- **Optimizer**: the routine (TensorFlow provides a whole class for it) that updates the variables to reduce the loss.
- **Epoch**: one full pass of updates over the dataset.

This first class was rather easy and short; the point was to see how well you understand the basics.