
TensorFlow

12th October, 2022

Updated: 12th October, 2022

    Read up on TensorFlow

    find a different pre-trained model and run it on some data

    Find a naive model and train it

    sort out a way of doing the code without installing anything (codesandbox/codepen)

    maybe do something on training a model to differentiate between cats and guinea pigs!

    https://developers.google.com/machine-learning/crash-course/ml-intro

    APIs

    TensorFlow APIs are arranged hierarchically, with the high-level APIs built on the low-level APIs. Machine learning researchers use the low-level APIs to create and explore new machine learning algorithms.

    Simplified hierarchy of TensorFlow toolkits. tf.keras API is at the top.
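
    As a rough illustration of that hierarchy (my own sketch, not from the crash course - assumes TensorFlow 2.x), the same tiny linear computation can be written with the high-level tf.keras API or with the low-level ops it is built on:

```python
# Sketch contrasting high-level tf.keras with low-level TensorFlow ops.
# (Assumes TensorFlow 2.x; the two results use different random weights,
# so the printed numbers will differ - it's the same kind of computation.)
import tensorflow as tf

# High level: tf.keras describes the model declaratively.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])

# Low level: the equivalent linear computation with raw tensor ops.
w = tf.Variable(tf.random.normal([3, 1]))
b = tf.Variable(tf.zeros([1]))
x = tf.constant([[1.0, 2.0, 3.0]])
y = tf.matmul(x, w) + b

print(model(x).numpy(), y.numpy())
```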

    (Supervised) machine learning

    In supervised machine learning we create models that combine inputs to produce useful predictions, even on previously unseen data.

    Labels

    When we're training a model we assign it labels - e.g. 'spam' or 'not spam'. The label is the target that we're trying to predict.

    Features

    Features are drawn from the example - for an email that might be the words in the email, the to/from addresses, the header or routing information, or any other piece of information we might extract from that email to represent it for our machine learning system.
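
    To make the spam example concrete, here's a hypothetical (features, label) pair - the feature names are invented for illustration:

```python
# Hypothetical features extracted from one email (names invented for
# illustration), paired with the label we want the model to predict.
features = {
    "subject_word_count": 7,
    "contains_word_free": True,
    "sender_domain": "example.com",
    "num_recipients": 42,
}
label = "spam"  # the target the model is trying to predict
```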

    Model

    The thing that's doing the predicting. It's something that we're going to try and create through a process of learning from data.

    Generalisation

    We're more interested in generalisation - if the model fits its training data too specifically, new data will fall either side of it and won't be predicted well.

    Training, Test and Validation Sets

    The larger the training set, the better the model; the larger the test set, the more confidence we can have in how well the model performs.

    Never train on test data

    Vocab

    Teachable Machine splits your samples into two buckets. That’s why you’ll see two labels, training and test, in the graphs below.

    Training samples

    (85% of the samples) are used to train the model how to correctly classify new samples into the classes you’ve made.

    Test samples

    (15% of the samples) are never used to train the model, so after the model has been trained on the training samples, they are used to check how well the model is performing on new, never-before-seen data.
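
    A minimal sketch of what that 85/15 split looks like in code (plain NumPy, not Teachable Machine's actual implementation):

```python
# Minimal 85/15 train/test split sketch (not Teachable Machine's code).
# Shuffle first so both buckets are representative of the data.
import numpy as np

samples = np.arange(100)            # stand-in for 100 labelled samples
rng = np.random.default_rng(0)
rng.shuffle(samples)

split = int(len(samples) * 0.85)    # 85% training, 15% test
train_samples = samples[:split]
test_samples = samples[split:]      # never used for training

print(len(train_samples), len(test_samples))  # 85 15
```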

    Underfit

    a model is underfit when it classifies poorly because the model hasn't captured the complexity of the training samples

    Overfit

    a model is overfit when it learns to classify the training samples so closely that it fails to make correct classifications on the test samples

    Epochs

    One epoch means that each and every sample in the training dataset has been fed through the training model at least once. If your epochs are set to 50, for example, it means that the model you are training will work through the entire training dataset 50 times. Generally the larger the number, the better your model will learn to predict the data.

    You probably want to tweak (usually increase) this number until you get good predictive results with your model.

    Accuracy per epoch

    Accuracy is the percentage of classifications that a model gets right during training. If your model classifies 70 samples right out of 100, the accuracy is 70 / 100 = 0.7.

    If the model's prediction is perfect, the accuracy is one; otherwise, the accuracy is lower than one.

    (screenshot: Teachable Machine accuracy per epoch graph)
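
    The same accuracy calculation as a trivial NumPy sketch:

```python
# Accuracy = correct classifications / total classifications.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
labels      = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

accuracy = np.mean(predictions == labels)
print(accuracy)  # 0.7 - 7 out of 10 classifications are right
```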

    Loss per epoch

    Loss is a measure for evaluating how well a model has learned to predict the right classifications for a given set of samples. If the model's predictions are perfect, the loss is zero; otherwise, the loss is greater than zero.

    To get an intuitive sense of what this measures, imagine you have two models: A and B. Model A predicts the right classification for a sample but is only 60% confident of that prediction. Model B also predicts the right classification for the same sample but is 90% confident of that prediction. Both models have the same accuracy, but model B has a lower loss value.

     

    (screenshot: Teachable Machine loss per epoch graph)
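
    Assuming a cross-entropy style loss (the usual choice for classification - the notes above don't name the exact loss function Teachable Machine uses), the model A vs model B intuition works out like this:

```python
# Cross-entropy loss for a single correctly classified sample is
# -log(confidence assigned to the true class), so higher confidence
# in the right answer means lower loss. (Cross-entropy is an assumption
# here - the exact loss function isn't named above.)
import math

loss_model_a = -math.log(0.60)   # right answer, 60% confident
loss_model_b = -math.log(0.90)   # right answer, 90% confident

print(round(loss_model_a, 3))    # ~0.511
print(round(loss_model_b, 3))    # ~0.105 - same accuracy, lower loss
```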

    Accuracy per class

    Accuracy per class is calculated using the test samples. Check out the vocab section to learn more about test samples.

    (screenshot: Teachable Machine accuracy per class table)

    Confusion matrix

    A confusion matrix summarizes how accurate your model's predictions are. You can use this matrix to figure out which classes the model gets confused about.

    The y axis (Class) represents the class of your samples. The x axis (Prediction) represents the class that the model, after learning, guesses those samples belong to. So, if a sample’s Class is "Muffin" but its Prediction is "Cupcake", that means that after learning from your data, the model misclassified that Muffin sample as a Cupcake. This usually means that those two classes share characteristics that the model picks up on, and that particular "Muffin" sample was more similar to the "Cupcake" samples.

    (screenshot: Teachable Machine confusion matrix)
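
    A small sketch of building a confusion matrix with tf.math.confusion_matrix (the muffin/cupcake classes and counts are made up):

```python
# Sketch of a confusion matrix for two hypothetical classes:
# 0 = "Muffin", 1 = "Cupcake". Rows are the true class (y axis above),
# columns are the model's prediction (x axis above).
import tensorflow as tf

true_classes = [0, 0, 0, 1, 1, 1]   # three muffins, three cupcakes
predictions  = [0, 0, 1, 1, 1, 1]   # one muffin misclassified as cupcake

matrix = tf.math.confusion_matrix(true_classes, predictions, num_classes=2)
print(matrix.numpy())
# [[2 1]   <- 2 muffins classified correctly, 1 predicted as cupcake
#  [0 3]]  <- all 3 cupcakes classified correctly

# Accuracy per class is the diagonal divided by each row's total.
```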

    Batch size

    A batch is a set of samples used in one iteration of training. For example, let’s say that you have 80 images and you choose a batch size of 16. This means the data will be split into 80 / 16 = 5 batches. Once all 5 batches have been fed through the model, exactly one epoch will be complete.

    You probably won't need to tweak this number to get good training results.
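
    The batch/epoch arithmetic from the example above, spelled out:

```python
# Batch and epoch arithmetic from the example above.
num_samples = 80
batch_size = 16

batches_per_epoch = num_samples // batch_size    # 80 / 16 = 5
epochs = 50
total_training_steps = batches_per_epoch * epochs

print(batches_per_epoch, total_training_steps)   # 5 250
```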

    Learning rate

    Be careful tweaking this number! Even small differences can have huge effects on how well your model learns.
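
    For reference, here's roughly where epochs, batch size and learning rate show up in a tf.keras training call (a sketch with synthetic data - not how Teachable Machine is configured internally):

```python
# Rough sketch of where epochs, batch size and learning rate appear in
# tf.keras (synthetic data; not Teachable Machine's internals).
import numpy as np
import tensorflow as tf

x = np.random.rand(80, 4).astype("float32")   # 80 samples, 4 features
y = np.random.randint(0, 2, size=(80,))       # binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# 80 samples / batch_size 16 = 5 batches per epoch, repeated 50 times.
model.fit(x, y, epochs=50, batch_size=16, verbose=0)
```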



    TensorFlow - more notes

    20 min theory + intro, 40 min exercises

    give people something they can take away and refer back to

    homework

    https://www.tensorflow.org/js/demos

    2 parts

    naive model + train diff pretrained model and run it on some new data

    • need to find a way to code online only

    google slides + zoom call +

    https://codesandbox.io/ https://codepen.io/

    guinea pigs vs cats + dogs

    @2-Next Find a naive model and train it

    https://www.youtube.com/watch?v=bCFtoUm5uH4

    https://teachablemachine.withgoogle.com/

    https://medium.com/tensorpad/building-an-animal-classifier-with-tensorflow-ad5931b04946

    https://teachablemachine.withgoogle.com/train/image

    @2-Next find a different pre-trained model and run it on some data https://www.youtube.com/watch?v=uTdUUpfA83s

    what would it take to do that? 10:29
    we could mix and match a few things together 10:30
    you said you'd trained a model to recognise photos? 10:30
    we could do something like that 10:30
    and for the second half, we could apply two or three of those tensorflow examples and study the output 10:31
    we could also structure a discussion around it with a few prompts:

    • what might you use this for?

    • what problems or ethical issues might arise?

    • which projects or clients might be interested in this?

    https://glitch.com/~tensorflow-js-object-detection https://glitch.com/~tensorflow-js-image-classification https://glitch.com/~disappearing-people

