{% steps %}
{% step title="Introduction to ML Model Generalization" %}

### Introduction

Welcome to the "Introduction to ML Model Generalization" lab!

This lab is designed to give you a foundational understanding of generalization in machine learning models and of how to prevent overfitting and underfitting.

### Learning Objectives

- Review generalization and the importance of avoiding overfitting and underfitting in models.
- Practice implementing early stopping and learning rate decay.

### Prerequisites

Familiarity with basic ML principles and key concepts such as learning rates and model structure.

{% /step %}
{% step title="Synthetic Data Generation" %}
### Synthetic Data Generation

Provided below is a basic call to `make_classification` that creates synthetic data for a classification task. The data will have 2000 samples, each with 20 features: 15 are informative features directly related to the class label, and the remaining 5 are redundant features generated as combinations of the informative ones. The classification target has only two possible classes. Most importantly, the random state is fixed to allow repeatability of the data generation.

```python
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler

# Generate a repeatable binary-classification dataset
X, y = make_classification(n_samples=2000,
                           n_features=20,
                           n_classes=2,
                           n_informative=15,
                           n_redundant=5,
                           random_state=42)

# Standardize the features to zero mean and unit variance
scaler = StandardScaler()
X = scaler.fit_transform(X)
```
### Train/Validation/Test Splits

To split the data, you will first separate a test set from a combined training/validation set. You will then split that combined set into its training and validation parts, resulting in a data distribution of train (64%), validation (16%), and test (20%).

```python
from sklearn.model_selection import train_test_split

# Hold out 20% of the data as the test set
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Split the remaining 80% into training (64% overall) and validation (16% overall)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.2, random_state=42)
```
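
As a quick optional check (not part of the original lab code), you can print the number of samples in each split; with 2000 samples, the 64/16/20 split works out to 1280 training, 320 validation, and 400 test rows.

```python
# Optional sanity check: confirm the split proportions described above.
print(len(X_train), len(X_val), len(X_test))  # expected: 1280 320 400
```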

In practice, it is best to define your test dataset before model development begins and to keep it entirely out of the training and tuning process, to ensure there is no data leakage and no overfitting to the test set.

{% /step %}
{% step title="Model and Generalization Feature Setup" %}
### Introduction

For this lab you will use a basic feed-forward neural network, because neural networks allow you to implement additional features, such as learning rate schedulers and early stopping, that more traditional models such as linear regression do not offer.

### Model Setup

For this model you will use two dense layers with ReLU activation functions, allowing more complex patterns to be learned, and end the model with a sigmoid output. When setting up the model you could also include additional generalization techniques such as dropout, which randomly turns off a percentage of neurons during training so that no single neuron becomes responsible for a single aspect of the prediction (a sketch of this variant appears after the model definition below).

**Note:** ReLU is used to introduce non-linearity into a neural network's learning, and sigmoid squashes the final output into a value between 0 and 1 that can be read as a class probability.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```
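
If you want to experiment with the dropout idea mentioned above, the following is a minimal sketch (not part of the lab's main code) of how the same architecture could include `Dropout` layers. The 0.3 rate and the `model_with_dropout` name are illustrative choices, not values taken from the lab.

```python
# Hypothetical variant of the model with dropout for extra regularization.
model_with_dropout = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dropout(0.3),  # randomly zero 30% of activations during training
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```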
### Loss Function and Model Initialization Parameters

For this lab you will be using the Adam optimizer, as Adam is a good starting optimizer for most problems. The one parameter passed to Adam here is the starting learning rate; most models begin with a learning rate under 0.3, and usually closer to 0.1 at most. You also define the loss function as ```binary_crossentropy```, which measures how far the predicted probabilities are from the actual labels.

```python
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),  # starting learning rate
    loss='binary_crossentropy',                              # binary classification loss
    metrics=['accuracy']
)
```
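
To make the loss a little more concrete, here is a small optional sketch (not part of the lab code) showing that binary cross-entropy penalizes confident wrong predictions far more heavily than confident correct ones:

```python
bce = tf.keras.losses.BinaryCrossentropy()

# Confident and correct predictions produce a small loss...
print(bce([1.0, 0.0], [0.95, 0.05]).numpy())
# ...while confident but wrong predictions produce a much larger loss.
print(bce([1.0, 0.0], [0.05, 0.95]).numpy())
```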
#### Learning Rate Scheduler

For your learning rate scheduler in this lab you will be using ```ReduceLROnPlateau``` from the Keras library, which reduces the learning rate whenever the monitored metric stops improving (plateaus), while a lower bound keeps training from slowing to a complete halt. The parameters are defined below:

- ```monitor``` is the metric to watch; here ```val_loss```, the loss on the validation set
- ```factor``` is the factor by which the learning rate is multiplied when it is reduced
- ```patience``` is the number of epochs without improvement to wait before reducing the learning rate
- ```min_lr``` defines the lowest value the learning rate can be reduced to
- ```verbose``` set to 1 prints a message each time the learning rate is reduced; 0 keeps the callback silent

```python
lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',  # metric to monitor
    factor=0.5,          # multiply the learning rate by this factor when reducing
    patience=2,          # wait 2 epochs without improvement before reducing LR
    min_lr=1e-5,         # don't reduce below this
    verbose=1            # print a message whenever the LR is reduced
)
```
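
As a rough illustration (not part of the lab code) of what this schedule does if ```val_loss``` keeps plateauing, the learning rate would be halved from its starting value of 0.05 until it reaches the ```min_lr``` floor:

```python
# Back-of-the-envelope view of the ReduceLROnPlateau schedule above:
# each reduction multiplies the learning rate by factor=0.5, never going below min_lr=1e-5.
lr, schedule = 0.05, []
while lr > 1e-5:
    schedule.append(lr)
    lr = max(lr * 0.5, 1e-5)
schedule.append(lr)
print(schedule)  # 0.05, 0.025, 0.0125, ... down to 1e-05
```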
#### Early Stopping

Next you will implement the early stopping aspect of the model. This callback prevents the model from overfitting by stopping training, and rolling back to an earlier set of weights, if the monitored value stops improving by a specific increment. In your case the monitored value is again the validation loss. A patience of 3 means training is allowed 3 consecutive epochs without a large enough improvement before it is stopped. ```min_delta``` defines how much the monitored value needs to change to be considered an improvement at all. Finally, setting ```restore_best_weights``` to true restores the weights from the epoch with the best monitored value once training stops, even if later epochs failed to meet the ```min_delta``` threshold. This functionality is important to ensure the model does not overfit to the training data and keeps some degree of generalization.

```python
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # metric to monitor
    patience=3,                 # stop after 3 epochs without sufficient improvement
    min_delta=0.01,             # minimum change to be considered an improvement
    restore_best_weights=True,  # roll back to the best-performing weights
    verbose=1
)
```
{% /step %}
{% step title="Model training" %}
### Model Training

Finally, on to the model training. You will use the basic ```fit``` method and pass in the validation set, which lets ```val_loss``` be computed and used by the learning rate scheduler and the early stopping mechanism. For this case the epochs are set to 100 and verbose is set to 2; this ensures there are plenty of epochs available for training to end early, and the line-by-line training output helps you see how ```val_loss``` changes per epoch. As you run the model, pay close attention to the change in ```val_loss``` and how it correlates with when the learning rate is reduced and when early stopping kicks in and rolls back to the best weights.

```python
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),        # lets the callbacks monitor val_loss
    epochs=100,
    callbacks=[early_stop, lr_scheduler],  # your custom early stopping + LR scheduler
    verbose=2                              # one line of output per epoch
)
```
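
One optional way to study this (not part of the lab code) is to capture the ```History``` object that ```model.fit``` returns, for example by writing ```history = model.fit(...)``` instead of the bare call above, and then inspect the recorded validation loss per epoch:

```python
# Assumes the fit call above was assigned to a variable, e.g.:
#   history = model.fit(X_train, y_train, ...)
for epoch, loss in enumerate(history.history['val_loss'], start=1):
    print(f"Epoch {epoch:3d}  val_loss = {loss:.4f}")

# When ReduceLROnPlateau is used, the per-epoch learning rate is usually logged too
# (the key name can vary between TensorFlow/Keras versions).
if 'lr' in history.history:
    print("Learning rate per epoch:", history.history['lr'])
```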
{% /step %}
{% step title="Evaluating Model Results" %}
### Evaluating Model Results

The following code provides a basic set of evaluation metrics for your neural network. Depending on the domain of the model, different levels of accuracy are acceptable. It is more important to see a considerable increase in predictive accuracy compared to existing methods than it is to hit a particular accuracy threshold. Validation accuracy above 99.5% can be a bit concerning, as it may be a sign of overfitting, while accuracy below previous methods may be a sign of underfitting.

```python
from sklearn.metrics import classification_report, confusion_matrix

# Predict class probabilities on the held-out test set and threshold at 0.5
y_pred_probs = model.predict(X_test).flatten()
y_pred = (y_pred_probs >= 0.5).astype(int)

print("\nTest Set Evaluation:")
print(classification_report(y_test, y_pred))
print("Confusion Matrix:")
print(confusion_matrix(y_test, y_pred))
```
{% /step %}
{% /steps %}
|