Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merge pull request #1 from piaic-official/master #16

Open
wants to merge 1 commit into
base: master
Choose a base branch
from

Conversation

Qausain
Copy link

@Qausain Qausain commented Apr 3, 2021

updated forked repo for assignments

updated forked repo for assignments
@Ghulam1215
Copy link

`LOAD LIBRARIEs
In [22]:

import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
import seaborn as sns

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import Adam, SGD

load data
In [28]:

car_df = pd.read_csv('CarPrice_Assignment.csv')

chek missing data
In [29]:


car_df.isnull().sum()

Out[29]:
car_ID 0
symboling 0
CarName 0
fueltype 0
aspiration 0
doornumber 0
carbody 0
drivewheel 0
enginelocation 0
wheelbase 0
carlength 0
carwidth 0
carheight 0
curbweight 0
enginetype 0
cylindernumber 0
enginesize 0
fuelsystem 0
boreratio 0
stroke 0
compressionratio 0
horsepower 0
peakrpm 0
citympg 0
highwaympg 0
price 0
dtype: int64
In [30]:


car_df.isnull().sum()

Out[30]:
car_ID 0
symboling 0
CarName 0
fueltype 0
aspiration 0
doornumber 0
carbody 0
drivewheel 0
enginelocation 0
wheelbase 0
carlength 0
carwidth 0
carheight 0
curbweight 0
enginetype 0
cylindernumber 0
enginesize 0
fuelsystem 0
boreratio 0
stroke 0
compressionratio 0
horsepower 0
peakrpm 0
citympg 0
highwaympg 0
price 0
dtype: int64

convert Non-numerical values into numerical values
In [31]:

from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()

In [32]:

car_df['fueltype'] = labelencoder.fit_transform(car_df['fueltype'])
car_df['aspiration'] = labelencoder.fit_transform(car_df['aspiration'])
car_df['doornumber'] = car_df['doornumber'].map({'two':2,'four':4})
car_df['carbody'] = labelencoder.fit_transform(car_df['carbody'])
car_df['drivewheel'] = labelencoder.fit_transform(car_df['drivewheel'])

car_df['enginelocation'] = car_df['enginelocation'].map({'front':1,'rear':2})
car_df['cylindernumber'] = car_df['cylindernumber'].map({'four':4, 'six':6, 'five':5, 'three':3, 'twelve':12,
'two':2, 'eight':8})

car_df['enginetype'] = labelencoder.fit_transform(car_df['enginetype'])
car_df['fuelsystem'] = labelencoder.fit_transform(car_df['enginelocation'])

In [33]:

car_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 205 entries, 0 to 204
Data columns (total 26 columns):

Column Non-Null Count Dtype


0 car_ID 205 non-null int64
1 symboling 205 non-null int64
2 CarName 205 non-null object
3 fueltype 205 non-null int32
4 aspiration 205 non-null int32
5 doornumber 205 non-null int64
6 carbody 205 non-null int32
7 drivewheel 205 non-null int32
8 enginelocation 205 non-null int64
9 wheelbase 205 non-null float64
10 carlength 205 non-null float64
11 carwidth 205 non-null float64
12 carheight 205 non-null float64
13 curbweight 205 non-null int64
14 enginetype 205 non-null int32
15 cylindernumber 205 non-null int64
16 enginesize 205 non-null int64
17 fuelsystem 205 non-null int64
18 boreratio 205 non-null float64
19 stroke 205 non-null float64
20 compressionratio 205 non-null float64
21 horsepower 205 non-null int64
22 peakrpm 205 non-null int64
23 citympg 205 non-null int64
24 highwaympg 205 non-null int64
25 price 205 non-null float64
dtypes: float64(8), int32(5), int64(12), object(1)
memory usage: 37.8+ KB
In [34]:

car_df.head()

Out[34]:

car_ID
symboling
CarName
fueltype
aspiration
doornumber
carbody
drivewheel
enginelocation
wheelbase
...
enginesize
fuelsystem
boreratio
stroke
compressionratio
horsepower
peakrpm
citympg
highwaympg
price
0
1
3
alfa-romero giulia
1
0
2
0
2
1
88.6
...
130
0
3.47
2.68
9.0
111
5000
21
27
13495.0
1
2
3
alfa-romero stelvio
1
0
2
0
2
1
88.6
...
130
0
3.47
2.68
9.0
111
5000
21
27
16500.0
2
3
1
alfa-romero Quadrifoglio
1
0
2
2
2
1
94.5
...
152
0
2.68
3.47
9.0
154
5000
19
26
16500.0
3
4
2
audi 100 ls
1
0
4
3
1
1
99.8
...
109
0
3.19
3.40
10.0
102
5500
24
30
13950.0
4
5
2
audi 100ls
1
0
4
3
0
1
99.4
...
136
0
3.19
3.40
8.0
115
5500
18
22
17450.0
5 rows × 26 columns

Correlation Matrix
In [35]:

plt.figure(figsize=(20,20))
sns.heatmap(car_df.corr(),annot=True,cmap="RdYlGn")

Out[35]:
AxesSubplot:

Select Feautures with highest +ve and -ve Correlation
In [36]:

car_df.corr()['price'].sort_values(ascending=False)

Out[36]:
price 1.000000
enginesize 0.874145
curbweight 0.835305
horsepower 0.808139
carwidth 0.759325
cylindernumber 0.718305
carlength 0.682920
drivewheel 0.577992
wheelbase 0.577816
boreratio 0.553173
enginelocation 0.324973
fuelsystem 0.324973
aspiration 0.177926
carheight 0.119336
stroke 0.079443
compressionratio 0.067984
enginetype 0.049171
doornumber 0.031835
symboling -0.079978
carbody -0.083976
peakrpm -0.085267
fueltype -0.105679
car_ID -0.109093
citympg -0.685751
highwaympg -0.697599
Name: price, dtype: float64
In [37]:

car = car_df[['drivewheel','enginelocation','wheelbase','carlength','carwidth','curbweight','cylindernumber','enginesize',
'fuelsystem','fuelsystem','boreratio','horsepower','citympg','highwaympg','price']]

In [38]:

car.head()

Out[38]:

drivewheel
enginelocation
wheelbase
carlength
carwidth
curbweight
cylindernumber
enginesize
fuelsystem
fuelsystem
boreratio
horsepower
citympg
highwaympg
price
0
2
1
88.6
168.8
64.1
2548
4
130
0
0
3.47
111
21
27
13495.0
1
2
1
88.6
168.8
64.1
2548
4
130
0
0
3.47
111
21
27
16500.0
2
2
1
94.5
171.2
65.5
2823
6
152
0
0
2.68
154
19
26
16500.0
3
1
1
99.8
176.6
66.2
2337
4
109
0
0
3.19
102
24
30
13950.0
4
0
1
99.4
176.6
66.4
2824
5
136
0
0
3.19
115
18
22
17450.0

Split Data into Train, Test and Validation¶
In [39]:


from sklearn.model_selection import train_test_split

In [40]:

x = (car.loc[:, car.columns != 'price'])
y = (car.loc[:, car.columns == 'price'])

In [41]:

Split to 50% Train and 50% Test

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.50, random_state=42)

Split 50% Test into further 30% Test and 20% Validation

x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.40, random_state=42)

In [42]:

print(x_train.shape)
print(x_test.shape)
print(x_val.shape)

(102, 14)
(61, 14)
(42, 14)
In [43]:

print(y_train.shape)
print(y_test.shape)
print(y_val.shape)

(102, 1)
(61, 1)
(42, 1)

Use Min Max Scaler to Scale all Features
In [44]:


from sklearn.preprocessing import MinMaxScaler

In [45]:

min_max_scaler = MinMaxScaler()

In [46]:

x_train_s = min_max_scaler.fit_transform(x_train)
x_test_s = min_max_scaler.fit_transform(x_test)
x_val_s = min_max_scaler.fit_transform(x_val)

y_train_s = min_max_scaler.fit_transform(y_train)
y_test_s = min_max_scaler.fit_transform(y_test)
y_val_s = min_max_scaler.fit_transform(y_val)

In [72]:

print(x_train_s.shape)
print(x_test_s.shape)
print(x_val_s.shape)

print(y_train_s.shape)
print(y_test_s.shape)
print(y_val_s.shape)

(102, 14)
(61, 14)
(42, 14)
(102, 1)
(61, 1)
(42, 1)

Create Model
In [49]:

Build Model and Architecture of Deep Neural Network


model = Sequential() # Initialize ANN

Add layers of Neural Network = 4 , input_dim = number oof features

units = 10,8,6 is number of neurons

model.add(Dense(units = 10, activation='relu', input_dim=14))
model.add(Dense(units = 8, activation='relu'))
model.add(Dense(units = 6, activation='relu'))
model.add(Dense(units = 1, activation='relu'))

In [50]:

Loss Function measures how well the model did on training and then tries

to improve on it using optimizer

model.compile(optimizer = 'adam',
loss = 'mse',
metrics = ['mae'])

In [51]:

hist = model.fit(
x_train_s, y_train_s, epochs = 100,
validation_data = (x_val_s, y_val_s)
)

Epoch 1/100
4/4 [==============================] - 0s 53ms/step - loss: 0.0897 - mae: 0.2146 - val_loss: 0.1424 - val_mae: 0.2670
Epoch 2/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0886 - mae: 0.2133 - val_loss: 0.1404 - val_mae: 0.2649
Epoch 3/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0857 - mae: 0.2085 - val_loss: 0.1344 - val_mae: 0.2592
Epoch 4/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0812 - mae: 0.1992 - val_loss: 0.1286 - val_mae: 0.2520
Epoch 5/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0761 - mae: 0.1867 - val_loss: 0.1201 - val_mae: 0.2364
Epoch 6/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0705 - mae: 0.1756 - val_loss: 0.1113 - val_mae: 0.2167
Epoch 7/100
4/4 [==============================] - 0s 6ms/step - loss: 0.0649 - mae: 0.1638 - val_loss: 0.1025 - val_mae: 0.2007
Epoch 8/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0589 - mae: 0.1515 - val_loss: 0.0943 - val_mae: 0.1878
Epoch 9/100
4/4 [==============================] - 0s 6ms/step - loss: 0.0542 - mae: 0.1422 - val_loss: 0.0867 - val_mae: 0.1837
Epoch 10/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0486 - mae: 0.1346 - val_loss: 0.0803 - val_mae: 0.1838
Epoch 11/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0441 - mae: 0.1292 - val_loss: 0.0747 - val_mae: 0.1848
Epoch 12/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0401 - mae: 0.1249 - val_loss: 0.0694 - val_mae: 0.1856
Epoch 13/100
4/4 [==============================] - 0s 6ms/step - loss: 0.0366 - mae: 0.1202 - val_loss: 0.0642 - val_mae: 0.1833
Epoch 14/100
4/4 [==============================] - 0s 6ms/step - loss: 0.0330 - mae: 0.1153 - val_loss: 0.0590 - val_mae: 0.1785
Epoch 15/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0301 - mae: 0.1093 - val_loss: 0.0537 - val_mae: 0.1724
Epoch 16/100
4/4 [==============================] - 0s 6ms/step - loss: 0.0265 - mae: 0.1048 - val_loss: 0.0494 - val_mae: 0.1674
Epoch 17/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0242 - mae: 0.1013 - val_loss: 0.0458 - val_mae: 0.1631
Epoch 18/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0220 - mae: 0.0976 - val_loss: 0.0428 - val_mae: 0.1598
Epoch 19/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0201 - mae: 0.0950 - val_loss: 0.0399 - val_mae: 0.1567
Epoch 20/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0183 - mae: 0.0921 - val_loss: 0.0367 - val_mae: 0.1519
Epoch 21/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0166 - mae: 0.0873 - val_loss: 0.0336 - val_mae: 0.1456
Epoch 22/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0151 - mae: 0.0824 - val_loss: 0.0312 - val_mae: 0.1406
Epoch 23/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0138 - mae: 0.0772 - val_loss: 0.0292 - val_mae: 0.1357
Epoch 24/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0130 - mae: 0.0742 - val_loss: 0.0273 - val_mae: 0.1343
Epoch 25/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0118 - mae: 0.0726 - val_loss: 0.0259 - val_mae: 0.1338
Epoch 26/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0111 - mae: 0.0727 - val_loss: 0.0245 - val_mae: 0.1312
Epoch 27/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0103 - mae: 0.0700 - val_loss: 0.0229 - val_mae: 0.1266
Epoch 28/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0097 - mae: 0.0677 - val_loss: 0.0215 - val_mae: 0.1227
Epoch 29/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0092 - mae: 0.0663 - val_loss: 0.0202 - val_mae: 0.1178
Epoch 30/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0090 - mae: 0.0659 - val_loss: 0.0194 - val_mae: 0.1152
Epoch 31/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0088 - mae: 0.0657 - val_loss: 0.0188 - val_mae: 0.1135
Epoch 32/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0084 - mae: 0.0656 - val_loss: 0.0191 - val_mae: 0.1181
Epoch 33/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0081 - mae: 0.0654 - val_loss: 0.0192 - val_mae: 0.1208
Epoch 34/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0081 - mae: 0.0656 - val_loss: 0.0188 - val_mae: 0.1194
Epoch 35/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0079 - mae: 0.0651 - val_loss: 0.0182 - val_mae: 0.1174
Epoch 36/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0077 - mae: 0.0643 - val_loss: 0.0174 - val_mae: 0.1138
Epoch 37/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0077 - mae: 0.0637 - val_loss: 0.0167 - val_mae: 0.1104
Epoch 38/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0074 - mae: 0.0631 - val_loss: 0.0174 - val_mae: 0.1167
Epoch 39/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0074 - mae: 0.0639 - val_loss: 0.0175 - val_mae: 0.1187
Epoch 40/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0073 - mae: 0.0633 - val_loss: 0.0163 - val_mae: 0.1122
Epoch 41/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0072 - mae: 0.0624 - val_loss: 0.0154 - val_mae: 0.1062
Epoch 42/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0071 - mae: 0.0613 - val_loss: 0.0152 - val_mae: 0.1055
Epoch 43/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0071 - mae: 0.0607 - val_loss: 0.0148 - val_mae: 0.1040
Epoch 44/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0069 - mae: 0.0604 - val_loss: 0.0148 - val_mae: 0.1059
Epoch 45/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0069 - mae: 0.0604 - val_loss: 0.0148 - val_mae: 0.1070
Epoch 46/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0069 - mae: 0.0603 - val_loss: 0.0143 - val_mae: 0.1047
Epoch 47/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0068 - mae: 0.0597 - val_loss: 0.0138 - val_mae: 0.1022
Epoch 48/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0067 - mae: 0.0593 - val_loss: 0.0135 - val_mae: 0.1006
Epoch 49/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0066 - mae: 0.0588 - val_loss: 0.0130 - val_mae: 0.0974
Epoch 50/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0066 - mae: 0.0589 - val_loss: 0.0127 - val_mae: 0.0958
Epoch 51/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0066 - mae: 0.0589 - val_loss: 0.0126 - val_mae: 0.0952
Epoch 52/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0066 - mae: 0.0588 - val_loss: 0.0129 - val_mae: 0.0974
Epoch 53/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0584 - val_loss: 0.0128 - val_mae: 0.0970
Epoch 54/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0581 - val_loss: 0.0125 - val_mae: 0.0949
Epoch 55/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0583 - val_loss: 0.0123 - val_mae: 0.0932
Epoch 56/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0586 - val_loss: 0.0122 - val_mae: 0.0925
Epoch 57/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0587 - val_loss: 0.0121 - val_mae: 0.0917
Epoch 58/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0586 - val_loss: 0.0121 - val_mae: 0.0917
Epoch 59/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0587 - val_loss: 0.0120 - val_mae: 0.0905
Epoch 60/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0586 - val_loss: 0.0123 - val_mae: 0.0930
Epoch 61/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0576 - val_loss: 0.0129 - val_mae: 0.0979
Epoch 62/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0576 - val_loss: 0.0139 - val_mae: 0.1038
Epoch 63/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0067 - mae: 0.0591 - val_loss: 0.0129 - val_mae: 0.0987
Epoch 64/100

4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0576 - val_loss: 0.0120 - val_mae: 0.0929
Epoch 65/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0579 - val_loss: 0.0111 - val_mae: 0.0861
Epoch 66/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0064 - mae: 0.0586 - val_loss: 0.0114 - val_mae: 0.0891
Epoch 67/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0063 - mae: 0.0577 - val_loss: 0.0122 - val_mae: 0.0949
Epoch 68/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0573 - val_loss: 0.0119 - val_mae: 0.0930
Epoch 69/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0572 - val_loss: 0.0111 - val_mae: 0.0866
Epoch 70/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0064 - mae: 0.0589 - val_loss: 0.0108 - val_mae: 0.0829
Epoch 71/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0063 - mae: 0.0585 - val_loss: 0.0111 - val_mae: 0.0867
Epoch 72/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0571 - val_loss: 0.0116 - val_mae: 0.0904
Epoch 73/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0062 - mae: 0.0567 - val_loss: 0.0118 - val_mae: 0.0920
Epoch 74/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0569 - val_loss: 0.0116 - val_mae: 0.0901
Epoch 75/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0565 - val_loss: 0.0116 - val_mae: 0.0902
Epoch 76/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0565 - val_loss: 0.0114 - val_mae: 0.0884
Epoch 77/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0565 - val_loss: 0.0113 - val_mae: 0.0883
Epoch 78/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0565 - val_loss: 0.0113 - val_mae: 0.0884
Epoch 79/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0564 - val_loss: 0.0112 - val_mae: 0.0877
Epoch 80/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0564 - val_loss: 0.0110 - val_mae: 0.0863
Epoch 81/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0568 - val_loss: 0.0107 - val_mae: 0.0836
Epoch 82/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0577 - val_loss: 0.0107 - val_mae: 0.0835
Epoch 83/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0573 - val_loss: 0.0109 - val_mae: 0.0847
Epoch 84/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0569 - val_loss: 0.0108 - val_mae: 0.0840
Epoch 85/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0574 - val_loss: 0.0108 - val_mae: 0.0827
Epoch 86/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0573 - val_loss: 0.0112 - val_mae: 0.0881
Epoch 87/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0061 - mae: 0.0562 - val_loss: 0.0121 - val_mae: 0.0944
Epoch 88/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0569 - val_loss: 0.0115 - val_mae: 0.0900
Epoch 89/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0059 - mae: 0.0558 - val_loss: 0.0107 - val_mae: 0.0807
Epoch 90/100
4/4 [==============================] - 0s 5ms/step - loss: 0.0066 - mae: 0.0606 - val_loss: 0.0107 - val_mae: 0.0773
Epoch 91/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0066 - mae: 0.0610 - val_loss: 0.0109 - val_mae: 0.0836
Epoch 92/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0569 - val_loss: 0.0123 - val_mae: 0.0958
Epoch 93/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0583 - val_loss: 0.0129 - val_mae: 0.0989
Epoch 94/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0065 - mae: 0.0587 - val_loss: 0.0117 - val_mae: 0.0926
Epoch 95/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0573 - val_loss: 0.0103 - val_mae: 0.0807
Epoch 96/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0061 - mae: 0.0583 - val_loss: 0.0102 - val_mae: 0.0789
Epoch 97/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0060 - mae: 0.0570 - val_loss: 0.0110 - val_mae: 0.0884
Epoch 98/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0062 - mae: 0.0565 - val_loss: 0.0125 - val_mae: 0.0967
Epoch 99/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0067 - mae: 0.0596 - val_loss: 0.0120 - val_mae: 0.0943
Epoch 100/100
4/4 [==============================] - 0s 4ms/step - loss: 0.0060 - mae: 0.0564 - val_loss: 0.0103 - val_mae: 0.0821

Check Model's Loss and Mean Absolute Error against Validation Data

In [52]:

hist.history.keys()

Out[52]:
dict_keys(['loss', 'mae', 'val_loss', 'val_mae'])
In [53]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[53]:
<matplotlib.legend.Legend at 0x13dadc92408>

In [54]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["mae"])
plt.plot(hist.history["val_mae"])
plt.title('Model Accuracy')
plt.ylabel('Mean Absolute Error')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[54]:
<matplotlib.legend.Legend at 0x13dae8e1588>

Retune Model based on number of Epocs, to get minimum Loss

In [55]:

Build Model and Architecture of Deep Neural Network


model = Sequential() # Initialize ANN

Add layers of Neural Network = 4 , input_dim = number oof features

units = 10,8,6 is number of neurons

model.add(Dense(units = 10, activation='relu', input_dim=14))
model.add(Dense(units = 8, activation='relu'))
model.add(Dense(units = 6, activation='relu'))
model.add(Dense(units = 1, activation='relu'))

In [56]:

Loss Function measures how well the model did on training and then tries

to improve on it using optimizer

model.compile(optimizer = 'adam',
loss = 'mse',
metrics = ['mae'])

In [57]:

hist = model.fit(
x_train_s, y_train_s, epochs = 37,
validation_data = (x_val_s, y_val_s)
)

Epoch 1/37
4/4 [==============================] - 0s 38ms/step - loss: 0.0790 - mae: 0.1862 - val_loss: 0.1247 - val_mae: 0.2394
Epoch 2/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0693 - mae: 0.1728 - val_loss: 0.1101 - val_mae: 0.2271
Epoch 3/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0593 - mae: 0.1634 - val_loss: 0.0950 - val_mae: 0.2224
Epoch 4/37
4/4 [==============================] - 0s 8ms/step - loss: 0.0519 - mae: 0.1584 - val_loss: 0.0828 - val_mae: 0.2191
Epoch 5/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0450 - mae: 0.1541 - val_loss: 0.0751 - val_mae: 0.2164
Epoch 6/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0419 - mae: 0.1529 - val_loss: 0.0698 - val_mae: 0.2148
Epoch 7/37
4/4 [==============================] - 0s 6ms/step - loss: 0.0392 - mae: 0.1521 - val_loss: 0.0660 - val_mae: 0.2120
Epoch 8/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0374 - mae: 0.1497 - val_loss: 0.0629 - val_mae: 0.2079
Epoch 9/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0356 - mae: 0.1471 - val_loss: 0.0602 - val_mae: 0.2038
Epoch 10/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0341 - mae: 0.1441 - val_loss: 0.0577 - val_mae: 0.1992
Epoch 11/37
4/4 [==============================] - 0s 6ms/step - loss: 0.0326 - mae: 0.1407 - val_loss: 0.0555 - val_mae: 0.1950
Epoch 12/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0312 - mae: 0.1372 - val_loss: 0.0534 - val_mae: 0.1909
Epoch 13/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0298 - mae: 0.1338 - val_loss: 0.0514 - val_mae: 0.1873
Epoch 14/37
4/4 [==============================] - 0s 6ms/step - loss: 0.0286 - mae: 0.1308 - val_loss: 0.0493 - val_mae: 0.1836
Epoch 15/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0271 - mae: 0.1274 - val_loss: 0.0474 - val_mae: 0.1805
Epoch 16/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0258 - mae: 0.1249 - val_loss: 0.0448 - val_mae: 0.1787
Epoch 17/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0242 - mae: 0.1226 - val_loss: 0.0423 - val_mae: 0.1763
Epoch 18/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0229 - mae: 0.1201 - val_loss: 0.0401 - val_mae: 0.1738
Epoch 19/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0213 - mae: 0.1169 - val_loss: 0.0381 - val_mae: 0.1702
Epoch 20/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0198 - mae: 0.1127 - val_loss: 0.0363 - val_mae: 0.1669
Epoch 21/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0184 - mae: 0.1090 - val_loss: 0.0344 - val_mae: 0.1632
Epoch 22/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0169 - mae: 0.1042 - val_loss: 0.0327 - val_mae: 0.1595
Epoch 23/37
4/4 [==============================] - 0s 5ms/step - loss: 0.0158 - mae: 0.1007 - val_loss: 0.0312 - val_mae: 0.1569
Epoch 24/37
4/4 [==============================] - 0s 6ms/step - loss: 0.0146 - mae: 0.0977 - val_loss: 0.0296 - val_mae: 0.1536
Epoch 25/37
4/4 [==============================] - 0s 7ms/step - loss: 0.0134 - mae: 0.0928 - val_loss: 0.0279 - val_mae: 0.1493
Epoch 26/37
4/4 [==============================] - 0s 8ms/step - loss: 0.0122 - mae: 0.0866 - val_loss: 0.0263 - val_mae: 0.1446
Epoch 27/37
4/4 [==============================] - 0s 6ms/step - loss: 0.0112 - mae: 0.0808 - val_loss: 0.0249 - val_mae: 0.1407
Epoch 28/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0105 - mae: 0.0758 - val_loss: 0.0237 - val_mae: 0.1373
Epoch 29/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0099 - mae: 0.0712 - val_loss: 0.0228 - val_mae: 0.1351
Epoch 30/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0091 - mae: 0.0674 - val_loss: 0.0226 - val_mae: 0.1380
Epoch 31/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0084 - mae: 0.0663 - val_loss: 0.0233 - val_mae: 0.1436
Epoch 32/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0082 - mae: 0.0673 - val_loss: 0.0242 - val_mae: 0.1470
Epoch 33/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0081 - mae: 0.0665 - val_loss: 0.0235 - val_mae: 0.1447
Epoch 34/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0077 - mae: 0.0627 - val_loss: 0.0215 - val_mae: 0.1377
Epoch 35/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0073 - mae: 0.0597 - val_loss: 0.0202 - val_mae: 0.1326
Epoch 36/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0073 - mae: 0.0596 - val_loss: 0.0198 - val_mae: 0.1308
Epoch 37/37
4/4 [==============================] - 0s 4ms/step - loss: 0.0072 - mae: 0.0591 - val_loss: 0.0199 - val_mae: 0.1312
In [58]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[58]:
<matplotlib.legend.Legend at 0x13db1415588>

Get the Model's Predicted Values based on XTest data

In [59]:

predictions = model.predict(x_test_s)
predictions = min_max_scaler.inverse_transform(predictions)

In [60]:


print(len(predictions))
print(len(x_test_s))

61
61
In [61]:

predictions

Out[61]:
array([[10048.839 ],
[ 8380.175 ],
[ 8223.703 ],
[ 8810.324 ],
[14645.429 ],
[10618.392 ],
[16750.482 ],
[22431.844 ],
[14646.784 ],
[14541.105 ],
[11405.239 ],
[ 7192.611 ],
[12878.014 ],
[14615.151 ],
[ 6591.732 ],
[11019.461 ],
[ 7192.611 ],
[ 7243.526 ],
[17288.361 ],
[11949.2295],
[10361.42 ],
[12546.919 ],
[13176.6045],
[ 7763.626 ],
[ 6462.4473],
[10918.073 ],
[ 7132.678 ],
[ 7897.882 ],
[19225.326 ],
[17506.357 ],
[15722.58 ],
[10610.157 ],
[27337.594 ],
[14543.716 ],
[27198.547 ],
[ 8718.396 ],
[12174.981 ],
[ 9417.016 ],
[10529.083 ],
[ 6697.23 ],
[16905.152 ],
[ 6884.7744],
[ 8718.396 ],
[14281.147 ],
[17759.367 ],
[ 6317.3433],
[19859.943 ],
[10533.662 ],
[ 5913.2134],
[ 8058.2573],
[10633.473 ],
[15969.933 ],
[12289.701 ],
[14882.57 ],
[10714.174 ],
[10618.468 ],
[15606.13 ],
[29543.697 ],
[16460.264 ],
[20283.934 ],
[10186.812 ]], dtype=float32)

Get the Model's Predicted Values based on XTest data

In [62]:

y_test['price'].values

Out[62]:
array([ 7689. , 9258. , 6918. , 7603. , 13415. , 9960. ,
18150. , 41315. , 12764. , 12964. , 9988. , 6692. ,
11850. , 12629. , 8916.5 , 10198. , 7609. , 7788. ,
20970. , 11259. , 6989. , 8449. , 15510. , 7349. ,
6295. , 10345. , 6229. , 6938. , 17199. , 18950. ,
18620. , 18344. , 37028. , 16900. , 34028. , 11845. ,
9538. , 7898. , 13845. , 5389. , 16503. , 6377. ,
10945. , 12940. , 17859.167, 7099. , 13499. , 9549. ,
6479. , 7609. , 10595. , 16695. , 11048. , 16845. ,
8495. , 9233. , 18150. , 40960. , 17075. , 16558. ,
8558. ])
In [63]:

y_train['Type'] = 'Train'

C:\Users\tahir\anaconda3\envs\tensor\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
In [64]:

y_test['Type'] = 'Test'

In [65]:

pred = pd.DataFrame(data = predictions,index = y_test.index, columns=['Predcited Price'])

In [67]:

plot = pd.DataFrame()
plot = pd.concat([y_train,y_test])
plot = plot.merge(pred,how='left',left_index=True, right_index=True)

In [68]:

plot

Out[68]:

price
Type
Predcited Price
126
32528.0
Train
NaN
196
15985.0
Train
NaN
141
7126.0
Train
NaN
4
17450.0
Train
NaN
32
5399.0
Train
NaN
...
...
...
...
136
18150.0
Test
15606.129883
73
40960.0
Test
29543.697266
114
17075.0
Test
16460.263672
178
16558.0
Test
20283.933594
27
8558.0
Test
10186.811523
163 rows × 3 columns
In [69]:


plot_s = plot.reset_index()

In [70]:


plt.figure(figsize=(12,7))
plt.plot(plot_s[plot_s.Type == 'Train']['price'],label='Train')
plt.plot(plot_s[plot_s.Type == 'Test']['price'],label='Test')
plt.plot(plot_s['Predcited Price'],label='Predictions')
plt.legend()
plt.show()

@Ghulam1215
Copy link

Task::
Identify fraudulent credit card transactions

import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
import seaborn as sns

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import Adam, SGD

In [26]:


credit_df = pd.read_csv('creditcard.csv')

In [27]:

credit_df.isnull().sum()

Out[27]:
Time 0
V1 0
V2 0
V3 0
V4 0
V5 0
V6 0
V7 0
V8 0
V9 0
V10 0
V11 0
V12 0
V13 0
V14 0
V15 0
V16 0
V17 0
V18 0
V19 0
V20 0
V21 0
V22 0
V23 0
V24 0
V25 0
V26 0
V27 0
V28 0
Amount 0
Class 0
dtype: int64
In [28]:

credit_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 284807 entries, 0 to 284806
Data columns (total 31 columns):

Column Non-Null Count Dtype


0 Time 284807 non-null float64
1 V1 284807 non-null float64
2 V2 284807 non-null float64
3 V3 284807 non-null float64
4 V4 284807 non-null float64
5 V5 284807 non-null float64
6 V6 284807 non-null float64
7 V7 284807 non-null float64
8 V8 284807 non-null float64
9 V9 284807 non-null float64
10 V10 284807 non-null float64
11 V11 284807 non-null float64
12 V12 284807 non-null float64
13 V13 284807 non-null float64
14 V14 284807 non-null float64
15 V15 284807 non-null float64
16 V16 284807 non-null float64
17 V17 284807 non-null float64
18 V18 284807 non-null float64
19 V19 284807 non-null float64
20 V20 284807 non-null float64
21 V21 284807 non-null float64
22 V22 284807 non-null float64
23 V23 284807 non-null float64
24 V24 284807 non-null float64
25 V25 284807 non-null float64
26 V26 284807 non-null float64
27 V27 284807 non-null float64
28 V28 284807 non-null float64
29 Amount 284807 non-null float64
30 Class 284807 non-null int64
dtypes: float64(30), int64(1)
memory usage: 67.4 MB

Correlation
In [29]:

plt.figure(figsize=(20,20))
sns.heatmap(credit_df.corr(),annot=True,cmap="RdYlGn")

Out[29]:
AxesSubplot:

In [30]:

credit_df.corr()['Class'].sort_values(ascending=False)

Out[30]:
Class 1.000000
V11 0.154876
V4 0.133447
V2 0.091289
V21 0.040413
V19 0.034783
V20 0.020090
V8 0.019875
V27 0.017580
V28 0.009536
Amount 0.005632
V26 0.004455
V25 0.003308
V22 0.000805
V23 -0.002685
V15 -0.004223
V13 -0.004570
V24 -0.007221
Time -0.012323
V6 -0.043643
V5 -0.094974
V9 -0.097733
V1 -0.101347
V18 -0.111485
V7 -0.187257
V3 -0.192961
V16 -0.196539
V10 -0.216883
V12 -0.260593
V14 -0.302544
V17 -0.326481
Name: Class, dtype: float64

Select Features with High Corelation
In [31]:


credit = credit_df[['V11','V4','V7','V3','V16','V10','V12','V14','V17','Class']]

In [32]:

credit.head()

Out[32]:

V11
V4
V7
V3
V16
V10
V12
V14
V17
Class
0
-0.551600
1.378155
0.239599
2.536347
-0.470401
0.090794
-0.617801
-0.311169
0.207971
0
1
1.612727
0.448154
-0.078803
0.166480
0.463917
-0.166974
1.065235
-0.143772
-0.114805
0
2
0.624501
0.379780
0.791461
1.773209
-2.890083
0.207643
0.066084
-0.165946
1.109969
0
3
-0.226487
-0.863291
0.237609
1.792993
-1.059647
-0.054952
0.178228
-0.287924
-0.684093
0
4
-0.822843
0.403034
0.592941
1.548718
-0.451449
0.753074
0.538196
-1.119670
-0.237033
0

Split Data
In [33]:


from sklearn.model_selection import train_test_split

In [34]:


x = (credit.loc[:, credit.columns != 'Class'])
y = (credit.loc[:, credit.columns == 'Class'])

In [35]:

Split to 50% Train and 50% Test

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.50, random_state=42)

Split 50% Test into further 30% Test and 20% Validation

x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.40, random_state=42)

In [36]:

print(x_train.shape)
print(x_test.shape)
print(x_val.shape)

(142403, 9)
(85442, 9)
(56962, 9)
In [37]:


print(y_train.shape)
print(y_test.shape)
print(y_val.shape)

(142403, 1)
(85442, 1)
(56962, 1)

Scale Data
In [38]:


from sklearn.preprocessing import MinMaxScaler

In [39]:

Use MinMax Scaller to scale all features

min_max_scaler = MinMaxScaler()

In [40]:


x_train_s = min_max_scaler.fit_transform(x_train)
x_test_s = min_max_scaler.fit_transform(x_test)
x_val_s = min_max_scaler.fit_transform(x_val)

y_train_s = min_max_scaler.fit_transform(y_train)
y_test_s = min_max_scaler.fit_transform(y_test)
y_val_s = min_max_scaler.fit_transform(y_val)

In [41]:

print(x_train_s.shape)
print(x_test_s.shape)
print(x_val_s.shape)

print(y_train_s.shape)
print(y_test_s.shape)
print(y_val_s.shape)

(142403, 9)
(85442, 9)
(56962, 9)
(142403, 1)
(85442, 1)
(56962, 1)

Build Model
In [42]:

Build Model and Architecture of Deep Neural Network


model = Sequential() # Initialize ANN

Add layers of Neural Network = 4 , input_dim = number oof features

units = 10,8,6 is number of neurons

model.add(Dense(units = 10, activation='relu', input_dim=9))
model.add(Dense(units = 8, activation='relu'))
model.add(Dense(units = 6, activation='relu'))
model.add(Dense(units = 1, activation='relu'))

In [43]:

Loss Dunction measures how well the model did on training and then tries

to improve on it using optimizer

model.compile(optimizer = 'sgd',
loss = 'binary_crossentropy',
metrics = ['accuracy'])

In [44]:

hist = model.fit(
x_train_s, y_train_s, epochs = 100,
validation_data = (x_val_s, y_val_s)
)

Epoch 1/100
4451/4451 [==============================] - 4s 805us/step - loss: 0.0291 - accuracy: 0.9980 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 2/100
4451/4451 [==============================] - 3s 764us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 3/100
4451/4451 [==============================] - 3s 753us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 4/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 5/100
4451/4451 [==============================] - 3s 764us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 6/100
4451/4451 [==============================] - 3s 752us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 7/100
4451/4451 [==============================] - 3s 750us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 8/100
4451/4451 [==============================] - 3s 768us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 9/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 10/100
4451/4451 [==============================] - 3s 764us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 11/100
4451/4451 [==============================] - 3s 761us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 12/100
4451/4451 [==============================] - 3s 759us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 13/100
4451/4451 [==============================] - 3s 768us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 14/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 15/100
4451/4451 [==============================] - 3s 772us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 16/100
4451/4451 [==============================] - 3s 765us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 17/100
4451/4451 [==============================] - 3s 750us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 18/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 19/100
4451/4451 [==============================] - 3s 754us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 20/100
4451/4451 [==============================] - 3s 753us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 21/100
4451/4451 [==============================] - 3s 761us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 22/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 23/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 24/100
4451/4451 [==============================] - 3s 754us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 25/100
4451/4451 [==============================] - 3s 759us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 26/100
4451/4451 [==============================] - 3s 754us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 27/100
4451/4451 [==============================] - 3s 761us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 28/100
4451/4451 [==============================] - 3s 753us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 29/100
4451/4451 [==============================] - 3s 759us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 30/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 31/100
4451/4451 [==============================] - 3s 760us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 32/100
4451/4451 [==============================] - 3s 742us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 33/100
4451/4451 [==============================] - 3s 748us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 34/100
4451/4451 [==============================] - 3s 751us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 35/100
4451/4451 [==============================] - 3s 755us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 36/100
4451/4451 [==============================] - 3s 746us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 37/100
4451/4451 [==============================] - 3s 763us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 38/100
4451/4451 [==============================] - 3s 756us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 39/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 40/100
4451/4451 [==============================] - 3s 754us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 41/100
4451/4451 [==============================] - 3s 752us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 42/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 43/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 44/100
4451/4451 [==============================] - 3s 745us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 45/100
4451/4451 [==============================] - 3s 761us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 46/100
4451/4451 [==============================] - 3s 756us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 47/100
4451/4451 [==============================] - 3s 742us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 48/100
4451/4451 [==============================] - 3s 745us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 49/100
4451/4451 [==============================] - 3s 747us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 50/100
4451/4451 [==============================] - 3s 763us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 51/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 52/100
4451/4451 [==============================] - 3s 741us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 53/100
4451/4451 [==============================] - 3s 750us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 54/100
4451/4451 [==============================] - 3s 747us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 55/100
4451/4451 [==============================] - 3s 739us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 56/100

4451/4451 [==============================] - 3s 764us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 57/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 58/100
4451/4451 [==============================] - 3s 781us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 59/100
4451/4451 [==============================] - 4s 808us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 60/100
4451/4451 [==============================] - 4s 790us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 61/100
4451/4451 [==============================] - 3s 742us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 62/100
4451/4451 [==============================] - 3s 771us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 63/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 64/100
4451/4451 [==============================] - 3s 761us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 65/100
4451/4451 [==============================] - 3s 771us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 66/100
4451/4451 [==============================] - 3s 764us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 67/100
4451/4451 [==============================] - 3s 768us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 68/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 69/100
4451/4451 [==============================] - 3s 776us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 70/100
4451/4451 [==============================] - 3s 751us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 71/100
4451/4451 [==============================] - 3s 760us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 72/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 73/100
4451/4451 [==============================] - 3s 756us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 74/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 75/100
4451/4451 [==============================] - 3s 750us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 76/100
4451/4451 [==============================] - 3s 753us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 77/100
4451/4451 [==============================] - 3s 766us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 78/100
4451/4451 [==============================] - 3s 768us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 79/100
4451/4451 [==============================] - 3s 769us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 80/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 81/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 82/100
4451/4451 [==============================] - 3s 750us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 83/100
4451/4451 [==============================] - 3s 756us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 84/100
4451/4451 [==============================] - 3s 773us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 85/100
4451/4451 [==============================] - 3s 769us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 86/100
4451/4451 [==============================] - 3s 763us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 87/100
4451/4451 [==============================] - 3s 765us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 88/100
4451/4451 [==============================] - 3s 767us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 89/100
4451/4451 [==============================] - 3s 752us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 90/100
4451/4451 [==============================] - 3s 765us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 91/100
4451/4451 [==============================] - 3s 775us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 92/100
4451/4451 [==============================] - 3s 769us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 93/100
4451/4451 [==============================] - 3s 749us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 94/100
4451/4451 [==============================] - 3s 757us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 95/100
4451/4451 [==============================] - 3s 752us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 96/100
4451/4451 [==============================] - 3s 754us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 97/100
4451/4451 [==============================] - 3s 771us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 98/100
4451/4451 [==============================] - 3s 735us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 99/100
4451/4451 [==============================] - 3s 739us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983
Epoch 100/100
4451/4451 [==============================] - 3s 742us/step - loss: 0.0266 - accuracy: 0.9983 - val_loss: 0.0265 - val_accuracy: 0.9983

Evaluate Accuracy of Data based on Test Data

Accuracy of 99.8%
In [45]:


model.evaluate(x_test_s, y_test_s)[1]

2671/2671 [==============================] - 1s 505us/step - loss: 0.0267 - accuracy: 0.9983
Out[45]:
0.9982678294181824

Check Model's Loss against Validation Data
In [46]:


hist.history.keys()

Out[46]:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [47]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[47]:
<matplotlib.legend.Legend at 0x1c883009bc8>

Do Predictions on Test Data
In [48]:

predictions = model.predict(x_test_s)
prediction2 = np.where(predictions>= 0.87, 1,0 )

In [49]:

print(len(predictions))
print(len(x_test_s))

85442
85442

@Ghulam1215
Copy link

[1]:

import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
from PIL import Image
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Dropout, BatchNormalization, Activation
from tensorflow.keras import layers
from keras.utils import to_categorical

Test to load and Resize file

In [2]:

p = 64

In [22]:

img = load_img('E:/PIAIC AI Course March 2020/Q2/deeplearning assigment/flowers/daisy/5547758_eea9edfd54_n.jpg')
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
plt.imshow(img)

Out[22]:
<matplotlib.image.AxesImage at 0x1fd4d0ce508>

Load Pictures and Convert to array and make a Single X(Pixels) and Y(Lables) dataset
In [23]:

classification = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']

In [32]:

path = 'E:/PIAIC AI Course March 2020/Q2/deeplearning assigment/flowers/'

In [33]:

length_daisy = len(os.listdir(path+'daisy'))
length_dandelion = len(os.listdir(path+'dandelion'))
length_rose = len(os.listdir(path+'rose'))
length_sunflower = len(os.listdir(path+'sunflower'))
length_tulip = len(os.listdir(path+'tulip'))

In [34]:

print(length_daisy)
print(length_dandelion)
print(length_rose)
print(length_sunflower)
print(length_tulip)

769
326
1
1
1
In [35]:

length_x = length_daisy + length_dandelion + length_rose + length_sunflower + length_tulip
length_x

Out[35]:
1098
In [36]:

x = np.zeros((length_x,p,p,3))
x.shape

Out[36]:
(1098, 64, 64, 3)
In [38]:


y = np.zeros((length_x,1))
y.shape

Out[38]:
(1098, 1)

Load daisy pictures
In [39]:

z = 0 #Counter for image file loading
d = 0 #Counter for array append for different flowers
for i in os.listdir(path+'daisy'):
img = load_img(path+'daisy/'+os.listdir(path+'daisy')[z])
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
x[d] = img_array
y[d] = 0

z += 1
d += 1

d

Out[39]:
769

Load dandelion pictures
In [40]:


z = 0 #Counter for image file loading
d = 769 #Counter for array append for different flowers
for i in os.listdir(path+'dandelion'):
img = load_img(path+'dandelion/'+os.listdir(path+'dandelion')[z])
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
x[d] = img_array
y[d] = 1

z += 1
d += 1

d

Out[40]:
1095

Load rose pictures
In [42]:

z = 0 #Counter for image file loading
d = 1095 #Counter for array append for different flowers
for i in os.listdir(path+'rose'):
img = load_img(path+'rose/'+os.listdir(path+'rose')[z])
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
x[d] = img_array
y[d] = 2

z += 1
d += 1

d

Out[42]:
1096

Load sunflower pictures
In [44]:

z = 0 #Counter for image file loading
d = 1096 #Counter for array append for different flowers
for i in os.listdir(path+'sunflower'):
img = load_img(path+'sunflower/'+os.listdir(path+'sunflower')[z])
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
x[d] = img_array
y[d] = 3

z += 1
d += 1

d

Out[44]:
1097

Load tulip pictures
In [45]:


z = 0 #Counter for image file loading
d = 1097 #Counter for array append for different flowers
for i in os.listdir(path+'tulip'):
img = load_img(path+'tulip/'+os.listdir(path+'tulip')[z])
img = img.resize((p, p), Image.ANTIALIAS)
img_array = img_to_array(img)
x[d] = img_array
y[d] = 4

z += 1
d += 1

d

Out[45]:
1098
In [46]:

print(x.shape)
print(y.shape)

(1098, 64, 64, 3)
(1098, 1)

Split Data into Train, Test and Validation
In [47]:

from sklearn.model_selection import train_test_split

In [48]:

Split to 60% Train and 40% Test

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.40, random_state=42)

In [50]:

print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)

(658, 64, 64, 3)
(440, 64, 64, 3)
(658, 1)
(440, 1)

One Hot encoding of Labels
In [51]:

Convert the labels into a set of 5 numbers to input into NN

Every set of rows will have 5 values, all 0 except the one with actual label index will be 1

y_train_one_hot = to_categorical(y_train)
y_test_one_hot = to_categorical(y_test)

In [52]:


y_train_one_hot[0]

Out[52]:
array([0., 1., 0., 0., 0.], dtype=float32)
In [53]:


print(y_train_one_hot.shape)
print(y_test_one_hot.shape)

(658, 5)
(440, 2)

Normalize pixel values between 0 and 1
In [54]:

x_train = x_train /255
x_test = x_test /255

Create Model
In [55]:

model = Sequential()

First Layer (32 5x5 relu convoluted feature maps)

model.add(Conv2D(32, (5,5) , activation = 'relu', input_shape=(p,p,3)))

2nd Layer as pooling Layer (creating a pooling layer with 2x2 pixel filter)

model.add(MaxPooling2D(pool_size = (2,2)))

3rd Layer (2nd Colvolution Layer)

model.add(Conv2D(32, (5,5) , activation = 'relu'))

4th Layer (2nd Pooling Layer)

model.add(MaxPooling2D(pool_size = (2,2)))

5th Layer (Flattening Layer)

model.add(Flatten())

6th Layer with 1000 neurons to feed from previous layers

model.add(Dense(1000, activation = 'relu'))

7th Layer (Dropout layer with 50%)

model.add(Dropout(0.5))

8th Layer with 500 neurons to feed from previous layers

model.add(Dense(500, activation = 'relu'))

9th Layer (Dropout layer with 50%)

model.add(Dropout(0.5))

10th Layer with 250 neurons to feed from previous layers

model.add(Dense(250, activation = 'relu'))

11th Layer with 5 neurons (as there are 10 different Classifications)

model.add(Dense(5, activation = 'softmax'))

In [56]:

Compile the Model


model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy']
)

In [57]:

Train the Model with Validation Data of 20%


hist = model.fit(x_train, y_train_one_hot, epochs = 15, batch_size = 256,
validation_split = 0.2)

Epoch 1/15
3/3 [==============================] - 5s 2s/step - loss: 1.4614 - accuracy: 0.4563 - val_loss: 1.0337 - val_accuracy: 0.3030
Epoch 2/15
3/3 [==============================] - 1s 445ms/step - loss: 0.9057 - accuracy: 0.4487 - val_loss: 0.7313 - val_accuracy: 0.6894
Epoch 3/15
3/3 [==============================] - 1s 435ms/step - loss: 0.7361 - accuracy: 0.6787 - val_loss: 0.6622 - val_accuracy: 0.6894
Epoch 4/15
3/3 [==============================] - 1s 454ms/step - loss: 0.7091 - accuracy: 0.6787 - val_loss: 0.6351 - val_accuracy: 0.6894
Epoch 5/15
3/3 [==============================] - 1s 470ms/step - loss: 0.6380 - accuracy: 0.6787 - val_loss: 0.6495 - val_accuracy: 0.6894
Epoch 6/15
3/3 [==============================] - 1s 462ms/step - loss: 0.6798 - accuracy: 0.6787 - val_loss: 0.6517 - val_accuracy: 0.6894
Epoch 7/15
3/3 [==============================] - 1s 448ms/step - loss: 0.6546 - accuracy: 0.6749 - val_loss: 0.7031 - val_accuracy: 0.6894
Epoch 8/15
3/3 [==============================] - 1s 450ms/step - loss: 0.6491 - accuracy: 0.6597 - val_loss: 0.6277 - val_accuracy: 0.6894
Epoch 9/15
3/3 [==============================] - 1s 477ms/step - loss: 0.6738 - accuracy: 0.6787 - val_loss: 0.6255 - val_accuracy: 0.6894
Epoch 10/15
3/3 [==============================] - 1s 459ms/step - loss: 0.6074 - accuracy: 0.6768 - val_loss: 0.6474 - val_accuracy: 0.6894
Epoch 11/15
3/3 [==============================] - 1s 454ms/step - loss: 0.6049 - accuracy: 0.6768 - val_loss: 0.6118 - val_accuracy: 0.6894
Epoch 12/15
3/3 [==============================] - 1s 457ms/step - loss: 0.5942 - accuracy: 0.6787 - val_loss: 0.6127 - val_accuracy: 0.6894
Epoch 13/15
3/3 [==============================] - 1s 452ms/step - loss: 0.5777 - accuracy: 0.6787 - val_loss: 0.6374 - val_accuracy: 0.6894
Epoch 14/15
3/3 [==============================] - 1s 441ms/step - loss: 0.5825 - accuracy: 0.6806 - val_loss: 0.6140 - val_accuracy: 0.6894
Epoch 15/15
3/3 [==============================] - 1s 430ms/step - loss: 0.5918 - accuracy: 0.6806 - val_loss: 0.6168 - val_accuracy: 0.6894

Check Model's Loss and Accuracy against Validation Data
In [62]:

hist.history.keys()

Out[62]:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [63]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[63]:
<matplotlib.legend.Legend at 0x1fd01baf108>

In [67]:

Visuialize the Training and Valudation Loss to see if Model is overfitting

plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['Train', 'Val'])

Out[67]:
<matplotlib.legend.Legend at 0x1fd01cf8a08>

Test the Model with random images online

Test Image 1
In [69]:

img = load_img('tulip.jpg')
img = img.resize((p, p), Image.ANTIALIAS)
plt.imshow(img)

Out[69]:
<matplotlib.image.AxesImage at 0x1fd01d63888>

In [70]:


img_array = img_to_array(img)

In [71]:

img_array.shape

Out[71]:
(64, 64, 3)
In [72]:


img_array = np.expand_dims(img_array, axis=0)
img_array.shape

Out[72]:
(1, 64, 64, 3)
In [73]:

predictions = model.predict(img_array)
predictions

Out[73]:
array([[1., 0., 0., 0., 0.]], dtype=float32)
In [74]:

Sort predictions from least to Greatest

list_index = [0,1,2,3,4]
x = predictions

for i in range(5):
for j in range(5):
if x[0][list_index[i]] > x[0][list_index[j]]:
temp = list_index[i]
list_index[i] = list_index[j]
list_index[j] = temp

Show the sorted labels in order

print(list_index)

[0, 1, 2, 3, 4]
In [75]:


for i in range(5):
print(classification[list_index[i]], ':', round(predictions[0][list_index[i]] * 100,2), '%' )

daisy : 100.0 %
dandelion : 0.0 %
rose : 0.0 %
sunflower : 0.0 %
tulip : 0.0 %

Test Image 2
In [76]:


img = load_img('Dandelion2.jpg')
img = img.resize((p, p), Image.ANTIALIAS)
plt.imshow(img)

Out[76]:
<matplotlib.image.AxesImage at 0x1fd00095408>

In [77]:


img_array = img_to_array(img)

In [78]:

img_array.shape

Out[78]:
(64, 64, 3)
In [79]:

img_array = np.expand_dims(img_array, axis=0)
img_array.shape

Out[79]:
(1, 64, 64, 3)
In [80]:

predictions = model.predict(img_array)
predictions

Out[80]:
array([[1., 0., 0., 0., 0.]], dtype=float32)
In [81]:

Sort predictions from least to Greatest

list_index = [0,1,2,3,4]
x = predictions

for i in range(5):
for j in range(5):
if x[0][list_index[i]] > x[0][list_index[j]]:
temp = list_index[i]
list_index[i] = list_index[j]
list_index[j] = temp

Show the sorted labels in order

print(list_index)

[0, 1, 2, 3, 4]
In [82]:


for i in range(5):
print(classification[list_index[i]], ':', round(predictions[0][list_index[i]] * 100,2), '%' )

daisy : 100.0 %
dandelion : 0.0 %
rose : 0.0 %
sunflower : 0.0 %
tulip : 0.0 %

Test Image 3
In [83]:

img = load_img('tulip2.jpg')
img = img.resize((p, p), Image.ANTIALIAS)
plt.imshow(img)

Out[83]:
<matplotlib.image.AxesImage at 0x1fd0006ef08>

In [84]:

img_array = img_to_array(img)

In [85]:


img_array.shape

Out[85]:
(64, 64, 3)
In [86]:

img_array = np.expand_dims(img_array, axis=0)
img_array.shape

Out[86]:
(1, 64, 64, 3)
In [87]:

predictions = model.predict(img_array)
predictions

Out[87]:
array([[1., 0., 0., 0., 0.]], dtype=float32)
In [88]:

Sort predictions from least to Greatest

list_index = [0,1,2,3,4]
x = predictions

for i in range(5):
for j in range(5):
if x[0][list_index[i]] > x[0][list_index[j]]:
temp = list_index[i]
list_index[i] = list_index[j]
list_index[j] = temp

Show the sorted labels in order

print(list_index)

[0, 1, 2, 3, 4]
In [89]:

for i in range(5):
print(classification[list_index[i]], ':', round(predictions[0][list_index[i]] * 100,2), '%' )

daisy : 100.0 %
dandelion : 0.0 %
rose : 0.0 %
sunflower : 0.0 %
tulip : 0.0 %

@Ghulam1215
Copy link

In [1]:

import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
import seaborn as sns

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import Adam, SGD

In [2]:

ion_df = pd.read_csv('ionosphere_data.csv')

In [3]:

ion_df.isnull().sum()

Out[3]:
feature1 0
feature2 0
feature3 0
feature4 0
feature5 0
feature6 0
feature7 0
feature8 0
feature9 0
feature10 0
feature11 0
feature12 0
feature13 0
feature14 0
feature15 0
feature16 0
feature17 0
feature18 0
feature19 0
feature20 0
feature21 0
feature22 0
feature23 0
feature24 0
feature25 0
feature26 0
feature27 0
feature28 0
feature29 0
feature30 0
feature31 0
feature32 0
feature33 0
feature34 0
label 0
dtype: int64
In [4]:


ion_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 351 entries, 0 to 350
Data columns (total 35 columns):

Column Non-Null Count Dtype


0 feature1 351 non-null int64
1 feature2 351 non-null int64
2 feature3 351 non-null float64
3 feature4 351 non-null float64
4 feature5 351 non-null float64
5 feature6 351 non-null float64
6 feature7 351 non-null float64
7 feature8 351 non-null float64
8 feature9 351 non-null float64
9 feature10 351 non-null float64
10 feature11 351 non-null float64
11 feature12 351 non-null float64
12 feature13 351 non-null float64
13 feature14 351 non-null float64
14 feature15 351 non-null float64
15 feature16 351 non-null float64
16 feature17 351 non-null float64
17 feature18 351 non-null float64
18 feature19 351 non-null float64
19 feature20 351 non-null float64
20 feature21 351 non-null float64
21 feature22 351 non-null float64
22 feature23 351 non-null float64
23 feature24 351 non-null float64
24 feature25 351 non-null float64
25 feature26 351 non-null float64
26 feature27 351 non-null float64
27 feature28 351 non-null float64
28 feature29 351 non-null float64
29 feature30 351 non-null float64
30 feature31 351 non-null float64
31 feature32 351 non-null float64
32 feature33 351 non-null float64
33 feature34 351 non-null float64
34 label 351 non-null object
dtypes: float64(32), int64(2), object(1)
memory usage: 96.1+ KB
In [5]:

ion_df['label'] = ion_df['label'].map({'g':1,'b':0})

In [6]:

ion_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 351 entries, 0 to 350
Data columns (total 35 columns):

Column Non-Null Count Dtype


0 feature1 351 non-null int64
1 feature2 351 non-null int64
2 feature3 351 non-null float64
3 feature4 351 non-null float64
4 feature5 351 non-null float64
5 feature6 351 non-null float64
6 feature7 351 non-null float64
7 feature8 351 non-null float64
8 feature9 351 non-null float64
9 feature10 351 non-null float64
10 feature11 351 non-null float64
11 feature12 351 non-null float64
12 feature13 351 non-null float64
13 feature14 351 non-null float64
14 feature15 351 non-null float64
15 feature16 351 non-null float64
16 feature17 351 non-null float64
17 feature18 351 non-null float64
18 feature19 351 non-null float64
19 feature20 351 non-null float64
20 feature21 351 non-null float64
21 feature22 351 non-null float64
22 feature23 351 non-null float64
23 feature24 351 non-null float64
24 feature25 351 non-null float64
25 feature26 351 non-null float64
26 feature27 351 non-null float64
27 feature28 351 non-null float64
28 feature29 351 non-null float64
29 feature30 351 non-null float64
30 feature31 351 non-null float64
31 feature32 351 non-null float64
32 feature33 351 non-null float64
33 feature34 351 non-null float64
34 label 351 non-null int64
dtypes: float64(32), int64(3)
memory usage: 96.1 KB
In [7]:

plt.figure(figsize=(20,20))
sns.heatmap(ion_df.corr(),annot=True,cmap="RdYlGn")

Out[7]:
AxesSubplot:

In [8]:


ion_df.corr()['label'].sort_values(ascending=False)

Out[8]:
label 1.000000
feature3 0.519145
feature5 0.516477
feature1 0.465614
feature7 0.450429
feature9 0.294852
feature31 0.294417
feature33 0.261157
feature29 0.250036
feature21 0.219583
feature8 0.207544
feature15 0.207201
feature23 0.204361
feature14 0.197041
feature25 0.188185
feature13 0.181682
feature11 0.167908
feature12 0.159940
feature6 0.149099
feature16 0.148775
feature4 0.125884
feature10 0.120634
feature18 0.119346
feature19 0.117435
feature17 0.087060
feature28 0.042756
feature20 0.035620
feature24 0.006193
feature26 0.001541
feature30 -0.003942
feature32 -0.036004
feature34 -0.064168
feature27 -0.111107
feature22 -0.116385
feature2 NaN
Name: label, dtype: float64

Select Features with high Correlation
In [9]:

ion = ion_df[['feature3','feature5','feature1','feature7','feature9',
'feature31','feature33','feature29','feature21','feature27','feature22','label']]

In [10]:

ion.head()

Out[10]:

feature3
feature5
feature1
feature7
feature9
feature31
feature33
feature29
feature21
feature27
feature22
label
0
0.99539
0.85243
1
0.83398
1.00000
0.42267
0.18641
0.21266
0.56971
0.41078
-0.29674
1
1
1.00000
0.93035
1
-0.10868
1.00000
-0.16626
-0.13738
-0.19040
-0.13151
-0.20468
-0.45300
0
2
1.00000
1.00000
1
1.00000
0.88965
0.60436
0.56045
0.43100
0.70887
0.58984
-0.27502
1
3
1.00000
1.00000
1
0.71216
0.00000
0.25682
-0.32382
1.00000
-0.69975
0.51613
1.00000
0
4
1.00000
0.94140
1
0.92106
0.77152
-0.05707
-0.04608
0.02431
0.05982
0.13290
-0.35575
1

Split Data
In [11]:

from sklearn.model_selection import train_test_split

In [12]:

x = (ion.loc[:, ion.columns != 'label'])
y = (ion.loc[:, ion.columns == 'label'])

In [13]:

Split to 50% Train and 50% Test

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.40, random_state=42)

In [14]:

print(x_train.shape)
print(x_test.shape)

print(y_train.shape)
print(y_test.shape)

(210, 11)
(141, 11)
(210, 1)
(141, 1)

Scale Data
In [15]:


from sklearn.preprocessing import MinMaxScaler

Use MinMax Scaller to scale all features

min_max_scaler = MinMaxScaler()

In [16]:


x_train_s = min_max_scaler.fit_transform(x_train)
x_test_s = min_max_scaler.fit_transform(x_test)

y_train_s = min_max_scaler.fit_transform(y_train)
y_test_s = min_max_scaler.fit_transform(y_test)

In [17]:


print(x_train_s.shape)
print(x_test_s.shape)

print(y_train_s.shape)
print(y_test_s.shape)

(210, 11)
(141, 11)
(210, 1)
(141, 1)

Build Model
In [18]:

Build Model and Architecture of Deep Neural Network


model = Sequential() # Initialize ANN

Add layers of Neural Network = 4 , input_dim = number oof features

units = 10,8,6 is number of neurons

model.add(Dense(units = 16, activation='relu', input_dim=11))
model.add(Dense(units = 1, activation='relu'))

In [19]:

Loss Dunction measures how well the model did on training and then tries

to improve on it using optimizer

model.compile(optimizer = 'sgd',
loss = 'binary_crossentropy',
metrics = ['accuracy'])

In [20]:

hist = model.fit(
x_train, y_train, epochs = 100
)

Epoch 1/100
7/7 [==============================] - 0s 3ms/step - loss: 9.4503 - accuracy: 0.3238
Epoch 2/100
7/7 [==============================] - 0s 572us/step - loss: 8.8411 - accuracy: 0.3286
Epoch 3/100
7/7 [==============================] - 0s 463us/step - loss: 1.5946 - accuracy: 0.6476
Epoch 4/100
7/7 [==============================] - 0s 1ms/step - loss: 1.1703 - accuracy: 0.7381
Epoch 5/100
7/7 [==============================] - 0s 1ms/step - loss: 0.8532 - accuracy: 0.7524
Epoch 6/100
7/7 [==============================] - 0s 572us/step - loss: 0.7769 - accuracy: 0.7810
Epoch 7/100
7/7 [==============================] - 0s 1ms/step - loss: 0.6678 - accuracy: 0.8048
Epoch 8/100
7/7 [==============================] - 0s 571us/step - loss: 0.6955 - accuracy: 0.8238
Epoch 9/100
7/7 [==============================] - 0s 571us/step - loss: 2.3554 - accuracy: 0.7571
Epoch 10/100
7/7 [==============================] - 0s 1ms/step - loss: 6.0458 - accuracy: 0.3952
Epoch 11/100
7/7 [==============================] - 0s 571us/step - loss: 0.6173 - accuracy: 0.8333
Epoch 12/100
7/7 [==============================] - 0s 572us/step - loss: 0.4774 - accuracy: 0.8476
Epoch 13/100
7/7 [==============================] - 0s 571us/step - loss: 0.3857 - accuracy: 0.8524
Epoch 14/100
7/7 [==============================] - 0s 787us/step - loss: 0.3708 - accuracy: 0.8619
Epoch 15/100
7/7 [==============================] - 0s 643us/step - loss: 0.3649 - accuracy: 0.8524
Epoch 16/100
7/7 [==============================] - 0s 357us/step - loss: 0.3525 - accuracy: 0.8619
Epoch 17/100
7/7 [==============================] - 0s 571us/step - loss: 0.3442 - accuracy: 0.8524
Epoch 18/100
7/7 [==============================] - 0s 571us/step - loss: 0.3379 - accuracy: 0.8762
Epoch 19/100
7/7 [==============================] - 0s 571us/step - loss: 0.3330 - accuracy: 0.8619
Epoch 20/100
7/7 [==============================] - 0s 1ms/step - loss: 0.3318 - accuracy: 0.8667
Epoch 21/100
7/7 [==============================] - ETA: 0s - loss: 0.2912 - accuracy: 0.87 - 0s 571us/step - loss: 0.3247 - accuracy: 0.8762
Epoch 22/100
7/7 [==============================] - 0s 571us/step - loss: 0.3177 - accuracy: 0.8762
Epoch 23/100
7/7 [==============================] - 0s 572us/step - loss: 0.3255 - accuracy: 0.8905
Epoch 24/100
7/7 [==============================] - 0s 571us/step - loss: 0.3093 - accuracy: 0.8810
Epoch 25/100
7/7 [==============================] - 0s 571us/step - loss: 0.3060 - accuracy: 0.8952
Epoch 26/100
7/7 [==============================] - 0s 571us/step - loss: 0.3295 - accuracy: 0.8952
Epoch 27/100
7/7 [==============================] - 0s 858us/step - loss: 0.2998 - accuracy: 0.8905
Epoch 28/100
7/7 [==============================] - 0s 643us/step - loss: 0.2950 - accuracy: 0.8905
Epoch 29/100
7/7 [==============================] - 0s 429us/step - loss: 0.2951 - accuracy: 0.8905
Epoch 30/100
7/7 [==============================] - 0s 571us/step - loss: 0.3092 - accuracy: 0.8857
Epoch 31/100
7/7 [==============================] - 0s 571us/step - loss: 0.2879 - accuracy: 0.8952
Epoch 32/100
7/7 [==============================] - 0s 571us/step - loss: 0.2880 - accuracy: 0.8952
Epoch 33/100
7/7 [==============================] - 0s 571us/step - loss: 0.2858 - accuracy: 0.8905
Epoch 34/100
7/7 [==============================] - 0s 571us/step - loss: 0.2823 - accuracy: 0.8857
Epoch 35/100
7/7 [==============================] - 0s 571us/step - loss: 0.2862 - accuracy: 0.8905
Epoch 36/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2796 - accuracy: 0.8952
Epoch 37/100
7/7 [==============================] - 0s 1ms/step - loss: 0.3014 - accuracy: 0.8857
Epoch 38/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2767 - accuracy: 0.8952
Epoch 39/100
7/7 [==============================] - 0s 571us/step - loss: 0.2789 - accuracy: 0.8952
Epoch 40/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2718 - accuracy: 0.8952
Epoch 41/100
7/7 [==============================] - 0s 643us/step - loss: 0.2739 - accuracy: 0.9000
Epoch 42/100
7/7 [==============================] - 0s 286us/step - loss: 0.2803 - accuracy: 0.8905
Epoch 43/100
7/7 [==============================] - 0s 571us/step - loss: 0.2666 - accuracy: 0.8952
Epoch 44/100
7/7 [==============================] - 0s 571us/step - loss: 0.2650 - accuracy: 0.8952
Epoch 45/100
7/7 [==============================] - 0s 571us/step - loss: 0.2629 - accuracy: 0.8905
Epoch 46/100
7/7 [==============================] - 0s 572us/step - loss: 0.2651 - accuracy: 0.8952
Epoch 47/100
7/7 [==============================] - 0s 571us/step - loss: 0.2612 - accuracy: 0.9000
Epoch 48/100
7/7 [==============================] - 0s 571us/step - loss: 0.2572 - accuracy: 0.8952
Epoch 49/100
7/7 [==============================] - 0s 571us/step - loss: 0.2623 - accuracy: 0.8905
Epoch 50/100
7/7 [==============================] - 0s 572us/step - loss: 0.2544 - accuracy: 0.9000
Epoch 51/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2552 - accuracy: 0.9000
Epoch 52/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2598 - accuracy: 0.9000
Epoch 53/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2577 - accuracy: 0.9000
Epoch 54/100
7/7 [==============================] - 0s 715us/step - loss: 0.2513 - accuracy: 0.9000
Epoch 55/100
7/7 [==============================] - 0s 286us/step - loss: 0.2479 - accuracy: 0.9000
Epoch 56/100
7/7 [==============================] - 0s 571us/step - loss: 0.2469 - accuracy: 0.9095
Epoch 57/100
7/7 [==============================] - 0s 571us/step - loss: 0.2474 - accuracy: 0.9048
Epoch 58/100
7/7 [==============================] - 0s 571us/step - loss: 0.2434 - accuracy: 0.8952
Epoch 59/100
7/7 [==============================] - 0s 571us/step - loss: 0.2428 - accuracy: 0.9000
Epoch 60/100
7/7 [==============================] - 0s 571us/step - loss: 0.2394 - accuracy: 0.9048
Epoch 61/100
7/7 [==============================] - 0s 571us/step - loss: 0.2470 - accuracy: 0.9000
Epoch 62/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2403 - accuracy: 0.9095
Epoch 63/100
7/7 [==============================] - 0s 571us/step - loss: 0.2412 - accuracy: 0.9048
Epoch 64/100
7/7 [==============================] - 0s 571us/step - loss: 0.2357 - accuracy: 0.9143
Epoch 65/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2370 - accuracy: 0.9190
Epoch 66/100
7/7 [==============================] - 0s 643us/step - loss: 0.2387 - accuracy: 0.9190
Epoch 67/100
7/7 [==============================] - 0s 286us/step - loss: 0.2371 - accuracy: 0.9143
Epoch 68/100
7/7 [==============================] - 0s 572us/step - loss: 0.2353 - accuracy: 0.9143
Epoch 69/100
7/7 [==============================] - 0s 572us/step - loss: 0.2350 - accuracy: 0.9143
Epoch 70/100
7/7 [==============================] - 0s 572us/step - loss: 0.2313 - accuracy: 0.9048
Epoch 71/100
7/7 [==============================] - 0s 572us/step - loss: 0.2368 - accuracy: 0.9143
Epoch 72/100
7/7 [==============================] - 0s 571us/step - loss: 0.2340 - accuracy: 0.9190
Epoch 73/100
7/7 [==============================] - 0s 571us/step - loss: 0.2305 - accuracy: 0.9190
Epoch 74/100
7/7 [==============================] - 0s 572us/step - loss: 0.2304 - accuracy: 0.9143
Epoch 75/100
7/7 [==============================] - 0s 572us/step - loss: 0.2297 - accuracy: 0.9095
Epoch 76/100
7/7 [==============================] - 0s 572us/step - loss: 0.2319 - accuracy: 0.9143
Epoch 77/100
7/7 [==============================] - 0s 572us/step - loss: 0.2290 - accuracy: 0.9190
Epoch 78/100
7/7 [==============================] - 0s 572us/step - loss: 0.2314 - accuracy: 0.9095
Epoch 79/100
7/7 [==============================] - 0s 715us/step - loss: 0.2270 - accuracy: 0.9143
Epoch 80/100
7/7 [==============================] - 0s 357us/step - loss: 0.2218 - accuracy: 0.9190
Epoch 81/100
7/7 [==============================] - 0s 572us/step - loss: 0.2603 - accuracy: 0.9143
Epoch 82/100
7/7 [==============================] - 0s 571us/step - loss: 0.2250 - accuracy: 0.9190
Epoch 83/100
7/7 [==============================] - 0s 572us/step - loss: 0.2307 - accuracy: 0.9190
Epoch 84/100
7/7 [==============================] - 0s 571us/step - loss: 0.2212 - accuracy: 0.9190
Epoch 85/100
7/7 [==============================] - 0s 572us/step - loss: 0.2219 - accuracy: 0.9190
Epoch 86/100
7/7 [==============================] - 0s 571us/step - loss: 0.2208 - accuracy: 0.9238
Epoch 87/100
7/7 [==============================] - 0s 1ms/step - loss: 0.2192 - accuracy: 0.9143
Epoch 88/100
7/7 [==============================] - 0s 571us/step - loss: 0.2182 - accuracy: 0.9143
Epoch 89/100
7/7 [==============================] - 0s 571us/step - loss: 0.2187 - accuracy: 0.9095
Epoch 90/100
7/7 [==============================] - 0s 858us/step - loss: 0.2181 - accuracy: 0.9143
Epoch 91/100
7/7 [==============================] - 0s 786us/step - loss: 0.2291 - accuracy: 0.9095
Epoch 92/100
7/7 [==============================] - ETA: 0s - loss: 0.1431 - accuracy: 0.96 - 0s 143us/step - loss: 0.2175 - accuracy: 0.9190
Epoch 93/100
7/7 [==============================] - 0s 571us/step - loss: 0.2148 - accuracy: 0.9143
Epoch 94/100
7/7 [==============================] - 0s 571us/step - loss: 0.2145 - accuracy: 0.9190
Epoch 95/100
7/7 [==============================] - 0s 572us/step - loss: 0.2128 - accuracy: 0.9143
Epoch 96/100
7/7 [==============================] - 0s 571us/step - loss: 0.2173 - accuracy: 0.9143
Epoch 97/100
7/7 [==============================] - 0s 571us/step - loss: 0.2109 - accuracy: 0.9143
Epoch 98/100
7/7 [==============================] - 0s 571us/step - loss: 0.2117 - accuracy: 0.9190
Epoch 99/100
7/7 [==============================] - 0s 571us/step - loss: 0.2101 - accuracy: 0.9143
Epoch 100/100
7/7 [==============================] - 0s 571us/step - loss: 0.2121 - accuracy: 0.9143

Check Accuracy of Model based on Test Data

Model Accuracy is 91%
In [21]:

model.evaluate(x_test, y_test)[1]

5/5 [==============================] - 0s 0s/step - loss: 0.3353 - accuracy: 0.9149
Out[21]:
0.914893627166748

Predict Outcomes
In [22]:

predictions = model.predict(x_test)
prediction2 = np.where(predictions>= 0.87, 1,0)

In [23]:

print(len(prediction2))
print(len(x_test))

141
141
In [ ]:

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants