carlos-pino committed
Commit
d4c9761
1 Parent(s): e83c66f

Upload 28 files

Files changed (28)
  1. .gitignore +7 -0
  2. README.md +55 -13
  3. logs/cnn/train/events.out.tfevents.1670589111.DESKTOP-DHTLKO8.21804.243.v2 +3 -0
  4. logs/cnn/train/events.out.tfevents.1670589111.DESKTOP-DHTLKO8.profile-empty +3 -0
  5. logs/cnn/train/events.out.tfevents.1670589908.DESKTOP-DHTLKO8.4388.243.v2 +3 -0
  6. logs/cnn/train/events.out.tfevents.1670590244.DESKTOP-DHTLKO8.29980.243.v2 +3 -0
  7. logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.input_pipeline.pb +3 -0
  8. logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.kernel_stats.pb +3 -0
  9. logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.overview_page.pb +3 -0
  10. logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.tensorflow_stats.pb +3 -0
  11. logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.trace.json.gz +3 -0
  12. logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.input_pipeline.pb +3 -0
  13. logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.kernel_stats.pb +3 -0
  14. logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.overview_page.pb +3 -0
  15. logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.tensorflow_stats.pb +3 -0
  16. logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.trace.json.gz +3 -0
  17. logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.input_pipeline.pb +3 -0
  18. logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.kernel_stats.pb +3 -0
  19. logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.overview_page.pb +3 -0
  20. logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.tensorflow_stats.pb +3 -0
  21. logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.trace.json.gz +3 -0
  22. logs/cnn/validation/events.out.tfevents.1670589116.DESKTOP-DHTLKO8.21804.1555.v2 +3 -0
  23. logs/cnn/validation/events.out.tfevents.1670589916.DESKTOP-DHTLKO8.4388.1603.v2 +3 -0
  24. logs/cnn/validation/events.out.tfevents.1670590254.DESKTOP-DHTLKO8.29980.1655.v2 +3 -0
  25. main.py +142 -0
  26. predict.py +51 -0
  27. saves/dogs-cats.h5 +3 -0
  28. saves/wights-dogs-cats.h5 +3 -0
.gitignore ADDED
@@ -0,0 +1,7 @@
+ Dataset/training/Cat
+ Dataset/training/Dog
+ Dataset/validation/Cat
+ Dataset/validation/Dog
+
+ venv
+ rename.sh
README.md CHANGED
@@ -1,13 +1,55 @@
- ---
- title: Dogs Cats
- emoji: 🌍
- colorFrom: green
- colorTo: green
- sdk: gradio
- sdk_version: 3.12.0
- app_file: app.py
- pinned: false
- license: openrail
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Dog and Cat Classification Model
+
+ Our project is a model that classifies dogs and cats. It was trained and validated on images and built with convolutional neural networks.
+
+ ## Decisions
+
+ `1` Use convolutional neural networks, since they classify images far more accurately than a plain dense network; dense models are better suited to regression and would miss most cases here. (A condensed sketch of the resulting network follows this file.)
+
+ `2` Classify dogs and cats.
+
+ `3` Add dropout between the layers so the model does not overfit.
+
+ `4` Add pooling layers to aggregate the features produced by each convolutional layer.
+
+ `5` Use the ReLU activation function, one of the most popular choices because of its good results in classification tasks.
+
+ `6` Use the sigmoid activation function in the last layer for a better final result.
+
+ `7` Use the Adam optimizer and the binary_crossentropy loss function, which are also among the most recommended and widely used options for this kind of problem.
+
+ `8` Apply transformations to some of the training images so the model does not overfit and then fail on images that are zoomed in, rotated, and so on.
+
+ `9` Use an image size of 100x100 to reduce training time; with a 400x400 size, training initially took about an hour.
+
+ `10` Use 1800 training images and 600 validation images.
+
+ ## Data Sources
+
+ We used the cat and dog training and validation images referenced by TensorFlow at this link: [Cat vs Dog](https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip).
+
+ ## Features
+
+ The input can be images or the animal itself, in this case through the computer's camera, and the model predicts and/or classifies whether it is a cat or a dog.
+
+ ## Data Collection
+
+ We chose this dataset because it has a wide variety of images: not just a single cat or dog in the same pose, but animals accompanied by different elements; the images come in different sizes, resolutions, and orientations, some contain multiple cats/dogs, some include text, and some are posters.
+
+ ## Value Proposition
+
+ The model can be used in applications that recognize and distinguish these animals, in selection programs, in cat or dog competition apps to allow registration, in educational games, in photo apps for categorization, and so on.
+
+ # Environment requirements to run the model
+
+ Python 3.8.0
+
+ TensorFlow 2.2.2
+
+ NumPy 1.19.4
+
+ SciPy 1.9.3
+
+ matplotlib.pyplot
+
+ cv2 (OpenCV)
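
The decisions above correspond to the network defined in main.py later in this commit. As a condensed sketch of that architecture (layer sizes copied from main.py; treat it as illustrative rather than a separate implementation):

```python
import tensorflow as tf

IMAGE_SIZE = 100  # decision 9: 100x100 grayscale input

# Convolution + pooling blocks (decisions 1, 4, 5), dropout (decision 3),
# and a sigmoid output (decision 6), mirroring the Sequential model in main.py.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(250, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# Decision 7: Adam optimizer with binary cross-entropy for the two-class problem.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```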
logs/cnn/train/events.out.tfevents.1670589111.DESKTOP-DHTLKO8.21804.243.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04a90b08af638d4fab1df8e8b62a43f87f29ca2f08e1909bfee8ba824a8f82d3
+ size 29172
logs/cnn/train/events.out.tfevents.1670589111.DESKTOP-DHTLKO8.profile-empty ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1ad8c819b06bc5c55f396bfa486d227679ac3f1b9db4b283da509f52174df93
+ size 40
logs/cnn/train/events.out.tfevents.1670589908.DESKTOP-DHTLKO8.4388.243.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad4206d06b35bfb6ea3e5ae38442e7183983f76c453991dead519223e8210826
+ size 19856
logs/cnn/train/events.out.tfevents.1670590244.DESKTOP-DHTLKO8.29980.243.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9dddaa362bcd60fb8f8d95be8ba1fbf07a25194ad158f8a1a086cfd9406a09c
+ size 66300
logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.input_pipeline.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95bce5f2cf2e3a268edd53ea5a995a880068a9b79a26c18d1eeab352b1847719
+ size 2077
logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.kernel_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
+ size 0
logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.overview_page.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cd4e4d6e16eb6a8cc41f8bc1e0cc503d711a401e197bf1e9371224341f0c2fc
+ size 3011
logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.tensorflow_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:464d368bf7332b5706b2c8d5b2bebc00b2c4e2355042c25e912e52d03fc8da4e
+ size 36508
logs/cnn/train/plugins/profile/2022_12_09_12_31_51/DESKTOP-DHTLKO8.trace.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff487b0f4b4b9c6a9b4b544b61e40a291dfd2f374b0c695d93c93dd7dbcb1896
+ size 4792
logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.input_pipeline.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df49a55d2e9da208c7136495bc7fee2095a6ad88d8393d90fdf5560c6e8c5870
+ size 2077
logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.kernel_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
+ size 0
logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.overview_page.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf2f60377ddb1dc1f65b5470a804953f94f553237ae8c4e7edbb027b493058c7
+ size 3011
logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.tensorflow_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02b7c35f63888a511586ff2eacafa77728b983d9e7ea5a78034ceb0a5e0ceb4f
+ size 36508
logs/cnn/train/plugins/profile/2022_12_09_12_45_09/DESKTOP-DHTLKO8.trace.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6211d0462ed5c8e1f0686da20e86763779b66f9dd298e2aa38d8e8424036ab9e
+ size 4854
logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.input_pipeline.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61bb706e800ffb05421c1c4045365ec2b5b694cc9560b904de0c6cf18091da73
+ size 2077
logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.kernel_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
+ size 0
logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.overview_page.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7aaa59e1543d35cc6a8f50d3f580a879a87180babc1b6e2b69b978bb2ab6f38d
+ size 3011
logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.tensorflow_stats.pb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f166e753702a5d9da8a493bcf02ff395c5a7ae1a9bf5589dbfaa0cef11481527
+ size 36508
logs/cnn/train/plugins/profile/2022_12_09_12_50_45/DESKTOP-DHTLKO8.trace.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9952c168c5fb9d1b751c9bae0fc57d43a459de0fa3dfebaa5de609b226b3b12c
+ size 4825
logs/cnn/validation/events.out.tfevents.1670589116.DESKTOP-DHTLKO8.21804.1555.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6d2b1df09e3974d460735f0500e967a69c4818f9b0bb0213ae30f671aa51c59
+ size 13652
logs/cnn/validation/events.out.tfevents.1670589916.DESKTOP-DHTLKO8.4388.1603.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80abcf0e567063e5fc9db3ebba2ebb8eac425db9d9002e1537a73b05d0d4893b
+ size 4336
logs/cnn/validation/events.out.tfevents.1670590254.DESKTOP-DHTLKO8.29980.1655.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:373f9e8828bbd0704fcdd803158b3adcf00a13618c4d02dcf6d27a8a4875f96c
+ size 50780
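
The event files above are TensorFlow summary logs written by the TensorBoard callback configured in main.py (log_dir='./logs/cnn'). A minimal sketch, assuming the tensorboard package that ships with TensorFlow is installed, for browsing them locally:

```python
from tensorboard import program

# Point a local TensorBoard server at the committed log directory.
tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', 'logs/cnn'])
url = tb.launch()
print('TensorBoard is listening on', url)

input('Press Enter to stop...')  # keep the process alive while browsing
```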
main.py ADDED
@@ -0,0 +1,142 @@
+ # Press Shift+F10 to execute it or replace it with your code.
+ # Press Double Shift to search everywhere for classes, files, tool windows, actions, and settings.
+
+ import os
+ import cv2
+ import random
+ import numpy as np
+ import tensorflow as tf
+ import matplotlib.pyplot as plt
+ from tensorflow.keras.callbacks import TensorBoard
+ from tensorflow.keras.preprocessing.image import ImageDataGenerator
+
+ training_images = './Dataset/training'
+ validation_images = './Dataset/validation'
+
+ training_images_list = os.listdir(training_images)
+ validation_images_list = os.listdir(validation_images)
+
+ IMAGE_SIZE = 100
+
+ # Load images and labels from the dataset directories
+ def get_dataset_image(is_training_data):
+     images = []
+     tags = []
+     data = []
+     count = 0
+     image_list = validation_images_list
+     image_path = validation_images
+
+     if is_training_data:
+         image_list = training_images_list
+         image_path = training_images
+
+     for dir_name in image_list:
+         name = image_path + '/' + dir_name
+
+         for file_name in os.listdir(name):
+             img = cv2.imread(name + '/' + file_name, 0)
+
+             if img is None:
+                 print('Wrong path:', name + '/' + file_name)
+             else:
+                 img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_CUBIC)
+                 img = img.reshape(IMAGE_SIZE, IMAGE_SIZE, 1)
+                 data.append([img, count])
+                 images.append(img)
+                 tags.append(count)  # label only images that loaded correctly
+
+         count = count + 1
+
+     return images, tags, data, count
+
+
+ # Normalize grayscale images to the [0, 1] range
+ def normalize_images(images):
+     new_images = np.array(images).astype(float) / 255
+     return new_images
+
+
+ # Data augmentation to avoid overfitting
+ def avoid_over_fitting(images, tags):
+     rotation_range = random.randint(0, 90)
+     width_shift_range = random.uniform(0, 1)
+     height_shift_range = random.uniform(0, 1)
+     shear_range = random.randint(0, 25)
+
+     img_train_gen = ImageDataGenerator(
+         rotation_range=rotation_range,
+         width_shift_range=width_shift_range,
+         height_shift_range=height_shift_range,
+         shear_range=shear_range,
+         zoom_range=[0.5, 1.5],
+         vertical_flip=True,
+         horizontal_flip=True
+     )
+
+     img_train_gen.fit(images)
+     img_train = img_train_gen.flow(images, tags, batch_size=38)
+
+     return img_train
+
+
+ # Training lists
+ train_images, train_tags, train_data, train_count = get_dataset_image(True)
+
+ # Validation lists
+ val_images, val_tags, val_data, val_count = get_dataset_image(False)
+
+ print('Reading images finished!')
+
+ # Normalize
+ train_images = normalize_images(train_images)
+ val_images = normalize_images(val_images)
+
+ train_tags = np.array(train_tags)
+ val_tags = np.array(val_tags)
+
+ img_to_train = avoid_over_fitting(train_images, train_tags)
+
+ # Define the CNN layers and configuration
+ CNN_model = tf.keras.models.Sequential([
+     tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)),
+     tf.keras.layers.MaxPooling2D(2, 2),
+     tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
+     tf.keras.layers.MaxPooling2D(2, 2),
+     tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
+     tf.keras.layers.MaxPooling2D(2, 2),
+     tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
+     tf.keras.layers.MaxPooling2D(2, 2),
+
+     # Classification dense layers
+     tf.keras.layers.Dropout(0.2),
+     tf.keras.layers.Flatten(),
+     tf.keras.layers.Dense(250, activation='relu'),
+     tf.keras.layers.Dense(1, activation='sigmoid')
+ ])
+
+ CNN_model.compile(
+     optimizer='adam',
+     loss='binary_crossentropy',
+     metrics=['accuracy']
+ )
+
+
+ # Train the model and log to TensorBoard
+ BoardCNN = TensorBoard(log_dir='./logs/cnn')
+ CNN_model.fit(
+     img_to_train,
+     batch_size=38,
+     validation_data=(val_images, val_tags),
+     epochs=500,
+     callbacks=[BoardCNN],
+     steps_per_epoch=int(np.ceil(len(train_images)/float(38))),
+     validation_steps=int(np.ceil(len(val_images)/float(38)))
+ )
+
+
+ # Save the model and its weights
+ CNN_model.save('./saves/dogs-cats.h5')
+ CNN_model.save_weights('./saves/wights-dogs-cats.h5')
+
+ print("Finish!")
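
main.py saves both the full model (saves/dogs-cats.h5) and its weights. A minimal sketch, assuming those files from this commit are present, for reloading the trained network and sanity-checking its input and output shapes before using it elsewhere:

```python
import numpy as np
import tensorflow as tf

IMAGE_SIZE = 100

# Reload the full model (architecture + weights) saved by main.py.
model = tf.keras.models.load_model('./saves/dogs-cats.h5')
model.summary()

# Feed one dummy grayscale image of the expected shape through the network.
dummy = np.zeros((1, IMAGE_SIZE, IMAGE_SIZE, 1), dtype=np.float32)
prob = model.predict(dummy)[0][0]

# The single sigmoid output is a probability; predict.py below reads
# values <= 0.5 as "cat" and values > 0.5 as "dog".
print('predicted probability:', prob)
```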
predict.py ADDED
@@ -0,0 +1,51 @@
+ import tensorflow as tf
+ import cv2
+ import numpy as np
+ from keras_preprocessing.image import img_to_array
+
+ # Path to the saved model
+ model = './saves/dogs-cats.h5'
+
+ IMAGE_SIZE = 100
+
+ CNN_MODEL = tf.keras.models.load_model(model)
+ weight_model = CNN_MODEL.get_weights()
+ CNN_MODEL.set_weights(weight_model)
+
+ # Webcam capture
+ cap = cv2.VideoCapture(0)
+
+ while True:
+     ret, frame = cap.read()
+
+     # Convert to grayscale and resize to the model's input size
+     img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+     img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_CUBIC)
+
+     # Normalize to [0, 1]
+     img = np.array(img).astype(float) / 255
+
+     # Add channel and batch dimensions: (1, IMAGE_SIZE, IMAGE_SIZE, 1)
+     image = img_to_array(img)
+     image = np.expand_dims(image, axis=0)
+
+     # Predict
+     prediction = CNN_MODEL.predict(image)
+     prediction = prediction[0][0]
+
+     print(prediction)
+
+     # Classification: "Gato" = cat, "Perro" = dog
+     if prediction <= 0.5:
+         cv2.putText(frame, "Gato", (200, 70), cv2.FONT_HERSHEY_SIMPLEX, 3, (236, 19, 180), 6)
+     else:
+         cv2.putText(frame, "Perro", (200, 70), cv2.FONT_HERSHEY_SIMPLEX, 3, (20, 106, 231), 6)
+
+     cv2.imshow("CNN", frame)
+
+     t = cv2.waitKey(1)
+     if t == 25:  # exit when key code 25 is received
+         break
+
+ cv2.destroyAllWindows()
+ cap.release()
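
predict.py classifies live webcam frames. As a hedged variant for machines without a camera, the same preprocessing and threshold can be applied to a single image file; the file name below is purely illustrative:

```python
import cv2
import numpy as np
import tensorflow as tf

IMAGE_SIZE = 100

# Reload the model saved by main.py.
model = tf.keras.models.load_model('./saves/dogs-cats.h5')

image_path = 'example.jpg'  # hypothetical path -- replace with a real image

# Same preprocessing as predict.py: grayscale, resize, scale to [0, 1].
img = cv2.imread(image_path, 0)
img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_CUBIC)
img = img.astype(float) / 255
img = img.reshape(1, IMAGE_SIZE, IMAGE_SIZE, 1)

# Same threshold as predict.py: <= 0.5 means cat, otherwise dog.
prediction = model.predict(img)[0][0]
print('Cat' if prediction <= 0.5 else 'Dog', prediction)
```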
saves/dogs-cats.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:824d2473e0ff51bc5009ea63dac3cd2d4b420124576cda3641c81b6824aeae63
+ size 7769168
saves/wights-dogs-cats.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2e7411658c25790f3bba05ee2e8e06619001d66fc01fe7f719291c73e352eea
+ size 2597952