jgwill committed on
Commit
1b677c1
1 Parent(s): 30a2812

add:ast-app

Files changed (13)
  1. Dockerfile +3 -0
  2. LICENSE +674 -0
  3. __init__.py +0 -0
  4. ckp2SavedModel.py +14 -0
  5. img_augm.py +187 -0
  6. main.py +173 -0
  7. model.py +555 -0
  8. module.py +246 -0
  9. ops.py +84 -0
  10. prepare_dataset.py +159 -0
  11. server.py +48 -0
  12. test-gpu.py +12 -0
  13. utils.py +75 -0
Dockerfile ADDED
@@ -0,0 +1,3 @@
+ FROM jgwill/gix-adaptive-style-transfer:gpu
+
+
LICENSE ADDED
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+ software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+ to take away your freedom to share and change the works. By contrast,
+ the GNU General Public License is intended to guarantee your freedom to
+ share and change all versions of a program--to make sure it remains free
+ software for all its users. We, the Free Software Foundation, use the
+ GNU General Public License for most of our software; it applies also to
+ any other work released this way by its authors. You can apply it to
+ your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+ price. Our General Public Licenses are designed to make sure that you
+ have the freedom to distribute copies of free software (and charge for
+ them if you wish), that you receive source code or can get it if you
+ want it, that you can change the software or use pieces of it in new
+ free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+ these rights or asking you to surrender the rights. Therefore, you have
+ certain responsibilities if you distribute copies of the software, or if
+ you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+ gratis or for a fee, you must pass on to the recipients the same
+ freedoms that you received. You must make sure that they, too, receive
+ or can get the source code. And you must show them these terms so they
+ know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+ (1) assert copyright on the software, and (2) offer you this License
+ giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+ that there is no warranty for this free software. For both users' and
+ authors' sake, the GPL requires that modified versions be marked as
+ changed, so that their problems will not be attributed erroneously to
+ authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+ modified versions of the software inside them, although the manufacturer
+ can do so. This is fundamentally incompatible with the aim of
+ protecting users' freedom to change the software. The systematic
+ pattern of such abuse occurs in the area of products for individuals to
+ use, which is precisely where it is most unacceptable. Therefore, we
+ have designed this version of the GPL to prohibit the practice for those
+ products. If such problems arise substantially in other domains, we
+ stand ready to extend this provision to those domains in future versions
+ of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+ States should not allow patents to restrict development and use of
+ software on general-purpose computers, but in those that do, we wish to
+ avoid the special danger that patents applied to a free program could
+ make it effectively proprietary. To prevent this, the GPL assures that
+ patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+ modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+ works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+ License. Each licensee is addressed as "you". "Licensees" and
+ "recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+ in a fashion requiring copyright permission, other than the making of an
+ exact copy. The resulting work is called a "modified version" of the
+ earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+ on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+ permission, would make you directly or secondarily liable for
+ infringement under applicable copyright law, except executing it on a
+ computer or modifying a private copy. Propagation includes copying,
+ distribution (with or without modification), making available to the
+ public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+ parties to make or receive copies. Mere interaction with a user through
+ a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+ to the extent that it includes a convenient and prominently visible
+ feature that (1) displays an appropriate copyright notice, and (2)
+ tells the user that there is no warranty for the work (except to the
+ extent that warranties are provided), that licensees may convey the
+ work under this License, and how to view a copy of this License. If
+ the interface presents a list of user commands or options, such as a
+ menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+ for making modifications to it. "Object code" means any non-source
+ form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+ standard defined by a recognized standards body, or, in the case of
+ interfaces specified for a particular programming language, one that
+ is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+ than the work as a whole, that (a) is included in the normal form of
+ packaging a Major Component, but which is not part of that Major
+ Component, and (b) serves only to enable use of the work with that
+ Major Component, or to implement a Standard Interface for which an
+ implementation is available to the public in source code form. A
+ "Major Component", in this context, means a major essential component
+ (kernel, window system, and so on) of the specific operating system
+ (if any) on which the executable work runs, or a compiler used to
+ produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+ the source code needed to generate, install, and (for an executable
+ work) run the object code and to modify the work, including scripts to
+ control those activities. However, it does not include the work's
+ System Libraries, or general-purpose tools or generally available free
+ programs which are used unmodified in performing those activities but
+ which are not part of the work. For example, Corresponding Source
+ includes interface definition files associated with source files for
+ the work, and the source code for shared libraries and dynamically
+ linked subprograms that the work is specifically designed to require,
+ such as by intimate data communication or control flow between those
+ subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+ can regenerate automatically from other parts of the Corresponding
+ Source.
+
+ The Corresponding Source for a work in source code form is that
+ same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+ copyright on the Program, and are irrevocable provided the stated
+ conditions are met. This License explicitly affirms your unlimited
+ permission to run the unmodified Program. The output from running a
+ covered work is covered by this License only if the output, given its
+ content, constitutes a covered work. This License acknowledges your
+ rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+ convey, without conditions so long as your license otherwise remains
+ in force. You may convey covered works to others for the sole purpose
+ of having them make modifications exclusively for you, or provide you
+ with facilities for running those works, provided that you comply with
+ the terms of this License in conveying all material for which you do
+ not control copyright. Those thus making or running the covered works
+ for you must do so exclusively on your behalf, under your direction
+ and control, on terms that prohibit them from making any copies of
+ your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+ the conditions stated below. Sublicensing is not allowed; section 10
+ makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+ measure under any applicable law fulfilling obligations under article
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
+ similar laws prohibiting or restricting circumvention of such
+ measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+ circumvention of technological measures to the extent such circumvention
+ is effected by exercising rights under this License with respect to
+ the covered work, and you disclaim any intention to limit operation or
+ modification of the work as a means of enforcing, against the work's
+ users, your or third parties' legal rights to forbid circumvention of
+ technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+ receive it, in any medium, provided that you conspicuously and
+ appropriately publish on each copy an appropriate copyright notice;
+ keep intact all notices stating that this License and any
+ non-permissive terms added in accord with section 7 apply to the code;
+ keep intact all notices of the absence of any warranty; and give all
+ recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+ and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+ produce it from the Program, in the form of source code under the
+ terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+ works, which are not by their nature extensions of the covered work,
+ and which are not combined with it such as to form a larger program,
+ in or on a volume of a storage or distribution medium, is called an
+ "aggregate" if the compilation and its resulting copyright are not
+ used to limit the access or legal rights of the compilation's users
+ beyond what the individual works permit. Inclusion of a covered work
+ in an aggregate does not cause this License to apply to the other
+ parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+ of sections 4 and 5, provided that you also convey the
+ machine-readable Corresponding Source under the terms of this License,
+ in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+ from the Corresponding Source as a System Library, need not be
+ included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+ tangible personal property which is normally used for personal, family,
+ or household purposes, or (2) anything designed or sold for incorporation
+ into a dwelling. In determining whether a product is a consumer product,
+ doubtful cases shall be resolved in favor of coverage. For a particular
+ product received by a particular user, "normally used" refers to a
+ typical or common use of that class of product, regardless of the status
+ of the particular user or of the way in which the particular user
+ actually uses, or expects or is expected to use, the product. A product
+ is a consumer product regardless of whether the product has substantial
+ commercial, industrial or non-consumer uses, unless such uses represent
+ the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+ procedures, authorization keys, or other information required to install
+ and execute modified versions of a covered work in that User Product from
+ a modified version of its Corresponding Source. The information must
+ suffice to ensure that the continued functioning of the modified object
+ code is in no case prevented or interfered with solely because
+ modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+ specifically for use in, a User Product, and the conveying occurs as
+ part of a transaction in which the right of possession and use of the
+ User Product is transferred to the recipient in perpetuity or for a
+ fixed term (regardless of how the transaction is characterized), the
+ Corresponding Source conveyed under this section must be accompanied
+ by the Installation Information. But this requirement does not apply
+ if neither you nor any third party retains the ability to install
+ modified object code on the User Product (for example, the work has
+ been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+ requirement to continue to provide support service, warranty, or updates
+ for a work that has been modified or installed by the recipient, or for
+ the User Product in which it has been modified or installed. Access to a
+ network may be denied when the modification itself materially and
+ adversely affects the operation of the network or violates the rules and
+ protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+ in accord with this section must be in a format that is publicly
+ documented (and with an implementation available to the public in
+ source code form), and must require no special password or key for
+ unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+ License by making exceptions from one or more of its conditions.
+ Additional permissions that are applicable to the entire Program shall
+ be treated as though they were included in this License, to the extent
+ that they are valid under applicable law. If additional permissions
+ apply only to part of the Program, that part may be used separately
+ under those permissions, but the entire Program remains governed by
+ this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+ remove any additional permissions from that copy, or from any part of
+ it. (Additional permissions may be written to require their own
+ removal in certain cases when you modify the work.) You may place
+ additional permissions on material, added by you to a covered work,
+ for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+ add to a covered work, you may (if authorized by the copyright holders of
+ that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+ restrictions" within the meaning of section 10. If the Program as you
+ received it, or any part of it, contains a notice stating that it is
+ governed by this License along with a term that is a further
+ restriction, you may remove that term. If a license document contains
+ a further restriction but permits relicensing or conveying under this
+ License, you may add to a covered work material governed by the terms
+ of that license document, provided that the further restriction does
+ not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+ must place, in the relevant source files, a statement of the
+ additional terms that apply to those files, or a notice indicating
+ where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+ form of a separately written license, or stated as exceptions;
+ the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+ provided under this License. Any attempt otherwise to propagate or
+ modify it is void, and will automatically terminate your rights under
+ this License (including any patent licenses granted under the third
+ paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+ license from a particular copyright holder is reinstated (a)
+ provisionally, unless and until the copyright holder explicitly and
+ finally terminates your license, and (b) permanently, if the copyright
+ holder fails to notify you of the violation by some reasonable means
+ prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+ reinstated permanently if the copyright holder notifies you of the
+ violation by some reasonable means, this is the first time you have
+ received notice of violation of this License (for any work) from that
+ copyright holder, and you cure the violation prior to 30 days after
+ your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+ licenses of parties who have received copies or rights from you under
+ this License. If your rights have been terminated and not permanently
+ reinstated, you do not qualify to receive new licenses for the same
+ material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+ run a copy of the Program. Ancillary propagation of a covered work
+ occurring solely as a consequence of using peer-to-peer transmission
+ to receive a copy likewise does not require acceptance. However,
+ nothing other than this License grants you permission to propagate or
+ modify any covered work. These actions infringe copyright if you do
+ not accept this License. Therefore, by modifying or propagating a
+ covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+ receives a license from the original licensors, to run, modify and
+ propagate that work, subject to this License. You are not responsible
+ for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+ organization, or substantially all assets of one, or subdividing an
+ organization, or merging organizations. If propagation of a covered
+ work results from an entity transaction, each party to that
+ transaction who receives a copy of the work also receives whatever
+ licenses to the work the party's predecessor in interest had or could
+ give under the previous paragraph, plus a right to possession of the
+ Corresponding Source of the work from the predecessor in interest, if
+ the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+ rights granted or affirmed under this License. For example, you may
+ not impose a license fee, royalty, or other charge for exercise of
+ rights granted under this License, and you may not initiate litigation
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
+ any patent claim is infringed by making, using, selling, offering for
+ sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+ License of the Program or a work on which the Program is based. The
+ work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+ owned or controlled by the contributor, whether already acquired or
+ hereafter acquired, that would be infringed by some manner, permitted
+ by this License, of making, using, or selling its contributor version,
+ but do not include claims that would be infringed only as a
+ consequence of further modification of the contributor version. For
+ purposes of this definition, "control" includes the right to grant
+ patent sublicenses in a manner consistent with the requirements of
+ this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+ patent license under the contributor's essential patent claims, to
+ make, use, sell, offer for sale, import and otherwise run, modify and
+ propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+ agreement or commitment, however denominated, not to enforce a patent
+ (such as an express permission to practice a patent or covenant not to
+ sue for patent infringement). To "grant" such a patent license to a
+ party means to make such an agreement or commitment not to enforce a
+ patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+ and the Corresponding Source of the work is not available for anyone
+ to copy, free of charge and under the terms of this License, through a
+ publicly available network server or other readily accessible means,
+ then you must either (1) cause the Corresponding Source to be so
+ available, or (2) arrange to deprive yourself of the benefit of the
+ patent license for this particular work, or (3) arrange, in a manner
+ consistent with the requirements of this License, to extend the patent
+ license to downstream recipients. "Knowingly relying" means you have
+ actual knowledge that, but for the patent license, your conveying the
+ covered work in a country, or your recipient's use of the covered work
+ in a country, would infringe one or more identifiable patents in that
+ country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+ arrangement, you convey, or propagate by procuring conveyance of, a
+ covered work, and grant a patent license to some of the parties
+ receiving the covered work authorizing them to use, propagate, modify
+ or convey a specific copy of the covered work, then the patent license
+ you grant is automatically extended to all recipients of the covered
+ work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+ the scope of its coverage, prohibits the exercise of, or is
+ conditioned on the non-exercise of one or more of the rights that are
+ specifically granted under this License. You may not convey a covered
+ work if you are a party to an arrangement with a third party that is
+ in the business of distributing software, under which you make payment
+ to the third party based on the extent of your activity of conveying
+ the work, and under which the third party grants, to any of the
+ parties who would receive the covered work from you, a discriminatory
+ patent license (a) in connection with copies of the covered work
+ conveyed by you (or copies made from those copies), or (b) primarily
+ for and in connection with specific products or compilations that
+ contain the covered work, unless you entered into that arrangement,
+ or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+ any implied license or other defenses to infringement that may
+ otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot convey a
+ covered work so as to satisfy simultaneously your obligations under this
+ License and any other pertinent obligations, then as a consequence you may
+ not convey it at all. For example, if you agree to terms that obligate you
+ to collect a royalty for further conveying from those to whom you convey
+ the Program, the only way you could satisfy both those terms and this
+ License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+ permission to link or combine any covered work with a work licensed
+ under version 3 of the GNU Affero General Public License into a single
+ combined work, and to convey the resulting work. The terms of this
+ License will continue to apply to the part which is the covered work,
+ but the special requirements of the GNU Affero General Public License,
+ section 13, concerning interaction through a network will apply to the
+ combination as such.
+
+ 14. Revised Versions of this License.
+
565
+ The Free Software Foundation may publish revised and/or new versions of
566
+ the GNU General Public License from time to time. Such new versions will
567
+ be similar in spirit to the present version, but may differ in detail to
568
+ address new problems or concerns.
569
+
570
+ Each version is given a distinguishing version number. If the
571
+ Program specifies that a certain numbered version of the GNU General
572
+ Public License "or any later version" applies to it, you have the
573
+ option of following the terms and conditions either of that numbered
574
+ version or of any later version published by the Free Software
575
+ Foundation. If the Program does not specify a version number of the
576
+ GNU General Public License, you may choose any version ever published
577
+ by the Free Software Foundation.
578
+
579
+ If the Program specifies that a proxy can decide which future
580
+ versions of the GNU General Public License can be used, that proxy's
581
+ public statement of acceptance of a version permanently authorizes you
582
+ to choose that version for the Program.
583
+
584
+ Later license versions may give you additional or different
585
+ permissions. However, no additional obligations are imposed on any
586
+ author or copyright holder as a result of your choosing to follow a
587
+ later version.
588
+
589
+ 15. Disclaimer of Warranty.
590
+
591
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599
+
600
+ 16. Limitation of Liability.
601
+
602
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610
+ SUCH DAMAGES.
611
+
612
+ 17. Interpretation of Sections 15 and 16.
613
+
614
+ If the disclaimer of warranty and limitation of liability provided
615
+ above cannot be given local legal effect according to their terms,
616
+ reviewing courts shall apply local law that most closely approximates
617
+ an absolute waiver of all civil liability in connection with the
618
+ Program, unless a warranty or assumption of liability accompanies a
619
+ copy of the Program in return for a fee.
620
+
621
+ END OF TERMS AND CONDITIONS
622
+
623
+ How to Apply These Terms to Your New Programs
624
+
625
+ If you develop a new program, and you want it to be of the greatest
626
+ possible use to the public, the best way to achieve this is to make it
627
+ free software which everyone can redistribute and change under these terms.
628
+
629
+ To do so, attach the following notices to the program. It is safest
630
+ to attach them to the start of each source file to most effectively
631
+ state the exclusion of warranty; and each file should have at least
632
+ the "copyright" line and a pointer to where the full notice is found.
633
+
634
+ <one line to give the program's name and a brief idea of what it does.>
635
+ Copyright (C) <year> <name of author>
636
+
637
+ This program is free software: you can redistribute it and/or modify
638
+ it under the terms of the GNU General Public License as published by
639
+ the Free Software Foundation, either version 3 of the License, or
640
+ (at your option) any later version.
641
+
642
+ This program is distributed in the hope that it will be useful,
643
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
644
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645
+ GNU General Public License for more details.
646
+
647
+ You should have received a copy of the GNU General Public License
648
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
649
+
650
+ Also add information on how to contact you by electronic and paper mail.
651
+
652
+ If the program does terminal interaction, make it output a short
653
+ notice like this when it starts in an interactive mode:
654
+
655
+ <program> Copyright (C) <year> <name of author>
656
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657
+ This is free software, and you are welcome to redistribute it
658
+ under certain conditions; type `show c' for details.
659
+
660
+ The hypothetical commands `show w' and `show c' should show the appropriate
661
+ parts of the General Public License. Of course, your program's commands
662
+ might be different; for a GUI interface, you would use an "about box".
663
+
664
+ You should also get your employer (if you work as a programmer) or school,
665
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
666
+ For more information on this, and how to apply and follow the GNU GPL, see
667
+ <http://www.gnu.org/licenses/>.
668
+
669
+ The GNU General Public License does not permit incorporating your program
670
+ into proprietary programs. If your program is a subroutine library, you
671
+ may consider it more useful to permit linking proprietary applications with
672
+ the library. If this is what you want to do, use the GNU Lesser General
673
+ Public License instead of this License. But first, please read
674
+ <http://www.gnu.org/philosophy/why-not-lgpl.html>.
__init__.py ADDED
File without changes
ckp2SavedModel.py ADDED
@@ -0,0 +1,14 @@
+ import os
+ import tensorflow as tf
+
+ export_dir = 'export_dir'
+ trained_checkpoint_prefix = '/m/wk/1/model_gia-young-picasso-v03-201216-var2_new_240000.ckpt-240000'
+
+ #/m/wk/1/model_gia-young-picasso-v03-201216-var2_new_240000.ckpt-240000.data-00000-of-00001
+ graph = tf.Graph()
+ loader = tf.train.import_meta_graph(trained_checkpoint_prefix + ".meta")
+ sess = tf.Session()
+ loader.restore(sess, trained_checkpoint_prefix)
+ builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
+ builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING], strip_default_attrs=True)
+ builder.save()
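The script above restores from a checkpoint *prefix*, not a concrete file: a TF1 checkpoint is split into `.meta`, `.index`, and `.data-*` shards that share one prefix, and `Saver.restore` expects that shared prefix. A stdlib-only sketch (the file name is hypothetical, echoing the path in the script) of recovering the prefix from any shard name:

```python
def checkpoint_prefix(filename):
    """Strip the TF1 checkpoint suffix (.meta, .index, or .data-* shard)
    to recover the prefix expected by tf.train.Saver.restore."""
    for suffix in ('.meta', '.index'):
        if filename.endswith(suffix):
            return filename[:-len(suffix)]
    # Data shards look like <prefix>.data-00000-of-00001
    base, sep, _ = filename.partition('.data-')
    return base if sep else filename

shard = 'model_gia-young-picasso-v03-201216-var2_new_240000.ckpt-240000.data-00000-of-00001'
print(checkpoint_prefix(shard))
# model_gia-young-picasso-v03-201216-var2_new_240000.ckpt-240000
```

This is why the commented `.data-00000-of-00001` path in the script must be shortened to the `.ckpt-240000` prefix before being passed to `restore`.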
img_augm.py ADDED
@@ -0,0 +1,187 @@
+ # Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+ #
+ # This file is part of Adaptive Style Transfer
+ #
+ # Adaptive Style Transfer is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ #
+ # Adaptive Style Transfer is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+ import numpy as np
+ import scipy.misc
+ import cv2
+ from PIL import Image
+
+
+ class Augmentor():
+     def __init__(self,
+                  crop_size=(256, 256),
+                  scale_augm_prb=0.5, scale_augm_range=0.2,
+                  rotation_augm_prb=0.5, rotation_augm_range=0.15,
+                  hsv_augm_prb=1.0,
+                  hue_augm_shift=0.05,
+                  saturation_augm_shift=0.05, saturation_augm_scale=0.05,
+                  value_augm_shift=0.05, value_augm_scale=0.05,
+                  affine_trnsfm_prb=0.5, affine_trnsfm_range=0.05,
+                  horizontal_flip_prb=0.5,
+                  vertical_flip_prb=0.5):
+
+         self.crop_size = crop_size
+
+         self.scale_augm_prb = scale_augm_prb
+         self.scale_augm_range = scale_augm_range
+
+         self.rotation_augm_prb = rotation_augm_prb
+         self.rotation_augm_range = rotation_augm_range
+
+         self.hsv_augm_prb = hsv_augm_prb
+         self.hue_augm_shift = hue_augm_shift
+         self.saturation_augm_scale = saturation_augm_scale
+         self.saturation_augm_shift = saturation_augm_shift
+         self.value_augm_scale = value_augm_scale
+         self.value_augm_shift = value_augm_shift
+
+         self.affine_trnsfm_prb = affine_trnsfm_prb
+         self.affine_trnsfm_range = affine_trnsfm_range
+
+         self.horizontal_flip_prb = horizontal_flip_prb
+         self.vertical_flip_prb = vertical_flip_prb
+
+     def __call__(self, image, is_inference=False):
+         if is_inference:
+             # At inference time just resize to the target size
+             # (crop_size is a target size here, not a scale factor).
+             return cv2.resize(image, self.crop_size, interpolation=cv2.INTER_CUBIC)
+
+         # If not inference stage apply the pipeline of augmentations.
+         if self.scale_augm_prb > np.random.uniform():
+             image = self.scale(image=image,
+                                scale_x=1. + np.random.uniform(low=-self.scale_augm_range, high=self.scale_augm_range),
+                                scale_y=1. + np.random.uniform(low=-self.scale_augm_range, high=self.scale_augm_range)
+                                )
+
+         rows, cols, ch = image.shape
+         image = np.pad(array=image, pad_width=[[rows // 4, rows // 4], [cols // 4, cols // 4], [0, 0]], mode='reflect')
+         if self.rotation_augm_prb > np.random.uniform():
+             image = self.rotate(image=image,
+                                 angle=np.random.uniform(low=-self.rotation_augm_range*90.,
+                                                         high=self.rotation_augm_range*90.)
+                                 )
+
+         if self.affine_trnsfm_prb > np.random.uniform():
+             image = self.affine(image=image,
+                                 rng=self.affine_trnsfm_range
+                                 )
+         image = image[(rows // 4):-(rows // 4), (cols // 4):-(cols // 4), :]
+
+         # Crop out patch of desired size.
+         image = self.crop(image=image,
+                           crop_size=self.crop_size
+                           )
+
+         if self.hsv_augm_prb > np.random.uniform():
+             image = self.hsv_transform(image=image,
+                                        hue_shift=self.hue_augm_shift,
+                                        saturation_shift=self.saturation_augm_shift,
+                                        saturation_scale=self.saturation_augm_scale,
+                                        value_shift=self.value_augm_shift,
+                                        value_scale=self.value_augm_scale)
+
+         if self.horizontal_flip_prb > np.random.uniform():
+             image = self.horizontal_flip(image)
+
+         if self.vertical_flip_prb > np.random.uniform():
+             image = self.vertical_flip(image)
+
+         return image
+
+     def scale(self, image, scale_x, scale_y):
+         """
+         Args:
+             image: input image
+             scale_x: float positive value. New horizontal scale
+             scale_y: float positive value. New vertical scale
+         Returns:
+             Rescaled image.
+         """
+         image = cv2.resize(image, None, fx=scale_x, fy=scale_y, interpolation=cv2.INTER_CUBIC)
+         return image
+
+     def rotate(self, image, angle):
+         """
+         Args:
+             image: input image
+             angle: angle of rotation in degrees
+         Returns:
+             Rotated image.
+         """
+         rows, cols, ch = image.shape
+
+         rot_M = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1)
+         image = cv2.warpAffine(image, rot_M, (cols, rows))
+         return image
+
+     def crop(self, image, crop_size=(256, 256)):
+         rows, cols, chs = image.shape
+         x = int(np.random.uniform(low=0, high=max(0, rows - crop_size[0])))
+         y = int(np.random.uniform(low=0, high=max(0, cols - crop_size[1])))
+
+         image = image[x:x+crop_size[0], y:y+crop_size[1], :]
+         # If the input image was too small to comprise patch of size crop_size,
+         # resize obtained patch to desired size.
+         if image.shape[0] < crop_size[0] or image.shape[1] < crop_size[1]:
+             image = scipy.misc.imresize(arr=image, size=crop_size)
+         return image
+
+     def hsv_transform(self, image,
+                       hue_shift=0.2,
+                       saturation_shift=0.2, saturation_scale=0.2,
+                       value_shift=0.2, value_scale=0.2,
+                       ):
+
+         image = Image.fromarray(image)
+         hsv = np.array(image.convert("HSV"), 'float64')
+
+         # scale the values to fit between 0 and 1
+         hsv /= 255.
+
+         # do the scalings & shiftings
+         hsv[..., 0] += np.random.uniform(-hue_shift, hue_shift)
+         hsv[..., 1] *= np.random.uniform(1. / (1. + saturation_scale), 1. + saturation_scale)
+         hsv[..., 1] += np.random.uniform(-saturation_shift, saturation_shift)
+         hsv[..., 2] *= np.random.uniform(1. / (1. + value_scale), 1. + value_scale)
+         hsv[..., 2] += np.random.uniform(-value_shift, value_shift)
+
+         # cut off invalid values
+         hsv.clip(0.01, 0.99, hsv)
+
+         # round to full numbers
+         hsv = np.uint8(np.round(hsv * 254.))
+
+         # convert back to rgb image
+         return np.asarray(Image.fromarray(hsv, "HSV").convert("RGB"))
+
+     def affine(self, image, rng):
+         rows, cols, ch = image.shape
+         pts1 = np.float32([[0., 0.], [0., 1.], [1., 0.]])
+         [x0, y0] = [0. + np.random.uniform(low=-rng, high=rng), 0. + np.random.uniform(low=-rng, high=rng)]
+         [x1, y1] = [0. + np.random.uniform(low=-rng, high=rng), 1. + np.random.uniform(low=-rng, high=rng)]
+         [x2, y2] = [1. + np.random.uniform(low=-rng, high=rng), 0. + np.random.uniform(low=-rng, high=rng)]
+         pts2 = np.float32([[x0, y0], [x1, y1], [x2, y2]])
+         affine_M = cv2.getAffineTransform(pts1, pts2)
+         image = cv2.warpAffine(image, affine_M, (cols, rows))
+
+         return image
+
+     def horizontal_flip(self, image):
+         return image[:, ::-1, :]
+
+     def vertical_flip(self, image):
+         return image[::-1, :, :]
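The `crop` method in `img_augm.py` samples a random top-left corner so that the patch fits whenever the image is large enough, and falls back to resizing otherwise. A numpy-only sketch of the corner-sampling logic (the resize fallback is omitted; `random_crop` is an illustrative name, not from the repo):

```python
import numpy as np

def random_crop(image, crop_size=(256, 256)):
    """Sample a random crop_size patch; mirrors Augmentor.crop's
    corner sampling (the too-small resize fallback is omitted)."""
    rows, cols, _ = image.shape
    # max(0, ...) keeps the sampling range valid for small images.
    x = int(np.random.uniform(low=0, high=max(0, rows - crop_size[0])))
    y = int(np.random.uniform(low=0, high=max(0, cols - crop_size[1])))
    return image[x:x + crop_size[0], y:y + crop_size[1], :]

img = np.zeros((512, 512, 3), dtype=np.uint8)
patch = random_crop(img, (256, 256))
print(patch.shape)  # (256, 256, 3)
```

Because the corner is drawn from `[0, rows - 256)`, the slice always stays inside a sufficiently large image and the patch has exactly the requested shape.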
main.py ADDED
@@ -0,0 +1,173 @@
+ # Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+ #
+ # This file is part of Adaptive Style Transfer
+ #
+ # Adaptive Style Transfer is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ #
+ # Adaptive Style Transfer is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+ import argparse
+ import tensorflow as tf
+ tf.set_random_seed(228)
+ from model import Artgan
+
+
+ def parse_list(str_value):
+     if ',' in str_value:
+         str_value = str_value.split(',')
+     else:
+         str_value = [str_value]
+     return str_value
+
+
+ parser = argparse.ArgumentParser(description='')
+
+ # ========================== GENERAL PARAMETERS ========================= #
+ parser.add_argument('--model_name',
+                     dest='model_name',
+                     default='model1',
+                     help='Name of the model')
+ parser.add_argument('--phase',
+                     dest='phase',
+                     default='train',
+                     help='Specify current phase: train or inference.')
+ parser.add_argument('--image_size',
+                     dest='image_size',
+                     type=int,
+                     default=256*3,
+                     help='For training phase: will crop out images of this particular size. '
+                          'For inference phase: each input image will have the smallest side of this size. '
+                          'For inference the recommended size is 1280.')
+
+
+ # ========================= TRAINING PARAMETERS ========================= #
+ parser.add_argument('--ptad',
+                     dest='path_to_art_dataset',
+                     type=str,
+                     #default='./data/vincent-van-gogh_paintings/',
+                     default='./data/vincent-van-gogh_road-with-cypresses-1890',
+                     help='Directory with paintings representing the style we want to learn.')
+ parser.add_argument('--ptcd',
+                     dest='path_to_content_dataset',
+                     type=str,
+                     default=None,
+                     help='Path to Places365 training dataset.')
+
+
+ parser.add_argument('--total_steps',
+                     dest='total_steps',
+                     type=int,
+                     default=int(3e5),
+                     help='Total number of steps.')
+ parser.add_argument('--batch_size',
+                     dest='batch_size',
+                     type=int,
+                     default=1,
+                     help='Number of images in a batch.')
+ parser.add_argument('--lr',
+                     dest='lr',
+                     type=float,
+                     default=0.0002,
+                     help='Initial learning rate for Adam.')
+ parser.add_argument('--save_freq',
+                     dest='save_freq',
+                     type=int,
+                     default=1000,
+                     help='Save model every save_freq steps.')
+ parser.add_argument('--ngf',
+                     dest='ngf',
+                     type=int,
+                     default=32,
+                     help='Number of filters in first conv layer of generator (encoder-decoder).')
+ parser.add_argument('--ndf',
+                     dest='ndf',
+                     type=int,
+                     default=64,
+                     help='Number of filters in first conv layer of discriminator.')
+
+ # Weights of different losses.
+ parser.add_argument('--dlw',
+                     dest='discr_loss_weight',
+                     type=float,
+                     default=1.,
+                     help='Weight of discriminator loss.')
+ parser.add_argument('--tlw',
+                     dest='transformer_loss_weight',
+                     type=float,
+                     default=100.,
+                     help='Weight of transformer loss.')
+ parser.add_argument('--flw',
+                     dest='feature_loss_weight',
+                     type=float,
+                     default=100.,
+                     help='Weight of feature loss.')
+ parser.add_argument('--dsr',
+                     dest='discr_success_rate',
+                     type=float,
+                     default=0.8,
+                     help='Rate of trials that discriminator will win on average.')
+
+
+ # ========================= INFERENCE PARAMETERS ========================= #
+ parser.add_argument('--ii_dir',
+                     dest='inference_images_dir',
+                     type=parse_list,
+                     default=['./data/sample_photographs/'],
+                     help='Directory with images we want to process.')
+ parser.add_argument('--save_dir',
+                     type=str,
+                     default=None,
+                     help='Directory to save inference output images. '
+                          'If not specified will save in the model directory.')
+ parser.add_argument('--file_suffix',
+                     type=str,
+                     default='_stylized',
+                     help='Suffix appended to each output file name, before the extension.')
+ parser.add_argument('--ckpt_nmbr',
+                     dest='ckpt_nmbr',
+                     type=int,
+                     default=None,
+                     help='Checkpoint number we want to use for inference. '
+                          'Might be None (unspecified), then the latest available will be used.')
+
+ args = parser.parse_args()
+
+
+ def main(_):
+     tfconfig = tf.ConfigProto(allow_soft_placement=False)
+     tfconfig.gpu_options.allow_growth = True
+     with tf.Session(config=tfconfig) as sess:
+         model = Artgan(sess, args)
+
+         if args.phase == 'train':
+             model.train(args, ckpt_nmbr=args.ckpt_nmbr)
+         if args.phase == 'inference' or args.phase == 'test':
+             print("Inference.")
+             model.inference(args, args.inference_images_dir, resize_to_original=False,
+                             to_save_dir=args.save_dir,
+                             ckpt_nmbr=args.ckpt_nmbr,
+                             file_suffix=args.file_suffix)
+
+         if args.phase == 'inference_on_frames' or args.phase == 'test_on_frames':
+             print("Inference on frames sequence.")
+             model.inference_video(args,
+                                   path_to_folder=args.inference_images_dir[0],
+                                   resize_to_original=False,
+                                   to_save_dir=args.save_dir,
+                                   ckpt_nmbr=args.ckpt_nmbr,
+                                   file_suffix=args.file_suffix)
+     sess.close()
+
+
+ if __name__ == '__main__':
+     tf.app.run()
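In `main.py`, `--ii_dir` uses `parse_list` as its argparse `type`, so a single comma-separated value becomes a list of directories (and a bare value becomes a one-element list, which is why `inference_images_dir[0]` works in the frames branch). The helper in isolation:

```python
def parse_list(str_value):
    # Same helper as in main.py: split a comma-separated CLI value
    # into a list, wrapping a bare value in a one-element list.
    if ',' in str_value:
        str_value = str_value.split(',')
    else:
        str_value = [str_value]
    return str_value

print(parse_list('./data/a/,./data/b/'))  # ['./data/a/', './data/b/']
print(parse_list('./data/a/'))            # ['./data/a/']
```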
model.py ADDED
@@ -0,0 +1,555 @@
+ # Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+ #
+ # This file is part of Adaptive Style Transfer
+ #
+ # Adaptive Style Transfer is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ #
+ # Adaptive Style Transfer is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+ from __future__ import division
+ from __future__ import print_function
+
+ import os
+ import time
+ from glob import glob
+ import tensorflow as tf
+ import numpy as np
+ from collections import namedtuple
+ from tqdm import tqdm
+ import multiprocessing
+
+ from module import *
+ from utils import *
+ import prepare_dataset
+ import img_augm
+
+
+ class Artgan(object):
+     def __init__(self, sess, args):
+         self.model_name = args.model_name
+         self.root_dir = './models'
+         self.checkpoint_dir = os.path.join(self.root_dir, self.model_name, 'checkpoint')
+         self.checkpoint_long_dir = os.path.join(self.root_dir, self.model_name, 'checkpoint_long')
+         self.sample_dir = os.path.join(self.root_dir, self.model_name, 'sample')
+         self.inference_dir = os.path.join(self.root_dir, self.model_name, 'inference')
+         self.logs_dir = os.path.join(self.root_dir, self.model_name, 'logs')
+
+         self.sess = sess
+         self.batch_size = args.batch_size
+         self.image_size = args.image_size
+
+         self.loss = sce_criterion
+
+         self.initial_step = 0
+
+         OPTIONS = namedtuple('OPTIONS',
+                              'batch_size image_size \
+                               total_steps save_freq lr \
+                               gf_dim df_dim \
+                               is_training \
+                               path_to_content_dataset \
+                               path_to_art_dataset \
+                               discr_loss_weight transformer_loss_weight feature_loss_weight')
+         self.options = OPTIONS._make((args.batch_size, args.image_size,
+                                       args.total_steps, args.save_freq, args.lr,
+                                       args.ngf, args.ndf,
+                                       args.phase == 'train',
+                                       args.path_to_content_dataset,
+                                       args.path_to_art_dataset,
+                                       args.discr_loss_weight, args.transformer_loss_weight, args.feature_loss_weight
+                                       ))
+
+         # Create all the folders for saving the model
+         if not os.path.exists(self.root_dir):
+             os.makedirs(self.root_dir)
+         if not os.path.exists(os.path.join(self.root_dir, self.model_name)):
+             os.makedirs(os.path.join(self.root_dir, self.model_name))
+         if not os.path.exists(self.checkpoint_dir):
+             os.makedirs(self.checkpoint_dir)
+         if not os.path.exists(self.checkpoint_long_dir):
+             os.makedirs(self.checkpoint_long_dir)
+         if not os.path.exists(self.sample_dir):
+             os.makedirs(self.sample_dir)
+         if not os.path.exists(self.inference_dir):
+             os.makedirs(self.inference_dir)
+
+         self._build_model()
+         #@STCGoal Keep the entire sequence of checkpoints saved every 1000 steps.
+         #@q Would setting max_to_keep below (e.g. to 405) keep the whole sequence?
+         self.saver = tf.train.Saver(max_to_keep=2)
+         self.saver_long = tf.train.Saver(max_to_keep=None)
+
+     def _build_model(self):
+         if self.options.is_training:
+             # ==================== Define placeholders. ===================== #
+             with tf.name_scope('placeholder'):
+                 self.input_painting = tf.placeholder(dtype=tf.float32,
+                                                      shape=[self.batch_size, None, None, 3],
+                                                      name='painting')
+                 self.input_photo = tf.placeholder(dtype=tf.float32,
+                                                   shape=[self.batch_size, None, None, 3],
+                                                   name='photo')
+                 self.lr = tf.placeholder(dtype=tf.float32, shape=(), name='learning_rate')
+
+             # ===================== Wire the graph. ========================= #
+             # Encode input images.
+             self.input_photo_features = encoder(image=self.input_photo,
+                                                 options=self.options,
+                                                 reuse=False)
+
+             # Decode obtained features.
+             self.output_photo = decoder(features=self.input_photo_features,
+                                         options=self.options,
+                                         reuse=False)
+
+             # Get features of output images. Need them to compute feature loss.
+             self.output_photo_features = encoder(image=self.output_photo,
+                                                  options=self.options,
+                                                  reuse=True)
+
+             # Add discriminators.
+             # Note that each of the predictions contains multiple predictions
+             # at different scales.
+             self.input_painting_discr_predictions = discriminator(image=self.input_painting,
+                                                                   options=self.options,
+                                                                   reuse=False)
+             self.input_photo_discr_predictions = discriminator(image=self.input_photo,
+                                                                options=self.options,
+                                                                reuse=True)
+             self.output_photo_discr_predictions = discriminator(image=self.output_photo,
+                                                                 options=self.options,
+                                                                 reuse=True)
+
+             # ===================== Final losses that we optimize. ===================== #
+
+             # Discriminator.
+             # Has to predict ones only for original paintings, otherwise predict zeros.
+             scale_weight = {"scale_0": 1.,
+                             "scale_1": 1.,
+                             "scale_3": 1.,
+                             "scale_5": 1.,
+                             "scale_6": 1.}
+             self.input_painting_discr_loss = {key: self.loss(pred, tf.ones_like(pred)) * scale_weight[key]
+                                               for key, pred in zip(self.input_painting_discr_predictions.keys(),
+                                                                    self.input_painting_discr_predictions.values())}
+             self.input_photo_discr_loss = {key: self.loss(pred, tf.zeros_like(pred)) * scale_weight[key]
+                                            for key, pred in zip(self.input_photo_discr_predictions.keys(),
+                                                                 self.input_photo_discr_predictions.values())}
+             self.output_photo_discr_loss = {key: self.loss(pred, tf.zeros_like(pred)) * scale_weight[key]
+                                             for key, pred in zip(self.output_photo_discr_predictions.keys(),
+                                                                  self.output_photo_discr_predictions.values())}
+
+             self.discr_loss = tf.add_n(list(self.input_painting_discr_loss.values())) + \
+                               tf.add_n(list(self.input_photo_discr_loss.values())) + \
+                               tf.add_n(list(self.output_photo_discr_loss.values()))
+
+             # Compute discriminator accuracies.
+             self.input_painting_discr_acc = {key: tf.reduce_mean(tf.cast(x=(pred > tf.zeros_like(pred)),
+                                                                          dtype=tf.float32)) * scale_weight[key]
+                                              for key, pred in zip(self.input_painting_discr_predictions.keys(),
+                                                                   self.input_painting_discr_predictions.values())}
+             self.input_photo_discr_acc = {key: tf.reduce_mean(tf.cast(x=(pred < tf.zeros_like(pred)),
+                                                                       dtype=tf.float32)) * scale_weight[key]
+                                           for key, pred in zip(self.input_photo_discr_predictions.keys(),
+                                                                self.input_photo_discr_predictions.values())}
+             self.output_photo_discr_acc = {key: tf.reduce_mean(tf.cast(x=(pred < tf.zeros_like(pred)),
+                                                                        dtype=tf.float32)) * scale_weight[key]
+                                            for key, pred in zip(self.output_photo_discr_predictions.keys(),
+                                                                 self.output_photo_discr_predictions.values())}
+             self.discr_acc = (tf.add_n(list(self.input_painting_discr_acc.values())) +
+                               tf.add_n(list(self.input_photo_discr_acc.values())) +
+                               tf.add_n(list(self.output_photo_discr_acc.values()))) / float(len(scale_weight.keys())*3)
+
+             # Generator.
+             # Predicts ones for both output images.
+             self.output_photo_gener_loss = {key: self.loss(pred, tf.ones_like(pred)) * scale_weight[key]
+                                             for key, pred in zip(self.output_photo_discr_predictions.keys(),
+                                                                  self.output_photo_discr_predictions.values())}
+
+             self.gener_loss = tf.add_n(list(self.output_photo_gener_loss.values()))
+
+             # Compute generator accuracies.
+             self.output_photo_gener_acc = {key: tf.reduce_mean(tf.cast(x=(pred > tf.zeros_like(pred)),
+                                                                        dtype=tf.float32)) * scale_weight[key]
+                                            for key, pred in zip(self.output_photo_discr_predictions.keys(),
+                                                                 self.output_photo_discr_predictions.values())}
+
+             self.gener_acc = tf.add_n(list(self.output_photo_gener_acc.values())) / float(len(scale_weight.keys()))
+
+             # Image loss.
+             self.img_loss_photo = mse_criterion(transformer_block(self.output_photo),
+                                                 transformer_block(self.input_photo))
+             self.img_loss = self.img_loss_photo
+
+             # Features loss.
+             self.feature_loss_photo = abs_criterion(self.output_photo_features, self.input_photo_features)
+             self.feature_loss = self.feature_loss_photo
+
+             # ================== Define optimization steps. =============== #
+             t_vars = tf.trainable_variables()
+             self.discr_vars = [var for var in t_vars if 'discriminator' in var.name]
+             self.encoder_vars = [var for var in t_vars if 'encoder' in var.name]
+             self.decoder_vars = [var for var in t_vars if 'decoder' in var.name]
+
+             # Discriminator and generator steps.
+             update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
+
+             with tf.control_dependencies(update_ops):
+                 self.d_optim_step = tf.train.AdamOptimizer(self.lr).minimize(
+                     loss=self.options.discr_loss_weight * self.discr_loss,
+                     var_list=[self.discr_vars])
+                 self.g_optim_step = tf.train.AdamOptimizer(self.lr).minimize(
+                     loss=self.options.discr_loss_weight * self.gener_loss +
+                          self.options.transformer_loss_weight * self.img_loss +
+                          self.options.feature_loss_weight * self.feature_loss,
+                     var_list=[self.encoder_vars + self.decoder_vars])
+
+             # ============= Write statistics to tensorboard. ================ #
+
+             # Discriminator loss summary.
+             s_d1 = [tf.summary.scalar("discriminator/input_painting_discr_loss/" + key, val)
+                     for key, val in zip(self.input_painting_discr_loss.keys(), self.input_painting_discr_loss.values())]
+             s_d2 = [tf.summary.scalar("discriminator/input_photo_discr_loss/" + key, val)
+                     for key, val in zip(self.input_photo_discr_loss.keys(), self.input_photo_discr_loss.values())]
+             s_d3 = [tf.summary.scalar("discriminator/output_photo_discr_loss/" + key, val)
+                     for key, val in zip(self.output_photo_discr_loss.keys(), self.output_photo_discr_loss.values())]
+             s_d = tf.summary.scalar("discriminator/discr_loss", self.discr_loss)
+             self.summary_discriminator_loss = tf.summary.merge(s_d1 + s_d2 + s_d3 + [s_d])
229
+
230
+ # Discriminator acc summary.
231
+ s_d1_acc = [tf.summary.scalar("discriminator/input_painting_discr_acc/"+key, val)
232
+ for key, val in zip(self.input_painting_discr_acc.keys(), self.input_painting_discr_acc.values())]
233
+ s_d2_acc = [tf.summary.scalar("discriminator/input_photo_discr_acc/"+key, val)
234
+ for key, val in zip(self.input_photo_discr_acc.keys(), self.input_photo_discr_acc.values())]
235
+ s_d3_acc = [tf.summary.scalar("discriminator/output_photo_discr_acc/" + key, val)
236
+ for key, val in zip(self.output_photo_discr_acc.keys(), self.output_photo_discr_acc.values())]
237
+ s_d_acc = tf.summary.scalar("discriminator/discr_acc", self.discr_acc)
238
+ s_d_acc_g = tf.summary.scalar("discriminator/discr_acc", self.gener_acc)
239
+ self.summary_discriminator_acc = tf.summary.merge(s_d1_acc+s_d2_acc+s_d3_acc+[s_d_acc])
240
+
241
+ # Image loss summary.
242
+ s_i1 = tf.summary.scalar("image_loss/photo", self.img_loss_photo)
243
+ s_i = tf.summary.scalar("image_loss/loss", self.img_loss)
244
+ self.summary_image_loss = tf.summary.merge([s_i1 + s_i])
245
+
246
+ # Feature loss summary.
247
+ s_f1 = tf.summary.scalar("feature_loss/photo", self.feature_loss_photo)
248
+ s_f = tf.summary.scalar("feature_loss/loss", self.feature_loss)
249
+ self.summary_feature_loss = tf.summary.merge([s_f1 + s_f])
250
+
251
+ self.summary_merged_all = tf.summary.merge_all()
252
+ self.writer = tf.summary.FileWriter(self.logs_dir, self.sess.graph)
253
+        else:
+            # ==================== Define placeholders. ===================== #
+            with tf.name_scope('placeholder'):
+                self.input_photo = tf.placeholder(dtype=tf.float32,
+                                                  shape=[self.batch_size, None, None, 3],
+                                                  name='photo')
+
+            # ===================== Wire the graph. ========================= #
+            # Encode input images.
+            self.input_photo_features = encoder(image=self.input_photo,
+                                                options=self.options,
+                                                reuse=False)
+
+            # Decode obtained features.
+            self.output_photo = decoder(features=self.input_photo_features,
+                                        options=self.options,
+                                        reuse=False)
+    def train(self, args, ckpt_nmbr=None):
+        # Initialize the augmentor.
+        augmentor = img_augm.Augmentor(crop_size=[self.options.image_size, self.options.image_size],
+                                       vertical_flip_prb=0.,
+                                       hsv_augm_prb=1.0,
+                                       hue_augm_shift=0.05,
+                                       saturation_augm_shift=0.05, saturation_augm_scale=0.05,
+                                       value_augm_shift=0.05, value_augm_scale=0.05)
+        content_dataset_places = prepare_dataset.PlacesDataset(path_to_dataset=self.options.path_to_content_dataset)
+        art_dataset = prepare_dataset.ArtDataset(path_to_art_dataset=self.options.path_to_art_dataset)
+
+        # Initialize queue workers for both datasets.
+        q_art = multiprocessing.Queue(maxsize=10)
+        q_content = multiprocessing.Queue(maxsize=10)
+        jobs = []
+        for i in range(5):
+            p = multiprocessing.Process(target=content_dataset_places.initialize_batch_worker,
+                                        args=(q_content, augmentor, self.batch_size, i))
+            p.start()
+            jobs.append(p)
+
+            p = multiprocessing.Process(target=art_dataset.initialize_batch_worker,
+                                        args=(q_art, augmentor, self.batch_size, i))
+            p.start()
+            jobs.append(p)
+        print("Processes are started.")
+        time.sleep(3)
+
+        # Now initialize the graph.
+        init_op = tf.global_variables_initializer()
+        self.sess.run(init_op)
+        print("Start training.")
+
+        if self.load(self.checkpoint_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        elif self.load(self.checkpoint_long_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        else:
+            print(" [!] Load failed...")
+
+        # Initial discriminator success rate.
+        win_rate = args.discr_success_rate
+        discr_success = args.discr_success_rate
+        alpha = 0.05
+
+        for step in tqdm(range(self.initial_step, self.options.total_steps + 1),
+                         initial=self.initial_step,
+                         total=self.options.total_steps):
+            # Get a batch from each queue once both are non-empty.
+            while q_art.empty() or q_content.empty():
+                pass
+            batch_art = q_art.get()
+            batch_content = q_content.get()
+
+            if discr_success >= win_rate:
+                # Train the generator.
+                _, summary_all, gener_acc_ = self.sess.run(
+                    [self.g_optim_step, self.summary_merged_all, self.gener_acc],
+                    feed_dict={
+                        self.input_painting: normalize_arr_of_imgs(batch_art['image']),
+                        self.input_photo: normalize_arr_of_imgs(batch_content['image']),
+                        self.lr: self.options.lr
+                    })
+                discr_success = discr_success * (1. - alpha) + alpha * (1. - gener_acc_)
+            else:
+                # Train the discriminator.
+                _, summary_all, discr_acc_ = self.sess.run(
+                    [self.d_optim_step, self.summary_merged_all, self.discr_acc],
+                    feed_dict={
+                        self.input_painting: normalize_arr_of_imgs(batch_art['image']),
+                        self.input_photo: normalize_arr_of_imgs(batch_content['image']),
+                        self.lr: self.options.lr
+                    })
+                discr_success = discr_success * (1. - alpha) + alpha * discr_acc_
+            self.writer.add_summary(summary_all, step * self.batch_size)
+
+            if step % self.options.save_freq == 0 and step > self.initial_step:
+                self.save(step)
+
+            # Additionally keep a long-term checkpoint every 15000 steps.
+            if step % 15000 == 0 and step > self.initial_step:
+                self.save(step, is_long=True)
+
+            if step % 500 == 0:
+                output_paintings_, output_photos_ = self.sess.run(
+                    [self.input_painting, self.output_photo],
+                    feed_dict={
+                        self.input_painting: normalize_arr_of_imgs(batch_art['image']),
+                        self.input_photo: normalize_arr_of_imgs(batch_content['image']),
+                        self.lr: self.options.lr
+                    })
+
+                save_batch(input_painting_batch=batch_art['image'],
+                           input_photo_batch=batch_content['image'],
+                           output_painting_batch=denormalize_arr_of_imgs(output_paintings_),
+                           output_photo_batch=denormalize_arr_of_imgs(output_photos_),
+                           filepath='%s/step_%d.jpg' % (self.sample_dir, step))
+        print("Training is finished. Terminate jobs.")
+        for p in jobs:
+            # The workers loop forever, so terminate first and then join;
+            # joining first would block indefinitely.
+            p.terminate()
+            p.join()
+
+        print("Done.")
+        sys.exit()
+    # Don't use this function yet.
+    def inference_video(self, args, path_to_folder, to_save_dir=None, resize_to_original=True,
+                        use_time_smooth_randomness=True, ckpt_nmbr=None, file_suffix="_stylized"):
+        """
+        Run inference on the video frames. The original aspect ratio is preserved.
+        Args:
+            args:
+            path_to_folder: path to the folder with frames from the video
+            to_save_dir: directory where the stylized frames are saved
+            resize_to_original: whether to resize each result back to the original frame size
+            use_time_smooth_randomness: change the random vector which is added to the
+                bottleneck features linearly over time
+
+        Returns:
+
+        """
+        init_op = tf.global_variables_initializer()
+        self.sess.run(init_op)
+        print("Start inference.")
+
+        if self.load(self.checkpoint_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        elif self.load(self.checkpoint_long_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        else:
+            print(" [!] Load failed...")
+
+        # Create a folder to store results.
+        if to_save_dir is None:
+            to_save_dir = os.path.join(self.root_dir, self.model_name,
+                                       'inference_ckpt%d_sz%d' % (self.initial_step, self.image_size))
+
+        if not os.path.exists(to_save_dir):
+            os.makedirs(to_save_dir)
+
+        image_paths = sorted(os.listdir(path_to_folder))
+        num_images = len(image_paths)
+        for img_idx, img_name in enumerate(tqdm(image_paths)):
+            img_path = os.path.join(path_to_folder, img_name)
+            img = scipy.misc.imread(img_path, mode='RGB')
+            img_shape = img.shape[:2]
+            # Prepare the image for feeding into the network.
+            scale_mult = self.image_size / np.min(img_shape)
+            new_shape = (np.array(img_shape, dtype=float) * scale_mult).astype(int)
+            img = scipy.misc.imresize(img, size=new_shape)
+            img = np.expand_dims(img, axis=0)
+
+            if use_time_smooth_randomness and img_idx == 0:
+                features_delta = self.sess.run(self.labels_to_concatenate_to_features,
+                                               feed_dict={
+                                                   self.input_photo: normalize_arr_of_imgs(img),
+                                               })
+                features_delta_start = features_delta + np.random.random(size=features_delta.shape) * 0.5 - 0.25
+                features_delta_start = features_delta_start.clip(0, 1000)
+                print('features_delta_start.shape=', features_delta_start.shape)
+                features_delta_end = features_delta + np.random.random(size=features_delta.shape) * 0.5 - 0.25
+                features_delta_end = features_delta_end.clip(0, 1000)
+                step = (features_delta_end - features_delta_start) / (num_images - 1)
+
+            feed_dict = {
+                self.input_painting: normalize_arr_of_imgs(img),
+                self.input_photo: normalize_arr_of_imgs(img),
+                self.lr: self.options.lr
+            }
+            if use_time_smooth_randomness:
+                pass
+
+            img = self.sess.run(self.output_photo, feed_dict=feed_dict)
+            img = img[0]
+            img = denormalize_arr_of_imgs(img)
+            if resize_to_original:
+                img = scipy.misc.imresize(img, size=img_shape)
+
+            scipy.misc.imsave(os.path.join(to_save_dir, img_name[:-4] + file_suffix + ".jpg"), img)
+
+        print("Inference is finished.")
+
+    def inference(self, args, path_to_folder, to_save_dir=None, resize_to_original=True,
+                  ckpt_nmbr=None, file_suffix="_stylized"):
+        init_op = tf.global_variables_initializer()
+        self.sess.run(init_op)
+        print("Start inference.")
+
+        if self.load(self.checkpoint_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        elif self.load(self.checkpoint_long_dir, ckpt_nmbr):
+            print(" [*] Load SUCCESS")
+        else:
+            print(" [!] Load failed...")
+            # Exit if we cannot load a checkpoint (otherwise inference would produce a noisy image).
+            sys.exit()
+
+        # Create a folder to store results.
+        if to_save_dir is None:
+            to_save_dir = os.path.join(self.root_dir, self.model_name,
+                                       'inference_ckpt%d_sz%d' % (self.initial_step, self.image_size))
+
+        if not os.path.exists(to_save_dir):
+            os.makedirs(to_save_dir)
+
+        names = []
+        for d in path_to_folder:
+            names += glob(os.path.join(d, '*'))
+        names = [x for x in names if os.path.basename(x)[0] != '.']
+        names.sort()
+        for img_idx, img_path in enumerate(tqdm(names)):
+            img = scipy.misc.imread(img_path, mode='RGB')
+            img_shape = img.shape[:2]
+
+            # Resize the smallest side of the image to self.image_size.
+            alpha = float(self.image_size) / float(min(img_shape))
+            img = scipy.misc.imresize(img, size=alpha)
+            img = np.expand_dims(img, axis=0)
+
+            img = self.sess.run(
+                self.output_photo,
+                feed_dict={
+                    self.input_photo: normalize_arr_of_imgs(img),
+                })
+            img = img[0]
+            img = denormalize_arr_of_imgs(img)
+            if resize_to_original:
+                img = scipy.misc.imresize(img, size=img_shape)
+            img_name = os.path.basename(img_path)
+            # @STCGoal Append the suffix to the output file name here.
+            scipy.misc.imsave(os.path.join(to_save_dir, img_name[:-4] + file_suffix + ".jpg"), img)
+
+        print("Inference is finished.")
+
+    def save(self, step, is_long=False):
+        if not os.path.exists(self.checkpoint_dir):
+            os.makedirs(self.checkpoint_dir)
+        if is_long:
+            self.saver_long.save(self.sess,
+                                 os.path.join(self.checkpoint_long_dir, self.model_name + '_%d.ckpt' % step),
+                                 global_step=step)
+        else:
+            self.saver.save(self.sess,
+                            os.path.join(self.checkpoint_dir, self.model_name + '_%d.ckpt' % step),
+                            global_step=step)
+
+    def load(self, checkpoint_dir, ckpt_nmbr=None):
+        if ckpt_nmbr:
+            if len([x for x in os.listdir(checkpoint_dir) if ("ckpt-" + str(ckpt_nmbr)) in x]) > 0:
+                print(" [*] Reading checkpoint %d from folder %s." % (ckpt_nmbr, checkpoint_dir))
+                ckpt_name = [x for x in os.listdir(checkpoint_dir) if ("ckpt-" + str(ckpt_nmbr)) in x][0]
+                ckpt_name = '.'.join(ckpt_name.split('.')[:-1])
+                self.initial_step = ckpt_nmbr
+                print("Load checkpoint %s. Initial step: %s." % (ckpt_name, self.initial_step))
+                self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))
+                return True
+            else:
+                return False
+        else:
+            print(" [*] Reading latest checkpoint from folder %s." % checkpoint_dir)
+            ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
+            if ckpt and ckpt.model_checkpoint_path:
+                ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
+                self.initial_step = int(ckpt_name.split("_")[-1].split(".")[0])
+                print("Load checkpoint %s. Initial step: %s." % (ckpt_name, self.initial_step))
+                self.saver.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))
+                return True
+            else:
+                return False
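
The `train` method above alternates between generator and discriminator updates by tracking an exponential moving average of the discriminator's success. A minimal, framework-free sketch of that scheduling logic (`win_rate` and `alpha` follow the values in the diff; the per-step accuracy stream here is synthetic, not real TensorFlow output):

```python
def schedule(accuracies, win_rate=0.8, alpha=0.05):
    """Return which network ('G' or 'D') is trained at each step."""
    discr_success = win_rate  # initialized to args.discr_success_rate
    trained = []
    for acc in accuracies:
        if discr_success >= win_rate:
            trained.append('G')
            # Generator step: acc is the generator accuracy, so the
            # discriminator's success on this step is (1 - acc).
            discr_success = discr_success * (1. - alpha) + alpha * (1. - acc)
        else:
            trained.append('D')
            # Discriminator step: acc is the discriminator accuracy.
            discr_success = discr_success * (1. - alpha) + alpha * acc
    return trained

print(schedule([1.0, 1.0, 0.9]))  # a perfect generator immediately hands control to D
```

Because the moving average decays toward whichever player is currently winning, the loop self-balances: a generator that fools the discriminator pushes `discr_success` below `win_rate`, which switches training back to the discriminator.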
module.py ADDED
@@ -0,0 +1,246 @@
+# Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+#
+# This file is part of Adaptive Style Transfer
+#
+# Adaptive Style Transfer is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Adaptive Style Transfer is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+from __future__ import division
+from ops import *
+
+
+def encoder(image, options, reuse=True, name="encoder"):
+    """
+    Args:
+        image: input image tensor of rank 4
+        options: options defining the number of kernels in the conv layers
+        reuse: whether to reuse an existing encoder or create a new one
+        name: name of the encoder
+
+    Returns: Encoded image.
+    """
+    with tf.variable_scope(name):
+        if reuse:
+            tf.get_variable_scope().reuse_variables()
+        else:
+            assert tf.get_variable_scope().reuse is False
+        image = instance_norm(input=image,
+                              is_training=options.is_training,
+                              name='g_e0_bn')
+        c0 = tf.pad(image, [[0, 0], [15, 15], [15, 15], [0, 0]], "REFLECT")
+        c1 = tf.nn.relu(instance_norm(input=conv2d(c0, options.gf_dim, 3, 1, padding='VALID', name='g_e1_c'),
+                                      is_training=options.is_training,
+                                      name='g_e1_bn'))
+        c2 = tf.nn.relu(instance_norm(input=conv2d(c1, options.gf_dim, 3, 2, padding='VALID', name='g_e2_c'),
+                                      is_training=options.is_training,
+                                      name='g_e2_bn'))
+        c3 = tf.nn.relu(instance_norm(conv2d(c2, options.gf_dim * 2, 3, 2, padding='VALID', name='g_e3_c'),
+                                      is_training=options.is_training,
+                                      name='g_e3_bn'))
+        c4 = tf.nn.relu(instance_norm(conv2d(c3, options.gf_dim * 4, 3, 2, padding='VALID', name='g_e4_c'),
+                                      is_training=options.is_training,
+                                      name='g_e4_bn'))
+        c5 = tf.nn.relu(instance_norm(conv2d(c4, options.gf_dim * 8, 3, 2, padding='VALID', name='g_e5_c'),
+                                      is_training=options.is_training,
+                                      name='g_e5_bn'))
+        return c5
+
+
+def decoder(features, options, reuse=True, name="decoder"):
+    """
+    Args:
+        features: input feature tensor of rank 4
+        options: options defining the number of kernels in the conv layers
+        reuse: whether to reuse an existing decoder or create a new one
+        name: name of the decoder
+
+    Returns: Decoded image.
+    """
+    with tf.variable_scope(name):
+        if reuse:
+            tf.get_variable_scope().reuse_variables()
+        else:
+            assert tf.get_variable_scope().reuse is False
+
+        def residule_block(x, dim, ks=3, s=1, name='res'):
+            p = int((ks - 1) / 2)
+            y = tf.pad(x, [[0, 0], [p, p], [p, p], [0, 0]], "REFLECT")
+            y = instance_norm(conv2d(y, dim, ks, s, padding='VALID', name=name + '_c1'), name + '_bn1')
+            y = tf.pad(tf.nn.relu(y), [[0, 0], [p, p], [p, p], [0, 0]], "REFLECT")
+            y = instance_norm(conv2d(y, dim, ks, s, padding='VALID', name=name + '_c2'), name + '_bn2')
+            return y + x
+
+        # Now stack 9 residual blocks.
+        num_kernels = features.get_shape().as_list()[-1]
+        r1 = residule_block(features, num_kernels, name='g_r1')
+        r2 = residule_block(r1, num_kernels, name='g_r2')
+        r3 = residule_block(r2, num_kernels, name='g_r3')
+        r4 = residule_block(r3, num_kernels, name='g_r4')
+        r5 = residule_block(r4, num_kernels, name='g_r5')
+        r6 = residule_block(r5, num_kernels, name='g_r6')
+        r7 = residule_block(r6, num_kernels, name='g_r7')
+        r8 = residule_block(r7, num_kernels, name='g_r8')
+        r9 = residule_block(r8, num_kernels, name='g_r9')
+
+        # Decode the image.
+        d1 = deconv2d(r9, options.gf_dim * 8, 3, 2, name='g_d1_dc')
+        d1 = tf.nn.relu(instance_norm(input=d1,
+                                      name='g_d1_bn',
+                                      is_training=options.is_training))
+
+        d2 = deconv2d(d1, options.gf_dim * 4, 3, 2, name='g_d2_dc')
+        d2 = tf.nn.relu(instance_norm(input=d2,
+                                      name='g_d2_bn',
+                                      is_training=options.is_training))
+
+        d3 = deconv2d(d2, options.gf_dim * 2, 3, 2, name='g_d3_dc')
+        d3 = tf.nn.relu(instance_norm(input=d3,
+                                      name='g_d3_bn',
+                                      is_training=options.is_training))
+
+        d4 = deconv2d(d3, options.gf_dim, 3, 2, name='g_d4_dc')
+        d4 = tf.nn.relu(instance_norm(input=d4,
+                                      name='g_d4_bn',
+                                      is_training=options.is_training))
+
+        d4 = tf.pad(d4, [[0, 0], [3, 3], [3, 3], [0, 0]], "REFLECT")
+        pred = tf.nn.sigmoid(conv2d(d4, 3, 7, 1, padding='VALID', name='g_pred_c')) * 2. - 1.
+        return pred
+
+
+def discriminator(image, options, reuse=True, name="discriminator"):
+    """
+    Discriminator agent that provides information about image plausibility at
+    different scales.
+    Args:
+        image: input tensor
+        options: options defining the number of kernels in the conv layers
+        reuse: whether to reuse an existing discriminator or create a new one
+        name: name of the discriminator
+
+    Returns:
+        Image estimates at different scales.
+    """
+    with tf.variable_scope(name):
+        if reuse:
+            tf.get_variable_scope().reuse_variables()
+        else:
+            assert tf.get_variable_scope().reuse is False
+
+        h0 = lrelu(instance_norm(conv2d(image, options.df_dim * 2, ks=5, name='d_h0_conv'),
+                                 name='d_bn0'))
+        h0_pred = conv2d(h0, 1, ks=5, s=1, name='d_h0_pred', activation_fn=None)
+
+        h1 = lrelu(instance_norm(conv2d(h0, options.df_dim * 2, ks=5, name='d_h1_conv'),
+                                 name='d_bn1'))
+        h1_pred = conv2d(h1, 1, ks=10, s=1, name='d_h1_pred', activation_fn=None)
+
+        h2 = lrelu(instance_norm(conv2d(h1, options.df_dim * 4, ks=5, name='d_h2_conv'),
+                                 name='d_bn2'))
+
+        h3 = lrelu(instance_norm(conv2d(h2, options.df_dim * 8, ks=5, name='d_h3_conv'),
+                                 name='d_bn3'))
+        h3_pred = conv2d(h3, 1, ks=10, s=1, name='d_h3_pred', activation_fn=None)
+
+        h4 = lrelu(instance_norm(conv2d(h3, options.df_dim * 8, ks=5, name='d_h4_conv'),
+                                 name='d_bn4'))
+
+        h5 = lrelu(instance_norm(conv2d(h4, options.df_dim * 16, ks=5, name='d_h5_conv'),
+                                 name='d_bn5'))
+        h5_pred = conv2d(h5, 1, ks=6, s=1, name='d_h5_pred', activation_fn=None)
+
+        h6 = lrelu(instance_norm(conv2d(h5, options.df_dim * 16, ks=5, name='d_h6_conv'),
+                                 name='d_bn6'))
+        h6_pred = conv2d(h6, 1, ks=3, s=1, name='d_h6_pred', activation_fn=None)
+
+        return {"scale_0": h0_pred,
+                "scale_1": h1_pred,
+                "scale_3": h3_pred,
+                "scale_5": h5_pred,
+                "scale_6": h6_pred}
+
+
+# ====== Define different types of losses applied to the discriminator's output. ====== #
+
+def abs_criterion(in_, target):
+    return tf.reduce_mean(tf.abs(in_ - target))
+
+
+def mae_criterion(in_, target):
+    return tf.reduce_mean(tf.abs(in_ - target))
+
+
+def mse_criterion(in_, target):
+    return tf.reduce_mean((in_ - target) ** 2)
+
+
+def sce_criterion(logits, labels):
+    return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
+
+
+def reduce_spatial_dim(input_tensor):
+    """
+    Since labels and discriminator outputs are of different shapes (and even ranks),
+    we need a routine to average out the spatial dimensions.
+    Args:
+        input_tensor: tensor of shape [batch_size, spatial_resol_1, spatial_resol_2, depth]
+    Returns:
+        tensor of shape [batch_size, depth]
+    """
+    input_tensor = tf.reduce_mean(input_tensor=input_tensor, axis=1)
+    input_tensor = tf.reduce_mean(input_tensor=input_tensor, axis=1)
+    return input_tensor
+
+
+def add_spatial_dim(input_tensor, dims_list, resol_list):
+    """
+    Appends the dimensions listed in dims_list, repeated resol_list times.
+    Args:
+        input_tensor: tensor of shape [batch_size, depth0]
+        dims_list: list of integers with positions of the new dimensions to append.
+        resol_list: list of integers with the corresponding dimensionality for each new dimension.
+    Returns:
+        tensor of the new shape
+    """
+    for dim, res in zip(dims_list, resol_list):
+        input_tensor = tf.expand_dims(input=input_tensor, axis=dim)
+        input_tensor = tf.concat(values=[input_tensor] * res, axis=dim)
+    return input_tensor
+
+
+def repeat_scalar(input_tensor, shape):
+    """
+    Repeat scalar values.
+    :param input_tensor: tensor of shape [batch_size, 1]
+    :param shape: new shape of each element of the tensor
+    :return: tensor of shape [batch_size, *shape] with elements repeated.
+    """
+    with tf.control_dependencies([tf.assert_equal(tf.shape(input_tensor)[1], 1)]):
+        batch_size = tf.shape(input_tensor)[0]
+        input_tensor = tf.tile(input_tensor, tf.stack(values=[1, tf.reduce_prod(shape)], axis=0))
+        input_tensor = tf.reshape(input_tensor, tf.concat(values=[[batch_size], shape, [1]], axis=0))
+    return input_tensor
+
+
+def transformer_block(input_tensor, kernel_size=10):
+    """
+    This is a simplified version of the transformer block described in our paper
+    https://arxiv.org/abs/1807.10201.
+    Args:
+        input_tensor: Image (or tensor of rank 4) we want to transform.
+        kernel_size: Size of the kernel applied to the input_tensor.
+    Returns:
+        Transformed tensor
+    """
+    return slim.avg_pool2d(inputs=input_tensor, kernel_size=kernel_size, stride=1, padding='SAME')
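
`reduce_spatial_dim` and `add_spatial_dim` are essentially spatial averaging and broadcasting. The same shape bookkeeping can be sketched in NumPy (a hypothetical illustration of the intended shapes, not part of the repo):

```python
import numpy as np

def reduce_spatial_dim_np(x):
    # [batch, h, w, depth] -> [batch, depth], averaging over the two spatial axes.
    return x.mean(axis=(1, 2))

def add_spatial_dim_np(x, dims_list, resol_list):
    # Inverse direction: insert each new axis and repeat along it,
    # mirroring tf.expand_dims followed by tf.concat of res copies.
    for dim, res in zip(dims_list, resol_list):
        x = np.repeat(np.expand_dims(x, axis=dim), res, axis=dim)
    return x

x = np.ones((2, 4, 4, 3))
flat = reduce_spatial_dim_np(x)                  # shape (2, 3)
back = add_spatial_dim_np(flat, [1, 2], [4, 4])  # shape (2, 4, 4, 3)
```

This is how a per-image label vector can be tiled to match the spatial resolution of a discriminator's prediction map.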
ops.py ADDED
@@ -0,0 +1,84 @@
+# Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+#
+# This file is part of Adaptive Style Transfer
+#
+# Adaptive Style Transfer is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Adaptive Style Transfer is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+import math
+import numpy as np
+import tensorflow as tf
+import tensorflow.contrib.slim as slim
+from tensorflow.python.framework import ops
+import cv2
+
+import tensorflow.contrib.layers as tflayers
+
+from utils import *
+
+
+def batch_norm(input, is_training=True, name="batch_norm"):
+    x = tflayers.batch_norm(inputs=input,
+                            scale=True,
+                            is_training=is_training,
+                            trainable=True,
+                            reuse=None)
+    return x
+
+
+def instance_norm(input, name="instance_norm", is_training=True):
+    with tf.variable_scope(name):
+        depth = input.get_shape()[3]
+        scale = tf.get_variable("scale", [depth],
+                                initializer=tf.random_normal_initializer(1.0, 0.02, dtype=tf.float32))
+        offset = tf.get_variable("offset", [depth], initializer=tf.constant_initializer(0.0))
+        mean, variance = tf.nn.moments(input, axes=[1, 2], keep_dims=True)
+        epsilon = 1e-5
+        inv = tf.rsqrt(variance + epsilon)
+        normalized = (input - mean) * inv
+        return scale * normalized + offset
+
+
+def conv2d(input_, output_dim, ks=4, s=2, stddev=0.02, padding='SAME', name="conv2d", activation_fn=None):
+    with tf.variable_scope(name):
+        return slim.conv2d(input_, output_dim, ks, s, padding=padding, activation_fn=activation_fn,
+                           weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
+                           biases_initializer=None)
+
+
+def deconv2d(input_, output_dim, ks=4, s=2, stddev=0.02, name="deconv2d"):
+    # Upsampling procedure, as suggested in this article:
+    # https://distill.pub/2016/deconv-checkerboard/. First upsample the
+    # tensor like an image and then apply convolutions.
+    with tf.variable_scope(name):
+        input_ = tf.image.resize_images(images=input_,
+                                        size=tf.shape(input_)[1:3] * s,
+                                        method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)  # This is optional.
+        return conv2d(input_=input_, output_dim=output_dim, ks=ks, s=1, padding='SAME')
+
+
+def lrelu(x, leak=0.2, name="lrelu"):
+    return tf.maximum(x, leak * x)
+
+
+def linear(input_, output_size, scope=None, stddev=0.02, bias_start=0.0, with_w=False):
+    with tf.variable_scope(scope or "Linear"):
+        matrix = tf.get_variable("Matrix", [input_.get_shape()[-1], output_size], tf.float32,
+                                 tf.random_normal_initializer(stddev=stddev))
+        bias = tf.get_variable("bias", [output_size],
+                               initializer=tf.constant_initializer(bias_start))
+        if with_w:
+            return tf.matmul(input_, matrix) + bias, matrix, bias
+        else:
+            return tf.matmul(input_, matrix) + bias
+
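
`instance_norm` normalizes each sample and channel independently over its spatial dimensions. A NumPy sketch of the same computation, with `scale = 1` and `offset = 0` (the means of the TF initializers), for reference:

```python
import numpy as np

def instance_norm_np(x, epsilon=1e-5):
    # x: [batch, h, w, depth]. Normalize each (sample, channel) pair over
    # the spatial axes, mirroring tf.nn.moments(axes=[1, 2], keep_dims=True).
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon)

x = np.random.RandomState(0).randn(2, 8, 8, 3)
y = instance_norm_np(x)
# After normalization, every (sample, channel) slice has mean ~0 and variance ~1.
```

Unlike batch normalization, the statistics never mix information across the batch, which is why instance normalization behaves identically at train and inference time for style transfer.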
prepare_dataset.py ADDED
@@ -0,0 +1,159 @@
+# Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+#
+# This file is part of Adaptive Style Transfer
+#
+# Adaptive Style Transfer is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Adaptive Style Transfer is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+from __future__ import print_function
+import pandas as pd
+import numpy as np
+import os
+import time
+from tqdm import tqdm
+import scipy.misc
+import utils
+import random
+
+
+class ArtDataset():
+    def __init__(self, path_to_art_dataset):
+        self.dataset = [os.path.join(path_to_art_dataset, x) for x in os.listdir(path_to_art_dataset)]
+        print("Art dataset contains %d images." % len(self.dataset))
+
+    def get_batch(self, augmentor, batch_size=1):
+        """
+        Reads data from a dataframe containing the path to the images in the column 'path'
+        and, for the art dataset, also the artist name, technique name, and period of
+        creation for the given artist. For content images only the 'path' column is present.
+        Args:
+            augmentor: Augmentor object responsible for the augmentation pipeline
+            batch_size: size of the batch
+        Returns:
+            dictionary with the field: image
+        """
+        batch_image = []
+
+        for _ in range(batch_size):
+            image = scipy.misc.imread(name=random.choice(self.dataset), mode='RGB')
+
+            if max(image.shape) > 1800.:
+                image = scipy.misc.imresize(image, size=1800. / max(image.shape))
+            if max(image.shape) < 800:
+                # Resize the smallest side of the image to 800px.
+                alpha = 800. / float(min(image.shape[:2]))  # use the spatial dims only, not the channel dim
+                if alpha < 4.:
+                    image = scipy.misc.imresize(image, size=alpha)
+                    image = np.expand_dims(image, axis=0)
+                else:
+                    image = scipy.misc.imresize(image, size=[800, 800])
+
+            if augmentor:
+                batch_image.append(augmentor(image).astype(np.float32))
+            else:
+                batch_image.append(image.astype(np.float32))
+        # Now return the batch in the correct form.
+        batch_image = np.asarray(batch_image)
+
+        return {"image": batch_image}
+
+    def initialize_batch_worker(self, queue, augmentor, batch_size=1, seed=228):
+        np.random.seed(seed)
+        while True:
+            batch = self.get_batch(augmentor=augmentor, batch_size=batch_size)
+            queue.put(batch)
+
+
+class PlacesDataset():
+    categories_names = \
+        ['/a/abbey', '/a/arch', '/a/amphitheater', '/a/aqueduct', '/a/arena/rodeo', '/a/athletic_field/outdoor',
+         '/b/badlands', '/b/balcony/exterior', '/b/bamboo_forest', '/b/barn', '/b/barndoor', '/b/baseball_field',
+         '/b/basilica', '/b/bayou', '/b/beach', '/b/beach_house', '/b/beer_garden', '/b/boardwalk', '/b/boathouse',
+         '/b/botanical_garden', '/b/bullring', '/b/butte', '/c/cabin/outdoor', '/c/campsite', '/c/campus',
+         '/c/canal/natural', '/c/canal/urban', '/c/canyon', '/c/castle', '/c/church/outdoor', '/c/chalet',
+         '/c/cliff', '/c/coast', '/c/corn_field', '/c/corral', '/c/cottage', '/c/courtyard', '/c/crevasse',
+         '/d/dam', '/d/desert/vegetation', '/d/desert_road', '/d/doorway/outdoor', '/f/farm', '/f/fairway',
+         '/f/field/cultivated', '/f/field/wild', '/f/field_road', '/f/fishpond', '/f/florist_shop/indoor',
+         '/f/forest/broadleaf', '/f/forest_path', '/f/forest_road', '/f/formal_garden', '/g/gazebo/exterior',
+         '/g/glacier', '/g/golf_course', '/g/greenhouse/indoor', '/g/greenhouse/outdoor', '/g/grotto', '/g/gorge',
+         '/h/hayfield', '/h/herb_garden', '/h/hot_spring', '/h/house', '/h/hunting_lodge/outdoor', '/i/ice_floe',
+         '/i/ice_shelf', '/i/iceberg', '/i/inn/outdoor', '/i/islet', '/j/japanese_garden', '/k/kasbah',
+         '/k/kennel/outdoor', '/l/lagoon', '/l/lake/natural', '/l/lawn', '/l/library/outdoor', '/l/lighthouse',
+         '/m/mansion', '/m/marsh', '/m/mausoleum', '/m/moat/water', '/m/mosque/outdoor', '/m/mountain',
+         '/m/mountain_path', '/m/mountain_snowy', '/o/oast_house', '/o/ocean', '/o/orchard', '/p/park',
+         '/p/pasture', '/p/pavilion', '/p/picnic_area', '/p/pier', '/p/pond', '/r/raft', '/r/railroad_track',
+         '/r/rainforest', '/r/rice_paddy', '/r/river', '/r/rock_arch', '/r/roof_garden', '/r/rope_bridge',
+         '/r/ruin', '/s/schoolhouse', '/s/sky', '/s/snowfield', '/s/swamp', '/s/swimming_hole',
99
+ '/s/synagogue/outdoor', '/t/temple/asia', '/t/topiary_garden', '/t/tree_farm', '/t/tree_house',
100
+ '/u/underwater/ocean_deep', '/u/utility_room', '/v/valley', '/v/vegetable_garden', '/v/viaduct',
101
+ '/v/village', '/v/vineyard', '/v/volcano', '/w/waterfall', '/w/watering_hole', '/w/wave',
102
+ '/w/wheat_field', '/z/zen_garden', '/a/alcove', '/a/apartment-building/outdoor', '/a/artists_loft',
103
+ '/b/building_facade', '/c/cemetery']
104
+ categories_names = [x[1:] for x in categories_names]
105
+
106
+ def __init__(self, path_to_dataset):
107
+ self.dataset = []
108
+ for category_idx, category_name in enumerate(tqdm(self.categories_names)):
109
+ print(category_name, category_idx)
110
+ if os.path.exists(os.path.join(path_to_dataset, category_name)):
111
+ for file_name in tqdm(os.listdir(os.path.join(path_to_dataset, category_name))):
112
+ self.dataset.append(os.path.join(path_to_dataset, category_name, file_name))
113
+ else:
114
+ print("Category %s can't be found in path %s. Skip it." %
115
+ (category_name, os.path.join(path_to_dataset, category_name)))
116
+
117
+ print("Finished. Constructed Places2 dataset of %d images." % len(self.dataset))
118
+
119
+ def get_batch(self, augmentor, batch_size=1):
120
+ """
121
+ Generate bathes of images with attached labels(place category) in two different formats:
122
+ textual and one-hot-encoded.
123
+ Args:
124
+ augmentor: Augmentor object responsible for augmentation pipeline
125
+ batch_size: size of batch we return
126
+ Returns:
127
+ dictionary with fields: image
128
+ """
129
+
130
+ batch_image = []
131
+ for _ in range(batch_size):
132
+ image = scipy.misc.imread(name=random.choice(self.dataset), mode='RGB')
133
+ image = scipy.misc.imresize(image, size=2.)
134
+ image_shape = image.shape
135
+
136
+ if max(image_shape) > 1800.:
137
+ image = scipy.misc.imresize(image, size=1800. / max(image_shape))
138
+ if max(image_shape) < 800:
139
+ # Resize the smallest side of the image to 800px
140
+ alpha = 800. / float(min(image_shape))
141
+ if alpha < 4.:
142
+ image = scipy.misc.imresize(image, size=alpha)
143
+ image = np.expand_dims(image, axis=0)
144
+ else:
145
+ image = scipy.misc.imresize(image, size=[800, 800])
146
+
147
+ batch_image.append(augmentor(image).astype(np.float32))
148
+
149
+ return {"image": np.asarray(batch_image)}
150
+
151
+ def initialize_batch_worker(self, queue, augmentor, batch_size=1, seed=228):
152
+ np.random.seed(seed)
153
+ while True:
154
+ batch = self.get_batch(augmentor=augmentor, batch_size=batch_size)
155
+ queue.put(batch)
156
+
157
+
158
+
159
+
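Both dataset classes funnel every image through the same size policy before augmentation. The thresholds can be sketched as a standalone function over the spatial dimensions `(h, w)`; this is an illustrative reconstruction, not code from the repository (note the repo's version passes the full `image.shape`, channel axis included, to `min`/`max`):

```python
def resize_policy(h, w):
    """Mirror of get_batch's resize thresholds over spatial dims only."""
    if max(h, w) > 1800:
        # Shrink so the largest side becomes 1800px.
        return ('scale', 1800.0 / max(h, w))
    if max(h, w) < 800:
        # Grow the smallest side toward 800px...
        alpha = 800.0 / min(h, w)
        if alpha < 4.0:
            return ('scale', alpha)
        # ...unless that would need >= 4x upscaling; then force a square resize.
        return ('square', 800)
    return ('none', 1.0)

print(resize_policy(2000, 1000))  # ('scale', 0.9)
print(resize_policy(600, 400))    # ('scale', 2.0)
```

Images already between 800 and 1800px on their largest side pass through untouched.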
server.py ADDED
@@ -0,0 +1,48 @@
+ import os
+ import numpy as np
+ import tensorflow as tf
+ from module import encoder, decoder
+ from glob import glob
+ import runway
+
+
+ @runway.setup(options={'styleCheckpoint': runway.file(is_directory=True)})
+ def setup(opts):
+     sess = tf.Session()
+     init_op = tf.global_variables_initializer()
+     sess.run(init_op)
+     with tf.name_scope('placeholder'):
+         input_photo = tf.placeholder(dtype=tf.float32,
+                                      shape=[1, None, None, 3],
+                                      name='photo')
+     input_photo_features = encoder(image=input_photo,
+                                    options={'gf_dim': 32},
+                                    reuse=False)
+     output_photo = decoder(features=input_photo_features,
+                            options={'gf_dim': 32},
+                            reuse=False)
+     saver = tf.train.Saver()
+     path = opts['styleCheckpoint']
+     model_name = [p for p in os.listdir(path) if os.path.isdir(os.path.join(path, p))][0]
+     checkpoint_dir = os.path.join(path, model_name, 'checkpoint_long')
+     ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
+     ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
+     saver.restore(sess, os.path.join(checkpoint_dir, ckpt_name))
+     return dict(sess=sess, input_photo=input_photo, output_photo=output_photo)
+
+
+ @runway.command('stylize', inputs={'contentImage': runway.image}, outputs={'stylizedImage': runway.image})
+ def stylize(model, inp):
+     img = inp['contentImage']
+     img = np.array(img)
+     img = img / 127.5 - 1.
+     img = np.expand_dims(img, axis=0)
+     img = model['sess'].run(model['output_photo'], feed_dict={model['input_photo']: img})
+     img = (img + 1.) * 127.5
+     img = img.astype('uint8')
+     img = img[0]
+     return dict(stylizedImage=img)
+
+
+ if __name__ == '__main__':
+     runway.run()
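The pixel mapping inside `stylize` is a linear shift into [-1, 1] and back, the same convention as `normalize_arr_of_imgs`/`denormalize_arr_of_imgs` in `utils.py`. A self-contained numpy sketch of the round trip (function names are illustrative; `np.rint` is added here to guard against float round-off, whereas `server.py` casts directly with `astype`):

```python
import numpy as np

def preprocess(img):
    # uint8 [0, 255] -> float [-1, 1], as fed to input_photo
    return img / 127.5 - 1.

def postprocess(img):
    # network output in [-1, 1] -> uint8 [0, 255]
    return np.rint((img + 1.) * 127.5).astype('uint8')

pixels = np.array([0, 64, 127, 255], dtype=np.uint8)
roundtrip = postprocess(preprocess(pixels))
print(roundtrip)  # recovers [0, 64, 127, 255]
```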
test-gpu.py ADDED
@@ -0,0 +1,12 @@
+ from tensorflow.keras import layers
+ from tensorflow.keras import models
+
+ model = models.Sequential()
+ model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Conv2D(64, (3, 3), activation='relu'))
+ model.add(layers.MaxPooling2D((2, 2)))
+ model.add(layers.Conv2D(64, (3, 3), activation='relu'))
+ model.add(layers.Flatten())
+ model.add(layers.Dense(64, activation='relu'))
+ model.add(layers.Dense(10, activation='softmax'))
+ model.summary()
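The shapes and parameter counts that `model.summary()` should report can be checked by hand: a valid 3x3 convolution shrinks each spatial side by 2, and each 2x2 max-pool floors the size in half. A pencil-and-paper check in plain Python (no TensorFlow needed):

```python
def conv_params(k, c_in, filters):
    # k*k*c_in weights per filter, plus one bias per filter
    return (k * k * c_in + 1) * filters

# 28 -> conv -> 26 -> pool -> 13 -> conv -> 11 -> pool -> 5 -> conv -> 3
side = 28
side -= 2           # Conv2D(32, 3x3): 26
side //= 2          # MaxPooling2D:    13
side -= 2           # Conv2D(64, 3x3): 11
side //= 2          # MaxPooling2D:     5
side -= 2           # Conv2D(64, 3x3):  3

flat = side * side * 64                # Flatten: 576
total = (conv_params(3, 1, 32)         # 320
         + conv_params(3, 32, 64)      # 18,496
         + conv_params(3, 64, 64)      # 36,928
         + (flat + 1) * 64             # Dense(64): 36,928
         + (64 + 1) * 10)              # Dense(10): 650
print(side, flat, total)  # 3 576 93322
```

If the summary prints 93,322 trainable parameters, the graph was built as expected.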
utils.py ADDED
@@ -0,0 +1,75 @@
+ # Copyright (C) 2018 Artsiom Sanakoyeu and Dmytro Kotovenko
+ #
+ # This file is part of Adaptive Style Transfer
+ #
+ # Adaptive Style Transfer is free software: you can redistribute it and/or modify
+ # it under the terms of the GNU General Public License as published by
+ # the Free Software Foundation, either version 3 of the License, or
+ # (at your option) any later version.
+ #
+ # Adaptive Style Transfer is distributed in the hope that it will be useful,
+ # but WITHOUT ANY WARRANTY; without even the implied warranty of
+ # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ # GNU General Public License for more details.
+ #
+ # You should have received a copy of the GNU General Public License
+ # along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+ from __future__ import division
+ import math
+ import scipy.misc
+ from scipy.ndimage.filters import gaussian_filter
+
+ import numpy as np
+ from ops import *
+ import random
+ import copy
+
+
+ def save_batch(input_painting_batch, input_photo_batch, output_painting_batch, output_photo_batch, filepath):
+     """
+     Concatenates, processes, and stores the batches as the image 'filepath'.
+     Args:
+         input_painting_batch: numpy array of size [B x H x W x C]
+         input_photo_batch: numpy array of size [B x H x W x C]
+         output_painting_batch: numpy array of size [B x H x W x C]
+         output_photo_batch: numpy array of size [B x H x W x C]
+         filepath: full path of the file to save
+     Returns:
+
+     """
+     def batch_to_img(batch):
+         return np.reshape(batch,
+                           newshape=(batch.shape[0] * batch.shape[1], batch.shape[2], batch.shape[3]))
+
+     inputs = np.concatenate([batch_to_img(input_painting_batch), batch_to_img(input_photo_batch)],
+                             axis=0)
+     outputs = np.concatenate([batch_to_img(output_painting_batch), batch_to_img(output_photo_batch)],
+                              axis=0)
+
+     to_save = np.concatenate([inputs, outputs], axis=1)
+     to_save = np.clip(to_save, a_min=0., a_max=255.).astype(np.uint8)
+
+     scipy.misc.imsave(filepath, arr=to_save)
+
+
+ def normalize_arr_of_imgs(arr):
+     """
+     Normalizes an array so that the result lies in [-1; 1].
+     Args:
+         arr: numpy array of arbitrary shape and dimensions.
+     Returns:
+     """
+     return arr / 127.5 - 1.
+     # return (arr - np.mean(arr)) / np.std(arr)
+
+
+ def denormalize_arr_of_imgs(arr):
+     """
+     Inverse of the normalize_arr_of_imgs function.
+     Args:
+         arr: numpy array of arbitrary shape and dimensions.
+     Returns:
+     """
+     return (arr + 1.) * 127.5
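The reshape inside `save_batch` relies on numpy's row-major layout: collapsing `[B, H, W, C]` to `[B*H, W, C]` stacks the batch's images vertically, so each batch becomes one tall image strip. A minimal numpy check of that identity:

```python
import numpy as np

def batch_to_img(batch):
    # [B, H, W, C] -> [B*H, W, C]: vertical stacking under row-major order
    b, h, w, c = batch.shape
    return np.reshape(batch, (b * h, w, c))

batch = np.arange(2 * 3 * 4 * 1).reshape(2, 3, 4, 1)
stacked = batch_to_img(batch)
# identical to placing the two images on top of each other
print(np.array_equal(stacked, np.vstack([batch[0], batch[1]])))  # True
```

This is why the subsequent `np.concatenate(..., axis=1)` in `save_batch` lines the four strips up side by side into a single comparison grid.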